Journal articles on the topic "Domain adaptation, domain-shift, image classification, neural networks"


Consult the 45 best journal articles for your research on the topic "Domain adaptation, domain-shift, image classification, neural networks".


1

Wang, Xiaoqing, and Xiangjun Wang. "Unsupervised Domain Adaptation with Coupled Generative Adversarial Autoencoders". Applied Sciences 8, no. 12 (December 7, 2018): 2529. http://dx.doi.org/10.3390/app8122529.

Abstract:
When large-scale annotated data are not available for certain image classification tasks, training a deep convolutional neural network model becomes challenging. Some recent domain adaptation methods try to solve this problem using generative adversarial networks and have achieved promising results. However, these methods are based on a shared latent space assumption and they do not consider the situation when shared high level representations in different domains do not exist or are not ideal as they assumed. To overcome this limitation, we propose a neural network structure called coupled generative adversarial autoencoders (CGAA) that allows a pair of generators to learn the high-level differences between two domains by sharing only part of the high-level layers. Additionally, by introducing a class consistent loss calculated by a stand-alone classifier into the generator optimization, our model is able to generate class invariant style-transferred images suitable for classification tasks in domain adaptation. We apply CGAA to several domain transferred image classification scenarios including several benchmark datasets. Experiment results have shown that our method can achieve state-of-the-art classification results.
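As an illustration of the weight-sharing idea described in this abstract, a minimal PyTorch sketch follows. The module layout, layer sizes, and the hypothetical classifier `clf` are our own assumptions for illustration, not the authors' CGAA implementation.

    import torch
    import torch.nn as nn

    class CoupledAutoencoders(nn.Module):
        """Illustrative sketch: two autoencoders that share only part of
        their high-level layers, loosely following the CGAA idea."""
        def __init__(self, feat_dim=256, latent_dim=64):
            super().__init__()
            # Domain-specific low-level encoders (not shared).
            self.enc_a = nn.Sequential(nn.Linear(784, feat_dim), nn.ReLU())
            self.enc_b = nn.Sequential(nn.Linear(784, feat_dim), nn.ReLU())
            # Shared high-level layers (the partially shared part).
            self.shared = nn.Sequential(nn.Linear(feat_dim, latent_dim), nn.ReLU())
            # Domain-specific decoders.
            self.dec_a = nn.Linear(latent_dim, 784)
            self.dec_b = nn.Linear(latent_dim, 784)

        def forward(self, x_a, x_b):
            z_a = self.shared(self.enc_a(x_a))
            z_b = self.shared(self.enc_b(x_b))
            return self.dec_a(z_a), self.dec_b(z_b), z_a, z_b

    # Class-consistency term from a stand-alone classifier (hypothetical `clf`):
    # style-transferred images should keep their source labels.
    def class_consistency_loss(clf, x_transferred, labels):
        return nn.functional.cross_entropy(clf(x_transferred), labels)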
2

S. Garea, Alberto S., Dora B. Heras, and Francisco Argüello. "TCANet for Domain Adaptation of Hyperspectral Images". Remote Sensing 11, no. 19 (September 30, 2019): 2289. http://dx.doi.org/10.3390/rs11192289.

Abstract:
The use of Convolutional Neural Networks (CNNs) to solve Domain Adaptation (DA) image classification problems in the context of remote sensing has proven to provide good results but at high computational cost. To avoid this problem, a deep learning network for DA in remote sensing hyperspectral images called TCANet is proposed. As a standard CNN, TCANet consists of several stages built based on convolutional filters that operate on patches of the hyperspectral image. Unlike the former, the coefficients of the filter are obtained through Transfer Component Analysis (TCA). This approach has two advantages: firstly, TCANet does not require training based on backpropagation, since TCA is itself a learning method that obtains the filter coefficients directly from the input data. Second, DA is performed on the fly since TCA, in addition to performing dimensional reduction, obtains components that minimize the difference in distributions of data in the different domains corresponding to the source and target images. To build an operating scheme, TCANet includes an initial stage that exploits the spatial information by providing patches around each sample as input data to the network. An output stage performing feature extraction that introduces sufficient invariance and robustness in the final features is also included. Since TCA is sensitive to normalization, to reduce the difference between source and target domains, a previous unsupervised domain shift minimization algorithm consisting of applying conditional correlation alignment (CCA) is conditionally applied. The results of a classification scheme based on CCA and TCANet show that the DA technique proposed outperforms other more complex DA techniques.
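For orientation, a compact NumPy sketch of linear-kernel Transfer Component Analysis, the component that supplies TCANet's filter coefficients without backpropagation, is given below. This is the textbook TCA eigenproblem under our own simplifying assumptions (linear kernel, no patch handling, no CCA preprocessing), not the authors' pipeline.

    import numpy as np

    def tca_components(Xs, Xt, k=10, mu=1.0):
        """Linear-kernel TCA: find projections that minimize the MMD between
        source Xs (ns, d) and target Xt (nt, d) while preserving variance."""
        X = np.vstack([Xs, Xt])
        ns, nt, n = len(Xs), len(Xt), len(Xs) + len(Xt)
        K = X @ X.T                                   # linear kernel matrix
        # MMD coefficient matrix L
        e = np.vstack([np.full((ns, 1), 1.0 / ns), np.full((nt, 1), -1.0 / nt)])
        L = e @ e.T
        H = np.eye(n) - np.ones((n, n)) / n           # centering matrix
        # Leading eigenvectors of (K L K + mu I)^-1 K H K
        A = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
        vals, vecs = np.linalg.eig(A)
        W = np.real(vecs[:, np.argsort(-np.real(vals))[:k]])
        Z = K @ W                                     # embedded source + target data
        return Z[:ns], Z[ns:]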
3

Zhao, Fangwen, Weifeng Liu, and Chenglin Wen. "A New Method of Image Classification Based on Domain Adaptation". Sensors 22, no. 4 (February 9, 2022): 1315. http://dx.doi.org/10.3390/s22041315.

Abstract:
Deep neural networks can learn powerful representations from massive amounts of labeled data; however, their performance is unsatisfactory in the case of large samples and small labels. Transfer learning can bridge between a source domain with rich sample data and a target domain with only a few or zero labeled samples and, thus, complete the transfer of knowledge by aligning the distribution between domains through methods, such as domain adaptation. Previous domain adaptation methods mostly align the features in the feature space of all categories on a global scale. Recently, the method of locally aligning the sub-categories by introducing label information achieved better results. Based on this, we present a deep fuzzy domain adaptation (DFDA) that assigns different weights to samples of the same category in the source and target domains, which enhances the domain adaptive capabilities. Our experiments demonstrate that DFDA can achieve remarkable results on standard domain adaptation datasets.
4

Wang, Jing, Yi He, Wangyi Fang, Yiwei Chen, Wanyue Li, and Guohua Shi. "Unsupervised domain adaptation model for lesion detection in retinal OCT images". Physics in Medicine & Biology 66, no. 21 (October 22, 2021): 215006. http://dx.doi.org/10.1088/1361-6560/ac2dd1.

Abstract:
Abstract Background and objective. Optical coherence tomography (OCT) is one of the most used retinal imaging modalities in the clinic as it can provide high-resolution anatomical images. The huge number of OCT images has significantly advanced the development of deep learning methods for automatic lesion detection to ease the doctor’s workload. However, it has been frequently revealed that the deep neural network model has difficulty handling the domain discrepancies, which widely exist in medical images captured from different devices. Many works have been proposed to solve the domain shift issue in deep learning tasks such as disease classification and lesion segmentation, but few works focused on lesion detection, especially for OCT images. Methods. In this work, we proposed a faster-RCNN based, unsupervised domain adaptation model to address the lesion detection task in cross-device retinal OCT images. The domain shift is minimized by reducing the image-level shift and instance-level shift at the same time. We combined a domain classifier with a Wasserstein distance critic to align the shifts at each level. Results. The model was tested on two sets of OCT image data captured from different devices, obtained an average accuracy improvement of more than 8% over the method without domain adaptation, and outperformed other comparable domain adaptation methods. Conclusion. The results demonstrate the proposed model is more effective in reducing the domain shift than advanced methods.
5

Zhao, Sicheng, Chuang Lin, Pengfei Xu, Sendong Zhao, Yuchen Guo, Ravi Krishna, Guiguang Ding, and Kurt Keutzer. "CycleEmotionGAN: Emotional Semantic Consistency Preserved CycleGAN for Adapting Image Emotions". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2620–27. http://dx.doi.org/10.1609/aaai.v33i01.33012620.

Abstract:
Deep neural networks excel at learning from large-scale labeled training data, but cannot well generalize the learned knowledge to new domains or datasets. Domain adaptation studies how to transfer models trained on one labeled source domain to another sparsely labeled or unlabeled target domain. In this paper, we investigate the unsupervised domain adaptation (UDA) problem in image emotion classification. Specifically, we develop a novel cycle-consistent adversarial model, termed CycleEmotionGAN, by enforcing emotional semantic consistency while adapting images cycle-consistently. By alternately optimizing the CycleGAN loss, the emotional semantic consistency loss, and the target classification loss, CycleEmotionGAN can adapt source domain images to have similar distributions to the target domain without using aligned image pairs. Simultaneously, the annotation information of the source images is preserved. Extensive experiments are conducted on the ArtPhoto and FI datasets, and the results demonstrate that CycleEmotionGAN significantly outperforms the state-of-the-art UDA approaches.
6

Zhu, Yi, Xinke Zhou, and Xindong Wu. "Unsupervised Domain Adaptation via Stacked Convolutional Autoencoder". Applied Sciences 13, no. 1 (December 29, 2022): 481. http://dx.doi.org/10.3390/app13010481.

Abstract:
Unsupervised domain adaptation involves knowledge transfer from a labeled source to unlabeled target domains to assist target learning tasks. A critical aspect of unsupervised domain adaptation is the learning of more transferable and distinct feature representations from different domains. Although previous investigations, using, for example, CNN-based and auto-encoder-based methods, have produced remarkable results in domain adaptation, there are still two main problems that occur with these methods. The first is a training problem for deep neural networks; some optimization methods are ineffective when applied to unsupervised deep networks for domain adaptation tasks. The second problem that arises is that redundancy of image data results in performance degradation in feature learning for domain adaptation. To address these problems, in this paper, we propose an unsupervised domain adaptation method with a stacked convolutional sparse autoencoder, which is based on performing layer projection from the original data to obtain higher-level representations for unsupervised domain adaptation. More specifically, in a convolutional neural network, lower layers generate more discriminative features whose kernels are learned via a sparse autoencoder. A reconstruction independent component analysis optimization algorithm was introduced to perform individual component analysis on the input data. Experiments undertaken demonstrated superior classification performance of up to 89.3% in terms of accuracy compared to several state-of-the-art domain adaptation methods, such as SSRLDA and TLMRA.
7

Rezvaya, Ekaterina, Pavel Goncharov, and Gennady Ososkov. "Using deep domain adaptation for image-based plant disease detection". System Analysis in Science and Education, no. 2 (2020) (June 30, 2020): 59–69. http://dx.doi.org/10.37005/2071-9612-2020-2-59-69.

Abstract:
Crop losses due to plant diseases are a serious problem for the farming sector of agriculture and the economy. Therefore, a multi-functional Plant Disease Detection Platform (PDDP) was developed at LIT JINR. Deep learning techniques are successfully used in PDDP to solve the problem of recognizing plant diseases from photographs of their leaves. However, such methods require a large training dataset. At the same time, there are a number of methods used to solve classification problems in cases of a small training dataset, for example, domain adaptation (DA) methods. In this paper, a comparative study of three DA methods is performed: Domain-Adversarial Training of Neural Networks (DANN), two-step transfer learning, and Unsupervised Domain Adaptation with Deep Metric Learning (M-ADDA). The advantage of the M-ADDA method was shown, which made it possible to achieve 92% classification accuracy.
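Since DANN is one of the three compared methods, a minimal PyTorch sketch of its core ingredient, the gradient reversal layer, is shown below. This is a generic illustration of the DANN technique, not the authors' implementation.

    import torch

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; multiplies the gradient by -lambda
        in the backward pass, so the feature extractor is trained to fool
        the domain classifier."""
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    def grad_reverse(x, lambd=1.0):
        return GradReverse.apply(x, lambd)

    # Usage: domain_logits = domain_classifier(grad_reverse(features, 0.5))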
8

Magotra, Arjun, and Juntae Kim. "Neuromodulated Dopamine Plastic Networks for Heterogeneous Transfer Learning with Hebbian Principle". Symmetry 13, no. 8 (July 26, 2021): 1344. http://dx.doi.org/10.3390/sym13081344.

Abstract:
The plastic modifications in synaptic connectivity is primarily from changes triggered by neuromodulated dopamine signals. These activities are controlled by neuromodulation, which is itself under the control of the brain. The subjective brain’s self-modifying abilities play an essential role in learning and adaptation. The artificial neural networks with neuromodulated plasticity are used to implement transfer learning in the image classification domain. In particular, this has application in image detection, image segmentation, and transfer of learning parameters with significant results. This paper proposes a novel approach to enhance transfer learning accuracy in a heterogeneous source and target, using the neuromodulation of the Hebbian learning principle, called NDHTL (Neuromodulated Dopamine Hebbian Transfer Learning). Neuromodulation of plasticity offers a powerful new technique with applications in training neural networks implementing asymmetric backpropagation using Hebbian principles in transfer learning motivated CNNs (Convolutional neural networks). Biologically motivated concomitant learning, where connected brain cells activate positively, enhances the synaptic connection strength between the network neurons. Using the NDHTL algorithm, the percentage of change of the plasticity between the neurons of the CNN layer is directly managed by the dopamine signal’s value. The discriminative nature of transfer learning fits well with the technique. The learned model’s connection weights must adapt to unseen target datasets with the least cost and effort in transfer learning. Using distinctive learning principles such as dopamine Hebbian learning in transfer learning for asymmetric gradient weights update is a novel approach. The paper emphasizes the NDHTL algorithmic technique as synaptic plasticity controlled by dopamine signals in transfer learning to classify images using source-target datasets. The standard transfer learning using gradient backpropagation is a symmetric framework. Experimental results using CIFAR-10 and CIFAR-100 datasets show that the proposed NDHTL algorithm can enhance transfer learning efficiency compared to existing methods.
9

Chengqi Zhang, Ling Guan, and Zheru Chi. "Introduction to the Special Issue on Learning in Intelligent Algorithms and Systems Design". Journal of Advanced Computational Intelligence and Intelligent Informatics 3, no. 6 (December 20, 1999): 439–40. http://dx.doi.org/10.20965/jaciii.1999.p0439.

Abstract:
Learning has long been and will continue to be a key issue in intelligent algorithms and systems design. Emulating the behavior and mechanisms of human learning by machines at such high levels as symbolic processing and such low levels as neuronal processing has long been a dominant interest among researchers worldwide. Neural networks, fuzzy logic, and evolutionary algorithms represent the three most active research areas. With advanced theoretical studies and computer technology, many promising algorithms and systems using these techniques have been designed and implemented for a wide range of applications. This Special Issue presents seven papers on learning in intelligent algorithms and systems design from researchers in Japan, China, Australia, and the U.S. Neural Networks: Emulating low-level human intelligent processing, or neuronal processing, gave birth to artificial neural networks more than five decades ago. It was hoped that devices based on biological neural networks would possess characteristics of the human brain. Neural networks have reattracted researchers' attention since the late 1980s when back-propagation algorithms were used to train multilayer feed-forward neural networks. In the last decades, we have seen promising progress in this research field yield many new models, learning algorithms, and real-world applications, evidenced by the publication of new journals in this field. Fuzzy Logic: Since L. A. Zadeh introduced fuzzy set theory in 1965, fuzzy logic has increasingly become the focus of many researchers and engineers, opening up new research and problem solving. Fuzzy set theory has been favorably applied to control system design. In the last few years, fuzzy model applications have bloomed in image processing and pattern recognition. Evolutionary Algorithms: Evolutionary optimization algorithms have been studied for over three decades, emulating natural evolutionary search and selection so powerful in global optimization. The study of evolutionary algorithms includes evolutionary programming (EP), evolutionary strategies (ESs), genetic algorithms (GAs), and genetic programming (GP). In the last few years, we have also seen multiple computational algorithms combined to maximize system performance, such as neurofuzzy networks, fuzzy neural networks, fuzzy logic and genetic optimization, neural networks, and evolutionary algorithms. This Special Issue also includes papers that introduce combined techniques. Wang et al. present an improved fuzzy algorithm for enhanced eyeground images. Examination of the eyeground image is effective in diagnosing glaucoma and diabetes. Conventional eyeground image quality is usually too poor for doctors to obtain useful information, so enhancement is required. Due to details and uncertainties in eyeground images, conventional enhancement such as histogram equalization, edge enhancement, and high-pass filters fail to achieve good results. Fuzzy enhancement enhances images in three steps: (1) transferring an image from the spatial domain to the fuzzy domain; (2) conducting enhancement in the fuzzy domain; and (3) returning the image from the fuzzy domain to the spatial domain. The paper detailing this proposes improved mapping and fast implementation. Mohammadian presents a method for designing self-learning hierarchical fuzzy logic control systems based on the integration of evolutionary algorithms and fuzzy logic. The purpose of such an approach is to provide an integrated knowledge base for intelligent control and collision avoidance in a multirobot system. Evolutionary algorithms are used for adaptation: learning the fuzzy knowledge bases of control systems and learning the mapping and interaction between fuzzy knowledge bases of different fuzzy logic systems. The fuzzy integral has been found useful in data fusion. Pham and Wagner present an approach based on the fuzzy integral and GAs to combine likelihood values of cohort speakers. The fuzzy integral nonlinearly fuses similarity measures of an utterance assigned to cohort speakers. In their approach, GAs find the optimal fuzzy densities required for fuzzy fusion. Experiments using the commercial speech corpus T146 show their approach achieves more favorable performance than conventional normalization. Evolution reflects the behavior of a society. Puppala and Sen present a coevolutionary approach to generating behavioral strategies for cooperating agent groups. Agent behavior evolves via GAs, where one genetic algorithm population is evolved per individual in the cooperative group. Groups are evaluated by pairing strategies from each population, and the best strategy pairs are stored together in shared memory. The approach is evaluated using asymmetric room painting, and results demonstrate the superiority of shared memory over random pairing in consistently generating optimal behavior patterns. Object representation and template optimization are two main factors affecting object recognition performance. Lu et al. present an evolutionary algorithm for optimizing handwritten numeral templates represented by rational B-spline surfaces of character foreground-background-distance distribution maps. Initial templates are extracted from training a feed-forward neural network instead of using arbitrarily chosen patterns to reduce the iterations required in evolutionary optimization. To further reduce computational complexity, a fast search is used in selection. Using 1,000 optimized numeral templates, the classifier achieves a classification rate of 96.4% while rejecting 90.7% of nonnumeral patterns when tested on NIST Special Database 3. Determining an appropriate number of clusters is difficult yet important. Li et al. base their approach on rival penalized competitive learning (RPCL), addressing problems of overlapped clusters and dependent components of input vectors by incorporating full covariance matrices into the original RPCL algorithm. The resulting learning algorithm progressively eliminates units whose clusters contain only a small amount of training data. The algorithm is applied to determine the number of clusters in a Gaussian mixture distribution and to optimize the architecture of elliptical function networks for speaker verification and for vowel classification. Another important issue in learning is Kurihara and Sugawara's adaptive reinforcement learning algorithm integrating exploitation- and exploration-oriented learning. This algorithm is more robust in dynamically changing, large-scale environments, providing better performance than either exploitation-oriented or exploration-oriented learning alone, making it well suited for autonomous systems. In closing, we would like to thank the authors who have submitted papers to this Special Issue and express our appreciation to the referees for their excellent work in reading papers under a tight schedule.
10

Wittich, D., and F. Rottensteiner. "ADVERSARIAL DOMAIN ADAPTATION FOR THE CLASSIFICATION OF AERIAL IMAGES AND HEIGHT DATA USING CONVOLUTIONAL NEURAL NETWORKS". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W7 (September 16, 2019): 197–204. http://dx.doi.org/10.5194/isprs-annals-iv-2-w7-197-2019.

Abstract:
Domain adaptation (DA) can drastically decrease the amount of training data needed to obtain good classification models by leveraging available data from a source domain for the classification of a new (target) domain. In this paper, we address deep DA, i.e. DA with deep convolutional neural networks (CNN), a problem that has not been addressed frequently in remote sensing. We present a new method for semi-supervised DA for the task of pixel-based classification by a CNN. After proposing an encoder-decoder-based fully convolutional neural network (FCN), we adapt a method for adversarial discriminative DA to be applicable to the pixel-based classification of remotely sensed data based on this network. It tries to learn a feature representation that is domain invariant; domain-invariance is measured by a classifier's incapability of predicting from which domain a sample was generated. We evaluate our FCN on the ISPRS labelling challenge, showing that it is close to the best-performing models. DA is evaluated on the basis of three domains. We compare different network configurations and perform the representation transfer at different layers of the network. We show that when using a proper layer for adaptation, our method achieves a positive transfer and thus an improved classification accuracy in the target domain for all evaluated combinations of source and target domains.
11

Bi, Hui, Zehao Liu, Jiarui Deng, Zhongyuan Ji, and Jingjing Zhang. "Contrastive Domain Adaptation-Based Sparse SAR Target Classification under Few-Shot Cases". Remote Sensing 15, no. 2 (January 13, 2023): 469. http://dx.doi.org/10.3390/rs15020469.

Abstract:
Due to the imaging mechanism of synthetic aperture radar (SAR), it is difficult and costly to acquire abundant labeled SAR images. Moreover, a typical matched filtering (MF) based image faces the problems of serious noise, sidelobes, and clutters, which will bring down the accuracy of SAR target classification. Different from the MF-based result, a sparse image shows better quality with less noise and higher image signal-to-noise ratio (SNR). Therefore, theoretically using it for target classification will achieve better performance. In this paper, a novel contrastive domain adaptation (CDA) based sparse SAR target classification method is proposed to solve the problem of insufficient samples. In the proposed method, we firstly construct a sparse SAR image dataset by using the complex image based iterative soft thresholding (BiIST) algorithm. Then, the simulated and real SAR datasets are simultaneously sent into an unsupervised domain adaptation framework to reduce the distribution difference and obtain the reconstructed simulated SAR images for subsequent target classification. Finally, the reconstructed simulated images are manually labeled and fed into a shallow convolutional neural network (CNN) for target classification along with a small number of real sparse SAR images. Since the current definition of the number of small samples is still vague and inconsistent, this paper defines few-shot as less than 20 per class. Experimental results based on MSTAR under standard operating conditions (SOC) and extended operating conditions (EOC) show that the reconstructed simulated SAR dataset makes up for the insufficient information from limited real data. Compared with other typical deep learning methods based on limited samples, our method is able to achieve higher accuracy especially under the conditions of few shots.
12

Wittich, D. "DEEP DOMAIN ADAPTATION BY WEIGHTED ENTROPY MINIMIZATION FOR THE CLASSIFICATION OF AERIAL IMAGES". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (August 3, 2020): 591–98. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-591-2020.

Abstract:
Abstract. Fully convolutional neural networks (FCN) are successfully used for the automated pixel-wise classification of aerial images and possibly additional data. However, they require many labelled training samples to perform well. One approach addressing this issue is semi-supervised domain adaptation (SSDA). Here, labelled training samples from a source domain and unlabelled samples from a target domain are used jointly to obtain a target domain classifier, without requiring any labelled samples from the target domain. In this paper, a two-step approach for SSDA is proposed. The first step corresponds to a supervised training on the source domain, making use of strong data augmentation to increase the initial performance on the target domain. Secondly, the model is adapted by entropy minimization using a novel weighting strategy. The approach is evaluated on the basis of five domains, corresponding to five cities. Several training variants and adaptation scenarios are tested, indicating that proper data augmentation can already improve the initial target domain performance significantly resulting in an average overall accuracy of 77.5%. The weighted entropy minimization improves the overall accuracy on the target domains in 19 out of 20 scenarios on average by 1.8%. In all experiments a novel FCN architecture is used that yields results comparable to those of the best-performing models on the ISPRS labelling challenge while having an order of magnitude fewer parameters than commonly used FCNs.
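The adaptation step can be illustrated with a short PyTorch sketch of entropy minimization over target-domain logits. The weighting strategy is the paper's contribution and is not reproduced here; a simple confidence-based weight is used as a stand-in assumption.

    import torch
    import torch.nn.functional as F

    def weighted_entropy_loss(logits, eps=1e-8):
        """Entropy of pixel-wise predictions, weighted per pixel.
        logits: (B, C, H, W) unnormalized scores on unlabelled target images."""
        p = F.softmax(logits, dim=1)
        entropy = -(p * torch.log(p + eps)).sum(dim=1)      # (B, H, W)
        # Placeholder weight: trust confident pixels more (max class probability).
        weight = p.max(dim=1).values.detach()
        return (weight * entropy).sum() / weight.sum()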
13

Kwak, Geun-Ho, and No-Wook Park. "Unsupervised Domain Adaptation with Adversarial Self-Training for Crop Classification Using Remote Sensing Images". Remote Sensing 14, no. 18 (September 16, 2022): 4639. http://dx.doi.org/10.3390/rs14184639.

Abstract:
Crop type mapping is regarded as an essential part of effective agricultural management. Automated crop type mapping using remote sensing images is preferred for the consistent monitoring of crop types. However, the main obstacle to generating annual crop type maps is the collection of sufficient training data for supervised classification. Classification based on unsupervised domain adaptation, which uses prior information from the source domain for target domain classification, can solve the impractical problem of collecting sufficient training data. This study presents self-training with domain adversarial network (STDAN), a novel unsupervised domain adaptation framework for crop type classification. The core purpose of STDAN is to combine adversarial training to alleviate spectral discrepancy problems with self-training to automatically generate new training data in the target domain using an existing thematic map or ground truth data. STDAN consists of three analysis stages: (1) initial classification using domain adversarial neural networks; (2) the self-training-based updating of training candidates using constraints specific to crop classification; and (3) the refinement of training candidates using iterative classification and final classification. The potential of STDAN was evaluated by conducting six experiments reflecting various domain discrepancy conditions in unmanned aerial vehicle images acquired at different regions and times. In most cases, the classification performance of STDAN was found to be compatible with the classification using training data collected from the target domain. In particular, the superiority of STDAN was shown to be prominent when the domain discrepancy was substantial. Based on these results, STDAN can be effectively applied to automated cross-domain crop type mapping without analyst intervention when prior information is available in the target domain.
14

Zhao, Liquan, and Yan Liu. "Spectral Normalization for Domain Adaptation". Information 11, no. 2 (January 27, 2020): 68. http://dx.doi.org/10.3390/info11020068.

Abstract:
The transfer learning method is used to extend our existing model to more difficult scenarios, thereby accelerating the training process and improving learning performance. The conditional adversarial domain adaptation method proposed in 2018 is a particular type of transfer learning. It uses the domain discriminator to identify which images the extracted features belong to. The features are obtained from the feature extraction network. The stability of the domain discriminator directly affects the classification accuracy. Here, we propose a new algorithm to improve the predictive accuracy. First, we introduce the Lipschitz constraint condition into domain adaptation. If the constraint condition can be satisfied, the method will be stable. Second, we analyze how to make the gradient satisfy the condition, thereby deducing the modified gradient via the spectrum regularization method. The modified gradient is then used to update the parameter matrix. The proposed method is compared to the ResNet-50, deep adaptation network, domain adversarial neural network, joint adaptation network, and conditional domain adversarial network methods using the datasets that are found in Office-31, ImageCLEF-DA, and Office-Home. The simulations demonstrate that the proposed method has a better performance than other methods with respect to accuracy.
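The Lipschitz constraint discussed above is commonly enforced with spectral normalization of the discriminator's weight matrices; a minimal PyTorch sketch using the built-in utility follows. This mirrors the general technique, not necessarily the exact gradient modification derived in the paper, and the layer sizes are illustrative.

    import torch.nn as nn
    from torch.nn.utils import spectral_norm

    # Domain discriminator whose linear layers are spectrally normalized, which
    # bounds each layer's Lipschitz constant and stabilizes adversarial training.
    domain_discriminator = nn.Sequential(
        spectral_norm(nn.Linear(256, 1024)),
        nn.ReLU(inplace=True),
        spectral_norm(nn.Linear(1024, 1024)),
        nn.ReLU(inplace=True),
        spectral_norm(nn.Linear(1024, 1)),   # source/target domain score
    )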
15

Chen, Cheng, Qi Dou, Hao Chen, Jing Qin, and Pheng-Ann Heng. "Synergistic Image and Feature Adaptation: Towards Cross-Modality Domain Adaptation for Medical Image Segmentation". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 865–72. http://dx.doi.org/10.1609/aaai.v33i01.3301865.

Abstract:
This paper presents a novel unsupervised domain adaptation framework, called Synergistic Image and Feature Adaptation (SIFA), to effectively tackle the problem of domain shift. Domain adaptation has become an important and hot topic in recent studies on deep learning, aiming to recover performance degradation when applying the neural networks to new testing domains. Our proposed SIFA is an elegant learning diagram which presents synergistic fusion of adaptations from both image and feature perspectives. In particular, we simultaneously transform the appearance of images across domains and enhance domain-invariance of the extracted features towards the segmentation task. The feature encoder layers are shared by both perspectives to grasp their mutual benefits during the end-to-end learning procedure. Without using any annotation from the target domain, the learning of our unified model is guided by adversarial losses, with multiple discriminators employed from various aspects. We have extensively validated our method with a challenging application of crossmodality medical image segmentation of cardiac structures. Experimental results demonstrate that our SIFA model recovers the degraded performance from 17.2% to 73.0%, and outperforms the state-of-the-art methods by a significant margin.
16

Ma, Li, and Jiazhen Song. "Deep neural network-based domain adaptation for classification of remote sensing images". Journal of Applied Remote Sensing 11, no. 04 (September 20, 2017): 1. http://dx.doi.org/10.1117/1.jrs.11.042612.

17

Wu, Lei, Hefei Ling, Yuxuan Shi, and Baiyan Zhang. "Instance Correlation Graph for Unsupervised Domain Adaptation". ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 1s (February 28, 2022): 1–23. http://dx.doi.org/10.1145/3486251.

Abstract:
In recent years, deep neural networks have emerged as a dominant machine learning tool for a wide variety of application fields. Due to the expensive cost of manual labeling efforts, it is important to transfer knowledge from a label-rich source domain to an unlabeled target domain. The core problem is how to learn a domain-invariant representation to address the domain shift challenge, in which the training and test samples come from different distributions. First, considering the geometry of space probability distributions, we introduce an effective Hellinger Distance to match the source and target distributions on statistical manifold. Second, the data samples are not isolated individuals, and they are interrelated. The correlation information of data samples should not be neglected for domain adaptation. Distinguished from previous works, we pay attention to the correlation distributions over data samples. We design elaborately a Residual Graph Convolutional Network to construct the Instance Correlation Graph (ICG). The correlation information of data samples is exploited to reduce the domain shift. Therefore, a novel Instance Correlation Graph for Unsupervised Domain Adaptation is proposed, which is trained end-to-end by jointly optimizing three types of losses, i.e., Supervised Classification loss for source domain, Centroid Alignment loss to measure the centroid difference between source and target domain, ICG Alignment loss to match Instance Correlation Graph over two related domains. Extensive experiments are conducted on several hard transfer tasks to learn domain-invariant representations on three benchmarks: Office-31, Office-Home, and VisDA2017. Compared with other state-of-the-art techniques, our method achieves superior performance.
18

Huang, Yue, Han Zheng, Chi Liu, Xinghao Ding, and Gustavo K. Rohde. "Epithelium-Stroma Classification via Convolutional Neural Networks and Unsupervised Domain Adaptation in Histopathological Images". IEEE Journal of Biomedical and Health Informatics 21, no. 6 (November 2017): 1625–32. http://dx.doi.org/10.1109/jbhi.2017.2691738.

19

Goodarzi, Payman, Andreas Schütze, and Tizian Schneider. "Comparison of different ML methods concerning prediction quality, domain adaptation and robustness". tm - Technisches Messen 89, no. 4 (February 25, 2022): 224–39. http://dx.doi.org/10.1515/teme-2021-0129.

Abstract:
Abstract Nowadays machine learning methods and data-driven models have been used widely in different fields including computer vision, biomedicine, and condition monitoring. However, these models show performance degradation when meeting real-life situations. Domain or dataset shift or out-of-distribution (OOD) prediction is mentioned as the reason for this problem. Especially in industrial condition monitoring, it is not clear when we should be concerned about domain shift and which methods are more robust against this problem. In this paper prediction results are compared for a conventional machine learning workflow based on feature extraction, selection, and classification/regression (FESC/R) and deep neural networks on two publicly available industrial datasets. We show that it is possible to visualize the possible shift in domain using feature extraction and principal component analysis. Also, experimental competition shows that the cross-domain validated results of FESC/R are comparable to the reported state-of-the-art methods. Finally, we show that the results for simple randomly selected validation sets do not correctly represent the model performance in real-world applications.
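The domain-shift visualization mentioned in this abstract can be reproduced in a few lines with scikit-learn. The feature matrices `X_source` and `X_target` are assumed inputs, standing in for features produced by an FESC/R-style extraction step.

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    def plot_domain_shift(X_source, X_target):
        """Project source and target features onto the first two principal
        components of the pooled data to inspect a possible domain shift."""
        X = np.vstack([X_source, X_target])
        X = StandardScaler().fit_transform(X)
        Z = PCA(n_components=2).fit_transform(X)
        n = len(X_source)
        plt.scatter(Z[:n, 0], Z[:n, 1], s=8, label="source domain")
        plt.scatter(Z[n:, 0], Z[n:, 1], s=8, label="target domain")
        plt.xlabel("PC 1"); plt.ylabel("PC 2"); plt.legend()
        plt.show()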
20

Tang, Shixiang, Peng Su, Dapeng Chen, and Wanli Ouyang. "Gradient Regularized Contrastive Learning for Continual Domain Adaptation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 2665–73. http://dx.doi.org/10.1609/aaai.v35i3.16370.

Abstract:
Human beings can quickly adapt to environmental changes by leveraging learning experience. However, adapting deep neural networks to dynamic environments by machine learning algorithms remains a challenge. To better understand this issue, we study the problem of continual domain adaptation, where the model is presented with a labelled source domain and a sequence of unlabelled target domains. The obstacles in this problem are both domain shift and catastrophic forgetting. We propose Gradient Regularized Contrastive Learning (GRCL) to solve the obstacles. At the core of our method, gradient regularization plays two key roles: (1) enforcing the gradient not to harm the discriminative ability of source features which can, in turn, benefit the adaptation ability of the model to target domains; (2) constraining the gradient not to increase the classification loss on old target domains, which enables the model to preserve the performance on old target domains when adapting to an in-coming target domain. Experiments on Digits, DomainNet and Office-Caltech benchmarks demonstrate the strong performance of our approach when compared to the state-of-the-art.
21

LIN, CHIN-TENG, HSI-WEN NEIN, and WEN-CHIEH LIN. "A SPACE-TIME DELAY NEURAL NETWORK FOR MOTION RECOGNITION AND ITS APPLICATION TO LIPREADING". International Journal of Neural Systems 09, no. 04 (August 1999): 311–34. http://dx.doi.org/10.1142/s0129065799000319.

Abstract:
Motion recognition has received increasing attention in recent years owing to heightened demand for computer vision in many domains, including the surveillance system, multimodal human computer interface, and traffic control system. Most conventional approaches classify the motion recognition task into partial feature extraction and time-domain recognition subtasks. However, the information of motion resides in the space-time domain instead of the time domain or space domain independently, implying that fusing the feature extraction and classification in the space and time domains into a single framework is preferred. Based on this notion, this work presents a novel Space-Time Delay Neural Network (STDNN) capable of handling the space-time dynamic information for motion recognition. The STDNN is unified structure, in which the low-level spatiotemporal feature extraction and high-level space-time-domain recognition are fused. The proposed network possesses the spatiotemporal shift-invariant recognition ability that is inherited from the time delay neural network (TDNN) and space displacement neural network (SDNN), where TDNN and SDNN are good at temporal and spatial shift-invariant recognition, respectively. In contrast to multilayer perceptron (MLP), TDNN, and SDNN, STDNN is constructed by vector-type nodes and matrix-type links such that the spatiotemporal information can be accurately represented in a neural network. Also evaluated herein is the performance of the proposed STDNN via two experiments. The moving Arabic numerals (MAN) experiment simulates the object's free movement in the space-time domain on image sequences. According to these results, STDNN possesses a good generalization ability with respect to the spatiotemporal shift-invariant recognition. In the lipreading experiment, STDNN recognizes the lip motions based on the inputs of real image sequences. This observation confirms that STDNN yields a better performance than the existing TDNN-based system, particularly in terms of the generalization ability. In addition to the lipreading application, the STDNN can be applied to other problems since no domain-dependent knowledge is used in the experiment.
22

Jiang, Jianguo, Boquan Li, Baole Wei, Gang Li, Chao Liu, Weiqing Huang, Meimei Li, and Min Yu. "FakeFilter: A cross-distribution Deepfake detection system with domain adaptation". Journal of Computer Security 29, no. 4 (June 18, 2021): 403–21. http://dx.doi.org/10.3233/jcs-200124.

Abstract:
Abuse of face swap techniques poses serious threats to the integrity and authenticity of digital visual media. More alarmingly, fake images or videos created by deep learning technologies, also known as Deepfakes, are more realistic, high-quality, and reveal few tampering traces, which attracts great attention in digital multimedia forensics research. To address those threats imposed by Deepfakes, previous work attempted to classify real and fake faces by discriminative visual features, which is subjected to various objective conditions such as the angle or posture of a face. Differently, some research devises deep neural networks to discriminate Deepfakes at the microscopic-level semantics of images, which achieves promising results. Nevertheless, such methods show limited success as encountering unseen Deepfakes created with different methods from the training sets. Therefore, we propose a novel Deepfake detection system, named FakeFilter, in which we formulate the challenge of unseen Deepfake detection into a problem of cross-distribution data classification, and address the issue with a strategy of domain adaptation. By mapping different distributions of Deepfakes into similar features in a certain space, the detection system achieves comparable performance on both seen and unseen Deepfakes. Further evaluation and comparison results indicate that the challenge has been successfully addressed by FakeFilter.
23

Mirkes, Evgeny M., Jonathan Bac, Aziz Fouché, Sergey V. Stasenko, Andrei Zinovyev, and Alexander N. Gorban. "Domain Adaptation Principal Component Analysis: Base Linear Method for Learning with Out-of-Distribution Data". Entropy 25, no. 1 (December 24, 2022): 33. http://dx.doi.org/10.3390/e25010033.

Abstract:
Domain adaptation is a popular paradigm in modern machine learning which aims at tackling the problem of divergence (or shift) between the labeled training and validation datasets (source domain) and a potentially large unlabeled dataset (target domain). The task is to embed both datasets into a common space in which the source dataset is informative for training while the divergence between source and target is minimized. The most popular domain adaptation solutions are based on training neural networks that combine classification and adversarial learning modules, frequently making them both data-hungry and difficult to train. We present a method called Domain Adaptation Principal Component Analysis (DAPCA) that identifies a linear reduced data representation useful for solving the domain adaptation task. DAPCA algorithm introduces positive and negative weights between pairs of data points, and generalizes the supervised extension of principal component analysis. DAPCA is an iterative algorithm that solves a simple quadratic optimization problem at each iteration. The convergence of the algorithm is guaranteed, and the number of iterations is small in practice. We validate the suggested algorithm on previously proposed benchmarks for solving the domain adaptation task. We also show the benefit of using DAPCA in analyzing single-cell omics datasets in biomedical applications. Overall, DAPCA can serve as a practical preprocessing step in many machine learning applications leading to reduced dataset representations, taking into account possible divergence between source and target domains.
24

Han, Yuna, and Byung-Woo Hong. "Deep Learning Based on Fourier Convolutional Neural Network Incorporating Random Kernels". Electronics 10, no. 16 (August 19, 2021): 2004. http://dx.doi.org/10.3390/electronics10162004.

Abstract:
In recent years, convolutional neural networks have been studied in the Fourier domain for a limited environment, where competitive results can be expected for conventional image classification tasks in the spatial domain. We present a novel efficient Fourier convolutional neural network, where a new activation function is used, the additional shift Fourier transformation process is eliminated, and the number of learnable parameters is reduced. First, the Phase Rectified Linear Unit (PhaseReLU) is proposed, which is equivalent to the Rectified Linear Unit (ReLU) in the spatial domain. Second, in the proposed Fourier network, the shift Fourier transform is removed since the process is inessential for training. Lastly, we introduce two ways of reducing the number of weight parameters in the Fourier network. The basic method is to use a three-by-three sized kernel instead of five-by-five in our proposed Fourier convolutional neural network. We use the random kernel in our efficient Fourier convolutional neural network, whose standard deviation of the Gaussian distribution is used as a weight parameter. In other words, since only two scalars for each imaginary and real component per channel are required, a very small number of parameters is applied compressively. Therefore, as a result of experimenting in shallow networks, such as LeNet-3 and LeNet-5, our method achieves competitive accuracy with conventional convolutional neural networks while dramatically reducing the number of parameters. Furthermore, our proposed Fourier network, using a basic three-by-three kernel, mostly performs with higher accuracy than traditional convolutional neural networks in shallow and deep neural networks. Our experiments represent that presented kernel methods have the potential to be applied in all architecture based on convolutional neural networks.
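The core idea of convolving in the Fourier domain can be sketched in PyTorch as below. The PhaseReLU activation and the random-kernel parameterization by a single standard deviation per channel are the paper's specifics and are not reproduced; this block only illustrates pointwise multiplication in the frequency domain under our own assumptions about tensor shapes.

    import torch
    import torch.nn as nn

    class FourierConv2d(nn.Module):
        """Sketch of a convolution performed as pointwise multiplication in the
        Fourier domain (convolution theorem). The spatial size is fixed."""
        def __init__(self, in_ch, out_ch, height, width):
            super().__init__()
            w = width // 2 + 1                       # rfft2 keeps half the spectrum
            scale = 1.0 / (in_ch * out_ch)
            self.weight = nn.Parameter(
                scale * torch.randn(out_ch, in_ch, height, w, dtype=torch.cfloat))

        def forward(self, x):                        # x: (B, in_ch, H, W), real
            H, W = x.shape[-2:]
            X = torch.fft.rfft2(x)                   # complex spectrum
            Y = torch.einsum("bihw,oihw->bohw", X, self.weight)
            return torch.fft.irfft2(Y, s=(H, W))     # back to the spatial domain

    # layer = FourierConv2d(3, 16, 32, 32); y = layer(torch.randn(8, 3, 32, 32))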
25

Karasu Benyes, Yasmin, E. Celeste Welch, Abhinav Singhal, Joyce Ou, and Anubhav Tripathi. "A Comparative Analysis of Deep Learning Models for Automated Cross-Preparation Diagnosis of Multi-Cell Liquid Pap Smear Images". Diagnostics 12, no. 8 (July 29, 2022): 1838. http://dx.doi.org/10.3390/diagnostics12081838.

Abstract:
Routine Pap smears can facilitate early detection of cervical cancer and improve patient outcomes. The objective of this work is to develop an automated, clinically viable deep neural network for the multi-class Bethesda System diagnosis of multi-cell images in Liquid Pap smear samples. 8 deep learning models were trained on a publicly available multi-class SurePath preparation dataset. This included the 5 best-performing transfer learning models, an ensemble, a novel convolutional neural network (CNN), and a CNN + autoencoder (AE). Additionally, each model was tested on a novel ThinPrep Pap dataset to determine model generalizability across different liquid Pap preparation methods with and without Deep CORAL domain adaptation. All models achieved accuracies >90% when classifying SurePath images. The AE CNN model, 99.80% smaller than the average transfer model, maintained an accuracy of 96.54%. During consecutive training attempts, individual transfer models had high variability in performance, whereas the CNN, AE CNN, and ensemble did not. ThinPrep Pap classification accuracies were notably lower but increased with domain adaptation, with ResNet101 achieving the highest accuracy at 92.65%. This indicates a potential area for future improvement: development of a globally relevant model that can function across different slide preparation methods.
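Deep CORAL, used here to adapt between the two Pap preparation methods, aligns second-order statistics of source and target features. A standard PyTorch formulation of the CORAL loss is sketched below; it is a generic implementation of the published loss, not the authors' code.

    import torch

    def coral_loss(source, target):
        """CORAL loss: squared Frobenius distance between the feature covariance
        matrices of the source and target batches (each of shape (n, d))."""
        d = source.size(1)

        def covariance(x):
            x = x - x.mean(dim=0, keepdim=True)
            return x.t() @ x / (x.size(0) - 1)

        diff = covariance(source) - covariance(target)
        return (diff * diff).sum() / (4.0 * d * d)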
26

Sun, Yuanshuang, Yinghua Wang, Hongwei Liu, Liping Hu, Chen Zhang, and Siyuan Wang. "Gradual Domain Adaptation with Pseudo-Label Denoising for SAR Target Recognition When Using Only Synthetic Data for Training". Remote Sensing 15, no. 3 (January 25, 2023): 708. http://dx.doi.org/10.3390/rs15030708.

Abstract:
Because of the high cost of data acquisition in synthetic aperture radar (SAR) target recognition, the application of synthetic (simulated) SAR data is becoming increasingly popular. Our study explores the problems encountered when training fully on synthetic data and testing on measured (real) data, and the distribution gap between synthetic and measured SAR data affects recognition performance under the circumstances. We propose a gradual domain adaptation recognition framework with pseudo-label denoising to solve this problem. As a warm-up, the feature alignment classification network is trained to learn the domain-invariant feature representation and obtain a relatively satisfactory recognition result. Then, we utilize the self-training method for further improvement. Some pseudo-labeled data are selected to fine-tune the network, narrowing the distribution difference between the training data and test data for each category. However, the pseudo-labels are inevitably noisy, and the wrong ones may deteriorate the classifier’s performance during fine-tuning iterations. Thus, we conduct pseudo-label denoising to eliminate some noisy pseudo-labels and improve the trained classifier’s robustness. We perform pseudo-label denoising based on the image similarity to keep the label consistent between the image and feature domains. We conduct extensive experiments on the newly published SAMPLE dataset, and we design two training scenarios to verify the proposed framework. For Training Scenario I, the framework matches the result of neural architecture searching and achieves 96.46% average accuracy. For Training Scenario II, the framework outperforms the results of other existing methods and achieves 97.36% average accuracy. These results illustrate the superiority of our framework, which can reach state-of-the-art recognition levels with appropriate stability.
27

Villalonga, Gabriel, Joost Van de Weijer, and Antonio M. López. "Recognizing New Classes with Synthetic Data in the Loop: Application to Traffic Sign Recognition". Sensors 20, no. 3 (January 21, 2020): 583. http://dx.doi.org/10.3390/s20030583.

Abstract:
On-board vision systems may need to increase the number of classes that can be recognized in a relatively short period. For instance, a traffic sign recognition system may suddenly be required to recognize new signs. Since collecting and annotating samples of such new classes may need more time than we wish, especially for uncommon signs, we propose a method to generate these samples by combining synthetic images and Generative Adversarial Network (GAN) technology. In particular, the GAN is trained on synthetic and real-world samples from known classes to perform synthetic-to-real domain adaptation, but applied to synthetic samples of the new classes. Using the Tsinghua dataset with a synthetic counterpart, SYNTHIA-TS, we have run an extensive set of experiments. The results show that the proposed method is indeed effective, provided that we use a proper Convolutional Neural Network (CNN) to perform the traffic sign recognition (classification) task as well as a proper GAN to transform the synthetic images. Here, a ResNet101-based classifier and domain adaptation based on CycleGAN performed extremely well for a ratio ∼ 1 / 4 for new/known classes; even for more challenging ratios such as ∼ 4 / 1 , the results are also very positive.
28

Sobecki, Piotr, Rafał Jóźwiak, Katarzyna Sklinda, and Artur Przelaskowski. "Effect of domain knowledge encoding in CNN model architecture—a prostate cancer study using mpMRI images". PeerJ 9 (March 9, 2021): e11006. http://dx.doi.org/10.7717/peerj.11006.

Abstract:
Background Prostate cancer is one of the most common cancers worldwide. Currently, convolution neural networks (CNNs) are achieving remarkable success in various computer vision tasks, and in medical imaging research. Various CNN architectures and methodologies have been applied in the field of prostate cancer diagnosis. In this work, we evaluate the impact of the adaptation of a state-of-the-art CNN architecture on domain knowledge related to problems in the diagnosis of prostate cancer. The architecture of the final CNN model was optimised on the basis of the Prostate Imaging Reporting and Data System (PI-RADS) standard, which is currently the best available indicator in the acquisition, interpretation, and reporting of prostate multi-parametric magnetic resonance imaging (mpMRI) examinations. Methods A dataset containing 330 suspicious findings identified using mpMRI was used. Two CNN models were subjected to comparative analysis. Both implement the concept of decision-level fusion for mpMRI data, providing a separate network for each multi-parametric series. The first model implements a simple fusion of multi-parametric features to formulate the final decision. The architecture of the second model reflects the diagnostic pathway of PI-RADS methodology, using information about a lesion’s primary anatomic location within the prostate gland. Both networks were experimentally tuned to successfully classify prostate cancer changes. Results The optimised knowledge-encoded model achieved slightly better classification results compared with the traditional model architecture (AUC = 0.84 vs. AUC = 0.82). We found the proposed model to achieve convergence significantly faster. Conclusions The final knowledge-encoded CNN model provided more stable learning performance and faster convergence to optimal diagnostic accuracy. The results fail to demonstrate that PI-RADS-based modelling of CNN architecture can significantly improve performance of prostate cancer recognition using mpMRI.
29

Jameel, Syed Muslim, Manzoor Ahmed Hashmani, Mobashar Rehman, and Arif Budiman. "Adaptive CNN Ensemble for Complex Multispectral Image Analysis". Complexity 2020 (April 15, 2020): 1–21. http://dx.doi.org/10.1155/2020/8361989.

Abstract:
Multispectral image classification has long been the domain of static learning with nonstationary input data assumption. The prevalence of Industrial Revolution 4.0 has led to the emergence to perform real-time analysis (classification) in an online learning scenario. Due to the complexities (spatial, spectral, dynamic data sources, and temporal inconsistencies) in online and time-series multispectral image analysis, there is a high occurrence probability in variations of spectral bands from an input stream, which deteriorates the classification performance (in terms of accuracy) or makes them ineffective. To highlight this critical issue, firstly, this study formulates the problem of new spectral band arrival as virtual concept drift. Secondly, an adaptive convolutional neural network (CNN) ensemble framework is proposed and evaluated for a new spectral band adaptation. The adaptive CNN ensemble framework consists of five (05) modules, including dynamic ensemble classifier (DEC) module. DEC uses the weighted voting ensemble approach using multiple optimized CNN instances. DEC module can increase dynamically after new spectral band arrival. The proposed ensemble approach in the DEC module (individual spectral band handling by the individual classifier of the ensemble) contributes the diversity to the ensemble system in the simple yet effective manner. The results have shown the effectiveness and proven the diversity of the proposed framework to adapt the new spectral band during online image classification. Moreover, the extensive training dataset, proper regularization, optimized hyperparameters (model and training), and more appropriate CNN architecture significantly contributed to retaining the performance accuracy.
Styles APA, Harvard, Vancouver, ISO, etc.
30

Kanimozhi, G., et P. Shanmugavadivu. « OPTIMIZED DEEP NEURAL NETWORKS ARCHITECTURE MODEL FOR BREAST CANCER DIAGNOSIS ». YMER Digital 20, no 11 (16 novembre 2021) : 161–75. http://dx.doi.org/10.37896/ymer20.11/15.

Texte intégral
Résumé :
Breast cancer has increasingly claimed the lives of women. Oncologists use digital mammograms as a viable source to detect breast cancer and classify it as benign or malignant based on severity. The performance of traditional methods for breast cancer detection could not be improved beyond a certain point due to the limitations and scope of computing. Moreover, the constrained scope of image processing techniques in developing automated breast cancer detection systems has motivated researchers to shift their focus towards Artificial Intelligence based models. Neural Networks (NN) have exhibited great scope for the development of automated medical image analysis systems with a high degree of accuracy, as an NN model enables an automated system to learn the features needed for problem-solving without being explicitly programmed. Optimization of NNs offers an additional payoff in accuracy, computational complexity, and time. As the scope and suitability of optimization methods are data-dependent, the selection of an appropriate optimization method is itself emerging as a prominent domain of research. In this paper, Deep Neural Networks (DNN) with different optimizers and learning rates were designed for the prediction and classification of breast cancer. A comparative performance analysis of five first-order gradient-based optimization techniques, namely Adaptive Gradient (Adagrad), Root Mean Square Propagation (RMSProp), Adaptive Delta (Adadelta), Adaptive Moment Estimation (Adam), and Stochastic Gradient Descent (SGD), is carried out for predicting the classification of breast cancer masses. For this purpose, the Mammographic Mass dataset was chosen for experimentation. The experimental parameters were the number of hidden layers and the learning rate, along with hyperparameter tuning. The impact of these optimizers was tested on an NN with One Hidden Layer (NN1HL), a DNN with Three Hidden Layers (DNN4HL), and a DNN with Eight Hidden Layers (DNN8HL). The experimental results showed that DNN8HL-Adam (DNN8HL-AM) produced the highest accuracy of 91% among its counterparts. This research endorses that the incorporation of optimizers in DNNs contributes to increased accuracy and an optimized architecture for automated system development using neural networks.
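The comparison described above can be reproduced in outline with a short training loop that swaps optimizers over the same architecture. The sketch below uses PyTorch with a synthetic stand-in for the Mammographic Mass features, so the network width, learning rates, and data handling are illustrative assumptions rather than the authors' exact setup.

import torch
import torch.nn as nn

def make_dnn(n_features=5, hidden_layers=8, width=32):
    # Binary classifier with a configurable number of hidden layers (e.g. 1, 3, 8).
    layers, in_dim = [], n_features
    for _ in range(hidden_layers):
        layers += [nn.Linear(in_dim, width), nn.ReLU()]
        in_dim = width
    layers += [nn.Linear(in_dim, 1)]
    return nn.Sequential(*layers)

# Synthetic stand-in for the Mammographic Mass features (5 attributes, binary label).
X, y = torch.randn(800, 5), torch.randint(0, 2, (800, 1)).float()

optimizers = {
    "SGD":      lambda p: torch.optim.SGD(p, lr=0.01),
    "Adagrad":  lambda p: torch.optim.Adagrad(p, lr=0.01),
    "RMSProp":  lambda p: torch.optim.RMSprop(p, lr=0.001),
    "Adadelta": lambda p: torch.optim.Adadelta(p, lr=1.0),
    "Adam":     lambda p: torch.optim.Adam(p, lr=0.001),
}

loss_fn = nn.BCEWithLogitsLoss()
for name, make_opt in optimizers.items():
    model = make_dnn(hidden_layers=8)
    opt = make_opt(model.parameters())
    for epoch in range(50):                       # short illustrative training run
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    acc = ((model(X) > 0).float() == y).float().mean().item()
    print(f"{name}: training accuracy {acc:.2f}")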
Styles APA, Harvard, Vancouver, ISO, etc.
31

Rendón, Eréndira, Roberto Alejo, Carlos Castorena, Frank J. Isidro-Ortega et Everardo E. Granda-Gutiérrez. « Data Sampling Methods to Deal With the Big Data Multi-Class Imbalance Problem ». Applied Sciences 10, no 4 (14 février 2020) : 1276. http://dx.doi.org/10.3390/app10041276.

Texte intégral
Résumé :
The class imbalance problem has been a hot topic in the machine learning community in recent years. Nowadays, in the era of big data and deep learning, this problem remains in force. Much work has been done to deal with the class imbalance problem, with random sampling methods (over- and under-sampling) being the most widely employed approaches. Moreover, more sophisticated sampling methods have been developed, including the Synthetic Minority Over-sampling Technique (SMOTE), and they have been combined with cleaning techniques such as Edited Nearest Neighbor or Tomek’s Links (SMOTE+ENN and SMOTE+TL, respectively). In the big data context, it is noticeable that the class imbalance problem has been addressed by adapting traditional techniques, while intelligent approaches have been relatively ignored. Thus, this work analyzes the capabilities and possibilities of heuristic sampling methods for deep learning neural networks in the big data domain, with particular attention to cleaning strategies. The study is carried out on big, multi-class imbalanced datasets obtained from hyperspectral remote sensing images. The effectiveness of a hybrid approach on these datasets is analyzed, in which the dataset is first oversampled with SMOTE and used to train an Artificial Neural Network (ANN); the ANN output is then processed with ENN to eliminate output noise, and the ANN is trained again with the resultant dataset. The obtained results suggest that the best classification outcome is achieved when the cleaning strategies are applied to the ANN output instead of only the input feature space. Consequently, the need to consider the classifier’s nature when classical class imbalance approaches are adapted to deep learning and big data scenarios is clear.
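A rough sketch of the hybrid pipeline described above, using scikit-learn and imbalanced-learn, is given below. How exactly ENN is applied to the network output is an interpretation on our part (here it filters training samples whose predicted labels disagree with their neighbours), so treat it as an assumption rather than the authors' exact procedure.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import EditedNearestNeighbours

# Synthetic imbalanced multi-class data standing in for hyperspectral pixels.
X, y = make_classification(n_samples=5000, n_features=20, n_classes=3,
                           n_informative=10, weights=[0.8, 0.15, 0.05], random_state=0)

# 1) Oversample minority classes with SMOTE and train the ANN.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
ann.fit(X_res, y_res)

# 2) Apply ENN to the ANN *output*: keep only samples whose predicted labels
#    agree with their neighbourhood, then retrain on the cleaned set.
y_pred = ann.predict(X_res)
enn = EditedNearestNeighbours()
X_clean, y_pred_clean = enn.fit_resample(X_res, y_pred)
ann.fit(X_clean, y_pred_clean)

print("Retrained on", len(y_pred_clean), "cleaned samples out of", len(y_res))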
Styles APA, Harvard, Vancouver, ISO, etc.
32

Ashraf, Muhammad Nadeem, Muhammad Hussain et Zulfiqar Habib. « Deep Red Lesion Classification for Early Screening of Diabetic Retinopathy ». Mathematics 10, no 5 (23 février 2022) : 686. http://dx.doi.org/10.3390/math10050686.

Texte intégral
Résumé :
Diabetic retinopathy (DR) is an asymptomatic, vision-threatening complication among working-age adults. To prevent blindness, a deep convolutional neural network (CNN) based diagnosis can help to classify less-discriminative and small-sized red lesions in early screening of DR patients. However, training deep models with minimal data is a challenging task. Fine-tuning through transfer learning is a useful alternative, but performance degradation, overfitting, and domain adaptation issues further demand architectural amendments to effectively train deep models. Various pre-trained CNNs were fine-tuned on an augmented set of image patches. The best-performing ResNet50 model was modified by introducing reinforced skip connections, a global max-pooling layer, and a sum-of-squared-error loss function. The performance of the modified model (DR-ResNet50) on five public datasets is found to be better than state-of-the-art methods in terms of well-known metrics. The highest scores (0.9851, 0.991, 0.991, 0.991, 0.991, 0.9939, 0.0029, 0.9879, and 0.9879) for sensitivity, specificity, AUC, accuracy, precision, F1-score, false-positive rate, Matthews correlation coefficient, and kappa coefficient were obtained within a 95% confidence interval for unseen test instances from e-Ophtha_MA. The high sensitivity and low false-positive rate demonstrate the worth of the proposed framework, which is suitable for early screening due to its performance, simplicity, and robustness.
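The architectural modifications mentioned (global max pooling and a sum-of-squared-error objective on top of a pre-trained ResNet50) can be sketched roughly as follows in PyTorch; the reinforced skip connections are not reproduced here, and the class count, pooling placement, and loss formulation are assumptions for illustration.

import torch
import torch.nn as nn
from torchvision import models

n_classes = 2  # red-lesion vs. non-lesion patches (assumed)

# Start from an ImageNet-pretrained ResNet50 backbone.
backbone = models.resnet50(weights="IMAGENET1K_V1")
backbone.avgpool = nn.AdaptiveMaxPool2d(1)            # global max pooling instead of average pooling
backbone.fc = nn.Linear(backbone.fc.in_features, n_classes)

def sse_loss(logits, targets):
    # Sum-of-squared-error between softmax scores and one-hot targets.
    probs = torch.softmax(logits, dim=1)
    one_hot = torch.eye(n_classes, device=logits.device)[targets]
    return ((probs - one_hot) ** 2).sum(dim=1).mean()

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
patches = torch.randn(8, 3, 224, 224)                 # dummy stand-in for augmented image patches
labels = torch.randint(0, n_classes, (8,))

optimizer.zero_grad()
loss = sse_loss(backbone(patches), labels)
loss.backward()
optimizer.step()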
Styles APA, Harvard, Vancouver, ISO, etc.
33

Hütten, Nils, Richard Meyes et Tobias Meisen. « Vision Transformer in Industrial Visual Inspection ». Applied Sciences 12, no 23 (23 novembre 2022) : 11981. http://dx.doi.org/10.3390/app122311981.

Texte intégral
Résumé :
Artificial intelligence as an approach to visual inspection in industrial applications has been considered for decades. Recent successes, driven by advances in deep learning, represent a potential paradigm shift and could facilitate automated visual inspection even under complex environmental conditions. Convolutional neural networks (CNNs) have been the de facto standard in deep-learning-based computer vision (CV) for the last 10 years. Recently, attention-based vision transformer architectures emerged and surpassed the performance of CNNs on benchmark datasets for regular CV tasks such as image classification, object detection, and segmentation. Nevertheless, despite their outstanding results, the application of vision transformers to real-world visual inspection is sparse. We suspect that this is likely due to the assumption that they require enormous amounts of data to be effective. In this study, we evaluate this assumption. To do so, we perform a systematic comparison of seven widely used state-of-the-art CNN and transformer-based architectures trained on three different use cases in the domain of visual damage assessment for railway freight car maintenance. We show that vision transformer models achieve at least equivalent performance to CNNs in industrial applications with sparse data available, and significantly surpass them on increasingly complex tasks.
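For readers who want to run a comparable experiment, the timm library makes it straightforward to put a pre-trained CNN and a pre-trained vision transformer behind an identical fine-tuning loop; the model names and hyperparameters below are illustrative choices, not the seven architectures used in the study.

import timm
import torch
import torch.nn as nn

n_classes = 4  # hypothetical damage categories

# Same interface for a CNN and a vision transformer keeps the comparison fair.
candidates = {
    "resnet50": timm.create_model("resnet50", pretrained=True, num_classes=n_classes),
    "vit_base": timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=n_classes),
}

images = torch.randn(2, 3, 224, 224)   # stand-in for inspection images
labels = torch.randint(0, n_classes, (2,))
loss_fn = nn.CrossEntropyLoss()

for name, model in candidates.items():
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(name, "one fine-tuning step, loss =", float(loss))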
Styles APA, Harvard, Vancouver, ISO, etc.
34

Москаленко, В’ячеслав Васильович, Альона Сергіївна Москаленко, Артем Геннадійович Коробов, Микола Олександрович Зарецький et Віктор Анатолійович Семашко. « МОДЕЛЬ ТА АЛГОРИТМ НАВЧАННЯ СИСТЕМИ ДЕТЕКТУВАННЯ МАЛОРОЗМІРНИХ ОБ’ЄКТІВ ДЛЯ МАЛОГАБАРИТНИХ БЕЗПІЛОТНИХ ЛІТАЛЬНИХ АПАРАТІВ ». RADIOELECTRONIC AND COMPUTER SYSTEMS, no 4 (20 décembre 2018) : 41–52. http://dx.doi.org/10.32620/reks.2018.4.04.

Texte intégral
Résumé :
An efficient model and learning algorithm for a small-object detection system on a compact unmanned aerial vehicle are developed for conditions of restricted computing resources and a limited volume of labeled training data. A four-stage learning algorithm for the object detector is proposed. At the first stage, the type of deep convolutional neural network and the number of low-level layers pretrained on the ImageNet dataset to be reused are selected. The second stage involves unsupervised learning of high-level convolutional sparse coding layers using a modification of growing neural gas to automatically determine the required number of neurons and provide an optimal distribution of the neurons over the data. This makes it possible to utilize unlabeled learning data to adapt the high-level feature description to the application domain. At the third stage, the output feature map is formed by concatenating feature maps from different levels of the deep convolutional neural network. The output feature map is then reduced using principal component analysis, followed by the construction of decision rules. For classification analysis of the output feature map, information-extreme classifier learning based on boosting principles is proposed. In addition, an orthogonal incremental extreme learning machine is used to build the regression model that predicts the bounding box of the detected small object. The last stage involves fine-tuning the high-level layers of the deep network using a simulated annealing metaheuristic algorithm in order to approximate the global optimum of the complex criterion of learning efficiency of the detection model. The proposed approach achieved 96% correct detection of objects on images from the open test dataset, which indicates the suitability of the model and learning algorithm for practical use. The learning dataset used to construct the model consisted of 500 unlabeled and 200 labeled learning samples.
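The third stage (concatenating feature maps from several depths of the backbone and compressing them with PCA) can be illustrated with forward hooks in PyTorch plus scikit-learn's PCA; the sparse-coding, information-extreme classifier, and simulated-annealing stages are not shown, and the layer choices and dimensions are assumptions.

import torch
from torchvision import models
from sklearn.decomposition import PCA

backbone = models.resnet18(weights="IMAGENET1K_V1").eval()
captured = {}

def hook(name):
    def _hook(module, inputs, output):
        # Pool each intermediate map to a vector so maps of different sizes can be concatenated.
        captured[name] = torch.nn.functional.adaptive_avg_pool2d(output, 1).flatten(1)
    return _hook

backbone.layer2.register_forward_hook(hook("layer2"))
backbone.layer3.register_forward_hook(hook("layer3"))
backbone.layer4.register_forward_hook(hook("layer4"))

with torch.no_grad():
    backbone(torch.randn(64, 3, 224, 224))          # dummy batch of aerial image patches

features = torch.cat([captured[k] for k in ("layer2", "layer3", "layer4")], dim=1).numpy()
reduced = PCA(n_components=32).fit_transform(features)   # compact descriptor for the decision rules
print(features.shape, "->", reduced.shape)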
Styles APA, Harvard, Vancouver, ISO, etc.
35

Zhang, Guanghua, Bin Sun, Zhaoxia Zhang, Jing Pan, Weihua Yang et Yunfang Liu. « Multi-Model Domain Adaptation for Diabetic Retinopathy Classification ». Frontiers in Physiology 13 (1 juillet 2022). http://dx.doi.org/10.3389/fphys.2022.918929.

Texte intégral
Résumé :
Diabetic retinopathy (DR) is one of the most threatening complications in diabetic patients, leading to permanent blindness without timely treatment. However, DR screening is not only a time-consuming task that requires experienced ophthalmologists but is also prone to misdiagnosis. In recent years, deep learning techniques based on convolutional neural networks have attracted increasing research attention in medical image analysis, especially for DR diagnosis. However, dataset labeling is expensive, and it is necessary for existing deep-learning-based DR detection models. In this study, a novel domain adaptation method, multi-model domain adaptation (MMDA), is developed for unsupervised DR classification in unlabeled retinal images. It exploits only the discriminative information from multiple source models, without access to any source data. In detail, we integrate a weighting mechanism into the multi-model-based domain adaptation by measuring the importance of each source domain in a novel way, and a weighted pseudo-labeling strategy is attached to the source feature extractors for training the target DR classification model. Extensive experiments are performed with four source datasets (DDR, IDRiD, Messidor, and Messidor-2) and APTOS 2019 as the target domain, showing that MMDA achieves competitive performance compared with present state-of-the-art methods for DR classification. As a novel DR detection approach, this article presents a new domain adaptation solution for medical image analysis when the source data are unavailable.
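A minimal sketch of the weighted pseudo-labeling idea is shown below: frozen source models vote on unlabeled target images, the weighted soft prediction provides pseudo-labels, and the target classifier is trained on them. The uniform source weights and all names are placeholders, since the paper's importance measure is not reproduced here.

import torch
import torch.nn as nn
from torchvision import models

def make_classifier(n_classes=5):
    m = models.resnet18(weights=None)
    m.fc = nn.Linear(m.fc.in_features, n_classes)
    return m

# Frozen source models (in practice, trained on DDR, IDRiD, Messidor, ...).
source_models = [make_classifier().eval() for _ in range(3)]
source_weights = torch.tensor([1/3, 1/3, 1/3])       # placeholder for the learned importance weights
target_model = make_classifier()
optimizer = torch.optim.Adam(target_model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

unlabeled_target_images = torch.randn(8, 3, 224, 224)

with torch.no_grad():
    probs = torch.stack([torch.softmax(m(unlabeled_target_images), dim=1) for m in source_models])
    weighted = (source_weights[:, None, None] * probs).sum(dim=0)
    pseudo_labels = weighted.argmax(dim=1)           # weighted pseudo-labels from the source ensemble

optimizer.zero_grad()
loss = loss_fn(target_model(unlabeled_target_images), pseudo_labels)
loss.backward()
optimizer.step()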
Styles APA, Harvard, Vancouver, ISO, etc.
36

Oliveira Santos, Bruno, Jónatas Valença, João P. Costeira et Eduardo Julio. « Domain adversarial training for classification of cracking in images of concrete surfaces ». AI in Civil Engineering 1, no 1 (28 décembre 2022). http://dx.doi.org/10.1007/s43503-022-00008-6.

Texte intégral
Résumé :
Abstract The development of automatic methods to recognize cracks in concrete surfaces has been under focus in recent years, first through classical computer vision methods and more recently through convolutional neural networks, which are delivering promising results. Challenges still persist in crack recognition, namely due to the confusion added by the myriad of elements commonly found on concrete surfaces. These methods could deal robustly with such elements if correspondingly heterogeneous datasets were available. Even so, this would be a cumbersome methodology, since training would be needed for each particular case and models would be case dependent. Thus, efforts from the scientific community are focusing on generalizing neural network models to achieve high performance on images from different domains, slightly different from those on which they were effectively trained. The generalization of networks can be achieved by domain adaptation techniques at the training stage. Domain adaptation enables finding a feature space in which features from both domains are invariant so that classes become separable. The work presented here proposes the DA-Crack method, a domain adversarial training method to generalize a neural network for recognizing cracks in images of concrete surfaces. The domain adversarial method uses a convolutional extractor followed by a classifier and a discriminator, and relies on two datasets: a labeled source dataset and a small unlabeled target dataset. The classifier is responsible for classifying randomly chosen images, while the discriminator is dedicated to uncovering to which dataset each image belongs. Backpropagation from the discriminator reverses the gradient used to update the extractor. This counteracts the convergence promoted by the updates backpropagated from the classifier, thus generalizing the extractor and enabling crack recognition in images from both the source and target datasets. Results show that the DA-Crack training method improved the accuracy of crack classification on images from the target dataset by 54 percentage points, while accuracy on the source dataset remained unaffected.
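The gradient-reversal mechanism at the heart of domain adversarial training can be written compactly in PyTorch with a custom autograd Function, as in the sketch below. This is a generic DANN-style layout; the actual DA-Crack extractor, classifier, and discriminator architectures are not reproduced.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; multiplies the gradient by -lambda in the backward pass.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

extractor = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
classifier = nn.Linear(16, 2)        # crack / no crack
discriminator = nn.Linear(16, 2)     # source / target domain

src = torch.randn(8, 3, 64, 64); src_labels = torch.randint(0, 2, (8,))
tgt = torch.randn(8, 3, 64, 64)

feat_src, feat_tgt = extractor(src), extractor(tgt)
cls_loss = nn.functional.cross_entropy(classifier(feat_src), src_labels)

domain_feats = torch.cat([feat_src, feat_tgt])
domain_labels = torch.cat([torch.zeros(8, dtype=torch.long), torch.ones(8, dtype=torch.long)])
dom_loss = nn.functional.cross_entropy(discriminator(GradReverse.apply(domain_feats, 1.0)), domain_labels)

# The reversed gradient pushes the extractor to confuse the discriminator while helping the classifier.
(cls_loss + dom_loss).backward()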
Styles APA, Harvard, Vancouver, ISO, etc.
37

Shankar, Shiv, et Sunita Sarawagi. « Labeled Memory Networks for Online Model Adaptation ». Proceedings of the AAAI Conference on Artificial Intelligence 32, no 1 (29 avril 2018). http://dx.doi.org/10.1609/aaai.v32i1.11781.

Texte intégral
Résumé :
Augmenting a neural network with memory that can grow without growing the number of trained parameters is a recent powerful concept with many exciting applications. In this paper, we establish its potential for online adaptation of a batch-trained neural network to domain-relevant labeled data at deployment time. We present the design of the Labeled Memory Network (LMN), a new memory-augmented neural network (MANN) for fast online model adaptation. We highlight three key features of LMNs. First, LMNs treat memory as a second boosted stage following the trained network, thereby allowing the memory and network to play complementary roles. Unlike all existing MANNs that write to memory at every cycle, LMNs provide better memory utilization by writing only labeled data with non-zero loss. Second, LMNs organize the memory with the discrete class label as the primary key, unlike existing MANNs where the key is a real vector derived from the input. This simple, yet surprisingly unexplored, alternative organization safeguards against the catastrophic forgetting of rare labels that current LRU-based MANNs are subject to. Finally, LMNs model the evolving expertise of the memory and the network using an RNN to determine their respective weights online. We evaluate online model adaptation strategies on five sequence prediction tasks, an image classification task, and two language modeling tasks. We show that LMNs are better than other MANNs designed for meta-learning. We also found them to be more accurate and faster than state-of-the-art methods of retuning model parameters for adapting to domain-specific labeled data.
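A toy illustration of the label-keyed, loss-gated write policy is given below; the real LMN additionally combines memory and network predictions with an RNN-weighted boosting scheme, which is omitted here, and all structures are simplified assumptions.

import torch

class LabelKeyedMemory:
    # Memory organized by discrete class label; writes happen only for labeled
    # examples on which the base network still incurs a loss.
    def __init__(self, slots_per_label=20):
        self.slots_per_label = slots_per_label
        self.store = {}                                  # label -> list of embeddings

    def maybe_write(self, embedding, label, loss, threshold=0.0):
        if loss <= threshold:
            return                                       # confident example: skip the write
        slots = self.store.setdefault(int(label), [])
        if len(slots) >= self.slots_per_label:
            slots.pop(0)                                 # evict within the label, not globally (protects rare labels)
        slots.append(embedding.detach())

    def predict(self, embedding):
        # Nearest-prototype lookup over per-label means.
        scores = {lab: -torch.dist(embedding, torch.stack(v).mean(0)) for lab, v in self.store.items()}
        return max(scores, key=scores.get) if scores else None

memory = LabelKeyedMemory()
emb, label = torch.randn(16), 3
memory.maybe_write(emb, label, loss=0.7)                 # non-zero loss -> written under key 3
print(memory.predict(torch.randn(16)))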
Styles APA, Harvard, Vancouver, ISO, etc.
38

Garea, Alberto S., Dora B. Heras, Francisco Argüello et Begüm Demir. « A hybrid CUDA, OpenMP, and MPI parallel TCA-based domain adaptation for classification of very high-resolution remote sensing images ». Journal of Supercomputing, 29 novembre 2022. http://dx.doi.org/10.1007/s11227-022-04961-y.

Texte intégral
Résumé :
Abstract Domain Adaptation (DA) is a technique that aims at extracting information from a labeled remote sensing image to allow classifying a different image obtained by the same sensor but at a different geographical location. This is a very complex problem from the computational point of view, especially due to the very high resolution of multispectral images. TCANet is a deep learning neural network for DA classification problems that has proven very accurate at solving them. TCANet consists of several stages based on the application of convolutional filters obtained through Transfer Component Analysis (TCA) computed over the input images. In contrast to the usual CNN-based networks, it does not require backpropagation training, as the convolutional filters are computed directly from the TCA transform applied to the training samples. In this paper, a hybrid parallel TCA-based domain adaptation technique for the classification of very high-resolution multispectral images is presented. It is designed for efficient execution on a multi-node computer by using the Message Passing Interface (MPI), exploiting the available Graphical Processing Units (GPUs), and making efficient use of each multicore node by using Open Multi-Processing (OpenMP). As a result, a DA technique that is accurate from the point of view of classification and achieves high speedup values over the sequential version is obtained, increasing the applicability of the technique to real problems.
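The parallel decomposition described above typically follows a simple pattern: an MPI process per node scatters image tiles, each rank processes its tiles locally (on a GPU or with OpenMP threads inside a compiled kernel), and results are gathered back. The mpi4py sketch below shows only this data-distribution skeleton, with the per-tile TCA filtering replaced by a placeholder; it is not the authors' implementation.

# Run with, e.g.: mpirun -n 4 python tca_scatter.py  (hypothetical script name)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    image = np.random.rand(1024, 1024, 8).astype(np.float32)   # stand-in multispectral image
    tiles = np.array_split(image, size, axis=0)                # one horizontal strip per rank
else:
    tiles = None

tile = comm.scatter(tiles, root=0)

def apply_tca_filters(patch):
    # Placeholder for the TCA-derived convolutional filtering done on the GPU / with OpenMP.
    return patch.mean(axis=2)

local_result = apply_tca_filters(tile)
results = comm.gather(local_result, root=0)

if rank == 0:
    classified = np.concatenate(results, axis=0)
    print("assembled result:", classified.shape)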
Styles APA, Harvard, Vancouver, ISO, etc.
39

Shen, Jian, Yanru Qu, Weinan Zhang et Yong Yu. « Wasserstein Distance Guided Representation Learning for Domain Adaptation ». Proceedings of the AAAI Conference on Artificial Intelligence 32, no 1 (29 avril 2018). http://dx.doi.org/10.1609/aaai.v32i1.11784.

Texte intégral
Résumé :
Domain adaptation aims at generalizing a high-performance learner on a target domain via utilizing the knowledge distilled from a source domain which has a different but related data distribution. One solution to domain adaptation is to learn domain invariant feature representations while the learned representations should also be discriminative in prediction. To learn such representations, domain adaptation frameworks usually include a domain invariant representation learning approach to measure and reduce the domain discrepancy, as well as a discriminator for classification. Inspired by Wasserstein GAN, in this paper we propose a novel approach to learn domain invariant feature representations, namely Wasserstein Distance Guided Representation Learning (WDGRL). WDGRL utilizes a neural network, denoted by the domain critic, to estimate empirical Wasserstein distance between the source and target samples and optimizes the feature extractor network to minimize the estimated Wasserstein distance in an adversarial manner. The theoretical advantages of Wasserstein distance for domain adaptation lie in its gradient property and promising generalization bound. Empirical studies on common sentiment and image classification adaptation datasets demonstrate that our proposed WDGRL outperforms the state-of-the-art domain invariant representation learning approaches.
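The core of WDGRL is the adversarial estimation of the empirical Wasserstein distance by a domain critic; a condensed PyTorch sketch of that inner/outer loop, with a gradient penalty to keep the critic approximately 1-Lipschitz, is given below. Architectures, penalty weight, and step counts are illustrative assumptions.

import torch
import torch.nn as nn

feature_extractor = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 32))
critic = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 1))
classifier = nn.Linear(32, 2)

opt_feat = torch.optim.Adam(list(feature_extractor.parameters()) + list(classifier.parameters()), lr=1e-4)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-4)

xs, ys = torch.randn(32, 100), torch.randint(0, 2, (32,))    # labeled source batch
xt = torch.randn(32, 100)                                    # unlabeled target batch

def wasserstein_and_penalty(hs, ht):
    wd = critic(hs).mean() - critic(ht).mean()               # empirical Wasserstein estimate
    alpha = torch.rand(hs.size(0), 1)
    interpolates = (alpha * hs + (1 - alpha) * ht).requires_grad_(True)
    grads = torch.autograd.grad(critic(interpolates).sum(), interpolates, create_graph=True)[0]
    penalty = ((grads.norm(2, dim=1) - 1) ** 2).mean()
    return wd, penalty

# Inner loop: train the critic to maximize the Wasserstein estimate (minus the penalty).
for _ in range(5):
    hs, ht = feature_extractor(xs).detach(), feature_extractor(xt).detach()
    wd, penalty = wasserstein_and_penalty(hs, ht)
    opt_critic.zero_grad(); (-wd + 10.0 * penalty).backward(); opt_critic.step()

# Outer step: train the extractor to minimize classification loss plus the estimated distance.
hs, ht = feature_extractor(xs), feature_extractor(xt)
cls_loss = nn.functional.cross_entropy(classifier(hs), ys)
wd = critic(hs).mean() - critic(ht).mean()
opt_feat.zero_grad(); (cls_loss + 0.1 * wd).backward(); opt_feat.step()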
Styles APA, Harvard, Vancouver, ISO, etc.
40

Zhai, Zhiwei, Sanne G. M. van Velzen, Nikolas Lessmann, Nils Planken, Tim Leiner et Ivana Išgum. « Learning coronary artery calcium scoring in coronary CTA from non-contrast CT using unsupervised domain adaptation ». Frontiers in Cardiovascular Medicine 9 (12 septembre 2022). http://dx.doi.org/10.3389/fcvm.2022.981901.

Texte intégral
Résumé :
Deep learning methods have demonstrated the ability to perform accurate coronary artery calcium (CAC) scoring. However, these methods require large and representative training data, hampering applicability to diverse CT scans showing the heart and the coronary arteries. Training methods that accurately score CAC in cross-domain settings remains challenging. To address this, we present an unsupervised domain adaptation method that learns to perform CAC scoring in coronary CT angiography (CCTA) from non-contrast CT (NCCT). To address the domain shift between the NCCT (source) domain and the CCTA (target) domain, feature distributions are aligned between the two domains using adversarial learning. A CAC scoring convolutional neural network is divided into a feature generator that maps input images to features in the latent space and a classifier that estimates predictions from the extracted features. For adversarial learning, a discriminator is used to distinguish the features between source and target domains. Hence, the feature generator aims to extract features with aligned distributions to fool the discriminator. The network is trained with adversarial loss as the objective function and a classification loss on the source domain as a constraint for adversarial learning. In the experiments, three data sets were used. The network was trained with 1,687 labeled chest NCCT scans from the National Lung Screening Trial. Furthermore, 200 labeled cardiac NCCT scans and 200 unlabeled CCTA scans were used to train the generator and the discriminator for unsupervised domain adaptation. Finally, a data set containing 313 manually labeled CCTA scans was used for testing. Directly applying the CAC scoring network trained on NCCT to CCTA led to a sensitivity of 0.41 and an average false positive volume of 140 mm³/scan. The proposed method improved the sensitivity to 0.80 and reduced the average false positive volume to 20 mm³/scan. The results indicate that the unsupervised domain adaptation approach enables automatic CAC scoring in contrast-enhanced CT while learning from a large and diverse set of CT scans without contrast. This may allow for better utilization of existing annotated data sets and extend the applicability of automatic CAC scoring to contrast-enhanced CT scans without the need for additional manual annotations. The code is publicly available at https://github.com/qurAI-amsterdam/CACscoringUsingDomainAdaptation.
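The generator/discriminator training signal described above can be sketched as an alternating update: the discriminator learns to tell source features from target features, and the generator learns to fool it while still satisfying the supervised loss on the source domain. The 3D networks are replaced by tiny placeholders below, so shapes, losses, and schedules are assumptions only.

import torch
import torch.nn as nn

generator = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1), nn.Flatten())
classifier = nn.Linear(8, 2)           # calcium / background (placeholder for the scoring head)
discriminator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(list(generator.parameters()) + list(classifier.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

ncct, ncct_labels = torch.randn(4, 1, 16, 16, 16), torch.randint(0, 2, (4,))   # labeled source patches
ccta = torch.randn(4, 1, 16, 16, 16)                                           # unlabeled target patches

# 1) Discriminator step: source features -> 1, target features -> 0.
with torch.no_grad():
    f_src, f_tgt = generator(ncct), generator(ccta)
d_loss = bce(discriminator(f_src), torch.ones(4, 1)) + bce(discriminator(f_tgt), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# 2) Generator step: supervised loss on the source plus an adversarial term that
#    pushes target features to look like source features to the discriminator.
f_src, f_tgt = generator(ncct), generator(ccta)
cls_loss = nn.functional.cross_entropy(classifier(f_src), ncct_labels)
adv_loss = bce(discriminator(f_tgt), torch.ones(4, 1))
opt_g.zero_grad(); (cls_loss + 0.1 * adv_loss).backward(); opt_g.step()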
Styles APA, Harvard, Vancouver, ISO, etc.
41

Matskevych, Alex, Adrian Wolny, Constantin Pape et Anna Kreshuk. « From Shallow to Deep : Exploiting Feature-Based Classifiers for Domain Adaptation in Semantic Segmentation ». Frontiers in Computer Science 4 (3 mars 2022). http://dx.doi.org/10.3389/fcomp.2022.805166.

Texte intégral
Résumé :
The remarkable performance of Convolutional Neural Networks on image segmentation tasks comes at the cost of a large amount of pixelwise annotated images that have to be segmented for training. In contrast, feature-based learning methods, such as the Random Forest, require little training data but rarely reach the segmentation accuracy of CNNs. This work bridges the two approaches in a transfer learning setting. We show that a CNN can be trained to correct the errors of the Random Forest in the source domain and then be applied to correct such errors in the target domain without retraining, as the domain shift between the Random Forest predictions of the two domains is much smaller than the shift between the raw data. By leveraging a few brushstrokes as annotations in the target domain, the method can deliver segmentations that are sufficiently accurate to act as pseudo-labels for target-domain CNN training. We demonstrate the performance of the method on several datasets with the challenging tasks of mitochondria, membrane, and nuclear segmentation. It yields excellent performance compared to microscopy domain adaptation baselines, especially when a significant domain shift is involved.
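A compact way to picture the shallow-to-deep idea: a Random Forest produces per-pixel probability maps from hand-crafted features, and a small CNN is trained to map those probability maps to corrected segmentations. The sketch below uses scikit-learn and PyTorch with random stand-in data; the feature choices and network size are assumptions.

import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

# 1) Shallow stage: Random Forest on per-pixel features (here random stand-ins).
h = w = 64
pixel_features = np.random.rand(h * w, 10)
pixel_labels = np.random.randint(0, 2, h * w)         # sparse annotations in practice
rf = RandomForestClassifier(n_estimators=50).fit(pixel_features, pixel_labels)
prob_map = rf.predict_proba(pixel_features)[:, 1].reshape(1, 1, h, w).astype(np.float32)

# 2) Deep stage: a small CNN learns to correct the RF prediction map.
corrector = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
target_mask = torch.randint(0, 2, (1, 1, h, w)).float()   # ground truth in the source domain
optimizer = torch.optim.Adam(corrector.parameters(), lr=1e-3)

optimizer.zero_grad()
loss = nn.functional.binary_cross_entropy_with_logits(corrector(torch.from_numpy(prob_map)), target_mask)
loss.backward()
optimizer.step()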
Styles APA, Harvard, Vancouver, ISO, etc.
42

Wu, Yinhao, Bin Chen, An Zeng, Dan Pan, Ruixuan Wang et Shen Zhao. « Skin Cancer Classification With Deep Learning : A Systematic Review ». Frontiers in Oncology 12 (13 juillet 2022). http://dx.doi.org/10.3389/fonc.2022.893972.

Texte intégral
Résumé :
Skin cancer is one of the most dangerous diseases in the world. Correctly classifying skin lesions at an early stage could aid clinical decision-making by providing an accurate disease diagnosis, potentially increasing the chances of cure before cancer spreads. However, achieving automatic skin cancer classification is difficult because the majority of skin disease images used for training are imbalanced and in short supply; meanwhile, the model’s cross-domain adaptability and robustness are also critical challenges. Recently, many deep learning-based methods have been widely used in skin cancer classification to solve the above issues and achieve satisfactory results. Nonetheless, reviews that include the abovementioned frontier problems in skin cancer classification are still scarce. Therefore, in this article, we provide a comprehensive overview of the latest deep learning-based algorithms for skin cancer classification. We begin with an overview of three types of dermatological images, followed by a list of publicly available datasets relating to skin cancers. After that, we review the successful applications of typical convolutional neural networks for skin cancer classification. As a highlight of this paper, we next summarize several frontier problems, including data imbalance, data limitation, domain adaptation, model robustness, and model efficiency, followed by corresponding solutions in the skin cancer classification task. Finally, by summarizing different deep learning-based methods to solve the frontier challenges in skin cancer classification, we can conclude that the general development direction of these approaches is structured, lightweight, and multimodal. Besides, for readers’ convenience, we have summarized our findings in figures and tables. Considering the growing popularity of deep learning, there are still many issues to overcome as well as chances to pursue in the future.
Styles APA, Harvard, Vancouver, ISO, etc.
43

Ranno, Nathan, et Dong Si. « Neural representations of cryo-EM maps and a graph-based interpretation ». BMC Bioinformatics 23, S3 (28 septembre 2022). http://dx.doi.org/10.1186/s12859-022-04942-1.

Texte intégral
Résumé :
Abstract Background Advances in imagery at atomic and near-atomic resolution, such as cryogenic electron microscopy (cryo-EM), have led to an influx of high resolution images of proteins and other macromolecular structures to data banks worldwide. Producing a protein structure from the discrete voxel grid data of cryo-EM maps involves interpolation into the continuous spatial domain. We present a novel data format called the neural cryo-EM map, which is formed from a set of neural networks that accurately parameterize cryo-EM maps and provide native, spatially continuous data for density and gradient. As a case study of this data format, we create graph-based interpretations of high resolution experimental cryo-EM maps. Results Normalized cryo-EM map values interpolated using the non-linear neural cryo-EM format are more accurate, consistently scoring less than 0.01 mean absolute error, than a conventional tri-linear interpolation, which scores up to 0.12 mean absolute error. Our graph-based interpretations of 115 experimental cryo-EM maps from 1.15 to 4.0 Å resolution provide high coverage of the underlying amino acid residue locations, while accuracy of nodes is correlated with resolution. The nodes of graphs created from atomic resolution maps (higher than 1.6 Å) provide greater than 99% residue coverage as well as 85% full atomic coverage with a mean of 0.19 Å root mean squared deviation. Other graphs have a mean 84% residue coverage with less specificity of the nodes due to experimental noise and differences of density context at lower resolutions. Conclusions The fully continuous and differentiable nature of the neural cryo-EM map enables the adaptation of the voxel data to alternative data formats, such as a graph that characterizes the atomic locations of the underlying protein or macromolecular structure. Graphs created from atomic resolution maps are superior in finding atom locations and may serve as input to predictive residue classification and structure segmentation methods. This work may be generalized to transform any 3D grid-based data format into non-linear, continuous, and differentiable format for downstream geometric deep learning applications.
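The notion of a "neural cryo-EM map" is essentially an implicit neural representation: a small coordinate network is fitted to the voxel grid and can then be queried at arbitrary continuous positions, with density gradients available through automatic differentiation. Below is a generic PyTorch sketch of that idea, not the paper's actual architecture or training recipe.

import torch
import torch.nn as nn

# Toy voxel grid standing in for a cryo-EM density map.
D = 32
grid = torch.rand(D, D, D)

# Coordinate network: maps a normalized (x, y, z) position to a density value.
net = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

coords = torch.stack(torch.meshgrid(*[torch.linspace(0, 1, D)] * 3, indexing="ij"), dim=-1).reshape(-1, 3)
densities = grid.reshape(-1, 1)

for step in range(200):                                 # fit the network to the discrete map
    idx = torch.randint(0, coords.shape[0], (4096,))
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(net(coords[idx]), densities[idx])
    loss.backward()
    optimizer.step()

# Continuous queries: density and spatial gradient at an off-grid position.
p = torch.tensor([[0.37, 0.52, 0.81]], requires_grad=True)
density = net(p)
grad = torch.autograd.grad(density.sum(), p)[0]         # native gradient for graph construction
print(float(density), grad)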
Styles APA, Harvard, Vancouver, ISO, etc.
44

Goëau, Hervé, Pierre Bonnet et Alexis Joly. « AI-based Identification of Plant Photographs from Herbarium Specimens ». Biodiversity Information Science and Standards 5 (31 août 2021). http://dx.doi.org/10.3897/biss.5.73751.

Texte intégral
Résumé :
Automated plant identification has recently improved significantly due to advances in deep learning and the availability of large amounts of field photos. As an illustration, the classification accuracy of 10K species measured in the LifeCLEF challenge (Goëau et al. 2018) reached 90%, very close to that of human experts. However, the profusion of field images only concerns a few tens of thousands of species, mainly located in North America and Western Europe. Conversely, the richest regions in terms of biodiversity, such as tropical countries, suffer from a shortage of training data (Pitman 2021). Consequently, the identification performance of the most advanced models on the flora of these regions is much lower (Goëau et al. 2019). Nevertheless, for several centuries, botanists have systematically collected, catalogued, and stored plant specimens in herbaria. Considerable recent efforts by the biodiversity informatics community, such as DiSSCo (Addink et al. 2018) and iDigBio (Matsunaga et al. 2013), have made millions of digitized specimens from these collections available online. A key question is therefore whether these digitized specimens could be used to improve the identification performance of species for which we have very few (if any) photos. However, this is a very difficult problem from a machine learning point of view. The visual appearance of a herbarium specimen is actually very different from a field photograph because the specimens are dried and crushed on a herbarium sheet before being digitized (Fig. 1). To advance research on this topic, we built a large dataset that we shared as one of the challenges of the LifeCLEF 2020 (Goëau et al. 2020) and 2021 evaluation campaigns (Goëau et al. 2021). It includes more than 320K herbarium specimens collected mostly from the Guiana Shield and the Northern Amazon Rainforest, focusing on about 1K plant species of the French Guiana flora. A valuable asset of this collection is that some of the specimens are accompanied by a few photos of the same specimen, allowing for more precise machine learning. In addition to this training data, we also built a test set for model evaluation, composed of 3,186 field photos collected by two of the best experts on Guyanese flora. Based on this dataset, about ten research teams have developed deep learning methods to address the challenge (including the authors of this abstract as the organizing team). A detailed description of these methods can be found in the technical notes written by the participating teams (Goëau et al. 2020, Goëau et al. 2021). The methods can be divided into two categories: those based on classical convolutional neural networks (CNN) trained simply by mixing digitized specimens and photos, and those based on advanced domain adaptation techniques with the objective of learning a joint representation space between field and herbarium representations. The domain adaptation methods themselves were of two types: those based on adversarial regularization (Motiian et al. 2017) to force herbarium specimens and photos to have the same representations, and those based on metric learning to maximize inter-species distances and minimize intra-species distances in the representation space. In Table 1, we report the results achieved by the different methods evaluated during the 2020 edition of the challenge. The evaluation metric used is the mean reciprocal rank (MRR), i.e., the average of the inverse of the rank of the correct species in the list of the predicted species. In addition to this main score, a second MRR score is computed on a subset of the test set composed of the most difficult species, i.e., the ones that are the least frequently photographed in the field. The main outcomes we can derive from these results are the following: Classical deep learning models fail to identify plant photos from digitized herbarium specimens. The best classical CNN trained on the provided data resulted in a very low MRR score (0.011). Even with the use of additional training data (e.g. photos and digitized herbarium specimens from GBIF), the MRR score remains very low (0.039). Domain adaptation methods provide significant improvement, but the task remains challenging. The best MRR score (0.180) was achieved by using adversarial regularization (FSDA, Motiian et al. 2017). This is much better than the classical CNN models, but there is still a lot of progress to be made to reach the performance of a truly functional identification system (the MRR score on classical plant identification tasks can be up to 0.9). No method fits all. As shown in Table 1, the metric learning method has a significantly better MRR score on the most difficult species (0.107). However, the performance of this method on the species with more photos is much lower than that of the adversarial technique. In 2021, the challenge was run again but with additional information provided to train the models, i.e., species traits (plant life form, woodiness, and plant growth form). The use of the species traits allowed a slight performance improvement of the best adversarial adaptation method (MRR of 0.198). In conclusion, the results of the experiments conducted are promising and demonstrate the potential interest of digitized herbarium data for automated plant identification. However, progress is still needed before integrating this type of approach into production applications.
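The metric-learning strategy mentioned above (pulling together field photos and herbarium sheets of the same species while pushing apart different species) can be sketched with a triplet loss over a shared encoder, as below. The encoder, margin, and sampling scheme are generic assumptions, not the participants' submissions.

import torch
import torch.nn as nn
from torchvision import models

encoder = models.resnet18(weights="IMAGENET1K_V1")
encoder.fc = nn.Linear(encoder.fc.in_features, 128)          # shared embedding for both domains
triplet = nn.TripletMarginLoss(margin=0.5)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# Anchor: field photo; positive: herbarium sheet of the same species;
# negative: herbarium sheet of a different species (dummy tensors here).
photo_anchor = torch.randn(8, 3, 224, 224)
herbarium_pos = torch.randn(8, 3, 224, 224)
herbarium_neg = torch.randn(8, 3, 224, 224)

optimizer.zero_grad()
loss = triplet(encoder(photo_anchor), encoder(herbarium_pos), encoder(herbarium_neg))
loss.backward()
optimizer.step()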
Styles APA, Harvard, Vancouver, ISO, etc.
45

Dieter, Michael. « Amazon Noir ». M/C Journal 10, no 5 (1 octobre 2007). http://dx.doi.org/10.5204/mcj.2709.

Texte intégral
Résumé :
There is no diagram that does not also include, besides the points it connects up, certain relatively free or unbounded points, points of creativity, change and resistance, and it is perhaps with these that we ought to begin in order to understand the whole picture. (Deleuze, “Foucault” 37) Monty Cantsin: Why do we use a pervert software robot to exploit our collective consensual mind? Letitia: Because we want the thief to be a digital entity. Monty Cantsin: But isn’t this really blasphemic? Letitia: Yes, but god – in our case a meta-cocktail of authorship and copyright – can not be trusted anymore. (Amazon Noir, “Dialogue”) In 2006, some 3,000 digital copies of books were silently “stolen” from online retailer Amazon.com by targeting vulnerabilities in the “Search inside the Book” feature from the company’s website. Over several weeks, between July and October, a specially designed software program bombarded the Search Inside!™ interface with multiple requests, assembling full versions of texts and distributing them across peer-to-peer networks (P2P). Rather than a purely malicious and anonymous hack, however, the “heist” was publicised as a tactical media performance, Amazon Noir, produced by self-proclaimed super-villains Paolo Cirio, Alessandro Ludovico, and Ubermorgen.com. While controversially directed at highlighting the infrastructures that materially enforce property rights and access to knowledge online, the exploit additionally interrogated its own interventionist status as theoretically and politically ambiguous. That the “thief” was represented as a digital entity or machinic process (operating on the very terrain where exchange is differentiated) and the emergent act of “piracy” was fictionalised through the genre of noir conveys something of the indeterminacy or immensurability of the event. In this short article, I discuss some political aspects of intellectual property in relation to the complexities of Amazon Noir, particularly in the context of control, technological action, and discourses of freedom. Software, Piracy As a force of distribution, the Internet is continually subject to controversies concerning flows and permutations of agency. While often directed by discourses cast in terms of either radical autonomy or control, the technical constitution of these digital systems is more regularly a case of establishing structures of operation, codified rules, or conditions of possibility; that is, of guiding social processes and relations (McKenzie, “Cutting Code” 1-19). Software, as a medium through which such communication unfolds and becomes organised, is difficult to conceptualise as a result of being so event-orientated. There lies a complicated logic of contingency and calculation at its centre, a dimension exacerbated by the global scale of informational networks, where the inability to comprehend an environment that exceeds the limits of individual experience is frequently expressed through desires, anxieties, paranoia. Unsurprisingly, cautionary accounts and moral panics on identity theft, email fraud, pornography, surveillance, hackers, and computer viruses are as commonplace as those narratives advocating user interactivity. 
When analysing digital systems, cultural theory often struggles to describe forces that dictate movement and relations between disparate entities composed by code, an aspect heightened by the intensive movement of informational networks where differences are worked out through the constant exposure to unpredictability and chance (Terranova, “Communication beyond Meaning”). Such volatility partially explains the recent turn to distribution in media theory, as once durable networks for constructing economic difference – organising information in space and time (“at a distance”), accelerating or delaying its delivery – appear contingent, unstable, or consistently irregular (Cubitt 194). Attributing actions to users, programmers, or the software itself is a difficult task when faced with these states of co-emergence, especially in the context of sharing knowledge and distributing media content. Exchanges between corporate entities, mainstream media, popular cultural producers, and legal institutions over P2P networks represent an ongoing controversy in this respect, with numerous stakeholders competing between investments in property, innovation, piracy, and publics. Beginning to understand this problematic landscape is an urgent task, especially in relation to the technological dynamics that organised and propel such antagonisms. In the influential fragment, “Postscript on the Societies of Control,” Gilles Deleuze describes the historical passage from modern forms of organised enclosure (the prison, clinic, factory) to the contemporary arrangement of relational apparatuses and open systems as being materially provoked by – but not limited to – the mass deployment of networked digital technologies. In his analysis, the disciplinary mode most famously described by Foucault is spatially extended to informational systems based on code and flexibility. According to Deleuze, these cybernetic machines are connected into apparatuses that aim for intrusive monitoring: “in a control-based system nothing’s left alone for long” (“Control and Becoming” 175). Such a constant networking of behaviour is described as a shift from “molds” to “modulation,” where controls become “a self-transmuting molding changing from one moment to the next, or like a sieve whose mesh varies from one point to another” (“Postscript” 179). Accordingly, the crisis underpinning civil institutions is consistent with the generalisation of disciplinary logics across social space, forming an intensive modulation of everyday life, but one ambiguously associated with socio-technical ensembles. The precise dynamics of this epistemic shift are significant in terms of political agency: while control implies an arrangement capable of absorbing massive contingency, a series of complex instabilities actually mark its operation. Noise, viral contamination, and piracy are identified as key points of discontinuity; they appear as divisions or “errors” that force change by promoting indeterminacies in a system that would otherwise appear infinitely calculable, programmable, and predictable. The rendering of piracy as a tactic of resistance, a technique capable of levelling out the uneven economic field of global capitalism, has become a predictable catch-cry for political activists. 
In their analysis of multitude, for instance, Antonio Negri and Michael Hardt describe the contradictions of post-Fordist production as conjuring forth a tendency for labour to “become common.” That is, as productivity depends on flexibility, communication, and cognitive skills, directed by the cultivation of an ideal entrepreneurial or flexible subject, the greater the possibilities for self-organised forms of living that significantly challenge its operation. In this case, intellectual property exemplifies such a spiralling paradoxical logic, since “the infinite reproducibility central to these immaterial forms of property directly undermines any such construction of scarcity” (Hardt and Negri 180). The implications of the filesharing program Napster, accordingly, are read as not merely directed toward theft, but in relation to the private character of the property itself; a kind of social piracy is perpetuated that is viewed as radically recomposing social resources and relations. Ravi Sundaram, a co-founder of the Sarai new media initiative in Delhi, has meanwhile drawn attention to the existence of “pirate modernities” capable of being actualised when individuals or local groups gain illegitimate access to distributive media technologies; these are worlds of “innovation and non-legality,” of electronic survival strategies that partake in cultures of dispersal and escape simple classification (94). Meanwhile, pirate entrepreneurs Magnus Eriksson and Rasmus Fleische – associated with the notorious Piratbyrn – have promoted the bleeding away of Hollywood profits through fully deployed P2P networks, with the intention of pushing filesharing dynamics to an extreme in order to radicalise the potential for social change (“Copies and Context”). From an aesthetic perspective, such activist theories are complemented by the affective register of appropriation art, a movement broadly conceived in terms of antagonistically liberating knowledge from the confines of intellectual property: “those who pirate and hijack owned material, attempting to free information, art, film, and music – the rhetoric of our cultural life – from what they see as the prison of private ownership” (Harold 114). These “unruly” escape attempts are pursued through various modes of engagement, from experimental performances with legislative infrastructures (i.e. Kembrew McLeod’s patenting of the phrase “freedom of expression”) to musical remix projects, such as the work of Negativland, John Oswald, RTMark, Detritus, Illegal Art, and the Evolution Control Committee. Amazon Noir, while similarly engaging with questions of ownership, is distinguished by specifically targeting information communication systems and finding “niches” or gaps between overlapping networks of control and economic governance. Hans Bernhard and Lizvlx from Ubermorgen.com (meaning ‘Day after Tomorrow,’ or ‘Super-Tomorrow’) actually describe their work as “research-based”: “we not are opportunistic, money-driven or success-driven, our central motivation is to gain as much information as possible as fast as possible as chaotic as possible and to redistribute this information via digital channels” (“Interview with Ubermorgen”). This has led to experiments like Google Will Eat Itself (2005) and the construction of the automated software thief against Amazon.com, as process-based explorations of technological action. 
Agency, Distribution Deleuze’s “postscript” on control has proven massively influential for new media art by introducing a series of key questions on power (or desire) and digital networks. As a social diagram, however, control should be understood as a partial rather than totalising map of relations, referring to the augmentation of disciplinary power in specific technological settings. While control is a conceptual regime that refers to open-ended terrains beyond the architectural locales of enclosure, implying a move toward informational networks, data solicitation, and cybernetic feedback, there remains a peculiar contingent dimension to its limits. For example, software code is typically designed to remain cycling until user input is provided. There is a specifically immanent and localised quality to its actions that might be taken as exemplary of control as a continuously modulating affective materialism. The outcome is a heightened sense of bounded emergencies that are either flattened out or absorbed through reconstitution; however, these are never linear gestures of containment. As Tiziana Terranova observes, control operates through multilayered mechanisms of order and organisation: “messy local assemblages and compositions, subjective and machinic, characterised by different types of psychic investments, that cannot be the subject of normative, pre-made political judgments, but which need to be thought anew again and again, each time, in specific dynamic compositions” (“Of Sense and Sensibility” 34). This event-orientated vitality accounts for the political ambitions of tactical media as opening out communication channels through selective “transversal” targeting. Amazon Noir, for that reason, is pitched specifically against the material processes of communication. The system used to harvest the content from “Search inside the Book” is described as “robot-perversion-technology,” based on a network of four servers around the globe, each with a specific function: one located in the United States that retrieved (or “sucked”) the books from the site, one in Russia that injected the assembled documents onto P2P networks and two in Europe that coordinated the action via intelligent automated programs (see “The Diagram”). According to the “villains,” the main goal was to steal all 150,000 books from Search Inside!™ then use the same technology to steal books from the “Google Print Service” (the exploit was limited only by the amount of technological resources financially available, but there are apparent plans to improve the technique by reinvesting the money received through the settlement with Amazon.com not to publicise the hack). In terms of informational culture, this system resembles a machinic process directed at redistributing copyright content; “The Diagram” visualises key processes that define digital piracy as an emergent phenomenon within an open-ended and responsive milieu. That is, the static image foregrounds something of the activity of copying being a technological action that complicates any analysis focusing purely on copyright as content. In this respect, intellectual property rights are revealed as being entangled within information architectures as communication management and cultural recombination – dissipated and enforced by a measured interplay between openness and obstruction, resonance and emergence (Terranova, “Communication beyond Meaning” 52). 
To understand data distribution requires an acknowledgement of these underlying nonhuman relations that allow for such informational exchanges. It requires an understanding of the permutations of agency carried along by digital entities. According to Lawrence Lessig’s influential argument, code is not merely an object of governance, but has an overt legislative function itself. Within the informational environments of software, “a law is defined, not through a statue, but through the code that governs the space” (20). These points of symmetry are understood as concretised social values: they are material standards that regulate flow. Similarly, Alexander Galloway describes computer protocols as non-institutional “etiquette for autonomous agents,” or “conventional rules that govern the set of possible behavior patterns within a heterogeneous system” (7). In his analysis, these agreed-upon standardised actions operate as a style of management fostered by contradiction: progressive though reactionary, encouraging diversity by striving for the universal, synonymous with possibility but completely predetermined, and so on (243-244). Needless to say, political uncertainties arise from a paradigm that generates internal material obscurities through a constant twinning of freedom and control. For Wendy Hui Kyong Chun, these Cold War systems subvert the possibilities for any actual experience of autonomy by generalising paranoia through constant intrusion and reducing social problems to questions of technological optimisation (1-30). In confrontation with these seemingly ubiquitous regulatory structures, cultural theory requires a critical vocabulary differentiated from computer engineering to account for the sociality that permeates through and concatenates technological realities. In his recent work on “mundane” devices, software and code, Adrian McKenzie introduces a relevant analytic approach in the concept of technological action as something that both abstracts and concretises relations in a diffusion of collective-individual forces. Drawing on the thought of French philosopher Gilbert Simondon, he uses the term “transduction” to identify a key characteristic of technology in the relational process of becoming, or ontogenesis. This is described as bringing together disparate things into composites of relations that evolve and propagate a structure throughout a domain, or “overflow existing modalities of perception and movement on many scales” (“Impersonal and Personal Forces in Technological Action” 201). Most importantly, these innovative diffusions or contagions occur by bridging states of difference or incompatibilities. Technological action, therefore, arises from a particular type of disjunctive relation between an entity and something external to itself: “in making this relation, technical action changes not only the ensemble, but also the form of life of its agent. Abstraction comes into being and begins to subsume or reconfigure existing relations between the inside and outside” (203). Here, reciprocal interactions between two states or dimensions actualise disparate potentials through metastability: an equilibrium that proliferates, unfolds, and drives individuation. While drawing on cybernetics and dealing with specific technological platforms, McKenzie’s work can be extended to describe the significance of informational devices throughout control societies as a whole, particularly as a predictive and future-orientated force that thrives on staged conflicts. 
Moreover, being a non-deterministic technical theory, it additionally speaks to new tendencies in regimes of production that harness cognition and cooperation through specially designed infrastructures to enact persistent innovation without any end-point, final goal or natural target (Thrift 283-295). Here, the interface between intellectual property and reproduction can be seen as a site of variation that weaves together disparate objects and entities by imbrication in social life itself. These are specific acts of interference that propel relations toward unforeseen conclusions by drawing on memories, attention spans, material-technical traits, and so on. The focus lies on performance, context, and design “as a continual process of tuning arrived at by distributed aspiration” (Thrift 295). This later point is demonstrated in recent scholarly treatments of filesharing networks as media ecologies. Kate Crawford, for instance, describes the movement of P2P as processual or adaptive, comparable to technological action, marked by key transitions from partially decentralised architectures such as Napster, to the fully distributed systems of Gnutella and seeded swarm-based networks like BitTorrent (30-39). Each of these technologies can be understood as a response to various legal incursions, producing radically dissimilar socio-technological dynamics and emergent trends for how agency is modulated by informational exchanges. Indeed, even these aberrant formations are characterised by modes of commodification that continually spillover and feedback on themselves, repositioning markets and commodities in doing so, from MP3s to iPods, P2P to broadband subscription rates. However, one key limitation of this ontological approach is apparent when dealing with the sheer scale of activity involved, where mass participation elicits certain degrees of obscurity and relative safety in numbers. This represents an obvious problem for analysis, as dynamics can easily be identified in the broadest conceptual sense, without any understanding of the specific contexts of usage, political impacts, and economic effects for participants in their everyday consumptive habits. Large-scale distributed ensembles are “problematic” in their technological constitution, as a result. They are sites of expansive overflow that provoke an equivalent individuation of thought, as the Recording Industry Association of America observes on their educational website: “because of the nature of the theft, the damage is not always easy to calculate but not hard to envision” (“Piracy”). The politics of the filesharing debate, in this sense, depends on the command of imaginaries; that is, being able to conceptualise an overarching structural consistency to a persistent and adaptive ecology. As a mode of tactical intervention, Amazon Noir dramatises these ambiguities by framing technological action through the fictional sensibilities of narrative genre. Ambiguity, Control The extensive use of imagery and iconography from “noir” can be understood as an explicit reference to the increasing criminalisation of copyright violation through digital technologies. However, the term also refers to the indistinct or uncertain effects produced by this tactical intervention: who are the “bad guys” or the “good guys”? Are positions like ‘good’ and ‘evil’ (something like freedom or tyranny) so easily identified and distinguished? 
As Paolo Cirio explains, this political disposition is deliberately kept obscure in the project: “it’s a representation of the actual ambiguity about copyright issues, where every case seems to lack a moral or ethical basis” (“Amazon Noir Interview”). While user communications made available on the site clearly identify culprits (describing the project as jeopardising arts funding, as both irresponsible and arrogant), the self-description of the artists as political “failures” highlights the uncertainty regarding the project’s qualities as a force of long-term social renewal: Lizvlx from Ubermorgen.com had daily shootouts with the global mass-media, Cirio continuously pushed the boundaries of copyright (books are just pixels on a screen or just ink on paper), Ludovico and Bernhard resisted kickback-bribes from powerful Amazon.com until they finally gave in and sold the technology for an undisclosed sum to Amazon. Betrayal, blasphemy and pessimism finally split the gang of bad guys. (“Press Release”) Here, the adaptive and flexible qualities of informatic commodities and computational systems of distribution are knowingly posited as critical limits; in a certain sense, the project fails technologically in order to succeed conceptually. From a cynical perspective, this might be interpreted as guaranteeing authenticity by insisting on the useless or non-instrumental quality of art. However, through this process, Amazon Noir illustrates how forces confined as exterior to control (virality, piracy, noncommunication) regularly operate as points of distinction to generate change and innovation. Just as hackers are legitimately employed to challenge the durability of network exchanges, malfunctions are relied upon as potential sources of future information. Indeed, the notion of demonstrating ‘autonomy’ by illustrating the shortcomings of software is entirely consistent with the logic of control as a modulating organisational diagram. These so-called “circuit breakers” are positioned as points of bifurcation that open up new systems and encompass a more general “abstract machine” or tendency governing contemporary capitalism (Parikka 300). As a consequence, the ambiguities of Amazon Noir emerge not just from the contrary articulation of intellectual property and digital technology, but additionally through the concept of thinking “resistance” simultaneously with regimes of control. This tension is apparent in Galloway’s analysis of the cybernetic machines that are synonymous with the operation of Deleuzian control societies – i.e. “computerised information management” – where tactical media are posited as potential modes of contestation against the tyranny of code, “able to exploit flaws in protocological and proprietary command and control, not to destroy technology, but to sculpt protocol and make it better suited to people’s real desires” (176). While pushing a system into a state of hypertrophy to reform digital architectures might represent a possible technique that produces a space through which to imagine something like “our” freedom, it still leaves unexamined the desire for reformation itself as nurtured by and produced through the coupling of cybernetics, information theory, and distributed networking. This draws into focus the significance of McKenzie’s Simondon-inspired cybernetic perspective on socio-technological ensembles as being always-already predetermined by and driven through asymmetries or difference. 
As Chun observes, consequently, there is no paradox between resistance and capture since “control and freedom are not opposites, but different sides of the same coin: just as discipline served as a grid on which liberty was established, control is the matrix that enables freedom as openness” (71). Why “openness” should be so readily equated with a state of being free represents a major unexamined presumption of digital culture, and leads to the associated predicament of attempting to think of how this freedom has become something one cannot not desire. If Amazon Noir has political currency in this context, however, it emerges from a capacity to recognise how informational networks channel desire, memories, and imaginative visions rather than just cultivated antagonisms and counterintuitive economics.

As a final point, it is worth observing that the project was initiated without publicity until the settlement with Amazon.com. There is, as a consequence, nothing to suggest that this subversive “event” might have actually occurred, a feeling heightened by the abstractions of software entities. The extent to which we believe in “the big book heist,” that such an act is even possible, is a gauge through which the paranoia of control societies is illuminated as a longing or desire for autonomy. As Hakim Bey observes in his conceptualisation of “pirate utopias,” such fleeting encounters with the imaginaries of freedom flow back into the experience of the everyday as political instantiations of utopian hope. Amazon Noir, with all its underlying ethical ambiguities, presents us with a challenge to rethink these affective investments by considering our profound inability to master the complexities and constant intrusions of control. It provides an opportunity to conceive of a future that begins with limits and limitations as immanently central, even foundational, to our deep interconnection with socio-technological ensembles.

References

“Amazon Noir – The Big Book Crime.” <http://www.amazon-noir.com/>.
Bey, Hakim. T.A.Z.: The Temporary Autonomous Zone, Ontological Anarchy, Poetic Terrorism. New York: Autonomedia, 1991.
Chun, Wendy Hui Kyong. Control and Freedom: Power and Paranoia in the Age of Fibre Optics. Cambridge, MA: MIT Press, 2006.
Crawford, Kate. “Adaptation: Tracking the Ecologies of Music and Peer-to-Peer Networks.” Media International Australia 114 (2005): 30-39.
Cubitt, Sean. “Distribution and Media Flows.” Cultural Politics 1.2 (2005): 193-214.
Deleuze, Gilles. Foucault. Trans. Seán Hand. Minneapolis: U of Minnesota P, 1986.
———. “Control and Becoming.” Negotiations 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995. 169-176.
———. “Postscript on the Societies of Control.” Negotiations 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995. 177-182.
Eriksson, Magnus, and Rasmus Fleische. “Copies and Context in the Age of Cultural Abundance.” Online posting. 5 June 2007. Nettime 25 Aug 2007.
Galloway, Alexander. Protocol: How Control Exists after Decentralization. Cambridge, MA: MIT Press, 2004.
Hardt, Michael, and Antonio Negri. Multitude: War and Democracy in the Age of Empire. New York: Penguin Press, 2004.
Harold, Christine. OurSpace: Resisting the Corporate Control of Culture. Minneapolis: U of Minnesota P, 2007.
Lessig, Lawrence. Code and Other Laws of Cyberspace. New York: Basic Books, 1999.
McKenzie, Adrian. Cutting Code: Software and Sociality. New York: Peter Lang, 2006.
———. “The Strange Meshing of Impersonal and Personal Forces in Technological Action.” Culture, Theory and Critique 47.2 (2006): 197-212.
Parikka, Jussi. “Contagion and Repetition: On the Viral Logic of Network Culture.” Ephemera: Theory & Politics in Organization 7.2 (2007): 287-308.
“Piracy Online.” Recording Industry Association of America. 28 Aug 2007. <http://www.riaa.com/physicalpiracy.php>.
Sundaram, Ravi. “Recycling Modernity: Pirate Electronic Cultures in India.” Sarai Reader 2001: The Public Domain. Delhi: Sarai Media Lab, 2001. 93-99. <http://www.sarai.net>.
Terranova, Tiziana. “Communication beyond Meaning: On the Cultural Politics of Information.” Social Text 22.3 (2004): 51-73.
———. “Of Sense and Sensibility: Immaterial Labour in Open Systems.” DATA Browser 03 – Curating Immateriality: The Work of the Curator in the Age of Network Systems. Ed. Joasia Krysa. New York: Autonomedia, 2006. 27-38.
Thrift, Nigel. “Re-inventing Invention: New Tendencies in Capitalist Commodification.” Economy and Society 35.2 (2006): 279-306.

Citation reference for this article

MLA Style
Dieter, Michael. “Amazon Noir: Piracy, Distribution, Control.” M/C Journal 10.5 (2007). <http://journal.media-culture.org.au/0710/07-dieter.php>.

APA Style
Dieter, M. (Oct. 2007) “Amazon Noir: Piracy, Distribution, Control,” M/C Journal, 10(5). Retrieved from <http://journal.media-culture.org.au/0710/07-dieter.php>.
Styles APA, Harvard, Vancouver, ISO, etc.
