Journal articles on the topic 'Neural network adaptation'

Consult the top journal articles selected for research on the topic 'Neural network adaptation'. An abstract is included with each entry whenever it is available in the publication metadata.

1

Hylton, Todd. "Thermodynamic Neural Network." Entropy 22, no. 3 (February 25, 2020): 256. http://dx.doi.org/10.3390/e22030256.

Abstract:
A thermodynamically motivated neural network model is described that self-organizes to transport charge associated with internal and external potentials while in contact with a thermal reservoir. The model integrates techniques for rapid, large-scale, reversible, conservative equilibration of node states and slow, small-scale, irreversible, dissipative adaptation of the edge states as a means to create multiscale order. All interactions in the network are local and the network structures can be generic and recurrent. Isolated networks show multiscale dynamics, and externally driven networks evolve to efficiently connect external positive and negative potentials. The model integrates concepts of conservation, potentiation, fluctuation, dissipation, adaptation, equilibration and causation to illustrate the thermodynamic evolution of organization in open systems. A key conclusion of the work is that the transport and dissipation of conserved physical quantities drives the self-organization of open thermodynamic systems.
2

Vreeswijk, C. van, and D. Hansel. "Patterns of Synchrony in Neural Networks with Spike Adaptation." Neural Computation 13, no. 5 (May 1, 2001): 959–92. http://dx.doi.org/10.1162/08997660151134280.

Abstract:
We study the emergence of synchronized burst activity in networks of neurons with spike adaptation. We show that networks of tonically firing adapting excitatory neurons can evolve to a state where the neurons burst in a synchronized manner. The mechanism leading to this burst activity is analyzed in a network of integrate-and-fire neurons with spike adaptation. The dependence of this state on the different network parameters is investigated, and it is shown that this mechanism is robust against inhomogeneities, sparseness of the connectivity, and noise. In networks of two populations, one excitatory and one inhibitory, we show that decreasing the inhibitory feedback can cause the network to switch from a tonically active, asynchronous state to the synchronized bursting state. Finally, we show that the same mechanism also causes synchronized burst activity in networks of more realistic conductance-based model neurons.
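The bursting mechanism above rests on a simple ingredient that is easy to reproduce in miniature: a spike-triggered adaptation current. The sketch below simulates a single integrate-and-fire neuron with such a current; all parameter values are illustrative choices, not the paper's, but they show the signature effect of inter-spike intervals lengthening as adaptation accumulates.

```python
# Minimal sketch (illustrative parameters, not the paper's network model):
# a leaky integrate-and-fire neuron with a spike-triggered adaptation current.
import numpy as np

dt, T = 0.1, 500.0           # time step and duration (ms)
tau_m, tau_a = 10.0, 100.0   # membrane and adaptation time constants (ms)
v_th, v_reset = 1.0, 0.0     # firing threshold and reset value
g_a, I_ext = 0.3, 1.5        # adaptation jump per spike and constant drive

v, a, spike_times = 0.0, 0.0, []
for step in range(int(T / dt)):
    v += dt / tau_m * (-v + I_ext - a)   # leak + drive - adaptation current
    a += dt / tau_a * (-a)               # adaptation current decays slowly
    if v >= v_th:                        # spike: reset v, step up adaptation
        v, a = v_reset, a + g_a
        spike_times.append(step * dt)

isis = np.diff(spike_times)
print(f"{len(spike_times)} spikes; first ISI {isis[0]:.1f} ms, last ISI {isis[-1]:.1f} ms")
```

In a coupled excitatory population, this same slow negative feedback is what lets synchronized firing periodically exhaust itself and restart, producing the synchronized bursts analyzed in the paper.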
3

Xie, Xurong, Xunying Liu, Tan Lee, and Lan Wang. "Bayesian Learning for Deep Neural Network Adaptation." IEEE/ACM Transactions on Audio, Speech, and Language Processing 29 (2021): 2096–110. http://dx.doi.org/10.1109/taslp.2021.3084072.

4

Patre, P. M., S. Bhasin, Z. D. Wilcox, and W. E. Dixon. "Composite Adaptation for Neural Network-Based Controllers." IEEE Transactions on Automatic Control 55, no. 4 (April 2010): 944–50. http://dx.doi.org/10.1109/tac.2010.2041682.

5

Yu, D. L., and T. K. Chang. "Adaptation of diagonal recurrent neural network model." Neural Computing and Applications 14, no. 3 (March 23, 2005): 189–97. http://dx.doi.org/10.1007/s00521-004-0453-9.

6

Joty, Shafiq, Nadir Durrani, Hassan Sajjad, and Ahmed Abdelali. "Domain adaptation using neural network joint model." Computer Speech & Language 45 (September 2017): 161–79. http://dx.doi.org/10.1016/j.csl.2016.12.006.

7

Denker, John S. "Neural network models of learning and adaptation." Physica D: Nonlinear Phenomena 22, no. 1-3 (October 1986): 216–32. http://dx.doi.org/10.1016/0167-2789(86)90242-3.

8

YAEGER, LARRY S. "IDENTIFYING NEURAL NETWORK TOPOLOGIES THAT FOSTER DYNAMICAL COMPLEXITY." Advances in Complex Systems 16, no. 02n03 (May 2013): 1350032. http://dx.doi.org/10.1142/s021952591350032x.

Abstract:
We use an ecosystem simulator capable of evolving arbitrary neural network topologies to explore the relationship between an information theoretic measure of the complexity of neural dynamics and several graph theoretical metrics calculated for the underlying network topologies. Evolutionary trends confirm and extend previous results demonstrating an evolutionary selection for complexity and small-world network properties during periods of behavioral adaptation. The resultant mapping of the space of network topologies occupied by the most complex networks yields new insights into the relationship between network structure and function. The highest complexity networks are found within limited numerical ranges of clustering coefficient, characteristic path length, small-world index, and global efficiency. The widths of these ranges vary from quite narrow to modest, and provide a guide to the most productive regions of the space of neural topologies in which to search for complexity. Our demonstration that evolution selects for complex dynamics and small-world networks helps explain biological evidence for these trends and provides evidence for selection of these characteristics based purely on network function—with no physical constraints on network structure—thus suggesting that functional and structural evolutionary pressures cooperate to produce brains optimized for adaptation to a complex, variable world.
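For readers who want to reproduce the topology metrics named above, the sketch below computes them with networkx. The Watts-Strogatz graph is only a stand-in for an evolved topology, and the small-world index is taken as the clustering ratio over the path-length ratio against a degree-preserving randomized reference, one common convention.

```python
# Sketch: the four topology metrics from the abstract, via networkx, on a
# stand-in graph (a Watts-Strogatz ring, not the simulator's evolved nets).
import networkx as nx

G = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=1)
R = nx.random_reference(G, niter=5, connectivity=True, seed=1)  # randomized reference

C, L = nx.average_clustering(G), nx.average_shortest_path_length(G)
Cr, Lr = nx.average_clustering(R), nx.average_shortest_path_length(R)

print("clustering coefficient    ", round(C, 3))
print("characteristic path length", round(L, 3))
print("global efficiency         ", round(nx.global_efficiency(G), 3))
print("small-world index (sigma) ", round((C / Cr) / (L / Lr), 3))
```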
9

Ziemke, Tom. "Radar Image Segmentation Using Self-Adapting Recurrent Networks." International Journal of Neural Systems 08, no. 01 (February 1997): 47–54. http://dx.doi.org/10.1142/s0129065797000070.

Abstract:
This paper presents a novel approach to the segmentation and integration of (radar) images using a second-order recurrent artificial neural network architecture consisting of two sub-networks: a function network that classifies radar measurements into four different categories of objects in sea environments (water, oil spills, land and boats), and a context network that dynamically computes the function network's input weights. It is shown that in experiments (using simulated radar images) this mechanism outperforms conventional artificial neural networks since it allows the network to learn to solve the task through a dynamic adaptation of its classification function based on its internal state closely reflecting the current context.
10

Li, Xiaofeng, Suying Xiang, Pengfei Zhu, and Min Wu. "Establishing a Dynamic Self-Adaptation Learning Algorithm of the BP Neural Network and Its Applications." International Journal of Bifurcation and Chaos 25, no. 14 (December 30, 2015): 1540030. http://dx.doi.org/10.1142/s0218127415400301.

Abstract:
In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence, a tendency to become trapped in local minima, poor generalization ability and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis and a self-adaptive model, and hence can effectively solve the problems of selecting the structural parameters, initial connection weights, thresholds and learning rates of the BP neural network. The new algorithm not only reduces human intervention, optimizes the topological structure of the BP neural network and improves its generalization ability, but also accelerates convergence, avoids trapping in local minima, and enhances the network's adaptation and prediction abilities. The dynamic self-adaptive learning algorithm is used to forecast the total retail sales of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP algorithm in prediction accuracy and time consumption, which shows its feasibility and effectiveness.
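Of the ingredients listed, the self-adaptive learning rate is the simplest to isolate. The toy rule below ("bold driver" style, our illustration rather than the authors' algorithm, and shown on a linear least-squares fit instead of a full BP network for brevity) grows the rate while the loss falls and cuts it sharply when the loss rises.

```python
# Toy dynamic learning-rate adaptation on gradient descent (illustrative).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5])            # synthetic linear target

w, lr, prev_loss = np.zeros(3), 0.05, np.inf
for epoch in range(60):
    err = X @ w - y
    loss = float(err @ err) / len(y)
    lr = lr * 1.05 if loss < prev_loss else lr * 0.5   # self-adaptation step
    prev_loss = loss
    w -= lr * (2 * X.T @ err / len(y))        # gradient step

print(round(loss, 6), w.round(3))             # near-zero loss, recovered weights
```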
11

Zhao, S., S. Saha, and X. X. Zhu. "GRAPH NEURAL NETWORK BASED OPEN-SET DOMAIN ADAPTATION." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2022 (May 31, 2022): 1407–13. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2022-1407-2022.

Abstract:
Owing to the presence of many sensors and geographic/seasonal variations, domain adaptation is an important topic in remote sensing. However, most domain adaptation methods focus on closed-set adaptation, i.e., they assume that the source and target domains share the same label space. This assumption often does not hold in practice, as there can be previously unseen classes in the target domain. To circumvent this issue, we propose a method for open-set domain adaptation, where the target domain contains additional unknown classes that are not present in the source domain. To improve the model's generalization ability, we propose a Progressive Weighted Graph Learning (PWGL) method. The proposed method exploits graph neural networks to aggregate similar samples across the source and target domains. The progressive strategy gradually separates the unknown samples from the known samples and upgrades the source domain by incorporating the pseudolabeled known target samples. The weighted adversarial learning promotes the alignment of known classes across domains and rejects the unknown class. Experiments performed on a multi-city dataset show the effectiveness of the proposed approach.
12

GOLTSEV, ALEXANDER, and DONALD C. WUNSCH. "GENERALIZATION OF FEATURES IN THE ASSEMBLY NEURAL NETWORKS." International Journal of Neural Systems 14, no. 01 (February 2004): 39–56. http://dx.doi.org/10.1142/s0129065704001838.

Abstract:
The purpose of the paper is an experimental study of the formation of class descriptions, taking place during learning, in assembly neural networks. The assembly neural network is artificially partitioned into several sub-networks according to the number of classes that the network has to recognize. The features extracted from input data are represented in neural column structures of the sub-networks. Hebbian neural assemblies are formed in the column structure of the sub-networks by weight adaptation. A specific class description is formed in each sub-network of the assembly neural network due to intersections between the neural assemblies. The process of formation of class descriptions in the sub-networks is interpreted as feature generalization. A set of special experiments is performed to study this process, on a task of character recognition using the MNIST database.
13

Hu, Brian, Marina E. Garrett, Peter A. Groblewski, Douglas R. Ollerenshaw, Jiaqi Shang, Kate Roll, Sahar Manavi, Christof Koch, Shawn R. Olsen, and Stefan Mihalas. "Adaptation supports short-term memory in a visual change detection task." PLOS Computational Biology 17, no. 9 (September 17, 2021): e1009246. http://dx.doi.org/10.1371/journal.pcbi.1009246.

Abstract:
The maintenance of short-term memories is critical for survival in a dynamically changing world. Previous studies suggest that this memory can be stored in the form of persistent neural activity or using a synaptic mechanism, such as with short-term plasticity. Here, we compare the predictions of these two mechanisms to neural and behavioral measurements in a visual change detection task. Mice were trained to respond to changes in a repeated sequence of natural images while neural activity was recorded using two-photon calcium imaging. We also trained two types of artificial neural networks on the same change detection task as the mice. Following fixed pre-processing using a pretrained convolutional neural network, either a recurrent neural network (RNN) or a feedforward neural network with short-term synaptic depression (STPNet) was trained to the same level of performance as the mice. While both networks are able to learn the task, the STPNet model contains units whose activity is more similar to the in vivo data and produces errors that are more similar to those of the mice. When images are omitted, an unexpected perturbation which was absent during training, mice often do not respond to the omission but are more likely to respond to the subsequent image. Unlike the RNN model, STPNet produces a similar pattern of behavior. These results suggest that simple neural adaptation mechanisms may serve as an important bottom-up memory signal in this task, which can be used by downstream areas in the decision-making process.
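The depression mechanism behind STPNet can be caricatured in a few lines. In this simplification of ours (not the published model or its parameters), each input channel has a synaptic resource that depletes with use and recovers slowly, so a repeated image drives a fading response while a changed image evokes a full one, which is exactly the kind of bottom-up change signal the abstract describes.

```python
# Toy short-term synaptic depression (illustrative constants, not STPNet's).
tau_rec, use = 5.0, 0.5            # recovery time constant (steps), usage fraction
x = {"A": 1.0, "B": 1.0}           # available synaptic resource per input channel

for t, img in enumerate(["A"] * 6 + ["B"]):
    response = x[img]              # response scales with remaining resource
    for k in x:                    # all channels recover a little every step
        x[k] += (1.0 - x[k]) / tau_rec
    x[img] *= 1.0 - use            # only the driven channel is depressed
    print(t, img, round(response, 3))   # responses to A fade; B rebounds
```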
14

Maksutova, K., N. Saparkhojayev, and Dusmat Zhamangarin. "DEVELOPMENT OF AN ONTOLOGICAL MODEL OF DEEP LEARNING NEURAL NETWORKS." Bulletin D. Serikbayev of EKTU, no. 1 (March 2024): 190–201. http://dx.doi.org/10.51885/1561-4212_2024_1_190.

Abstract:
This research paper examines the challenges and prospects associated with the integration of artificial neural networks and knowledge bases. The focus is on leveraging this integration to address practical problems. The paper explores the development, training, and integration of artificial neural networks, emphasizing their adaptation to knowledge bases. This adaptation involves processes such as integration, communication, representation of ontological structures, and interpretation by the knowledge base of the artificial neural network's representation through input and output. The paper also considers establishing an intellectual environment conducive to the development, training, and integration of adapted artificial neural networks with knowledge bases. The knowledge base embedded in an artificial neural network is constructed using a homogeneous semantic network, and knowledge processing employs a multi-agent approach. The representation of artificial neural networks and their specifications within a unified semantic model of knowledge representation is detailed, encompassing text-based specifications in the language of knowledge representation with theoretical semantics. The models shared with the knowledge base include dynamic and other types that vary in their capabilities for knowledge representation. Furthermore, the paper analyzes approaches to creating artificial neural networks across various libraries of the high-level programming language Python. It explores techniques for developing artificial neural networks within the Python development environment, investigating the key features and functions of these libraries. A comparative analysis of neural networks created in object-oriented programming languages is provided, along with the development of an ontological model for deep learning neural networks.
15

Hsu, Chun-Fei, Ping-Zong Lin, Tsu-Tian Lee, and Chi-Hsu Wang. "Adaptive asymmetric fuzzy neural network controller design via network structuring adaptation." Fuzzy Sets and Systems 159, no. 20 (October 2008): 2627–49. http://dx.doi.org/10.1016/j.fss.2008.01.034.

16

Khaikine, Maxim, and Klaus Holthausen. "A General Probability Estimation Approach for Neural Computation." Neural Computation 12, no. 2 (February 1, 2000): 433–50. http://dx.doi.org/10.1162/089976600300015862.

Abstract:
We describe an analytical framework for the adaptation of neural systems that adjust their internal structure on the basis of subjective probabilities constructed by computation of randomly received input signals. A principled approach is provided with the key property that it defines a probability density model that allows studying the convergence of the adaptation process. In particular, the derived algorithm can be applied to approximation problems such as the estimation of probability densities or of regression functions. These approximation algorithms can easily be extended to higher-dimensional cases. Certain neural network models can be derived from our approach (e.g., topological feature maps and associative networks).
17

Wang, Miao, Xu Yang, Yunchong Qian, Yunlin Lei, Jian Cai, Ziyi Huan, Xialv Lin, and Hao Dong. "Adaptive Neural Network Structure Optimization Algorithm Based on Dynamic Nodes." Current Issues in Molecular Biology 44, no. 2 (February 7, 2022): 817–32. http://dx.doi.org/10.3390/cimb44020056.

Abstract:
Large-scale artificial neural networks have many redundant structures, which can trap the network in local optima and extend training time. Moreover, existing neural network topology optimization algorithms have the disadvantage of heavy computation and complex network structure modeling. We propose a Dynamic Node-based neural network Structure optimization algorithm (DNS) to handle these issues. DNS consists of two steps: the generation step and the pruning step. In the generation step, the network generates hidden layers layer by layer until accuracy reaches the threshold. Then, in the pruning step, the network uses a pruning algorithm based on Hebb's rule or Pearson's correlation for adaptation. In addition, we combine a genetic algorithm with DNS to optimize it (GA-DNS). Experimental results show that compared with traditional neural network topology optimization algorithms, GA-DNS can generate neural networks with higher construction efficiency, lower structural complexity, and higher classification accuracy.
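To give a flavor of the pruning step, here is a toy correlation-based criterion (our illustration; the paper's Hebbian variant and exact thresholds differ): hidden units whose activations are almost collinear with an already-kept unit are treated as redundant and dropped.

```python
# Toy correlation-based pruning of redundant hidden units (illustrative).
import numpy as np

rng = np.random.default_rng(0)
acts = rng.normal(size=(200, 8))        # activations: 200 samples x 8 units
acts[:, 5] = 0.9 * acts[:, 2] + 0.1     # make unit 5 redundant with unit 2

corr = np.corrcoef(acts, rowvar=False)  # Pearson correlation between units
keep = []
for j in range(acts.shape[1]):
    # keep unit j only if it is not strongly correlated with a kept unit
    if all(abs(corr[j, k]) < 0.95 for k in keep):
        keep.append(j)
print("units kept after pruning:", keep)     # unit 5 is pruned
```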
18

de Sousa, Celso, and Elder Moreira Hermerly. "ADAPTIVE CONTROL OF MOBILE ROBOTS USING A NEURAL NETWORK." International Journal of Neural Systems 11, no. 03 (June 2001): 211–18. http://dx.doi.org/10.1142/s0129065701000643.

Abstract:
A neural-network-based control approach for mobile robots is proposed. The weight adaptation is performed on-line, without prior learning. Several possible situations in robot navigation are considered, including uncertainties in the model and the presence of disturbances. Weight adaptation laws are presented, as well as simulation results.
19

Marković, Dimitrije, and Claudius Gros. "Intrinsic Adaptation in Autonomous Recurrent Neural Networks." Neural Computation 24, no. 2 (February 2012): 523–40. http://dx.doi.org/10.1162/neco_a_00232.

Abstract:
A massively recurrent neural network responds on one side to input stimuli and is autonomously active, on the other side, in the absence of sensory inputs. Stimuli and information processing depend crucially on the qualia of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flows, which are quite insensitive to external stimuli, interceded by chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.
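A drastically reduced sketch of what "nonsynaptic" adaptation means in practice: below, only a neuron's firing threshold is adapted, with a homeostatic rule that drives its mean output toward a target level. This is a crude stand-in for the paper's entropy-optimizing rule (which also adapts the gain), but it shows an intrinsic parameter, rather than a synapse, doing the learning.

```python
# Intrinsic (nonsynaptic) adaptation sketch: homeostatic threshold update.
import numpy as np

rng = np.random.default_rng(0)
gain, thresh, eta, target = 3.0, 0.0, 0.02, 0.1

ys = []
for _ in range(5000):
    x = rng.normal()                                 # random input drive
    y = 1.0 / (1.0 + np.exp(-gain * (x - thresh)))   # sigmoid firing rate
    thresh += eta * (y - target)                     # too active -> raise threshold
    ys.append(y)

print("adapted threshold:", round(thresh, 2))
print("mean activity early vs late:", round(np.mean(ys[:200]), 2), round(np.mean(ys[-200:]), 2))
```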
20

Vinken, K., X. Boix, and G. Kreiman. "Incorporating intrinsic suppression in deep neural networks captures dynamics of adaptation in neurophysiology and perception." Science Advances 6, no. 42 (October 2020): eabd4205. http://dx.doi.org/10.1126/sciadv.abd4205.

Abstract:
Adaptation is a fundamental property of sensory systems that can change subjective experiences in the context of recent information. Adaptation has been postulated to arise from recurrent circuit mechanisms or as a consequence of neuronally intrinsic suppression. However, it is unclear whether intrinsic suppression by itself can account for effects beyond reduced responses. Here, we test the hypothesis that complex adaptation phenomena can emerge from intrinsic suppression cascading through a feedforward model of visual processing. A deep convolutional neural network with intrinsic suppression captured neural signatures of adaptation including novelty detection, enhancement, and tuning curve shifts, while producing aftereffects consistent with human perception. When adaptation was trained in a task where repeated input affects recognition performance, an intrinsic mechanism generalized better than a recurrent neural network. Our results demonstrate that feedforward propagation of intrinsic suppression changes the functional state of the network, reproducing key neurophysiological and perceptual properties of adaptation.
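One plausible minimal rendering of "intrinsic suppression" in a feedforward layer (our reading of the mechanism; the constants are arbitrary): each unit keeps an exponentially updated trace of its own recent activity and subtracts it from its drive, so a repeated stimulus yields declining responses with no recurrence involved.

```python
# Sketch of neuronally intrinsic suppression in a feedforward layer.
import numpy as np

def suppressed_layer(inputs, alpha=0.8, beta=0.7):
    """inputs: array (time, units); returns adapted responses over time."""
    state = np.zeros(inputs.shape[1])               # per-unit suppression trace
    out = []
    for drive in inputs:
        r = np.maximum(drive - beta * state, 0.0)   # ReLU(drive - suppression)
        state = alpha * state + (1 - alpha) * r     # update trace from output
        out.append(r)
    return np.array(out)

seq = np.tile([[1.0, 0.2]], (6, 1))     # one stimulus repeated six times
print(suppressed_layer(seq).round(2))   # responses decline with repetition
```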
21

Zhu, Liqiang, Ying-Cheng Lai, Frank C. Hoppensteadt, and Jiping He. "Probing Changes in Neural Interaction During Adaptation." Neural Computation 15, no. 10 (October 1, 2003): 2359–77. http://dx.doi.org/10.1162/089976603322362392.

Abstract:
A procedure is developed to probe the changes in the functional interactions among neurons in primary motor cortex of the monkey brain during adaptation. A monkey is trained to learn a new skill, moving its arm to reach a target under the influence of external perturbations. The spike trains of multiple neurons in the primary motor cortex are recorded simultaneously. We utilize the methodology of directed transfer function, derived from a class of linear stochastic models, to quantify the causal interactions between the neurons. We find that the coupling between the motor neurons tends to increase during the adaptation but return to the original level after the adaptation. Furthermore, there is evidence that adaptation tends to affect the topology of the neural network, despite the approximate conservation of the average coupling strength in the network before and after the adaptation.
22

Wu, Jian Hui, Guo Li Wang, Jing Wang, and Yu Su. "BP Neural Network and Multiple Linear Regression in Acute Hospitalization Costs in the Comparative Study." Applied Mechanics and Materials 50-51 (February 2011): 959–63. http://dx.doi.org/10.4028/www.scientific.net/amm.50-51.959.

Abstract:
The BP neural network is an important component of artificial neural networks and has gradually become a branch of computational statistics. With characteristics such as large-scale parallel information processing and excellent self-adaptation and self-learning, the BP neural network has been used for prediction in complex nonlinear dynamic systems. The BP neural network requires no precise mathematical model and makes no assumptions about the data itself; its ability to handle non-linear problems is stronger than that of traditional statistical methods. By contrasting the BP neural network with multiple linear regression, this article finds that the BP neural network has stronger fitting ability and more stable prediction performance, and may be further applied and promoted in the analysis and forecasting of continuous data.
23

Sharma, B. Lungsi, and Richard B. Wells. "A demonstration of using the model reference principle to develop the function-oriented adaptive pulse-coded neural network." SIMULATION 96, no. 2 (July 10, 2019): 207–19. http://dx.doi.org/10.1177/0037549719860587.

Abstract:
How can one design an adaptive pulsed neural network that is based on psycho-phenomenological foundations? In other words, how can one migrate the adaptive capability of a psychologically modeled neural network to a pulsed network? Neural networks that model psychological phenomena are at a larger scale than physiological models. There is a common presumption that pulse-coded neural network analogs to non-pulsing networks can be obtained by a simple mapping and scaling process of some sort. But the actual in vivo environment of pulse-coded neural network systems produces a much more diverse set of firing patterns. Thus, functional mapping from traditional neural network systems to pulse-coded neural network systems is much more challenging than has been presumed. This paper demonstrates that the employment of model reference adaptation as a method for applying scientific reduction is a powerful design tool for the development of a function-oriented adaptive pulse-coded neural network. The performance surface is empirically obtained by comparing the performance of the pulsed network to the non-pulsing network. Based on this surface, the adaptive algorithm is a combination of gain scheduling and steepest-descent method. Therefore, the adaptive property of the pulse-coded neural network is built upon a psycho-physiological foundation.
24

Siddikov, I. H., P. I. Kalandarov, and D. B. Yadgarova. "Engineering Calculation And Algorithm Of Adaptation Of Parameters Of A Neuro-Fuzzy Controller." American Journal of Applied sciences 03, no. 09 (September 30, 2021): 41–49. http://dx.doi.org/10.37547/tajas/volume03issue09-06.

Abstract:
As part of the study, a control scheme with adaptation of the coefficients of a neuro-fuzzy regulator is implemented. The area difference method is used to train the network. It is improved by adding a rule base, which allows choosing the optimal learning rate for individual neurons of the neural network. The neural network controller is applied as a superstructure of the PID controller in the process control scheme. The dynamic object can function in different modes; this technological process operates in different modes in terms of loading and temperature setpoints. In experiments, the power consumption and the amount of time required to maintain the same absorption process were evaluated using a conventional PID controller and a neural-network controller. It is concluded that the neuro-fuzzy controller with a superstructure reduced the transient time by 19%.
25

Save, Ashwini, and Narendra Shekokar. "Cross Domain Adaptation using A Novel Convolution Neural Network." International Journal of Engineering Research and Technology 13, no. 9 (September 30, 2020): 2230. http://dx.doi.org/10.37624/ijert/13.9.2020.2230-2238.

26

Pan, Yongping, Qin Gao, and Haoyong Yu. "Fast and low-frequency adaptation in neural network control." IET Control Theory & Applications 8, no. 17 (November 20, 2014): 2062–69. http://dx.doi.org/10.1049/iet-cta.2014.0449.

27

He, Y., and U. Cilingirogu. "A charge-based on-chip adaptation Kohonen neural network." IEEE Transactions on Neural Networks 4, no. 3 (May 1993): 462–69. http://dx.doi.org/10.1109/72.217189.

28

Furui, Sadaoki, Daisuke Itoh, and Zhipeng Zhang. "Neural-network-based HMM adaptation for noisy speech recognition." Acoustical Science and Technology 24, no. 2 (2003): 69–75. http://dx.doi.org/10.1250/ast.24.69.

29

Shi, Yangyang, Martha Larson, and Catholijn M. Jonker. "Recurrent neural network language model adaptation with curriculum learning." Computer Speech & Language 33, no. 1 (September 2015): 136–54. http://dx.doi.org/10.1016/j.csl.2014.11.004.

30

Li, Xudong, Jianhua Zheng, Mingtao Li, Wenzhen Ma, and Yang Hu. "Frequency-Domain Fusing Convolutional Neural Network: A Unified Architecture Improving Effect of Domain Adaptation for Fault Diagnosis." Sensors 21, no. 2 (January 10, 2021): 450. http://dx.doi.org/10.3390/s21020450.

Abstract:
In recent years, transfer learning has been widely applied in fault diagnosis to address the inconsistent distribution of the original training dataset and the online-collected testing dataset. In particular, domain adaptation methods can handle an unlabeled testing dataset in transfer learning. The Convolutional Neural Network (CNN) is the most widely used network among existing domain adaptation approaches due to its powerful feature extraction capability. However, network design is largely empirical, and there is no design principle grounded in the frequency domain. In this paper, we propose a unified convolutional neural network architecture for domain adaptation from a frequency-domain perspective, named the Frequency-domain Fusing Convolutional Neural Network (FFCNN). FFCNN contains two parts: a frequency-domain fusing layer and a feature extractor. The frequency-domain fusing layer uses convolution operations to filter signals at different frequency bands and combines them into new input signals. These signals are input to the feature extractor to extract features and perform domain adaptation. We apply FFCNN to three domain adaptation methods, and the diagnosis accuracy is improved compared to the typical CNN.
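To make the "frequency-domain fusing" idea concrete, here is one interpretation sketch: a raw 1-D signal is split into frequency bands and the band-limited signals are stacked as channels for a downstream feature extractor. The paper implements the band filtering with convolutions learned inside the network; the FFT masking below, with arbitrary band edges, is a simpler stand-in for the same front-end role.

```python
# Stand-in for a frequency-band front end: split a signal into bands and
# stack them as channels (FFT masking here; the paper uses convolutions).
import numpy as np

def band_split(signal, n_bands=4):
    spec = np.fft.rfft(signal)
    edges = np.linspace(0, len(spec), n_bands + 1, dtype=int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        masked = np.zeros_like(spec)
        masked[lo:hi] = spec[lo:hi]                 # keep only one band
        bands.append(np.fft.irfft(masked, n=len(signal)))
    return np.stack(bands)                          # (n_bands, time) channels

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 60 * np.pi, 1024)) + rng.normal(0, 0.3, 1024)
print(band_split(x).shape)                          # (4, 1024)
```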
31

Ribar, Srdjan, Vojislav V. Mitic, and Goran Lazovic. "Neural Networks Application on Human Skin Biophysical Impedance Characterizations." Biophysical Reviews and Letters 16, no. 01 (February 6, 2021): 9–19. http://dx.doi.org/10.1142/s1793048021500028.

Abstract:
Artificial neural networks (ANNs) are basically structures that perform input–output mapping. This mapping mimics the signal processing in biological neural networks. The basic element of a biological neural network is a neuron. Neurons receive input signals from other neurons or the environment, process them, and generate their output, which represents the input to another neuron of the network. Neurons can change their sensitivity to input signals. Each neuron has a simple rule to process an input signal. Biological neural networks have the property that signals are processed through many parallel connections (massively parallel processing). The activity of all neurons in these parallel connections is summed and represents the output of the whole network. The main feature of biological neural networks is that changes in the sensitivity of the neurons lead to changes in the operation of the entire network. This is called adaptation and is correlated with the learning process of living organisms. In this paper, a set of artificial neural networks is used for classifying human skin biophysical impedance data.
32

Yang, Guochun, Kai Wang, Weizhi Nan, Qi Li, Ya Zheng, Haiyan Wu, and Xun Liu. "Distinct Brain Mechanisms for Conflict Adaptation within and across Conflict Types." Journal of Cognitive Neuroscience 34, no. 3 (February 1, 2022): 445–60. http://dx.doi.org/10.1162/jocn_a_01806.

Abstract:
Cognitive conflict, like other cognitive processes, shows the characteristic of adaptation, that is, conflict effects are attenuated when immediately following a conflicting event, a phenomenon known as the conflict adaptation effect (CAE). One important aspect of CAE is its sensitivity to the intertrial coherence of conflict type, that is, behavioral CAE occurs only if consecutive trials are of the same conflict type. Although reliably observed behaviorally, the neural mechanisms underlying such a phenomenon remain elusive. With a paradigm combining the classic Simon task and Stroop task, this fMRI study examined neural correlates of conflict adaptation both within and across conflict types. The results revealed that when the conflict type repeated (but not when it alternated), the CAE-like neural activations were observed in dorsal ACC, inferior frontal gyrus (IFG), superior parietal lobe, and so forth (i.e., regions within typical task-positive networks). In contrast, when the conflict type alternated (but not when it repeated), we found CAE-like neural deactivations in the left superior frontal gyri (i.e., a region within the typical task-negative network). Network analyses suggested that the regions of ACC, IFG, superior parietal lobe, and superior frontal gyrus can be clustered into two antagonistic networks, and the ACC–IFG connection was associated with the within-type CAE. This evidence suggests that our adaptation to cognitive conflicts within a conflict type and across different types may rely on these two distinct neural mechanisms.
33

Tran, Vu, François Septier, Daisuke Murakami, and Tomoko Matsui. "Spatial–Temporal Temperature Forecasting Using Deep-Neural-Network-Based Domain Adaptation." Atmosphere 15, no. 1 (January 10, 2024): 90. http://dx.doi.org/10.3390/atmos15010090.

Abstract:
Accurate temperature forecasting is critical for various sectors, yet traditional methods struggle with complex atmospheric dynamics. Deep neural networks (DNNs), especially transformer-based DNNs, offer potential advantages, but face challenges with domain adaptation across different geographical regions. We evaluated the effectiveness of DNN-based domain adaptation for daily maximum temperature forecasting in experimental low-resource settings. We used an attention-based transformer deep learning architecture as the core forecasting framework and used kernel mean matching (KMM) for domain adaptation. Domain adaptation significantly improved forecasting accuracy in most experimental settings, thereby mitigating domain differences between source and target regions. Specifically, we observed that domain adaptation is more effective than exclusively training on a small amount of target-domain training data. This study reinforces the potential of using DNNs for temperature forecasting and underscores the benefits of domain adaptation using KMM. It also highlights the need for caution when using small amounts of target-domain data to avoid overfitting. Future research includes investigating strategies to minimize overfitting and to further probe the effect of various factors on model performance.
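Kernel mean matching, mentioned above, reweights source samples so that their feature-space mean approaches the target domain's. Real KMM solves a constrained quadratic program with an RBF kernel; the cartoon below keeps only the core idea, using a linear feature map, a ridge pull toward uniform weights, and clipping in place of the proper constraints, so treat it as a sketch of the objective rather than an implementation.

```python
# Cartoon of the kernel-mean-matching objective (linear features, ridge +
# clipping instead of the real constrained QP). All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
Xs = rng.normal(loc=0.0, size=(100, 3))   # source-domain samples
Xt = rng.normal(loc=1.0, size=(80, 3))    # shifted target-domain samples

A = Xs.T / len(Xs)                        # A @ w = weighted source mean
b = Xt.mean(axis=0)                       # target mean to match
lam = 1e-3                                # ridge toward uniform weights
w = np.linalg.solve(A.T @ A + lam * np.eye(len(Xs)),
                    A.T @ b + lam * np.ones(len(Xs)))
w = np.clip(w, 0.0, None)                 # importance weights must be >= 0
w *= len(Xs) / w.sum()                    # renormalize to average weight 1

print("source mean         ", Xs.mean(axis=0).round(2))
print("weighted source mean", (A @ w).round(2))
print("target mean         ", b.round(2))
```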
34

Alavash, Mohsen, Sarah Tune, and Jonas Obleser. "Dynamic large-scale connectivity of intrinsic cortical oscillations supports adaptive listening in challenging conditions." PLOS Biology 19, no. 10 (October 11, 2021): e3001410. http://dx.doi.org/10.1371/journal.pbio.3001410.

Abstract:
In multi-talker situations, individuals adapt behaviorally to this listening challenge mostly with ease, but how do brain neural networks shape this adaptation? We here establish a long-sought link between large-scale neural communications in electrophysiology and behavioral success in the control of attention in difficult listening situations. In an age-varying sample of N = 154 individuals, we find that connectivity between intrinsic neural oscillations extracted from source-reconstructed electroencephalography is regulated according to the listener’s goal during a challenging dual-talker task. These dynamics occur as spatially organized modulations in power-envelope correlations of alpha and low-beta neural oscillations during approximately 2-s intervals most critical for listening behavior relative to resting-state baseline. First, left frontoparietal low-beta connectivity (16 to 24 Hz) increased during anticipation and processing of a spatial-attention cue before speech presentation. Second, posterior alpha connectivity (7 to 11 Hz) decreased during comprehension of competing speech, particularly around target-word presentation. Connectivity dynamics of these networks were predictive of individual differences in the speed and accuracy of target-word identification, respectively, but proved unconfounded by changes in neural oscillatory activity strength. Successful adaptation to a listening challenge thus latches onto two distinct yet complementary neural systems: a beta-tuned frontoparietal network enabling the flexible adaptation to attentive listening state and an alpha-tuned posterior network supporting attention to speech.
35

Nerrand, O., P. Roussel-Ragot, L. Personnaz, G. Dreyfus, and S. Marcos. "Neural Networks and Nonlinear Adaptive Filtering: Unifying Concepts and New Algorithms." Neural Computation 5, no. 2 (March 1993): 165–99. http://dx.doi.org/10.1162/neco.1993.5.2.165.

Abstract:
The paper proposes a general framework that encompasses the training of neural networks and the adaptation of filters. We show that neural networks can be considered as general nonlinear filters that can be trained adaptively, that is, that can undergo continual training with a possibly infinite number of time-ordered examples. We introduce the canonical form of a neural network. This canonical form permits a unified presentation of network architectures and of gradient-based training algorithms for both feedforward networks (transversal filters) and feedback networks (recursive filters). We show that several algorithms used classically in linear adaptive filtering, and some algorithms suggested by other authors for training neural networks, are special cases in a general classification of training algorithms for feedback networks.
36

Lin, Baihan. "Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers." Entropy 24, no. 1 (December 28, 2021): 59. http://dx.doi.org/10.3390/e24010059.

Abstract:
Inspired by the adaptation phenomenon of neuronal firing, we propose regularity normalization (RN) as an unsupervised attention mechanism (UAM) which computes the statistical regularity in the implicit space of neural networks under the Minimum Description Length (MDL) principle. Treating the neural network optimization process as a partially observable model selection problem, regularity normalization constrains the implicit space by a normalization factor, the universal code length. We compute this universal code incrementally across neural network layers and demonstrate the flexibility to include data priors such as top-down attention and other oracle information. Empirically, our approach outperforms existing normalization methods in tackling limited, imbalanced and non-stationary input distributions in image classification, classic control, procedurally-generated reinforcement learning, generative modeling, handwriting generation and question answering tasks with various neural network architectures. Lastly, the unsupervised attention mechanism is a useful probing tool for neural networks, tracking the dependency and critical learning stages across layers and recurrent time steps of deep networks.
37

Bereta, Michał. "Kohonen Network-Based Adaptation of Non Sequential Data for Use in Convolutional Neural Networks." Sensors 21, no. 21 (October 29, 2021): 7221. http://dx.doi.org/10.3390/s21217221.

Abstract:
Convolutional neural networks have become one of the most powerful computing tools of artificial intelligence in recent years. They are especially suitable for the analysis of images and other data that have an inherent sequence structure, such as time series data. In the case of data in the form of vectors of features, the order of which does not matter, the use of convolutional neural networks is not justified. This paper presents a new method of representing non-sequential data as images that can be analyzed by a convolutional network. The well-known Kohonen network was used for this purpose. After training on non-sequential data, each example is represented by a so-called U-image that can be used as input to a convolutional layer. A hybrid approach is also presented, in which the neural network uses two types of input signals, both the U-image representation and the original features. Results of the proposed method on traditional machine learning databases, as well as on a difficult classification problem originating from the analysis of measurement data from experiments in particle physics, are presented.
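A plausible reading of the U-image construction (not necessarily the paper's exact recipe, and assuming the third-party minisom package): train a Kohonen map on the feature vectors, then render each example as the 2-D map of its distances to every node's codebook vector. That image has local structure a convolutional layer can exploit even though the original feature order did not matter.

```python
# Sketch: per-example "U-image" from a trained Kohonen map (via minisom).
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 8))          # non-sequential feature vectors

som = MiniSom(10, 10, input_len=8, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(data, 2000)

def to_image(x):
    # distance from x to every node's codebook vector -> a 10x10 image
    return np.linalg.norm(som.get_weights() - x, axis=2)

print(to_image(data[0]).shape)            # (10, 10), ready for a conv layer
```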
38

Ge, S. S., and T. H. Lee. "Parallel Adaptive Neural Network Control of Robots." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 208, no. 4 (November 1994): 231–37. http://dx.doi.org/10.1243/pime_proc_1994_208_336_02.

Abstract:
In this paper, a parallel adaptive neural network (NN) control design for robots motivated by the work by Lee and Tan is presented. The controller is based on direct adaptive techniques and an approach of using an additional parallel NN to provide adaptive enhancements to a basic fixed controller, which can be either a NN-based non-linear controller or a model-based non-linear controller. It is shown that, if Gaussian radial basis function networks are used for the additional parallel NN, uniformly stable adaptation is assured and asymptotic tracking of the position reference signal is achieved.
39

Sousa, Miguel Angelo de Abreu de, Edson Lemos Horta, Sergio Takeo Kofuji, and Emilio Del-Moral-Hernandez. "Architecture Analysis of an FPGA-Based Hopfield Neural Network." Advances in Artificial Neural Systems 2014 (December 9, 2014): 1–10. http://dx.doi.org/10.1155/2014/602325.

Abstract:
Interconnections between electronic circuits and neural computation have been a strongly researched topic in the machine learning field in order to approach several practical requirements, including decreasing training and operation times in high performance applications and reducing cost, size, and energy consumption for autonomous or embedded developments. Field programmable gate array (FPGA) hardware shows some inherent features typically associated with neural networks, such as parallel processing, modular executions, and dynamic adaptation, and works on different types of FPGA-based neural networks have been presented in recent years. This paper aims to address different aspects of architectural characteristics analysis of a Hopfield Neural Network implemented in FPGA, such as maximum operating frequency and chip-area occupancy according to the network capacity. Also, the FPGA implementation methodology, which does not employ multipliers in the architecture developed for the Hopfield neural model, is presented in detail.
40

CARTLING, BO. "GENERATION OF ASSOCIATIVE PROCESSES IN A NEURAL NETWORK WITH REALISTIC FEATURES OF ARCHITECTURE AND UNITS." International Journal of Neural Systems 05, no. 03 (September 1994): 181–94. http://dx.doi.org/10.1142/s0129065794000207.

Abstract:
A recent neural network model of cortical associative memory incorporating neuronal adaptation by a simplified description of its underlying ionic mechanisms is extended towards more realistic network units and architecture. Excitatory units correspond to groups of adapting pyramidal neurons and inhibitory units to groups of nonadapting interneurons. The network architecture is formed from pairs of one pyramidal and one interneuron unit each with inhibitory connections within and excitatory connections between pairs. The degree of adaptability of the pyramidal units controls the character of the network dynamics. An intermediate adaptability generates limit cycles of transitions between stored patterns and regulates oscillation frequencies in the range of theta rhythms observed in the brain. In particular, neuronal adaptation can impose a direction of transitions between overlapping patterns also in a symmetrically connected network. The model permits a detailed analysis of the transition mechanisms. Temporal sequences of patterns thus formed may constitute parts of associative processes, such as recall of stored sequences or search of pattern subspaces. As a special case, neuronal adaptation can accomplish pattern segmentation by which overlapping patterns are temporally resolved. The type of limit cycles produced by neuronal adaptation may also be of significance for central pattern generators, also for networks involving motor neurons. The applied learning rule of Hebbian type is compared to a modified version also common in neural network modelling. It is also shown that the dependence of the network dynamic behaviour on neuronal adaptability, from fixed point attractors at weak adaptability towards more complex dynamics of limit cycles and chaos at strong adaptability, agrees with that recently observed in a more abstract version of the model. The present description of neuronal adaptation is compared to models based on dynamic firing thresholds.
41

Rivera-Rovelo, Jorge, and Eduardo Bayro-Corrochano. "Surface Approximation using Growing Self-Organizing Nets and Gradient Information." Applied Bionics and Biomechanics 4, no. 3 (2007): 125–36. http://dx.doi.org/10.1155/2007/502679.

Abstract:
In this paper we show how to improve the performance of two self-organizing neural networks used to approximate the shape of a 2D or 3D object by incorporating gradient information in the adaptation stage. The methods are based on the growing versions of the Kohonen's map and the neural gas network. Also, we show that in the adaptation stage the network utilizes efficient transformations, expressed as versors in the conformal geometric algebra framework, which build the shape of the object independent of its position in space (coordinate free). Our algorithms were tested with several images, including medical images (CT and MR images). We include also some examples for the case of 3D surface estimation.
42

Puga-Guzmán, Sergio A., Carlos Aguilar-Avelar, Javier Moreno-Valenzuela, and Víctor Santibáñez. "Tracking of periodic oscillations in an underactuated system via adaptive neural networks." Journal of Low Frequency Noise, Vibration and Active Control 37, no. 1 (January 18, 2018): 128–43. http://dx.doi.org/10.1177/1461348417752988.

Abstract:
In this paper, the tracking control of periodic oscillations in an underactuated mechanical system is discussed. The proposed scheme is derived from the feedback linearization control technique and adaptive neural networks are used to estimate the unknown dynamics and to compensate uncertainties. The proposed neural network-based controller is applied to the Furuta pendulum, which is a nonlinear and nonminimum phase underactuated mechanical system with two degrees of freedom. The new neural network-based controller is experimentally compared with respect to its model-based version. Results indicated that the proposed neural algorithm performs better than the model-based controller, showing that the real-time adaptation of the neural network weights successfully estimates the unknown dynamics and compensates uncertainties in the experimental platform.
43

Hou, Wen-Juan, and Bamfa Ceesay. "Exploring the Adaptation of Recurrent Neural Network Approaches for Extracting Drug–Drug Interactions from Biomedical Text." International Journal of Machine Learning and Computing 11, no. 4 (August 2021): 267–73. http://dx.doi.org/10.18178/ijmlc.2021.11.4.1046.

Abstract:
Information extraction (IE) is the process of automatically identifying structured information from unstructured or partially structured text. IE processes can involve several activities, such as named entity recognition, event extraction, relationship discovery, and document classification, with the overall goal of translating text into a more structured form. Information on the changes in the effect of a drug, when taken in combination with a second drug, is known as a drug–drug interaction (DDI). DDIs can delay, decrease, or enhance the absorption of drugs and thus decrease or increase their efficacy or cause adverse effects. Recent research has shown several adaptations of recurrent neural networks (RNNs) for text. In this study, we highlight significant challenges of using RNNs in biomedical text processing and propose automatic extraction of DDIs aimed at overcoming some of those challenges. Our results show that the system is competitive against other systems for the task of extracting DDIs.
44

BOURBAKIS, N., P. KAKUMANU, S. MAKROGIANNIS, R. BRYLL, and S. PANCHANATHAN. "NEURAL NETWORK APPROACH FOR IMAGE CHROMATIC ADAPTATION FOR SKIN COLOR DETECTION." International Journal of Neural Systems 17, no. 01 (February 2007): 1–12. http://dx.doi.org/10.1142/s0129065707000920.

Abstract:
The goal of image chromatic adaptation is to remove the effect of illumination and to obtain color data that reflects precisely the physical contents of the scene. We present in this paper an approach to image chromatic adaptation using Neural Networks (NN), with application to detecting and adapting to human skin color. The NN is trained on randomly chosen color images containing human subjects under various illumination conditions, thereby enabling the model to dynamically adapt to changing illumination. The proposed network directly predicts the illuminant estimate in the image so as to adapt to human skin color. A comparison of our method with the Gray World, White Patch, and NN on White Patch methods for skin color stabilization is presented. The skin regions in the NN-stabilized images are successfully detected using a computationally inexpensive thresholding operation. We also present results on detecting skin regions on a data set of test images. The results are promising and suggest a new approach for adapting human skin color using neural networks.
45

Westendorff, Stephanie, Shenbing Kuang, Bahareh Taghizadeh, Opher Donchin, and Alexander Gail. "Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model." Journal of Neurophysiology 113, no. 7 (April 2015): 2360–75. http://dx.doi.org/10.1152/jn.00483.2014.

Abstract:
Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement (“jump”) consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation.
46

Zhang, Byoung-Tak, Peter Ohm, and Heinz Mühlenbein. "Evolutionary Induction of Sparse Neural Trees." Evolutionary Computation 5, no. 2 (June 1997): 213–36. http://dx.doi.org/10.1162/evco.1997.5.2.213.

Abstract:
This paper is concerned with the automatic induction of parsimonious neural networks. In contrast to other program induction situations, network induction entails parametric learning as well as structural adaptation. We present a novel representation scheme called neural trees that allows efficient learning of both network architectures and parameters by genetic search. A hybrid evolutionary method is developed for neural tree induction that combines genetic programming and the breeder genetic algorithm under the unified framework of the minimum description length principle. The method is successfully applied to the induction of higher order neural trees while still keeping the resulting structures sparse to ensure good generalization performance. Empirical results are provided on two chaotic time series prediction problems of practical interest.
47

Tunik, Eugene, Paul J. Schmitt, and Scott T. Grafton. "BOLD Coherence Reveals Segregated Functional Neural Interactions When Adapting to Distinct Torque Perturbations." Journal of Neurophysiology 97, no. 3 (March 2007): 2107–20. http://dx.doi.org/10.1152/jn.00405.2006.

Abstract:
In the natural world, we experience and adapt to multiple extrinsic perturbations. This poses a challenge to neural circuits in discriminating between different context-appropriate responses. Using event-related fMRI, we characterized the neural dynamics involved in this process by randomly delivering a position- or velocity-dependent torque perturbation to subjects’ arms during a target-capture task. Each perturbation was color-cued during movement preparation to provide contextual information. Although trajectories differed between perturbations, subjects significantly reduced error under both conditions. This was paralleled by reduced BOLD signal in the right dentate nucleus, the left sensorimotor cortex, and the left intraparietal sulcus. Trials included “NoGo” conditions to dissociate activity related to preparation from execution and adaptation. Subsequent analysis identified perturbation-specific neural processes underlying preparation (“NoGo”) and adaptation (“Go”) early and late into learning. Between-perturbation comparisons of BOLD magnitude revealed negligible differences for both preparation and adaptation trials. However, a network-level analysis of BOLD coherence revealed that by late learning, response preparation (“NoGo”) was attributed to a relative focusing of coherence within cortical and basal ganglia networks in both perturbation conditions, demonstrating a common network interaction for establishing arbitrary visuomotor associations. Conversely, late-learning adaptation (“Go”) was attributed to a focusing of BOLD coherence between a cortical–basal ganglia network in the viscous condition and between a cortical–cerebellar network in the positional condition. Our findings demonstrate that trial-to-trial acquisition of two distinct adaptive responses is attributed not to anatomically segregated regions, but to differential functional interactions within common sensorimotor circuits.
48

Wang, Xiaoqing, and Xiangjun Wang. "Unsupervised Domain Adaptation with Coupled Generative Adversarial Autoencoders." Applied Sciences 8, no. 12 (December 7, 2018): 2529. http://dx.doi.org/10.3390/app8122529.

Abstract:
When large-scale annotated data are not available for certain image classification tasks, training a deep convolutional neural network model becomes challenging. Some recent domain adaptation methods try to solve this problem using generative adversarial networks and have achieved promising results. However, these methods are based on a shared latent space assumption and they do not consider the situation when shared high level representations in different domains do not exist or are not ideal as they assumed. To overcome this limitation, we propose a neural network structure called coupled generative adversarial autoencoders (CGAA) that allows a pair of generators to learn the high-level differences between two domains by sharing only part of the high-level layers. Additionally, by introducing a class consistent loss calculated by a stand-alone classifier into the generator optimization, our model is able to generate class invariant style-transferred images suitable for classification tasks in domain adaptation. We apply CGAA to several domain transferred image classification scenarios including several benchmark datasets. Experiment results have shown that our method can achieve state-of-the-art classification results.
49

SCHNEIDEWIND, NORMAN. "APPLYING NEURAL NETWORKS TO SOFTWARE RELIABILITY ASSESSMENT." International Journal of Reliability, Quality and Safety Engineering 17, no. 04 (August 2010): 313–29. http://dx.doi.org/10.1142/s0218539310003834.

Abstract:
We adapt concepts from the field of neural networks to assess the reliability of software, employing cumulative failures, reliability, remaining failures, and time to failure metrics. In addition, the risk of not achieving reliability, remaining failure, and time to failure goals are assessed. The purpose of the assessment is to compare a criterion, derived from a neural network model, for estimating the parameters of software reliability metrics, with the method of maximum likelihood estimation. To our surprise the neural network method proved superior for all the reliability metrics that were assessed by virtue of yielding lower prediction error and risk. We also found that considerable adaptation of the neural network model was necessary to be meaningful for our application – only inputs, functions, neurons, weights, activation units, and outputs were required to characterize our application.