Journal articles on the topic "Neural network adaptation"

See the 50 best journal articles for research on the topic "Neural network adaptation".


1

Hylton, Todd. "Thermodynamic Neural Network". Entropy 22, no. 3 (February 25, 2020): 256. http://dx.doi.org/10.3390/e22030256.

Abstract:
A thermodynamically motivated neural network model is described that self-organizes to transport charge associated with internal and external potentials while in contact with a thermal reservoir. The model integrates techniques for rapid, large-scale, reversible, conservative equilibration of node states and slow, small-scale, irreversible, dissipative adaptation of the edge states as a means to create multiscale order. All interactions in the network are local and the network structures can be generic and recurrent. Isolated networks show multiscale dynamics, and externally driven networks evolve to efficiently connect external positive and negative potentials. The model integrates concepts of conservation, potentiation, fluctuation, dissipation, adaptation, equilibration and causation to illustrate the thermodynamic evolution of organization in open systems. A key conclusion of the work is that the transport and dissipation of conserved physical quantities drives the self-organization of open thermodynamic systems.
2

Vreeswijk, C. van, and D. Hansel. "Patterns of Synchrony in Neural Networks with Spike Adaptation". Neural Computation 13, no. 5 (May 1, 2001): 959–92. http://dx.doi.org/10.1162/08997660151134280.

Abstract:
We study the emergence of synchronized burst activity in networks of neurons with spike adaptation. We show that networks of tonically firing adapting excitatory neurons can evolve to a state where the neurons burst in a synchronized manner. The mechanism leading to this burst activity is analyzed in a network of integrate-and-fire neurons with spike adaptation. The dependence of this state on the different network parameters is investigated, and it is shown that this mechanism is robust against inhomogeneities, sparseness of the connectivity, and noise. In networks of two populations, one excitatory and one inhibitory, we show that decreasing the inhibitory feedback can cause the network to switch from a tonically active, asynchronous state to the synchronized bursting state. Finally, we show that the same mechanism also causes synchronized burst activity in networks of more realistic conductance-based model neurons.
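The mechanism summarized above can be illustrated with a minimal single-neuron sketch: a leaky integrate-and-fire neuron with a spike-triggered adaptation current fires at progressively longer intervals under constant drive. The parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def simulate_adapting_lif(t_max=500.0, dt=0.1, i_ext=2.0, tau_m=10.0,
                          tau_a=100.0, g_inc=0.2, v_thresh=1.0):
    """Leaky integrate-and-fire neuron with a spike-triggered
    adaptation current a; returns the spike times (in ms)."""
    v, a, spikes = 0.0, 0.0, []
    for step in range(int(t_max / dt)):
        v += dt / tau_m * (-v + i_ext - a)  # leak + drive - adaptation current
        a += dt / tau_a * (-a)              # adaptation decays slowly
        if v >= v_thresh:
            v = 0.0                         # reset after a spike
            a += g_inc                      # each spike strengthens adaptation
            spikes.append(step * dt)
    return np.array(spikes)

spikes = simulate_adapting_lif()
isis = np.diff(spikes)  # inter-spike intervals lengthen as adaptation builds
```

In the paper this slow negative feedback, embedded in a recurrently coupled excitatory population, is what turns tonic firing into synchronized bursting.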
3

Xie, Xurong, Xunying Liu, Tan Lee and Lan Wang. "Bayesian Learning for Deep Neural Network Adaptation". IEEE/ACM Transactions on Audio, Speech, and Language Processing 29 (2021): 2096–110. http://dx.doi.org/10.1109/taslp.2021.3084072.
4

Patre, P. M., S. Bhasin, Z. D. Wilcox and W. E. Dixon. "Composite Adaptation for Neural Network-Based Controllers". IEEE Transactions on Automatic Control 55, no. 4 (April 2010): 944–50. http://dx.doi.org/10.1109/tac.2010.2041682.
5

Yu, D. L., and T. K. Chang. "Adaptation of diagonal recurrent neural network model". Neural Computing and Applications 14, no. 3 (March 23, 2005): 189–97. http://dx.doi.org/10.1007/s00521-004-0453-9.
6

Joty, Shafiq, Nadir Durrani, Hassan Sajjad and Ahmed Abdelali. "Domain adaptation using neural network joint model". Computer Speech & Language 45 (September 2017): 161–79. http://dx.doi.org/10.1016/j.csl.2016.12.006.
7

Denker, John S. "Neural network models of learning and adaptation". Physica D: Nonlinear Phenomena 22, no. 1-3 (October 1986): 216–32. http://dx.doi.org/10.1016/0167-2789(86)90242-3.
8

YAEGER, LARRY S. "IDENTIFYING NEURAL NETWORK TOPOLOGIES THAT FOSTER DYNAMICAL COMPLEXITY". Advances in Complex Systems 16, no. 02n03 (May 2013): 1350032. http://dx.doi.org/10.1142/s021952591350032x.

Abstract:
We use an ecosystem simulator capable of evolving arbitrary neural network topologies to explore the relationship between an information theoretic measure of the complexity of neural dynamics and several graph theoretical metrics calculated for the underlying network topologies. Evolutionary trends confirm and extend previous results demonstrating an evolutionary selection for complexity and small-world network properties during periods of behavioral adaptation. The resultant mapping of the space of network topologies occupied by the most complex networks yields new insights into the relationship between network structure and function. The highest complexity networks are found within limited numerical ranges of clustering coefficient, characteristic path length, small-world index, and global efficiency. The widths of these ranges vary from quite narrow to modest, and provide a guide to the most productive regions of the space of neural topologies in which to search for complexity. Our demonstration that evolution selects for complex dynamics and small-world networks helps explain biological evidence for these trends and provides evidence for selection of these characteristics based purely on network function—with no physical constraints on network structure—thus suggesting that functional and structural evolutionary pressures cooperate to produce brains optimized for adaptation to a complex, variable world.
9

Ziemke, Tom. "Radar Image Segmentation Using Self-Adapting Recurrent Networks". International Journal of Neural Systems 08, no. 01 (February 1997): 47–54. http://dx.doi.org/10.1142/s0129065797000070.

Abstract:
This paper presents a novel approach to the segmentation and integration of (radar) images using a second-order recurrent artificial neural network architecture consisting of two sub-networks: a function network that classifies radar measurements into four different categories of objects in sea environments (water, oil spills, land and boats), and a context network that dynamically computes the function network's input weights. It is shown that in experiments (using simulated radar images) this mechanism outperforms conventional artificial neural networks since it allows the network to learn to solve the task through a dynamic adaptation of its classification function based on its internal state closely reflecting the current context.
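The paper's central idea, a context network that dynamically computes the function network's input weights, can be sketched as follows. The sizes, activations, and weight values here are hypothetical illustrations, not Ziemke's architecture:

```python
import numpy as np

def context_modulated_classify(x, context, W_ctx, b_ctx):
    """Second-order connection sketch: a context network maps the current
    internal state to the *weights* of the function network, which then
    classifies the input x using those dynamically computed weights."""
    w_fn = np.tanh(W_ctx @ context + b_ctx)   # weights are computed, not stored
    return 1.0 / (1.0 + np.exp(-(w_fn @ x)))  # function network's output

# the same measurement is classified differently in different contexts
W_ctx, b_ctx = np.ones((3, 1)), np.zeros(3)
x = np.ones(3)
out_a = context_modulated_classify(x, np.array([1.0]), W_ctx, b_ctx)
out_b = context_modulated_classify(x, np.array([-1.0]), W_ctx, b_ctx)
```

This is what lets the network adapt its classification function to the current context rather than learning a single fixed mapping.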
10

Li, Xiaofeng, Suying Xiang, Pengfei Zhu and Min Wu. "Establishing a Dynamic Self-Adaptation Learning Algorithm of the BP Neural Network and Its Applications". International Journal of Bifurcation and Chaos 25, no. 14 (December 30, 2015): 1540030. http://dx.doi.org/10.1142/s0218127415400301.

Abstract:
In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence speed, a tendency to fall into local minima, poor generalization ability, and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward to improve its performance. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis, and self-adaptive modeling, and hence can effectively solve the problems of selecting the structural parameters, initial connection weights and thresholds, and learning rates of the BP neural network. The new algorithm not only reduces human intervention, optimizes the topological structure of BP neural networks, and improves network generalization ability, but also accelerates the convergence speed of a network, avoids trapping into local minima, and enhances network adaptation and prediction ability. The dynamic self-adaptive learning algorithm is used to forecast the total retail sales of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP network algorithm in prediction accuracy and time consumption, which shows the feasibility and effectiveness of the new algorithm.
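The paper's full pipeline (PCA, PSO, correlation analysis) is beyond a short sketch, but the flavor of a self-adapting learning rate can be shown with the classic bold-driver rule, a generic heuristic and not the authors' algorithm: grow the step after a loss decrease, shrink it after an increase.

```python
def bold_driver_minimize(f, grad, x, lr=0.4, up=1.1, down=0.5, steps=50):
    """Gradient descent with a self-adapting learning rate: accept and
    enlarge the step after an improvement, reject and shrink it otherwise."""
    fx = f(x)
    for _ in range(steps):
        x_try = x - lr * grad(x)
        f_try = f(x_try)
        if f_try < fx:           # improvement: accept the step and speed up
            x, fx = x_try, f_try
            lr *= up
        else:                    # overshoot: reject the step and slow down
            lr *= down
    return x

# minimize f(x) = x^2 starting from x = 3
x_min = bold_driver_minimize(lambda x: x * x, lambda x: 2 * x, x=3.0)
```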
11

Zhao, S., S. Saha and X. X. Zhu. "GRAPH NEURAL NETWORK BASED OPEN-SET DOMAIN ADAPTATION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2022 (May 31, 2022): 1407–13. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2022-1407-2022.

Abstract:
Owing to the presence of many sensors and geographic/seasonal variations, domain adaptation is an important topic in remote sensing. However, most domain adaptation methods focus on close-set adaptation, i.e., they assume that the source and target domains share the same label space. This assumption often does not hold in practice, as there can be previously unseen classes in the target domain. To circumvent this issue, we propose a method for open set domain adaptation, where the target domain contains additional unknown classes that are not present in the source domain. To improve the model’s generalization ability, we propose a Progressive Weighted Graph Learning (PWGL) method. The proposed method exploits graph neural networks in aggregating similar samples across source and target domains. The progressive strategy gradually separates the unknown samples apart from known samples and upgrades the source domain by incorporating the pseudolabeled known target samples. The weighted adversarial learning promotes the alignment of known classes across different domains and rejects the unknown class. The experiments performed on a multi-city dataset show the effectiveness of the proposed approach.
12

GOLTSEV, ALEXANDER, and DONALD C. WUNSCH. "GENERALIZATION OF FEATURES IN THE ASSEMBLY NEURAL NETWORKS". International Journal of Neural Systems 14, no. 01 (February 2004): 39–56. http://dx.doi.org/10.1142/s0129065704001838.

Abstract:
The purpose of the paper is an experimental study of the formation of class descriptions, taking place during learning, in assembly neural networks. The assembly neural network is artificially partitioned into several sub-networks according to the number of classes that the network has to recognize. The features extracted from input data are represented in neural column structures of the sub-networks. Hebbian neural assemblies are formed in the column structure of the sub-networks by weight adaptation. A specific class description is formed in each sub-network of the assembly neural network due to intersections between the neural assemblies. The process of formation of class descriptions in the sub-networks is interpreted as feature generalization. A set of special experiments is performed to study this process, on a task of character recognition using the MNIST database.
13

Hu, Brian, Marina E. Garrett, Peter A. Groblewski, Douglas R. Ollerenshaw, Jiaqi Shang, Kate Roll, Sahar Manavi, Christof Koch, Shawn R. Olsen and Stefan Mihalas. "Adaptation supports short-term memory in a visual change detection task". PLOS Computational Biology 17, no. 9 (September 17, 2021): e1009246. http://dx.doi.org/10.1371/journal.pcbi.1009246.

Abstract:
The maintenance of short-term memories is critical for survival in a dynamically changing world. Previous studies suggest that this memory can be stored in the form of persistent neural activity or using a synaptic mechanism, such as with short-term plasticity. Here, we compare the predictions of these two mechanisms to neural and behavioral measurements in a visual change detection task. Mice were trained to respond to changes in a repeated sequence of natural images while neural activity was recorded using two-photon calcium imaging. We also trained two types of artificial neural networks on the same change detection task as the mice. Following fixed pre-processing using a pretrained convolutional neural network, either a recurrent neural network (RNN) or a feedforward neural network with short-term synaptic depression (STPNet) was trained to the same level of performance as the mice. While both networks are able to learn the task, the STPNet model contains units whose activity are more similar to the in vivo data and produces errors which are more similar to the mice. When images are omitted, an unexpected perturbation which was absent during training, mice often do not respond to the omission but are more likely to respond to the subsequent image. Unlike the RNN model, STPNet produces a similar pattern of behavior. These results suggest that simple neural adaptation mechanisms may serve as an important bottom-up memory signal in this task, which can be used by downstream areas in the decision-making process.
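The short-term synaptic depression underlying STPNet can be sketched with a single depressing synapse whose resource depletes on each presentation and recovers during omissions. The update rule and parameters below are an illustrative Tsodyks-Markram-style sketch, not the paper's implementation:

```python
def depressing_synapse(inputs, u=0.5, tau_rec=5.0):
    """Synaptic resource x in [0, 1]: each presented image spends a fraction
    u of the remaining resource; the resource recovers toward 1 between
    presentations. Returns the effective (depressed) response each step."""
    x = 1.0
    responses = []
    for s in inputs:               # s = 1 if the image is shown, 0 if omitted
        r = u * x * s              # response scales with available resource
        x -= u * x * s             # a presentation depletes the resource
        x += (1.0 - x) / tau_rec   # partial recovery each time step
        responses.append(r)
    return responses

# repeated images depress the response; an omission lets the resource
# recover, so the image after the omission evokes a larger response again
resp = depressing_synapse([1, 1, 1, 1, 0, 1])
```

This reproduces, in miniature, the behavioral signature noted in the abstract: little response to the omission itself, but an enhanced response to the subsequent image.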
14

Maksutova, K., N. Saparkhojayev and Dusmat Zhamangarin. "DEVELOPMENT OF AN ONTOLOGICAL MODEL OF DEEP LEARNING NEURAL NETWORKS". Bulletin D. Serikbayev of EKTU, no. 1 (March 2024): 190–201. http://dx.doi.org/10.51885/1561-4212_2024_1_190.

Abstract:
This research paper examines the challenges and prospects associated with the integration of artificial neural networks and knowledge bases. The focus is on leveraging this integration to address practical problems. The paper explores the development, training, and integration of artificial neural networks, emphasizing their adaptation to knowledge bases. This adaptation involves processes such as integration, communication, representation of ontological structures, and interpretation by the knowledge base of the artificial neural network's representation through input and output. The paper also delves into the direction of establishing an intellectual environment conducive to the development, training, and integration of adapted artificial neural networks with knowledge bases. The knowledge base embedded in an artificial neural network is constructed using a homogeneous semantic network, and knowledge processing employs a multi-agent approach. The representation of artificial neural networks and their specifications within a unified semantic model of knowledge representation is detailed, encompassing text-based specifications in the language of knowledge representation with theoretical semantics. The models shared with the knowledge base include dynamic and other types that vary in their capabilities for knowledge representation. Furthermore, the paper conducts an analysis of approaches to creating artificial neural networks across various libraries of the high-level programming language Python. It explores techniques for developing artificial neural networks within the Python development environment, investigating the key features and functions of these libraries. A comparative analysis of neural networks created in object-oriented programming languages is provided, along with the development of an ontological model for deep learning neural networks.
15

Hsu, Chun-Fei, Ping-Zong Lin, Tsu-Tian Lee and Chi-Hsu Wang. "Adaptive asymmetric fuzzy neural network controller design via network structuring adaptation". Fuzzy Sets and Systems 159, no. 20 (October 2008): 2627–49. http://dx.doi.org/10.1016/j.fss.2008.01.034.
16

Khaikine, Maxim, and Klaus Holthausen. "A General Probability Estimation Approach for Neural Computation". Neural Computation 12, no. 2 (February 1, 2000): 433–50. http://dx.doi.org/10.1162/089976600300015862.

Abstract:
We describe an analytical framework for the adaptation of neural systems that adapt their internal structure on the basis of subjective probabilities constructed by computation of randomly received input signals. A principled approach is provided with the key property that it defines a probability density model that allows studying the convergence of the adaptation process. In particular, the derived algorithm can be applied to approximation problems such as the estimation of probability densities or the recognition of regression functions. These approximation algorithms can be easily extended to higher-dimensional cases. Certain neural network models can be derived from our approach (e.g., topological feature maps and associative networks).
17

Wang, Miao, Xu Yang, Yunchong Qian, Yunlin Lei, Jian Cai, Ziyi Huan, Xialv Lin and Hao Dong. "Adaptive Neural Network Structure Optimization Algorithm Based on Dynamic Nodes". Current Issues in Molecular Biology 44, no. 2 (February 7, 2022): 817–32. http://dx.doi.org/10.3390/cimb44020056.

Abstract:
Large-scale artificial neural networks have many redundant structures, making the network fall into the issue of local optimization and extended training time. Moreover, existing neural network topology optimization algorithms have the disadvantage of many calculations and complex network structure modeling. We propose a Dynamic Node-based neural network Structure optimization algorithm (DNS) to handle these issues. DNS consists of two steps: the generation step and the pruning step. In the generation step, the network generates hidden layers layer by layer until accuracy reaches the threshold. Then, the network uses a pruning algorithm based on Hebb’s rule or Pearson’s correlation for adaptation in the pruning step. In addition, we combine genetic algorithm to optimize DNS (GA-DNS). Experimental results show that compared with traditional neural network topology optimization algorithms, GA-DNS can generate neural networks with higher construction efficiency, lower structure complexity, and higher classification accuracy.
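The Pearson-correlation pruning step can be sketched as follows: drop one unit from every pair of hidden units whose activations are nearly collinear. This is an illustrative reading of the pruning criterion, not the authors' code:

```python
import numpy as np

def prune_correlated_units(activations, threshold=0.95):
    """Given an (n_samples, n_units) matrix of hidden-unit activations,
    drop one unit from every pair whose absolute Pearson correlation
    exceeds the threshold, and return the indices of the units kept."""
    corr = np.corrcoef(activations, rowvar=False)
    keep = []
    for j in range(corr.shape[0]):
        # keep unit j only if it is not redundant with an already-kept unit
        if all(abs(corr[j, k]) < threshold for k in keep):
            keep.append(j)
    return keep

rng = np.random.default_rng(0)
a = rng.normal(size=(200, 1))
b = rng.normal(size=(200, 1))
# unit 2 is a near-copy of unit 0, so it should be pruned as redundant
acts = np.hstack([a, b, a + 0.01 * rng.normal(size=(200, 1))])
kept = prune_correlated_units(acts)
```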
18

de Sousa, Celso, and Elder Moreira Hermerly. "ADAPTIVE CONTROL OF MOBILE ROBOTS USING A NEURAL NETWORK". International Journal of Neural Systems 11, no. 03 (June 2001): 211–18. http://dx.doi.org/10.1142/s0129065701000643.

Abstract:
A neural-network-based control approach for mobile robots is proposed. The weight adaptation is performed online, without previous learning. Several possible situations in robot navigation are considered, including uncertainties in the model and the presence of disturbances. Weight adaptation laws are presented, as well as simulation results.
19

Marković, Dimitrije, and Claudius Gros. "Intrinsic Adaptation in Autonomous Recurrent Neural Networks". Neural Computation 24, no. 2 (February 2012): 523–40. http://dx.doi.org/10.1162/neco_a_00232.

Abstract:
A massively recurrent neural network responds on one side to input stimuli and is autonomously active, on the other side, in the absence of sensory inputs. Stimuli and information processing depend crucially on the qualia of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaption considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flows, which are quite insensitive to external stimuli, interceded by chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.
20

Vinken, K., X. Boix and G. Kreiman. "Incorporating intrinsic suppression in deep neural networks captures dynamics of adaptation in neurophysiology and perception". Science Advances 6, no. 42 (October 2020): eabd4205. http://dx.doi.org/10.1126/sciadv.abd4205.

Abstract:
Adaptation is a fundamental property of sensory systems that can change subjective experiences in the context of recent information. Adaptation has been postulated to arise from recurrent circuit mechanisms or as a consequence of neuronally intrinsic suppression. However, it is unclear whether intrinsic suppression by itself can account for effects beyond reduced responses. Here, we test the hypothesis that complex adaptation phenomena can emerge from intrinsic suppression cascading through a feedforward model of visual processing. A deep convolutional neural network with intrinsic suppression captured neural signatures of adaptation including novelty detection, enhancement, and tuning curve shifts, while producing aftereffects consistent with human perception. When adaptation was trained in a task where repeated input affects recognition performance, an intrinsic mechanism generalized better than a recurrent neural network. Our results demonstrate that feedforward propagation of intrinsic suppression changes the functional state of the network, reproducing key neurophysiological and perceptual properties of adaptation.
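The intrinsic suppression mechanism can be sketched for a single feedforward unit whose response is reduced by a state that integrates its own recent activity. The update rule and constants are illustrative, not the paper's model:

```python
def intrinsically_suppressed_unit(drives, alpha=0.3, beta=0.7):
    """Unit response r_t = max(drive_t - s_t, 0), where the suppression
    state integrates the unit's own recent responses:
        s_{t+1} = beta * s_t + alpha * r_t
    (an illustrative update; the paper's exact formulation may differ)."""
    s = 0.0
    rs = []
    for d in drives:
        r = max(d - s, 0.0)     # current response, reduced by suppression
        s = beta * s + alpha * r  # suppression tracks recent activity
        rs.append(r)
    return rs

# a repeated stimulus drives the unit less and less (adaptation), while a
# novel, stronger stimulus still evokes a large response (novelty detection)
rs = intrinsically_suppressed_unit([1, 1, 1, 1, 2])
```

Cascading such units through a feedforward hierarchy is what the paper shows can reproduce richer adaptation phenomena than the single-unit response reduction alone.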
21

Zhu, Liqiang, Ying-Cheng Lai, Frank C. Hoppensteadt and Jiping He. "Probing Changes in Neural Interaction During Adaptation". Neural Computation 15, no. 10 (October 1, 2003): 2359–77. http://dx.doi.org/10.1162/089976603322362392.

Abstract:
A procedure is developed to probe the changes in the functional interactions among neurons in primary motor cortex of the monkey brain during adaptation. A monkey is trained to learn a new skill, moving its arm to reach a target under the influence of external perturbations. The spike trains of multiple neurons in the primary motor cortex are recorded simultaneously. We utilize the methodology of directed transfer function, derived from a class of linear stochastic models, to quantify the causal interactions between the neurons. We find that the coupling between the motor neurons tends to increase during the adaptation but return to the original level after the adaptation. Furthermore, there is evidence that adaptation tends to affect the topology of the neural network, despite the approximate conservation of the average coupling strength in the network before and after the adaptation.
22

Wu, Jian Hui, Guo Li Wang, Jing Wang and Yu Su. "BP Neural Network and Multiple Linear Regression in Acute Hospitalization Costs in the Comparative Study". Applied Mechanics and Materials 50-51 (February 2011): 959–63. http://dx.doi.org/10.4028/www.scientific.net/amm.50-51.959.

Abstract:
The BP neural network is an important component of artificial neural networks and has gradually become a branch of computational statistics. With characteristics such as large-scale parallel information processing and excellent self-adaptation and self-learning, the BP neural network has been used to solve complex nonlinear dynamic system prediction problems. The BP neural network does not need a precise mathematical model and makes no assumptions about the data itself. Its ability to handle nonlinear problems is stronger than that of traditional statistical methods. By contrasting the BP neural network with multiple linear regression, this article finds that the BP neural network has stronger fitting ability and more stable prediction performance, and may be further applied and promoted in the analysis and forecasting of continuous material factors.
23

Sharma, B. Lungsi, and Richard B. Wells. "A demonstration of using the model reference principle to develop the function-oriented adaptive pulse-coded neural network". SIMULATION 96, no. 2 (July 10, 2019): 207–19. http://dx.doi.org/10.1177/0037549719860587.

Abstract:
How can one design an adaptive pulsed neural network that is based on psycho-phenomenological foundations? In other words, how can one migrate the adaptive capability of a psychologically modeled neural network to a pulsed network? Neural networks that model psychological phenomena are at a larger scale than physiological models. There is a common presumption that pulse-coded neural network analogs to non-pulsing networks can be obtained by a simple mapping and scaling process of some sort. But the actual in vivo environment of pulse-coded neural network systems produces a much more diverse set of firing patterns. Thus, functional mapping from traditional neural network systems to pulse-coded neural network systems is much more challenging than has been presumed. This paper demonstrates that the employment of model reference adaptation as a method for applying scientific reduction is a powerful design tool for the development of a function-oriented adaptive pulse-coded neural network. The performance surface is empirically obtained by comparing the performance of the pulsed network to the non-pulsing network. Based on this surface, the adaptive algorithm is a combination of gain scheduling and steepest-descent method. Therefore, the adaptive property of the pulse-coded neural network is built upon a psycho-physiological foundation.
24

Siddikov, I. H., P. I. Kalandarov and D. B. Yadgarova. "Engineering Calculation And Algorithm Of Adaptation Of Parameters Of A Neuro-Fuzzy Controller". American Journal of Applied sciences 03, no. 09 (September 30, 2021): 41–49. http://dx.doi.org/10.37547/tajas/volume03issue09-06.

Abstract:
As part of the study, a control scheme with adaptation of the coefficients of a neuro-fuzzy regulator was implemented. The area difference method was used to train the network; it was improved by adding a rule base that allows choosing the optimal learning rate for individual neurons of the neural network. The neural network controller was applied as a superstructure of the PID controller in the process control scheme. The dynamic object can function in different modes; the technological process operates under different loading conditions and temperature setpoints. Through experiments, the power consumption and the amount of time required to maintain the same absorption process using a conventional PID controller and a neural-network controller were evaluated. It was concluded that the neuro-fuzzy controller with the superstructure reduced the transient time by 19%.
25

Save, Ashwini, and Narendra Shekokar. "Cross Domain Adaptation using A Novel Convolution Neural Network". International Journal of Engineering Research and Technology 13, no. 9 (September 30, 2020): 2230. http://dx.doi.org/10.37624/ijert/13.9.2020.2230-2238.
26

Pan, Yongping, Qin Gao and Haoyong Yu. "Fast and low-frequency adaptation in neural network control". IET Control Theory & Applications 8, no. 17 (November 20, 2014): 2062–69. http://dx.doi.org/10.1049/iet-cta.2014.0449.
27

He, Y., and U. Cilingirogu. "A charge-based on-chip adaptation Kohonen neural network". IEEE Transactions on Neural Networks 4, no. 3 (May 1993): 462–69. http://dx.doi.org/10.1109/72.217189.
28

Furui, Sadaoki, Daisuke Itoh and Zhipeng Zhang. "Neural-network-based HMM adaptation for noisy speech recognition". Acoustical Science and Technology 24, no. 2 (2003): 69–75. http://dx.doi.org/10.1250/ast.24.69.
29

Shi, Yangyang, Martha Larson and Catholijn M. Jonker. "Recurrent neural network language model adaptation with curriculum learning". Computer Speech & Language 33, no. 1 (September 2015): 136–54. http://dx.doi.org/10.1016/j.csl.2014.11.004.
30

Li, Xudong, Jianhua Zheng, Mingtao Li, Wenzhen Ma and Yang Hu. "Frequency-Domain Fusing Convolutional Neural Network: A Unified Architecture Improving Effect of Domain Adaptation for Fault Diagnosis". Sensors 21, no. 2 (January 10, 2021): 450. http://dx.doi.org/10.3390/s21020450.

Abstract:
In recent years, transfer learning has been widely applied in fault diagnosis for solving the problem of inconsistent distribution of the original training dataset and the online-collecting testing dataset. In particular, the domain adaptation method can solve the problem of the unlabeled testing dataset in transfer learning. Moreover, Convolutional Neural Network (CNN) is the most widely used network among existing domain adaptation approaches due to its powerful feature extraction capability. However, network designing is too empirical, and there is no network designing principle from the frequency domain. In this paper, we propose a unified convolutional neural network architecture from a frequency domain perspective for a domain adaptation named Frequency-domain Fusing Convolutional Neural Network (FFCNN). The method of FFCNN contains two parts, frequency-domain fusing layer and feature extractor. The frequency-domain fusing layer uses convolution operations to filter signals at different frequency bands and combines them into new input signals. These signals are input to the feature extractor to extract features and make domain adaptation. We apply FFCNN for three domain adaptation methods, and the diagnosis accuracy is improved compared to the typical CNN.
31

Li, Xudong, Jianhua Zheng, Mingtao Li, Wenzhen Ma e Yang Hu. "Frequency-Domain Fusing Convolutional Neural Network: A Unified Architecture Improving Effect of Domain Adaptation for Fault Diagnosis". Sensors 21, n.º 2 (10 de janeiro de 2021): 450. http://dx.doi.org/10.3390/s21020450.

Texto completo da fonte
Resumo:
In recent years, transfer learning has been widely applied in fault diagnosis for solving the problem of inconsistent distribution of the original training dataset and the online-collecting testing dataset. In particular, the domain adaptation method can solve the problem of the unlabeled testing dataset in transfer learning. Moreover, Convolutional Neural Network (CNN) is the most widely used network among existing domain adaptation approaches due to its powerful feature extraction capability. However, network designing is too empirical, and there is no network designing principle from the frequency domain. In this paper, we propose a unified convolutional neural network architecture from a frequency domain perspective for a domain adaptation named Frequency-domain Fusing Convolutional Neural Network (FFCNN). The method of FFCNN contains two parts, frequency-domain fusing layer and feature extractor. The frequency-domain fusing layer uses convolution operations to filter signals at different frequency bands and combines them into new input signals. These signals are input to the feature extractor to extract features and make domain adaptation. We apply FFCNN for three domain adaptation methods, and the diagnosis accuracy is improved compared to the typical CNN.
32

Ribar, Srdjan, Vojislav V. Mitic e Goran Lazovic. "Neural Networks Application on Human Skin Biophysical Impedance Characterizations". Biophysical Reviews and Letters 16, n.º 01 (6 de fevereiro de 2021): 9–19. http://dx.doi.org/10.1142/s1793048021500028.

Abstract:
Artificial neural networks (ANNs) are structures that perform input–output mapping. This mapping mimics the signal processing in biological neural networks. The basic element of a biological neural network is the neuron. Neurons receive input signals from other neurons or the environment, process them, and generate an output which serves as the input to other neurons of the network. Neurons can change their sensitivity to input signals, and each neuron applies a simple rule to process an input signal. Biological neural networks have the property that signals are processed through many parallel connections (massively parallel processing). The activity of all neurons in these parallel connections is summed and represents the output of the whole network. The main feature of biological neural networks is that changes in the sensitivity of the neurons lead to changes in the operation of the entire network. This is called adaptation and is correlated with the learning process of living organisms. In this paper, a set of artificial neural networks is used to classify human skin biophysical impedance data.
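The neuron model summarized above can be sketched in a few lines; the weights, activation choice, and sizes below are illustrative only:

```python
import numpy as np

def layer(inputs, W, b):
    """A layer of artificial neurons acting in parallel: each neuron
    forms a weighted sum of its inputs and applies a sigmoid rule."""
    return 1.0 / (1.0 + np.exp(-(W @ inputs + b)))

x = np.array([0.5, -1.0, 2.0])          # signals from other neurons / environment
W = np.array([[0.1, 0.4, -0.2],
              [0.3, -0.1, 0.2]])        # each row: one neuron's sensitivities
b = np.array([0.0, 0.1])
out = layer(x, W, b)                    # outputs feed further neurons
print(out.shape)  # (2,)

# "Adaptation": changing a neuron's sensitivities changes the whole mapping.
W_adapted = W + 0.1
print(np.allclose(layer(x, W_adapted, b), out))  # False: the mapping changed
```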
33

Yang, Guochun, Kai Wang, Weizhi Nan, Qi Li, Ya Zheng, Haiyan Wu e Xun Liu. "Distinct Brain Mechanisms for Conflict Adaptation within and across Conflict Types". Journal of Cognitive Neuroscience 34, n.º 3 (1 de fevereiro de 2022): 445–60. http://dx.doi.org/10.1162/jocn_a_01806.

Abstract:
Cognitive conflict, like other cognitive processes, shows the characteristic of adaptation: conflict effects are attenuated when immediately following a conflicting event, a phenomenon known as the conflict adaptation effect (CAE). One important aspect of the CAE is its sensitivity to the intertrial coherence of conflict type; that is, the behavioral CAE occurs only if consecutive trials are of the same conflict type. Although reliably observed behaviorally, the neural mechanisms underlying this phenomenon remain elusive. With a paradigm combining the classic Simon task and Stroop task, this fMRI study examined neural correlates of conflict adaptation both within and across conflict types. The results revealed that when the conflict type repeated (but not when it alternated), CAE-like neural activations were observed in the dorsal ACC, inferior frontal gyrus (IFG), superior parietal lobe, and so forth (i.e., regions within typical task-positive networks). In contrast, when the conflict type alternated (but not when it repeated), we found CAE-like neural deactivations in the left superior frontal gyrus (i.e., a region within the typical task-negative network). Network analyses suggested that the regions of ACC, IFG, superior parietal lobe, and superior frontal gyrus can be clustered into two antagonistic networks, and the ACC–IFG connection was associated with the within-type CAE. This evidence suggests that our adaptation to cognitive conflicts within a conflict type and across different types may rely on these two distinct neural mechanisms.
34

Tran, Vu, François Septier, Daisuke Murakami e Tomoko Matsui. "Spatial–Temporal Temperature Forecasting Using Deep-Neural-Network-Based Domain Adaptation". Atmosphere 15, n.º 1 (10 de janeiro de 2024): 90. http://dx.doi.org/10.3390/atmos15010090.

Abstract:
Accurate temperature forecasting is critical for various sectors, yet traditional methods struggle with complex atmospheric dynamics. Deep neural networks (DNNs), especially transformer-based DNNs, offer potential advantages, but face challenges with domain adaptation across different geographical regions. We evaluated the effectiveness of DNN-based domain adaptation for daily maximum temperature forecasting in experimental low-resource settings. We used an attention-based transformer deep learning architecture as the core forecasting framework and used kernel mean matching (KMM) for domain adaptation. Domain adaptation significantly improved forecasting accuracy in most experimental settings, thereby mitigating domain differences between source and target regions. Specifically, we observed that domain adaptation is more effective than exclusively training on a small amount of target-domain training data. This study reinforces the potential of using DNNs for temperature forecasting and underscores the benefits of domain adaptation using KMM. It also highlights the need for caution when using small amounts of target-domain data to avoid overfitting. Future research includes investigating strategies to minimize overfitting and to further probe the effect of various factors on model performance.
35

Alavash, Mohsen, Sarah Tune e Jonas Obleser. "Dynamic large-scale connectivity of intrinsic cortical oscillations supports adaptive listening in challenging conditions". PLOS Biology 19, n.º 10 (11 de outubro de 2021): e3001410. http://dx.doi.org/10.1371/journal.pbio.3001410.

Abstract:
In multi-talker situations, individuals adapt behaviorally to this listening challenge mostly with ease, but how do brain neural networks shape this adaptation? We here establish a long-sought link between large-scale neural communications in electrophysiology and behavioral success in the control of attention in difficult listening situations. In an age-varying sample of N = 154 individuals, we find that connectivity between intrinsic neural oscillations extracted from source-reconstructed electroencephalography is regulated according to the listener’s goal during a challenging dual-talker task. These dynamics occur as spatially organized modulations in power-envelope correlations of alpha and low-beta neural oscillations during approximately 2-s intervals most critical for listening behavior relative to resting-state baseline. First, left frontoparietal low-beta connectivity (16 to 24 Hz) increased during anticipation and processing of a spatial-attention cue before speech presentation. Second, posterior alpha connectivity (7 to 11 Hz) decreased during comprehension of competing speech, particularly around target-word presentation. Connectivity dynamics of these networks were predictive of individual differences in the speed and accuracy of target-word identification, respectively, but proved unconfounded by changes in neural oscillatory activity strength. Successful adaptation to a listening challenge thus latches onto two distinct yet complementary neural systems: a beta-tuned frontoparietal network enabling the flexible adaptation to attentive listening state and an alpha-tuned posterior network supporting attention to speech.
36

Nerrand, O., P. Roussel-Ragot, L. Personnaz, G. Dreyfus e S. Marcos. "Neural Networks and Nonlinear Adaptive Filtering: Unifying Concepts and New Algorithms". Neural Computation 5, n.º 2 (março de 1993): 165–99. http://dx.doi.org/10.1162/neco.1993.5.2.165.

Abstract:
The paper proposes a general framework that encompasses the training of neural networks and the adaptation of filters. We show that neural networks can be considered as general nonlinear filters that can be trained adaptively, that is, that can undergo continual training with a possibly infinite number of time-ordered examples. We introduce the canonical form of a neural network. This canonical form permits a unified presentation of network architectures and of gradient-based training algorithms for both feedforward networks (transversal filters) and feedback networks (recursive filters). We show that several algorithms used classically in linear adaptive filtering, and some algorithms suggested by other authors for training neural networks, are special cases in a general classification of training algorithms for feedback networks.
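A minimal sketch of the "continual training with time-ordered examples" view described above, assuming a toy two-layer network adapted sample-by-sample on a sinusoid (the paper's canonical form and specific algorithms are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# A small feedforward net used as a nonlinear transversal filter:
# input is a sliding window of the signal, output a one-step prediction.
W1 = rng.normal(scale=0.5, size=(8, 4))
b1 = np.zeros(8)
w2 = rng.normal(scale=0.5, size=8)

signal = np.sin(0.3 * np.arange(2000))
lr = 0.05
sq_errors = []
for t in range(4, len(signal)):
    x, d = signal[t - 4:t], signal[t]   # window and desired output
    h = np.tanh(W1 @ x + b1)
    y = w2 @ h
    e = d - y                           # instantaneous error
    # One online gradient step on the squared error (continual adaptation,
    # in the spirit of classical adaptive filtering).
    grad_h = e * w2 * (1.0 - h**2)
    w2 += lr * e * h
    W1 += lr * np.outer(grad_h, x)
    b1 += lr * grad_h
    sq_errors.append(e**2)

# Prediction error shrinks as the filter adapts.
print(np.mean(sq_errors[:100]) > np.mean(sq_errors[-100:]))
```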
37

Lin, Baihan. "Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers". Entropy 24, n.º 1 (28 de dezembro de 2021): 59. http://dx.doi.org/10.3390/e24010059.

Abstract:
Inspired by the adaptation phenomenon of neuronal firing, we propose regularity normalization (RN) as an unsupervised attention mechanism (UAM) which computes the statistical regularity in the implicit space of neural networks under the Minimum Description Length (MDL) principle. Treating the neural network optimization process as a partially observable model selection problem, regularity normalization constrains the implicit space by a normalization factor, the universal code length. We compute this universal code incrementally across neural network layers and demonstrate the flexibility to include data priors such as top-down attention and other oracle information. Empirically, our approach outperforms existing normalization methods in tackling limited, imbalanced and non-stationary input distributions in image classification, classic control, procedurally-generated reinforcement learning, generative modeling, handwriting generation and question answering tasks with various neural network architectures. Lastly, the unsupervised attention mechanism is a useful probing tool for neural networks, tracking the dependency and critical learning stages across layers and recurrent time steps of deep networks.
38

Bereta, Michał. "Kohonen Network-Based Adaptation of Non Sequential Data for Use in Convolutional Neural Networks". Sensors 21, n.º 21 (29 de outubro de 2021): 7221. http://dx.doi.org/10.3390/s21217221.

Abstract:
Convolutional neural networks have become one of the most powerful computing tools of artificial intelligence in recent years. They are especially suitable for the analysis of images and other data that have an inherent sequence structure, such as time series data. For data in the form of feature vectors whose order does not matter, the use of convolutional neural networks is not justified. This paper presents a new method of representing non-sequential data as images that can be analyzed by a convolutional network. The well-known Kohonen network is used for this purpose. After training on non-sequential data, each example is represented by a so-called U-image that can be used as input to a convolutional layer. A hybrid approach is also presented, where the neural network uses two types of input signals, both the U-image representation and the original features. Results of the proposed method are presented on traditional machine learning databases as well as on a difficult classification problem originating from the analysis of measurement data from particle physics experiments.
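A compact sketch of the building blocks involved: a minimal Kohonen map plus the classic U-matrix computation. The paper's per-example U-image construction is more involved; everything below (grid size, schedules, data) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0):
    """Minimal Kohonen self-organizing map on a rectangular grid."""
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    total, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / total
            lr = lr0 * (1.0 - frac)                 # decaying learning rate
            sigma = sigma0 * (1.0 - frac) + 0.5     # shrinking neighborhood
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(d.argmin(), d.shape)  # best-matching unit
            g = np.exp(-((coords - np.array(bmu)) ** 2).sum(-1)
                       / (2 * sigma**2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights

def u_matrix(weights):
    """Mean distance from each unit to its grid neighbours; rendered as a
    grayscale image, this is the kind of map the U-image idea builds on."""
    h, w, _ = weights.shape
    u = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            nbrs = [weights[i + di, j + dj]
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < h and 0 <= j + dj < w]
            u[i, j] = np.mean([np.linalg.norm(weights[i, j] - n)
                               for n in nbrs])
    return u

# Two well-separated clusters of non-sequential feature vectors.
data = np.vstack([rng.normal(0, 0.3, (40, 5)), rng.normal(3, 0.3, (40, 5))])
u = u_matrix(train_som(data))
print(u.shape)  # (6, 6)
```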
39

Ge, S. S., e T. H. Lee. "Parallel Adaptive Neural Network Control of Robots". Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 208, n.º 4 (novembro de 1994): 231–37. http://dx.doi.org/10.1243/pime_proc_1994_208_336_02.

Abstract:
In this paper, a parallel adaptive neural network (NN) control design for robots motivated by the work by Lee and Tan is presented. The controller is based on direct adaptive techniques and an approach of using an additional parallel NN to provide adaptive enhancements to a basic fixed controller, which can be either a NN-based non-linear controller or a model-based non-linear controller. It is shown that, if Gaussian radial basis function networks are used for the additional parallel NN, uniformly stable adaptation is assured and asymptotic tracking of the position reference signal is achieved.
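To make the Gaussian radial basis function ingredient concrete, here is a small sketch that fits RBF output weights offline by least squares; the paper's controller instead adapts such weights online under a stability argument, which is not reproduced here, and the target function, centers, and width below are illustrative:

```python
import numpy as np

# Gaussian RBF design matrix: one basis function per center.
xs = np.linspace(-np.pi, np.pi, 50)
centers = np.linspace(-np.pi, np.pi, 9)
width = 0.7
Phi = np.exp(-((xs[:, None] - centers[None, :]) ** 2) / (2 * width**2))

# Least-squares output weights approximating an unknown nonlinearity
# (here sin); an adaptive controller would update these online instead.
w, *_ = np.linalg.lstsq(Phi, np.sin(xs), rcond=None)
approx = Phi @ w
print(float(np.max(np.abs(approx - np.sin(xs)))) < 0.05)
```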
40

Sousa, Miguel Angelo de Abreu de, Edson Lemos Horta, Sergio Takeo Kofuji e Emilio Del-Moral-Hernandez. "Architecture Analysis of an FPGA-Based Hopfield Neural Network". Advances in Artificial Neural Systems 2014 (9 de dezembro de 2014): 1–10. http://dx.doi.org/10.1155/2014/602325.

Abstract:
Interconnections between electronic circuits and neural computation have been a strongly researched topic in the machine learning field, addressing several practical requirements, including decreasing training and operation times in high-performance applications and reducing cost, size, and energy consumption for autonomous or embedded developments. Field programmable gate array (FPGA) hardware shows some inherent features typically associated with neural networks, such as parallel processing, modular execution, and dynamic adaptation, and works on different types of FPGA-based neural networks were presented in recent years. This paper addresses different aspects of architectural characteristics of a Hopfield Neural Network implemented in FPGA, such as maximum operating frequency and chip-area occupancy according to the network capacity. The FPGA implementation methodology, which does not employ multipliers in the architecture developed for the Hopfield neural model, is also presented in detail.
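The multiplier-free observation can be illustrated in software: with bipolar states, each term W[i,j]*s[j] is just ±W[i,j], so hardware can replace multiplications with additions and sign handling. A minimal Hopfield recall sketch (patterns and sizes are illustrative):

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian outer-product learning; zero diagonal (no self-connections)."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, steps=10):
    """Synchronous Hopfield recall with bipolar (+/-1) states."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

p1 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
p2 = np.array([1, 1, -1, -1, 1, 1, -1, -1])
W = hebbian_weights([p1, p2])
noisy = p1.copy()
noisy[0] *= -1  # corrupt one bit
print(np.array_equal(recall(W, noisy), p1))  # True: stored pattern restored
```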
41

CARTLING, BO. "GENERATION OF ASSOCIATIVE PROCESSES IN A NEURAL NETWORK WITH REALISTIC FEATURES OF ARCHITECTURE AND UNITS". International Journal of Neural Systems 05, n.º 03 (setembro de 1994): 181–94. http://dx.doi.org/10.1142/s0129065794000207.

Abstract:
A recent neural network model of cortical associative memory incorporating neuronal adaptation by a simplified description of its underlying ionic mechanisms is extended towards more realistic network units and architecture. Excitatory units correspond to groups of adapting pyramidal neurons and inhibitory units to groups of nonadapting interneurons. The network architecture is formed from pairs of one pyramidal and one interneuron unit each, with inhibitory connections within and excitatory connections between pairs. The degree of adaptability of the pyramidal units controls the character of the network dynamics. An intermediate adaptability generates limit cycles of transitions between stored patterns and regulates oscillation frequencies in the range of theta rhythms observed in the brain. In particular, neuronal adaptation can impose a direction of transitions between overlapping patterns even in a symmetrically connected network. The model permits a detailed analysis of the transition mechanisms. Temporal sequences of patterns thus formed may constitute parts of associative processes, such as recall of stored sequences or search of pattern subspaces. As a special case, neuronal adaptation can accomplish pattern segmentation, by which overlapping patterns are temporally resolved. The type of limit cycles produced by neuronal adaptation may also be of significance for central pattern generators, including networks involving motor neurons. The applied learning rule of Hebbian type is compared to a modified version also common in neural network modelling. It is also shown that the dependence of the network dynamic behaviour on neuronal adaptability, from fixed point attractors at weak adaptability towards more complex dynamics of limit cycles and chaos at strong adaptability, agrees with that recently observed in a more abstract version of the model. The present description of neuronal adaptation is compared to models based on dynamic firing thresholds.
42

Rivera-Rovelo, Jorge, e Eduardo Bayro-Corrochano. "Surface Approximation using Growing Self-Organizing Nets and Gradient Information". Applied Bionics and Biomechanics 4, n.º 3 (2007): 125–36. http://dx.doi.org/10.1155/2007/502679.

Abstract:
In this paper we show how to improve the performance of two self-organizing neural networks used to approximate the shape of a 2D or 3D object by incorporating gradient information in the adaptation stage. The methods are based on the growing versions of the Kohonen's map and the neural gas network. Also, we show that in the adaptation stage the network utilizes efficient transformations, expressed as versors in the conformal geometric algebra framework, which build the shape of the object independent of its position in space (coordinate free). Our algorithms were tested with several images, including medical images (CT and MR images). We include also some examples for the case of 3D surface estimation.
43

Puga-Guzmán, Sergio A., Carlos Aguilar-Avelar, Javier Moreno-Valenzuela e Víctor Santibáñez. "Tracking of periodic oscillations in an underactuated system via adaptive neural networks". Journal of Low Frequency Noise, Vibration and Active Control 37, n.º 1 (18 de janeiro de 2018): 128–43. http://dx.doi.org/10.1177/1461348417752988.

Abstract:
In this paper, the tracking control of periodic oscillations in an underactuated mechanical system is discussed. The proposed scheme is derived from the feedback linearization control technique and adaptive neural networks are used to estimate the unknown dynamics and to compensate uncertainties. The proposed neural network-based controller is applied to the Furuta pendulum, which is a nonlinear and nonminimum phase underactuated mechanical system with two degrees of freedom. The new neural network-based controller is experimentally compared with respect to its model-based version. Results indicated that the proposed neural algorithm performs better than the model-based controller, showing that the real-time adaptation of the neural network weights successfully estimates the unknown dynamics and compensates uncertainties in the experimental platform.
44

Hou, Wen-Juan, e Bamfa Ceesay. "Exploring the Adaptation of Recurrent Neural Network Approaches for Extracting Drug–Drug Interactions from Biomedical Text". International Journal of Machine Learning and Computing 11, n.º 4 (agosto de 2021): 267–73. http://dx.doi.org/10.18178/ijmlc.2021.11.4.1046.

Abstract:
Information extraction (IE) is the process of automatically identifying structured information from unstructured or partially structured text. IE can involve several activities, such as named entity recognition, event extraction, relationship discovery, and document classification, with the overall goal of translating text into a more structured form. Information on the change in the effect of a drug when taken in combination with a second drug is known as a drug–drug interaction (DDI). DDIs can delay, decrease, or enhance the absorption of drugs and thus decrease or increase their efficacy or cause adverse effects. Recent research has produced several adaptations of recurrent neural networks (RNNs) for text. In this study, we highlight significant challenges of using RNNs in biomedical text processing and propose automatic extraction of DDIs aimed at overcoming some of them. Our results show that the system is competitive against other systems for the task of extracting DDIs.
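For readers unfamiliar with the RNN building block being adapted, a toy Elman-style pass over an embedded token sequence ending in a relation classifier might look as follows (all dimensions and weights are illustrative; this is not the authors' system):

```python
import numpy as np

rng = np.random.default_rng(0)

def elman_step(x, h, Wx, Wh, b):
    """One step of a simple (Elman) recurrent unit."""
    return np.tanh(Wx @ x + Wh @ h + b)

# Toy relation classifier over a token sequence: run the RNN across the
# embeddings, then classify the final hidden state (e.g. DDI vs. no-DDI).
emb_dim, hid_dim, n_classes = 8, 16, 2
Wx = rng.normal(scale=0.3, size=(hid_dim, emb_dim))
Wh = rng.normal(scale=0.3, size=(hid_dim, hid_dim))
b = np.zeros(hid_dim)
Wo = rng.normal(scale=0.3, size=(n_classes, hid_dim))

sentence = rng.normal(size=(12, emb_dim))  # 12 embedded tokens
h = np.zeros(hid_dim)
for token in sentence:
    h = elman_step(token, h, Wx, Wh, b)
logits = Wo @ h
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the 2 classes
print(probs.shape, round(float(probs.sum()), 6))  # (2,) 1.0
```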
45

BOURBAKIS, N., P. KAKUMANU, S. MAKROGIANNIS, R. BRYLL e S. PANCHANATHAN. "NEURAL NETWORK APPROACH FOR IMAGE CHROMATIC ADAPTATION FOR SKIN COLOR DETECTION". International Journal of Neural Systems 17, n.º 01 (fevereiro de 2007): 1–12. http://dx.doi.org/10.1142/s0129065707000920.

Abstract:
The goal of image chromatic adaptation is to remove the effect of illumination and to obtain color data that reflects precisely the physical contents of the scene. We present in this paper an approach to image chromatic adaptation using Neural Networks (NN), with application to detecting and adapting human skin color. The NN is trained on randomly chosen color images containing human subjects under various illumination conditions, thereby enabling the model to dynamically adapt to changing illumination. The proposed network directly predicts the illuminant estimate in the image so as to adapt to human skin color. A comparison of our method with the Gray World, White Patch and NN-on-White-Patch methods for skin color stabilization is presented. The skin regions in the NN-stabilized images are successfully detected using a computationally inexpensive thresholding operation. We also present results on detecting skin regions on a data set of test images. The results are promising and suggest a new approach for adapting human skin color using neural networks.
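Of the baselines mentioned, Gray World is simple enough to sketch: it assumes the average scene color is achromatic and rescales each channel accordingly (the toy image below is illustrative):

```python
import numpy as np

def gray_world(image):
    """Gray World chromatic adaptation: assume the average scene color is
    gray and rescale each channel so its mean matches the global mean."""
    means = image.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / means
    return np.clip(image * gain, 0.0, 1.0)

rng = np.random.default_rng(0)
scene = rng.random((16, 16, 3)) * np.array([1.0, 0.7, 0.5])  # reddish cast
balanced = gray_world(scene)
m = balanced.reshape(-1, 3).mean(axis=0)
print(np.allclose(m, m.mean()))  # True: channel means equalized
```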
46

Westendorff, Stephanie, Shenbing Kuang, Bahareh Taghizadeh, Opher Donchin e Alexander Gail. "Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model". Journal of Neurophysiology 113, n.º 7 (abril de 2015): 2360–75. http://dx.doi.org/10.1152/jn.00483.2014.

Abstract:
Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement (“jump”) consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation.
47

Zhang, Byoung-Tak, Peter Ohm e Heinz Mühlenbein. "Evolutionary Induction of Sparse Neural Trees". Evolutionary Computation 5, n.º 2 (junho de 1997): 213–36. http://dx.doi.org/10.1162/evco.1997.5.2.213.

Abstract:
This paper is concerned with the automatic induction of parsimonious neural networks. In contrast to other program induction situations, network induction entails parametric learning as well as structural adaptation. We present a novel representation scheme called neural trees that allows efficient learning of both network architectures and parameters by genetic search. A hybrid evolutionary method is developed for neural tree induction that combines genetic programming and the breeder genetic algorithm under the unified framework of the minimum description length principle. The method is successfully applied to the induction of higher order neural trees while still keeping the resulting structures sparse to ensure good generalization performance. Empirical results are provided on two chaotic time series prediction problems of practical interest.
48

Tunik, Eugene, Paul J. Schmitt e Scott T. Grafton. "BOLD Coherence Reveals Segregated Functional Neural Interactions When Adapting to Distinct Torque Perturbations". Journal of Neurophysiology 97, n.º 3 (março de 2007): 2107–20. http://dx.doi.org/10.1152/jn.00405.2006.

Abstract:
In the natural world, we experience and adapt to multiple extrinsic perturbations. This poses a challenge to neural circuits in discriminating between different context-appropriate responses. Using event-related fMRI, we characterized the neural dynamics involved in this process by randomly delivering a position- or velocity-dependent torque perturbation to subjects’ arms during a target-capture task. Each perturbation was color-cued during movement preparation to provide contextual information. Although trajectories differed between perturbations, subjects significantly reduced error under both conditions. This was paralleled by reduced BOLD signal in the right dentate nucleus, the left sensorimotor cortex, and the left intraparietal sulcus. Trials included “NoGo” conditions to dissociate activity related to preparation from execution and adaptation. Subsequent analysis identified perturbation-specific neural processes underlying preparation (“NoGo”) and adaptation (“Go”) early and late into learning. Between-perturbation comparisons of BOLD magnitude revealed negligible differences for both preparation and adaptation trials. However, a network-level analysis of BOLD coherence revealed that by late learning, response preparation (“NoGo”) was attributed to a relative focusing of coherence within cortical and basal ganglia networks in both perturbation conditions, demonstrating a common network interaction for establishing arbitrary visuomotor associations. Conversely, late-learning adaptation (“Go”) was attributed to a focusing of BOLD coherence between a cortical–basal ganglia network in the viscous condition and between a cortical–cerebellar network in the positional condition. Our findings demonstrate that trial-to-trial acquisition of two distinct adaptive responses is attributed not to anatomically segregated regions, but to differential functional interactions within common sensorimotor circuits.
49

Wang, Xiaoqing, e Xiangjun Wang. "Unsupervised Domain Adaptation with Coupled Generative Adversarial Autoencoders". Applied Sciences 8, n.º 12 (7 de dezembro de 2018): 2529. http://dx.doi.org/10.3390/app8122529.

Abstract:
When large-scale annotated data are not available for certain image classification tasks, training a deep convolutional neural network model becomes challenging. Some recent domain adaptation methods try to solve this problem using generative adversarial networks and have achieved promising results. However, these methods are based on a shared-latent-space assumption and do not consider the situation where shared high-level representations in different domains do not exist or are not ideal. To overcome this limitation, we propose a neural network structure called coupled generative adversarial autoencoders (CGAA) that allows a pair of generators to learn the high-level differences between two domains by sharing only part of the high-level layers. Additionally, by introducing a class-consistent loss calculated by a stand-alone classifier into the generator optimization, our model is able to generate class-invariant style-transferred images suitable for classification tasks in domain adaptation. We apply CGAA to several domain-transferred image classification scenarios including several benchmark datasets. Experimental results have shown that our method can achieve state-of-the-art classification results.
50

SCHNEIDEWIND, NORMAN. "APPLYING NEURAL NETWORKS TO SOFTWARE RELIABILITY ASSESSMENT". International Journal of Reliability, Quality and Safety Engineering 17, n.º 04 (agosto de 2010): 313–29. http://dx.doi.org/10.1142/s0218539310003834.

Abstract:
We adapt concepts from the field of neural networks to assess the reliability of software, employing cumulative failures, reliability, remaining failures, and time to failure metrics. In addition, the risk of not achieving reliability, remaining failure, and time to failure goals is assessed. The purpose of the assessment is to compare a criterion derived from a neural network model for estimating the parameters of software reliability metrics with the method of maximum likelihood estimation. To our surprise, the neural network method proved superior for all the reliability metrics that were assessed, by virtue of yielding lower prediction error and risk. We also found that considerable adaptation of the neural network model was necessary to be meaningful for our application: only inputs, functions, neurons, weights, activation units, and outputs were required to characterize our application.