A selection of scholarly literature on the topic "Incremental neural network"
Format your sources in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Incremental neural network."
Next to every entry in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, when these are available in the metadata.
Journal articles on the topic "Incremental neural network"
Yang, Shuyuan, Min Wang, and Licheng Jiao. "Incremental constructive ridgelet neural network." Neurocomputing 72, no. 1-3 (December 2008): 367–77. http://dx.doi.org/10.1016/j.neucom.2008.01.001.
Siddiqui, Zahid Ali, and Unsang Park. "Progressive Convolutional Neural Network for Incremental Learning." Electronics 10, no. 16 (August 5, 2021): 1879. http://dx.doi.org/10.3390/electronics10161879.
Ho, Jiacang, and Dae-Ki Kang. "Brick Assembly Networks: An Effective Network for Incremental Learning Problems." Electronics 9, no. 11 (November 17, 2020): 1929. http://dx.doi.org/10.3390/electronics9111929.
Abramova, E. S., A. A. Orlov, and K. V. Makarov. "Possibilities of Using Neural Network Incremental Learning." Bulletin of the South Ural State University. Ser. Computer Technologies, Automatic Control & Radioelectronics 21, no. 4 (November 2021): 19–27. http://dx.doi.org/10.14529/ctcr210402.
Tomimori, Haruka, Kui-Ting Chen, and Takaaki Baba. "A Convolutional Neural Network with Incremental Learning." Journal of Signal Processing 21, no. 4 (2017): 155–58. http://dx.doi.org/10.2299/jsp.21.155.
Shiotani, Shigetoshi, Toshio Fukuda, and Takanori Shibata. "A neural network architecture for incremental learning." Neurocomputing 9, no. 2 (October 1995): 111–30. http://dx.doi.org/10.1016/0925-2312(94)00061-v.
Mellado, Diego, Carolina Saavedra, Steren Chabert, Romina Torres, and Rodrigo Salas. "Self-Improving Generative Artificial Neural Network for Pseudorehearsal Incremental Class Learning." Algorithms 12, no. 10 (October 1, 2019): 206. http://dx.doi.org/10.3390/a12100206.
Heo, Kwang-Seung, and Kwee-Bo Sim. "Speaker Identification Based on Incremental Learning Neural Network." International Journal of Fuzzy Logic and Intelligent Systems 5, no. 1 (March 1, 2005): 76–82. http://dx.doi.org/10.5391/ijfis.2005.5.1.076.
Ciarelli, Patrick Marques, Elias Oliveira, and Evandro O. T. Salles. "An incremental neural network with a reduced architecture." Neural Networks 35 (November 2012): 70–81. http://dx.doi.org/10.1016/j.neunet.2012.08.003.
Zhang, Yansheng, Dong Ye, Yuanhong Liu, and Jianjun Xu. "Incremental LLE Based on Back Propagation Neural Network." IOP Conference Series: Earth and Environmental Science 170 (July 2018): 042051. http://dx.doi.org/10.1088/1755-1315/170/4/042051.
Повний текст джерелаДисертації з теми "Incremental neural network"
Lundberg, Emil. "Adding temporal plasticity to a self-organizing incremental neural network using temporal activity diffusion." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180346.
Vector quantization (VQ) is a classic problem and a simple method for pattern recognition. Its applications include lossy data compression, clustering, and speech and speaker recognition. Although VQ has largely been replaced by time-aware techniques such as hidden Markov models (HMM) and dynamic time warping (DTW) in some applications, such as speech and speaker recognition, VQ retains some relevance thanks to its much lower computational cost, for example in embedded systems. A recent study also demonstrates a multi-section VQ system that achieves performance on par with DTW in a handwritten-signature recognition application, but at a much lower computational cost. Exploiting temporal patterns in a VQ algorithm could help improve such results further. SOTPAR2 is one such extension of Neural Gas, an artificial neural network algorithm for VQ. SOTPAR2 uses a conceptually simple idea, based on adding lateral connections between network nodes and creating "temporal activity" that diffuses through connected nodes. The activity then makes the nearest-neighbour classifier prefer nodes with high activity, and the SOTPAR2 authors report improved results over Neural Gas in a time-series prediction application. This thesis investigates how the same extension affects the quantization and prediction performance of the self-organizing incremental neural network (SOINN) algorithm. SOINN is a VQ algorithm that automatically chooses an appropriate codebook size and can also be used for clustering with arbitrary cluster shapes. Experimental results show that this extension does not improve the performance of SOINN; instead, performance deteriorated in all experiments conducted. This result is discussed, along with the influence of parameter values on performance, and possible future work to improve the results is suggested.
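For illustration, the lateral-diffusion mechanism described in this abstract can be sketched in a few lines of Python. The class below is a minimal, hypothetical stand-in: a plain VQ codebook whose winner selection is biased by a decaying, diffusing activity trace. The names, parameters, and update rules are illustrative assumptions, not the exact SOTPAR2/SOINN equations from the thesis.

```python
# Sketch of temporal-activity-biased vector quantization (assumed mechanism).
import numpy as np

class ActivityBiasedVQ:
    def __init__(self, codebook, lateral, bias=0.5, decay=0.9, diffusion=0.3):
        self.codebook = codebook          # (n_nodes, dim) array of code vectors
        self.lateral = lateral            # (n_nodes, n_nodes) 0/1 lateral connections
        self.activity = np.zeros(len(codebook))
        self.bias = bias                  # how strongly activity attracts the winner
        self.decay = decay                # per-step activity decay
        self.diffusion = diffusion        # fraction of activity spread to neighbours

    def winner(self, x):
        # Distance shrunk by node activity: active nodes are preferred,
        # encoding the expectation created by preceding inputs.
        dist = np.linalg.norm(self.codebook - x, axis=1)
        return int(np.argmin(dist - self.bias * self.activity))

    def step(self, x):
        w = self.winner(x)
        # Diffuse part of each node's activity along lateral connections,
        # decay the rest, then excite the winner.
        spread = self.diffusion * (self.lateral @ self.activity)
        self.activity = self.decay * self.activity + spread
        self.activity[w] += 1.0
        return w
```

Setting bias to zero recovers ordinary nearest-neighbour quantization, which makes the effect of the temporal bias easy to isolate in experiments.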
Flores, João Henrique Ferreira. "ARMA-CIGMN: an Incremental Gaussian Mixture Network for time series analysis and forecasting." Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/116126.
This work presents a new neural network model for time series analysis and forecasting: the ARMA-CIGMN (Autoregressive Moving Average Classical Incremental Gaussian Mixture Network), together with its analysis. The model is based on modifications to a reformulated IGMN, the Classical IGMN (CIGMN). The CIGMN is similar to the original IGMN, but follows a classical statistical approach. The modifications to the IGMN algorithm were made to better fit it to time series. The proposed ARMA-CIGMN model produces good forecasts, and the modeling procedure can be aided by standard statistical tools such as the autocorrelation (acf) and partial autocorrelation (pacf) functions, already used in classical statistical time series modeling and with the original IGMN models. The ARMA-CIGMN model was evaluated on well-known series and simulated data. The models used for comparison were the classical statistical ARIMA model and its variants, the original IGMN, and two modifications of the original IGMN: (i) one similar to a classical ARMA (Autoregressive Moving Average) model and (ii) one similar to an NOE (Nonlinear Output Error) model. A reformulated IGMN version with a classical statistical approach, which the ARMA-CIGMN model requires, is also presented.
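For reference, the classical ARMA structure that the ARMA-CIGMN mirrors is the standard ARMA(p, q) recursion below (the textbook definition, not a formula taken from the thesis); the acf and pacf mentioned above are the usual tools for choosing the orders p and q.

```latex
% Classical ARMA(p,q): p autoregressive terms, q moving-average terms,
% driven by white-noise innovations \varepsilon_t.
\begin{equation}
  X_t = c + \sum_{i=1}^{p} \varphi_i X_{t-i}
        + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j} + \varepsilon_t,
  \qquad \varepsilon_t \sim \mathrm{WN}(0, \sigma^2)
\end{equation}
```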
Hocquet, Guillaume. "Class Incremental Continual Learning in Deep Neural Networks." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST070.
We are interested in the problem of continual learning in artificial neural networks when the data are available for only one class at a time. To address the catastrophic forgetting that restrains learning performance under these conditions, we propose an approach based on representing the data of each class by a normal distribution. The transformations associated with these representations are performed by invertible neural networks, which can be trained with the data of a single class. Each class is assigned a network that models its features. In this setting, predicting the class of a sample amounts to identifying the network that best fits the sample. The advantage of such an approach is that once a network is trained, it never needs to be updated later, as each network is independent of the others. It is this particularly advantageous property that sets our method apart from previous work in this area. We support our demonstration with experiments on various datasets and show that our approach compares favorably with the state of the art. Subsequently, we optimize the approach by reducing its memory footprint through factorization of the network parameters. The storage cost of these networks can then be reduced significantly with limited performance loss. Finally, we also study strategies for producing efficient feature-extractor models for continual learning and show their relevance compared with the networks traditionally used for continual learning.
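The prediction rule described here, one independent density model per class with classification by best fit, can be sketched as follows. A diagonal Gaussian stands in for the invertible (normalizing-flow) networks of the thesis; the class and method names are hypothetical, and only the structure (per-class training, argmax log-likelihood prediction) reflects the abstract.

```python
# Sketch of class-incremental classification with per-class density models.
import numpy as np

class PerClassDensityClassifier:
    def __init__(self):
        self.models = {}                        # class label -> (mean, var)

    def add_class(self, label, X):
        # Training touches only this class's data, so earlier classes
        # are never revisited and cannot be forgotten.
        self.models[label] = (X.mean(axis=0), X.var(axis=0) + 1e-6)

    def log_likelihood(self, x, mean, var):
        # Diagonal-Gaussian log density of sample x.
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

    def predict(self, x):
        # The predicted class is the model under which x is most likely.
        return max(self.models,
                   key=lambda c: self.log_likelihood(x, *self.models[c]))
```

Because each model is trained in isolation, adding a new class is a pure extension: no shared parameters are overwritten, which is exactly the property the abstract highlights.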
Ronco, Eric. "Incremental polynomial controller networks: two self-organising non-linear controllers." Thesis, Connect to electronic version, 1997. http://hdl.handle.net/1905/181.
Buttar, Sarpreet Singh. "Applying Artificial Neural Networks to Reduce the Adaptation Space in Self-Adaptive Systems: an exploratory work." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-87117.
Monica, Riccardo. "Deep Incremental Learning for Object Recognition." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12331/.
Pinto, Rafael Coimbra. "Continuous reinforcement learning with incremental Gaussian mixture models." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/157591.
This thesis' original contribution is a novel algorithm which integrates a data-efficient function approximator with reinforcement learning in continuous state spaces. The complete research includes the development of a scalable online and incremental algorithm capable of learning from a single pass through data. This algorithm, called Fast Incremental Gaussian Mixture Network (FIGMN), was employed as a sample-efficient function approximator for the state space of continuous reinforcement learning tasks, which, combined with linear Q-learning, results in competitive performance. Then, this same function approximator was employed to model the joint state and Q-values space, all in a single FIGMN, resulting in a concise and data-efficient algorithm, i.e., a reinforcement learning algorithm that learns from very few interactions with the environment. A single episode is enough to learn the investigated tasks in most trials. Results are analysed in order to explain the properties of the obtained algorithm, and it is observed that the use of the FIGMN function approximator brings some important advantages to reinforcement learning in relation to conventional neural networks.
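The way a sample-efficient approximator plugs into Q-learning can be sketched as below. The model interface (predict/update) is a hypothetical stand-in for the FIGMN; the target computation is textbook Q-learning, not code from the thesis.

```python
# Sketch of one Q-learning update through a generic incremental regressor.
import numpy as np

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def q_learning_step(model, state, action, reward, next_state, n_actions,
                    gamma=0.99, done=False):
    # Bootstrapped target: r + gamma * max_a' Q(s', a').
    if done:
        target = reward
    else:
        q_next = [model.predict(np.concatenate([next_state, one_hot(a, n_actions)]))
                  for a in range(n_actions)]
        target = reward + gamma * max(q_next)
    # Single incremental update toward the target, with no replay buffer:
    # this is where the approximator's sample efficiency matters.
    model.update(np.concatenate([state, one_hot(action, n_actions)]), target)
```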
Pinto, Rafael Coimbra. "Online incremental one-shot learning of temporal sequences." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/49063.
This work introduces novel neural network algorithms for online spatio-temporal pattern processing by extending the Incremental Gaussian Mixture Network (IGMN). The IGMN algorithm is an online incremental neural network that learns from a single scan through data by means of an incremental version of the Expectation-Maximization (EM) algorithm combined with locally weighted regression (LWR). Four different approaches are used to give temporal processing capabilities to the IGMN algorithm: time-delay lines (Time-Delay IGMN), a reservoir layer (Echo-State IGMN), an exponential moving average of the reconstructed input vector (Merge IGMN), and self-referencing (Recursive IGMN). This results in algorithms that are online, incremental, aggressive, and temporally capable, and therefore suitable for tasks with memory or unknown internal states, characterized by continuous non-stopping data flows, that require life-long learning while operating and giving predictions without separate stages. The proposed algorithms are compared to other spatio-temporal neural networks on 8 time-series prediction tasks. Two of them show satisfactory performance, generally improving upon existing approaches. A general enhancement of the IGMN algorithm is also described, eliminating one of the algorithm's manually tunable parameters and giving better results.
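The first of the four extensions, time-delay lines, is the simplest to illustrate: a sliding window turns a scalar stream into fixed-size vectors that a purely spatial learner such as the IGMN can consume. The sketch below is an illustrative assumption about the mechanism, not the thesis implementation.

```python
# Sketch of a time-delay line feeding a spatial-only learner.
from collections import deque
import numpy as np

def delay_embed(stream, order):
    """Yield sliding windows [x_{t-order+1}, ..., x_t] over a scalar stream."""
    window = deque(maxlen=order)
    for x in stream:
        window.append(x)
        if len(window) == order:
            yield np.array(window)

# Usage: each emitted vector is one training sample; for prediction tasks,
# the next stream value would serve as the regression target.
for v in delay_embed([0.1, 0.4, 0.3, 0.8, 0.5], order=3):
    print(v)
```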
Glöde, Isabella. "Autonomous control of a mobile robot with incremental deep learning neural networks." Master's thesis, Pontificia Universidad Católica del Perú, 2021. http://hdl.handle.net/20.500.12404/18676.
Thuv, Øyvin Halfdan. "Incrementally Evolving a Dynamic Neural Network for Tactile-Olfactory Insect Navigation." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8834.
This Master's thesis gives a thorough description of a study carried out in the Self-Organizing Systems group at NTNU. Much AI research in recent years has moved towards increased use of representation-free strategies such as simulated neural networks. One technique for creating such networks is to evolve them using simulated Darwinian evolution. This is a powerful technique, but it is often limited by the computer resources available. One way to speed up evolution is to focus the evolutionary search on a narrower range of solutions. It is, for example, possible to favor the evolution of a specific "species" by initializing the search with a specialized set of genes. A disadvantage of doing this is of course that many other solutions (or "species") are disregarded, so that good solutions may in theory be lost. It is therefore necessary to find focusing strategies that are generally applicable and (with high probability) disregard only solutions that are considered unimportant. Three different ways of focusing the evolutionary search for cognitive behaviours are merged and evaluated in this thesis. On a macro level, incremental evolution is applied to partition the evolutionary search. On a micro level, specific properties of the chosen neural network model (CTRNNs) are exploited: seeding initial populations with center-crossing neural networks and/or bifurcative neurons. The techniques are compared to standard, naive evolutionary searches by applying them to the evolution of simulated neural networks for the walking and control of a six-legged mobile robot, a problem simple enough to be satisfactorily understood, but complex enough to challenge a traditional evolutionary search.
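For context, one common formulation of the CTRNN model mentioned in this abstract, together with the center-crossing bias choice used to seed populations, is given below (in the style of Beer's CTRNN work; the thesis may use an equivalent variant of the notation).

```latex
% CTRNN node dynamics and the center-crossing bias condition.
\begin{align}
  \tau_i \,\dot{y}_i &= -y_i + \sum_{j=1}^{N} w_{ji}\,\sigma(y_j + \theta_j) + I_i,
  \qquad \sigma(x) = \frac{1}{1 + e^{-x}}, \\
  \theta_i^{*} &= -\tfrac{1}{2} \sum_{j=1}^{N} w_{ji}
  \qquad \text{(center-crossing bias for neuron } i\text{)}
\end{align}
```

Seeding the population with biases at their center-crossing values places each neuron's operating point in the most dynamically rich region of its sigmoid, which is why such seeds tend to speed up the evolutionary search.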
Books on the topic "Incremental neural network"
Mundy, Peter. A Neural Networks, Information-Processing Model of Joint Attention and Social-Cognitive Development. Edited by Philip David Zelazo. Oxford University Press, 2013. http://dx.doi.org/10.1093/oxfordhb/9780199958474.013.0010.
Book chapters on the topic "Incremental neural network"
Shen, Shaofeng, Qiang Gan, Furao Shen, Chaomin Luo, and Jinxi Zhao. "An Incremental Network with Local Experts Ensemble." In Neural Information Processing, 515–22. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-26555-1_58.
Alpaydın, Ethem. "Grow-and-Learn: An Incremental Method for Category Learning." In International Neural Network Conference, 761–64. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_69.
Kakemoto, Yoshitsugu, and Shinichi Nakasuka. "Dynamics of Incremental Learning by VSF-Network." In Artificial Neural Networks – ICANN 2009, 688–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04274-4_71.
Rizzi, A., M. Biancavilla, and F. M. Frattale Mascioli. "Incremental Min-Max Network. Part 1: Continuous Spaces." In Perspectives in Neural Computing, 371–76. London: Springer London, 1999. http://dx.doi.org/10.1007/978-1-4471-0811-5_43.
Zhang, Tianyue, Baile Xu, and Furao Shen. "Fuzzy Self-Organizing Incremental Neural Network for Fuzzy Clustering." In Neural Information Processing, 24–32. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70087-8_3.
Furao, Shen, and Osamu Hasegawa. "An Incremental Neural Network for Non-stationary Unsupervised Learning." In Neural Information Processing, 641–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30499-9_98.
Driff, Lydia Nahla, and Habiba Drias. "Artificial Neural Network for Incremental Data Mining." In Advances in Intelligent Systems and Computing, 133–43. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56535-4_14.
Shen, Furao, and Osamu Hasegawa. "Self-Organizing Incremental Neural Network and Its Application." In Artificial Neural Networks – ICANN 2010, 535–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15825-4_74.
Alfarozi, Syukron Abu Ishaq, Noor Akhmad Setiawan, Teguh Bharata Adji, Kuntpong Woraratpanya, Kitsuchart Pasupa, and Masanori Sugimoto. "Analytical Incremental Learning: Fast Constructive Learning Method for Neural Network." In Neural Information Processing, 259–68. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46672-9_30.
Wang, Xiaoyu, Lucian Gheorghe, and Jun-ichi Imura. "A Gaussian Process-Based Incremental Neural Network for Online Regression." In Neural Information Processing, 149–61. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63836-8_13.
Повний текст джерелаТези доповідей конференцій з теми "Incremental neural network"
Kou, Jialiang, Shengwu Xiong, Shuzhen Wan, and Hongbing Liu. "The Incremental Probabilistic Neural Network." In 2010 Sixth International Conference on Natural Computation (ICNC). IEEE, 2010. http://dx.doi.org/10.1109/icnc.2010.5583589.
Mi, Fei, and Boi Faltings. "Memory Augmented Neural Model for Incremental Session-based Recommendation." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/300.
Wang, Jenq Haur, and Hsin Yang Wang. "Incremental Neural Network Construction for Text Classification." In 2014 International Symposium on Computer, Consumer and Control (IS3C). IEEE, 2014. http://dx.doi.org/10.1109/is3c.2014.254.
Okada, Shogo, and Toyoaki Nishida. "Incremental clustering of gesture patterns based on a self organizing incremental neural network." In 2009 International Joint Conference on Neural Networks (IJCNN 2009 - Atlanta). IEEE, 2009. http://dx.doi.org/10.1109/ijcnn.2009.5178845.
Huang, Shin-Ying, Fang Yu, Rua-Huan Tsaih, and Yennun Huang. "Network-traffic anomaly detection with incremental majority learning." In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015. http://dx.doi.org/10.1109/ijcnn.2015.7280573.
Liu, Xiaojian. "Incremental Wavelet Neural Network based Prediction of Network Security Situation." In 2016 4th International Conference on Machinery, Materials and Computing Technology. Paris, France: Atlantis Press, 2016. http://dx.doi.org/10.2991/icmmct-16.2016.236.
Bakırlı, Gözde, Derya Birant, and Alp Kut. "INNA: Incremental Neural Network Algorithm and Performance Analysis." In Intelligent Systems and Control. Calgary, AB, Canada: ACTAPRESS, 2011. http://dx.doi.org/10.2316/p.2011.742-027.
Hebboul, Amel, Meriem Hacini, and Fella Hachouf. "An incremental parallel neural network for unsupervised classification." In 2011 7th International Workshop on Systems, Signal Processing and their Applications (WOSSPA). IEEE, 2011. http://dx.doi.org/10.1109/wosspa.2011.5931521.
Anowar, Farzana, and Samira Sadaoui. "Incremental Neural-Network Learning for Big Fraud Data." In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2020. http://dx.doi.org/10.1109/smc42975.2020.9283136.
Ter-Sarkisov, Alex, Holger Schwenk, Fethi Bougares, and Loïc Barrault. "Incremental Adaptation Strategies for Neural Network Language Models." In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality. Stroudsburg, PA, USA: Association for Computational Linguistics, 2015. http://dx.doi.org/10.18653/v1/w15-4006.
Повний текст джерелаЗвіти організацій з теми "Incremental neural network"
Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.