A selection of scholarly literature on the topic "Neural networks (Computer science)"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select the source type:

Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Neural networks (Computer science)".

Next to every work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, provided the corresponding details are available in the metadata.

Journal articles on the topic "Neural networks (Computer science)"

1

Mijwel, Maad M., Adam Esen, and Aysar Shamil. "Overview of Neural Networks." Babylonian Journal of Machine Learning 2023 (August 11, 2023): 42–45. http://dx.doi.org/10.58496/bjml/2023/008.

Abstract:
Since it was confirmed and verified that the human nervous system consists of individual cells, which were later called neurons, and it was discovered that these cells connect with each other to form an extensive communication network, a large number of possibilities have been opened for application in multiple disciplines in areas of knowledge. Neural Networks are created to perform tasks such as pattern recognition, classification, regression, and many other functions that serve humans and are an essential component in the field of machine learning and artificial intelligence. In computer science, progress has been made, and computers are supposed to learn how to solve problems like that of the human brain. Through pre-established examples, the computer must be able to provide solutions to issues that are like those presented during training. This article overviews neural networks and their application in developing computer systems.
2

Cottrell, G. W. "COMPUTER SCIENCE: New Life for Neural Networks." Science 313, no. 5786 (July 28, 2006): 454–55. http://dx.doi.org/10.1126/science.1129813.

3

Li, Xiao Guang. "Research on the Development and Applications of Artificial Neural Networks." Applied Mechanics and Materials 556-562 (May 2014): 6011–14. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.6011.

Abstract:
Intelligent control is a class of control techniques that use various AI computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation and genetic algorithms. In computer science and related fields, artificial neural networks are computational models inspired by animals’ central nervous systems (in particular the brain) that are capable of machine learning and pattern recognition. They are usually presented as systems of interconnected “neurons” that can compute values from inputs by feeding information through the network. Like other machine learning methods, neural networks have been used to solve a wide variety of tasks that are hard to solve using ordinary rule-based programming, including computer vision and speech recognition.
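The abstract above describes neural networks as systems of interconnected "neurons" that compute values from inputs by feeding information through the network. Purely as an illustrative sketch (toy dimensions and random weights, not anything from the cited paper), a two-layer feedforward pass can be written as:

```python
# Hypothetical minimal sketch of a feedforward pass: inputs flow through
# weighted connections and nonlinear "neurons" to produce an output.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed toy dimensions: 4 inputs, 5 hidden neurons, 3 outputs.
W1, b1 = rng.normal(size=(4, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 3)), np.zeros(3)

def forward(x):
    h = sigmoid(x @ W1 + b1)      # hidden-layer activations
    return sigmoid(h @ W2 + b2)   # network output

print(forward(np.array([0.2, 0.5, 0.1, 0.9])))
```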
4

Schöneburg, E. "Neural networks hunt computer viruses." Neurocomputing 2, no. 5-6 (July 1991): 243–48. http://dx.doi.org/10.1016/0925-2312(91)90027-9.

5

Turega, M. A. "Neural Networks." Computer Journal 35, no. 3 (June 1, 1992): 290. http://dx.doi.org/10.1093/comjnl/35.3.290.

6

Widrow, Bernard, David E. Rumelhart, and Michael A. Lehr. "Neural networks." Communications of the ACM 37, no. 3 (March 1994): 93–105. http://dx.doi.org/10.1145/175247.175257.

7

Begum, Afsana, Md Masiur Rahman, and Sohana Jahan. "Medical diagnosis using artificial neural networks." Mathematics in Applied Sciences and Engineering 5, no. 2 (June 4, 2024): 149–64. http://dx.doi.org/10.5206/mase/17138.

Abstract:
Medical diagnosis using Artificial Neural Networks (ANN) and computer-aided diagnosis with deep learning is currently a very active research area in medical science. In recent years, neural network models have been broadly considered for medical diagnosis since they are ideal for recognizing different kinds of diseases, including autism, cancer, tumor, lung infection, etc. It is evident that early diagnosis of any disease is vital for successful treatment and improved survival rates. In this research, five neural networks, the Multilayer neural network (MLNN), Probabilistic neural network (PNN), Learning vector quantization neural network (LVQNN), Generalized regression neural network (GRNN), and Radial basis function neural network (RBFNN), have been explored. These networks are applied to several benchmark datasets collected from the University of California Irvine (UCI) Machine Learning Repository. Results from numerical experiments indicate that each network excels at recognizing specific physical issues. In the majority of cases, both the Learning Vector Quantization Neural Network and the Probabilistic Neural Network demonstrate superior performance compared to the other networks.
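For readers who want to reproduce the general workflow described in this abstract, the sketch below benchmarks a single neural classifier on a UCI dataset with cross-validation. It is illustrative only: scikit-learn's MLPClassifier stands in for the five network types studied in the paper, and the dataset and hyperparameters are assumptions, not the authors' setup.

```python
# Generic UCI benchmarking workflow: scale features, train a small neural
# classifier, estimate accuracy with 5-fold cross-validation.
from sklearn.datasets import load_breast_cancer      # a UCI benchmark dataset
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                    random_state=0))
scores = cross_val_score(model, X, y, cv=5)
print("mean accuracy: %.3f" % scores.mean())
```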
8

Yen, Gary G., and Haiming Lu. "Hierarchical Rank Density Genetic Algorithm for Radial-Basis Function Neural Network Design." International Journal of Computational Intelligence and Applications 03, no. 03 (September 2003): 213–32. http://dx.doi.org/10.1142/s1469026803000975.

Abstract:
In this paper, we propose a genetic algorithm based design procedure for a radial-basis function neural network. A Hierarchical Rank Density Genetic Algorithm (HRDGA) is used to evolve the neural network's topology and parameters simultaneously. Compared with traditional genetic algorithm based designs for neural networks, the hierarchical approach addresses several deficiencies highlighted in the literature. In addition, the rank-density based fitness assignment technique is used to optimize the performance and topology of the evolved neural network and to deal with the conflict between training performance and network complexity. Instead of producing a single optimal solution, HRDGA provides a set of near-optimal neural networks to the designers so that they have more flexibility for the final decision-making based on certain preferences. In terms of searching for a near-complete set of candidate networks with high performance, the networks designed by the proposed algorithm prove to be competitive, or even superior, to three other traditional radial-basis function networks for predicting the Mackey–Glass chaotic time series.
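The radial-basis function network being designed in this paper computes a weighted sum of Gaussian hidden units. A minimal forward pass is sketched below; the centres, widths, and output weights are hand-picked placeholders, whereas HRDGA evolves the topology and parameters instead.

```python
# Plain RBF network: Gaussian hidden units followed by a linear output layer.
import numpy as np

def rbf_forward(x, centres, widths, out_weights, bias):
    # phi_j(x) = exp(-||x - c_j||^2 / (2 * s_j^2))
    d2 = np.sum((x - centres) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    return phi @ out_weights + bias

centres = np.array([[0.0], [0.5], [1.0]])   # assumed: 3 hidden units, 1-D input
widths = np.array([0.3, 0.3, 0.3])
out_weights = np.array([0.2, -0.4, 0.7])
print(rbf_forward(np.array([0.6]), centres, widths, out_weights, bias=0.1))
```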
9

Cavallaro, Lucia, Ovidiu Bagdasar, Pasquale De Meo, Giacomo Fiumara, and Antonio Liotta. "Artificial neural networks training acceleration through network science strategies." Soft Computing 24, no. 23 (September 9, 2020): 17787–95. http://dx.doi.org/10.1007/s00500-020-05302-y.

Abstract:
The development of deep learning has led to a dramatic increase in the number of applications of artificial intelligence. However, the training of deeper neural networks for stable and accurate models translates into artificial neural networks (ANNs) that become unmanageable as the number of features increases. This work extends our earlier study where we explored the acceleration effects obtained by enforcing, in turn, scale freeness, small worldness, and sparsity during the ANN training process. The efficiency of that approach was confirmed by recent studies (conducted independently) where a million-node ANN was trained on non-specialized laptops. Encouraged by those results, our study is now focused on some tunable parameters, to pursue a further acceleration effect. We show that, although optimal parameter tuning is unfeasible, due to the high non-linearity of ANN problems, we can actually come up with a set of useful guidelines that lead to speed-ups in practical cases. We find that significant reductions in execution time can generally be achieved by setting the revised fraction parameter (ζ) to relatively low values.
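The acceleration strategy summarized above relies on keeping the network sparse during training. The following is a hedged, generic sketch of a prune-and-regrow step controlled by a fraction parameter (named zeta after the abstract); it illustrates the general idea only and is not the authors' algorithm or parameter setting.

```python
# Generic prune-and-regrow step on a weight mask: drop the zeta fraction of
# the smallest surviving weights, then regrow the same number at random
# inactive positions. Shapes, sparsity level, and zeta are assumed values.
import numpy as np

rng = np.random.default_rng(0)

def rewire(weights, mask, zeta=0.2):
    active = np.flatnonzero(mask)
    n_drop = int(zeta * active.size)
    if n_drop == 0:
        return mask
    # prune the n_drop active weights with the smallest magnitude
    drop = active[np.argsort(np.abs(weights.ravel()[active]))[:n_drop]]
    mask.ravel()[drop] = False
    # regrow the same number of connections at random inactive positions
    inactive = np.flatnonzero(~mask)
    grow = rng.choice(inactive, size=n_drop, replace=False)
    mask.ravel()[grow] = True
    return mask

W = rng.normal(size=(8, 8))
M = rng.random((8, 8)) < 0.3          # ~30% initial connectivity
M = rewire(W, M, zeta=0.2)
print("active connections:", M.sum())
```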
10

Kumar, G. Prem, and P. Venkataram. "Network restoration using recurrent neural networks." International Journal of Network Management 8, no. 5 (September 1998): 264–73. http://dx.doi.org/10.1002/(sici)1099-1190(199809/10)8:5<264::aid-nem298>3.0.co;2-o.


Dissertations on the topic "Neural networks (Computer science)"

1

Landassuri, Moreno Victor Manuel. "Evolution of modular neural networks." Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3243/.

Abstract:
It is well known that the human brain is highly modular, having a structural and functional organization that allows the different regions of the brain to be reused for different cognitive processes. So far, this has not been fully addressed by artificial systems, and a better understanding of when and how modules emerge is required, with a broad framework indicating how modules could be reused within neural networks. This thesis provides a deep investigation of module formation, module communication (interaction) and module reuse during evolution for a variety of classification and prediction tasks. The evolutionary algorithm EPNet is used to deliver the evolution of artificial neural networks. In the first stage of this study, the EPNet algorithm is carefully studied to understand its basis and to ensure confidence in its behaviour. Thereafter, its input feature selection (required for module evolution) is optimized, showing the robustness of the improved algorithm compared with the fixed input case and previous publications. Then module emergence, communication and reuse are investigated with the modular EPNet (M-EPNet) algorithm, which uses the information provided by a modularity measure to implement new mutation operators that favour the evolution of modules, allowing a new perspective for analyzing modularity, module formation and module reuse during evolution. The results obtained extend those of previous work, indicating that pure-modular architectures may emerge at low connectivity values, where similar tasks may share (reuse) common neural elements creating compact representations, and that the more different two tasks are, the bigger the modularity obtained during evolution. Other results indicate that some neural structures may be reused when similar tasks are evolved, leading to module interaction during evolution.
2

Sloan, Cooper Stokes. "Neural bus networks." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119711.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 65-68).
Bus schedules are unreliable, leaving passengers waiting and increasing commute times. This problem can be solved by modeling the traffic network and delivering predicted arrival times to passengers. Previous attempts to model traffic networks have used historical, statistical, and learning-based models, with learning-based models achieving the best results. This research compares several neural network architectures trained on historical data from Boston buses. Three models are trained: a multilayer perceptron, a convolutional neural network, and a recurrent neural network. Recurrent neural networks show the best performance when compared to feed-forward models. This indicates that neural time-series models are effective at modeling bus networks. The large amount of data available for training bus network models and the effectiveness of large neural networks at modeling this data show that great progress can be made in improving commutes for passengers.
by Cooper Stokes Sloan.
M. Eng.
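To make the recurrent idea in this thesis concrete, here is a toy Elman-style step over a short history of observations; the dimensions, weights, and data are invented for illustration and bear no relation to the thesis models or the Boston bus data.

```python
# A hidden state is carried across time steps, so the prediction for the
# next arrival can depend on the whole history of observations.
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=8)          # input-to-hidden weights
W_h = rng.normal(size=(8, 8))      # hidden-to-hidden (recurrent) weights
W_out = rng.normal(size=8)         # hidden-to-output weights

def predict_next(series):
    h = np.zeros(8)
    for x in series:               # feed the history step by step
        h = np.tanh(x * W_in + h @ W_h)
    return float(h @ W_out)        # predicted next value

print(predict_next([3.1, 2.8, 3.4, 3.0]))  # e.g. past delays in minutes
```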
3

Khan, Altaf Hamid. "Feedforward neural networks with constrained weights." Thesis, University of Warwick, 1996. http://wrap.warwick.ac.uk/4332/.

Abstract:
The conventional multilayer feedforward network having continuous weights is expensive to implement in digital hardware. Two new types of networks are proposed which lend themselves to cost-effective implementations in hardware and have a fast forward-pass capability. These two differ from the conventional model in having extra constraints on their weights: the first allows its weights to take integer values in the range [-3,3] only, whereas the second restricts its synapses to the set {-1,0,1} while allowing unrestricted offsets. The benefits of the first configuration are in having weights which are only 3 bits deep and a multiplication operation requiring a maximum of one shift, one add, and one sign-change instruction. The advantages of the second are in having 1-bit synapses and a multiplication operation which consists of a single sign-change instruction. The procedure proposed for training these networks starts like the conventional error backpropagation procedure, but becomes more and more discretised in its behaviour as the network gets closer to an error minimum. Mainly based on steepest descent, it also has a perturbation mechanism to avoid getting trapped in local minima, and a novel mechanism for rounding off 'near integers'. It incorporates weight elimination implicitly, which simplifies the choice of the start-up network configuration for training. It is shown that the integer-weight network, although lacking the universal approximation capability, can implement learning tasks, especially classification tasks, to acceptable accuracies. A new theoretical result is presented which shows that the multiplier-free network is a universal approximator over the space of continuous functions of one variable. In light of experimental results it is conjectured that the same is true for functions of many variables. Decision and error surfaces are used to explore the discrete-weight approximation of continuous-weight networks using discretisation schemes other than integer weights. The results suggest that provided a suitable discretisation interval is chosen, a discrete-weight network can be found which performs as well as a continuous-weight network, but that it may require more hidden neurons than its conventional counterpart. Experiments are performed to compare the generalisation performances of the new networks with that of the conventional one using three very different benchmarks: the MONK's benchmark, a set of artificial tasks designed to compare the capabilities of learning algorithms, the 'onset of diabetes mellitus' prediction data set, a realistic set with very noisy attributes, and finally the handwritten numeral recognition database, a realistic but very structured data set. The results indicate that the new networks, despite having strong constraints on their weights, have generalisation performances similar to those of their conventional counterparts.
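The two weight constraints described in this abstract can be illustrated by the rounding step alone; the thesis trains with a gradually discretised backpropagation procedure, which is not reproduced here, and the ternary threshold below is an assumed value.

```python
# (a) integer weights clipped to [-3, 3] ("3-bit" weights);
# (b) ternary synapses in {-1, 0, 1} obtained by zeroing small weights.
import numpy as np

def to_integer_weights(w, limit=3):
    return np.clip(np.rint(w), -limit, limit).astype(int)

def to_ternary_weights(w, threshold=0.5):       # threshold is an assumed value
    return np.where(np.abs(w) < threshold, 0, np.sign(w)).astype(int)

w = np.array([-2.7, -0.3, 0.1, 0.8, 3.9])
print(to_integer_weights(w))    # [-3  0  0  1  3]
print(to_ternary_weights(w))    # [-1  0  0  1  1]
```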
4

Zaghloul, Waleed A. Lee Sang M. "Text mining using neural networks." Lincoln, Neb. : University of Nebraska-Lincoln, 2005. http://0-www.unl.edu.library.unl.edu/libr/Dissertations/2005/Zaghloul.pdf.

Abstract:
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2005.
Title from title screen (sites viewed on Oct. 18, 2005). PDF text: 100 p. : col. ill. Includes bibliographical references (p. 95-100 of dissertation).
5

Hadjifaradji, Saeed. "Learning algorithms for restricted neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0016/NQ48102.pdf.

6

Cheung, Ka Kit. "Neural networks for optimization." HKBU Institutional Repository, 2001. http://repository.hkbu.edu.hk/etd_ra/291.

7

Ahamed, Woakil Uddin. "Quantum recurrent neural networks for filtering." Thesis, University of Hull, 2009. http://hydra.hull.ac.uk/resources/hull:2411.

Abstract:
The essence of stochastic filtering is to compute the time-varying probability density function (pdf) for the measurements of the observed system. In this thesis, a filter is designed based on the principles of quantum mechanics, where the Schrodinger wave equation (SWE) plays the key part. This equation is transformed to fit into the neural network architecture. Each neuron in the network mediates a spatio-temporal field with a unified quantum activation function that aggregates the pdf information of the observed signals. The activation function is the result of the solution of the SWE. The incorporation of the SWE into the field of neural networks provides a framework which is so called the quantum recurrent neural network (QRNN). A filter based on this approach is categorized as an intelligent filter, as the underlying formulation is based on the analogy to a real neuron. In a QRNN filter, the interaction between the observed signal and the wave dynamics is governed by the SWE. A key issue, therefore, is achieving a solution of the SWE that ensures the stability of the numerical scheme. Another important aspect in designing this filter is the way the wave function transforms the observed signal through the network. This research has shown that there are two different ways (a normal wave and a calm wave, Chapter 5) this transformation can be achieved, and these wave packets play a critical role in the evolution of the pdf. In this context, this thesis has investigated the following issues: existing filtering approaches in the evolution of the pdf, the architecture of the QRNN, the method of solving the SWE, the numerical stability of the solution, and the propagation of the waves in the well. The methods developed in this thesis have been tested with relevant simulations. The filter has also been tested with some benchmark chaotic series along with applications to real-world situations. Suggestions are made for the scope of further developments.
8

Williams, Bryn V. "Evolutionary neural networks : models and applications." Thesis, Aston University, 1995. http://publications.aston.ac.uk/10635/.

Abstract:
The scaling problems which afflict attempts to optimise neural networks (NNs) with genetic algorithms (GAs) are disclosed. A novel GA-NN hybrid is introduced, based on the bumptree, a little-used connectionist model. As well as being computationally efficient, the bumptree is shown to be more amenable to genetic coding than other NN models. A hierarchical genetic coding scheme is developed for the bumptree and shown to have low redundancy, as well as being complete and closed with respect to the search space. When applied to optimising bumptree architectures for classification problems the GA discovers bumptrees which significantly out-perform those constructed using a standard algorithm. The fields of artificial life, control and robotics are identified as likely application areas for the evolutionary optimisation of NNs. An artificial life case-study is presented and discussed. Experiments are reported which show that the GA-bumptree is able to learn simulated pole balancing and car parking tasks using only limited environmental feedback. A simple modification of the fitness function allows the GA-bumptree to learn mappings which are multi-modal, such as robot arm inverse kinematics. The dynamics of the 'geographic speciation' selection model used by the GA-bumptree are investigated empirically and the convergence profile is introduced as an analytical tool. The relationships between the rate of genetic convergence and the phenomena of speciation, genetic drift and punctuated equilibrium are discussed. The importance of genetic linkage to GA design is discussed and two new recombination operators are introduced. The first, linkage mapped crossover (LMX), is shown to be a generalisation of existing crossover operators. LMX provides a new framework for incorporating prior knowledge into GAs. Its adaptive form, ALMX, is shown to be able to infer linkage relationships automatically during genetic search.
9

De, Jongh Albert. "Neural network ensembles." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/50035.

Abstract:
Thesis (MSc)--Stellenbosch University, 2004.
ENGLISH ABSTRACT: It is possible to improve on the accuracy of a single neural network by using an ensemble of diverse and accurate networks. This thesis explores diversity in ensembles and looks at the underlying theory and mechanisms employed to generate and combine ensemble members. Bagging and boosting are studied in detail and I explain their success in terms of well-known theoretical instruments. An empirical evaluation of their performance is conducted and I compare them to a single classifier and to each other in terms of accuracy and diversity.
AFRIKAANSE OPSOMMING (translated): It is possible to improve on the accuracy of a single neural network by using an ensemble of diverse and accurate networks. This thesis investigates diversity in ensembles, as well as the mechanisms by which ensemble members can be created and combined. The bagging and boosting algorithms are studied in depth, and their success is explained in terms of well-known theoretical instruments. The performance of these two algorithms is measured experimentally, and their accuracy and diversity are compared with those of a single network.
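A minimal bagging sketch in the spirit of this thesis, with small neural networks trained on bootstrap resamples and combined by majority vote, is shown below; the dataset, ensemble size, and hyperparameters are placeholders rather than the thesis setup.

```python
# Bagging by hand: bootstrap resamples, one small network per resample,
# majority vote over the members' predictions (binary labels 0/1).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
members = []
for i in range(7):                                    # 7 ensemble members
    idx = rng.integers(0, len(X_tr), size=len(X_tr))  # bootstrap resample
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=800, random_state=i)
    members.append(net.fit(X_tr[idx], y_tr[idx]))

votes = np.stack([m.predict(X_te) for m in members])
majority = (votes.mean(axis=0) > 0.5).astype(int)     # majority vote
print("ensemble accuracy:", (majority == y_te).mean())
```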
10

Lee, Ji Young Ph D. Massachusetts Institute of Technology. "Information extraction with neural networks." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111905.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 85-97).
Electronic health records (EHRs) have been widely adopted, and are a gold mine for clinical research. However, EHRs, especially their text components, remain largely unexplored due to the fact that they must be de-identified prior to any medical investigation. Existing systems for de-identification rely on manual rules or features, which are time-consuming to develop and fine-tune for new datasets. In this thesis, we propose the first de-identification system based on artificial neural networks (ANNs), which achieves state-of-the-art results without any human-engineered features. The ANN architecture is extended to incorporate features, further improving the de-identification performance. Under practical considerations, we explore transfer learning to take advantage of large annotated dataset to improve the performance on datasets with limited number of annotations. The ANN-based system is publicly released as an easy-to-use software package for general purpose named-entity recognition as well as de-identification. Finally, we present an ANN architecture for relation extraction, which ranked first in the SemEval-2017 task 10 (ScienceIE) for relation extraction in scientific articles (subtask C).
by Ji Young Lee.
Ph. D.

Books on the topic "Neural networks (Computer science)"

1

Dominique, Valentin, and Edelman Betty, eds. Neural networks. Thousand Oaks, Calif: Sage Publications, 1999.

2

1931-, Taylor John, and UNICOM Seminars, eds. Neural networks. Henley-on-Thames: A. Waller, 1995.

3

1948-, Vandewalle J., and Roska T, eds. Cellular neural networks. Chichester [England]: Wiley, 1993.

4

Bischof, Horst. Pyramidal neural networks. Mahwah, NJ: Lawrence Erlbaum Associates, 1995.

5

Kwon, Seoyun J. Artificial neural networks. Hauppauge, N.Y: Nova Science Publishers, 2010.

6

Hoffmann, Norbert. Simulating neural networks. Wiesbaden: Vieweg, 1994.

7

Maass, Wolfgang, 1949 Aug. 21- and Bishop Christopher M, eds. Pulsed neural networks. Cambridge, Mass: MIT Press, 1999.

8

Caudill, Maureen. Understanding neural networks: Computer explorations. Cambridge, Mass: MIT Press, 1993.

9

Hu, Xiaolin, and P. Balasubramaniam. Recurrent neural networks. Rijeka, Croatia: InTech, 2008.

10

Baram, Yoram. Nested neural networks. Moffett Field, Calif: National Aeronautics and Space Administration, Ames Research Center, 1988.


Book chapters on the topic "Neural networks (Computer science)"

1

ElAarag, Hala. "Neural Networks." In SpringerBriefs in Computer Science, 11–16. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-4893-7_3.

2

Siegelmann, Hava T. "Recurrent neural networks." In Computer Science Today, 29–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0015235.

3

Yan, Wei Qi. "Convolutional Neural Networks and Recurrent Neural Networks." In Texts in Computer Science, 69–124. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-4823-9_3.

4

Ertel, Wolfgang. "Neural Networks." In Undergraduate Topics in Computer Science, 221–56. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-299-5_9.

5

Ertel, Wolfgang. "Neural Networks." In Undergraduate Topics in Computer Science, 245–87. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58487-4_9.

6

Feldman, Jerome A. "Neural Networks and Computer Science." In Opportunities and Constraints of Parallel Computing, 37–38. New York, NY: Springer US, 1989. http://dx.doi.org/10.1007/978-1-4613-9668-0_10.

7

Kruse, Rudolf, Christian Borgelt, Christian Braune, Sanaz Mostaghim, and Matthias Steinbrecher. "General Neural Networks." In Texts in Computer Science, 37–46. London: Springer London, 2016. http://dx.doi.org/10.1007/978-1-4471-7296-3_4.

8

Kruse, Rudolf, Christian Borgelt, Frank Klawonn, Christian Moewes, Matthias Steinbrecher, and Pascal Held. "General Neural Networks." In Texts in Computer Science, 37–46. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-5013-8_4.

9

Kruse, Rudolf, Sanaz Mostaghim, Christian Borgelt, Christian Braune, and Matthias Steinbrecher. "General Neural Networks." In Texts in Computer Science, 39–52. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-42227-1_4.

10

Betti, Alessandro, Marco Gori, and Stefano Melacci. "Foveated Neural Networks." In SpringerBriefs in Computer Science, 63–72. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-90987-1_4.


Conference papers on the topic "Neural networks (Computer science)"

1

Doncow, Sergey, Leonid Orbachevskyi, Valentin Birukow, and Nina V. Stepanova. "Artificial Kohonen's neural networks for computer capillarometry." In Optical Information Science and Technology, edited by Andrei L. Mikaelian. SPIE, 1998. http://dx.doi.org/10.1117/12.304962.

2

Nowak, Jakub, Marcin Korytkowski, and Rafał Scherer. "Classification of Computer Network Users with Convolutional Neural Networks." In 2018 Federated Conference on Computer Science and Information Systems. IEEE, 2018. http://dx.doi.org/10.15439/2018f321.

3

Shastri, Bhavin J., Volker Sorger, and Nir Rotenberg. "In situ Training of Silicon Photonic Neural Networks: from Classical to Quantum." In CLEO: Science and Innovations. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/cleo_si.2023.sm4j.1.

Abstract:
Photonic neural networks perform ultrafast inference operations but are trained on slow computers. We highlight on-chip network training enabled by silicon photonics. We introduce quantum photonic neural networks and discuss the role of weak nonlinearities.
4

Dias, L. P., J. J. F. Cerqueira, K. D. R. Assis, and R. C. Almeida. "Using artificial neural network in intrusion detection systems to computer networks." In 2017 9th Computer Science and Electronic Engineering (CEEC). IEEE, 2017. http://dx.doi.org/10.1109/ceec.2017.8101615.

5

Araújo, Georger, and Célia Ralha. "Computer Forensic Document Clustering with ART1 Neural Networks." In The Sixth International Conference on Forensic Computer Science. ABEAT, 2011. http://dx.doi.org/10.5769/c2011011.

6

Wang, Huiran, and Ruifang Ma. "Optimization of Neural Networks for Network Intrusion Detection." In 2009 First International Workshop on Education Technology and Computer Science. IEEE, 2009. http://dx.doi.org/10.1109/etcs.2009.102.

7

Eilermann, Sebastian, Christoph Petroll, Philipp Hoefer, and Oliver Niggemann. "3D Multi-Criteria Design Generation and Optimization of an Engine Mount for an Unmanned Air Vehicle Using a Conditional Variational Autoencoder." In Computer Science Research Notes. University of West Bohemia, Czech Republic, 2024. http://dx.doi.org/10.24132/csrn.3401.22.

Abstract:
One of the most promising developments in computer vision in recent years is the use of generative neural networks for functionality-condition-based 3D design reconstruction and generation. Here, neural networks learn dependencies between functionalities and a geometry in a very effective way. For a neural network the functionalities are translated into conditions on a certain geometry. But the more conditions the design generation needs to reflect, the more difficult it is to learn clear dependencies. This leads to a multi-criteria design problem due to the various conditions, which are not considered in the neural network structure so far. In this paper, we address this multi-criteria challenge for a 3D design use case related to an unmanned aerial vehicle (UAV) motor mount. We generate 10,000 abstract 3D designs and subject them all to simulations for three physical disciplines: mechanics, thermodynamics, and aerodynamics. Then, we train a Conditional Variational Autoencoder (CVAE) using the geometry and corresponding multi-criteria functional constraints as input. We use our trained CVAE as well as the Marching cubes algorithm to generate meshes for simulation-based evaluation. The results are then evaluated with the generated UAV designs. Subsequently, we demonstrate the ability to generate optimized designs under self-defined functionality conditions using the trained neural network.
8

Sakas, D. P., D. S. Vlachos, T. E. Simos, Theodore E. Simos, and George Psihoyios. "Fuzzy Neural Networks for Decision Support in Negotiation." In INTERNATIONAL ELECTRONIC CONFERENCE ON COMPUTER SCIENCE. AIP, 2008. http://dx.doi.org/10.1063/1.3037115.

9

"Speech Emotion Recognition using Convolutional Neural Networks and Recurrent Neural Networks with Attention Model." In 2019 the 9th International Workshop on Computer Science and Engineering. WCSE, 2019. http://dx.doi.org/10.18178/wcse.2019.06.044.

10

Čajić, Elvir, Irma Ibrišimović, Alma Šehanović, Damir Bajrić, and Julija Ščekić. "Fuzzy Logic And Neural Networks For Disease Detection And Simulation In Matlab." In 9th International Conference on Computer Science, Engineering and Applications. Academy & Industry Research Collaboration Center, 2023. http://dx.doi.org/10.5121/csit.2023.132302.

Abstract:
This paper investigates the integration of fuzzy logic and neural networks for disease detection using the Matlab environment. Disease detection is key in medical diagnostics, and the combination of fuzzy logic and neural networks offers an advanced methodology for the analysis and interpretation of medical data. Fuzzy logic is used for modeling and resolving uncertainty in diagnostic processes, while neural networks are applied for in-depth processing and analysis of images relevant to disease diagnosis. This paper demonstrates the development and implementation of a simulation system in Matlab, using real medical data and images of organs for the purpose of detecting specific diseases, with a special focus on the application in the diagnosis of kidney diseases. By combining fuzzy logic and neural networks, the simulation offers precision and robustness in the diagnosis process, opening the door to advanced medical information systems.
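The paper itself works in Matlab; purely as an illustration of the fuzzy-logic ingredient it combines with neural networks, triangular membership functions can be sketched in a few lines of Python. The variable and breakpoints below are hypothetical, not clinical values.

```python
# Triangular membership functions turn a crisp measurement into degrees of
# membership ("low"/"normal"/"high") that a rule base or classifier can use.
import numpy as np

def triangular(x, a, b, c):
    # rises linearly from a to b, falls linearly from b to c
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

creatinine = 1.4                            # hypothetical lab value (mg/dL)
print("low:",    triangular(creatinine, 0.0, 0.6, 1.0))
print("normal:", triangular(creatinine, 0.6, 1.0, 1.4))
print("high:",   triangular(creatinine, 1.0, 1.6, 3.0))
```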

Reports of organizations on the topic "Neural networks (Computer science)"

1

Markova, Oksana, Serhiy Semerikov, and Maiia Popel. CoCalc as a Learning Tool for Neural Network Simulation in the Special Course “Foundations of Mathematic Informatics”. Sun SITE Central Europe, May 2018. http://dx.doi.org/10.31812/0564/2250.

Abstract:
The role of neural network modeling in the learning content of the special course “Foundations of Mathematic Informatics” was discussed. The course was developed for the students of technical universities – future IT specialists – and directed at bridging the gap between theoretical computer science and its applied applications: software, system and computing engineering. CoCalc was justified as a learning tool of mathematical informatics in general and neural network modeling in particular. The elements of a technique of using CoCalc at studying the topic “Neural network and pattern recognition” of the special course “Foundations of Mathematic Informatics” are shown. The program code was presented in the CoffeeScript language, which implements the basic components of an artificial neural network: neurons, synaptic connections, functions of activation (tangential, sigmoid, stepped) and their derivatives, methods of calculating the network's weights, etc. The features of the Kolmogorov–Arnold representation theorem application were discussed for determining the architecture of multilayer neural networks. The implementation of the disjunctive logical element and the approximation of an arbitrary function using a three-layer neural network were given as examples. According to the simulation results, a conclusion was made as to the limits of use within which the constructed networks retain their adequacy. Framework topics for individual research on artificial neural networks are proposed.
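The abstract lists the basic components implemented in CoffeeScript for the course (activation functions and their derivatives, among others). A rough Python equivalent of just those activation pieces, for orientation only:

```python
# Activation functions and derivatives of the kind listed in the abstract:
# sigmoid, hyperbolic tangent, and a step function.
import numpy as np

def sigmoid(x):   return 1.0 / (1.0 + np.exp(-x))
def d_sigmoid(x): s = sigmoid(x); return s * (1.0 - s)

def tanh(x):      return np.tanh(x)
def d_tanh(x):    return 1.0 - np.tanh(x) ** 2

def step(x):      return np.where(x >= 0.0, 1.0, 0.0)   # no useful derivative

x = np.linspace(-2, 2, 5)
print(sigmoid(x), d_sigmoid(x), tanh(x), d_tanh(x), step(x), sep="\n")
```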
2

Semerikov, Serhiy, Illia Teplytskyi, Yuliia Yechkalo, Oksana Markova, Vladimir Soloviev, and Arnold Kiv. Computer Simulation of Neural Networks Using Spreadsheets: Dr. Anderson, Welcome Back. [б. в.], June 2019. http://dx.doi.org/10.31812/123456789/3178.

Abstract:
The authors of the given article continue the series presented by the 2018 paper “Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot”. This time, they consider mathematical informatics as the basis of higher engineering education fundamentalization. Mathematical informatics deals with smart simulation, information security, long-term data storage and big data management, artificial intelligence systems, etc. The authors suggest studying basic principles of mathematical informatics by applying cloud-oriented means of various levels including those traditionally considered supplementary – spreadsheets. The article considers ways of building neural network models in cloud-oriented spreadsheets, Google Sheets. The model is based on the problem of classifying multi-dimensional data provided in “The Use of Multiple Measurements in Taxonomic Problems” by R. A. Fisher. Edgar Anderson’s role in collecting and preparing the data in the 1920s-1930s is discussed as well as some peculiarities of data selection. There are presented data on the method of multi-dimensional data presentation in the form of an ideograph developed by Anderson and considered one of the first efficient ways of data visualization.
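As a small companion to the classification problem discussed above, the sketch below fits a tiny neural network to Fisher's iris measurements (the data collected largely by Edgar Anderson); scikit-learn stands in for the spreadsheet model and is not the authors' implementation.

```python
# Iris classification with a small neural network as a stand-in for the
# spreadsheet-based model described in the abstract.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                  random_state=0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```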
3

Grossberg, Stephen. Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics. Fort Belvoir, VA: Defense Technical Information Center, October 1987. http://dx.doi.org/10.21236/ada189981.

4

Semerikov, Serhiy O., Illia O. Teplytskyi, Yuliia V. Yechkalo, and Arnold E. Kiv. Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot. [б. в.], November 2018. http://dx.doi.org/10.31812/123456789/2648.

Abstract:
The article substantiates the necessity to develop training methods of computer simulation of neural networks in the spreadsheet environment. The systematic review of their application to simulating artificial neural networks is performed. The authors distinguish basic approaches to solving the problem of network computer simulation training in the spreadsheet environment, joint application of spreadsheets and tools of neural network simulation, application of third-party add-ins to spreadsheets, development of macros using the embedded languages of spreadsheets; use of standard spreadsheet add-ins for non-linear optimization, creation of neural networks in the spreadsheet environment without add-ins and macros. After analyzing a collection of writings of 1890-1950, the research determines the role of the scientific journal “Bulletin of Mathematical Biophysics”, its founder Nicolas Rashevsky and the scientific community around the journal in creating and developing models and methods of computational neuroscience. There are identified psychophysical basics of creating neural networks, mathematical foundations of neural computing and methods of neuroengineering (image recognition, in particular). The role of Walter Pitts in combining the descriptive and quantitative theories of training is discussed. It is shown that to acquire neural simulation competences in the spreadsheet environment, one should master the models based on the historical and genetic approach. It is indicated that there are three groups of models, which are promising in terms of developing corresponding methods – the continuous two-factor model of Rashevsky, the discrete model of McCulloch and Pitts, and the discrete-continuous models of Householder and Landahl.
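The McCulloch-Pitts model mentioned above is simple enough to state in a few lines: a binary threshold unit with fixed weights. The sketch below realises AND and OR gates with it; it illustrates the historical model rather than anything specific to the spreadsheet implementations discussed in the report.

```python
# McCulloch-Pitts unit: binary inputs, fixed weights, a threshold, output 1
# when the weighted sum reaches the threshold. Weights (1, 1) with threshold
# 2 give logical AND; threshold 1 gives OR.
def mcculloch_pitts(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b,
              "AND:", mcculloch_pitts((a, b), (1, 1), 2),
              "OR:",  mcculloch_pitts((a, b), (1, 1), 1))
```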
5

Farhi, Edward, and Hartmut Neven. Classification with Quantum Neural Networks on Near Term Processors. Web of Open Science, December 2020. http://dx.doi.org/10.37686/qrl.v1i2.80.

Abstract:
We introduce a quantum neural network, QNN, that can represent labeled data, classical or quantum, and be trained by supervised learning. The quantum circuit consists of a sequence of parameter dependent unitary transformations which acts on an input quantum state. For binary classification a single Pauli operator is measured on a designated readout qubit. The measured output is the quantum neural network’s predictor of the binary label of the input state. We show through classical simulation that parameters can be found that allow the QNN to learn to correctly distinguish the two data sets. We then discuss presenting the data as quantum superpositions of computational basis states corresponding to different label values. Here we show through simulation that learning is possible. We consider using our QNN to learn the label of a general quantum state. By example we show that this can be done. Our work is exploratory and relies on the classical simulation of small quantum systems. The QNN proposed here was designed with near-term quantum processors in mind. Therefore it will be possible to run this QNN on a near term gate model quantum computer where its power can be explored beyond what can be explored with simulation.
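A toy classical simulation in the spirit of this abstract, encoding one scalar feature as a qubit rotation, applying a single trainable rotation, and reading out the Pauli-Z expectation as the label predictor, can be written as follows; it is a didactic sketch with made-up parameter and data values, not the circuit family proposed in the paper.

```python
# One-qubit "QNN" simulation: |psi> = Ry(theta) Ry(x) |0>, predictor = <Z>.
# Since two Ry rotations compose, <Z> = cos(x + theta).
import numpy as np

def z_expectation(x, theta):
    angle = x + theta
    state = np.array([np.cos(angle / 2.0), np.sin(angle / 2.0)])
    Z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(state @ Z @ state)

theta = 0.3                       # a trainable parameter (assumed value)
for x, label in [(0.2, +1), (2.8, -1)]:
    pred = int(np.sign(z_expectation(x, theta)))
    print(f"x={x:+.1f}  <Z>={z_expectation(x, theta):+.3f}  "
          f"predicted={pred:+d}  true={label:+d}")
```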
6

Willson. L51756 State of the Art Intelligent Control for Large Engines. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), September 1996. http://dx.doi.org/10.55274/r0010423.

Abstract:
Computers have become a vital part of the control of pipeline compressors and compressor stations. For many tasks, computers have helped to improve accuracy, reliability, and safety, and have reduced operating costs. Computers excel at repetitive, precise tasks that humans perform poorly - calculation, measurement, statistical analysis, control, etc. Computers are used to perform these type of precise tasks at compressor stations: engine / turbine speed control, ignition control, horsepower estimation, or control of complicated sequences of events during startup and/or shutdown. For other tasks, however, computers perform very poorly at tasks that humans find to be trivial. A discussion of the differences in the way humans and computer process information is crucial to an understanding of the field of artificial intelligence. In this project, several artificial intelligence/ intelligent control systems were examined: heuristic search techniques, adaptive control, expert systems, fuzzy logic, neural networks, and genetic algorithms. Of these, neural networks showed the most potential for use on large bore engines because of their ability to recognize patterns in incomplete, noisy data. Two sets of experimental tests were conducted to test the predictive capabilities of neural networks. The first involved predicting the ignition timing from combustion pressure histories; the best networks responded within a specified tolerance level 90% to 98.8% of the time. In the second experiment, neural networks were used to predict NOx, A/F ratio, and fuel consumption. NOx prediction accuracy was 91.4%, A/F ratio accuracy was 82.9%, and fuel consumption accuracy was 52.9%. This report documents the assessment of the state of the art of artificial intelligence for application to the monitoring and control of large-bore natural gas engines.
7

Modlo, Yevhenii O., Serhiy O. Semerikov, Ruslan P. Shajda, Stanislav T. Tolmachev, and Oksana M. Markova. Methods of using mobile Internet devices in the formation of the general professional component of bachelor in electromechanics competency in modeling of technical objects. [б. в.], July 2020. http://dx.doi.org/10.31812/123456789/3878.

Abstract:
The article describes the components of methods of using mobile Internet devices in the formation of the general professional component of bachelor in electromechanics competency in modeling of technical objects: using various methods of representing models; solving professional problems using ICT; competence in electric machines and critical thinking. On the content of learning academic disciplines “Higher mathematics”, “Automatic control theory”, “Modeling of electromechanical systems”, “Electrical machines” features of use are disclosed for Scilab, SageCell, Google Sheets, Xcos on Cloud in the formation of the general professional component of bachelor in electromechanics competency in modeling of technical objects. It is concluded that it is advisable to use the following software for mobile Internet devices: a cloud-based spreadsheets as modeling tools (including neural networks), a visual modeling systems as a means of structural modeling of technical objects; a mobile computer mathematical system used at all stages of modeling; a mobile communication tools for organizing joint modeling activities.
8

SAINI, RAVINDER, AbdulKhaliq Alshadid, and Lujain Aldosari. Investigation on the application of artificial intelligence in prosthodontics. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, December 2022. http://dx.doi.org/10.37766/inplasy2022.12.0096.

Abstract:
Review question / Objective: 1. Which artificial intelligence techniques are practiced in dentistry? 2. How is AI improving the diagnosis, clinical decision making, and outcome of dental treatment? 3. What are the current clinical applications and diagnostic performance of AI in the field of prosthodontics? Condition being studied: Procedures for desktop design and fabrication, computer-aided design and manufacturing (CAD/CAM) in particular, have made their way into routine healthcare and laboratory practice. Based on flat imagery, artificial intelligence may also be utilized to forecast the debonding of dental repairs. Dental arches in removable prosthodontics may be categorized using convolutional neural networks (CNN). By properly positioning the teeth, machine learning in CAD/CAM software can re-establish healthy inter-maxillary connections. AI may assist with accurate color matching in challenging cosmetic scenarios that involve a single central incisor or many front teeth. Intraoral detectors can identify implant placements in implant prosthodontics and instantly input them into CAD software. The design and execution of dental implants could potentially be improved by utilizing AI.
9

Johansen, Richard, Alan Katzenmeyer, Kaytee Pokrzywinski, and Molly Reif. A review of sensor-based approaches for monitoring rapid response treatments of cyanoHABs. Engineer Research and Development Center (U.S.), July 2023. http://dx.doi.org/10.21079/11681/47261.

Abstract:
Water quality sensors are dynamic and vary greatly both in terms of utility and data acquisition. Data collection can range from single-parameter and one-dimensional to highly complex multiparameter spatiotemporal. Likewise, the analytical and statistical approaches range from relatively simple (e.g., linear regression) to more complex (e.g., artificial neural networks). Therefore, the decision to implement a particular water quality monitoring strategy is dependent upon many factors and varies widely. The purpose of this review was to document the current scientific literature to identify and compile approaches for water quality monitoring as well as statistical methodologies required to analyze and visualize highly diverse spatiotemporal water quality data. The literature review identified two broad categories: (1) sensor-based approaches for monitoring rapid response treatments of cyanobacterial harmful algal blooms (cyanoHABs), and (2) analytical tools and techniques to analyze complex high resolution spatial and temporal water quality data. The ultimate goal of this review is to provide the current state of the science as an array of scalable approaches, spanning from simple and practical to complex and comprehensive, and thus, equipping the US Army Corps of Engineers (USACE) water quality managers with options for technology-analysis combinations that best fit their needs.
10

Seginer, Ido, James Jones, Per-Olof Gutman, and Eduardo Vallejos. Optimal Environmental Control for Indeterminate Greenhouse Crops. United States Department of Agriculture, August 1997. http://dx.doi.org/10.32747/1997.7613034.bard.

Abstract:
Increased world competition, as well as increased concern for the environment, drive all manufacturing systems, including greenhouses, towards high-precision operation. Optimal control is an important tool to achieve this goal, since it finds the best compromise between conflicting demands, such as higher profits and environmental concerns. The report, which is a collection of papers, each with its own abstract, outlines an approach for optimal, model-based control of the greenhouse environment. A reliable crop model is essential for this approach and a significant portion of the effort went in this direction, resulting in a radically new version of the tomato model TOMGRO, which can be used as a prototype model for other greenhouse crops. Truly optimal control of a very complex system requires prohibitively large computer resources. Two routes to model simplification have, therefore, been tried: model reduction (to fewer state variables) and simplified decision making. Crop model reduction from nearly 70 state variables to about 5 was accomplished by either selecting a subset of the original variables or by forming combinations of them. Model dynamics were then fitted either with mechanistic relationships or with neural networks. To simplify the decision-making process, the number of costate variables (control policy parameters) was reduced to one or two. The dry-matter state variable was transformed in such a way that its costate became essentially constant throughout the season. A quasi-steady-state control algorithm was implemented in an experimental greenhouse. A constant value for the dry-matter costate was able to control simultaneously ventilation and CO2 enrichment by continuously producing weather-dependent optimal setpoints and then maintaining them closely.