A selection of scholarly literature on the topic "Neural networks (Computer science)"

Create a reference in APA, MLA, Chicago, Harvard, and other citation styles

Choose a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Neural networks (Computer science)".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work is generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Neural networks (Computer science)"

1

Mijwel, Maad M., Adam Esen, and Aysar Shamil. "Overview of Neural Networks." Babylonian Journal of Machine Learning 2023 (August 11, 2023): 42–45. http://dx.doi.org/10.58496/bjml/2023/008.

Full text of the source
Abstract:
Since it was confirmed and verified that the human nervous system consists of individual cells, which were later called neurons, and it was discovered that these cells connect with each other to form an extensive communication network, a large number of possibilities have been opened for application in multiple disciplines in areas of knowledge. Neural Networks are created to perform tasks such as pattern recognition, classification, regression, and many other functions that serve humans and are an essential component in the field of machine learning and artificial intelligence. In computer science, progress has been made, and computers are supposed to learn how to solve problems like that of the human brain. Through pre-established examples, the computer must be able to provide solutions to issues that are like those presented during training. This article overviews neural networks and their application in developing computer systems.
APA, Harvard, Vancouver, ISO, and other citation styles
2

Cottrell, G. W. "COMPUTER SCIENCE: New Life for Neural Networks." Science 313, no. 5786 (July 28, 2006): 454–55. http://dx.doi.org/10.1126/science.1129813.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Li, Xiao Guang. "Research on the Development and Applications of Artificial Neural Networks." Applied Mechanics and Materials 556-562 (May 2014): 6011–14. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.6011.

Full text of the source
Abstract:
Intelligent control is a class of control techniques that use various AI computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation and genetic algorithms. In computer science and related fields, artificial neural networks are computational models inspired by animals’ central nervous systems (in particular the brain) that are capable of machine learning and pattern recognition. They are usually presented as systems of interconnected “neurons” that can compute values from inputs by feeding information through the network. Like other machine learning methods, neural networks have been used to solve a wide variety of tasks that are hard to solve using ordinary rule-based programming, including computer vision and speech recognition.
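As a minimal illustration of the "interconnected neurons" described above (not code from the cited paper), the sketch below feeds an input vector through two randomly initialised layers with a sigmoid activation; the layer sizes and weights are arbitrary choices for the example.

```python
import numpy as np

def sigmoid(x):
    # Squashing activation applied element-wise to each neuron's weighted sum
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, layers):
    # Feed the input through each layer: multiply by weights, add bias, activate
    for W, b in layers:
        x = sigmoid(W @ x + b)
    return x

rng = np.random.default_rng(0)
# A 3-4-2 network with randomly initialised weights (illustrative only)
layers = [
    (rng.normal(size=(4, 3)), rng.normal(size=4)),
    (rng.normal(size=(2, 4)), rng.normal(size=2)),
]
print(forward(np.array([0.5, -1.2, 3.0]), layers))  # two output activations
```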
APA, Harvard, Vancouver, ISO, and other citation styles
4

Schöneburg, E. "Neural networks hunt computer viruses." Neurocomputing 2, no. 5-6 (July 1991): 243–48. http://dx.doi.org/10.1016/0925-2312(91)90027-9.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Turega, M. A. "Neural Networks." Computer Journal 35, no. 3 (June 1, 1992): 290. http://dx.doi.org/10.1093/comjnl/35.3.290.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Widrow, Bernard, David E. Rumelhart, and Michael A. Lehr. "Neural networks." Communications of the ACM 37, no. 3 (March 1994): 93–105. http://dx.doi.org/10.1145/175247.175257.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Begum, Afsana, Md Masiur Rahman, and Sohana Jahan. "Medical diagnosis using artificial neural networks." Mathematics in Applied Sciences and Engineering 5, no. 2 (June 4, 2024): 149–64. http://dx.doi.org/10.5206/mase/17138.

Full text of the source
Abstract:
Medical diagnosis using Artificial Neural Networks (ANN) and computer-aided diagnosis with deep learning is currently a very active research area in medical science. In recent years, for medical diagnosis, neural network models are broadly considered since they are ideal for recognizing different kinds of diseases including autism, cancer, tumor lung infection, etc. It is evident that early diagnosis of any disease is vital for successful treatment and improved survival rates. In this research, five neural networks, Multilayer neural network (MLNN), Probabilistic neural network (PNN), Learning vector quantization neural network (LVQNN), Generalized regression neural network (GRNN), and Radial basis function neural network (RBFNN) have been explored. These networks are applied to several benchmarking data collected from the University of California Irvine (UCI) Machine Learning Repository. Results from numerical experiments indicate that each network excels at recognizing specific physical issues. In the majority of cases, both the Learning Vector Quantization Neural Network and the Probabilistic Neural Network demonstrate superior performance compared to the other networks.
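The benchmarking protocol described in this abstract can be approximated as in the sketch below. The specific networks studied (PNN, LVQNN, GRNN, RBFNN) are not available in scikit-learn, so a multilayer network and a nearest-neighbour baseline stand in for them on a UCI dataset that ships with the library; the comparison is illustrative only.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A UCI dataset bundled with scikit-learn stands in for the paper's benchmarks
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "multilayer network": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
    "nearest-neighbour baseline": make_pipeline(
        StandardScaler(),
        KNeighborsClassifier(n_neighbors=5)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))
```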
APA, Harvard, Vancouver, ISO, and other citation styles
8

Yen, Gary G., and Haiming Lu. "Hierarchical Rank Density Genetic Algorithm for Radial-Basis Function Neural Network Design." International Journal of Computational Intelligence and Applications 03, no. 03 (September 2003): 213–32. http://dx.doi.org/10.1142/s1469026803000975.

Full text of the source
Abstract:
In this paper, we propose a genetic algorithm based design procedure for a radial-basis function neural network. A Hierarchical Rank Density Genetic Algorithm (HRDGA) is used to evolve the neural network's topology and parameters simultaneously. Compared with traditional genetic algorithm based designs for neural networks, the hierarchical approach addresses several deficiencies highlighted in literature. In addition, the rank-density based fitness assignment technique is used to optimize the performance and topology of the evolved neural network to deal with the confliction between the training performance and network complexity. Instead of producing a single optimal solution, HRDGA provides a set of near-optimal neural networks to the designers so that they can have more flexibility for the final decision-making based on certain preferences. In terms of searching for a near-complete set of candidate networks with high performances, the networks designed by the proposed algorithm prove to be competitive, or even superior, to three other traditional radial-basis function networks for predicting Mackey–Glass chaotic time series.
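The radial-basis function network being evolved can be pictured with the short sketch below: one candidate topology (number and placement of Gaussian centers) is fitted by least squares and scored by its training error, one ingredient a genetic algorithm such as HRDGA would trade off against network complexity. A toy sine series replaces the Mackey–Glass data, and the evolutionary search itself is not implemented here.

```python
import numpy as np

def rbf_design_matrix(x, centers, width):
    # Gaussian basis functions; a GA such as HRDGA would evolve the number,
    # positions, and widths of these centers together with the output weights
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + 0.05 * rng.normal(size=x.size)   # toy series instead of Mackey-Glass

centers = np.linspace(0, 2 * np.pi, 10)          # one candidate topology
Phi = rbf_design_matrix(x, centers, width=0.5)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # fit output weights by least squares
mse = np.mean((Phi @ w - y) ** 2)                # fitness term a GA could minimise
print("training MSE for this candidate network:", round(float(mse), 5))
```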
APA, Harvard, Vancouver, ISO, and other citation styles
9

Cavallaro, Lucia, Ovidiu Bagdasar, Pasquale De Meo, Giacomo Fiumara, and Antonio Liotta. "Artificial neural networks training acceleration through network science strategies." Soft Computing 24, no. 23 (September 9, 2020): 17787–95. http://dx.doi.org/10.1007/s00500-020-05302-y.

Full text of the source
Abstract:
The development of deep learning has led to a dramatic increase in the number of applications of artificial intelligence. However, the training of deeper neural networks for stable and accurate models translates into artificial neural networks (ANNs) that become unmanageable as the number of features increases. This work extends our earlier study where we explored the acceleration effects obtained by enforcing, in turn, scale freeness, small worldness, and sparsity during the ANN training process. The efficiency of that approach was confirmed by recent studies (conducted independently) where a million-node ANN was trained on non-specialized laptops. Encouraged by those results, our study is now focused on some tunable parameters, to pursue a further acceleration effect. We show that, although optimal parameter tuning is unfeasible, due to the high non-linearity of ANN problems, we can actually come up with a set of useful guidelines that lead to speed-ups in practical cases. We find that significant reductions in execution time can generally be achieved by setting the revised fraction parameter (ζ) to relatively low values.
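A generic prune-and-regrow step of the kind used in sparsity-enforcing training illustrates the role of the revised fraction parameter ζ; the function below is a sketch under that assumption, not the authors' implementation.

```python
import numpy as np

def rewire(W, zeta, rng):
    """Prune the fraction `zeta` of the smallest-magnitude nonzero weights and
    regrow the same number at random empty positions (generic sparse-training step)."""
    nz_rows, nz_cols = np.nonzero(W)
    k = int(zeta * nz_rows.size)
    if k == 0:
        return W
    order = np.argsort(np.abs(W[nz_rows, nz_cols]))[:k]   # weakest connections
    W[nz_rows[order], nz_cols[order]] = 0.0
    empty_rows, empty_cols = np.nonzero(W == 0.0)
    pick = rng.choice(empty_rows.size, size=k, replace=False)
    W[empty_rows[pick], empty_cols[pick]] = rng.normal(scale=0.1, size=k)
    return W

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8)) * (rng.random((8, 8)) < 0.2)   # sparse layer, ~20% density
W = rewire(W, zeta=0.3, rng=rng)                           # low zeta = few rewired links
print("density after rewiring:", np.count_nonzero(W) / W.size)
```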
APA, Harvard, Vancouver, ISO, and other citation styles
10

Kumar, G. Prem, and P. Venkataram. "Network restoration using recurrent neural networks." International Journal of Network Management 8, no. 5 (September 1998): 264–73. http://dx.doi.org/10.1002/(sici)1099-1190(199809/10)8:5<264::aid-nem298>3.0.co;2-o.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Dissertations on the topic "Neural networks (Computer science)"

1

Landassuri, Moreno Victor Manuel. "Evolution of modular neural networks." Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3243/.

Full text of the source
Abstract:
It is well known that the human brain is highly modular, having a structural and functional organization that allows the different regions of the brain to be reused for different cognitive processes. So far, this has not been fully addressed by artificial systems, and a better understanding of when and how modules emerge is required, with a broad framework indicating how modules could be reused within neural networks. This thesis provides a deep investigation of module formation, module communication (interaction) and module reuse during evolution for a variety of classification and prediction tasks. The evolutionary algorithm EPNet is used to deliver the evolution of artificial neural networks. In the first stage of this study, the EPNet algorithm is carefully studied to understand its basis and to ensure confidence in its behaviour. Thereafter, its input feature selection (required for module evolution) is optimized, showing the robustness of the improved algorithm compared with the fixed input case and previous publications. Then module emergence, communication and reuse are investigated with the modular EPNet (M-EPNet) algorithm, which uses the information provided by a modularity measure to implement new mutation operators that favour the evolution of modules, allowing a new perspective for analyzing modularity, module formation and module reuse during evolution. The results obtained extend those of previous work, indicating that pure-modular architectures may emerge at low connectivity values, where similar tasks may share (reuse) common neural elements creating compact representations, and that the more different two tasks are, the bigger the modularity obtained during evolution. Other results indicate that some neural structures may be reused when similar tasks are evolved, leading to module interaction during evolution.
APA, Harvard, Vancouver, ISO, and other citation styles
2

Sloan, Cooper Stokes. "Neural bus networks." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119711.

Full text of the source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 65-68).
Bus schedules are unreliable, leaving passengers waiting and increasing commute times. This problem can be solved by modeling the traffic network, and delivering predicted arrival times to passengers. Research attempts to model traffic networks use historical, statistical and learning based models, with learning based models achieving the best results. This research compares several neural network architectures trained on historical data from Boston buses. Three models are trained: multilayer perceptron, convolutional neural network and recurrent neural network. Recurrent neural networks show the best performance when compared to feed forward models. This indicates that neural time series models are effective at modeling bus networks. The large amount of data available for training bus network models and the effectiveness of large neural networks at modeling this data show that great progress can be made in improving commutes for passengers.
by Cooper Stokes Sloan.
M. Eng.
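The comparison described in this abstract (feed-forward versus recurrent models on historical arrival data) can be sketched as follows; a synthetic periodic series stands in for the Boston bus data, and the architecture and hyperparameters are illustrative assumptions rather than the thesis's setup.

```python
import torch
from torch import nn

# Toy stand-in for historical arrival-time data: predict the next value of a
# noisy periodic series from a window of past observations.
torch.manual_seed(0)
t = torch.linspace(0, 20, 400)
series = torch.sin(t) + 0.1 * torch.randn_like(t)
window = 12
X = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

class ArrivalRNN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.rnn(x)          # out: (batch, window, hidden)
        return self.head(out[:, -1])  # regress from the last hidden state

model = ArrivalRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("final training MSE:", float(loss))
```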
APA, Harvard, Vancouver, ISO, and other citation styles
3

Khan, Altaf Hamid. "Feedforward neural networks with constrained weights." Thesis, University of Warwick, 1996. http://wrap.warwick.ac.uk/4332/.

Full text of the source
Abstract:
The conventional multilayer feedforward network having continuous-weights is expensive to implement in digital hardware. Two new types of networks are proposed which lend themselves to cost-effective implementations in hardware and have a fast forward-pass capability. These two differ from the conventional model in having extra constraints on their weights: the first allows its weights to take integer values in the range [-3,3] only, whereas the second restricts its synapses to the set {-1,0,1} while allowing unrestricted offsets. The benefits of the first configuration are in having weights which are only 3-bits deep and a multiplication operation requiring a maximum of one shift, one add, and one sign-change instruction. The advantages of the second are in having 1-bit synapses and a multiplication operation which consists of a single sign-change instruction. The procedure proposed for training these networks starts like the conventional error backpropagation procedure, but becomes more and more discretised in its behaviour as the network gets closer to an error minimum. Mainly based on steepest descent, it also has a perturbation mechanism to avoid getting trapped in local minima, and a novel mechanism for rounding off 'near integers'. It incorporates weight elimination implicitly, which simplifies the choice of the start-up network configuration for training. It is shown that the integer-weight network, although lacking the universal approximation capability, can implement learning tasks, especially classification tasks, to acceptable accuracies. A new theoretical result is presented which shows that the multiplier-free network is a universal approximator over the space of continuous functions of one variable. In light of experimental results it is conjectured that the same is true for functions of many variables. Decision and error surfaces are used to explore the discrete-weight approximation of continuous-weight networks using discretisation schemes other than integer weights. The results suggest that provided a suitable discretisation interval is chosen, a discrete-weight network can be found which performs as well as a continuous-weight networks, but that it may require more hidden neurons than its conventional counterpart. Experiments are performed to compare the generalisation performances of the new networks with that of the conventional one using three very different benchmarks: the MONK's benchmark, a set of artificial tasks designed to compare the capabilities of learning algorithms, the 'onset of diabetes mellitus' prediction data set, a realistic set with very noisy attributes, and finally the handwritten numeral recognition database, a realistic but very structured data set. The results indicate that the new networks, despite having strong constraints on their weights, have generalisation performances similar to that of their conventional counterparts.
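The two weight constraints described above can be illustrated by post-hoc discretisation of a trained weight matrix, as in the sketch below. Note that the thesis actually trains with a progressively discretised backpropagation procedure rather than rounding after training, so this only shows the target weight sets.

```python
import numpy as np

def to_integer_weights(W, limit=3):
    # Round each weight to the nearest integer in [-limit, limit] (3-bit-deep weights)
    return np.clip(np.rint(W), -limit, limit)

def to_ternary_synapses(W, threshold=0.5):
    # Restrict synapses to {-1, 0, 1}; offsets (biases) would stay unrestricted
    return np.sign(W) * (np.abs(W) >= threshold)

rng = np.random.default_rng(0)
W = rng.normal(scale=1.5, size=(4, 4))   # stand-in for trained continuous weights
print(to_integer_weights(W))
print(to_ternary_synapses(W))
```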
APA, Harvard, Vancouver, ISO, and other citation styles
4

Zaghloul, Waleed A. Lee Sang M. "Text mining using neural networks." Lincoln, Neb.: University of Nebraska-Lincoln, 2005. http://0-www.unl.edu.library.unl.edu/libr/Dissertations/2005/Zaghloul.pdf.

Full text of the source
Abstract:
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2005.
Title from title screen (sites viewed on Oct. 18, 2005). PDF text: 100 p. : col. ill. Includes bibliographical references (p. 95-100 of dissertation).
APA, Harvard, Vancouver, ISO, and other citation styles
5

Hadjifaradji, Saeed. "Learning algorithms for restricted neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0016/NQ48102.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Cheung, Ka Kit. "Neural networks for optimization." HKBU Institutional Repository, 2001. http://repository.hkbu.edu.hk/etd_ra/291.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Ahamed, Woakil Uddin. "Quantum recurrent neural networks for filtering." Thesis, University of Hull, 2009. http://hydra.hull.ac.uk/resources/hull:2411.

Full text of the source
Abstract:
The essence of stochastic filtering is to compute the time-varying probability density function (pdf) for the measurements of the observed system. In this thesis, a filter is designed based on the principles of quantum mechanics where the Schrodinger wave equation (SWE) plays the key part. This equation is transformed to fit into the neural network architecture. Each neuron in the network mediates a spatio-temporal field with a unified quantum activation function that aggregates the pdf information of the observed signals. The activation function is the result of the solution of the SWE. The incorporation of SWE into the field of neural networks provides a framework which is so called the quantum recurrent neural network (QRNN). A filter based on this approach is categorized as an intelligent filter, as the underlying formulation is based on the analogy to a real neuron. In a QRNN filter, the interaction between the observed signal and the wave dynamics is governed by the SWE. A key issue, therefore, is achieving a solution of the SWE that ensures the stability of the numerical scheme. Another important aspect in designing this filter is in the way the wave function transforms the observed signal through the network. This research has shown that there are two different ways (a normal wave and a calm wave, Chapter 5) this transformation can be achieved, and these wave packets play a critical role in the evolution of the pdf. In this context, this thesis has investigated the following issues: the existing filtering approach in the evolution of the pdf, the architecture of the QRNN, the method of solving the SWE, the numerical stability of the solution, and the propagation of the waves in the well. The methods developed in this thesis have been tested with relevant simulations. The filter has also been tested with some benchmark chaotic series along with applications to real-world situations. Suggestions are made for the scope of further developments.
APA, Harvard, Vancouver, ISO, and other citation styles
8

Williams, Bryn V. "Evolutionary neural networks: models and applications." Thesis, Aston University, 1995. http://publications.aston.ac.uk/10635/.

Full text of the source
Abstract:
The scaling problems which afflict attempts to optimise neural networks (NNs) with genetic algorithms (GAs) are disclosed. A novel GA-NN hybrid is introduced, based on the bumptree, a little-used connectionist model. As well as being computationally efficient, the bumptree is shown to be more amenable to genetic coding than other NN models. A hierarchical genetic coding scheme is developed for the bumptree and shown to have low redundancy, as well as being complete and closed with respect to the search space. When applied to optimising bumptree architectures for classification problems the GA discovers bumptrees which significantly out-perform those constructed using a standard algorithm. The fields of artificial life, control and robotics are identified as likely application areas for the evolutionary optimisation of NNs. An artificial life case-study is presented and discussed. Experiments are reported which show that the GA-bumptree is able to learn simulated pole balancing and car parking tasks using only limited environmental feedback. A simple modification of the fitness function allows the GA-bumptree to learn mappings which are multi-modal, such as robot arm inverse kinematics. The dynamics of the 'geographic speciation' selection model used by the GA-bumptree are investigated empirically and the convergence profile is introduced as an analytical tool. The relationships between the rate of genetic convergence and the phenomena of speciation, genetic drift and punctuated equilibrium are discussed. The importance of genetic linkage to GA design is discussed and two new recombination operators are introduced. The first, linkage mapped crossover (LMX) is shown to be a generalisation of existing crossover operators. LMX provides a new framework for incorporating prior knowledge into GAs. Its adaptive form, ALMX, is shown to be able to infer linkage relationships automatically during genetic search.
APA, Harvard, Vancouver, ISO, and other citation styles
9

De, Jongh Albert. "Neural network ensembles." Thesis, Stellenbosch: Stellenbosch University, 2004. http://hdl.handle.net/10019.1/50035.

Full text of the source
Abstract:
Thesis (MSc)--Stellenbosch University, 2004.
ENGLISH ABSTRACT: It is possible to improve on the accuracy of a single neural network by using an ensemble of diverse and accurate networks. This thesis explores diversity in ensembles and looks at the underlying theory and mechanisms employed to generate and combine ensemble members. Bagging and boosting are studied in detail and I explain their success in terms of well-known theoretical instruments. An empirical evaluation of their performance is conducted and I compare them to a single classifier and to each other in terms of accuracy and diversity.
AFRIKAANS ABSTRACT (translated): It is possible to improve on the accuracy of a single neural network by using an ensemble of diverse and accurate networks. This thesis examines diversity in ensembles, as well as the mechanisms by which members of an ensemble can be created and combined. The bagging and boosting algorithms are studied in depth, and their success is explained in terms of well-known theoretical instruments. The performance of these two algorithms is measured experimentally, and their accuracy and diversity are compared with those of a single network.
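A minimal sketch of the ensemble idea examined in this thesis, using bagged multilayer networks from scikit-learn on a bundled dataset; the thesis's own data, base learners, and diversity measures are not reproduced here.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# A single small network versus a bagged ensemble of the same network
single = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
ensemble = make_pipeline(
    StandardScaler(),
    BaggingClassifier(MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
                      n_estimators=10, random_state=0))
for name, model in [("single network", single), ("bagged ensemble", ensemble)]:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))
```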
APA, Harvard, Vancouver, ISO, and other citation styles
10

Lee, Ji Young Ph D. Massachusetts Institute of Technology. "Information extraction with neural networks." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111905.

Full text of the source
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 85-97).
Electronic health records (EHRs) have been widely adopted, and are a gold mine for clinical research. However, EHRs, especially their text components, remain largely unexplored due to the fact that they must be de-identified prior to any medical investigation. Existing systems for de-identification rely on manual rules or features, which are time-consuming to develop and fine-tune for new datasets. In this thesis, we propose the first de-identification system based on artificial neural networks (ANNs), which achieves state-of-the-art results without any human-engineered features. The ANN architecture is extended to incorporate features, further improving the de-identification performance. Under practical considerations, we explore transfer learning to take advantage of large annotated dataset to improve the performance on datasets with limited number of annotations. The ANN-based system is publicly released as an easy-to-use software package for general purpose named-entity recognition as well as de-identification. Finally, we present an ANN architecture for relation extraction, which ranked first in the SemEval-2017 task 10 (ScienceIE) for relation extraction in scientific articles (subtask C).
by Ji Young Lee.
Ph. D.
APA, Harvard, Vancouver, ISO, and other citation styles

Books on the topic "Neural networks (Computer science)"

1

Dominique, Valentin, and Edelman Betty, eds. Neural networks. Thousand Oaks, Calif: Sage Publications, 1999.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

1931-, Taylor John, and UNICOM Seminars, eds. Neural networks. Henley-on-Thames: A. Waller, 1995.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

1948-, Vandewalle J., and Roska T, eds. Cellular neural networks. Chichester [England]: Wiley, 1993.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Bischof, Horst. Pyramidal neural networks. Mahwah, NJ: Lawrence Erlbaum Associates, 1995.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Kwon, Seoyun J. Artificial neural networks. Hauppauge, N.Y: Nova Science Publishers, 2010.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Hoffmann, Norbert. Simulating neural networks. Wiesbaden: Vieweg, 1994.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Maass, Wolfgang, 1949 Aug. 21-, and Bishop Christopher M, eds. Pulsed neural networks. Cambridge, Mass: MIT Press, 1999.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Caudill, Maureen. Understanding neural networks: Computer explorations. Cambridge, Mass: MIT Press, 1993.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Hu, Xiaolin, and P. Balasubramaniam. Recurrent neural networks. Rijeka, Croatia: InTech, 2008.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Baram, Yoram. Nested neural networks. Moffett Field, Calif: National Aeronautics and Space Administration, Ames Research Center, 1988.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Book chapters on the topic "Neural networks (Computer science)"

1

ElAarag, Hala. "Neural Networks." In SpringerBriefs in Computer Science, 11–16. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-4893-7_3.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Siegelmann, Hava T. "Recurrent neural networks." In Computer Science Today, 29–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0015235.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Yan, Wei Qi. "Convolutional Neural Networks and Recurrent Neural Networks." In Texts in Computer Science, 69–124. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-4823-9_3.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Ertel, Wolfgang. "Neural Networks." In Undergraduate Topics in Computer Science, 221–56. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-299-5_9.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Ertel, Wolfgang. "Neural Networks." In Undergraduate Topics in Computer Science, 245–87. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58487-4_9.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Feldman, Jerome A. "Neural Networks and Computer Science." In Opportunities and Constraints of Parallel Computing, 37–38. New York, NY: Springer US, 1989. http://dx.doi.org/10.1007/978-1-4613-9668-0_10.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Kruse, Rudolf, Christian Borgelt, Christian Braune, Sanaz Mostaghim, and Matthias Steinbrecher. "General Neural Networks." In Texts in Computer Science, 37–46. London: Springer London, 2016. http://dx.doi.org/10.1007/978-1-4471-7296-3_4.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Kruse, Rudolf, Christian Borgelt, Frank Klawonn, Christian Moewes, Matthias Steinbrecher, and Pascal Held. "General Neural Networks." In Texts in Computer Science, 37–46. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-5013-8_4.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Kruse, Rudolf, Sanaz Mostaghim, Christian Borgelt, Christian Braune, and Matthias Steinbrecher. "General Neural Networks." In Texts in Computer Science, 39–52. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-42227-1_4.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Betti, Alessandro, Marco Gori, and Stefano Melacci. "Foveated Neural Networks." In SpringerBriefs in Computer Science, 63–72. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-90987-1_4.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Conference papers on the topic "Neural networks (Computer science)"

1

Doncow, Sergey, Leonid Orbachevskyi, Valentin Birukow, and Nina V. Stepanova. "Artificial Kohonen's neural networks for computer capillarometry." In Optical Information Science and Technology, edited by Andrei L. Mikaelian. SPIE, 1998. http://dx.doi.org/10.1117/12.304962.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Nowak, Jakub, Marcin Korytkowski, and Rafał Scherer. "Classification of Computer Network Users with Convolutional Neural Networks." In 2018 Federated Conference on Computer Science and Information Systems. IEEE, 2018. http://dx.doi.org/10.15439/2018f321.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Shastri, Bhavin J., Volker Sorger, and Nir Rotenberg. "In situ Training of Silicon Photonic Neural Networks: from Classical to Quantum." In CLEO: Science and Innovations. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/cleo_si.2023.sm4j.1.

Full text of the source
Abstract:
Photonic neural networks perform ultrafast inference operations but are trained on slow computers. We highlight on-chip network training enabled by silicon photonics. We introduce quantum photonic neural networks and discuss the role of weak nonlinearities.
APA, Harvard, Vancouver, ISO, and other citation styles
4

Dias, L. P., J. J. F. Cerqueira, K. D. R. Assis, and R. C. Almeida. "Using artificial neural network in intrusion detection systems to computer networks." In 2017 9th Computer Science and Electronic Engineering (CEEC). IEEE, 2017. http://dx.doi.org/10.1109/ceec.2017.8101615.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Araújo, Georger, and Célia Ralha. "Computer Forensic Document Clustering with ART1 Neural Networks." In The Sixth International Conference on Forensic Computer Science. ABEAT, 2011. http://dx.doi.org/10.5769/c2011011.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Wang, Huiran, and Ruifang Ma. "Optimization of Neural Networks for Network Intrusion Detection." In 2009 First International Workshop on Education Technology and Computer Science. IEEE, 2009. http://dx.doi.org/10.1109/etcs.2009.102.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Eilermann, Sebastian, Christoph Petroll, Philipp Hoefer, and Oliver Niggemann. "3D Multi-Criteria Design Generation and Optimization of an Engine Mount for an Unmanned Air Vehicle Using a Conditional Variational Autoencoder." In Computer Science Research Notes. University of West Bohemia, Czech Republic, 2024. http://dx.doi.org/10.24132/csrn.3401.22.

Full text of the source
Abstract:
One of the most promising developments in computer vision in recent years is the use of generative neural networks for functionality condition-based 3D design reconstruction and generation. Here, neural networks learn dependencies between functionalities and a geometry in a very effective way. For a neural network the functionalities are translated in conditions to a certain geometry. But the more conditions the design generation needs to reflect, the more difficult it is to learn clear dependencies. This leads to a multi criteria design problem due various conditions, which are not considered in the neural network structure so far. In this paper, we address this multi-criteria challenge for a 3D design use case related to an unmanned aerial vehicle (UAV) motor mount. We generate 10,000 abstract 3D designs and subject them all to simulations for three physical disciplines: mechanics, thermodynamics, and aerodynamics. Then, we train a Conditional Variational Autoencoder (CVAE) using the geometry and corresponding multicriteria functional constraints as input. We use our trained CVAE as well as the Marching cubes algorithm to generate meshes for simulation based evaluation. The results are then evaluated with the generated UAV designs. Subsequently, we demonstrate the ability to generate optimized designs under self-defined functionality conditions using the trained neural network.
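A compressed sketch of the conditional variational autoencoder idea described above: geometry plus its simulated functional conditions go into the encoder, and the decoder reconstructs geometry from a latent code and the same conditions, so new designs can be sampled for chosen conditions. All dimensions, the three-value condition vector, and the random stand-in data are assumptions for illustration; the paper's actual architecture, voxel resolution, and training setup are not reproduced.

```python
import torch
from torch import nn

class ConditionalVAE(nn.Module):
    """Minimal CVAE: the encoder sees a flattened geometry plus its functional
    conditions; the decoder reconstructs geometry from a latent code and the
    same conditions (all sizes here are illustrative assumptions)."""
    def __init__(self, geom_dim=1024, cond_dim=3, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(geom_dim + cond_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
                                 nn.Linear(256, geom_dim), nn.Sigmoid())

    def forward(self, geom, cond):
        h = self.enc(torch.cat([geom, cond], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterisation
        return self.dec(torch.cat([z, cond], dim=1)), mu, logvar

def loss_fn(recon, geom, mu, logvar):
    rec = nn.functional.binary_cross_entropy(recon, geom, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# Random stand-ins for voxelised designs and their mechanics/thermal/aero scores
geom = torch.rand(8, 1024).round()
cond = torch.rand(8, 3)
model = ConditionalVAE()
recon, mu, logvar = model(geom, cond)
print("loss on one batch:", float(loss_fn(recon, geom, mu, logvar)))
```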
APA, Harvard, Vancouver, ISO, and other citation styles
8

Sakas, D. P., D. S. Vlachos, T. E. Simos, Theodore E. Simos, and George Psihoyios. "Fuzzy Neural Networks for Decision Support in Negotiation." In INTERNATIONAL ELECTRONIC CONFERENCE ON COMPUTER SCIENCE. AIP, 2008. http://dx.doi.org/10.1063/1.3037115.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

"Speech Emotion Recognition using Convolutional Neural Networks and Recurrent Neural Networks with Attention Model." In 2019 the 9th International Workshop on Computer Science and Engineering. WCSE, 2019. http://dx.doi.org/10.18178/wcse.2019.06.044.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Čajić, Elvir, Irma Ibrišimović, Alma Šehanović, Damir Bajrić, and Julija Ščekić. "Fuzzy Logic And Neural Networks For Disease Detection And Simulation In Matlab." In 9th International Conference on Computer Science, Engineering and Applications. Academy & Industry Research Collaboration Center, 2023. http://dx.doi.org/10.5121/csit.2023.132302.

Full text of the source
Abstract:
This paper investigates the integration of fuzzy logic and neural networks for disease detection using the Matlab environment. Disease detection is key in medical diagnostics, and the combination of fuzzy logic and neural networks offers an advanced methodology for the analysis and interpretation of medical data. Fuzzy logic is used for modeling and resolving uncertainty in diagnostic processes, while neural networks are applied for indepth processing and analysis of images relevant to disease diagnosis. This paper demonstrates the development and implementation of a simulation system in Matlab, using real medical data and images of organs for the purpose of detecting specific diseases, with a special focus on the application in the diagnosis of kidney diseases. Combining fuzzy logic and neural networks, simulation offers precision and robustness in the diagnosis process, opening the door to advanced medical information systems
APA, Harvard, Vancouver, ISO, and other citation styles

Reports of organizations on the topic "Neural networks (Computer science)"

1

Markova, Oksana, Serhiy Semerikov, and Maiia Popel. CoCalc as a Learning Tool for Neural Network Simulation in the Special Course "Foundations of Mathematic Informatics". Sun SITE Central Europe, May 2018. http://dx.doi.org/10.31812/0564/2250.

Full text of the source
Abstract:
The role of neural network modeling in the learning content of the special course "Foundations of Mathematic Informatics" was discussed. The course was developed for the students of technical universities – future IT specialists – and directed at bridging the gap between theoretical computer science and its applied applications: software, system and computing engineering. CoCalc was justified as a learning tool of mathematical informatics in general and neural network modeling in particular. The elements of a technique for using CoCalc in studying the topic "Neural network and pattern recognition" of the special course "Foundations of Mathematic Informatics" are shown. The program code was presented in the CoffeeScript language, which implements the basic components of an artificial neural network: neurons, synaptic connections, activation functions (tangential, sigmoid, stepped) and their derivatives, methods of calculating the network's weights, etc. The features of applying the Kolmogorov–Arnold representation theorem to determine the architecture of multilayer neural networks were discussed. The implementation of a disjunctive logical element and the approximation of an arbitrary function using a three-layer neural network were given as examples. According to the simulation results, a conclusion was made as to the limits of use of the constructed networks, within which they retain their adequacy. Framework topics for individual research on artificial neural networks are proposed.
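The components listed in the abstract (activation functions with derivatives, a single neuron, and the disjunctive logical element) can be re-sketched in Python as below; the original course materials use CoffeeScript inside CoCalc, so this is only an illustrative translation of the idea.

```python
import numpy as np

# The basic ingredients the abstract lists, re-sketched in Python rather than
# the CoffeeScript used in CoCalc: activations and their derivatives.
def sigmoid(x):       return 1.0 / (1.0 + np.exp(-x))
def sigmoid_prime(x): s = sigmoid(x); return s * (1.0 - s)
def tanh(x):          return np.tanh(x)
def tanh_prime(x):    return 1.0 - np.tanh(x) ** 2
def step(x):          return np.where(x >= 0.0, 1.0, 0.0)   # no useful derivative

def neuron(inputs, weights, bias, activation=sigmoid):
    # A single neuron: weighted sum of the synaptic inputs followed by activation
    return activation(np.dot(weights, inputs) + bias)

# Disjunction (logical OR) with a single step neuron, as in the article's example
for a in (0, 1):
    for b in (0, 1):
        print(a, "OR", b, "=",
              int(neuron(np.array([a, b]), np.array([1.0, 1.0]), -0.5, step)))
```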
APA, Harvard, Vancouver, ISO, and other citation styles
2

Semerikov, Serhiy, Illia Teplytskyi, Yuliia Yechkalo, Oksana Markova, Vladimir Soloviev, and Arnold Kiv. Computer Simulation of Neural Networks Using Spreadsheets: Dr. Anderson, Welcome Back. [б. в.], June 2019. http://dx.doi.org/10.31812/123456789/3178.

Full text of the source
Abstract:
The authors of the given article continue the series presented by the 2018 paper “Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot”. This time, they consider mathematical informatics as the basis of higher engineering education fundamentalization. Mathematical informatics deals with smart simulation, information security, long-term data storage and big data management, artificial intelligence systems, etc. The authors suggest studying basic principles of mathematical informatics by applying cloud-oriented means of various levels including those traditionally considered supplementary – spreadsheets. The article considers ways of building neural network models in cloud-oriented spreadsheets, Google Sheets. The model is based on the problem of classifying multi-dimensional data provided in “The Use of Multiple Measurements in Taxonomic Problems” by R. A. Fisher. Edgar Anderson’s role in collecting and preparing the data in the 1920s-1930s is discussed as well as some peculiarities of data selection. There are presented data on the method of multi-dimensional data presentation in the form of an ideograph developed by Anderson and considered one of the first efficient ways of data visualization.
APA, Harvard, Vancouver, ISO, and other citation styles
3

Grossberg, Stephen. Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics. Fort Belvoir, VA: Defense Technical Information Center, October 1987. http://dx.doi.org/10.21236/ada189981.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Semerikov, Serhiy O., Illia O. Teplytskyi, Yuliia V. Yechkalo, and Arnold E. Kiv. Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot. [б. в.], November 2018. http://dx.doi.org/10.31812/123456789/2648.

Full text of the source
Abstract:
The article substantiates the necessity to develop training methods of computer simulation of neural networks in the spreadsheet environment. The systematic review of their application to simulating artificial neural networks is performed. The authors distinguish basic approaches to solving the problem of network computer simulation training in the spreadsheet environment, joint application of spreadsheets and tools of neural network simulation, application of third-party add-ins to spreadsheets, development of macros using the embedded languages of spreadsheets; use of standard spreadsheet add-ins for non-linear optimization, creation of neural networks in the spreadsheet environment without add-ins and macros. After analyzing a collection of writings of 1890-1950, the research determines the role of the scientific journal “Bulletin of Mathematical Biophysics”, its founder Nicolas Rashevsky and the scientific community around the journal in creating and developing models and methods of computational neuroscience. There are identified psychophysical basics of creating neural networks, mathematical foundations of neural computing and methods of neuroengineering (image recognition, in particular). The role of Walter Pitts in combining the descriptive and quantitative theories of training is discussed. It is shown that to acquire neural simulation competences in the spreadsheet environment, one should master the models based on the historical and genetic approach. It is indicated that there are three groups of models, which are promising in terms of developing corresponding methods – the continuous two-factor model of Rashevsky, the discrete model of McCulloch and Pitts, and the discrete-continuous models of Householder and Landahl.
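A discrete McCulloch–Pitts unit of the kind the authors simulate in spreadsheets can be sketched as follows; the comment notes the spreadsheet analogue (one SUMPRODUCT and one IF per cell), and the example logic function is an arbitrary illustration, not taken from the article.

```python
import numpy as np

def mcculloch_pitts(inputs, weights, threshold):
    # Discrete McCulloch-Pitts unit: fire (1) iff the weighted input sum reaches
    # the threshold -- in a spreadsheet this is one SUMPRODUCT plus one IF per cell.
    return int(np.dot(weights, inputs) >= threshold)

# x1 AND NOT x2, using an inhibitory (negative) weight on the second input
for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"{x1} AND NOT {x2} =",
              mcculloch_pitts(np.array([x1, x2]), np.array([1, -1]), threshold=1))
```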
APA, Harvard, Vancouver, ISO, and other citation styles
5

Farhi, Edward, and Hartmut Neven. Classification with Quantum Neural Networks on Near Term Processors. Web of Open Science, December 2020. http://dx.doi.org/10.37686/qrl.v1i2.80.

Full text of the source
Abstract:
We introduce a quantum neural network, QNN, that can represent labeled data, classical or quantum, and be trained by supervised learning. The quantum circuit consists of a sequence of parameter dependent unitary transformations which acts on an input quantum state. For binary classification a single Pauli operator is measured on a designated readout qubit. The measured output is the quantum neural network’s predictor of the binary label of the input state. We show through classical simulation that parameters can be found that allow the QNN to learn to correctly distinguish the two data sets. We then discuss presenting the data as quantum superpositions of computational basis states corresponding to different label values. Here we show through simulation that learning is possible. We consider using our QNN to learn the label of a general quantum state. By example we show that this can be done. Our work is exploratory and relies on the classical simulation of small quantum systems. The QNN proposed here was designed with near-term quantum processors in mind. Therefore it will be possible to run this QNN on a near term gate model quantum computer where its power can be explored beyond what can be explored with simulation.
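A toy single-qubit version of the construction described above, simulated with NumPy: parameter-dependent unitaries act on an input state and a Pauli-Z expectation value on the readout qubit serves as the label predictor. The real proposal uses multi-qubit circuits and gradient-based training, neither of which is shown here.

```python
import numpy as np

# Pauli matrices and a parameterised rotation exp(-i * theta/2 * P)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

def rot(P, theta):
    # Valid because each Pauli matrix squares to the identity
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * P

def predict(state, thetas):
    # Sequence of parameter-dependent unitaries acting on the input state,
    # followed by measuring the Pauli Z expectation on the (single) readout qubit.
    for P, theta in zip([X, Y, Z], thetas):
        state = rot(P, theta) @ state
    return float(np.real(np.conj(state) @ (Z @ state)))   # predictor in [-1, 1]

plus = np.array([1, 1]) / np.sqrt(2)      # a toy input quantum state
print("predictor output:", predict(plus, thetas=[0.3, 1.1, -0.4]))
```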
APA, Harvard, Vancouver, ISO, and other citation styles
6

Willson. L51756 State of the Art Intelligent Control for Large Engines. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), September 1996. http://dx.doi.org/10.55274/r0010423.

Full text of the source
Abstract:
Computers have become a vital part of the control of pipeline compressors and compressor stations. For many tasks, computers have helped to improve accuracy, reliability, and safety, and have reduced operating costs. Computers excel at repetitive, precise tasks that humans perform poorly - calculation, measurement, statistical analysis, control, etc. Computers are used to perform these type of precise tasks at compressor stations: engine / turbine speed control, ignition control, horsepower estimation, or control of complicated sequences of events during startup and/or shutdown. For other tasks, however, computers perform very poorly at tasks that humans find to be trivial. A discussion of the differences in the way humans and computer process information is crucial to an understanding of the field of artificial intelligence. In this project, several artificial intelligence/ intelligent control systems were examined: heuristic search techniques, adaptive control, expert systems, fuzzy logic, neural networks, and genetic algorithms. Of these, neural networks showed the most potential for use on large bore engines because of their ability to recognize patterns in incomplete, noisy data. Two sets of experimental tests were conducted to test the predictive capabilities of neural networks. The first involved predicting the ignition timing from combustion pressure histories; the best networks responded within a specified tolerance level 90% to 98.8% of the time. In the second experiment, neural networks were used to predict NOx, A/F ratio, and fuel consumption. NOx prediction accuracy was 91.4%, A/F ratio accuracy was 82.9%, and fuel consumption accuracy was 52.9%. This report documents the assessment of the state of the art of artificial intelligence for application to the monitoring and control of large-bore natural gas engines.
APA, Harvard, Vancouver, ISO, and other citation styles
7

Modlo, Yevhenii O., Serhiy O. Semerikov, Ruslan P. Shajda, Stanislav T. Tolmachev, and Oksana M. Markova. Methods of using mobile Internet devices in the formation of the general professional component of bachelor in electromechanics competency in modeling of technical objects. [б. в.], July 2020. http://dx.doi.org/10.31812/123456789/3878.

Full text of the source
Abstract:
The article describes the components of methods of using mobile Internet devices in the formation of the general professional component of bachelor in electromechanics competency in modeling of technical objects: using various methods of representing models; solving professional problems using ICT; competence in electric machines and critical thinking. On the content of learning academic disciplines “Higher mathematics”, “Automatic control theory”, “Modeling of electromechanical systems”, “Electrical machines” features of use are disclosed for Scilab, SageCell, Google Sheets, Xcos on Cloud in the formation of the general professional component of bachelor in electromechanics competency in modeling of technical objects. It is concluded that it is advisable to use the following software for mobile Internet devices: a cloud-based spreadsheets as modeling tools (including neural networks), a visual modeling systems as a means of structural modeling of technical objects; a mobile computer mathematical system used at all stages of modeling; a mobile communication tools for organizing joint modeling activities.
APA, Harvard, Vancouver, ISO, and other citation styles
8

SAINI, RAVINDER, AbdulKhaliq Alshadid, and Lujain Aldosari. Investigation on the application of artificial intelligence in prosthodontics. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, December 2022. http://dx.doi.org/10.37766/inplasy2022.12.0096.

Full text of the source
Abstract:
Review question / Objective: 1. Which artificial intelligence techniques are practiced in dentistry? 2. How is AI improving the diagnosis, clinical decision making, and outcome of dental treatment? 3. What are the current clinical applications and diagnostic performance of AI in the field of prosthodontics? Condition being studied: Procedures for desktop designing and fabrication, computer-aided design (CAD/CAM) in particular, have made their way into routine healthcare and laboratory practice. Based on flat imagery, artificial intelligence may also be utilized to forecast the debonding of dental repairs. Dental arches in detachable prosthodontics may be categorized using convolutional neural networks (CNN). By properly positioning the teeth, machine learning in CAD/CAM software can reestablish healthy inter-maxillary connections. AI may assist with accurate color matching in challenging cosmetic scenarios that include a single central incisor or many front teeth. Intraoral detectors can identify implant placements in implant prosthodontics and instantly input them into CAD software. The design and execution of dental implants could potentially be improved by utilizing AI.
APA, Harvard, Vancouver, ISO, and other citation styles
9

Johansen, Richard, Alan Katzenmeyer, Kaytee Pokrzywinski, and Molly Reif. A review of sensor-based approaches for monitoring rapid response treatments of cyanoHABs. Engineer Research and Development Center (U.S.), July 2023. http://dx.doi.org/10.21079/11681/47261.

Full text of the source
Abstract:
Water quality sensors are dynamic and vary greatly both in terms of utility and data acquisition. Data collection can range from single-parameter and one-dimensional to highly complex multiparameter spatiotemporal. Likewise, the analytical and statistical approaches range from relatively simple (e.g., linear regression) to more complex (e.g., artificial neural networks). Therefore, the decision to implement a particular water quality monitoring strategy is dependent upon many factors and varies widely. The purpose of this review was to document the current scientific literature to identify and compile approaches for water quality monitoring as well as statistical methodologies required to analyze and visualize highly diverse spatiotemporal water quality data. The literature review identified two broad categories: (1) sensor-based approaches for monitoring rapid response treatments of cyanobacterial harmful algal blooms (cyanoHABs), and (2) analytical tools and techniques to analyze complex high resolution spatial and temporal water quality data. The ultimate goal of this review is to provide the current state of the science as an array of scalable approaches, spanning from simple and practical to complex and comprehensive, and thus, equipping the US Army Corps of Engineers (USACE) water quality managers with options for technology-analysis combinations that best fit their needs.
APA, Harvard, Vancouver, ISO, and other citation styles
10

Seginer, Ido, James Jones, Per-Olof Gutman, and Eduardo Vallejos. Optimal Environmental Control for Indeterminate Greenhouse Crops. United States Department of Agriculture, August 1997. http://dx.doi.org/10.32747/1997.7613034.bard.

Full text of the source
Abstract:
Increased world competition, as well as increased concern for the environment, drive all manufacturing systems, including greenhouses, towards high-precision operation. Optimal control is an important tool to achieve this goal, since it finds the best compromise between conflicting demands, such as higher profits and environmental concerns. The report, which is a collection of papers, each with its own abstract, outlines an approach for optimal, model-based control of the greenhouse environment. A reliable crop model is essential for this approach and a significant portion of the effort went in this direction, resulting in a radically new version of the tomato model TOMGRO, which can be used as a prototype model for other greenhouse crops. Truly optimal control of a very complex system requires prohibitively large computer resources. Two routes to model simplification have, therefore, been tried: model reduction (to fewer state variables) and simplified decision making. Crop model reduction from nearly 70 state variables to about 5 was accomplished by either selecting a subset of the original variables or by forming combinations of them. Model dynamics were then fitted either with mechanistic relationships or with neural networks. To simplify the decision making process, the number of costate variables (control policy parameters) was reduced to one or two. The dry-matter state variable was transformed in such a way that its costate became essentially constant throughout the season. A quasi-steady-state control algorithm was implemented in an experimental greenhouse. A constant value for the dry-matter costate was able to control simultaneously ventilation and CO2 enrichment by continuously producing weather-dependent optimal setpoints and then maintaining them closely.
APA, Harvard, Vancouver, ISO, and other citation styles