Academic literature on the topic 'Neural networks (Computer science)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Neural networks (Computer science).'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Neural networks (Computer science)":

1

Cottrell, G. W. "COMPUTER SCIENCE: New Life for Neural Networks." Science 313, no. 5786 (July 28, 2006): 454–55. http://dx.doi.org/10.1126/science.1129813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Xiao Guang. "Research on the Development and Applications of Artificial Neural Networks." Applied Mechanics and Materials 556-562 (May 2014): 6011–14. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.6011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Intelligent control is a class of control techniques that use various AI computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation and genetic algorithms. In computer science and related fields, artificial neural networks are computational models inspired by animals’ central nervous systems (in particular the brain) that are capable of machine learning and pattern recognition. They are usually presented as systems of interconnected “neurons” that can compute values from inputs by feeding information through the network. Like other machine learning methods, neural networks have been used to solve a wide variety of tasks that are hard to solve using ordinary rule-based programming, including computer vision and speech recognition.
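The "systems of interconnected neurons that can compute values from inputs by feeding information through the network" described in this abstract can be sketched as a minimal feedforward pass. The layer sizes, weights, and sigmoid activation below are illustrative assumptions, not taken from the cited article:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, layers):
    """Feed `inputs` through a list of layers; each layer is (weights, biases)."""
    activations = inputs
    for weights, biases in layers:
        activations = [
            sigmoid(sum(w * a for w, a in zip(row, activations)) + b)
            for row, b in zip(weights, biases)
        ]
    return activations

# A tiny 2-2-1 network with hand-picked weights (purely illustrative).
layers = [
    ([[2.0, -1.0], [0.5, 1.5]], [0.0, -0.5]),  # hidden layer: 2 neurons
    ([[1.0, 1.0]], [-1.0]),                    # output layer: 1 neuron
]
out = forward([1.0, 0.0], layers)
```

Each neuron computes a weighted sum of its inputs plus a bias and passes the result through the activation function; stacking layers gives the "feeding information through the network" behaviour the abstract describes.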
3

Schöneburg, E. "Neural networks hunt computer viruses." Neurocomputing 2, no. 5-6 (July 1991): 243–48. http://dx.doi.org/10.1016/0925-2312(91)90027-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Turega, M. A. "Neural Networks." Computer Journal 35, no. 3 (June 1, 1992): 290. http://dx.doi.org/10.1093/comjnl/35.3.290.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Widrow, Bernard, David E. Rumelhart, and Michael A. Lehr. "Neural networks." Communications of the ACM 37, no. 3 (March 1994): 93–105. http://dx.doi.org/10.1145/175247.175257.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cavallaro, Lucia, Ovidiu Bagdasar, Pasquale De Meo, Giacomo Fiumara, and Antonio Liotta. "Artificial neural networks training acceleration through network science strategies." Soft Computing 24, no. 23 (September 9, 2020): 17787–95. http://dx.doi.org/10.1007/s00500-020-05302-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The development of deep learning has led to a dramatic increase in the number of applications of artificial intelligence. However, the training of deeper neural networks for stable and accurate models translates into artificial neural networks (ANNs) that become unmanageable as the number of features increases. This work extends our earlier study where we explored the acceleration effects obtained by enforcing, in turn, scale freeness, small worldness, and sparsity during the ANN training process. The efficiency of that approach was confirmed by recent studies (conducted independently) where a million-node ANN was trained on non-specialized laptops. Encouraged by those results, our study is now focused on some tunable parameters, to pursue a further acceleration effect. We show that, although optimal parameter tuning is unfeasible, due to the high non-linearity of ANN problems, we can actually come up with a set of useful guidelines that lead to speed-ups in practical cases. We find that significant reductions in execution time can generally be achieved by setting the revised fraction parameter (ζ) to relatively low values.
7

Kumar, G. Prem, and P. Venkataram. "Network restoration using recurrent neural networks." International Journal of Network Management 8, no. 5 (September 1998): 264–73. http://dx.doi.org/10.1002/(sici)1099-1190(199809/10)8:5<264::aid-nem298>3.0.co;2-o.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yen, Gary G., and Haiming Lu. "Hierarchical Rank Density Genetic Algorithm for Radial-Basis Function Neural Network Design." International Journal of Computational Intelligence and Applications 03, no. 03 (September 2003): 213–32. http://dx.doi.org/10.1142/s1469026803000975.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this paper, we propose a genetic algorithm based design procedure for a radial-basis function neural network. A Hierarchical Rank Density Genetic Algorithm (HRDGA) is used to evolve the neural network's topology and parameters simultaneously. Compared with traditional genetic algorithm based designs for neural networks, the hierarchical approach addresses several deficiencies highlighted in literature. In addition, the rank-density based fitness assignment technique is used to optimize the performance and topology of the evolved neural network to deal with the confliction between the training performance and network complexity. Instead of producing a single optimal solution, HRDGA provides a set of near-optimal neural networks to the designers so that they can have more flexibility for the final decision-making based on certain preferences. In terms of searching for a near-complete set of candidate networks with high performances, the networks designed by the proposed algorithm prove to be competitive, or even superior, to three other traditional radial-basis function networks for predicting Mackey–Glass chaotic time series.
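The radial-basis function network whose topology and parameters the HRDGA evolves computes a weighted sum of Gaussian basis responses. A minimal one-dimensional sketch follows; the centers, widths, and weights are invented for illustration (in the paper they are the quantities the genetic algorithm evolves):

```python
import math

def rbf_output(x, centers, widths, weights):
    """Radial-basis function network: weighted sum of Gaussian basis responses."""
    return sum(
        w * math.exp(-((x - c) ** 2) / (2 * s ** 2))
        for c, s, w in zip(centers, widths, weights)
    )

# Illustrative 1-D network with three basis functions.
y = rbf_output(0.5, centers=[0.0, 0.5, 1.0],
               widths=[0.3, 0.3, 0.3], weights=[1.0, 2.0, 1.0])
```

The network's complexity is the number of basis functions, which is exactly the quantity the rank-density fitness assignment trades off against training performance.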
9

SIEGELMANN, HAVA T. "ON NIL: THE SOFTWARE CONSTRUCTOR OF NEURAL NETWORKS." Parallel Processing Letters 06, no. 04 (December 1996): 575–82. http://dx.doi.org/10.1142/s0129626496000510.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Analog recurrent neural networks have attracted much attention lately as powerful tools of automatic learning. However, they are not as popular in industry as their usefulness would justify. The lack of any programming tool for networks, and their vague internal representation, leave the networks for the use of experts only. We propose a way to make neural networks friendly to users by formally defining a high-level language, called the Neural Information Processing Programming Language (NIL), which is rich enough to express any computer algorithm or rule-based system. We show how to compile a NIL program into a network which computes exactly as the original program and requires the same computation/convergence time and physical size. Allowing for a natural neural evolution after the construction, the neural networks are both capable of dynamic continuous learning and able to represent any given symbolic knowledge. Thus, the language along with its compiler may be thought of as the ultimate bridge from symbolic to analog computation.
10

Cerf, Vinton G. "On neural networks." Communications of the ACM 61, no. 7 (June 25, 2018): 7. http://dx.doi.org/10.1145/3224195.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Neural networks (Computer science)":

1

Landassuri, Moreno Victor Manuel. "Evolution of modular neural networks." Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3243/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
It is well known that the human brain is highly modular, having a structural and functional organization that allows the different regions of the brain to be reused for different cognitive processes. So far, this has not been fully addressed by artificial systems, and a better understanding of when and how modules emerge is required, with a broad framework indicating how modules could be reused within neural networks. This thesis provides a deep investigation of module formation, module communication (interaction) and module reuse during evolution for a variety of classification and prediction tasks. The evolutionary algorithm EPNet is used to deliver the evolution of artificial neural networks. In the first stage of this study, the EPNet algorithm is carefully studied to understand its basis and to ensure confidence in its behaviour. Thereafter, its input feature selection (required for module evolution) is optimized, showing the robustness of the improved algorithm compared with the fixed input case and previous publications. Then module emergence, communication and reuse are investigated with the modular EPNet (M-EPNet) algorithm, which uses the information provided by a modularity measure to implement new mutation operators that favour the evolution of modules, allowing a new perspective for analyzing modularity, module formation and module reuse during evolution. The results obtained extend those of previous work, indicating that pure-modular architectures may emerge at low connectivity values, where similar tasks may share (reuse) common neural elements creating compact representations, and that the more different two tasks are, the bigger the modularity obtained during evolution. Other results indicate that some neural structures may be reused when similar tasks are evolved, leading to module interaction during evolution.
2

Sloan, Cooper Stokes. "Neural bus networks." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119711.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 65-68).
Bus schedules are unreliable, leaving passengers waiting and increasing commute times. This problem can be solved by modeling the traffic network and delivering predicted arrival times to passengers. Existing attempts to model traffic networks use historical, statistical, and learning-based models, with learning-based models achieving the best results. This research compares several neural network architectures trained on historical data from Boston buses. Three models are trained: a multilayer perceptron, a convolutional neural network, and a recurrent neural network. Recurrent neural networks show the best performance when compared to feed-forward models. This indicates that neural time series models are effective at modeling bus networks. The large amount of data available for training bus network models and the effectiveness of large neural networks at modeling this data show that great progress can be made in improving commutes for passengers.
by Cooper Stokes Sloan.
M. Eng.
3

Khan, Altaf Hamid. "Feedforward neural networks with constrained weights." Thesis, University of Warwick, 1996. http://wrap.warwick.ac.uk/4332/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The conventional multilayer feedforward network with continuous weights is expensive to implement in digital hardware. Two new types of networks are proposed which lend themselves to cost-effective implementations in hardware and have a fast forward-pass capability. These two differ from the conventional model in having extra constraints on their weights: the first allows its weights to take integer values in the range [-3,3] only, whereas the second restricts its synapses to the set {-1,0,1} while allowing unrestricted offsets. The benefits of the first configuration are in having weights which are only 3-bits deep and a multiplication operation requiring a maximum of one shift, one add, and one sign-change instruction. The advantages of the second are in having 1-bit synapses and a multiplication operation which consists of a single sign-change instruction. The procedure proposed for training these networks starts like the conventional error backpropagation procedure, but becomes more and more discretised in its behaviour as the network gets closer to an error minimum. Mainly based on steepest descent, it also has a perturbation mechanism to avoid getting trapped in local minima, and a novel mechanism for rounding off 'near integers'. It incorporates weight elimination implicitly, which simplifies the choice of the start-up network configuration for training. It is shown that the integer-weight network, although lacking the universal approximation capability, can implement learning tasks, especially classification tasks, to acceptable accuracies. A new theoretical result is presented which shows that the multiplier-free network is a universal approximator over the space of continuous functions of one variable. In light of experimental results it is conjectured that the same is true for functions of many variables.
Decision and error surfaces are used to explore the discrete-weight approximation of continuous-weight networks using discretisation schemes other than integer weights. The results suggest that provided a suitable discretisation interval is chosen, a discrete-weight network can be found which performs as well as a continuous-weight network, but that it may require more hidden neurons than its conventional counterpart. Experiments are performed to compare the generalisation performances of the new networks with that of the conventional one using three very different benchmarks: the MONK's benchmark, a set of artificial tasks designed to compare the capabilities of learning algorithms; the 'onset of diabetes mellitus' prediction data set, a realistic set with very noisy attributes; and finally the handwritten numeral recognition database, a realistic but very structured data set. The results indicate that the new networks, despite having strong constraints on their weights, have generalisation performances similar to those of their conventional counterparts.
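The multiplier-free idea from the second network type above can be sketched in a few lines: with every synapse restricted to {-1, 0, 1}, each "multiplication" in the forward pass reduces to an add, a sign change, or nothing. The synapse values, inputs, and offset below are invented for illustration:

```python
def multiplier_free_dot(synapses, inputs, offset=0.0):
    """Dot product where every synapse is -1, 0, or +1:
    each term costs an add, a single sign-change, or no work at all."""
    total = offset  # offsets remain unrestricted, as in the thesis
    for s, x in zip(synapses, inputs):
        if s == 1:
            total += x        # plain add
        elif s == -1:
            total -= x        # single sign-change instruction
        # s == 0: synapse eliminated, contributes nothing
    return total

# Illustrative values only.
acc = multiplier_free_dot([1, -1, 0, 1], [0.5, 2.0, 9.9, 1.0], offset=0.25)
```

This is why the forward pass is fast in hardware: no multiplier circuit is needed anywhere in the network.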
4

Zaghloul, Waleed A. "Text mining using neural networks." Lincoln, Neb. : University of Nebraska-Lincoln, 2005. http://0-www.unl.edu.library.unl.edu/libr/Dissertations/2005/Zaghloul.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2005.
Title from title screen (sites viewed on Oct. 18, 2005). PDF text: 100 p. : col. ill. Includes bibliographical references (p. 95-100 of dissertation).
5

Hadjifaradji, Saeed. "Learning algorithms for restricted neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0016/NQ48102.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cheung, Ka Kit. "Neural networks for optimization." HKBU Institutional Repository, 2001. http://repository.hkbu.edu.hk/etd_ra/291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ahamed, Woakil Uddin. "Quantum recurrent neural networks for filtering." Thesis, University of Hull, 2009. http://hydra.hull.ac.uk/resources/hull:2411.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The essence of stochastic filtering is to compute the time-varying probability density function (pdf) for the measurements of the observed system. In this thesis, a filter is designed based on the principles of quantum mechanics, where the Schrödinger wave equation (SWE) plays the key part. This equation is transformed to fit into the neural network architecture. Each neuron in the network mediates a spatio-temporal field with a unified quantum activation function that aggregates the pdf information of the observed signals. The activation function is the result of the solution of the SWE. The incorporation of the SWE into the field of neural networks provides a framework called the quantum recurrent neural network (QRNN). A filter based on this approach is categorized as an intelligent filter, as the underlying formulation is based on the analogy to a real neuron. In a QRNN filter, the interaction between the observed signal and the wave dynamics is governed by the SWE. A key issue, therefore, is achieving a solution of the SWE that ensures the stability of the numerical scheme. Another important aspect in designing this filter is the way the wave function transforms the observed signal through the network. This research has shown that there are two different ways (a normal wave and a calm wave, Chapter 5) this transformation can be achieved, and these wave packets play a critical role in the evolution of the pdf. In this context, this thesis has investigated the following issues: the existing filtering approach in the evolution of the pdf, the architecture of the QRNN, the method of solving the SWE, the numerical stability of the solution, and the propagation of the waves in the well. The methods developed in this thesis have been tested with relevant simulations. The filter has also been tested with some benchmark chaotic series, along with applications to real-world situations. Suggestions are made for the scope of further developments.
8

Williams, Bryn V. "Evolutionary neural networks : models and applications." Thesis, Aston University, 1995. http://publications.aston.ac.uk/10635/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The scaling problems which afflict attempts to optimise neural networks (NNs) with genetic algorithms (GAs) are disclosed. A novel GA-NN hybrid is introduced, based on the bumptree, a little-used connectionist model. As well as being computationally efficient, the bumptree is shown to be more amenable to genetic coding than other NN models. A hierarchical genetic coding scheme is developed for the bumptree and shown to have low redundancy, as well as being complete and closed with respect to the search space. When applied to optimising bumptree architectures for classification problems the GA discovers bumptrees which significantly out-perform those constructed using a standard algorithm. The fields of artificial life, control and robotics are identified as likely application areas for the evolutionary optimisation of NNs. An artificial life case-study is presented and discussed. Experiments are reported which show that the GA-bumptree is able to learn simulated pole balancing and car parking tasks using only limited environmental feedback. A simple modification of the fitness function allows the GA-bumptree to learn mappings which are multi-modal, such as robot arm inverse kinematics. The dynamics of the 'geographic speciation' selection model used by the GA-bumptree are investigated empirically and the convergence profile is introduced as an analytical tool. The relationships between the rate of genetic convergence and the phenomena of speciation, genetic drift and punctuated equilibrium are discussed. The importance of genetic linkage to GA design is discussed and two new recombination operators are introduced. The first, linkage mapped crossover (LMX), is shown to be a generalisation of existing crossover operators. LMX provides a new framework for incorporating prior knowledge into GAs. Its adaptive form, ALMX, is shown to be able to infer linkage relationships automatically during genetic search.
9

De Jongh, Albert. "Neural network ensembles." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/50035.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (MSc)--Stellenbosch University, 2004.
ENGLISH ABSTRACT: It is possible to improve on the accuracy of a single neural network by using an ensemble of diverse and accurate networks. This thesis explores diversity in ensembles and looks at the underlying theory and mechanisms employed to generate and combine ensemble members. Bagging and boosting are studied in detail and I explain their success in terms of well-known theoretical instruments. An empirical evaluation of their performance is conducted and I compare them to a single classifier and to each other in terms of accuracy and diversity.
AFRIKAANSE OPSOMMING (translated): It is possible to improve on the accuracy of a single neural network by using an ensemble of diverse and accurate networks. This thesis investigates diversity in ensembles, as well as the mechanisms by which members of an ensemble can be created and combined. The algorithms "bagging" and "boosting" are studied in depth, and their success is explained in terms of well-known theoretical instruments. The performance of these two algorithms is measured experimentally, and their accuracy and diversity are compared with those of a single network.
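The bagging procedure this thesis studies can be sketched in a few lines: train each ensemble member on a bootstrap resample of the data and combine predictions by majority vote. The base learner below is a trivial threshold rule invented purely for illustration, not anything from the thesis:

```python
import random
from collections import Counter

def bootstrap(data, rng):
    """Sample len(data) items with replacement."""
    return [rng.choice(data) for _ in data]

def train_threshold(data):
    """Toy base learner: classify x as 1 if above the mean of the training xs."""
    mean = sum(x for x, _ in data) / len(data)
    return lambda x: 1 if x > mean else 0

def bag(data, n_members=5, seed=0):
    """Bagging: each member sees its own bootstrap resample of the data."""
    rng = random.Random(seed)
    members = [train_threshold(bootstrap(data, rng)) for _ in range(n_members)]
    def predict(x):  # combine by majority vote over the ensemble
        votes = Counter(m(x) for m in members)
        return votes.most_common(1)[0][0]
    return predict

data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
predict = bag(data)
```

Because each member trains on a different resample, the members disagree on borderline points, which is exactly the diversity the thesis argues an accurate ensemble needs.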
10

Nareshkumar, Nithyalakshmi. "Simultaneous versus Successive Learning in Neural Networks." Miami University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=miami1134068959.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Neural networks (Computer science)":

1

Picton, Philip. Neural networks. New York: Palgrave, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Abdi, Hervé. Neural networks. Thousand Oaks, Calif: Sage Publications, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Taylor, John (1931-), and UNICOM Seminars, eds. Neural networks. Henley-on-Thames: A. Waller, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bischof, Horst. Pyramidal neural networks. Mahwah, NJ: Lawrence Erlbaum Associates, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Skapura, David M. Building neural networks. New York, N.Y: ACM Press, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Vandewalle, J. (1948-), and T. Roska, eds. Cellular neural networks. Chichester, England: Wiley, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kwon, Seoyun J. Artificial neural networks. Hauppauge, N.Y: Nova Science Publishers, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Maass, Wolfgang, and Christopher M. Bishop, eds. Pulsed neural networks. Cambridge, Mass: MIT Press, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hoffmann, Norbert. Simulating neural networks. Wiesbaden: Vieweg, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Caudill, Maureen. Understanding neural networks: Computer explorations. Cambridge, Mass: MIT Press, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Neural networks (Computer science)":

1

ElAarag, Hala. "Neural Networks." In SpringerBriefs in Computer Science, 11–16. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-4893-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Siegelmann, Hava T. "Recurrent neural networks." In Computer Science Today, 29–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0015235.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ertel, Wolfgang. "Neural Networks." In Undergraduate Topics in Computer Science, 221–56. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-299-5_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ertel, Wolfgang. "Neural Networks." In Undergraduate Topics in Computer Science, 245–87. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58487-4_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Feldman, Jerome A. "Neural Networks and Computer Science." In Opportunities and Constraints of Parallel Computing, 37–38. New York, NY: Springer US, 1989. http://dx.doi.org/10.1007/978-1-4613-9668-0_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kruse, Rudolf, Christian Borgelt, Christian Braune, Sanaz Mostaghim, and Matthias Steinbrecher. "General Neural Networks." In Texts in Computer Science, 37–46. London: Springer London, 2016. http://dx.doi.org/10.1007/978-1-4471-7296-3_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kruse, Rudolf, Christian Borgelt, Frank Klawonn, Christian Moewes, Matthias Steinbrecher, and Pascal Held. "General Neural Networks." In Texts in Computer Science, 37–46. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-5013-8_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kruse, Rudolf, Sanaz Mostaghim, Christian Borgelt, Christian Braune, and Matthias Steinbrecher. "General Neural Networks." In Texts in Computer Science, 39–52. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-42227-1_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Betti, Alessandro, Marco Gori, and Stefano Melacci. "Foveated Neural Networks." In SpringerBriefs in Computer Science, 63–72. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-90987-1_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Betti, Alessandro, Marco Gori, and Stefano Melacci. "Foveated Neural Networks." In SpringerBriefs in Computer Science, 63–72. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-90987-1_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Neural networks (Computer science)":

1

Doncow, Sergey, Leonid Orbachevskyi, Valentin Birukow, and Nina V. Stepanova. "Artificial Kohonen's neural networks for computer capillarometry." In Optical Information Science and Technology, edited by Andrei L. Mikaelian. SPIE, 1998. http://dx.doi.org/10.1117/12.304962.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Shastri, Bhavin J., Volker Sorger, and Nir Rotenberg. "In situ Training of Silicon Photonic Neural Networks: from Classical to Quantum." In CLEO: Science and Innovations. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/cleo_si.2023.sm4j.1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Photonic neural networks perform ultrafast inference operations but are trained on slow computers. We highlight on-chip network training enabled by silicon photonics. We introduce quantum photonic neural networks and discuss the role of weak nonlinearities.
3

Nowak, Jakub, Marcin Korytkowski, and Rafał Scherer. "Classification of Computer Network Users with Convolutional Neural Networks." In 2018 Federated Conference on Computer Science and Information Systems. IEEE, 2018. http://dx.doi.org/10.15439/2018f321.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Dias, L. P., J. J. F. Cerqueira, K. D. R. Assis, and R. C. Almeida. "Using artificial neural network in intrusion detection systems to computer networks." In 2017 9th Computer Science and Electronic Engineering (CEEC). IEEE, 2017. http://dx.doi.org/10.1109/ceec.2017.8101615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Araújo, Georger, and Célia Ralha. "Computer Forensic Document Clustering with ART1 Neural Networks." In The Sixth International Conference on Forensic Computer Science. ABEAT, 2011. http://dx.doi.org/10.5769/c2011011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Huiran, and Ruifang Ma. "Optimization of Neural Networks for Network Intrusion Detection." In 2009 First International Workshop on Education Technology and Computer Science. IEEE, 2009. http://dx.doi.org/10.1109/etcs.2009.102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sakas, D. P., D. S. Vlachos, T. E. Simos, Theodore E. Simos, and George Psihoyios. "Fuzzy Neural Networks for Decision Support in Negotiation." In INTERNATIONAL ELECTRONIC CONFERENCE ON COMPUTER SCIENCE. AIP, 2008. http://dx.doi.org/10.1063/1.3037115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

"Speech Emotion Recognition using Convolutional Neural Networks and Recurrent Neural Networks with Attention Model." In 2019 the 9th International Workshop on Computer Science and Engineering. WCSE, 2019. http://dx.doi.org/10.18178/wcse.2019.06.044.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mendez, Arturo J., Emilio G. Rosello, Maria J. Lado, Jacinto G. Dacosta, David M. Torres, and Manuel P. Cota. "IMO.Net Artificial Neural Networks: an object-oriented reusable software component library to integrate Matlab Neural Networks functionality." In Proceedings. 7th Mexican International Conference on Computer Science. IEEE, 2006. http://dx.doi.org/10.1109/enc.2006.18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Miao, Lihua. "Chaotification of Cardiac Dynamics Based on Fuzzy BP Neural Networks." In Computer Science and Technology 2015. Science & Engineering Research Support soCiety, 2015. http://dx.doi.org/10.14257/astl.2015.81.17.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Neural networks (Computer science)":

1

Markova, Oksana, Serhiy Semerikov, and Maiia Popel. CoCalc as a Learning Tool for Neural Network Simulation in the Special Course “Foundations of Mathematic Informatics”. Sun SITE Central Europe, May 2018. http://dx.doi.org/10.31812/0564/2250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The role of neural network modeling in the learning content of the special course “Foundations of Mathematic Informatics” was discussed. The course was developed for students of technical universities, future IT specialists, and is directed at bridging the gap between theoretical computer science and its applications: software, system and computing engineering. CoCalc was justified as a learning tool for mathematical informatics in general and neural network modeling in particular. The elements of a technique for using CoCalc when studying the topic “Neural network and pattern recognition” of the special course “Foundations of Mathematic Informatics” are shown. The program code was presented in CoffeeScript and implements the basic components of an artificial neural network: neurons, synaptic connections, activation functions (tangential, sigmoid, stepped) and their derivatives, methods of calculating the network's weights, etc. The features of applying the Kolmogorov–Arnold representation theorem to determining the architecture of multilayer neural networks were discussed. The implementation of the disjunctive logical element and the approximation of an arbitrary function using a three-layer neural network were given as examples. According to the simulation results, a conclusion was made about the limits within which the constructed networks retain their adequacy. Framework topics for individual research on artificial neural networks are proposed.
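The basic components listed in this abstract (neurons, synaptic weights, activation functions and their derivatives) can be sketched as follows. This is a Python analogue of the CoffeeScript code the report describes, with weights and inputs invented for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)       # needed when computing weight updates

def tanh_derivative(x):
    return 1.0 - math.tanh(x) ** 2   # derivative of the tangential activation

def step(x):
    return 1.0 if x >= 0 else 0.0    # stepped activation (no useful derivative)

def neuron(inputs, weights, bias, activation=sigmoid):
    """One neuron: weighted sum of inputs plus bias, passed through an activation."""
    return activation(sum(w * x for w, x in zip(weights, inputs)) + bias)

# Illustrative values only.
y = neuron([1.0, 0.5], [0.4, -0.6], 0.1)
```

Three such neurons arranged in layers give the three-layer network the report uses to approximate an arbitrary function, in line with the Kolmogorov–Arnold representation theorem it cites.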
2

Semerikov, Serhiy, Illia Teplytskyi, Yuliia Yechkalo, Oksana Markova, Vladimir Soloviev, and Arnold Kiv. Computer Simulation of Neural Networks Using Spreadsheets: Dr. Anderson, Welcome Back. [n.p.], June 2019. http://dx.doi.org/10.31812/123456789/3178.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The authors of the given article continue the series presented by the 2018 paper “Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot”. This time, they consider mathematical informatics as the basis of fundamentalizing higher engineering education. Mathematical informatics deals with smart simulation, information security, long-term data storage and big data management, artificial intelligence systems, etc. The authors suggest studying the basic principles of mathematical informatics by applying cloud-oriented means of various levels, including those traditionally considered supplementary, such as spreadsheets. The article considers ways of building neural network models in cloud-oriented spreadsheets (Google Sheets). The model is based on the problem of classifying the multi-dimensional data provided in “The Use of Multiple Measurements in Taxonomic Problems” by R. A. Fisher. Edgar Anderson's role in collecting and preparing the data in the 1920s-1930s is discussed, as well as some peculiarities of the data selection. Data are also presented on Anderson's method of representing multi-dimensional data in the form of an ideograph, considered one of the first efficient ways of data visualization.
3

Grossberg, Stephen. Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics. Fort Belvoir, VA: Defense Technical Information Center, October 1987. http://dx.doi.org/10.21236/ada189981.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Semerikov, Serhiy O., Illia O. Teplytskyi, Yuliia V. Yechkalo, and Arnold E. Kiv. Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot. [N.p.], November 2018. http://dx.doi.org/10.31812/123456789/2648.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The article substantiates the necessity of developing training methods for computer simulation of neural networks in the spreadsheet environment. A systematic review of their application to simulating artificial neural networks is performed. The authors distinguish basic approaches to solving the problem of network computer simulation training in the spreadsheet environment: joint application of spreadsheets and tools of neural network simulation; application of third-party add-ins to spreadsheets; development of macros using the embedded languages of spreadsheets; use of standard spreadsheet add-ins for non-linear optimization; and creation of neural networks in the spreadsheet environment without add-ins and macros. After analyzing a collection of writings from 1890 to 1950, the research determines the role of the scientific journal “Bulletin of Mathematical Biophysics”, its founder Nicolas Rashevsky and the scientific community around the journal in creating and developing the models and methods of computational neuroscience. The psychophysical basics of creating neural networks, the mathematical foundations of neural computing and the methods of neuroengineering (image recognition, in particular) are identified. The role of Walter Pitts in combining the descriptive and quantitative theories of training is discussed. It is shown that to acquire neural simulation competences in the spreadsheet environment, one should master the models based on the historical and genetic approach. Three groups of models are identified as promising in terms of developing corresponding methods: the continuous two-factor model of Rashevsky, the discrete model of McCulloch and Pitts, and the discrete-continuous models of Householder and Landahl.
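Of the three model groups named above, the discrete McCulloch–Pitts model is the simplest to sketch. A hypothetical Python rendering (the article itself works in spreadsheets, so this is only an illustration of the model, not of the authors' worksheets):

```python
# Discrete McCulloch-Pitts neuron (1943): an active inhibitory input
# vetoes firing outright; otherwise the neuron fires when the count of
# active excitatory inputs reaches the threshold.
def mp_neuron(excitatory, inhibitory, threshold):
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Classic logic gates, each realized by a single M-P neuron:
AND = lambda a, b: mp_neuron([a, b], [], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [], threshold=1)
NOT = lambda a:    mp_neuron([1], [a], threshold=1)

print(AND(1, 1), OR(0, 1), NOT(1))
```

Since the neuron is just a count-and-compare rule, it maps onto a single spreadsheet formula per cell, which is why this model group suits the spreadsheet-based training the article advocates.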
5

Farhi, Edward, and Hartmut Neven. Classification with Quantum Neural Networks on Near Term Processors. Web of Open Science, December 2020. http://dx.doi.org/10.37686/qrl.v1i2.80.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We introduce a quantum neural network, QNN, that can represent labeled data, classical or quantum, and be trained by supervised learning. The quantum circuit consists of a sequence of parameter-dependent unitary transformations which act on an input quantum state. For binary classification, a single Pauli operator is measured on a designated readout qubit. The measured output is the quantum neural network's predictor of the binary label of the input state. We show through classical simulation that parameters can be found that allow the QNN to learn to correctly distinguish the two data sets. We then discuss presenting the data as quantum superpositions of computational basis states corresponding to different label values. Here we show through simulation that learning is possible. We consider using our QNN to learn the label of a general quantum state, and we show by example that this can be done. Our work is exploratory and relies on the classical simulation of small quantum systems. The QNN proposed here was designed with near-term quantum processors in mind. It will therefore be possible to run this QNN on a near-term gate-model quantum computer, where its power can be explored beyond what simulation allows.
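The ingredients the abstract names – parameter-dependent unitaries acting on an input state, and a Pauli-Z measurement on a readout qubit whose sign gives the predicted label – can be simulated classically for a single qubit. This is a toy NumPy sketch, not the paper's circuits; the choice of a Y-rotation generator is an arbitrary assumption for illustration.

```python
import numpy as np

# Pauli matrices: Y generates the parameterized rotation, Z is measured
# on the readout qubit.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def u(theta, generator):
    # exp(-i * theta * G / 2) for a Pauli generator G (using G @ G = I).
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * generator

def predict(theta, state):
    # Apply the parameter-dependent unitary, then take the Z expectation
    # value; its sign is the binary-label predictor.
    out = u(theta, Y) @ state
    return float(np.real(out.conj() @ (Z @ out)))

ket0 = np.array([1, 0], dtype=complex)
# theta = 0 leaves |0> alone (label +1); theta = pi rotates it to |1>
# (label -1), so the single parameter already separates the two labels.
print(round(predict(0.0, ket0)), round(predict(np.pi, ket0)))
```

Training in the paper's setting amounts to adjusting the angles of many such unitaries so that the sign of the readout expectation matches the labels of the training states.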
6

Willson. L51756 State of the Art Intelligent Control for Large Engines. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), September 1996. http://dx.doi.org/10.55274/r0010423.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Computers have become a vital part of the control of pipeline compressors and compressor stations. For many tasks, computers have helped to improve accuracy, reliability, and safety, and have reduced operating costs. Computers excel at repetitive, precise tasks that humans perform poorly: calculation, measurement, statistical analysis, control, etc. At compressor stations they perform such precise tasks as engine/turbine speed control, ignition control, horsepower estimation, and control of complicated sequences of events during startup and/or shutdown. Computers, however, perform very poorly at other tasks that humans find trivial. A discussion of the differences in the way humans and computers process information is crucial to an understanding of the field of artificial intelligence. In this project, several artificial intelligence / intelligent control systems were examined: heuristic search techniques, adaptive control, expert systems, fuzzy logic, neural networks, and genetic algorithms. Of these, neural networks showed the most potential for use on large-bore engines because of their ability to recognize patterns in incomplete, noisy data. Two sets of experimental tests were conducted to test the predictive capabilities of neural networks. The first involved predicting the ignition timing from combustion pressure histories; the best networks responded within a specified tolerance level 90% to 98.8% of the time. In the second experiment, neural networks were used to predict NOx, A/F ratio, and fuel consumption. NOx prediction accuracy was 91.4%, A/F ratio accuracy was 82.9%, and fuel consumption accuracy was 52.9%. This report documents the assessment of the state of the art of artificial intelligence for application to the monitoring and control of large-bore natural gas engines.
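The evaluation metric quoted above (the fraction of predictions falling within a specified tolerance of the measured value) is straightforward to sketch. The timing values and tolerance below are hypothetical, chosen only to show the computation.

```python
# Fraction of predictions within a fixed tolerance of the target -
# the scoring used for the ignition-timing networks in the report.
def within_tolerance_rate(predictions, targets, tolerance):
    hits = sum(1 for p, t in zip(predictions, targets) if abs(p - t) <= tolerance)
    return hits / len(predictions)

# Hypothetical predicted vs. measured ignition timing (degrees BTDC).
predicted_timing = [19.8, 20.4, 21.1, 18.9, 20.0]
measured_timing  = [20.0, 20.0, 20.5, 19.5, 20.1]
print(within_tolerance_rate(predicted_timing, measured_timing, tolerance=0.5))
```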
7

Modlo, Yevhenii O., Serhiy O. Semerikov, Ruslan P. Shajda, Stanislav T. Tolmachev, and Oksana M. Markova. Methods of using mobile Internet devices in the formation of the general professional component of bachelor in electromechanics competency in modeling of technical objects. [N.p.], July 2020. http://dx.doi.org/10.31812/123456789/3878.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The article describes the components of methods of using mobile Internet devices in the formation of the general professional component of bachelor in electromechanics competency in modeling of technical objects: using various methods of representing models; solving professional problems using ICT; competence in electric machines; and critical thinking. Based on the content of the academic disciplines “Higher Mathematics”, “Automatic Control Theory”, “Modeling of Electromechanical Systems” and “Electrical Machines”, the features of using Scilab, SageCell, Google Sheets and Xcos on Cloud in forming this competency are disclosed. It is concluded that it is advisable to use the following software for mobile Internet devices: cloud-based spreadsheets as modeling tools (including for neural networks); visual modeling systems as a means of structural modeling of technical objects; a mobile computer mathematics system used at all stages of modeling; and mobile communication tools for organizing joint modeling activities.
8

SAINI, RAVINDER, AbdulKhaliq Alshadid, and Lujain Aldosari. Investigation on the application of artificial intelligence in prosthodontics. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, December 2022. http://dx.doi.org/10.37766/inplasy2022.12.0096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Review question / Objective: 1. Which artificial intelligence techniques are practiced in dentistry? 2. How is AI improving the diagnosis, clinical decision making, and outcomes of dental treatment? 3. What are the current clinical applications and diagnostic performance of AI in the field of prosthodontics? Condition being studied: Procedures for desktop design and fabrication, computer-aided design and manufacturing (CAD/CAM) in particular, have made their way into routine healthcare and laboratory practice. Based on flat imagery, artificial intelligence may also be utilized to forecast the debonding of dental restorations. Dental arches in removable prosthodontics may be categorized using convolutional neural networks (CNN). By properly positioning the teeth, machine learning in CAD/CAM software can reestablish healthy inter-maxillary connections. AI may assist with accurate color matching in challenging cosmetic scenarios that involve a single central incisor or many front teeth. Intraoral detectors can identify implant placements in implant prosthodontics and instantly input them into CAD software. The design and execution of dental implants could potentially be improved by utilizing AI.
9

Johansen, Richard, Alan Katzenmeyer, Kaytee Pokrzywinski, and Molly Reif. A review of sensor-based approaches for monitoring rapid response treatments of cyanoHABs. Engineer Research and Development Center (U.S.), July 2023. http://dx.doi.org/10.21079/11681/47261.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Water quality sensors are dynamic and vary greatly both in terms of utility and data acquisition. Data collection can range from single-parameter and one-dimensional to highly complex, multiparameter, and spatiotemporal. Likewise, the analytical and statistical approaches range from relatively simple (e.g., linear regression) to more complex (e.g., artificial neural networks). Therefore, the decision to implement a particular water quality monitoring strategy depends upon many factors and varies widely. The purpose of this review was to document the current scientific literature to identify and compile approaches for water quality monitoring as well as statistical methodologies required to analyze and visualize highly diverse spatiotemporal water quality data. The literature review identified two broad categories: (1) sensor-based approaches for monitoring rapid response treatments of cyanobacterial harmful algal blooms (cyanoHABs), and (2) analytical tools and techniques to analyze complex high-resolution spatial and temporal water quality data. The ultimate goal of this review is to provide the current state of the science as an array of scalable approaches, spanning from simple and practical to complex and comprehensive, thus equipping the US Army Corps of Engineers (USACE) water quality managers with options for technology-analysis combinations that best fit their needs.
10

Seginer, Ido, James Jones, Per-Olof Gutman, and Eduardo Vallejos. Optimal Environmental Control for Indeterminate Greenhouse Crops. United States Department of Agriculture, August 1997. http://dx.doi.org/10.32747/1997.7613034.bard.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Increased world competition, as well as increased concern for the environment, drives all manufacturing systems, including greenhouses, towards high-precision operation. Optimal control is an important tool to achieve this goal, since it finds the best compromise between conflicting demands, such as higher profits and environmental concerns. The report, which is a collection of papers, each with its own abstract, outlines an approach for optimal, model-based control of the greenhouse environment. A reliable crop model is essential for this approach, and a significant portion of the effort went in this direction, resulting in a radically new version of the tomato model TOMGRO, which can be used as a prototype model for other greenhouse crops. Truly optimal control of a very complex system requires prohibitively large computer resources. Two routes to model simplification have therefore been tried: model reduction (to fewer state variables) and simplified decision making. Crop model reduction from nearly 70 state variables to about 5 was accomplished by either selecting a subset of the original variables or by forming combinations of them. Model dynamics were then fitted either with mechanistic relationships or with neural networks. To simplify the decision making process, the number of costate variables (control policy parameters) was reduced to one or two. The dry-matter state variable was transformed in such a way that its costate became essentially constant throughout the season. A quasi-steady-state control algorithm was implemented in an experimental greenhouse. A constant value for the dry-matter costate was able to control simultaneously ventilation and CO2 enrichment by continuously producing weather-dependent optimal setpoints and then maintaining them closely.

To the bibliography