Journal articles on the topic "Neural networks"

Follow this link to see other types of publications on the topic: Neural networks.

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Choose the source type:

Below are the top 50 journal articles for research on the topic "Neural networks".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf and read the abstract online, when it is available in the metadata.

Browse journal articles from many scientific fields and compile a correct bibliography.

1

Navghare, Tukaram, Aniket Muley, and Vinayak Jadhav. "Siamese Neural Networks for Kinship Prediction: A Deep Convolutional Neural Network Approach". Indian Journal Of Science And Technology 17, no. 4 (January 26, 2024): 352–58. http://dx.doi.org/10.17485/ijst/v17i4.3018.

2

N, Vikram. "Artificial Neural Networks". International Journal of Research Publication and Reviews 4, no. 4 (April 23, 2023): 4308–9. http://dx.doi.org/10.55248/gengpi.4.423.37858.

3

Abdelwahed, O. H., and M. El-Sayed Wahed. "Optimizing Single Layer Cellular Neural Network Simulator using Simulated Annealing Technique with Neural Networks". Indian Journal of Applied Research 3, no. 6 (October 1, 2011): 91–94. http://dx.doi.org/10.15373/2249555x/june2013/31.

4

Sun, Zengguo, Guodong Zhao, Rafał Scherer, Wei Wei, and Marcin Woźniak. "Overview of Capsule Neural Networks". 網際網路技術學刊 23, no. 1 (January 2022): 033–44. http://dx.doi.org/10.53106/160792642022012301004.

Abstract:
As a vector transmission network structure, the capsule neural network has been one of the research hotspots in deep learning since it was proposed in 2017. In this paper, the latest research progress of capsule networks is analyzed and summarized. Firstly, we summarize the shortcomings of convolutional neural networks and introduce the basic concept of the capsule network. Secondly, we analyze and summarize the improvements in the dynamic routing mechanism and network structure of the capsule network in recent years and the combination of the capsule network with other network structures. Finally, we compile the applications of the capsule network in many fields, including computer vision, natural language, and speech processing. Our purpose in writing this article is to provide methods and means that can be used for reference in the research and practical applications of capsule networks.
5

Perfetti, R. "A neural network to design neural networks". IEEE Transactions on Circuits and Systems 38, no. 9 (1991): 1099–103. http://dx.doi.org/10.1109/31.83884.

6

Veselý, A. "Neural networks in data mining". Agricultural Economics (Zemědělská ekonomika) 49, no. 9 (March 2, 2012): 427–31. http://dx.doi.org/10.17221/5427-agricecon.

Abstract:
To possess relevant information is an inevitable condition for successful enterprising in modern business. Information can be divided into data and knowledge. How to gather, store and retrieve data is studied in database theory. Knowledge engineering centres on knowledge and studies the methods of its formalization and acquisition. Knowledge can be gained from experts, specialists in the area of interest, or it can be gained by induction from sets of data. Automatic induction of knowledge from data sets, usually stored in large databases, is called data mining. Classical methods of gaining knowledge from data sets are statistical methods. In data mining, new methods besides statistical ones are used. These new methods have their origin in artificial intelligence. They look for unknown and unexpected relations, which can be uncovered by exploring the data in a database. In the article, the utilization of modern methods of data mining is described, and especially the methods based on neural network theory are pursued. The advantages and drawbacks of applications of multilayer feed-forward neural networks and Kohonen's self-organizing maps are discussed. Kohonen's self-organizing map is the most promising neural data-mining algorithm regarding its capability to visualize high-dimensional data.
7

J, Joselin, Dinesh T, and Ashiq M. "A Review on Neural Networks". International Journal of Trend in Scientific Research and Development Volume-2, Issue-6 (October 31, 2018): 565–69. http://dx.doi.org/10.31142/ijtsrd18461.

8

Ziroyan, M. A., E. A. Tusova, A. S. Hovakimian, and S. G. Sargsyan. "Neural networks apparatus in biometrics". Contemporary problems of social work 1, no. 2 (June 30, 2015): 129–37. http://dx.doi.org/10.17922/2412-5466-2015-1-2-129-137.

9

Marton, Sascha, Stefan Lüdtke, and Christian Bartelt. "Explanations for Neural Networks by Neural Networks". Applied Sciences 12, no. 3 (January 18, 2022): 980. http://dx.doi.org/10.3390/app12030980.

Abstract:
Understanding the function learned by a neural network is crucial in many domains, e.g., to detect a model’s adaption to concept drift in online learning. Existing global surrogate model approaches generate explanations by maximizing the fidelity between the neural network and a surrogate model on a sample-basis, which can be very time-consuming. Therefore, these approaches are not applicable in scenarios where timely or frequent explanations are required. In this paper, we introduce a real-time approach for generating a symbolic representation of the function learned by a neural network. Our idea is to generate explanations via another neural network (called the Interpretation Network, or I-Net), which maps network parameters to a symbolic representation of the network function. We show that the training of an I-Net for a family of functions can be performed up-front and subsequent generation of an explanation only requires querying the I-Net once, which is computationally very efficient and does not require training data. We empirically evaluate our approach for the case of low-order polynomials as explanations, and show that it achieves competitive results for various data and function complexities. To the best of our knowledge, this is the first approach that attempts to learn mapping from neural networks to symbolic representations.
10

Rodriguez, Nathaniel, Eduardo Izquierdo, and Yong-Yeol Ahn. "Optimal modularity and memory capacity of neural reservoirs". Network Neuroscience 3, no. 2 (January 2019): 551–66. http://dx.doi.org/10.1162/netn_a_00082.

Abstract:
The neural network is a powerful computing framework that has been exploited by biological evolution and by humans for solving diverse problems. Although the computational capabilities of neural networks are determined by their structure, the current understanding of the relationships between a neural network’s architecture and function is still primitive. Here we reveal that a neural network’s modular architecture plays a vital role in determining the neural dynamics and memory performance of the network of threshold neurons. In particular, we demonstrate that there exists an optimal modularity for memory performance, where a balance between local cohesion and global connectivity is established, allowing optimally modular networks to remember longer. Our results suggest that insights from dynamical analysis of neural networks and information-spreading processes can be leveraged to better design neural networks and may shed light on the brain’s modular organization.
11

Wang, Jun. "Artificial neural networks versus natural neural networks". Decision Support Systems 11, no. 5 (June 1994): 415–29. http://dx.doi.org/10.1016/0167-9236(94)90016-7.

12

Yashchenko, V. O. "Neural-like growing networks in the development of general intelligence. Neural-like growing networks (P. II)". Mathematical machines and systems 1 (2023): 3–29. http://dx.doi.org/10.34121/1028-9763-2023-1-3-29.

Abstract:
This article is devoted to the development of artificial general intelligence (AGI) based on a new type of neural network, "neural-like growing networks". It consists of two parts. The first part was published in No. 4, 2022, and describes an artificial neural-like element (artificial neuron) whose functionality is as close as possible to that of a biological neuron. The artificial neural-like element is the main element in building neural-like growing networks. The second part deals with the structures and functions of artificial and natural neural networks. The paper proposes a new approach to creating neural-like growing networks as a means of developing AGI that is as close as possible to the natural intelligence of a person. The intelligence of humans and living organisms is formed by their nervous system. According to I. P. Pavlov's definition, the main mechanism of higher nervous activity is the reflex activity of the nervous system. In the nerve cell, the main storage of unconditioned reflexes is the deoxyribonucleic acid (DNA) molecule. The article describes ribosomal protein synthesis, which contributes to the implementation of unconditioned reflexes and the formation of conditioned reflexes as the basis for learning in biological objects. The first part of the work shows that the structure and functions of ribosomes almost completely coincide with the structure and functions of the Turing machine. Turing invented this machine to prove the fundamental (theoretical) possibility of constructing arbitrarily complex algorithms from extremely simple operations, with the operations themselves performed automatically. A striking analogy arises here: nature created DNA and the ribosome to build complex algorithms for creating biological objects and their communication with each other and with the external environment, and ribosomal protein synthesis is carried out by many ribosomes at the same time. It was concluded that the nerve cells of the brain are analog multi-machine complexes: ultra-fast molecular supercomputers with an unusually simple analog programming device.
13

Jorgensen, Thomas D., Barry P. Haynes, and Charlotte C. F. Norlund. "Pruning Artificial Neural Networks Using Neural Complexity Measures". International Journal of Neural Systems 18, no. 05 (October 2008): 389–403. http://dx.doi.org/10.1142/s012906570800166x.

Abstract:
This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the neural network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique proposed here helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduced dimensionality of the network.
14

Parks, Allen D. "Characterizing Computation in Artificial Neural Networks by their Diclique Covers and Forman-Ricci Curvatures". European Journal of Engineering Research and Science 5, no. 2 (February 13, 2020): 171–77. http://dx.doi.org/10.24018/ejers.2020.5.2.1689.

Abstract:
The relationships between the structural topology of artificial neural networks, their computational flow, and their performance are not well understood. Consequently, a unifying mathematical framework that describes computational performance in terms of their underlying structure does not exist. This paper makes a modest contribution to understanding the structure-computational flow relationship in artificial neural networks from the perspective of the dicliques that cover the structure of an artificial neural network and the Forman-Ricci curvature of an artificial neural network's connections. Special diclique cover digraph representations of artificial neural networks useful for network analysis are introduced, and it is shown that such covers generate semigroups that provide algebraic representations of neural network connectivity.
15

Parks, Allen D. "Characterizing Computation in Artificial Neural Networks by their Diclique Covers and Forman-Ricci Curvatures". European Journal of Engineering and Technology Research 5, no. 2 (February 13, 2020): 171–77. http://dx.doi.org/10.24018/ejeng.2020.5.2.1689.

Abstract:
The relationships between the structural topology of artificial neural networks, their computational flow, and their performance are not well understood. Consequently, a unifying mathematical framework that describes computational performance in terms of their underlying structure does not exist. This paper makes a modest contribution to understanding the structure-computational flow relationship in artificial neural networks from the perspective of the dicliques that cover the structure of an artificial neural network and the Forman-Ricci curvature of an artificial neural network's connections. Special diclique cover digraph representations of artificial neural networks useful for network analysis are introduced, and it is shown that such covers generate semigroups that provide algebraic representations of neural network connectivity.
16

Li, Xiao Hu, Feng Xu, Jin Hua Zhang, and Su Nan Wang. "A New Small-World Neural Network with its Performance on Fault Tolerance". Advanced Materials Research 629 (December 2012): 719–24. http://dx.doi.org/10.4028/www.scientific.net/amr.629.719.

Abstract:
Many artificial neural networks are simple simulations of the brain neural network's architecture and function. However, how to build new artificial neural networks whose architecture is similar to biological neural networks is worth studying. In this study, a new multilayer feedforward small-world neural network is presented using results from research on complex networks. Firstly, a new multilayer feedforward small-world neural network, which relies heavily on the rewiring probability, is built up on the basis of the construction ideology of the Watts-Strogatz network model and community structure. Secondly, fault tolerance is employed in investigating the performance of the new small-world neural network. When the network with connection faults or neuron damage is used to test the fault tolerance performance under different rewiring probabilities, simulation results show that the fault tolerance capability of the small-world neural network outmatches that of a regular network of the same scale when the fault probability is more than 40%, while the random network has the best fault tolerance capability.
17

Tetko, Igor V. "Neural Network Studies. 4. Introduction to Associative Neural Networks". Journal of Chemical Information and Computer Sciences 42, no. 3 (March 26, 2002): 717–28. http://dx.doi.org/10.1021/ci010379o.

18

Garcia, R. K., K. Moreira Gandra, J. M. Block, and D. Barrera-Arellano. "Neural networks to formulate special fats". Grasas y Aceites 63, no. 3 (July 5, 2012): 245–52. http://dx.doi.org/10.3989/gya.119011.

19

Veselý, A., and D. Brechlerová. "Neural networks in intrusion detection systems". Agricultural Economics (Zemědělská ekonomika) 50, no. 1 (February 24, 2012): 35–40. http://dx.doi.org/10.17221/5164-agricecon.

Abstract:
Security of an information system is a very important property, especially today, when computers are interconnected via the internet. Because no system can be absolutely secure, the timely and accurate detection of intrusions is necessary. For this purpose, Intrusion Detection Systems (IDS) were designed. There are two basic models of IDS: misuse IDS and anomaly IDS. Misuse systems detect intrusions by looking for activity that corresponds to known signatures of intrusions or vulnerabilities. Anomaly systems detect intrusions by searching for abnormal system activity. Most commercial IDS tools are misuse systems with a rule-based expert system structure. However, these techniques are less successful when attack characteristics vary from built-in signatures. Artificial neural networks offer the potential to resolve these problems. As far as anomaly systems are concerned, they are very difficult to build, because it is difficult to define the normal and abnormal behaviour of a system. Neural networks can also be used for building anomaly systems, because they can learn to discriminate the normal and abnormal behaviour of a system from examples. Therefore, they offer a promising technique for building anomaly systems. This paper presents an overview of the applicability of neural networks in building intrusion detection systems and discusses the advantages and drawbacks of neural network technology.
20

Mu, Yangzi, Mengxing Huang, Chunyang Ye, and Qingzhou Wu. "Diagnosis Prediction via Recurrent Neural Networks". International Journal of Machine Learning and Computing 8, no. 2 (April 2018): 117–20. http://dx.doi.org/10.18178/ijmlc.2018.8.2.673.

21

Sandhiya, K., M. Vidhya, M. Shivaranjani, and S. Saranya. "Smart Fruit Classification using Neural Networks". International Journal of Trend in Scientific Research and Development Volume-2, Issue-1 (December 31, 2017): 1298–303. http://dx.doi.org/10.31142/ijtsrd6986.

22

S, Pothumani, and Priya N. "Analysis of RAID in Neural Networks". Journal of Advanced Research in Dynamical and Control Systems 11, no. 0009-SPECIAL ISSUE (September 25, 2019): 589–94. http://dx.doi.org/10.5373/jardcs/v11/20192609.

23

Tymoshenko, Pavlo, Yevgen Zasoba, Olexander Kovalchuk, and Olexander Pshenychnyy. "Neuroevolutionary Algorithms for Neural Networks Generating". Herald of Khmelnytskyi National University. Technical sciences 315, no. 6(1) (December 29, 2022): 240–44. http://dx.doi.org/10.31891/2307-5732-2022-315-6-240-244.

Abstract:
Solving engineering problems using conventional neural networks requires long-term research on the choice of architecture and hyperparameters. A strong artificial intelligence would be devoid of such shortcomings. Such research is carried out using a very wide range of approaches: for example, biological (attempts to grow a brain in laboratory conditions), hardware (creating neural processors) or software (using the power of ordinary CPUs and GPUs). The goal of the work is to develop a system that would allow using evolutionary approaches to generate neural networks suitable for solving problems. This is called "neuroevolution". The purpose of this work also includes the study of the features of possible applicable evolutionary strategies. The object of research in this work is the neuroevolutionary approach to solving machine learning problems. The subject of research is evolutionary strategies and methods of encoding neural networks in the organism's genome. The scientific novelty of the work lies in the testing of previously unused evolutionary strategies and the generalization of the obtained system to systems of "general artificial intelligence". A system for simulating neuroevolution was created. The specifics of implementation were considered, the choice of algorithms was justified, and their work was explained. In order to perform experiments, datasets were created and methods of applying neuroevolutionary systems were developed. It was possible to choose the most optimal training parameters and to find out the relationship between them, as well as the accuracy and speed of training. It cannot be said that the models implemented within this work directly bring us closer to strong AI. They still lack their own memory as well as a certain level of complexity. For successful use, it is necessary to configure the form of the input data or perform some calculations outside the model. However, in the future, such a system can be developed, for example, to work with SNNs, or for use on special equipment.
24

Boonsatit, Nattakan, Santhakumari Rajendran, Chee Peng Lim, Anuwat Jirawattanapanit, and Praneesh Mohandas. "New Adaptive Finite-Time Cluster Synchronization of Neutral-Type Complex-Valued Coupled Neural Networks with Mixed Time Delays". Fractal and Fractional 6, no. 9 (September 13, 2022): 515. http://dx.doi.org/10.3390/fractalfract6090515.

Abstract:
The issue of adaptive finite-time cluster synchronization corresponding to neutral-type coupled complex-valued neural networks with mixed delays is examined in this research. A neutral-type coupled complex-valued neural network with mixed delays is more general than that of a traditional neural network, since it considers distributed delays, state delays and coupling delays. In this research, a new adaptive control technique is developed to synchronize neutral-type coupled complex-valued neural networks with mixed delays in finite time. To stabilize the resulting closed-loop system, the Lyapunov stability argument is leveraged to infer the necessary requirements on the control factors. The effectiveness of the proposed method is illustrated through simulation studies.
25

Mahat, Norpah, Nor Idayunie Nording, Jasmani Bidin, Suzanawati Abu Hasan, and Teoh Yeong Kin. "Artificial Neural Network (ANN) to Predict Mathematics Students' Performance". Journal of Computing Research and Innovation 7, no. 1 (March 30, 2022): 29–38. http://dx.doi.org/10.24191/jcrinn.v7i1.264.

Abstract:
Predicting students' academic performance is essential to produce high-quality students. The main goal is to continuously help students to increase their ability in the learning process, and to help educators improve their teaching skills. Therefore, this study was conducted to predict mathematics students' performance using an Artificial Neural Network (ANN). Secondary data from 382 mathematics students from the UCI Machine Learning Repository Data Sets were used to train the neural networks, and the neural network model was built using nntool. Two inputs are used, the first- and second-period grades, while the target output is the final grade. This study also aims to identify which training function is the best among three Feed-Forward Neural Networks, known as Network1, Network2 and Network3. Three types of training functions were selected in this study: Levenberg-Marquardt (TRAINLM), gradient descent with momentum (TRAINGDM) and gradient descent with adaptive learning rate (TRAINGDA). Each training function is compared based on performance value, correlation coefficient, gradient and epoch. MATLAB R2020a was used for data processing. The results show that the TRAINLM function is the most suitable for predicting mathematics students' performance because it has a higher correlation coefficient and a lower performance value.
26

Henzinger, Thomas A., Mathias Lechner, and Đorđe Žikelić. "Scalable Verification of Quantized Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (May 18, 2021): 3787–95. http://dx.doi.org/10.1609/aaai.v35i5.16496.

Abstract:
Formal verification of neural networks is an active topic of research, and recent advances have significantly increased the size of the networks that verification tools can handle. However, most methods are designed for verification of an idealized model of the actual network which works over real arithmetic and ignores rounding imprecisions. This idealization is in stark contrast to network quantization, which is a technique that trades numerical precision for computational efficiency and is, therefore, often applied in practice. Neglecting rounding errors of such low-bit quantized neural networks has been shown to lead to wrong conclusions about the network's correctness. Thus, the desired approach for verifying quantized neural networks would be one that takes these rounding errors into account. In this paper, we show that verifying the bit-exact implementation of quantized neural networks with bit-vector specifications is PSPACE-hard, even though verifying idealized real-valued networks and satisfiability of bit-vector specifications alone are each in NP. Furthermore, we explore several practical heuristics toward closing the complexity gap between idealized and bit-exact verification. In particular, we propose three techniques for making SMT-based verification of quantized neural networks more scalable. Our experiments demonstrate that our proposed methods allow a speedup of up to three orders of magnitude over existing approaches.
27

Zhang, Wei, Zhi Han, Xiai Chen, Baichen Liu, Huidi Jia, and Yandong Tang. "Fully Kernected Neural Networks". Journal of Mathematics 2023 (June 28, 2023): 1–9. http://dx.doi.org/10.1155/2023/1539436.

Abstract:
In this paper, we apply kernel methods to deep convolutional neural network (DCNN) to improve its nonlinear ability. DCNNs have achieved significant improvement in many computer vision tasks. For an image classification task, the accuracy comes to saturation when the depth and width of network are enough and appropriate. The saturation accuracy will not rise even by increasing the depth and width. We find that improving nonlinear ability of DCNNs can break through the saturation accuracy. In a DCNN, the former layer is more inclined to extract features and the latter layer is more inclined to classify features. Therefore, we apply kernel methods at the last fully connected layer to implicitly map features to a higher-dimensional space to improve nonlinear ability so that the network achieves better linear separability. Also, we name the network as fully kernected neural networks (fully connected neural networks with kernel methods). Our experiment result shows that fully kernected neural networks achieve higher classification accuracy and faster convergence rate than baseline networks.
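The mechanism this abstract describes, applying a kernel map at the last fully connected layer so that features are implicitly lifted to a higher-dimensional space before the final linear classification, can be illustrated with a minimal sketch. The shapes, the choice of an RBF kernel against a set of centers, and all variable names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rbf_kernel_layer(features, centers, gamma=1.0):
    """Map each feature vector to its RBF kernel similarities with a set of centers.

    This implicitly lifts features into a higher-dimensional space, which can
    make classes linearly separable for the final linear layer.
    """
    # Squared Euclidean distance between every feature vector and every center
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))      # 4 samples, 8-dim features from earlier layers
centers = rng.normal(size=(16, 8))   # 16 kernel centers (hypothetical; learned in practice)
W = rng.normal(size=(16, 3))         # final linear layer: 16 kernel features -> 3 classes

logits = rbf_kernel_layer(feats, centers) @ W
print(logits.shape)  # (4, 3)
```

The final layer stays linear; only its input changes from raw features to kernel similarities, which is one common way to graft a kernel method onto a network head.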
28

Simons, Robert, and J. G. Taylor. "Neural Networks". Journal of the Operational Research Society 47, no. 4 (April 1996): 596. http://dx.doi.org/10.2307/3010740.

29

Schier, R. "Neural networks". Radiology 191, no. 1 (April 1994): 291. http://dx.doi.org/10.1148/radiology.191.1.8134593.

30

Tafti, Mohammed H. A. "Neural networks". ACM SIGMIS Database: the DATABASE for Advances in Information Systems 23, no. 1 (March 1992): 51–54. http://dx.doi.org/10.1145/134347.134361.

31

Turega, M. A. "Neural Networks". Computer Journal 35, no. 3 (June 1, 1992): 290. http://dx.doi.org/10.1093/comjnl/35.3.290.

32

Jordan, Michael I., and Christopher M. Bishop. "Neural networks". ACM Computing Surveys 28, no. 1 (March 1996): 73–75. http://dx.doi.org/10.1145/234313.234348.

33

Dory, Robert A. "Neural Networks". Computers in Physics 4, no. 3 (1990): 324. http://dx.doi.org/10.1063/1.4822918.

34

Ganssle, Graham. "Neural networks". Leading Edge 37, no. 8 (August 2018): 616–19. http://dx.doi.org/10.1190/tle37080616.1.

Abstract:
We've all heard a proselytizing hyperbolist make the artificial-intelligence-is-going-to-steal-my-job speech. If you subscribe, look at the code in the notebook accompanying this tutorial at https://github.com/seg/tutorials-2018 . It demonstrates a small neural network. You'll find a simple system composed chiefly of multiply and add operations. That's really all that happens inside a neural network. Multiply and add. There's no magic here.
35

McGourty, Christine. "Neural networks". Nature 335, no. 6186 (September 1988): 103. http://dx.doi.org/10.1038/335103b0.

36

Simons, Robert. "Neural Networks". Journal of the Operational Research Society 47, no. 4 (April 1996): 596–97. http://dx.doi.org/10.1057/jors.1996.70.

37

Beatty, P. C. W. "Neural networks". Current Anaesthesia & Critical Care 9, no. 4 (August 1998): 168–73. http://dx.doi.org/10.1016/s0953-7112(98)80050-1.

38

Cutler, Adele. "Neural Networks". Technometrics 42, no. 4 (November 2000): 432. http://dx.doi.org/10.1080/00401706.2000.10485724.

39

Signorini, David F., Jim M. Slattery, S. R. Dodds, Victor Lane, and Peter Littlejohns. "Neural networks". Lancet 346, no. 8988 (December 1995): 1500–1501. http://dx.doi.org/10.1016/s0140-6736(95)92525-2.

40

Jefferson, Miles F., Neil Pendleton, Sam Lucas, Michael A. Horan, and Lionel Tarassenko. "Neural networks". Lancet 346, no. 8991-8992 (December 1995): 1712. http://dx.doi.org/10.1016/s0140-6736(95)92880-4.

41

Gutfreund, H. "Neural Networks". International Journal of Modern Physics B 04, no. 06 (May 1990): 1223–39. http://dx.doi.org/10.1142/s0217979290000607.

42

Medoff, Deborah R., and M.-A. Tagamets. "Neural Networks". American Journal of Psychiatry 157, no. 10 (October 2000): 1571. http://dx.doi.org/10.1176/appi.ajp.157.10.1571.

43

Lewis, David A. "Neural Networks". American Journal of Psychiatry 157, no. 11 (November 2000): 1752. http://dx.doi.org/10.1176/appi.ajp.157.11.1752.

44

Graybiel, Ann M. "Neural Networks". American Journal of Psychiatry 158, no. 1 (January 2001): 21. http://dx.doi.org/10.1176/appi.ajp.158.1.21.

45

Tamminga, Carol A., and Henry H. Holcomb. "Neural Networks". American Journal of Psychiatry 158, no. 2 (February 2001): 185. http://dx.doi.org/10.1176/appi.ajp.158.2.185.

46

Schwindling, Jerome. "Neural Networks". EPJ Web of Conferences 4 (2010): 02002. http://dx.doi.org/10.1051/epjconf/20100402002.

47

Widrow, Bernard, David E. Rumelhart, and Michael A. Lehr. "Neural networks". Communications of the ACM 37, no. 3 (March 1994): 93–105. http://dx.doi.org/10.1145/175247.175257.

48

Kock, Gerd. "Neural networks". Microprocessing and Microprogramming 38, no. 1-5 (September 1993): 679. http://dx.doi.org/10.1016/0165-6074(93)90210-c.

49

Titterington, Michael. "Neural networks". WIREs Computational Statistics 2, no. 1 (December 21, 2009): 1–8. http://dx.doi.org/10.1002/wics.50.

50

Xu, Zenglin. "Tensor Networks Meet Neural Networks". Journal of Physics: Conference Series 2278, no. 1 (May 1, 2022): 012003. http://dx.doi.org/10.1088/1742-6596/2278/1/012003.

Abstract:
As a simulation of the human cognitive system, deep neural networks have achieved great success in many machine learning tasks and are the main driving force of the current development of artificial intelligence. On the other hand, tensor networks, as an approximation of quantum many-body systems in quantum physics, are applied to quantum physics, statistical physics, quantum chemistry and machine learning. This talk will first give a brief introduction to neural networks and tensor networks, and then discuss cross-field research between deep neural networks and tensor networks, such as network compression and knowledge fusion, including our recent work on tensor neural networks. Finally, this talk will also discuss the connection to quantum machine learning.