Ready-made bibliography on the topic "Artificial neural network"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Artificial neural network".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication as a ".pdf" file and read its abstract online, whenever such details are available in the work's metadata.

Journal articles on the topic "Artificial neural network"

1

CVS, Rajesh, and Nadikoppula Pardhasaradhi. "Analysis of Artificial Neural-Network". International Journal of Trend in Scientific Research and Development Volume-2, Issue-6 (31 October 2018): 418–28. http://dx.doi.org/10.31142/ijtsrd18482.

2

O., Sheeba, Jithin George, Rajin P. K., Nisha Thomas and Thomas George. "Glaucoma Detection Using Artificial Neural Network". International Journal of Engineering and Technology 6, no. 2 (2014): 158–61. http://dx.doi.org/10.7763/ijet.2014.v6.687.

3

Al-Abaid, Shaimaa Abbas. "Artificial Neural Network Based Image Encryption Technique". Journal of Advanced Research in Dynamical and Control Systems 12, SP3 (28 February 2020): 1184–89. http://dx.doi.org/10.5373/jardcs/v12sp3/20201365.

4

Gupta, Sakshi. "Concrete Mix Design Using Artificial Neural Network". Journal on Today's Ideas-Tomorrow's Technologies 1, no. 1 (3 June 2013): 29–43. http://dx.doi.org/10.15415/jotitt.2013.11003.

5

Al-Rawi, Kamal R., and Consuelo Gonzalo. "Adaptive Pointing Theory (APT) Artificial Neural Network". International Journal of Computer and Communication Engineering 3, no. 3 (2014): 212–15. http://dx.doi.org/10.7763/ijcce.2014.v3.322.

6

Jung, Jisoo, and Ji Won Yoon. "Author Identification Using Artificial Neural Network". Journal of the Korea Institute of Information Security and Cryptology 26, no. 5 (31 October 2016): 1191–99. http://dx.doi.org/10.13089/jkiisc.2016.26.5.1191.

7

Mahat, Norpah, Nor Idayunie Nording, Jasmani Bidin, Suzanawati Abu Hasan and Teoh Yeong Kin. "Artificial Neural Network (ANN) to Predict Mathematics Students’ Performance". Journal of Computing Research and Innovation 7, no. 1 (30 March 2022): 29–38. http://dx.doi.org/10.24191/jcrinn.v7i1.264.

Abstract:
Predicting students’ academic performance is essential to producing high-quality students. The main goal is to continuously help students increase their ability in the learning process and to help educators improve their teaching skills. Therefore, this study was conducted to predict mathematics students’ performance using an Artificial Neural Network (ANN). Secondary data from 382 mathematics students, taken from the UCI Machine Learning Repository Data Sets, were used to train the neural networks. The neural network model was built using nntool. Two inputs are used, the first- and second-period grades, and one target output, the final grade. This study also aims to identify which training function is the best among three Feed-Forward Neural Networks known as Network1, Network2 and Network3. Three types of training functions were selected in this study: Levenberg-Marquardt (TRAINLM), Gradient descent with momentum (TRAINGDM) and Gradient descent with adaptive learning rate (TRAINGDA). Each training function was compared based on Performance value, correlation coefficient, gradient and epoch. MATLAB R2020a was used for data processing. The results show that the TRAINLM function is the most suitable for predicting mathematics students’ performance because it has a higher correlation coefficient and a lower Performance value.
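The training-function comparison above was done in MATLAB (nntool, with TRAINLM/TRAINGDM/TRAINGDA). As a rough stdlib-only illustration of one of the compared ideas, plain gradient descent versus gradient descent with momentum, the sketch below fits a single linear neuron to synthetic grade data. The data, constants, and function names here are invented for illustration and are not taken from the paper.

```python
import random

random.seed(0)

# Synthetic stand-in for the paper's data (the authors trained on 382 UCI
# student records): two period grades predict a final grade.
data = [(g1, g2, 0.4 * g1 + 0.6 * g2)
        for g1, g2 in ((random.uniform(0, 20), random.uniform(0, 20))
                       for _ in range(200))]
n = len(data)

def train_neuron(momentum=0.0, epochs=200, lr=0.001):
    """Fit y = w1*g1 + w2*g2 + b by batch gradient descent; return final MSE."""
    w1 = w2 = b = 0.0
    v1 = v2 = vb = 0.0  # velocity terms; momentum=0.0 is plain gradient descent
    for _ in range(epochs):
        d1 = d2 = db = 0.0
        for g1, g2, y in data:
            err = (w1 * g1 + w2 * g2 + b) - y
            d1 += err * g1
            d2 += err * g2
            db += err
        v1 = momentum * v1 - lr * d1 / n
        v2 = momentum * v2 - lr * d2 / n
        vb = momentum * vb - lr * db / n
        w1 += v1
        w2 += v2
        b += vb
    return sum(((w1 * g1 + w2 * g2 + b) - y) ** 2 for g1, g2, y in data) / n

mse_plain = train_neuron(momentum=0.0)     # plain gradient descent
mse_momentum = train_neuron(momentum=0.9)  # gradient descent with momentum
print(mse_plain, mse_momentum)
```

Both runs drive the error close to zero on this noiseless toy target; the paper's comparison additionally tracks correlation coefficient, gradient, and epoch count.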
8

Yashchenko, V. O. "Artificial brain. Biological and artificial neural networks, advantages, disadvantages, and prospects for development". Mathematical machines and systems 2 (2023): 3–17. http://dx.doi.org/10.34121/1028-9763-2023-2-3-17.

Abstract:
The article analyzes the problem of developing artificial neural networks within the framework of creating an artificial brain. The structure and functions of the biological brain are considered. The brain performs many functions such as controlling the organism, coordinating movements, processing information, memory, thinking, attention, and regulating emotional states, and consists of billions of neurons interconnected by a multitude of connections in a biological neural network. The structure and functions of biological neural networks are discussed, and their advantages and disadvantages are described in detail compared to artificial neural networks. Biological neural networks solve various complex tasks in real-time, which are still inaccessible to artificial networks, such as simultaneous perception of information from different sources, including vision, hearing, smell, taste, and touch, recognition and analysis of signals from the environment with simultaneous decision-making in known and uncertain situations. Overall, despite all the advantages of biological neural networks, artificial intelligence continues to progress rapidly and is gradually gaining ground on the biological brain. It is assumed that in the future, artificial neural networks will be able to approach the capabilities of the human brain and even surpass it. A comparison of human brain neural networks with artificial neural networks is carried out. Deep neural networks, their training and use in various applications are described, and their advantages and disadvantages are discussed in detail. Possible ways for further development of this direction are analyzed. The Human Brain Project, aimed at creating a computer model that imitates the functions of the human brain, and the advanced artificial intelligence project ChatGPT are briefly considered.
To develop an artificial brain, a new type of neural network is proposed – neural-like growing networks, the structure and functions of which are similar to natural biological networks. A simplified scheme of the structure of an artificial brain based on a neural-like growing network is presented in the paper.
9

JORGENSEN, THOMAS D., BARRY P. HAYNES and CHARLOTTE C. F. NORLUND. "PRUNING ARTIFICIAL NEURAL NETWORKS USING NEURAL COMPLEXITY MEASURES". International Journal of Neural Systems 18, no. 05 (October 2008): 389–403. http://dx.doi.org/10.1142/s012906570800166x.

Abstract:
This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the network to determine which connections should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet distinct from, measures used in previous research on pruning. The method shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduction of the dimensionality of the network.
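Magnitude Based Pruning, the baseline the abstract compares against, simply zeroes the connections with the smallest absolute weights. A minimal sketch on a made-up weight matrix (the paper's own information-theoretic complexity measure is not reproduced here):

```python
def magnitude_prune(weights, fraction):
    """Zero out the given fraction of weights with the smallest |w|."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * fraction)          # number of connections to remove
    threshold = flat[k - 1] if k > 0 else -1.0
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]

# Illustrative 3x3 weight matrix; real networks have one such matrix per layer.
weights = [[0.80, -0.05, 0.30],
           [-0.02, 0.60, -0.10],
           [0.01, -0.90, 0.40]]

pruned = magnitude_prune(weights, fraction=1 / 3)
print(pruned)
```

Note that ties at the threshold can prune slightly more than the requested fraction; production implementations usually rank connections explicitly instead.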
10

Begum, Afsana, Md Masiur Rahman and Sohana Jahan. "Medical diagnosis using artificial neural networks". Mathematics in Applied Sciences and Engineering 5, no. 2 (4 June 2024): 149–64. http://dx.doi.org/10.5206/mase/17138.

Abstract:
Medical diagnosis using Artificial Neural Networks (ANN) and computer-aided diagnosis with deep learning is currently a very active research area in medical science. In recent years, neural network models have been broadly considered for medical diagnosis, since they are ideal for recognizing different kinds of diseases including autism, cancer, tumors, lung infection, etc. It is evident that early diagnosis of any disease is vital for successful treatment and improved survival rates. In this research, five neural networks, Multilayer neural network (MLNN), Probabilistic neural network (PNN), Learning vector quantization neural network (LVQNN), Generalized regression neural network (GRNN), and Radial basis function neural network (RBFNN), have been explored. These networks are applied to several benchmark data sets collected from the University of California Irvine (UCI) Machine Learning Repository. Results from numerical experiments indicate that each network excels at recognizing specific physical issues. In the majority of cases, both the Learning Vector Quantization Neural Network and the Probabilistic Neural Network demonstrate superior performance compared to the other networks.

Doctoral dissertations on the topic "Artificial neural network"

1

BRUCE, WILLIAM, and OTTER EDVIN VON. "Artificial Neural Network Autonomous Vehicle: Artificial Neural Network controlled vehicle". Thesis, KTH, Maskinkonstruktion (Inst.), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191192.

Abstract:
This thesis aims to explain how an Artificial Neural Network algorithm can be used as a means of control for an Autonomous Vehicle. It describes the theory behind neural networks and Autonomous Vehicles, and how a prototype with a camera as its only input can be designed to test and evaluate the algorithm's capabilities, and also drive using it. The thesis shows that the Artificial Neural Network can, with an image resolution of 100 × 100 and a training set of 900 images, make decisions with a 0.78 confidence level.
2

Смаль, Богдан Віталійович. "Artificial Neural Network". Thesis, Київський національний університет технологій та дизайну, 2017. https://er.knutd.edu.ua/handle/123456789/7384.

3

Chambers, Mark Andrew. "Queuing network construction using artificial neural networks /". The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488193665234291.

4

Leija, Carlos Ivan. "An artificial neural network with reconfigurable interconnection network". To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2008. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

5

Alkharobi, Talal M. "Secret sharing using artificial neural network". Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/1223.

Abstract:
Secret sharing is a fundamental notion for secure cryptographic design. In a secret sharing scheme, a set of participants shares a secret among them such that only pre-specified subsets of these shares can get together to recover the secret. This dissertation introduces a neural network approach to solve the problem of secret sharing for any given access structure. Other approaches have been used to solve this problem; however, the known approaches result in an exponential increase in the amount of data that every participant needs to keep. This amount is measured by the secret sharing scheme's information rate. This work is intended to solve the problem with a better information rate.
6

Zhao, Lichen. "Random pulse artificial neural network architecture". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0006/MQ36758.pdf.

7

Parzhin, Yu, A. Rohovyi and V. Nevliudova. "Detector Artificial Neural Network. Neurobiological rationale". Thesis, ХНУРЕ, 2019. http://openarchive.nure.ua/handle/document/10037.

Abstract:
On the basis of the formulated hypotheses, an information model of a neuron-detector is suggested, the detector being one of the basic elements of a detector artificial neural network (DANN). The paper subjects the connectionist paradigm of ANN building to criticism and suggests a new presentation paradigm for ANN building and neuroelement (NE) learning. The adequacy of the suggested model is supported by the fact that it does not contradict the modern propositions of neuropsychology and neurophysiology.
8

Ng, Justin. "Artificial Neural Network-Based Robotic Control". DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1846.

Abstract:
Artificial neural networks (ANNs) are highly-capable alternatives to traditional problem solving schemes due to their ability to solve non-linear systems with a nonalgorithmic approach. The applications of ANNs range from process control to pattern recognition and, with increasing importance, robotics. This paper demonstrates continuous control of a robot using the deep deterministic policy gradients (DDPG) algorithm, an actor-critic reinforcement learning strategy, originally conceived by Google DeepMind. After training, the robot performs controlled locomotion within an enclosed area. The paper also details the robot design process and explores the challenges of implementation in a real-time system.
9

Khazanova, Yekaterina. "Experiments with Neural Network Libraries". University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1527607591612278.

10

Lukashev, A. "Basics of artificial neural networks (ANNs)". Thesis, Київський національний університет технологій та дизайну, 2018. https://er.knutd.edu.ua/handle/123456789/11353.


Books on the topic "Artificial neural network"

1

Shanmuganathan, Subana, and Sandhya Samarasinghe, eds. Artificial Neural Network Modelling. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-28495-8.

2

S, Mohan. Artificial neural network modelling. Roorkee: Indian National Committee on Hydrology, 2007.

3

Neural network models in artificial intelligence. New York: E. Horwood, 1990.

4

Zeidenberg, Matthew. Neural network models in artificial intelligence and cognition. Chichester: Ellis Horwood, 1989.

5

Bisi, Manjubala, and Neeraj Kumar Goyal. Artificial Neural Network for Software Reliability Prediction. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2017. http://dx.doi.org/10.1002/9781119223931.

6

Roberts, S. G. The evolution of artificial neural network structures. Manchester: UMIST, 1997.

7

Kattan, Ali. Artificial neural network training and software implementation techniques. Hauppauge, N.Y: Nova Science Publishers, 2011.

8

Kattan, Ali. Artificial neural network training and software implementation techniques. Hauppauge, N.Y: Nova Science Publishers, 2011.

9

North Atlantic Treaty Organization. Advisory Group for Aerospace Research and Development. Artificial neural network approaches in guidance and control. Neuilly sur Seine, France: AGARD, 1991.

10

Smith, Amelia. Predicting disease outcomes using an artificial neural network. Oxford: Oxford Brookes University, 2003.


Book chapters on the topic "Artificial neural network"

1

Zhang, Dengsheng. "Artificial Neural Network". In Texts in Computer Science, 207–42. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-17989-2_9.

2

Rathore, Heena. "Artificial Neural Network". In Mapping Biological Systems to Network Systems, 79–96. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-29782-8_7.

3

Shekhar, Shashi, and Hui Xiong. "Artificial Neural Network". In Encyclopedia of GIS, 31. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-35973-1_72.

4

Zhou, Hong. "Artificial Neural Network". In Learn Data Mining Through Excel, 163–87. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5982-5_11.

5

Ayyadevara, V. Kishore. "Artificial Neural Network". In Pro Machine Learning Algorithms, 135–65. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3564-5_7.

6

Zhang, Zhihua. "Artificial Neural Network". In Multivariate Time Series Analysis in Climate and Environmental Research, 1–35. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-67340-0_1.

7

Attew, David. "Artificial Neural Network". In Perspectives in Neural Computing, 157–66. London: Springer London, 2002. http://dx.doi.org/10.1007/978-1-4471-0151-2_18.

8

Wang, Sun-Chong. "Artificial Neural Network". In Interdisciplinary Computing in Java Programming, 81–100. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4615-0377-4_5.

9

Weik, Martin H. "artificial neural network". In Computer Science and Communications Dictionary, 65. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_860.

10

Majumder, Mrinmoy. "Artificial Neural Network". In Impact of Urbanization on Water Shortage in Face of Climatic Aberrations, 49–54. Singapore: Springer Singapore, 2015. http://dx.doi.org/10.1007/978-981-4560-73-3_3.


Conference abstracts on the topic "Artificial neural network"

1

Zheng, Shengjie, Lang Qian, Pingsheng Li, Chenggang He, Xiaoqi Qin and Xiaojian Li. "An Introductory Review of Spiking Neural Network and Artificial Neural Network: From Biological Intelligence to Artificial Intelligence". In 8th International Conference on Artificial Intelligence (ARIN 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121010.

Abstract:
Stemming from the rapid development of artificial intelligence, which has gained expansive success in pattern recognition, robotics, and bioinformatics, neuroscience is also gaining tremendous progress. A kind of spiking neural network with biological interpretability is gradually receiving wide attention, and this kind of neural network is also regarded as one of the directions toward general artificial intelligence. This review summarizes the basic properties of artificial neural networks as well as spiking neural networks. Our focus is on the biological background and theoretical basis of spiking neurons, different neuronal models, and the connectivity of neural circuits. We also review the mainstream neural network learning mechanisms and network architectures. This review hopes to attract different researchers and advance the development of brain intelligence and artificial intelligence.
2

Yang, Zhun, Adam Ishay and Joohyung Lee. "NeurASP: Embracing Neural Networks into Answer Set Programming". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/243.

Abstract:
We present NeurASP, a simple extension of answer set programs by embracing neural networks. By treating the neural network output as the probability distribution over atomic facts in answer set programs, NeurASP provides a simple and effective way to integrate sub-symbolic and symbolic computation. We demonstrate how NeurASP can make use of a pre-trained neural network in symbolic computation and how it can improve the neural network's perception result by applying symbolic reasoning in answer set programming. Also, NeurASP can make use of ASP rules to train a neural network better so that a neural network not only learns from implicit correlations from the data but also from the explicit complex semantic constraints expressed by the rules.
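The core idea described above, treating network output as a probability distribution over atomic facts and reasoning over it symbolically, can be sketched in miniature. Below, two hypothetical digit classifiers each output a distribution over {0, 1, 2}, a rule asserts that the digits sum to 2, and conditioning on the rule refines the perception of the first digit. All numbers are invented, and this toy enumeration is not NeurASP's actual answer-set machinery.

```python
from itertools import product

# Hypothetical softmax outputs: P(digit = 0), P(= 1), P(= 2) for two images.
p_img1 = [0.7, 0.2, 0.1]
p_img2 = [0.1, 0.3, 0.6]

def rule_holds(d1, d2):
    return d1 + d2 == 2  # the symbolic constraint

# Probability that the constraint is satisfied under the joint distribution.
p_rule = sum(p_img1[d1] * p_img2[d2]
             for d1, d2 in product(range(3), repeat=2)
             if rule_holds(d1, d2))

# Conditioning on the rule refines the perception of the first digit,
# mirroring how symbolic reasoning can improve a network's output.
p_d1_given_rule = [
    sum(p_img1[d1] * p_img2[d2] for d2 in range(3) if rule_holds(d1, d2)) / p_rule
    for d1 in range(3)
]
print(round(p_rule, 3), [round(p, 3) for p in p_d1_given_rule])
```

Here the rule shifts most of the posterior mass for the first digit onto 0, because the second classifier strongly believes its digit is 2.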
3

Kumari, Neha, and Vani Bhargava. "Artificial Neural Network". In 2019 International Conference on Issues and Challenges in Intelligent Computing Techniques (ICICT). IEEE, 2019. http://dx.doi.org/10.1109/icict46931.2019.8977685.

4

Mehdizadeh, Nasser S., Payam Sinaei and Ali L. Nichkoohi. "Modeling Jones’ Reduced Chemical Mechanism of Methane Combustion With Artificial Neural Network". In ASME 2010 3rd Joint US-European Fluids Engineering Summer Meeting collocated with 8th International Conference on Nanochannels, Microchannels, and Minichannels. ASMEDC, 2010. http://dx.doi.org/10.1115/fedsm-icnmm2010-31186.

Abstract:
The present work reports a way of using Artificial Neural Networks for modeling and integrating the governing chemical kinetics differential equations of Jones’ reduced chemical mechanism for methane combustion. The chemical mechanism is applicable to both diffusion and premixed laminar flames. A feed-forward multi-layer neural network is incorporated as neural network architecture. In order to find sets of input-output data, for adapting the neural network’s synaptic weights in the training phase, a thermochemical analysis is embedded to find the chemical species mole fractions. An analysis of computational performance along with a comparison between the neural network approach and other conventional methods, used to represent the chemistry, are presented and the ability of neural networks for representing a non-linear chemical system is illustrated.
5

Pryor, Connor, Charles Dickens, Eriq Augustine, Alon Albalak, William Yang Wang and Lise Getoor. "NeuPSL: Neural Probabilistic Soft Logic". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/461.

Abstract:
In this paper, we introduce Neural Probabilistic Soft Logic (NeuPSL), a novel neuro-symbolic (NeSy) framework that unites state-of-the-art symbolic reasoning with the low-level perception of deep neural networks. To model the boundary between neural and symbolic representations, we propose a family of energy-based models, NeSy Energy-Based Models, and show that they are general enough to include NeuPSL and many other NeSy approaches. Using this framework, we show how to seamlessly integrate neural and symbolic parameter learning and inference in NeuPSL. Through an extensive empirical evaluation, we demonstrate the benefits of using NeSy methods, achieving upwards of 30% improvement over independent neural network models. On a well-established NeSy task, MNIST-Addition, NeuPSL demonstrates its joint reasoning capabilities by outperforming existing NeSy approaches by up to 10% in low-data settings. Furthermore, NeuPSL achieves a 5% boost in performance over state-of-the-art NeSy methods in a canonical citation network task with up to a 40 times speed up.
6

Zhan, Tiffany. "Hyper-Parameter Tuning in Deep Neural Network Learning". In 8th International Conference on Artificial Intelligence and Applications (AI 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121809.

Abstract:
Deep learning has been increasingly used in various applications such as image and video recognition, recommender systems, image classification, image segmentation, medical image analysis, natural language processing, brain–computer interfaces, and financial time series. In deep learning, a convolutional neural network (CNN) is a regularized version of the multilayer perceptron. Multilayer perceptrons usually mean fully connected networks, that is, each neuron in one layer is connected to all neurons in the next layer. The full connectivity of these networks makes them prone to overfitting data. Typical ways of regularization, or preventing overfitting, include penalizing parameters during training or trimming connectivity. CNNs use relatively little pre-processing compared to other image classification algorithms. Given the rise in popularity and use of deep neural network learning, tuning hyper-parameters has become an increasingly prominent task in constructing efficient deep neural networks. In this paper, the tuning of deep neural network learning (DNN) hyper-parameters is explored using an evolutionary approach popularized for use in estimating solutions to problems where the problem space is too large to get an exact solution.
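An evolutionary hyper-parameter search of the kind the abstract describes can be sketched with a population of candidate configurations that are scored, selected, and mutated. The hyper-parameter names, ranges, and fitness surface below are invented stand-ins so the loop runs instantly; the paper evolves real DNN configurations scored by validation performance.

```python
import math
import random

random.seed(1)

def fitness(hp):
    # Stand-in for a validation score: peaks at lr = 0.01 and units = 64.
    return -((math.log10(hp["lr"]) + 2) ** 2) - ((hp["units"] - 64) / 64) ** 2

def random_hp():
    # Sample a learning rate log-uniformly and a hidden-layer width uniformly.
    return {"lr": 10 ** random.uniform(-5, 0), "units": random.randint(8, 256)}

def mutate(hp):
    child = dict(hp)
    child["lr"] *= 10 ** random.uniform(-0.3, 0.3)
    child["units"] = max(8, min(256, child["units"] + random.randint(-16, 16)))
    return child

population = [random_hp() for _ in range(20)]
for _ in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                            # selection
    offspring = [mutate(random.choice(survivors)) for _ in range(15)]
    population = survivors + offspring                    # elitism + mutation

best = max(population, key=fitness)
print(best)
```

Keeping the top candidates unchanged between generations (elitism) guarantees the best score never regresses, which matters when each fitness evaluation is an expensive network training run.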
7

Mahajan, R. L. "Strategies for Building Artificial Neural Network Models". In ASME 2000 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2000. http://dx.doi.org/10.1115/imece2000-1464.

Abstract:
Abstract An artificial neural network (ANN) is a massively parallel, dynamic system of processing elements, neurons, which are connected in complicated patterns to allow for a variety of interactions among the inputs to produce the desired output. It has the ability to learn directly from example data rather than by following the programmed rules based on a knowledge base. There is virtually no limit to what an ANN can predict or decipher, so long as it has been trained properly through examples which encompass the entire range of desired predictions. This paper provides an overview of such strategies needed to build accurate ANN models. Following a general introduction to artificial neural networks, the paper will describe different techniques to build and train ANN models. Step-by-step procedures will be described to demonstrate the mechanics of building neural network models, with particular emphasis on feedforward neural networks using back-propagation learning algorithm. The network structure and pre-processing of data are two significant aspects of ANN model building. The former has a significant influence on the predictive capability of the network [1]. Several studies have addressed the issue of optimal network structure. Kim and May [2] use statistical experimental design to determine an optimal network for a specific application. Bhat and McAvoy [3] propose a stripping algorithm, starting with a large network and then reducing the network complexity by removing unnecessary weights/nodes. This ‘complex-to-simple’ procedure requires heavy and tedious computation. Villiers and Bernard [4] conclude that although there is no significant difference between the optimal performance of one or two hidden layer networks, single layer networks do better classification on average. Marwah et al. [5] advocate a simple-to-complex methodology in which the training starts with the simplest ANN structure. 
The complexity of the structure is incrementally stepped up until an acceptable learning performance is obtained. Preprocessing of data can lead to substantial improvements in the training process. Kown et al. [6] propose a data pre-processing algorithm for a highly skewed data set. Marwah et al. [5] propose two different strategies for dealing with the data. For applications with a significant amount of historical data, a smart select methodology is proposed that ensures equal weighted distribution of the data over the range of the input parameters. For applications where there is a scarcity of data or where the experiments are expensive to perform, a statistical design of experiments approach is suggested. In either case, it is shown that dividing the data into training, testing and validation sets ensures an accurate ANN model that has excellent predictive capabilities. The paper also describes recently developed concepts of physical-neural network models and model transfer techniques. In the former, an ANN model is built on the data generated through the ‘first-principles’ analytical or numerical model of the process under consideration. It is shown that such a model, termed a physical-neural network model, has the accuracy of the first-principles model yet is orders of magnitude faster to execute. In recognition of the fact that such a model has all the approximations that are generally inherent in physical models for many complex processes, model transfer techniques have been developed [6] that allow economical development of accurate process equipment models. Examples from thermally-based materials processing will be described to illustrate the application of the basic concepts involved.
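The three-way data division the abstract recommends, training, testing and validation, can be sketched with a minimal shuffle-split. The 60/20/20 ratios and fixed seed below are common defaults, not values taken from the paper.

```python
import random

def split(data, train_frac=0.6, val_frac=0.2, seed=42):
    """Shuffle a copy of the data and cut it into train/validation/test sets."""
    rows = list(data)
    random.Random(seed).shuffle(rows)  # seeded so the split is reproducible
    n_train = int(len(rows) * train_frac)
    n_val = int(len(rows) * val_frac)
    return (rows[:n_train],
            rows[n_train:n_train + n_val],
            rows[n_train + n_val:])

train, val, test = split(range(100))
print(len(train), len(val), len(test))  # → 60 20 20
```

Shuffling before cutting matters when the records are ordered (say, by input parameter), since otherwise each subset would cover only part of the input range, exactly the imbalance the "smart select" strategy above is designed to avoid.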
8

Dolezel, Petr, Martin Manska, Ivan Taufer and Libor Havlicek. "Artificial neural network promotion". In 2013 International Conference on Process Control (PC). IEEE, 2013. http://dx.doi.org/10.1109/pc.2013.6581411.

9

Pande, Bhushan, Eesha Kulkarni, Priyansh Solanki and Pratvina Talele. "Augean Artificial Neural Network". In 2020 Second International Conference on Inventive Research in Computing Applications (ICIRCA). IEEE, 2020. http://dx.doi.org/10.1109/icirca48905.2020.9183276.

10

Mughaz, Dror, Michael Cohen, Sagit Mejahez, Tal Ades and Dan Bouhnik. "From an Artificial Neural Network to Teaching [Abstract]". In InSITE 2020: Informing Science + IT Education Conferences: Online. Informing Science Institute, 2020. http://dx.doi.org/10.28945/4557.

Abstract:
[This Proceedings paper was revised and published in the "Interdisciplinary Journal of e-Skills and Lifelong Learning," 16, 1-17.] Aim/Purpose: Using Artificial Intelligence with Deep Learning (DL) techniques, which mimic the action of the brain, to improve a student's grammar-learning process; finding the subject of a sentence using DL and, by way of this computer field, learning to analyze human learning processes and mistakes; in addition, showing Artificial Intelligence learning processes with and without a general overview of the problem under examination, and applying the idea of the general perspective the network gets on the sentences to derive recommendations for teaching processes. Background: We looked for common patterns between computer errors and human grammar mistakes, deduced the neural network's learning process, drew conclusions, and applied concepts from this process to the process of human learning. Methodology: We used DL technologies and research methods. After analysis, we built models from three types of complex neural networks – LSTM, Bi-LSTM, and GRU – with sequence-to-sequence architecture. We then combined the sequence-to-sequence model with an attention mechanism that gives a general overview of the input the network receives. Contribution: The cost of computer applications is lower than that of manual human effort, and a computer program is far more available than humans to perform the same task. Thus, using computer applications, we can obtain many desired examples of mistakes without having to pay humans to perform the same task. Understanding the mistakes of the machine can help us understand human mistakes, because the human brain is the model for the artificial neural network. In this way, we can facilitate the student learning process by teaching students not to make the mistakes we have seen the artificial neural network make.
We hope that with the method we have developed, it will be easier for teachers to discover common mistakes in students' work before starting to teach them. In addition, we show that a "general explanation" of the issue under study can help the teaching and learning process. Findings: We performed the test case on the Hebrew language. From the mistakes produced by the computerized neural-network model we built, we were able to classify common human errors; that is, we found a correspondence between machine mistakes and student mistakes. Recommendations for Practitioners: Use an artificial neural network to discover mistakes, and teach students not to make those mistakes. We recommend that before a teacher begins teaching a new topic, he or she give a general explanation of the problems the topic deals with and how to solve them. Recommendations for Researchers: Use machines that simulate the learning processes of the human brain, and study whether we can thus learn about human learning processes. Impact on Society: When the computer makes the same mistakes as a human would, it is very easy to learn from those mistakes and improve the study process. The fact that machines and humans make similar mistakes is a valuable insight, especially in the field of education: since we can generate and analyze computer-system errors instead of surveying humans (who make mistakes similar to those of the machine), the teaching process becomes cheaper and more efficient. Future Research: We plan to create an automatic grammar-mistake maker (for instance, by giving the artificial neural network only a tiny dataset to learn from) and ask the students to correct the errors made. In this way, the students will practice the material in a focused manner. We plan to apply these techniques to other education subfields and also to non-educational fields.
As far as we know, this is the first study to go in this direction ‒ instead of looking at organisms and building machines, looking at machines and learning about organisms.
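The attention mechanism mentioned in the Methodology above, which gives the decoder a weighted "general overview" of the whole input sequence, can be sketched in plain Python. The vectors below are toy values; the paper's actual encoders (LSTM, Bi-LSTM, GRU) are not reproduced here:

```python
import math

def softmax(scores):
    # numerically stable softmax over a list of scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, encoder_states):
    # dot-product attention: score every encoder state against the decoder
    # query, then return a context vector = weighted sum of encoder states
    scores = [sum(q * h for q, h in zip(query, state))
              for state in encoder_states]
    weights = softmax(scores)
    dim = len(encoder_states[0])
    context = [sum(w * state[d] for w, state in zip(weights, encoder_states))
               for d in range(dim)]
    return context, weights

# Toy example: three encoder states; the query most resembles the second one,
# so the second state should receive the largest attention weight.
states = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
context, weights = attention([0.0, 2.0], states)
```

The `weights` list is the "general overview": it always sums to 1, and its largest entry marks the part of the input the decoder is currently focusing on.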

Reports on the topic "Artificial neural network"

1

Powell, Bruce C. Artificial Neural Network Analysis System. Fort Belvoir, VA: Defense Technical Information Center, February 2001. http://dx.doi.org/10.21236/ada392390.

2

Sgurev, Vassil. Artificial Neural Networks as a Network Flow with Capacities. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, September 2018. http://dx.doi.org/10.7546/crabs.2018.09.12.

3

Karakowski, Joseph A., and Hai H. Phu. A Fuzzy Hypercube Artificial Neural Network Classifier. Fort Belvoir, VA: Defense Technical Information Center, October 1998. http://dx.doi.org/10.21236/ada354805.

4

Vitela, J. E., U. R. Hanebutte and J. Reifman. An artificial neural network controller for intelligent transportation systems applications. Office of Scientific and Technical Information (OSTI), April 1996. http://dx.doi.org/10.2172/219376.

5

Vela, Daniel. Forecasting latin-american yield curves: an artificial neural network approach. Bogotá, Colombia: Banco de la República, March 2013. http://dx.doi.org/10.32468/be.761.

6

Markova, Oksana, Serhiy Semerikov and Maiia Popel. CoCalc as a Learning Tool for Neural Network Simulation in the Special Course "Foundations of Mathematic Informatics". Sun SITE Central Europe, May 2018. http://dx.doi.org/10.31812/0564/2250.

Abstract:
The role of neural network modeling in the learning content of the special course "Foundations of Mathematic Informatics" was discussed. The course was developed for students of technical universities – future IT specialists – and is directed at bridging the gap between theoretical computer science and its applied fields: software, system, and computing engineering. CoCalc was justified as a learning tool for mathematical informatics in general and neural network modeling in particular. Elements of the technique of using CoCalc in studying the topic "Neural network and pattern recognition" of the special course "Foundations of Mathematic Informatics" are shown. The program code was presented in CoffeeScript and implements the basic components of an artificial neural network: neurons, synaptic connections, activation functions (tangential, sigmoid, stepped) and their derivatives, methods of calculating the network's weights, etc. Features of applying the Kolmogorov–Arnold representation theorem to determine the architecture of multilayer neural networks were discussed. The implementation of the disjunctive logical element and the approximation of an arbitrary function using a three-layer neural network were given as examples. According to the simulation results, conclusions were drawn about the limits within which the constructed networks retain their adequacy. Framework topics for individual research on artificial neural networks are proposed.
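The components listed in the abstract above – activation functions with their derivatives and the disjunctive (OR) logical element as a single stepped neuron – can be sketched briefly. The abstract's original code is in CoffeeScript; this Python rendering and its fixed weights are illustrative assumptions:

```python
import math

# Activation functions mentioned in the abstract, with their derivatives.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def tanh_prime(x):
    return 1.0 - math.tanh(x) ** 2

def step(x):
    # stepped (threshold) activation
    return 1.0 if x >= 0.0 else 0.0

def or_neuron(a, b):
    # Single neuron with fixed weights (1, 1) and bias -0.5:
    # it fires when at least one input is 1, i.e. logical disjunction.
    return step(1.0 * a + 1.0 * b - 0.5)
```

Checking the truth table (`or_neuron(0, 0)` is 0; the other three input pairs give 1) confirms that one neuron suffices for disjunction, whereas non-linearly-separable functions need the multilayer architectures the abstract discusses.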
7

Hsieh, Bernard B., and Charles L. Bartos. Riverflow/River Stage Prediction for Military Applications Using Artificial Neural Network Modeling. Fort Belvoir, VA: Defense Technical Information Center, August 2000. http://dx.doi.org/10.21236/ada382991.

8

Huang, Wenrui, and Catherine Murray. Application of an Artificial Neural Network to Predict Tidal Currents in an Inlet. Fort Belvoir, VA: Defense Technical Information Center, March 2003. http://dx.doi.org/10.21236/ada592255.

9

Fitch, J. The radon transform for data reduction, line detection, and artificial neural network preprocessing. Office of Scientific and Technical Information (OSTI), May 1990. http://dx.doi.org/10.2172/6874873.

10

Reifman, Jaques, and Javier Vitela. Artificial Neural Network Training with Conjugate Gradients for Diagnosing Transients in Nuclear Power Plants. Office of Scientific and Technical Information (OSTI), March 1993. http://dx.doi.org/10.2172/10198077.
