Academic literature on the topic 'Neural Networks method'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Neural Networks method.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Neural Networks method"

1

Klyuchko, O. M. "APPLICATION OF ARTIFICIAL NEURAL NETWORKS METHOD IN BIOTECHNOLOGY." Biotechnologia Acta 10, no. 4 (August 2017): 5–13. http://dx.doi.org/10.15407/biotech10.04.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Keping, Shuang Gu, and Dongyang Yan. "A Link Prediction Method Based on Neural Networks." Applied Sciences 11, no. 11 (June 3, 2021): 5186. http://dx.doi.org/10.3390/app11115186.

Full text
Abstract:
Link prediction to optimize network performance is of great significance in network evolution. Because of the complexity of network systems and the uncertainty of network evolution, it faces many challenges. This paper proposes a new link prediction method based on neural networks trained on scale-free networks as input data, and optimized networks trained by link prediction models as output data. In order to solve the influence of the generalization of the neural network on the experiments, a greedy link pruning strategy is applied. We consider network efficiency and the proposed global network structure reliability as objectives to comprehensively evaluate link prediction performance and the advantages of the neural network method. The experimental results demonstrate that the neural network method generates the optimized networks with better network efficiency and global network structure reliability than the traditional link prediction models.
APA, Harvard, Vancouver, ISO, and other styles
3

Golubinskiy, Andrey, and Andrey Tolstykh. "Hybrid method of convolutional neural network training." Informatics and Automation 20, no. 2 (March 30, 2021): 463–90. http://dx.doi.org/10.15622/ia.2021.20.2.8.

Full text
Abstract:
The paper proposes a hybrid method for training convolutional neural networks. The method combines second- and first-order methods for different elements of the architecture of a convolutional neural network. The hybrid training method achieves significantly better convergence than Adam, yet requires fewer computational operations to implement. Using the proposed method, it is possible to train networks on which learning paralysis occurs when first-order methods are used. Moreover, the proposed method can adjust its computational complexity to the hardware on which the computation is performed; at the same time, the hybrid method allows using the mini-batch learning approach. An analysis of the ratio of computations between convolutional neural networks and fully connected artificial neural networks is presented. The mathematical apparatus of error optimization of artificial neural networks is considered, including the error back-propagation method and the Levenberg-Marquardt algorithm. The main limitations of these methods that arise when training a convolutional neural network are analyzed. The stability of the proposed method when the initialization parameters are changed is analyzed, and the results of applying the method to various problems are presented.
APA, Harvard, Vancouver, ISO, and other styles
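The Levenberg-Marquardt algorithm named in this abstract can be illustrated on a toy one-parameter least-squares fit. This is a minimal sketch of the damped Gauss-Newton update with a fixed damping term `lam` (real implementations adapt the damping per step); it is not the authors' hybrid training method.

```python
import math

def levenberg_marquardt_1d(xs, ys, a0=0.0, lam=1e-2, iters=50):
    """Fit y = exp(a*x) by a damped Gauss-Newton (Levenberg-Marquardt) update."""
    a = a0
    for _ in range(iters):
        # residuals r_i = exp(a*x_i) - y_i and Jacobian dr_i/da
        r = [math.exp(a * x) - y for x, y in zip(xs, ys)]
        J = [x * math.exp(a * x) for x in xs]
        g = sum(j * ri for j, ri in zip(J, r))  # J^T r
        h = sum(j * j for j in J)               # J^T J (a scalar here)
        a -= g / (h + lam)                      # damping lam keeps the step stable
    return a

# synthetic data generated with the true parameter a = 0.5
xs = [0.1 * i for i in range(10)]
ys = [math.exp(0.5 * x) for x in xs]
a_hat = levenberg_marquardt_1d(xs, ys)
```

On this zero-residual problem the damped Gauss-Newton step recovers the true parameter to high accuracy within a few iterations.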
4

JORGENSEN, THOMAS D., BARRY P. HAYNES, and CHARLOTTE C. F. NORLUND. "PRUNING ARTIFICIAL NEURAL NETWORKS USING NEURAL COMPLEXITY MEASURES." International Journal of Neural Systems 18, no. 05 (October 2008): 389–403. http://dx.doi.org/10.1142/s012906570800166x.

Full text
Abstract:
This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the network. This measure is used to determine which connections should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduction of the network's dimensionality.
APA, Harvard, Vancouver, ISO, and other styles
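The baseline the paper compares against, Magnitude Based Pruning, simply removes the connections with the smallest absolute weights. A minimal sketch over a flat weight list (the function name and pruning fraction are illustrative, not from the paper):

```python
def magnitude_prune(weights, fraction):
    """Zero out the given fraction of weights with the smallest magnitude."""
    n_prune = int(len(weights) * fraction)
    # indices of the n_prune smallest-magnitude weights
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0
    return pruned

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
print(magnitude_prune(w, 0.5))  # → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

The complexity-based criterion proposed in the paper replaces the `abs(weights[i])` ranking with an information-theoretic score.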
5

Peng, Yun, and Zonglin Zhou. "A neural network learning method for belief networks." International Journal of Intelligent Systems 11, no. 11 (December 7, 1998): 893–915. http://dx.doi.org/10.1002/(sici)1098-111x(199611)11:11<893::aid-int3>3.0.co;2-u.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Neruda, M., and R. Neruda. "To contemplate quantitative and qualitative water features by neural networks method." Plant, Soil and Environment 48, No. 7 (December 21, 2011): 322–26. http://dx.doi.org/10.17221/4375-pse.

Full text
Abstract:
An application deals with calibration of a neural model and a Fourier series model for the Ploučnice catchment. This approach has the advantage that the network choice is independent of the other parameters of the example. Each network and its variants (different numbers of units and hidden layers) can be connected as a black box and tested independently. The Stuttgart neural simulator SNNS and the multiagent hybrid system Bang2, developed at the Institute of Computer Science, AS CR, have been used for testing. A perceptron network has been constructed, which was trained by the back-propagation method improved with a momentum term. The network is capable of an accurate forecast of the next day's runoff based on the runoff and rainfall values from the previous day.
APA, Harvard, Vancouver, ISO, and other styles
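The "back-propagation improved with a momentum term" mentioned above boils down to a velocity-smoothed gradient update. A hedged sketch on a one-dimensional quadratic, with illustrative hyperparameters rather than the authors' settings:

```python
def sgd_momentum_step(w, grad, velocity, lr=0.1, beta=0.9):
    """One update with a momentum term: v <- beta*v - lr*grad ; w <- w + v."""
    v_new = [beta * v - lr * g for v, g in zip(velocity, grad)]
    w_new = [wi + vi for wi, vi in zip(w, v_new)]
    return w_new, v_new

# minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3)
w, v = [0.0], [0.0]
for _ in range(200):
    grad = [2.0 * (w[0] - 3.0)]
    w, v = sgd_momentum_step(w, grad, v)
# w[0] ends up close to the minimizer 3.0
```

The momentum term `beta * v` carries past gradient directions forward, damping oscillations and speeding convergence compared with plain gradient descent.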
7

Feng, Yifan, Haoxuan You, Zizhao Zhang, Rongrong Ji, and Yue Gao. "Hypergraph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3558–65. http://dx.doi.org/10.1609/aaai.v33i01.33013558.

Full text
Abstract:
In this paper, we present a hypergraph neural network (HGNN) framework for data representation learning, which can encode high-order data correlation in a hypergraph structure. Confronting the challenges of learning representation for complex data in real practice, we propose to incorporate such data structure in a hypergraph, which is more flexible on data modeling, especially when dealing with complex data. In this method, a hyperedge convolution operation is designed to handle the data correlation during representation learning. In this way, the traditional hypergraph learning procedure can be conducted using hyperedge convolution operations efficiently. HGNN is able to learn the hidden layer representation considering the high-order data structure, which is a general framework considering the complex data correlations. We have conducted experiments on citation network classification and visual object recognition tasks and compared HGNN with graph convolutional networks and other traditional methods. Experimental results demonstrate that the proposed HGNN method outperforms recent state-of-the-art methods. We can also reveal from the results that the proposed HGNN is superior when dealing with multi-modal data compared with existing methods.
APA, Harvard, Vancouver, ISO, and other styles
8

Fan, Yuanliang, Han Wu, Weiming Chen, Zeyu Jiang, Xinghua Huang, and Si-Zhe Chen. "A Data Augmentation Method to Optimize Neural Networks for Predicting SOH of Lithium Batteries." Journal of Physics: Conference Series 2203, no. 1 (February 1, 2022): 012034. http://dx.doi.org/10.1088/1742-6596/2203/1/012034.

Full text
Abstract:
Neural networks are an excellent methodology for predicting lithium battery state of health (SOH). However, if the amount of data is insufficient, the neural network will be overfitted, which decreases the prediction accuracy of SOH. To solve this issue, a data augmentation method based on random noise superposition is proposed. The original dataset is expanded in this approach, which enhances the neural network's generalization ability. Moreover, random noises simulate capacity regeneration, capacity dips and sensor errors during the actual operation of lithium batteries, which also improves the adaptivity and robustness of the SOH prediction method. The proposed method is validated on mainstream neural networks, including long short-term memory (LSTM) and gated recurrent unit (GRU) neural networks. In terms of the results, the proposed data augmentation method effectively improves the neural network generalization ability and lithium battery SOH prediction accuracy.
APA, Harvard, Vancouver, ISO, and other styles
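The random-noise superposition idea described in this abstract can be sketched in a few lines: copies of each sample are perturbed with small Gaussian noise and appended to the training set. The function name, noise level, and copy count below are illustrative assumptions, not values from the paper:

```python
import random

def augment_with_noise(samples, copies=3, sigma=0.01, seed=42):
    """Expand a dataset by superimposing small Gaussian noise on each sample."""
    rng = random.Random(seed)
    augmented = list(samples)  # keep the originals
    for _ in range(copies):
        augmented.extend(x + rng.gauss(0.0, sigma) for x in samples)
    return augmented

# a toy capacity-fade curve (fractions of nominal capacity)
capacity = [1.00, 0.99, 0.97, 0.96, 0.94]
augmented = augment_with_noise(capacity)
print(len(augmented))  # 5 originals + 3 noisy copies of each → 20
```

In the SOH setting the noise doubles as a crude simulation of sensor error and capacity fluctuations, which is why it also improves robustness rather than just dataset size.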
9

Shi, Lin, and Lei Zheng. "An IGWOCNN Deep Method for Medical Education Quality Estimating." Mathematical Problems in Engineering 2022 (August 9, 2022): 1–5. http://dx.doi.org/10.1155/2022/9037726.

Full text
Abstract:
The deep learning and mining ability of big data are used to analyze the shortcomings in the teaching scheme, and the teaching scheme is optimized to improve the teaching ability. A convolutional neural network optimized by improved grey wolf optimization is used to train the data, so as to improve the efficiency of searching for the optimal value and to prevent the algorithm from tending to a local optimum. To address the shortcoming of grey wolf optimization, an improved variant, grey wolf optimization with a variable convergence factor, is used to optimize the convolutional neural network; the variable convergence factor balances the global and local search abilities of the algorithm. The testing results show that the quality estimating accuracy of the convolutional neural network optimized by improved grey wolf optimization is 100%, that of the convolutional neural network optimized by standard grey wolf optimization is 93.33%, and that of the classical convolutional neural network is 86.67%. We can conclude that the medical education quality estimating ability of the convolutional neural network optimized by improved grey wolf optimization is the best of the three.
APA, Harvard, Vancouver, ISO, and other styles
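The standard grey wolf optimizer uses a convergence factor `a` that decays from 2 to 0 to shift the search from exploration to exploitation; the paper's contribution is a variable schedule for this factor. Below is a minimal textbook GWO on a one-dimensional test function with the usual linear schedule, intended only to show where the convergence factor enters, not to reproduce the authors' variant:

```python
import random

def gwo_minimize(f, lb, ub, n_wolves=10, iters=100, seed=0):
    """Minimal grey wolf optimizer for a 1-D function."""
    rng = random.Random(seed)
    wolves = [rng.uniform(lb, ub) for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1.0 - t / iters)  # convergence factor: 2 -> 0, linear schedule
        new_positions = []
        for x in wolves:
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(), rng.random()
                A = 2.0 * a * r1 - a  # |A| shrinks as a decays -> local search
                C = 2.0 * r2
                D = abs(C * leader - x)
                candidates.append(leader - A * D)
            x_new = sum(candidates) / 3.0
            new_positions.append(min(max(x_new, lb), ub))
        wolves = new_positions
    return min(wolves, key=f)

best = gwo_minimize(lambda x: (x - 1.5) ** 2, -10.0, 10.0)
```

Replacing the linear `a` schedule with a nonlinear one, as the paper proposes, changes how long the swarm explores before it contracts around the leaders.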
10

Yang, Yunfeng, and Fengxian Tang. "Network Intrusion Detection Based on Stochastic Neural Networks Method." International Journal of Security and Its Applications 10, no. 8 (August 31, 2016): 435–46. http://dx.doi.org/10.14257/ijsia.2016.10.8.38.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Neural Networks method"

1

Dunn, Nathan A. "A Novel Neural Network Analysis Method Applied to Biological Neural Networks." Thesis, view abstract or download file of text, 2006. http://proquest.umi.com/pqdweb?did=1251892251&sid=2&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2006.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 122- 131). Also available for download via the World Wide Web; free to University of Oregon users.
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Youping. "Neural network approximation for linear fitting method." Ohio : Ohio University, 1992. http://www.ohiolink.edu/etd/view.cgi?ohiou1172243968.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

CUNHA, JOAO MARCO BRAGA DA. "ESTIMATING ARTIFICIAL NEURAL NETWORKS WITH GENERALIZED METHOD OF MOMENTS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=26922@1.

Full text
Abstract:
Artificial Neural Networks (ANNs) started being developed in the 1940s. However, it was during the 1980s that ANNs became relevant, pushed by the popularization and increasing power of computers. Also in the 1980s, there were two other academic events closely related to the present work: (i) a large increase of interest in nonlinear models from econometricians, culminating in the econometric approaches for ANNs by the end of that decade; and (ii) the introduction of the Generalized Method of Moments (GMM) for parameter estimation in 1982. In econometric approaches for ANNs, estimation by Quasi Maximum Likelihood (QML) has always prevailed. Despite its good asymptotic properties, QML is very prone to an issue in finite-sample estimation known as overfitting. This thesis expands the state of the art in econometric approaches for ANNs by presenting an alternative to QML estimation that keeps its good asymptotic properties and is less prone to overfitting. The presented approach relies on GMM estimation. As a byproduct, GMM estimation allows the use of the so-called J Test to verify the existence of neglected nonlinearity. The Monte Carlo studies performed indicate that GMM estimates are more accurate than those generated by QML in situations with high noise, especially in small samples. This result supports the hypothesis that GMM is less susceptible to overfitting. Exchange rate forecasting experiments reinforced these findings. A second Monte Carlo study revealed satisfactory finite-sample properties of the J Test applied to neglected nonlinearity, compared with a widely known and used reference test. Overall, the results indicated that GMM estimation is a recommendable alternative, especially for data with a high noise level.
APA, Harvard, Vancouver, ISO, and other styles
4

Bishop, Russell C. "A Method for Generating Robot Control Systems." Connect to resource online, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1222394834.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

KAIMAL, VINOD GOPALKRISHNA. "A NEURAL METHOD OF COMPUTING OPTICAL FLOW BASED ON GEOMETRIC CONSTRAINTS." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1037632137.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sung, Woong Je. "A neural network construction method for surrogate modeling of physics-based analysis." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43721.

Full text
Abstract:
A connectivity-adjusting learning algorithm, Optimal Brain Growth (OBG), was proposed. In contrast to conventional training methods for the Artificial Neural Network (ANN), which focus on weight-only optimization, the OBG method trains both the weights and the connectivity of a network in a single training process. The standard Back-Propagation (BP) algorithm was extended to exploit the error gradient information of a latent connection whose current weight has zero value. Based on this, the OBG algorithm makes a rational decision between further adjusting an existing connection weight and creating a new connection with zero weight. The training efficiency of a growing network is maintained by freezing stabilized connections in the further optimization process. A stabilized computational unit is also decomposed into two units, and a particular set of decomposition rules guarantees a seamless local re-initialization of the training trajectory. The OBG method was tested on multiple canonical regression and classification problems and on surrogate modeling of the pressure distribution on transonic airfoils. The OBG method showed improved learning capability in a computationally efficient manner compared to conventional weight-only training using connectivity-fixed Multilayer Perceptrons (MLPs).
APA, Harvard, Vancouver, ISO, and other styles
7

Chavali, Krishna Kumar. "Integration of statistical and neural network method for data analysis." Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4749.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains viii, 68 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 50-51).
APA, Harvard, Vancouver, ISO, and other styles
8

Radhakrishnan, Kapilan. "A non-intrusive method to evaluate perceived voice quality of VoIP networks using random neural networks." Thesis, Glasgow Caledonian University, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.547414.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mohamed, Ibrahim. "A method for the analysis of the MDTF data using neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0032/MQ62402.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Rowlands, H. "Optimum design using the Taguchi method with neural networks and genetic algorithms." Thesis, Cardiff University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241701.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Neural Networks method"

1

Zhang, Yunong. Zhang neural networks and neural-dynamic method. Hauppauge, N.Y: Nova Science Publishers, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Harrison, R. F. A general method for the discovery and use of rules induced by feedforward artificial neural networks. Sheffield: University of Sheffield, Dept. of Automatic Control & Systems Engineering, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Amezcua, Jonathan, Patricia Melin, and Oscar Castillo. New Classification Method Based on Modular Neural Networks with the LVQ Algorithm and Type-2 Fuzzy Logic. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73773-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Cronley, Thomas J. The use of neural networks as a method of correlating thermal fluid data to provide useful information on thermal systems. Monterey, Calif: Naval Postgraduate School, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Neural networks and simulation methods. New York: M. Dekker, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Suzuki, T. Edge detection methods using neural networks. Manchester: UMIST, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kumar, Bose Deb, ed. Neural networks: Deterministic methods of analysis. London: International Thomson Computer Press, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Shepherd, Adrian J. Second-Order Methods for Neural Networks. London: Springer London, 1997. http://dx.doi.org/10.1007/978-1-4471-0953-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pabisek, Ewa. Systemy hybrydowe integrujące MES i SSN w analizie wybranych problemów mechaniki konstrukcji i materiałów. Kraków: Wydawn. Politechniki Krakowskiej, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Neural Networks method"

1

Annema, Anne-Johan. "The Vector Decomposition Method." In Feed-Forward Neural Networks, 27–37. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4615-2337-6_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. "Method for Classifying Tomatoes Using Computer Vision and MLP Networks." In Artificial Neural Networks, 253–58. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43162-8_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Magoulas, G. D., M. N. Vrahatis, T. N. Grapsa, and G. S. Androulakis. "A Training Method for Discrete Multilayer Neural Networks." In Mathematics of Neural Networks, 250–54. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4615-6099-9_42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wei, Hui, Bo Lang, and Qing-song Zuo. "A Scale-Changeable Image Analysis Method." In Engineering Applications of Neural Networks, 63–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23957-1_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bader, Sebastian, and Steffen Hölldobler. "The Core Method: Connectionist Model Generation." In Artificial Neural Networks – ICANN 2006, 1–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11840930_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mitsuishi, Takashi, and Yasunari Shidama. "Height Defuzzification Method on L ∞ Space." In Artificial Neural Networks – ICANN 2009, 598–607. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04274-4_62.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Huang, Haiping. "Spin Glass Models and Cavity Method." In Statistical Mechanics of Neural Networks, 5–15. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-7570-6_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lee, John A., and Michel Verleysen. "Nonlinear Projection with the Isotop Method." In Artificial Neural Networks — ICANN 2002, 933–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_151.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Magoulas, G. D., M. N. Vrahatis, T. N. Grapsa, and G. S. Androulakis. "Neural Network Supervised Training Based on a Dimension Reducing Method." In Mathematics of Neural Networks, 245–49. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4615-6099-9_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tong, Zhiqiang, Kazuyuki Aihara, and Gouhei Tanaka. "A Hybrid Pooling Method for Convolutional Neural Networks." In Neural Information Processing, 454–61. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46672-9_51.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Neural Networks method"

1

Mikaelian, Andrei L., Boris S. Kiselyov, and Nickolay Y. Kulakov. "Modification of simulated annealing method for solving combinatorial optimization problems." In Photonic Neural Networks. SPIE, 1993. http://dx.doi.org/10.1117/12.983196.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sevo, Igor. "Semi-supervised neural network training method for fast-moving object detection." In 2018 14th Symposium on Neural Networks and Applications (NEUREL). IEEE, 2018. http://dx.doi.org/10.1109/neurel.2018.8586986.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Benmaghnia, Hanane, Matthieu Martel, and Yassamine Seladji. "Fixed-Point Code Synthesis for Neural Networks." In 6th International Conference on Artificial Intelligence, Soft Computing and Applications (AISCA 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120202.

Full text
Abstract:
Over the last few years, neural networks have started penetrating safety-critical systems to take decisions in robots, rockets, autonomous driving cars, etc. A problem is that these critical systems often have limited computing resources. Often, they use fixed-point arithmetic for its many advantages (rapidity, compatibility with small memory devices, etc.). In this article, a new technique is introduced to tune the formats (precision) of already trained neural networks using fixed-point arithmetic, which can be implemented using integer operations only. The new optimized neural network computes the output with fixed-point numbers without degrading the accuracy beyond a threshold fixed by the user. A fixed-point code is synthesized for the new optimized neural network, ensuring that the threshold is respected for any input vector belonging to the range [xmin, xmax] determined during the analysis. From a technical point of view, we do a preliminary analysis of our floating-point neural network to determine the worst cases, then we generate a system of linear constraints among integer variables that we can solve by linear programming. The solution of this system is the new fixed-point format of each neuron. The experimental results obtained show the efficiency of our method, which can ensure that the new fixed-point neural network has the same behavior as the initial floating-point neural network.
APA, Harvard, Vancouver, ISO, and other styles
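The integer-only evaluation the paper targets can be sketched with a simple fixed-point scheme: floats are scaled by 2^frac_bits, and products are rescaled with a right shift. This is a generic illustration of fixed-point arithmetic, not the paper's format-tuning synthesis:

```python
def to_fixed(x, frac_bits):
    """Quantize a float to a fixed-point integer with frac_bits fractional bits."""
    return round(x * (1 << frac_bits))

def fixed_mul(a, b, frac_bits):
    """Multiply two fixed-point values and rescale back to frac_bits."""
    return (a * b) >> frac_bits

def from_fixed(q, frac_bits):
    return q / (1 << frac_bits)

# a neuron's dot product computed with integers only
w = [0.5, -0.25, 0.125]
x = [1.0, 2.0, 4.0]
fb = 8  # 8 fractional bits
acc = 0
for wi, xi in zip(w, x):
    acc += fixed_mul(to_fixed(wi, fb), to_fixed(xi, fb), fb)
print(from_fixed(acc, fb))  # → 0.5, matching the exact float dot product
```

Here the values are exact powers of two, so no rounding error appears; the paper's contribution is choosing a per-neuron number of fractional bits so that, for arbitrary weights and inputs, the accumulated rounding error stays below a user-fixed threshold.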
4

Ferreira, João, Manuel de Sousa Ribeiro, Ricardo Gonçalves, and João Leite. "Looking Inside the Black-Box: Logic-based Explanations for Neural Networks." In 19th International Conference on Principles of Knowledge Representation and Reasoning {KR-2022}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/kr.2022/45.

Full text
Abstract:
Deep neural network-based methods have recently enjoyed great popularity due to their effectiveness in solving difficult tasks. Requiring minimal human effort, they have turned into an almost ubiquitous solution in multiple domains. However, due to the size and complexity of typical neural network models' architectures, as well as the sub-symbolic nature of the representations generated by their neuronal activations, neural networks are essentially opaque, making it nearly impossible to explain to humans the reasoning behind their decisions. We address this issue by developing a procedure to induce human-understandable logic-based theories that attempt to represent the classification process of a given neural network model, based on the idea of establishing mappings from the values of the activations produced by the neurons of that model to human-defined concepts to be used in the induced logic-based theory. Exploring the setting of a synthetic image classification task, we provide empirical results to assess the quality of the developed theories for different neural network models, compare them to existing theories on that task, and give evidence that the theories developed through our method are faithful to the representations learned by the neural networks that they are built to describe.
APA, Harvard, Vancouver, ISO, and other styles
5

Yuskov, I. O., and E. P. Stroganova. "Corporative Combined Networks Investigation with Neural Networks Method." In 2022 Systems of Signal Synchronization, Generating and Processing in Telecommunications (SYNCHROINFO). IEEE, 2022. http://dx.doi.org/10.1109/synchroinfo55067.2022.9840871.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Panasyuk, Lev M., and A. A. Forsh. "Optical information recording on vitreous semiconductors with a thermoplastic method of visualization." In Optical Memory and Neural Networks, edited by Andrei L. Mikaelian. SPIE, 1991. http://dx.doi.org/10.1117/12.50416.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kassem, Ayman H., and Ihab G. Adam. "Optimizing Neural Networks for Leak Monitoring in Pipelines." In ASME/JSME 2004 Pressure Vessels and Piping Conference. ASMEDC, 2004. http://dx.doi.org/10.1115/pvp2004-3005.

Full text
Abstract:
Feedforward neural networks can be used for nonlinear dynamic modeling. Although the basic principle of employing such networks is straightforward, the problem of selecting the training data set and the network topology is not a trivial task. This paper examines the use of genetic algorithm optimization techniques to optimize the neural network. The paper presents the results of studies on the effect of the number of neurons and the input combination method on the performance of neural networks, and the application of this study to improve leak monitoring in pipelines. The neural networks examined in this study do not use the sensor readings directly, as conventional neural networks do, but combine them using polynomial-type laws to produce hybrid inputs. The optimization technique tries to find the best polynomial laws (input combinations) to reduce network size, mitigate the effect of head variation, and optimize network performance. The resulting networks show superior performance and use fewer neurons.
APA, Harvard, Vancouver, ISO, and other styles
8

Nagasawa, Yurina, Tomotaka Kimura, Takanori Kudo, and Kouji Hirata. "Estimation method of network availability with convolutional neural networks." In 2019 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-TW). IEEE, 2019. http://dx.doi.org/10.1109/icce-tw46550.2019.8991804.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kelleher, Matthew D., Thomas J. Cronley, K. T. Yang, and Mihir Sen. "Using Artificial Neural Networks to Develop a Predictive Method From Complex Experimental Heat Transfer Data." In ASME 2001 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2001. http://dx.doi.org/10.1115/imece2001/htd-24285.

Abstract:
Artificial neural networks are employed to develop a predictive algorithm using experimental heat transfer data for a complex situation. The data of Marto and Anderson are used to illustrate the process. These data come from a series of experiments investigating boiling heat transfer from a vertical bank of tubes in refrigerant 114 with variable amounts of oil present. Both finned and unfinned tubes were investigated. The network was trained with a partial set of the available data. The predictions obtained using the trained network were then compared to the remaining experimental data. The artificial neural network provided an excellent predictive method.
10

Rangarajan, Simchony, and Chellappa. "Deterministic networks for image estimation using a penalty function method." In International Joint Conference on Neural Networks. IEEE, 1989. http://dx.doi.org/10.1109/ijcnn.1989.118495.


Reports on the topic "Neural Networks method"

1

Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.

Abstract:
We present Any-Precision Deep Neural Networks (Any-Precision DNNs), which are trained with a new method that empowers learned DNNs to be flexible in any numerical precision during inference. The same model in runtime can be flexibly and directly set to different bit-width, by truncating the least significant bits, to support dynamic speed and accuracy trade-off. When all layers are set to low-bits, we show that the model achieved accuracy comparable to dedicated models trained at the same precision. This nice property facilitates flexible deployment of deep learning models in real-world applications, where in practice trade-offs between model accuracy and runtime efficiency are often sought. Previous literature presents solutions to train models at each individual fixed efficiency/accuracy trade-off point. But how to produce a model flexible in runtime precision is largely unexplored. When the demand of efficiency/accuracy trade-off varies from time to time or even dynamically changes in runtime, it is infeasible to re-train models accordingly, and the storage budget may forbid keeping multiple models. Our proposed framework achieves this flexibility without performance degradation. More importantly, we demonstrate that this achievement is agnostic to model architectures. We experimentally validated our method with different deep network backbones (AlexNet-small, Resnet-20, Resnet-50) on different datasets (SVHN, Cifar-10, ImageNet) and observed consistent results.
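The core trick — setting a trained model to a lower bit-width by dropping least significant bits — can be illustrated on a single weight vector. The fixed-point scheme below is an assumption for illustration, not the authors' exact quantizer:

```python
import numpy as np

def quantize_truncate(weights, full_bits=8, target_bits=4):
    # Map weights onto signed fixed-point integers at full precision.
    scale = (2 ** (full_bits - 1) - 1) / np.max(np.abs(weights))
    q = np.round(weights * scale).astype(np.int32)
    # Emulate a lower runtime precision by truncating least significant bits.
    shift = full_bits - target_bits
    q_low = (q >> shift) << shift
    return q_low / scale

w = np.array([0.9, -0.51, 0.03, -0.27])
w_low = quantize_truncate(w, full_bits=8, target_bits=4)  # coarser copy of w
```

The same stored integers serve every target bit-width, which is the runtime flexibility the report is after.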
2

Grossberg, Stephen. Content-Addressable Memory Storage by Neural Networks: A General Model and Global Liapunov Method. Fort Belvoir, VA: Defense Technical Information Center, March 1988. http://dx.doi.org/10.21236/ada192716.

3

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detection, refractometer and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation for ripeness stage while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criterion was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or neural network was found to be superior in classification accuracy with half the required processing of solely the numerical classifier or neural network. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects and bruises were measured using a color image processing system.
Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back propagation neural network and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification and general color and color homogeneity. An unsupervised method was developed to extract necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a-priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruits' orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data and preserves information even with memory constraints. Large quantities of data (many images) of high dimensionality (due to multiple sensors) and new information arriving incrementally (a function of the temporal dynamics of any natural process) can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real-time. The methodology developed was tested to determine external quality of tomatoes based on visual information.
An improved model for color sorting which is stable and does not require recalibration for each season was developed for color determination. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact and a vision sensor in order to predict the storability and marketing of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements and in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities, for comparative evaluation and optimization of expert system, statistical and/or neural network models. The models developed in this research were successfully tested, and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner, and in a manner consistent with the human graders and inspectors.
4

Yaroshchuk, Svitlana O., Nonna N. Shapovalova, Andrii M. Striuk, Olena H. Rybalchenko, Iryna O. Dotsenko, and Svitlana V. Bilashenko. Credit scoring model for microfinance organizations. [s. n.], February 2020. http://dx.doi.org/10.31812/123456789/3683.

Abstract:
The purpose of the work is the development and application of models for scoring assessment of microfinance institution borrowers. This model makes it possible to increase the efficiency of work in the field of credit. The object of research is lending. The subject of the study is a scoring model for improving the quality of lending using machine learning methods. The objectives of the study: to determine the criteria for choosing a solvent borrower, to develop a model for an early assessment, and to create software based on neural networks to determine the probability of loan default risk. The research methods used include analysis of the literature on banking scoring; artificial intelligence methods for scoring; modeling of the scoring estimation algorithm using neural networks; an empirical method for determining the optimal parameters of the training model; and object-oriented design and programming. The result of the work is a neural network scoring model with high calculation accuracy and an implemented system for automatic customer lending.
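As an illustration of the scoring idea, a single sigmoid neuron can already separate solvent from insolvent borrowers on synthetic data. The two features and the solvency rule below are hypothetical, and the authors' model is a larger network:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic borrower features and a simple "solvent" label.
n = 400
income = rng.normal(0, 1, n)
debt = rng.normal(0, 1, n)
X = np.column_stack([income, debt, np.ones(n)])  # bias column
y = (income - debt > 0).astype(float)            # 1 = solvent borrower

# One sigmoid neuron trained by gradient descent on the logistic loss.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))     # predicted solvency probability
    w -= 0.1 * X.T @ (p - y) / n     # logistic-loss gradient step

accuracy = np.mean(((1 / (1 + np.exp(-X @ w))) > 0.5) == (y == 1))
```

A production scoring model would add more features, hidden layers and a calibrated default-probability threshold, but the training loop has the same shape.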
5

Kirichek, Galina, Vladyslav Harkusha, Artur Timenko, and Nataliia Kulykovska. System for detecting network anomalies using a hybrid of an uncontrolled and controlled neural network. [s. n.], February 2020. http://dx.doi.org/10.31812/123456789/3743.

Abstract:
This article presents a method of detecting attacks and anomalies using training on normal and attack packets, respectively. The method used to learn attacks is a combination of an unsupervised and a supervised neural network. In the unsupervised network, attacks are classified into smaller categories, taking into account their features and using a self-organizing map. To manage the clusters, a neural network based on the back-propagation method is used. We use PyBrain as the main framework for designing, developing and training perceptrons. This framework has a sufficient number of solutions and algorithms for training, designing and testing various types of neural networks. The software architecture is presented using a procedural-object approach. Because there is no need to save intermediate results of the program (after learning, the entire perceptron is stored in a file), all the progress of learning is stored in ordinary files on the hard disk.
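The unsupervised stage — clustering packets with a self-organizing map — can be sketched as follows. The two synthetic "packet feature" clusters and the tiny 1-D map are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=2, epochs=200, lr=0.5):
    # Minimal 1-D self-organizing map: units compete for each sample and
    # the winner (plus its neighbours, weighted by a shrinking Gaussian
    # neighbourhood) moves toward it.
    units = np.linspace(data.min(axis=0), data.max(axis=0), n_units)
    for epoch in range(epochs):
        sigma = max(1.0 * (1 - epoch / epochs), 0.01)  # neighbourhood width
        rate = lr * (1 - epoch / epochs)               # annealed step size
        for x in data:
            winner = np.argmin(np.linalg.norm(units - x, axis=1))
            for j in range(n_units):
                h = np.exp(-((j - winner) ** 2) / (2 * sigma ** 2))
                units[j] += rate * h * (x - units[j])
    return units

# Two well-separated clusters of 2-D "packet features".
data = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
units = train_som(data, n_units=2)
```

After training, each map unit sits near one cluster centre; in the article these clusters would then feed the supervised back-propagation stage.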
6

Semerikov, Serhiy, Illia Teplytskyi, Yuliia Yechkalo, Oksana Markova, Vladimir Soloviev, and Arnold Kiv. Computer Simulation of Neural Networks Using Spreadsheets: Dr. Anderson, Welcome Back. [s. n.], June 2019. http://dx.doi.org/10.31812/123456789/3178.

Abstract:
The authors of the given article continue the series presented by the 2018 paper “Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot”. This time, they consider mathematical informatics as the basis of higher engineering education fundamentalization. Mathematical informatics deals with smart simulation, information security, long-term data storage and big data management, artificial intelligence systems, etc. The authors suggest studying basic principles of mathematical informatics by applying cloud-oriented means of various levels including those traditionally considered supplementary – spreadsheets. The article considers ways of building neural network models in cloud-oriented spreadsheets, Google Sheets. The model is based on the problem of classifying multi-dimensional data provided in “The Use of Multiple Measurements in Taxonomic Problems” by R. A. Fisher. Edgar Anderson’s role in collecting and preparing the data in the 1920s-1930s is discussed as well as some peculiarities of data selection. The article also presents the method of displaying multi-dimensional data in the form of an ideograph, developed by Anderson and considered one of the first efficient ways of data visualization.
7

Warrick, Arthur W., Gideon Oron, Mary M. Poulton, Rony Wallach, and Alex Furman. Multi-Dimensional Infiltration and Distribution of Water of Different Qualities and Solutes Related Through Artificial Neural Networks. United States Department of Agriculture, January 2009. http://dx.doi.org/10.32747/2009.7695865.bard.

Abstract:
The project exploits the use of Artificial Neural Networks (ANN) to describe infiltration, water, and solute distribution in the soil during irrigation. It provides a method of simulating water and solute movement in the subsurface which, in principle, is different and has some advantages over the more common approach of numerical modeling of flow and transport equations. The five objectives were (i) Numerically develop a database for the prediction of water and solute distribution for irrigation; (ii) Develop predictive models using ANN; (iii) Develop an experimental (laboratory) database of water distribution over time within a transparent flow cell, using a high-resolution CCD video camera; (iv) Conduct field studies to provide basic data for developing and testing the ANN; and (v) Investigate the inclusion of water quality [salinity and organic matter (OM)] in an ANN model used for predicting infiltration and subsurface water distribution. A major accomplishment was the successful use of Moment Analysis (MA) to characterize “plumes of water” applied by various types of irrigation (including drip and gravity sources). The general idea is to describe the subsurface water patterns statistically in terms of only a few (often 3) parameters which can then be predicted by the ANN. It was shown that ellipses (in two dimensions) or ellipsoids (in three dimensions) can be depicted about the center of the plume. Any fraction of water added can be related to a “probability” curve relating the size of the ellipse (or ellipsoid) that contains that amount of water. The initial test of an ANN to predict the moments (and hence the water plume) was with numerically generated data for infiltration from surface and subsurface drip line and point sources in three contrasting soils.
The underlying dataset consisted of 1,684,500 vectors (5 soils×5 discharge rates×3 initial conditions×1,123 nodes×20 print times) where each vector had eleven elements consisting of initial water content, hydraulic properties of the soil, flow rate, time and space coordinates. The output is an estimate of subsurface water distribution for essentially any soil property, initial condition or flow rate from a drip source. Following the formal development of the ANN, we have prepared a “user-friendly” version in a spreadsheet environment (in “Excel”). The input data are selected from appropriate values and the output is instantaneous, resulting in a picture of the resulting water plume. The MA has also proven valuable, on its own merit, in the description of the flow in soil under laboratory conditions for both wettable and repellent soils. This includes non-Darcian flow examples and redistribution as well as infiltration. Field experiments were conducted in different agricultural fields with various water qualities in Israel. The obtained results will be the basis for further development of the ANN models. Regions of high repellence were identified primarily under the canopy of various orchard crops, including citrus and persimmons. Also, increasing OM in the applied water led to greater repellency. Major scientific implications are that the ANN offers an alternative to conventional flow and transport modeling and that MA is a powerful technique for describing the subsurface water distributions for normal (wettable) and repellent soil. Implications of the field measurements point to the special role of OM in affecting wettability, both from the irrigation water and from soil accumulation below canopies. Implications for agriculture are that a modified approach for drip system design should be adopted for open area crops and orchards, taking into account the OM components both in the soil and in the applied waters.
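The moment-analysis step — reducing a plume to a centre of mass and a covariance ellipse — can be sketched on synthetic data. The Gaussian "plume" below is an assumption for illustration, not the project's measured data:

```python
import numpy as np

def plume_moments(x, z, w):
    # First moment: water-content-weighted centre of the plume.
    w = w / w.sum()
    cx, cz = np.sum(w * x), np.sum(w * z)
    # Second moments: weighted covariance, whose eigenvalues give the
    # squared semi-axes of the 1-sigma ellipse about the centre.
    cov = np.cov(np.vstack([x - cx, z - cz]), aweights=w, ddof=0)
    evals, _ = np.linalg.eigh(cov)
    return (cx, cz), np.sqrt(evals)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 0.2, 5000)  # lateral spread (m), hypothetical
z = rng.normal(0.5, 0.4, 5000)  # depth below the emitter (m), hypothetical
w = np.ones_like(x)             # equal water content per sample point
centre, axes = plume_moments(x, z, w)
```

These few numbers (centre plus ellipse axes) are exactly the kind of low-dimensional summary the ANN is then trained to predict.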
8

Markova, Oksana, Serhiy Semerikov, and Maiia Popel. СoCalc as a Learning Tool for Neural Network Simulation in the Special Course “Foundations of Mathematic Informatics”. Sun SITE Central Europe, May 2018. http://dx.doi.org/10.31812/0564/2250.

Abstract:
The role of neural network modeling in the learning content of the special course “Foundations of Mathematic Informatics” was discussed. The course was developed for students of technical universities – future IT specialists – and directed at bridging the gap between theoretical computer science and its applied fields: software, system and computing engineering. CoCalc was justified as a learning tool of mathematical informatics in general and of neural network modeling in particular. The elements of a technique for using CoCalc in studying the topic “Neural network and pattern recognition” of the special course “Foundations of Mathematic Informatics” are shown. The program code was presented in the CoffeeScript language, which implements the basic components of an artificial neural network: neurons, synaptic connections, activation functions (tangential, sigmoid, stepped) and their derivatives, methods of calculating the network's weights, etc. The features of applying the Kolmogorov–Arnold representation theorem to determine the architecture of multilayer neural networks were discussed. The implementation of a disjunctive logical element and the approximation of an arbitrary function using a three-layer neural network were given as examples. According to the simulation results, a conclusion was made as to the limits within which the constructed networks retain their adequacy. Framework topics for individual research on artificial neural networks are proposed.
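The activation functions named in the abstract and their derivatives take only a few lines; here they are sketched in Python rather than the CoffeeScript of the course materials:

```python
import numpy as np

# Tangential (tanh), sigmoid and stepped activations with derivatives,
# the building blocks listed in the abstract.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)          # derivative expressed via sigmoid itself

def tanh_prime(x):
    return 1.0 - np.tanh(x) ** 2  # derivative of the tangential activation

def step(x):
    return np.where(x >= 0.0, 1.0, 0.0)  # derivative is 0 almost everywhere
```

The step function has no useful gradient, which is why the course's trainable networks rely on the sigmoid and tangential variants.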
9

Semerikov, Serhiy O., Illia O. Teplytskyi, Yuliia V. Yechkalo, and Arnold E. Kiv. Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot. [s. n.], November 2018. http://dx.doi.org/10.31812/123456789/2648.

Abstract:
The article substantiates the necessity to develop training methods of computer simulation of neural networks in the spreadsheet environment. A systematic review of their application to simulating artificial neural networks is performed. The authors distinguish basic approaches to solving the problem of network computer simulation training in the spreadsheet environment: joint application of spreadsheets and neural network simulation tools; application of third-party add-ins to spreadsheets; development of macros using the embedded languages of spreadsheets; use of standard spreadsheet add-ins for non-linear optimization; and creation of neural networks in the spreadsheet environment without add-ins and macros. After analyzing a collection of writings from 1890-1950, the research determines the role of the scientific journal “Bulletin of Mathematical Biophysics”, its founder Nicolas Rashevsky and the scientific community around the journal in creating and developing models and methods of computational neuroscience. The psychophysical basics of creating neural networks, the mathematical foundations of neural computing and methods of neuroengineering (image recognition, in particular) are identified. The role of Walter Pitts in combining the descriptive and quantitative theories of training is discussed. It is shown that to acquire neural simulation competences in the spreadsheet environment, one should master the models based on the historical and genetic approach. Three groups of models are promising in terms of developing corresponding methods – the continuous two-factor model of Rashevsky, the discrete model of McCulloch and Pitts, and the discrete-continuous models of Householder and Landahl.
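The discrete model of McCulloch and Pitts mentioned in the abstract reduces to a threshold unit over binary inputs. The gate weights and thresholds below are the textbook choices, shown as an illustrative sketch:

```python
# A McCulloch-Pitts threshold unit: the neuron fires (outputs 1) when the
# weighted sum of its binary inputs reaches the threshold.

def mp_neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Classic logic gates as single McCulloch-Pitts units.
AND = lambda a, b: mp_neuron((a, b), (1, 1), 2)
OR  = lambda a, b: mp_neuron((a, b), (1, 1), 1)
NOT = lambda a:    mp_neuron((a,),   (-1,),  0)
```

A single unit cannot realize XOR, which is the historical motivation for the multilayer networks discussed later in the series.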
10

Semerikov, Serhiy, Hanna Kucherova, Vita Los, and Dmytro Ocheretin. Neural Network Analytics and Forecasting the Country's Business Climate in Conditions of the Coronavirus Disease (COVID-19). CEUR Workshop Proceedings, April 2021. http://dx.doi.org/10.31812//123456789/4364.

Abstract:
The prospects for doing business in a country are also determined by the business confidence index. The purpose of the article is to model trends in the indicators that determine the state of the business climate of countries; in particular, the period of influence of the consequences of COVID-19 is of scientific interest. The approach is based on the preliminary results of substantiating a set of indicators and applying the taxonomy method to substantiate an alternative indicator of the business climate, the advantage of which is its leading nature. The most significant factors influencing the business climate index were identified, in particular the annual GDP growth rate and the volume of retail sales. The calculated and actual business climate indices showed similar trends, and the forecast values were calculated with an accuracy of 89.38%. In addition, the modeling results were extended by building and using neural networks with learning capabilities, which makes it possible to improve the quality and accuracy of the business climate index forecast to 96.22%. It has been established that the consequences of COVID-19 point to a decrease in the country's business climate index in the 3rd quarter of 2020. The proposed approach to modeling the country's business climate is unified, easily applied to the macroeconomic data of various countries, and demonstrates a high level of forecasting accuracy and quality. The prospects for further research lie in modeling the business climate of the countries of the world in order to compare trends and levels, as well as their changes under the influence of quarantine restrictions.