Academic literature on the topic 'Models of neural elements'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Models of neural elements.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Models of neural elements"

1

Bainbridge, William Sims. "Neural Network Models of Religious Belief." Sociological Perspectives 38, no. 4 (December 1995): 483–95. http://dx.doi.org/10.2307/1389269.

Full text
Abstract:
This paper applies neural network technology, a standard approach in computer science that has been unaccountably ignored by sociologists, to the problem of developing rigorous sociological theories. A simulation program, employing a “varimax” model of human learning and decision-making, models central elements of the Stark-Bainbridge theory of religion. Individuals in a micro-society of 24 simulated people learn which categories of potential exchange partners to seek for each of four material rewards which in fact can be provided by other actors in the society. However, when they seek eternal life, they are unable to find suitable human exchange partners who can provide it to them, so they postulate the existence of supernatural exchange partners as substitutes. The explanation of how the particular neural net works, including reference to modulo arithmetic, introduces some aspects of this new technology to sociology, and this paper invites readers to explore the wide range of other neural net techniques that may be of value for social scientists.
APA, Harvard, Vancouver, ISO, and other styles
2

Yaish, Ofir, and Yaron Orenstein. "Computational modeling of mRNA degradation dynamics using deep neural networks." Bioinformatics 38, no. 4 (November 26, 2021): 1087–101. http://dx.doi.org/10.1093/bioinformatics/btab800.

Abstract:
Motivation: Messenger RNA (mRNA) degradation plays critical roles in post-transcriptional gene regulation. A major component of mRNA degradation is determined by 3′-UTR elements. Hence, researchers are interested in studying mRNA dynamics as a function of 3′-UTR elements. A recent study measured the mRNA degradation dynamics of tens of thousands of 3′-UTR sequences using a massively parallel reporter assay. However, the computational approach used to model mRNA degradation was based on a simplifying assumption of a linear degradation rate. Consequently, the underlying mechanism of 3′-UTR elements is still not fully understood.
Results: Here, we developed deep neural networks to predict mRNA degradation dynamics and interpreted the networks to identify regulatory elements in the 3′-UTR and their positional effect. Given an input of a 110 nt-long 3′-UTR sequence and an initial mRNA level, the model predicts mRNA levels of eight consecutive time points. Our deep neural networks significantly improved prediction performance of mRNA degradation dynamics compared with extant methods for the task. Moreover, we demonstrated that models predicting the dynamics of two identical 3′-UTR sequences, differing by their poly(A) tail, performed better than single-task models. On the interpretability front, by using Integrated Gradients, our convolutional neural network (CNN) models identified known and novel cis-regulatory sequence elements of mRNA degradation. By applying a novel systematic evaluation of model interpretability, we demonstrated that the recurrent neural network models are inferior to the CNN models in terms of interpretability and that a random-initialization ensemble improves both prediction and interpretability performance. Moreover, using a mutagenesis analysis, we newly discovered the positional effect of various 3′-UTR elements.
Availability and implementation: All the code developed through this study is available at github.com/OrensteinLab/DeepUTR/.
Supplementary information: Supplementary data are available at Bioinformatics online.
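The input/output mapping described in this abstract (a 110-nt 3′-UTR sequence plus an initial mRNA level in, eight time-point levels out) can be sketched with a toy untrained convolutional model. This is a hypothetical NumPy illustration, not the authors' DeepUTR code; the filter count, filter width, and all weights are invented:

```python
import numpy as np

def one_hot(seq):
    # Encode A/C/G/U as rows of a 4 x len(seq) matrix.
    alphabet = "ACGU"
    mat = np.zeros((4, len(seq)))
    for i, base in enumerate(seq):
        mat[alphabet.index(base), i] = 1.0
    return mat

rng = np.random.default_rng(0)
filters = rng.normal(0, 0.1, size=(16, 4, 8))   # 16 invented motif filters of width 8
w_out = rng.normal(0, 0.1, size=(8, 17))        # 8 time points <- 16 pooled features + initial level

def predict(seq, initial_level):
    x = one_hot(seq)
    # Valid 1D convolution of each filter over the sequence.
    n_pos = x.shape[1] - filters.shape[2] + 1
    conv = np.array([[np.sum(f * x[:, p:p + 8]) for p in range(n_pos)]
                     for f in filters])
    pooled = np.maximum(conv, 0).mean(axis=1)   # ReLU, then global average pooling
    features = np.concatenate([pooled, [initial_level]])
    return w_out @ features                      # predicted mRNA level at 8 time points

levels = predict("AUGC" * 27 + "AU", initial_level=1.0)  # 110-nt input
print(levels.shape)  # -> (8,)
```

A trained model would learn `filters` and `w_out` from the reporter-assay data; the sketch only shows how sequence and initial level flow to an eight-point prediction.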
3

Seeliger, K., L. Ambrogioni, Y. Güçlütürk, L. M. van den Bulk, U. Güçlü, and M. A. J. van Gerven. "End-to-end neural system identification with neural information flow." PLOS Computational Biology 17, no. 2 (February 4, 2021): e1008558. http://dx.doi.org/10.1371/journal.pcbi.1008558.

Abstract:
Neural information flow (NIF) provides a novel approach for system identification in neuroscience. It models the neural computations in multiple brain regions and can be trained end-to-end via stochastic gradient descent from noninvasive data. NIF models represent neural information processing via a network of coupled tensors, each encoding the representation of the sensory input contained in a brain region. The elements of these tensors can be interpreted as cortical columns whose activity encodes the presence of a specific feature in a spatiotemporal location. Each tensor is coupled to the measured data specific to a brain region via low-rank observation models that can be decomposed into the spatial, temporal and feature receptive fields of a localized neuronal population. Both these observation models and the convolutional weights defining the information processing within regions are learned end-to-end by predicting the neural signal during sensory stimulation. We trained a NIF model on the activity of early visual areas using a large-scale fMRI dataset recorded in a single participant. We show that we can recover plausible visual representations and population receptive fields that are consistent with empirical findings.
4

Wang, Zhaojun, Jiangning Wang, Congtian Lin, Yan Han, Zhaosheng Wang, and Liqiang Ji. "Identifying Habitat Elements from Bird Images Using Deep Convolutional Neural Networks." Animals 11, no. 5 (April 27, 2021): 1263. http://dx.doi.org/10.3390/ani11051263.

Abstract:
With the rapid development of digital technology, bird images have become an important part of ornithology research data. However, due to the rapid growth of bird image data, it has become a major challenge to effectively process such a large amount of data. In recent years, deep convolutional neural networks (DCNNs) have shown great potential and effectiveness in a variety of tasks regarding the automatic processing of bird images. However, no research has been conducted on the recognition of habitat elements in bird images, which is of great help when extracting habitat information from bird images. Here, we demonstrate the recognition of habitat elements using four DCNN models trained end-to-end directly based on images. To carry out this research, an image database called Habitat Elements of Bird Images (HEOBs-10) and composed of 10 categories of habitat elements was built, making future benchmarks and evaluations possible. Experiments showed that good results can be obtained by all the tested models. ResNet-152-based models yielded the best test accuracy rate (95.52%); the AlexNet-based model yielded the lowest test accuracy rate (89.48%). We conclude that DCNNs could be efficient and useful for automatically identifying habitat elements from bird images, and we believe that the practical application of this technology will be helpful for studying the relationships between birds and habitat elements.
5

Boriskov, Petr, and Andrei Velichko. "Switch Elements with S-Shaped Current-Voltage Characteristic in Models of Neural Oscillators." Electronics 8, no. 9 (August 22, 2019): 922. http://dx.doi.org/10.3390/electronics8090922.

Abstract:
In this paper, we present circuit solutions based on a switch element with the S-type I–V characteristic implemented using the classic FitzHugh–Nagumo and FitzHugh–Rinzel models. Using the proposed simplified electrical circuits allows the modeling of the integrate-and-fire neuron and burst oscillation modes with the emulation of the mammalian cold receptor patterns. The circuits were studied using the experimental I–V characteristic of an NbO2 switch with a stable section of negative differential resistance (NDR) and a VO2 switch with an unstable NDR, considering the temperature dependences of the threshold characteristics. The results are relevant for modern neuroelectronics and have practical significance for the introduction of the neurodynamic models in circuit design and the brain–machine interface. The proposed systems of differential equations with the piecewise linear approximation of the S-type I–V characteristic may be of scientific interest for further analytical and numerical research and development of neural networks with artificial intelligence.
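The classic FitzHugh-Nagumo model that this abstract builds on can be integrated with a few lines of explicit Euler stepping. The parameter values below are conventional textbook choices, not taken from the paper, and the sketch omits the S-type I-V switch circuitry entirely:

```python
import numpy as np

# FitzHugh-Nagumo: fast cubic voltage variable v, slow recovery variable w.
a, b, eps, I = 0.7, 0.8, 0.08, 0.5   # illustrative parameters (oscillatory regime)
dt, steps = 0.05, 20000

v, w = -1.0, 1.0
trace = np.empty(steps)
for t in range(steps):
    v += dt * (v - v**3 / 3 - w + I)     # dv/dt = v - v^3/3 - w + I
    w += dt * eps * (v + a - b * w)      # dw/dt = eps * (v + a - b*w)
    trace[t] = v

# With this drive current the fixed point is unstable and v spikes periodically.
print(round(float(trace.min()), 2), round(float(trace.max()), 2))
```

Dropping I below the oscillation threshold turns the same equations into the excitable (integrate-and-fire-like) regime discussed in the paper.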
6

Marchesin, Stefano, Alberto Purpura, and Gianmaria Silvello. "Focal elements of neural information retrieval models. An outlook through a reproducibility study." Information Processing & Management 57, no. 6 (November 2020): 102109. http://dx.doi.org/10.1016/j.ipm.2019.102109.

7

De Wolf, E. D., and L. J. Franel. "Neural Networks That Distinguish Infection Periods of Wheat Tan Spot in an Outdoor Environment." Phytopathology 87, no. 1 (January 1997): 83–87. http://dx.doi.org/10.1094/phyto.1997.87.1.83.

Abstract:
Tan spot of wheat, caused by Pyrenophora tritici-repentis, provided a model system for testing disease forecasts based on an artificial neural network. Infection periods for P. tritici-repentis on susceptible wheat cultivars were identified from a bioassay system that correlated tan spot incidence with crop growth stage and 24-h summaries of environmental data, including temperature, relative humidity, wind speed, wind direction, solar radiation, precipitation, and flat-plate resistance-type wetness sensors. The resulting data set consisted of 97 discrete periods, of which 32 were reserved for validation analysis. Neural networks with zero to nine processing elements were evaluated 20 times each to identify the model that most accurately predicted an infection event. The 200 models averaged 74 to 77% accuracy, depending on the number of processing elements and random initialization of coefficients. The most accurate model had five processing elements and correctly predicted 87% of the infection periods in the validation set. In comparison, stepwise logistic regression correctly predicted 69% of the validation cases, and multivariate discriminant analysis distinguished 50% of the validation cases. When wetness-sensor inputs were withheld from the models, both the neural network and logistic regression models declined 6% in prediction accuracy. Thus, neural networks were more accurate than statistical procedures, both with and without wetness-sensor inputs. These results demonstrate the applicability of neural networks to plant disease forecasting.
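As a rough illustration of what a network with a handful of "processing elements" (hidden units) looks like, here is a hypothetical untrained feed-forward classifier mapping 24-h environmental summaries to a binary infection-period call. The input dimension and all weights are invented; the paper's model was fitted to the bioassay data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_inputs, n_hidden = 7, 5   # e.g. temperature, RH, wind speed/direction, radiation,
                            # precipitation, wetness (assumed encoding); 5 processing elements
W1 = rng.normal(size=(n_hidden, n_inputs))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=n_hidden)
b2 = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_infection(env):
    hidden = sigmoid(W1 @ env + b1)          # the five processing elements
    return sigmoid(W2 @ hidden + b2) > 0.5   # infection period: yes/no

result = predict_infection(np.zeros(n_inputs))
```

The paper's comparison amounts to sweeping `n_hidden` from 0 to 9, retraining 20 times per setting, and scoring each model against the 32 held-out validation periods.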
8

Cid, Juan M., Jesús García, Javier Monge, and Juan Zapata. "Design of microwave devices by segmentation, finite elements, reduced-order models, and neural networks." Microwave and Optical Technology Letters 49, no. 3 (January 26, 2007): 655–59. http://dx.doi.org/10.1002/mop.22248.

9

Yuille, Alan L. "Generalized Deformable Models, Statistical Physics, and Matching Problems." Neural Computation 2, no. 1 (March 1990): 1–24. http://dx.doi.org/10.1162/neco.1990.2.1.1.

Abstract:
We describe how to formulate matching and combinatorial problems of vision and neural network theory by generalizing elastic and deformable templates models to include binary matching elements. Techniques from statistical physics, which can be interpreted as computing marginal probability distributions, are then used to analyze these models and are shown to (1) relate them to existing theories and (2) give insight into the relations between, and relative effectivenesses of, existing theories. In particular we exploit the power of statistical techniques to put global constraints on the set of allowable states of the binary matching elements. The binary elements can then be removed analytically before minimization. This is demonstrated to be preferable to existing methods of imposing such constraints by adding bias terms in the energy functions. We give applications to winner-take-all networks, correspondence for stereo and long-range motion, the traveling salesman problem, deformable template matching, learning, content addressable memories, and models of brain development. The biological plausibility of these networks is briefly discussed.
10

Fujiwara, Yusuke, Yoichi Miyawaki, and Yukiyasu Kamitani. "Modular Encoding and Decoding Models Derived from Bayesian Canonical Correlation Analysis." Neural Computation 25, no. 4 (April 2013): 979–1005. http://dx.doi.org/10.1162/neco_a_00423.

Abstract:
Neural encoding and decoding provide perspectives for understanding neural representations of sensory inputs. Recent functional magnetic resonance imaging (fMRI) studies have succeeded in building prediction models for encoding and decoding numerous stimuli by representing a complex stimulus as a combination of simple elements. While arbitrary visual images were reconstructed using a modular model that combined the outputs of decoder modules for multiscale local image bases (elements), the shapes of the image bases were heuristically determined. In this work, we propose a method to establish mappings between the stimulus and the brain by automatically extracting modules from measured data. We develop a model based on Bayesian canonical correlation analysis, in which each module is modeled by a latent variable that relates a set of pixels in a visual image to a set of voxels in an fMRI activity pattern. The estimated mapping from a latent variable to pixels can be regarded as an image basis. We show that the model estimates a modular representation with spatially localized multiscale image bases. Further, using the estimated mappings, we derive encoding and decoding models that produce accurate predictions for brain activity and stimulus images. Our approach thus provides a novel means of revealing neural representations of stimuli by automatically extracting modules, which can be used to generate effective prediction models for encoding and decoding.

Dissertations / Theses on the topic "Models of neural elements"

1

Tuchołka, Andrzej. "Methodology for assessing the construction of machine elements using neural models and antipatterns: doctoral dissertation." Doctoral dissertation, [s.n.], 2020. http://dlibra.tu.koszalin.pl/Content/1317.

2

Miocinovic, Svjetlana. "Theoretical and Experimental Predictions of Neural Elements Activated by Deep Brain Stimulation." Case Western Reserve University School of Graduate Studies / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=case1181758206.

3

Tomov, Petar Georgiev. "Interplay of dynamics and network topology in systems of excitable elements." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17464.

Abstract:
In this work we study global dynamical phenomena that emerge from the interplay between network topology and single-node dynamics in systems of excitable elements. We first focus on relatively small structured networks whose complexity is comprehensible in terms of graph symmetries. We discuss the constraints posed by the network topology on the dynamical flow in the phase space of the system and on the admissible synchronized states. In particular, we are interested in the stability properties of flow-invariant polydiagonals and in the evolution of attractors in the parameter spaces of such systems. As a suitable theoretical framework for excitable elements we use the Kuramoto and Shinomoto model of sinusoidally coupled "active rotators". We investigate plane hexagonal lattices of different sizes with periodic boundary conditions. We study general conditions on the adjacency matrix of the networks that enable the Watanabe-Strogatz reduction, and discuss different examples. Finally, we present a generic analysis of the bifurcations taking place on the submanifold associated with the Watanabe-Strogatz reduced system.
In the second part of the work we investigate a global dynamical phenomenon in neuronal networks known as self-sustained activity (SSA). We consider networks of hierarchical and modular topology, comprising neurons of different cortical electrophysiological cell classes. We show that SSA states with spiking characteristics similar to those observed experimentally can exist in the investigated networks. By analyzing the dynamics of single neurons, as well as the phase space of the whole system, we explain the importance of inhibition for sustaining the global oscillatory activity of the network. Furthermore, we show that both the network architecture, in terms of modularity level, and the mixture of excitatory and inhibitory neurons, in terms of different cell classes, influence the lifetime of SSA.
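The "active rotator" model of Kuramoto and Shinomoto referred to in this abstract can be simulated directly. The sketch below uses illustrative parameters (the values of N, omega, b, and K are assumptions, and the coupling is all-to-all rather than a hexagonal lattice) and measures synchronization with the standard order parameter:

```python
import numpy as np

# Active rotators: dphi_i/dt = omega - b*sin(phi_i) + (K/N) * sum_j sin(phi_j - phi_i)
rng = np.random.default_rng(1)
N, omega, b, K = 100, 1.0, 0.5, 2.0   # omega > b: each unit rotates (oscillatory regime)
dt, steps = 0.01, 5000

phi = rng.uniform(0, 2 * np.pi, N)
for _ in range(steps):
    mean_field = np.exp(1j * phi).mean()
    # All-to-all coupling written via the complex order parameter: r*sin(psi - phi_i).
    coupling = K * np.abs(mean_field) * np.sin(np.angle(mean_field) - phi)
    phi += dt * (omega - b * np.sin(phi) + coupling)

r = np.abs(np.exp(1j * phi).mean())   # Kuramoto order parameter, 0 (incoherent) .. 1 (synchronized)
```

With identical natural frequencies and attractive coupling, the population synchronizes and r approaches 1; lowering omega below b puts each rotator into the excitable regime the thesis analyzes.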
4

Wadagbalkar, Pushkar. "Real-time prediction of projectile penetration to laminates by training machine learning models with finite element solver as the trainer." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1592169428128864.

5

Citipitioglu, Ahmet Muhtar. "Development and assessment of response and strength models for bolted steel connections using refined nonlinear 3D finite element analysis." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31691.

Abstract:
Thesis (Ph.D)--Civil and Environmental Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Haj-Ali, Rami; Committee Co-Chair: Leon, Roberto; Committee Co-Chair: White, Donald; Committee Member: DesRoches, Reginald; Committee Member: Gentry, Russell. Part of the SMARTech Electronic Thesis and Dissertation Collection.
6

Паржин, Юрій Володимирович. "Моделі і методи побудови архітектури і компонентів детекторних нейроморфних комп'ютерних систем" [Models and methods of building the architecture and components of detector neuromorphic computer systems]. Thesis, НТУ "ХПІ", 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/34755.

Abstract:
Dissertation for the degree of Doctor of Technical Sciences in specialty 05.13.05, Computer Systems and Components. National Technical University "Kharkiv Polytechnic Institute", Ministry of Education and Science of Ukraine, Kharkiv, 2018. The thesis addresses the problem of increasing the efficiency of building and using neuromorphic computer systems (NCS) by developing models of their components and overall architecture, as well as training methods based on a formalized detector principle. An analysis and classification of NCS architectures and components established that all of their neural-network implementations rest on the connectionist paradigm of constructing artificial neural networks. As an alternative to the connectionist paradigm, the detector principle of constructing the NCS architecture and its components was substantiated and formalized; it is based on the established connectivity property between the elements of the input signal vector and the corresponding weights of an NCS neural element. On the basis of the detector principle, multi-segment threshold information models were developed for the components of the detector NCS (DNCS): detector blocks, analyzer blocks, and a novelty block, in which the proposed counter-training method forms concepts that define the necessary and sufficient conditions for their reactions. The counter-training method reduces the training time for practical image-recognition problems to a single epoch and reduces the dimension of the training sample. In addition, it resolves the stability-plasticity problem of DNCS memory and the overfitting problem through self-organization of the map of secondary-level detector blocks under the control of the novelty block.
The research also produced a model of the DNCS network architecture consisting of two layers of neuromorphic components at the primary and secondary levels of information processing, which reduces the number of components the system requires. To substantiate the efficiency gains of the detector principle, DNCS software models were developed for automated monitoring and analysis of the external electromagnetic environment and for recognition of the handwritten digits of the MNIST database. The results obtained with these systems confirmed the theoretical propositions of the dissertation and the high efficiency of the developed models and methods.
7

Паржин, Юрій Володимирович. "Моделі і методи побудови архітектури і компонентів детекторних нейроморфних комп'ютерних систем" [Models and methods of building the architecture and components of detector neuromorphic computer systems]. Thesis, НТУ "ХПІ", 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/34756.

8

Levin, Robert Ian. "Dynamic Finite Element model updating using neural networks." Thesis, University of Bristol, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264075.

9

Stevenson, King Douglas Beverley. "Robust hardware elements for weightless artificial neural networks." Thesis, University of Central Lancashire, 2000. http://clok.uclan.ac.uk/1884/.

Full text
Abstract:
This thesis investigates novel robust hardware elements for weightless artificial neural systems with a bias towards high-integrity avionics applications. The author initially reviews the building blocks of physiological neural systems and then chronologically describes the development of weightless artificial neural systems. Several new design methodologies for the implementation of robust binary sum-and-threshold neurons are presented. The new techniques do not rely on weighted binary counters or registered arithmetic units for their operation, making them less susceptible to transient single-event upsets. They employ Boolean, weightless binary, asynchronous elements throughout, thus increasing robustness in the presence of impulsive noise. Hierarchies formed from these neural elements are studied and a weightless probabilistic activation function proposed for non-deterministic applications. Neuroram, an auto-associative memory created using these weightless neurons, is described and analysed. The signal-to-noise ratio characteristics are compared with the traditional Hamming distance metric. This led to the proposal that neuroram can form a threshold-logic-based digital signal filter. Two weightless auto-associative memory based neuro-filters are presented and their filtration properties studied and compared with a traditional median filter. Each novel architecture was emulated using weightless numeric MATLAB code prior to schematic design and functional simulation. Several neural elements were implemented and validated using FPGA technology. A preliminary robustness evaluation was performed. The large-scale particle accelerator at the Theodor Svedberg Laboratory at the University of Uppsala, Sweden, was used to generate transient upsets in an FPGA performing a weightless binary neural function. One paper, two letters and five international patents have been published during the course of this research.
The author has significantly contributed to the field of weightless artificial neural systems in high integrity hardware applications.
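The binary sum-and-threshold neuron described in this abstract is simple enough to sketch directly. The following is a generic illustration of the concept in Python, not the thesis's hardware design; the function name and threshold convention are assumptions:

```python
def sum_and_threshold(inputs, threshold):
    """Weightless binary neuron: fires (returns 1) if and only if the
    number of active (truthy) inputs meets or exceeds the threshold."""
    return int(sum(1 for x in inputs if x) >= threshold)

# Majority vote over five binary inputs (threshold 3)
print(sum_and_threshold([1, 0, 1, 1, 0], 3))  # -> 1
print(sum_and_threshold([1, 0, 0, 1, 0], 3))  # -> 0
```

Because the neuron counts set bits rather than accumulating weighted products, it needs no arithmetic unit, which is the property the thesis exploits for robustness.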
APA, Harvard, Vancouver, ISO, and other styles
10

Venkov, Nikola A. "Dynamics of neural field models." Thesis, University of Nottingham, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.517742.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Models of neural elements"

1

De Wilde, Philippe. Neural Network Models. London: Springer London, 1997. http://dx.doi.org/10.1007/978-1-84628-614-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

De Wilde, Philippe. Neural Networks Models. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0034478.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

K, Mohan Chilukuri, and Ranka Sanjay, eds. Elements of artificial neural networks. Cambridge, Mass: MIT Press, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Domany, Eytan. Models of Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Domany, Eytan, J. Leo van Hemmen, and Klaus Schulten, eds. Models of Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/978-3-642-97171-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Domany, Eytan, J. Leo van Hemmen, and Klaus Schulten, eds. Models of Neural Networks. New York, NY: Springer New York, 1994. http://dx.doi.org/10.1007/978-1-4612-4320-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Domany, Eytan. Models of Neural Networks I. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Neural network models: An analysis. London: Springer, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Physical models of neural networks. Singapore: World Scientific, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

van Hemmen, J. Leo, Jack D. Cowan, and Eytan Domany, eds. Models of Neural Networks IV. New York, NY: Springer New York, 2002. http://dx.doi.org/10.1007/978-0-387-21703-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Models of neural elements"

1

Troiano, Amedeo, Fernando Corinto, and Eros Pasero. "A Memristor Circuit Using Basic Elements with Memory Capability." In Recent Advances of Neural Network Models and Applications, 117–24. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-04129-2_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Arle, Jeffrey E., Longzhi Mei, and Kristen W. Carlson. "Robustness in Neural Circuits." In Brain and Human Body Modeling 2020, 213–29. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45623-8_12.

Full text
Abstract:
Complex systems are found everywhere – from scheduling to traffic, food to climate, economics to ecology, the brain, and the universe. Complex systems typically have many elements, many modes of interconnectedness of those elements, and often exhibit sensitivity to initial conditions. Complex systems by their nature are generally unpredictable and can be highly unstable.
APA, Harvard, Vancouver, ISO, and other styles
3

Del Moral Hernandez, Emilio. "Studying Neural Networks of Bifurcating Recursive Processing Elements — Quantitative Methods for Architecture Design." In Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence, 546–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45720-8_65.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kashchenko, Serguey. "Model of the Neural System with Diffusive Interaction of Elements." In Lecture Notes in Morphogenesis, 125–45. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-19866-8_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Montesinos López, Osval Antonio, Abelardo Montesinos López, and Jose Crossa. "Artificial Neural Networks and Deep Learning for Genomic Prediction of Binary, Ordinal, and Mixed Outcomes." In Multivariate Statistical Machine Learning Methods for Genomic Prediction, 477–532. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-89010-0_12.

Full text
Abstract:
In this chapter, we provide the main elements for implementing deep neural networks in Keras for binary, categorical, and mixed outcomes under feedforward networks as well as the main practical issues involved in implementing deep learning models with binary response variables. The same practical issues are provided for implementing deep neural networks with categorical and count traits under a univariate framework. We follow with a detailed assessment of information for implementing multivariate deep learning models for continuous, binary, categorical, count, and mixed outcomes. In all the examples given, the data came from plant breeding experiments including genomic data. The training process for binary, ordinal, count, and multivariate outcomes is similar to fitting DNN models with univariate continuous outcomes, since once we have the data to be trained, we need to (a) define the DNN model in Keras, (b) configure and compile the model, (c) fit the model, and finally, (d) evaluate the prediction performance in the testing set. In the next section, we provide illustrative examples of training DNN for binary outcomes in Keras R (Chollet and Allaire, Deep learning with R. Manning Publications, Manning Early Access Program (MEA), 2017; Allaire and Chollet, Keras: R interface to Keras, 2019).
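The four-step workflow spelled out in this abstract (define, configure/compile, fit, evaluate) can be illustrated with a minimal pure-Python stand-in for a one-neuron binary classifier. This mimics the shape of the Keras workflow but is not Keras code; the class and method names are illustrative assumptions:

```python
import math
import random

class TinyBinaryNet:
    """A single logistic neuron standing in for a deep network."""

    def __init__(self, n_inputs):                      # (a) define the model
        random.seed(0)
        self.w = [random.uniform(-0.1, 0.1) for _ in range(n_inputs)]
        self.b = 0.0

    def compile(self, lr=0.5):                         # (b) configure and compile
        self.lr = lr

    def _predict_proba(self, x):
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def fit(self, X, y, epochs=200):                   # (c) fit by gradient descent
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                err = self._predict_proba(xi) - yi     # dLoss/dz for log-loss
                self.w = [w - self.lr * err * v for w, v in zip(self.w, xi)]
                self.b -= self.lr * err

    def evaluate(self, X, y):                          # (d) evaluate on a test set
        preds = [int(self._predict_proba(xi) >= 0.5) for xi in X]
        return sum(p == t for p, t in zip(preds, y)) / len(y)

# Linearly separable toy data: class = 1 iff x0 > x1
X = [[0.0, 1.0], [1.0, 0.0], [0.2, 0.9], [0.9, 0.2]]
y = [0, 1, 0, 1]
net = TinyBinaryNet(n_inputs=2)
net.compile(lr=0.5)
net.fit(X, y)
print(net.evaluate(X, y))  # -> 1.0
```

The same define/compile/fit/evaluate rhythm carries over directly to real Keras models, which is why the chapter can reuse it across binary, ordinal, count, and multivariate outcomes.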
APA, Harvard, Vancouver, ISO, and other styles
6

Montesinos López, Osval Antonio, Abelardo Montesinos López, and Jose Crossa. "Artificial Neural Networks and Deep Learning for Genomic Prediction of Continuous Outcomes." In Multivariate Statistical Machine Learning Methods for Genomic Prediction, 427–76. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-89010-0_11.

Full text
Abstract:
This chapter provides elements for implementing deep neural networks (deep learning) for continuous outcomes. We give details of the hyperparameters to be tuned in deep neural networks and provide a general guide for doing this task with a higher probability of success. Then we explain the most popular deep learning frameworks that can be used to implement these models, as well as the most popular optimizers available in many software programs for deep learning. Several practical examples with plant breeding data for implementing deep neural networks in the Keras library are outlined. These examples take into account many components in the predictor as well as many hyperparameters (hidden layers, number of neurons, learning rate, optimizers, penalization, etc.) for which we also illustrate how the tuning process can be done to increase the probability of a successful application.
APA, Harvard, Vancouver, ISO, and other styles
7

Riel, Stefanie, Mohammad Bashiri, Werner Hemmert, and Siwei Bai. "Computational Models of Brain Stimulation with Tractography Analysis." In Brain and Human Body Modeling 2020, 101–17. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45623-8_6.

Full text
Abstract:
Computational human head models have been used in studies of brain stimulation. These models have been able to provide useful information that cannot be acquired, or is difficult to acquire, from experimental or imaging studies. However, most of these models are purely volume conductor models that overlook the electric excitability of axons in the white matter of the brain. We hereby combined a finite element (FE) model of electroconvulsive therapy (ECT) with a whole-brain tractography analysis as well as the cable theory of neuronal excitation. We reconstructed a whole-brain tractogram with 2000 neural fibres from diffusion-weighted magnetic resonance scans and extracted the information on electrical potential from the FE ECT model of the same head. Two different electrode placements and three different white matter conductivity settings were simulated and compared. We calculated the electric field and the second spatial derivatives of the electrical potential along the fibre direction, which describe the activating function for homogeneous axons, and investigated sensitive regions of white matter activation. Models with anisotropic white matter conductivity yielded the most distinctive electric field and activating function distributions. Activation was most likely to appear in regions between the electrodes where the electric potential gradient is most pronounced.
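The activating function mentioned in this abstract, i.e. the second spatial derivative of the extracellular potential along the fibre, is straightforward to compute for a fibre sampled at regular intervals. The following is a generic finite-difference sketch with made-up sample values and units, not the chapter's actual pipeline:

```python
def activating_function(potentials, dx):
    """Central second difference of the extracellular potential V along
    a straight fibre sampled at spacing dx. For a homogeneous axon,
    positive values indicate regions of depolarization (likely activation),
    negative values regions of hyperpolarization."""
    return [
        (potentials[i - 1] - 2.0 * potentials[i] + potentials[i + 1]) / dx**2
        for i in range(1, len(potentials) - 1)
    ]

# A point-source-like positive potential profile peaking mid-fibre
V = [0.0, 0.2, 0.8, 2.0, 0.8, 0.2, 0.0]  # mV, sampled every 0.1 mm
f = activating_function(V, dx=0.1)
print(f)  # negative under the potential peak, positive on the flanks
```

This matches the qualitative picture in the abstract: the sign pattern of the second derivative, not the potential itself, predicts where axons in the tractogram are driven toward threshold.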
APA, Harvard, Vancouver, ISO, and other styles
8

Soloviev, Arcady, Boris Sobol, Pavel Vasiliev, and Alexander Senichev. "Generative Artificial Neural Network Model for Visualization of Internal Defects of Structural Elements." In Springer Proceedings in Materials, 587–95. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45120-2_48.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zgonc, Kornelija, Jan D. Achenbach, and Yung-Chung Lee. "Crack Sizing Using a Neural Network Classifier Trained with Data Obtained from Finite Element Models." In Review of Progress in Quantitative Nondestructive Evaluation, 779–86. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4615-1987-4_97.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Becking, Daniel, Maximilian Dreyer, Wojciech Samek, Karsten Müller, and Sebastian Lapuschkin. "ECQ^x: Explainability-Driven Quantization for Low-Bit and Sparse DNNs." In xxAI - Beyond Explainable AI, 271–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_14.

Full text
Abstract:
The remarkable success of deep neural networks (DNNs) in various applications is accompanied by a significant increase in network parameters and arithmetic operations. Such increases in memory and computational demands make deep learning prohibitive for resource-constrained hardware platforms such as mobile devices. Recent efforts aim to reduce these overheads, while preserving model performance as much as possible, and include parameter reduction techniques, parameter quantization, and lossless compression techniques. In this chapter, we develop and describe a novel quantization paradigm for DNNs: Our method leverages concepts of explainable AI (XAI) and concepts of information theory: Instead of assigning weight values based on their distances to the quantization clusters, the assignment function additionally considers weight relevances obtained from Layer-wise Relevance Propagation (LRP) and the information content of the clusters (entropy optimization). The ultimate goal is to preserve the most relevant weights in quantization clusters of highest information content. Experimental results show that this novel Entropy-Constrained and XAI-adjusted Quantization (ECQ^x) method generates ultra low-precision (2–5 bit) and simultaneously sparse neural networks while maintaining or even improving model performance. Due to reduced parameter precision and a high number of zero-elements, the rendered networks are highly compressible in terms of file size, up to 103× compared to the full-precision unquantized DNN model. Our approach was evaluated on different types of models and datasets (including Google Speech Commands, CIFAR-10 and Pascal VOC) and compared with previous work.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Models of neural elements"

1

Kaminski, Marcin, and Mateusz Malarczyk. "Neural Data Processing in Scanner of Static Elements." In 2019 24th International Conference on Methods and Models in Automation and Robotics (MMAR). IEEE, 2019. http://dx.doi.org/10.1109/mmar.2019.8864650.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Felix, Corinne Teeter, Sarah Luca, Srideep Musuvathy, and Brad Aimone. "Localization through Grid-based Encodings on Digital Elevation Models." In NICE 2022: Neuro-Inspired Computational Elements Conference. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3517343.3517366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Leugering, Johannes. "Making spiking neurons more succinct with multi-compartment models." In NICE '20: Neuro-inspired Computational Elements Workshop. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3381755.3381763.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Allen, Kathleen B., and Bradley E. Layton. "Mechanical Neural Growth Models." In ASME 2005 International Mechanical Engineering Congress and Exposition. ASMEDC, 2005. http://dx.doi.org/10.1115/imece2005-79445.

Full text
Abstract:
Critical to controlling the growth patterns of cell-based sensors is understanding how the cytoskeleton of the cell maintains its structure and integrity, both under mechanical load and in a load-free environment. Our approach to a better understanding of cell growth is to use a computer simulation that incorporates the primary structures necessary for growth, microtubules, along with their observed behaviors and experimentally determined mechanical properties. Microtubules are the main compressive structural support elements for the axon of a neuron and are created via polymerization of α-β tubulin dimers. Our de novo simulation explores the mechanics of the forces between microtubules and the membrane. We hypothesize that axonal growth is most influenced by the location and direction of the force exerted by the microtubule on the membrane, and furthermore that the interplay of forces between microtubules and the inner surface of the cell membrane dictates the polar structure of axons. The simulation will be used to understand cytoskeletal mechanics for the purpose of engineering cells to be used as sensors.
APA, Harvard, Vancouver, ISO, and other styles
5

Krasilenko, Vladimir G., Anatoly K. Bogukhvalskiy, and Andrey T. Magas. "Equivalent models of neural networks and their effective optoelectronic implementations based on matrix multivalued elements." In International Conference on Optical Storage, edited by Viacheslav V. Petrov and Sergei V. Svechnikov. SPIE, 1997. http://dx.doi.org/10.1117/12.267699.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Roemer, Michael J., Chi-an Hong, and Stephen H. Hesler. "Machine Health Monitoring and Life Management Using Finite Element Based Neural Networks." In ASME 1995 International Gas Turbine and Aeroengine Congress and Exposition. American Society of Mechanical Engineers, 1995. http://dx.doi.org/10.1115/95-gt-243.

Full text
Abstract:
This paper demonstrates a novel approach to condition-based health monitoring for rotating machinery using recent advances in neural network technology and rotordynamic finite-element modeling. A desktop rotor demonstration rig was used as a proof-of-concept tool. The approach integrates machinery sensor measurements with detailed rotordynamic finite-element models through a neural network which is specifically trained to respond to the machine being monitored. The advantage of this approach over current methods lies in the use of an advanced neural network. The neural network is trained to contain the knowledge of a detailed finite-element model whose results are integrated with system measurements to produce accurate machine fault diagnostics and component stress predictions. This technique takes advantage of recent advances in neural network technology that enable real-time machinery diagnostics and component stress prediction to be performed on a PC with the accuracy of finite-element analysis. The availability of real-time, finite-element based knowledge of the rotating elements allows for real-time component life prediction as well as accurate and fast fault diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
7

Mahajan, R. L. "Strategies for Building Artificial Neural Network Models." In ASME 2000 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2000. http://dx.doi.org/10.1115/imece2000-1464.

Full text
Abstract:
An artificial neural network (ANN) is a massively parallel, dynamic system of processing elements, neurons, which are connected in complicated patterns to allow for a variety of interactions among the inputs to produce the desired output. It has the ability to learn directly from example data rather than by following programmed rules based on a knowledge base. There is virtually no limit to what an ANN can predict or decipher, so long as it has been trained properly through examples which encompass the entire range of desired predictions. This paper provides an overview of the strategies needed to build accurate ANN models. Following a general introduction to artificial neural networks, the paper describes different techniques to build and train ANN models. Step-by-step procedures are described to demonstrate the mechanics of building neural network models, with particular emphasis on feedforward neural networks using the back-propagation learning algorithm. The network structure and pre-processing of data are two significant aspects of ANN model building. The former has a significant influence on the predictive capability of the network [1]. Several studies have addressed the issue of optimal network structure. Kim and May [2] use statistical experimental design to determine an optimal network for a specific application. Bhat and McAvoy [3] propose a stripping algorithm, starting with a large network and then reducing the network complexity by removing unnecessary weights/nodes. This 'complex-to-simple' procedure requires heavy and tedious computation. Villiers and Bernard [4] conclude that although there is no significant difference between the optimal performance of one- or two-hidden-layer networks, single-layer networks do better classification on average. Marwah et al. [5] advocate a simple-to-complex methodology in which training starts with the simplest ANN structure.
The complexity of the structure is incrementally stepped up until an acceptable learning performance is obtained. Preprocessing of data can lead to substantial improvements in the training process. Kown et al. [6] propose a data pre-processing algorithm for a highly skewed data set. Marwah et al. [5] propose two different strategies for dealing with the data. For applications with a significant amount of historical data, a smart-select methodology is proposed that ensures an equally weighted distribution of the data over the range of the input parameters. For applications where there is a scarcity of data, or where the experiments are expensive to perform, a statistical design-of-experiments approach is suggested. In either case, it is shown that dividing the data into training, testing, and validation sets ensures an accurate ANN model that has excellent predictive capabilities. The paper also describes the recently developed concepts of physical-neural network models and model transfer techniques. In the former, an ANN model is built on data generated through a 'first-principles' analytical or numerical model of the process under consideration. It is shown that such a model, termed a physical-neural network model, has the accuracy of the first-principles model yet is orders of magnitude faster to execute. In recognition of the fact that such a model has all the approximations that are generally inherent in physical models of many complex processes, model transfer techniques have been developed [6] that allow economical development of accurate process equipment models. Examples from thermally based materials processing are described to illustrate the application of the basic concepts involved.
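The training/testing/validation division recommended above can be sketched generically. The 70/15/15 proportions, the seed, and the function name here are illustrative assumptions, not values from the paper:

```python
import random

def split_dataset(data, train=0.7, test=0.15, seed=42):
    """Shuffle a dataset and divide it into training, testing, and
    validation subsets; whatever remains after the train and test
    fractions goes to validation."""
    rng = random.Random(seed)          # fixed seed for a reproducible split
    shuffled = data[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train)
    n_test = int(n * test)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_test],
            shuffled[n_train + n_test:])

samples = list(range(100))
tr, te, va = split_dataset(samples)
print(len(tr), len(te), len(va))  # -> 70 15 15
```

Shuffling before splitting is one simple way to approximate the paper's goal of spreading the data evenly over the range of input parameters; the smart-select methodology it cites is a more deliberate, stratified version of the same idea.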
APA, Harvard, Vancouver, ISO, and other styles
8

Madala, Kaushik, Shraddha Piparia, Hyunsook Do, and Renee Bryce. "Finding Component State Transition Model Elements Using Neural Networks: An Empirical Study." In 2018 5th International Workshop on Artificial Intelligence for Requirements Engineering (AIRE). IEEE, 2018. http://dx.doi.org/10.1109/aire.2018.00014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Balabanov, T., M. Hadjiski, P. Koprinkova-Hristova, S. Beloreshki, and L. Doukovska. "Neural network model of mill-fan system elements vibration for predictive maintenance." In 2011 International Symposium on Innovations in Intelligent Systems and Applications (INISTA). IEEE, 2011. http://dx.doi.org/10.1109/inista.2011.5946102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

PAGANI, ALFONSO, MARCO ENEA, and ERASMO CARRERA. "DAMAGE DETECTION IN LAMINATED COMPOSITES BY NEURAL NETWORKS AND HIGH ORDER FINITE ELEMENTS." In Thirty-sixth Technical Conference. Destech Publications, Inc., 2021. http://dx.doi.org/10.12783/asc36/35788.

Full text
Abstract:
In the aerospace industry, machine learning techniques are becoming more and more important for Structural Health Monitoring (SHM). In fact, they can provide a precise and complete mapping of the damage distribution in a structure, including low-intensity or local defects, which cannot be detected via traditional tests. In this work, feedforward artificial neural networks (ANN) are employed for vibration-based damage detection in composite laminates. In the framework of the Carrera Unified Formulation (CUF), one-dimensional refined models in conjunction with layer-wise (LW) theory are adopted. CUF-based Monte Carlo simulations have been used to create a dataset of damage scenarios for training the ANN, which is fed with the vibrational characteristics of these structures. The trained ANN, given these dynamic parameters, is able to predict the location and intensity of all damages in the laminated composite structures.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Models of neural elements"

1

Warrick, Arthur W., Gideon Oron, Mary M. Poulton, Rony Wallach, and Alex Furman. Multi-Dimensional Infiltration and Distribution of Water of Different Qualities and Solutes Related Through Artificial Neural Networks. United States Department of Agriculture, January 2009. http://dx.doi.org/10.32747/2009.7695865.bard.

Full text
Abstract:
The project exploits the use of Artificial Neural Networks (ANN) to describe infiltration, water, and solute distribution in the soil during irrigation. It provides a method of simulating water and solute movement in the subsurface which, in principle, is different and has some advantages over the more common approach of numerical modeling of flow and transport equations. The five objectives were (i) Numerically develop a database for the prediction of water and solute distribution for irrigation; (ii) Develop predictive models using ANN; (iii) Develop an experimental (laboratory) database of water distribution with time within a transparent flow cell, recorded by a high-resolution CCD video camera; (iv) Conduct field studies to provide basic data for developing and testing the ANN; and (v) Investigate the inclusion of water quality [salinity and organic matter (OM)] in an ANN model used for predicting infiltration and subsurface water distribution. A major accomplishment was the successful use of Moment Analysis (MA) to characterize "plumes of water" applied by various types of irrigation (including drip and gravity sources). The general idea is to describe the subsurface water patterns statistically in terms of only a few (often 3) parameters which can then be predicted by the ANN. It was shown that ellipses (in two dimensions) or ellipsoids (in three dimensions) can be depicted about the center of the plume. Any fraction of water added can be related to a "probability" curve relating the size of the ellipse (or ellipsoid) that contains that amount of water. The initial test of an ANN to predict the moments (and hence the water plume) was with numerically generated data for infiltration from surface and subsurface drip line and point sources in three contrasting soils.
The underlying dataset consisted of 1,684,500 vectors (5 soils × 5 discharge rates × 3 initial conditions × 1,123 nodes × 20 print times), where each vector had eleven elements consisting of the initial water content, hydraulic properties of the soil, flow rate, time, and space coordinates. The output is an estimate of the subsurface water distribution for essentially any soil property, initial condition, or flow rate from a drip source. Following the formal development of the ANN, we prepared a "user-friendly" version in a spreadsheet environment (in Excel). The input data are selected from appropriate values and the output is instantaneous, resulting in a picture of the water plume. The MA has also proven valuable, on its own merit, in the description of flow in soil under laboratory conditions for both wettable and repellent soils. This includes non-Darcian flow examples and redistribution as well as infiltration. Field experiments were conducted in different agricultural fields with various water qualities in Israel. The results obtained will form the basis for further development of the ANN models. Regions of high repellency were identified primarily under the canopy of various orchard crops, including citrus and persimmons. Also, increasing OM in the applied water led to greater repellency. Major scientific implications are that the ANN offers an alternative to conventional flow and transport modeling, and that MA is a powerful technique for describing subsurface water distributions for normal (wettable) and repellent soil. Implications of the field measurements point to the special role of OM in affecting wettability, both from the irrigation water and from soil accumulation below canopies. Implications for agriculture are that a modified approach to drip system design should be adopted for open-area crops and orchards, taking into account the OM components in both the soil and the applied water.
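The moment-analysis idea described in this abstract, summarizing a water plume by its zeroth, first, and second spatial moments, can be sketched for a discretized 2-D water-content field. The grid spacing and field values below are made up for illustration and are not from the project's dataset:

```python
def plume_moments(theta, xs, zs):
    """Zeroth moment (total water), first moments (plume centroid), and
    centered second moments (spread) of a 2-D water-content field
    theta[i][j] sampled at horizontal positions xs[j] and depths zs[i]."""
    m0 = sum(sum(row) for row in theta)
    cx = sum(theta[i][j] * xs[j]
             for i in range(len(zs)) for j in range(len(xs))) / m0
    cz = sum(theta[i][j] * zs[i]
             for i in range(len(zs)) for j in range(len(xs))) / m0
    sxx = sum(theta[i][j] * (xs[j] - cx) ** 2
              for i in range(len(zs)) for j in range(len(xs))) / m0
    szz = sum(theta[i][j] * (zs[i] - cz) ** 2
              for i in range(len(zs)) for j in range(len(xs))) / m0
    return m0, (cx, cz), (sxx, szz)

# Symmetric plume centred at x = 0 cm, z = 10 cm depth
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
zs = [8.0, 10.0, 12.0]
theta = [[0.0, 0.1, 0.2, 0.1, 0.0],
         [0.1, 0.3, 0.5, 0.3, 0.1],
         [0.0, 0.1, 0.2, 0.1, 0.0]]
m0, centre, spread = plume_moments(theta, xs, zs)
print(centre)  # centroid near (0.0, 10.0)
```

The centroid and second moments are exactly the "few parameters" the abstract mentions: from them an ellipse can be drawn about the plume center, and those parameters, rather than the full field, become the ANN's prediction targets.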
APA, Harvard, Vancouver, ISO, and other styles
2

Brown, Joshua W. Computational Neural Models of Risk. Fort Belvoir, VA: Defense Technical Information Center, February 2010. http://dx.doi.org/10.21236/ada515423.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Byrne, John H. Analysis and Synthesis of Adaptive Neural Elements. Fort Belvoir, VA: Defense Technical Information Center, September 1987. http://dx.doi.org/10.21236/ada187047.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gardner, Daniel. Symbolic Processor Based Models of Neural Networks. Fort Belvoir, VA: Defense Technical Information Center, May 1988. http://dx.doi.org/10.21236/ada200200.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Casey, Tiernan, and Bert Debusschere. Analysis of Neural Network Combustion Surrogate Models. Office of Scientific and Technical Information (OSTI), September 2019. http://dx.doi.org/10.2172/1569154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Byrne, John H. Analysis and Synthesis of Adaptive Neural Elements and Assemblies. Fort Belvoir, VA: Defense Technical Information Center, September 1988. http://dx.doi.org/10.21236/ada201239.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Leij, F. J., and M. T. Van Genuchten. Development of Pedotransfer Functions with Neural Network Models. Fort Belvoir, VA: Defense Technical Information Center, June 2001. http://dx.doi.org/10.21236/ada394563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fan, Hongyou, Catherine Branda, Richard Louis Schiek, Christina E. Warrender, and James Chris Forsythe. Neural assembly models derived through nano-scale measurements. Office of Scientific and Technical Information (OSTI), September 2009. http://dx.doi.org/10.2172/993899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Niederer, J. Particle Beam Control Design Notes for Neural Models. Office of Scientific and Technical Information (OSTI), June 1999. http://dx.doi.org/10.2172/1151384.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hirsch, Morris W., Bill Baird, Walter Freeman, and Bernice Gangale. Dynamical Systems, Neural Networks and Cortical Models ASSERT 93. Fort Belvoir, VA: Defense Technical Information Center, November 1994. http://dx.doi.org/10.21236/ada295495.

Full text
APA, Harvard, Vancouver, ISO, and other styles