
Journal articles on the topic 'Approximate identity neural networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Approximate identity neural networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Moon, Sunghwan. "ReLU Network with Bounded Width Is a Universal Approximator in View of an Approximate Identity." Applied Sciences 11, no. 1 (January 4, 2021): 427. http://dx.doi.org/10.3390/app11010427.

Abstract:
Deep neural networks have shown very successful performance on a wide range of tasks, but the theory of why they work so well is still at an early stage. Recently, the expressive power of neural networks, which is important for understanding deep learning, has received considerable attention. Classic results by Cybenko, Barron, and others state that a network with a single hidden layer and a suitable activation function is a universal approximator. More recently, researchers began to study how width affects the expressiveness of neural networks, i.e., universal approximation theorems for deep neural networks with the Rectified Linear Unit (ReLU) activation function and bounded width. Here, we show how any continuous function on a compact subset of R^{n_in}, n_in ∈ N, can be approximated by a ReLU network whose hidden layers have at most n_in + 5 nodes, in view of an approximate identity.
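The approximate-identity idea behind this result can be illustrated numerically: convolving a continuous function with a narrowing Gaussian kernel (a classical approximate identity) reproduces the function in the limit. The sketch below is illustrative only, with a toy target and kernel of my own choosing, not the paper's ReLU construction:

```python
import numpy as np

def gaussian_kernel(x, eps):
    # An approximate identity: unit mass, concentrating at 0 as eps -> 0.
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

def convolve_with_identity(f, grid, eps):
    # Riemann-sum approximation of the convolution (f * k_eps)(t).
    dx = grid[1] - grid[0]
    return np.array([np.sum(f(grid) * gaussian_kernel(t - grid, eps)) * dx
                     for t in grid])

f = lambda x: np.abs(np.sin(3 * x))          # a continuous target on [-1, 1]
grid = np.linspace(-1, 1, 2001)

errors = []
for eps in (0.3, 0.1, 0.03):
    approx = convolve_with_identity(f, grid, eps)
    # Measure the sup-norm error away from the boundary, where the
    # truncated convolution loses kernel mass.
    interior = slice(300, 1701)              # x in [-0.7, 0.7]
    errors.append(np.max(np.abs(approx[interior] - f(grid)[interior])))

assert errors[0] > errors[1] > errors[2]     # error shrinks as eps shrinks
```

The shrinking sup-norm error is exactly the property the paper exploits; its contribution is realizing the narrowing kernels with bounded-width ReLU layers.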
2

Funahashi, Ken-Ichi. "Approximate realization of identity mappings by three-layer neural networks." Electronics and Communications in Japan (Part III: Fundamental Electronic Science) 73, no. 11 (1990): 61–68. http://dx.doi.org/10.1002/ecjc.4430731107.

3

Zainuddin, Zarita, and Saeed Panahian Fard. "The Universal Approximation Capabilities of Cylindrical Approximate Identity Neural Networks." Arabian Journal for Science and Engineering 41, no. 8 (March 4, 2016): 3027–34. http://dx.doi.org/10.1007/s13369-016-2067-9.

4

Turchetti, C., M. Conti, P. Crippa, and S. Orcioni. "On the approximation of stochastic processes by approximate identity neural networks." IEEE Transactions on Neural Networks 9, no. 6 (1998): 1069–85. http://dx.doi.org/10.1109/72.728353.

5

Conti, M., and C. Turchetti. "Approximate identity neural networks for analog synthesis of nonlinear dynamical systems." IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 41, no. 12 (1994): 841–58. http://dx.doi.org/10.1109/81.340846.

6

Fard, Saeed Panahian, and Zarita Zainuddin. "Almost everywhere approximation capabilities of double Mellin approximate identity neural networks." Soft Computing 20, no. 11 (July 2, 2015): 4439–47. http://dx.doi.org/10.1007/s00500-015-1753-y.

7

Panahian Fard, Saeed, and Zarita Zainuddin. "The universal approximation capabilities of double 2π-periodic approximate identity neural networks." Soft Computing 19, no. 10 (September 6, 2014): 2883–90. http://dx.doi.org/10.1007/s00500-014-1449-8.

8

Panahian Fard, Saeed, and Zarita Zainuddin. "Analyses for Lp[a, b]-norm approximation capability of flexible approximate identity neural networks." Neural Computing and Applications 24, no. 1 (October 8, 2013): 45–50. http://dx.doi.org/10.1007/s00521-013-1493-9.

9

DiMattina, Christopher, and Kechen Zhang. "How to Modify a Neural Network Gradually Without Changing Its Input-Output Functionality." Neural Computation 22, no. 1 (January 2010): 1–47. http://dx.doi.org/10.1162/neco.2009.05-08-781.

Abstract:
It is generally unknown when distinct neural networks having different synaptic weights and thresholds implement identical input-output transformations. Determining the exact conditions for structurally distinct yet functionally equivalent networks may shed light on the theoretical constraints on how diverse neural circuits might develop and be maintained to serve identical functions. Such consideration also imposes practical limits on our ability to uniquely infer the structure of underlying neural circuits from stimulus-response measurements. We introduce a biologically inspired mathematical method for determining when the structure of a neural network can be perturbed gradually while preserving functionality. We show that for common three-layer networks with convergent and nondegenerate connection weights, this is possible only when the hidden unit gains are power functions, exponentials, or logarithmic functions, which are known to approximate the gains seen in some biological neurons. For practical applications, our numerical simulations with finite and noisy data show that continuous confounding of parameters due to network functional equivalence tends to occur approximately even when the gain function is not one of the aforementioned three types, suggesting that our analytical results are applicable to more general situations and may help identify a common source of parameter variability in neural network modeling.
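The parameter confounding described here is easy to see in the exponential-gain case: a threshold shift can be absorbed exactly into the output weights, so structurally distinct networks compute the same function. A minimal sketch with hypothetical parameter values (not the paper's networks):

```python
import numpy as np

# With exponential hidden-unit gains g(u) = exp(u), the network output
# sum_i c_i * exp(w_i . x + theta_i) equals sum_i (c_i * e^theta_i) * exp(w_i . x),
# so thresholds and output weights trade off continuously while the
# input-output function stays identical.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))    # hidden weights: 4 hidden units, 3 inputs
theta = rng.normal(size=4)     # hidden thresholds
c = rng.normal(size=4)         # output weights

def net(x, W, theta, c):
    return c @ np.exp(W @ x + theta)

delta = 0.7                    # any continuous shift along this direction works
theta2 = theta + delta         # structurally different parameters...
c2 = c * np.exp(-delta)        # ...compensated exactly in the output weights

x = rng.normal(size=3)
assert np.allclose(net(x, W, theta, c), net(x, W, theta2, c2))
```

This one-parameter family of equivalent networks is precisely the kind of continuous confound the paper characterizes for power, exponential, and logarithmic gains.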
10

Germani, S., G. Tosti, P. Lubrano, S. Cutini, I. Mereu, and A. Berretta. "Artificial Neural Network classification of 4FGL sources." Monthly Notices of the Royal Astronomical Society 505, no. 4 (June 24, 2021): 5853–61. http://dx.doi.org/10.1093/mnras/stab1748.

Abstract:
The Fermi-LAT DR1 and DR2 4FGL catalogues feature more than 5000 gamma-ray sources, of which about one fourth are not associated with already known objects and approximately one third are associated with blazars of uncertain nature. We perform a three-category classification of the 4FGL DR1 and DR2 sources independently, using an ensemble of Artificial Neural Networks (ANNs) to characterize them based on the likelihood of being a Pulsar (PSR), a BL Lac type blazar (BLL), or a Flat Spectrum Radio Quasar (FSRQ). We identify candidate PSR, BLL, and FSRQ among the unassociated sources with approximate equipartition among the three categories and select 10 classification outliers as potentially interesting for follow-up studies.
11

Kaminski, P. C. "The Approximate Location of Damage through the Analysis of Natural Frequencies with Artificial Neural Networks." Proceedings of the Institution of Mechanical Engineers, Part E: Journal of Process Mechanical Engineering 209, no. 2 (August 1995): 117–23. http://dx.doi.org/10.1243/pime_proc_1995_209_238_02.

Abstract:
An effective and reliable damage assessment methodology is a valuable tool for the timely determination of damage and the deterioration stage of structural members as well as for non-destructive testing (NDT). In this work artificial neural networks are used to identify the approximate location of damage through the analysis of changes in the natural frequencies. At first, a methodology for the use of artificial neural networks for this purpose is described. Different ways of pre-processing the data are discussed. The proposed approach is illustrated through the simulation of a free-free beam with a crack whose natural frequencies were obtained experimentally.
12

Konstantaras, A., M. R. Varley, F. Vallianatos, G. Collins, and P. Holifield. "A neuro-fuzzy approach to the reliable recognition of electric earthquake precursors." Natural Hazards and Earth System Sciences 4, no. 5/6 (October 18, 2004): 641–46. http://dx.doi.org/10.5194/nhess-4-641-2004.

Abstract:
Electric Earthquake Precursor (EEP) recognition is essentially a problem of weak signal detection. An EEP signal, according to the theory of propagating cracks, is usually a very weak electric potential anomaly appearing in the Earth's electric field prior to an earthquake, often unobservable within the electric background, which is significantly stronger and embedded in noise. Furthermore, EEP signals vary in duration and size, making reliable recognition even more difficult. An average model for EEP signals has been identified based on a time function describing the evolution of the number of propagating cracks. This paper describes the use of neuro-fuzzy networks (neural networks with intrinsic fuzzy logic abilities) for the reliable recognition of EEP signals within the electric field. Pattern recognition is performed by the neural network to identify the average EEP model within the electric field. Use of the neuro-fuzzy model enables signals that are not exactly the same as, but do approximate, the average EEP model to be classified as EEPs. On the other hand, signals that look like EEPs but do not approximate the average model closely enough are suppressed, preventing false classification. The effectiveness of the proposed network is demonstrated using electrotelluric data recorded in NW Greece.
13

Maurya, Sunil Kumar, Xin Liu, and Tsuyoshi Murata. "Graph Neural Networks for Fast Node Ranking Approximation." ACM Transactions on Knowledge Discovery from Data 15, no. 5 (June 26, 2021): 1–32. http://dx.doi.org/10.1145/3446217.

Abstract:
Graphs arise naturally in numerous situations, including social graphs, transportation graphs, web graphs, protein graphs, etc. One of the important problems in these settings is to identify which nodes are important in the graph and how they affect the graph structure as a whole. Betweenness centrality and closeness centrality are two commonly used node ranking measures for finding influential nodes in graphs in terms of information spread and connectivity. Both are considered shortest-path-based measures, as their calculation assumes that information flows between nodes via the shortest paths. However, exact calculation of these centrality measures is computationally expensive and prohibitive, especially for large graphs. Although researchers have proposed approximation methods, they are either inefficient, suboptimal, or both. We propose the first graph neural network (GNN) based model to approximate betweenness and closeness centrality. In a GNN, each node aggregates features of the nodes in its multihop neighborhood. We use this feature aggregation scheme to model paths and learn how many nodes are reachable from a specific node. We demonstrate through extensive experiments on a series of synthetic and real-world datasets that our approach significantly outperforms current techniques while taking less time. A benefit of our approach is that the model is inductive: it can be trained on one set of graphs and evaluated on another set of graphs with varying structures. Thus, the model is useful for both static graphs and dynamic graphs. Source code is available at https://github.com/sunilkmaurya/GNN_Ranking
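The core intuition — aggregating over multihop neighborhoods counts how many nodes are reachable within k hops, a quantity that tracks shortest-path centralities — can be sketched without any learning. The toy graph and exact aggregation below are my own illustration, not the paper's GNN:

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]            # a 5-node path graph
n = 5
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1

def khop_reach_counts(A, k):
    # k rounds of neighborhood aggregation, computed exactly as reachability.
    reach = np.eye(len(A), dtype=bool)
    for _ in range(k):
        reach = reach | ((reach.astype(int) @ A) > 0)
    return reach.sum(axis=1) - 1                    # exclude the node itself

def closeness(A):
    # Exact closeness centrality via all-pairs shortest paths (Floyd-Warshall).
    m = len(A)
    D = np.where(A > 0, 1.0, np.inf)
    np.fill_diagonal(D, 0.0)
    for k in range(m):
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    return (m - 1) / D.sum(axis=1)

# The middle of the path is both the most reachable and the most central node.
assert int(np.argmax(khop_reach_counts(A, 2))) == int(np.argmax(closeness(A))) == 2
```

A trained GNN replaces the exact reachability aggregation with learned feature aggregation, which is what makes the approach inductive across graphs.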
14

Fuentes, R., A. Poznyak, I. Chairez, M. Franco, and T. Poznyak. "Continuous Neural Networks Applied to Identify a Class of Uncertain Parabolic Partial Differential Equations." International Journal of Modeling, Simulation, and Scientific Computing 01, no. 04 (December 2010): 485–508. http://dx.doi.org/10.1142/s1793962310000304.

Abstract:
Many systems in science and engineering may be described using a set of partial differential equations (PDEs). Those PDEs are obtained through a process of mathematical modeling based on complex physical, chemical, and other laws. Nevertheless, there are many sources of uncertainty around such a mathematical representation. It is well known that neural networks can approximate a large set of continuous functions defined on a compact set. If the continuous mathematical model is incomplete or only partially known, the methodology based on Differential Neural Networks (DNNs) provides an effective tool to solve problems in control theory such as identification, state estimation, trajectory tracking, etc. In this paper, a strategy based on DNNs for the nonparametric identification of a mathematical model described by parabolic partial differential equations is proposed. The identification solution allows finding an exact expression for the weights' dynamics. The weight adaptation laws ensure the "practical stability" of the DNN trajectories. To verify the qualitative behavior of the suggested methodology, a nonparametric modeling problem for two distributed-parameter plants is analyzed: the plug-flow reactor model and the anaerobic digestion system. The results obtained in the numerical simulations confirm the identification capability of the suggested methodology.
15

Yuan, Hao, Yongjun Chen, Xia Hu, and Shuiwang Ji. "Interpreting Deep Models for Text Analysis via Optimization and Regularization Methods." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5717–24. http://dx.doi.org/10.1609/aaai.v33i01.33015717.

Abstract:
Interpreting deep neural networks is of great importance to understand and verify deep models for natural language processing (NLP) tasks. However, most existing approaches only focus on improving the performance of models but ignore their interpretability. In this work, we propose an approach to investigate the meaning of hidden neurons of the convolutional neural network (CNN) models. We first employ saliency map and optimization techniques to approximate the detected information of hidden neurons from input sentences. Then we develop regularization terms and explore words in vocabulary to interpret such detected information. Experimental results demonstrate that our approach can identify meaningful and reasonable interpretations for hidden spatial locations. Additionally, we show that our approach can describe the decision procedure of deep NLP models.
16

Bartlett, Peter L., David P. Helmbold, and Philip M. Long. "Gradient Descent with Identity Initialization Efficiently Learns Positive-Definite Linear Transformations by Deep Residual Networks." Neural Computation 31, no. 3 (March 2019): 477–502. http://dx.doi.org/10.1162/neco_a_01164.

Abstract:
We analyze algorithms for approximating a function [Formula: see text] mapping [Formula: see text] to [Formula: see text] using deep linear neural networks, that is, that learn a function [Formula: see text] parameterized by matrices [Formula: see text] and defined by [Formula: see text]. We focus on algorithms that learn through gradient descent on the population quadratic loss in the case that the distribution over the inputs is isotropic. We provide polynomial bounds on the number of iterations for gradient descent to approximate the least-squares matrix [Formula: see text], in the case where the initial hypothesis [Formula: see text] has excess loss bounded by a small enough constant. We also show that gradient descent fails to converge for [Formula: see text] whose distance from the identity is a larger constant, and we show that some forms of regularization toward the identity in each layer do not help. If [Formula: see text] is symmetric positive definite, we show that an algorithm that initializes [Formula: see text] learns an [Formula: see text]-approximation of [Formula: see text] using a number of updates polynomial in [Formula: see text], the condition number of [Formula: see text], and [Formula: see text]. In contrast, we show that if the least-squares matrix [Formula: see text] is symmetric and has a negative eigenvalue, then all members of a class of algorithms that perform gradient descent with identity initialization, and optionally regularize toward the identity in each layer, fail to converge. We analyze an algorithm for the case that [Formula: see text] satisfies [Formula: see text] for all [Formula: see text] but may not be symmetric. This algorithm uses two regularizers: one that maintains the invariant [Formula: see text] for all [Formula: see text] and the other that “balances” [Formula: see text] so that they have the same singular values.
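The positive-definite regime described in this abstract can be reproduced in miniature. The sketch below uses toy dimensions, depth, and step size of my own choosing: gradient descent with identity initialization on a deep linear network whose target is symmetric positive definite, where convergence is expected.

```python
import numpy as np

# Learn a symmetric positive-definite target Phi with a deep linear network
# h(x) = A_L ... A_1 x, every factor initialized to the identity. With
# isotropic inputs, gradient descent on the population quadratic loss
# reduces to minimizing ||A_L ... A_1 - Phi||_F^2 / 2.

rng = np.random.default_rng(1)
d, L, lr = 4, 3, 0.05

Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
Phi = Q @ np.diag([0.5, 1.0, 1.5, 2.0]) @ Q.T      # symmetric positive definite

def product(factors):
    out = np.eye(d)
    for F in factors:
        out = out @ F
    return out

A = [np.eye(d) for _ in range(L)]                  # identity initialization

for _ in range(2000):
    err = product(A[::-1]) - Phi                   # A_L ... A_1 - Phi
    # Gradient w.r.t. A_i: (left factors)^T err (right factors)^T.
    grads = [product(A[i + 1:][::-1]).T @ err @ product(A[:i][::-1]).T
             for i in range(L)]
    for i in range(L):
        A[i] -= lr * grads[i]

assert np.linalg.norm(product(A[::-1]) - Phi) < 1e-3
```

The paper's negative results say the analogous experiment with a target that has a negative eigenvalue would fail to converge under identity initialization.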
17

Wu, Ga, Buser Say, and Scott Sanner. "Scalable Planning with Deep Neural Network Learned Transition Models." Journal of Artificial Intelligence Research 68 (July 20, 2020): 571–606. http://dx.doi.org/10.1613/jair.1.11829.

Abstract:
In many complex planning problems with factored, continuous state and action spaces such as Reservoir Control, Heating Ventilation and Air Conditioning (HVAC), and Navigation domains, it is difficult to obtain a model of the complex nonlinear dynamics that govern state evolution. However, the ubiquity of modern sensors allows us to collect large quantities of data from each of these complex systems and build accurate, nonlinear deep neural network models of their state transitions. But there remains one major problem for the task of control – how can we plan with deep network learned transition models without resorting to Monte Carlo Tree Search and other black-box transition model techniques that ignore model structure and do not easily extend to continuous domains? In this paper, we introduce two types of planning methods that can leverage deep neural network learned transition models: Hybrid Deep MILP Planner (HD-MILP-Plan) and Tensorflow Planner (TF-Plan). In HD-MILP-Plan, we make the critical observation that the Rectified Linear Unit (ReLU) transfer function for deep networks not only allows faster convergence of model learning, but also permits a direct compilation of the deep network transition model to a Mixed-Integer Linear Program (MILP) encoding. Further, we identify deep network specific optimizations for HD-MILP-Plan that improve performance over a base encoding and show that we can plan optimally with respect to the learned deep networks. In TF-Plan, we take advantage of the efficiency of auto-differentiation tools and GPU-based computation where we encode a subclass of purely continuous planning problems as Recurrent Neural Networks and directly optimize the actions through backpropagation. We compare both planners and show that TF-Plan is able to approximate the optimal plans found by HD-MILP-Plan in less computation time. 
Hence this article offers two novel planners for continuous state and action domains with learned deep neural net transition models: one optimal method (HD-MILP-Plan) and a scalable alternative for large-scale problems (TF-Plan).
18

Li, Pengfei, Yan Li, and Xiucheng Guo. "A Red-Light Running Prevention System Based on Artificial Neural Network and Vehicle Trajectory Data." Computational Intelligence and Neuroscience 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/892132.

Abstract:
The high frequency of red-light running and the complex driving behaviors at yellow onset at intersections cannot be explained solely by the dilemma zone and vehicle kinematics. In this paper, the authors present a red-light running prevention system based on artificial neural networks (ANNs), which approximate the complex driver behaviors during yellow and all-red clearance and serve as the basis of the system. The artificial neural network and vehicle trajectory data are applied to identify potential red-light runners. The ANN training time was acceptable, and its prediction accuracy was over 80%. Lastly, a prototype red-light running prevention system with the trained ANN model is described. This new system can be directly retrofitted into existing traffic signal systems.
19

Jassem, Wiktor, and Waldemar Grygiel. "Off-line classification of Polish vowel spectra using artificial neural networks." Journal of the International Phonetic Association 34, no. 1 (January 2004): 37–52. http://dx.doi.org/10.1017/s0025100304001537.

Abstract:
The mid-frequencies and bandwidths of formants 1–5 were measured at targets, at plus 0.01 s and at minus 0.01 s off the targets of vowels in a 100-word list read by five male and five female speakers, for a total of 3390 10-variable spectrum specifications. Each of the six Polish vowel phonemes was represented approximately the same number of times. The 3390 × 10 original-data matrix was processed by probabilistic neural networks to produce a classification of the spectra with respect to (a) vowel phoneme, (b) identity of the speaker, and (c) speaker gender. For (a) and (b), networks with added input information from another independent variable were also used, as well as matrices of the numerical data appropriately normalized. Mean scores for classification with respect to phonemes in a multi-speaker design in the testing sets were around 95%, and mean speaker-dependent scores for the phonemes varied between 86% and 100%, with two speakers scoring 100% correct. The individual voices were identified between 95% and 96% of the time, and classifications of the spectra for speaker gender were practically 100% correct.
20

Song, Chang-Yong. "A Study on Learning Parameters in Application of Radial Basis Function Neural Network Model to Rotor Blade Design Approximation." Applied Sciences 11, no. 13 (July 1, 2021): 6133. http://dx.doi.org/10.3390/app11136133.

Abstract:
Meta-models are generally applied to approximate multi-objective optimization, reliability analysis, reliability-based design optimization, etc., not only to improve the efficiency of numerical calculation and convergence, but also to facilitate the analysis of design sensitivity. The radial basis function neural network (RBFNN) is a meta-model employing a hidden layer of radial units and an output layer of linear units, characterized by relatively fast training, good generalization, and a compact network structure. It is important to smooth out scattered noisy data when approximating the design space, in order to prevent local minima in gradient-based optimization or in reliability analysis using the RBFNN. Since the noisy data must be smoothed out for the RBFNN to be applied as the meta-model in any actual structural design problem, the smoothing parameter must be properly determined. This study aims to identify the effect of various learning parameters, including the spline smoothing parameter, on the RBFNN performance regarding the design approximation. An actual rotor blade design problem was considered to investigate the characteristics of RBFNN approximation with respect to the range of the spline smoothing parameter, the number of training data, and the number of hidden layers. In the RBFNN approximation of the rotor blade design, design sensitivity characteristics such as main effects were also evaluated, including a performance analysis according to the variation of the learning parameters. From the evaluation of the learning parameters in the rotor blade design, it was found that the number of training data had a larger influence on the accuracy of the RBFNN meta-model than the spline smoothing parameter, while the number of hidden layers had little effect on the performance of the RBFNN meta-model.
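The role of the smoothing parameter can be sketched with a minimal 1-D RBFNN fitted by regularized least squares; the setup below is my own toy example (target function, centers, and parameter values are all assumptions), not the paper's rotor-blade model:

```python
import numpy as np

# A hidden layer of Gaussian radial units and a linear output layer,
# trained by regularized least squares. A larger `smoothing` value damps
# the fit to scattered noisy data so the meta-model stays smooth.

rng = np.random.default_rng(2)

def rbf_features(x, centers, width):
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

def fit_rbfnn(x, y, centers, width, smoothing):
    H = rbf_features(x, centers, width)
    # Regularized linear output layer (normal equations with a ridge term).
    return np.linalg.solve(H.T @ H + smoothing * np.eye(len(centers)), H.T @ y)

f = lambda x: np.sin(2 * np.pi * x)                 # true response surface
x_train = rng.uniform(0, 1, 80)
y_train = f(x_train) + 0.2 * rng.normal(size=80)    # scattered noisy data

centers = np.linspace(0, 1, 15)
width = 0.1
w = fit_rbfnn(x_train, y_train, centers, width, smoothing=0.1)

x_test = np.linspace(0.05, 0.95, 200)
pred = rbf_features(x_test, centers, width) @ w
rmse = np.sqrt(np.mean((pred - f(x_test)) ** 2))
assert rmse < 0.2                                   # below the noise level
```

Sweeping `smoothing`, the number of training points, and the number of centers in this sketch mirrors the learning-parameter study the paper performs on the rotor blade problem.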
21

Mengall, G. "Fuzzy modelling for aircraft dynamics identification." Aeronautical Journal 105, no. 1051 (September 2001): 551–55. http://dx.doi.org/10.1017/s0001924000018029.

Abstract:
A new methodology is described to identify aircraft dynamics and extract the corresponding aerodynamic coefficients. The proposed approach makes use of fuzzy modelling for the identification process, where input/output data are first classified by means of fuzzy clustering and linguistic rules are then extracted from the fuzzy clusters. The fuzzy rule-based models are in the form of affine Takagi-Sugeno models, which are able to approximate a large class of nonlinear systems. A comparative study is performed against existing techniques based on neural networks, showing advantages of the proposed methodology both in the physical insight it gives into the identified model and in the simplicity of obtaining accurate results with fewer parameters to tune.
22

Choi, Hwiyong, Haesang Yang, Seungjun Lee, and Woojae Seong. "Classification of Inter-Floor Noise Type/Position Via Convolutional Neural Network-Based Supervised Learning." Applied Sciences 9, no. 18 (September 7, 2019): 3735. http://dx.doi.org/10.3390/app9183735.

Abstract:
Inter-floor noise, i.e., noise transmitted from one floor to another floor through walls or ceilings in an apartment building or an office of a multi-layered structure, causes serious social problems in South Korea. Notably, inaccurate identification of the noise type and position by human hearing intensifies the conflicts between residents of apartment buildings. In this study, we propose a robust approach using deep convolutional neural networks (CNNs) to learn and identify the type and position of inter-floor noise. Using a single mobile device, we collected nearly 2000 inter-floor noise events that contain 5 types of inter-floor noises generated at 9 different positions on three floors in a Seoul National University campus building. Based on pre-trained CNN models designed and evaluated separately for type and position classification, we achieved type and position classification accuracy of 99.5% and 95.3%, respectively in validation datasets. In addition, the robustness of noise type classification with the model was checked against a new test dataset. This new dataset was generated in the building and contains 2 types of inter-floor noises at 10 new positions. The approximate positions of inter-floor noises in the new dataset with respect to the learned positions are presented.
23

Silveira, Ana Claudia da, Luis Paulo Baldissera Schorr, Elisabete Vuaden, Jéssica Talheimer Aguiar, Tarik Cuchi, and Giselli Castilho Moraes. "MODELAGEM DA ALTURA E DO INCREMENTO EM ÁREA TRANSVERSAL DE LOURO PARDO." Nativa 6, no. 2 (March 26, 2018): 191. http://dx.doi.org/10.31413/nativa.v6i2.4790.

Abstract:
The study aimed to verify the best technique for modeling height and annual periodic increment in transversal area for Cordia trichotoma (Vell.) Arrab. ex Steud. For this purpose, the diameters at breast height and total heights of 35 individuals located in a permanent preservation and pasture area of approximately 4 ha in the municipality of Salto do Lontra, Paraná State, Brazil, were identified and measured. Subsequently, trunk analysis was performed by the non-destructive method, verifying the increment of the last 5 years. For the estimation of height and of the annual periodic increment in transversal area, the Generalized Linear Models (GLM) technique was used with Gamma, Normal, and Poisson distributions under identity and logarithmic link functions, along with Artificial Neural Networks (ANN) of the Multilayer Perceptron type. For comparison and choice of the best technique, the correlation between observed and estimated values, the root mean square error, and graphical analysis of the residuals were used. The results showed that, among the GLM models, the Gamma distribution with the logarithmic link was indicated for modeling height, whereas the Gamma distribution with the identity link was recommended for modeling the periodic increment in transversal area. When the two techniques were compared, better results were obtained with the ANNs, which estimated the studied variables with greater accuracy. Keywords: Cordia trichotoma, generalized linear models, artificial neural networks.
24

Cintra, Renato J., Stefan Duffner, Christophe Garcia, and Andre Leite. "Low-Complexity Approximate Convolutional Neural Networks." IEEE Transactions on Neural Networks and Learning Systems 29, no. 12 (December 2018): 5981–92. http://dx.doi.org/10.1109/tnnls.2018.2815435.

25

Llanas, B., and F. J. Sainz. "Constructive approximate interpolation by neural networks." Journal of Computational and Applied Mathematics 188, no. 2 (April 2006): 283–308. http://dx.doi.org/10.1016/j.cam.2005.04.019.

26

Takagi, Hideyuki, Toshiyuki Kouda, and Yoshihiro Kojima. "Neural-networks designed on Approximate Reasoning Architecture." Journal of Japan Society for Fuzzy Theory and Systems 3, no. 1 (1991): 133–41. http://dx.doi.org/10.3156/jfuzzy.3.1_133.

27

Shen, Zuliang, Ho Chung Lui, and Liya Ding. "Approximate case-based reasoning on neural networks." International Journal of Approximate Reasoning 10, no. 1 (January 1994): 75–98. http://dx.doi.org/10.1016/0888-613x(94)90010-8.

28

Robinson, Haakon. "Approximate Piecewise Affine Decomposition of Neural Networks." IFAC-PapersOnLine 54, no. 7 (2021): 541–46. http://dx.doi.org/10.1016/j.ifacol.2021.08.416.

29

Shyamalagowri, M., and R. Rajeswari. "Neural Network Predictive Controller Based Nonlinearity Identification Case Study: Nonlinear Process Reactor - CSTR." Advanced Materials Research 984-985 (July 2014): 1326–34. http://dx.doi.org/10.4028/www.scientific.net/amr.984-985.1326.

Abstract:
In the last decades, a substantial amount of research has been carried out on the identification of nonlinear processes. Dynamical systems can be better represented by nonlinear models, which illustrate the global behavior of the nonlinear process reactor over the entire range. The CSTR is a highly nonlinear chemical reactor. A compact and resourceful model that approximates both the linear and nonlinear components of the process is in high demand. Process modeling is an essential constituent in the growth of sophisticated model-based process control systems. Driven by contemporary economic needs, developments in process design point out that deliberate operation requires better models. The neural network predictive controller is very efficient at identifying complex nonlinear systems with no complete model information. The closed-loop method is preferred because it is sensitive to disturbances and there is no need to identify the transfer function model of an unstable system. In this paper, the identification of nonlinearities for a nonlinear process reactor (CSTR) is approached using a neural network predictive controller. Keywords: Continuous Stirred Tank Reactor, Multi Input Multi Output, Neural Networks, Chebyshev Neural Networks, Predictive Controller.
30

Xu, Xiangrui, Yaqin Li, and Cao Yuan. "“Identity Bracelets” for Deep Neural Networks." IEEE Access 8 (2020): 102065–74. http://dx.doi.org/10.1109/access.2020.2998784.

31

Masegosa, Andrés R., Rafael Cabañas, Helge Langseth, Thomas D. Nielsen, and Antonio Salmerón. "Probabilistic Models with Deep Neural Networks." Entropy 23, no. 1 (January 18, 2021): 117. http://dx.doi.org/10.3390/e23010117.

Abstract:
Recent advances in statistical inference have significantly expanded the toolbox of probabilistic modeling. Historically, probabilistic modeling has been constrained to very restricted model classes, where exact or approximate probabilistic inference is feasible. However, developments in variational inference, a general form of approximate probabilistic inference that originated in statistical physics, have enabled probabilistic modeling to overcome these limitations: (i) Approximate probabilistic inference is now possible over a broad class of probabilistic models containing a large number of parameters, and (ii) scalable inference methods based on stochastic gradient descent and distributed computing engines allow probabilistic modeling to be applied to massive data sets. One important practical consequence of these advances is the possibility to include deep neural networks within probabilistic models, thereby capturing complex non-linear stochastic relationships between the random variables. These advances, in conjunction with the release of novel probabilistic modeling toolboxes, have greatly expanded the scope of applications of probabilistic models, and allowed the models to take advantage of the recent strides made by the deep learning community. In this paper, we provide an overview of the main concepts, methods, and tools needed to use deep neural networks within a probabilistic modeling framework.
32

Stinchcombe, Maxwell B. "Precision and Approximate Flatness in Artificial Neural Networks." Neural Computation 7, no. 5 (September 1995): 1021–39. http://dx.doi.org/10.1162/neco.1995.7.5.1021.

Abstract:
Several of the major classes of artificial neural networks' output functions are linear combinations of elements of approximately flat sets. This gives a tool for understanding the precision problem as well as providing a rationale for mixing types of networks. Approximate flatness also helps explain the power of artificial neural network techniques relative to series regressions—series regressions take linear combinations of flat sets, while neural networks take linear combinations of the much larger class of approximately flat sets.
33

Narendra, K. S., and S. Mukhopadhyay. "Adaptive control using neural networks and approximate models." IEEE Transactions on Neural Networks 8, no. 3 (May 1997): 475–85. http://dx.doi.org/10.1109/72.572089.

34

Lotrič, Uroš, and Patricio Bulić. "Applicability of approximate multipliers in hardware neural networks." Neurocomputing 96 (November 2012): 57–65. http://dx.doi.org/10.1016/j.neucom.2011.09.039.

35

Gerber, B. S., T. G. Tape, R. S. Wigton, and P. S. Heckerling. "Entering the Black Box of Neural Networks." Methods of Information in Medicine 42, no. 03 (2003): 287–96. http://dx.doi.org/10.1055/s-0038-1634363.

Abstract:
Summary Objectives: Artificial neural networks have proved to be accurate predictive instruments in several medical domains, but have been criticized for failing to specify the information upon which their predictions are based. We used methods of relevance analysis and sensitivity analysis to determine the most important predictor variables for a validated neural network for community-acquired pneumonia. Methods: We studied a feed-forward, back-propagation neural network trained to predict pneumonia among patients presenting to an emergency department with fever or respiratory complaints. We used the methods of full retraining, weight elimination, constant substitution, linear substitution, and data permutation to identify a consensus set of important demographic, symptom, sign, and comorbidity predictors that influenced network output for pneumonia. We compared predictors identified by these methods to those identified by a weight propagation analysis based on the matrices of the network, and by logistic regression. Results: Predictors identified by these methods were clinically plausible, and were concordant with those identified by weight analysis, and by logistic regression using the same data. The methods were highly correlated in network error, and led to variable sets with errors below bootstrap 95% confidence intervals for networks with similar numbers of inputs. Scores for variable relevance tended to be higher with methods that precluded network retraining (weight elimination) or that permuted variable values (data permutation), compared with methods that permitted retraining (full retraining) or that approximated its effects (constant and linear substitution). Conclusion: Methods of relevance analysis and sensitivity analysis are useful for identifying important predictor variables used by artificial neural networks.
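The data-permutation method described in the abstract can be sketched directly: permute one input variable at a time and measure how much the model's error grows. The model and data below are a hypothetical stand-in (a small logistic unit trained by gradient descent), not the pneumonia network from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 3))
# Only features 0 and 1 drive the outcome; feature 2 is pure noise
logits = 2.0 * X[:, 0] - 1.5 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

# Tiny logistic model trained by full-batch gradient descent
w = np.zeros(3)
b = 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * (p - y).mean()

def log_loss(Xm):
    p = 1 / (1 + np.exp(-(Xm @ w + b)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

base = log_loss(X)
relevance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's association
    relevance.append(log_loss(Xp) - base)
# the informative features show a large loss increase; the noise feature does not
```

The same loop works for any trained predictor, which is why permutation relevance needs no retraining.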
36

Mohaghegh, Shahab, Khalid Mohamad, Popa Andrei, Ameri Sam, and Dan Wood. "Performance Drivers in Restimulation of Gas-Storage Wells." SPE Reservoir Evaluation & Engineering 4, no. 06 (December 1, 2001): 536–42. http://dx.doi.org/10.2118/74715-pa.

Abstract:
Summary: To maintain or enhance deliverability of gas-storage wells in the Clinton sand in northeast Ohio, an annual restimulation program was established in the late 1960s. The program calls for as many as 20 hydraulic fractures and refractures per year. Several wells have been refractured three to four times, while there are still wells that have only been fractured once in the past 30 years. As the program continues, many wells will be stimulated a second, third, or fourth time. This paper summarizes an attempt to carefully study the response of the Clinton sand to hydraulic fractures and to identify the performance drivers in each series of fracture jobs. Are the performance drivers the same for the later fractures (second, third, and fourth fracture jobs) as they were for the first ones, or do they change? This paper attempts to answer such questions. Identification of major performance drivers becomes important when new jobs are to be designed. They not only play an important role in enhancing the response of the wells to new stimulation jobs, but they also may prove to be an important economic factor in the design of new stimulation procedures. If, for instance, it is concluded that an increase in proppant volume does not influence the stimulation outcome after the second refracture, then fewer resources can be used for proppant volume and can be directed toward parameters that are more influential. This study employs a combined neural-network and fuzzy-logic tool to identify the performance drivers. Introduction: In many industrial and manufacturing processes, it is important to know the role and influence of each component or parameter in the process outcome. Oilfield operations are no exception. Such information can contribute significantly to process efficiency and prevent wasteful usage of the resources.
If the engineer knows in advance which component is the main driver of the process performance, efforts can be concentrated on manipulating that component to achieve the desired outcome. On the other hand, a lack of such information can result in wasting resources by using greater amounts of a component that does not make a significant difference in the process outcome and, therefore, increasing the cost and reducing the efficiency of the process. The conventional method of approaching this problem is to build an accurate mathematical model of the process and perform parametric sensitivity analysis on the model. The authors believe that developing a mathematical model is the best approach and that, whenever possible, all efforts should be concentrated on developing such models and performing detailed model analysis. The reality of many complex processes, including some in the petroleum and natural gas industry, is that there are no known mathematical models that can accurately describe these processes. The problem being discussed in this paper is one such example. Restimulation of gas-storage wells is a convoluted and complex problem that cannot be modeled mathematically for many reasons. The most important reasons for the inability to construct a mathematical model for restimulation of gas-storage wells are the lack of detailed reservoir data and the complexity of modeling a stimulated reservoir (and its response to restimulation). As has been shown in the past [1-3], the next best thing to mathematical or numerical modeling of a complex problem is the use of virtual intelligence techniques (neural networks, fuzzy logic, and genetic algorithms) to approximate the process behavior. In analyzing petroleum and natural gas engineering problems, when a mathematical model of a complex process cannot be constructed, other means have been used to identify the most influential parameters.
These approaches can be as simplistic as statistical methods like linear regression analysis or as complex as fuzzy curves [4] and neural networks [5]. In this paper, we apply all these methods to the problem of restimulation of gas-storage wells and discuss their potential shortcomings. We also discuss two important issues and provide some ideas that might shed some light on the problem at hand. The first issue concerns the use of fuzzy curves in identifying the most important parameters and provides an extension that might help to further solidify their contribution to problems such as the one discussed here. The second issue is related to the use of neural networks to identify important parameters in a complex problem. In this paper, we demonstrate the usefulness and value of backward elimination neural-network analysis in providing important information about the process being analyzed. Statement of the Problem: The restimulation of gas-storage wells discussed here takes place in the Clinton sand in northeast Ohio. A hydraulic fracturing and refracturing program has been in place in this field for approximately three decades. There are storage wells in this field that have not been hydraulically fractured, as well as storage wells that have been refractured more than four times in the past 25 years. Every year, several wells are selected for stimulation to maintain deliverability of the field. Identification of the most influential parameters in this process is an important part of designing new fractures and refractures to maximize return on investment. Data Set: The data set used in this study was constructed with the well files.
The following parameters were extracted from the well files for each hydraulic fracture treatment: the year the well was drilled, the total number of fractures performed on the well, the number of years since the last fracture, fracture fluid, amount of fluid, amount of sand used as proppant, sand concentration, acid volume, nitrogen volume, average pumping rate, and the name of the service company performing the job. The goal of the study is to identify the most important and most influential of the above-mentioned parameters when they are correlated with post-fracture deliverability. The matchup between hydraulic fracture design parameters and the available post-fracture deliverability data produces a data set with approximately 560 records. Post-fracture deliverability in this study refers to the actual peak deliverability after the well is cleaned up and before the new decline in production begins. Statistical Analysis: Using straightforward regression analysis that can be performed with any widely available spreadsheet software, the data set shows that some trends may be identified quickly on these fracture jobs. Table 1 shows the ranking of the correlation between each parameter and the post-fracture deliverability. The ranking is based on the calculated R2 values.
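The R²-based ranking this abstract describes can be reproduced with ordinary least squares: regress post-fracture deliverability on each design parameter alone and sort by R². The parameter names and numbers below are invented for illustration; they are not the Clinton sand data.

```python
import numpy as np

def r2_ranking(params, target, names):
    """Rank each design parameter by the R^2 of a one-variable linear fit."""
    scores = {}
    for name, x in zip(names, params.T):
        A = np.column_stack([np.ones_like(x), x])   # intercept + parameter
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        ss_res = ((target - A @ coef) ** 2).sum()
        ss_tot = ((target - target.mean()) ** 2).sum()
        scores[name] = 1 - ss_res / ss_tot
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical fracture-design data: sand volume matters, pumping rate barely does
rng = np.random.default_rng(2)
n = 560
sand = rng.uniform(0, 1, n)
rate = rng.uniform(0, 1, n)
deliv = 3.0 * sand + 0.2 * rate + rng.normal(0, 0.3, n)
ranking = r2_ranking(np.column_stack([sand, rate]), deliv, ["sand", "rate"])
# the strongly coupled parameter rises to the top of the ranking
```

As the abstract notes, such single-variable regressions only capture linear, one-at-a-time trends, which is why the paper moves on to fuzzy curves and neural networks.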
37

Gunhan, Atilla E., László P. Csernai, and Jørgen Randrup. "UNSUPERVISED COMPETITIVE LEARNING IN NEURAL NETWORKS." International Journal of Neural Systems 01, no. 02 (January 1989): 177–86. http://dx.doi.org/10.1142/s0129065789000086.

Abstract:
We study an idealized neural network that may approximate certain neurophysiological features of natural neural systems. The network contains a mutual lateral inhibition and is subjected to unsupervised learning by means of a Hebb-type learning principle. Its learning ability is analysed as a function of the strength of lateral inhibition and the training set.
38

Jonathan Lee* and Hsiao-Fan Wang**. "Selected Papers from IFSA'99." Journal of Advanced Computational Intelligence and Intelligent Informatics 5, no. 3 (May 20, 2001): 127. http://dx.doi.org/10.20965/jaciii.2001.p0127.

Abstract:
Over the past few years we have witnessed a crystallization of soft computing as a means towards the conception and design of intelligent systems. Soft computing is a synergetic integration of neural networks, fuzzy logic, and evolutionary computation, including genetic algorithms, chaotic systems, and belief networks. In this volume, we are featuring seven papers devoted to soft computing as a special issue. These papers were selected from papers submitted to the Eighth International Fuzzy Systems Association World Congress (IFSA'99), held in Taipei, Taiwan, in August 1999. Each paper received outstanding recommendations from its reviewers. G-H Tzeng et al. integrate fuzzy numbers, fuzzy regression, and a fuzzy DEA approach as a performance evaluation model for forecasting the productive efficiency of a set of production units when some data are fuzzy numbers. A case of the Taipei City Bus Company is adopted for illustration. Y. Shi et al. adopt a fuzzy programming approach to solve a MCMDM (multiple criteria and multiple decision makers) capital budget problem. A solution procedure is proposed to systematically identify a fuzzy optimal selection of possible projects. N. Nguyen et al. propose a new formalism (Chu spaces) to describe parallelism and information flow. Chu spaces provide uniform explanations for different choices of fuzzy methodology, such as choices of fuzzy logical operations, membership functions, or defuzzifications. M-C Su et al. propose a technique based on SOM-based fuzzy systems for voltage security margin estimation. This technique was tested on 1604 simulated data points randomly generated from operating conditions on the IEEE 30-bus system to indicate its high efficiency. By defining the concept of approximate dependency and a similarity measure, S-L Wang et al. present a method of using analogical reasoning to infer approximate answers for null queries on similarity-based fuzzy relational databases. K. Yeh et al.
use adaptive fuzzy sliding mode control for the structural control of bridges. Combining fuzzy control and sliding mode control can reduce the complexity of fuzzy rule bases and ensure stability and robustness. This model is demonstrated on three types of bridges: with LRB, with sliding isolators, and with no isolation device. Based on a novel fuzzy clustering algorithm, Y-H Kuo et al. propose an adaptive traffic prediction approach to generalize and unveil the hidden structure of traffic patterns with features of robustness, high accuracy, and high adaptability. Periodical, Poisson, and real video traffic patterns have been used to verify their approach and investigate its properties. We would like to express our sincere gratitude to everyone who has contributed to this special issue, including the authors, the co-reviewers, and the JACI Editors-in-Chief Toshio Fukuda and Kaoru Hirota.
39

Moran, Maira, Marcelo Faria, Gilson Giraldi, Luciana Bastos, Larissa Oliveira, and Aura Conci. "Classification of Approximal Caries in Bitewing Radiographs Using Convolutional Neural Networks." Sensors 21, no. 15 (July 31, 2021): 5192. http://dx.doi.org/10.3390/s21155192.

Abstract:
Dental caries is an extremely common problem in dentistry that affects a significant part of the population. Approximal caries are especially difficult to identify because their position makes clinical analysis difficult. Radiographic evaluation—more specifically, bitewing images—is mostly used in such cases. However, incorrect interpretations may interfere with the diagnostic process. To aid dentists in caries evaluation, computational methods and tools can be used. In this work, we propose a new method that combines image processing techniques and convolutional neural networks to identify approximal dental caries in bitewing radiographic images and classify them according to lesion severity. For this study, we acquired 112 bitewing radiographs. From these exams, we extracted individual tooth images, applied a data augmentation process, and used the resulting images to train CNN classification models. The tooth images were previously labeled by experts to denote the defined classes. We evaluated classification models based on the Inception and ResNet architectures using three different learning rates: 0.1, 0.01, and 0.001. The training process included 2000 iterations, and the best results were achieved by the Inception model with a 0.001 learning rate, whose accuracy on the test set was 73.3%. The results can be considered promising and suggest that the proposed method could be used to assist dentists in the evaluation of bitewing images and in the definition of lesion severity and appropriate treatments.
40

Sutcliffe, P. R. "Substorm onset identification using neural networks and Pi2 pulsations." Annales Geophysicae 15, no. 10 (October 31, 1997): 1257–64. http://dx.doi.org/10.1007/s00585-997-1257-x.

Abstract:
Abstract. The pattern recognition capabilities of artificial neural networks (ANNs) have for the first time been used to identify Pi2 pulsations in magnetometer data, which in turn serve as indicators of substorm onsets and intensifications. The pulsation spectrum was used as input to the ANN and the network was trained to give an output of +1 for Pi2 signatures and -1 for non-Pi2 signatures. In order to evaluate the degree of success of the neural-network procedure for identifying Pi2 pulsations, the ANN was used to scan a number of data sets and the results compared with visual identification of Pi2 signatures. The ANN performed extremely well with a success rate of approximately 90% for Pi2 identification and a timing accuracy generally within 1 min compared to visual identification. A number of potential applications of the neural-network Pi2 scanning procedure are discussed.
41

Yang, Xiaofeng, Tielong Shen, and Katsutoshi Tamura. "Approximate solution of Hamilton-Jacobi inequality by neural networks." Applied Mathematics and Computation 84, no. 1 (June 1997): 49–64. http://dx.doi.org/10.1016/s0096-3003(96)00053-7.

42

Kim, Min Soo, Alberto A. Del Barrio, Leonardo Tavares Oliveira, Roman Hermida, and Nader Bagherzadeh. "Efficient Mitchell’s Approximate Log Multipliers for Convolutional Neural Networks." IEEE Transactions on Computers 68, no. 5 (May 1, 2019): 660–75. http://dx.doi.org/10.1109/tc.2018.2880742.

43

van der Baan, Mirko, and Christian Jutten. "Neural networks in geophysical applications." GEOPHYSICS 65, no. 4 (July 2000): 1032–47. http://dx.doi.org/10.1190/1.1444797.

Abstract:
Neural networks are increasingly popular in geophysics. Because they are universal approximators, these tools can approximate any continuous function with an arbitrary precision. Hence, they may yield important contributions to finding solutions to a variety of geophysical applications. However, knowledge of many methods and techniques recently developed to increase the performance and to facilitate the use of neural networks does not seem to be widespread in the geophysical community. Therefore, the power of these tools has not yet been explored to their full extent. In this paper, techniques are described for faster training, better overall performance, i.e., generalization, and the automatic estimation of network size and architecture.
44

Tian, Hao, and Yue Qing Yu. "Neural Network Trajectory Tracking Control of Compliant Parallel Robot." Applied Mechanics and Materials 799-800 (October 2015): 1069–73. http://dx.doi.org/10.4028/www.scientific.net/amm.799-800.1069.

Abstract:
Trajectory tracking control of a compliant parallel robot is presented. According to the characteristics of the compliant joint, the system model is derived and the dynamic equation is obtained based on the Lagrange method. Radial Basis Function (RBF) neural network control is designed to globally approximate the model uncertainties. Further, an itemized approximate RBF control method is proposed for higher identification precision. The trajectory tracking abilities of the two control strategies are compared through simulation.
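The RBF approximation idea underlying such controllers can be sketched with Gaussian basis functions whose output weights are solved by least squares. The trajectory, number of centers, and widths below are illustrative assumptions, not the controller from the paper.

```python
import numpy as np

# Hypothetical joint trajectory to approximate over one period
t = np.linspace(0.0, 1.0, 100)
traj = np.sin(2 * np.pi * t) + 0.5 * t

# Gaussian RBF layer: 15 centers spread evenly over the interval
centers = np.linspace(0.0, 1.0, 15)
width = 0.1
Phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Output weights by linear least squares (the trainable layer)
w, *_ = np.linalg.lstsq(Phi, traj, rcond=None)
mse = float(((Phi @ w - traj) ** 2).mean())
# a smooth trajectory is fit almost exactly by a dense enough RBF layer
```

In the adaptive-control setting the output weights are updated online from the tracking error rather than solved in batch, but the approximation structure is the same.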
45

Vanchurin, Vitaly. "The World as a Neural Network." Entropy 22, no. 11 (October 26, 2020): 1210. http://dx.doi.org/10.3390/e22111210.

Abstract:
We discuss a possibility that the entire universe on its most fundamental level is a neural network. We identify two different types of dynamical degrees of freedom: “trainable” variables (e.g., bias vector or weight matrix) and “hidden” variables (e.g., state vector of neurons). We first consider stochastic evolution of the trainable variables to argue that near equilibrium their dynamics is well approximated by Madelung equations (with free energy representing the phase) and further away from the equilibrium by Hamilton–Jacobi equations (with free energy representing Hamilton’s principal function). This shows that the trainable variables can indeed exhibit classical and quantum behaviors with the state vector of neurons representing the hidden variables. We then study stochastic evolution of the hidden variables by considering D non-interacting subsystems with average state vectors, x¯1, …, x¯D and an overall average state vector x¯0. In the limit when the weight matrix is a permutation matrix, the dynamics of x¯μ can be described in terms of relativistic strings in an emergent D+1 dimensional Minkowski space-time. If the subsystems are minimally interacting, with interactions described by a metric tensor, then the emergent space-time becomes curved. We argue that the entropy production in such a system is a local function of the metric tensor which should be determined by the symmetries of the Onsager tensor. It turns out that a very simple and highly symmetric Onsager tensor leads to the entropy production described by the Einstein–Hilbert term. This shows that the learning dynamics of a neural network can indeed exhibit approximate behaviors that were described by both quantum mechanics and general relativity. We also discuss a possibility that the two descriptions are holographic duals of each other.
46

Bhaya ‎, Eman Samir, and Zahraa Mahmoud Fadel. "Nearly Exponential Neural Networks Approximation in Lp Spaces." JOURNAL OF UNIVERSITY OF BABYLON for Pure and Applied Sciences 26, no. 1 (December 20, 2017): 103–13. http://dx.doi.org/10.29196/jub.v26i1.359.

Abstract:
Neural network approximation is widely used in many applications; it is being applied to solve problems in computer science, engineering, physics, etc. The reason for the successful application of neural network approximation is the ability of neural networks to approximate arbitrary functions. In the last 30 years, many papers have been published showing that any continuous function defined on a compact subset of a Euclidean space of dimension greater than 1 can be approximated uniformly by a neural network with one hidden layer. Here we prove that any real function in L_p(C), defined on a compact and convex subset C of a Euclidean space, can be approximated by a sigmoidal neural network with one hidden layer, in what we call nearly exponential approximation.
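The single-hidden-layer approximation claim can be illustrated by fixing random sigmoidal hidden units and solving only the output layer by least squares (a random-feature sketch; the target function and layer sizes are arbitrary choices of mine, not from the paper).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)  # compact domain
y = np.sin(np.pi * x[:, 0])                     # continuous target function

# One hidden layer of 50 sigmoidal units with frozen random weights
H = 50
W1 = rng.normal(0.0, 4.0, (1, H))
b1 = rng.normal(0.0, 2.0, H)
hidden = sigmoid(x @ W1 + b1)

# Solve the output weights (plus bias) by linear least squares
A = np.column_stack([hidden, np.ones(len(x))])
w_out, *_ = np.linalg.lstsq(A, y, rcond=None)
mse = float(((A @ w_out - y) ** 2).mean())
# a modest number of sigmoids already fits sin on [-1, 1] closely
```

Training the hidden weights as well only improves on this, which is the intuition behind universal approximation results like the one above.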
47

Duggal, Ashmeet Kaur, and Meenu Dave Dr. "INTELLIGENT IDENTITY AND ACCESS MANAGEMENT USING NEURAL NETWORKS." Indian Journal of Computer Science and Engineering 12, no. 1 (February 20, 2021): 47–56. http://dx.doi.org/10.21817/indjcse/2021/v12i1/211201154.

48

Li, Xiangyu, Chunhua Yuan, and Bonan Shan. "System Identification of Neural Signal Transmission Based on Backpropagation Neural Network." Mathematical Problems in Engineering 2020 (August 12, 2020): 1–8. http://dx.doi.org/10.1155/2020/9652678.

Abstract:
In this paper, the identification method of the backpropagation (BP) neural network is adopted to approximate the mapping relation between the input and output of neurons based on the neural firing trajectory. First, the input and output data of the neural model are used for BP neural network learning, so that the identified BP neural network can capture the transfer characteristics of the model and precisely predict its firing trajectory. In addition, the method is applied to identify electrophysiological experimental data from real neurons, so that the output of the identified BP neural network can not only accurately fit the firing trajectories of neurons participating in the network training but also predict, with high accuracy, the firing trajectories and spike moments of neurons not involved in the training process.
49

Abboud, Ralph, Ismail Ceylan, and Thomas Lukasiewicz. "Learning to Reason: Leveraging Neural Networks for Approximate DNF Counting." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3097–104. http://dx.doi.org/10.1609/aaai.v34i04.5705.

Abstract:
Weighted model counting (WMC) has emerged as a prevalent approach for probabilistic inference. In its most general form, WMC is #P-hard. Weighted DNF counting (weighted #DNF) is a special case, where approximations with probabilistic guarantees are obtained in O(nm), where n denotes the number of variables, and m the number of clauses of the input DNF, but this is not scalable in practice. In this paper, we propose a neural model counting approach for weighted #DNF that combines approximate model counting with deep learning, and accurately approximates model counts in linear time when width is bounded. We conduct experiments to validate our method, and show that our model learns and generalizes very well to large-scale #DNF instances.
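The classical randomized baseline behind the O(nm) guarantee mentioned above is the Karp-Luby estimator for (unweighted) DNF counting, which can be sketched as follows; the clause encoding and example sizes are didactic choices of mine, not the neural method of the paper.

```python
import random

def approx_dnf_count(clauses, n_vars, samples=20000, seed=0):
    """Karp-Luby estimator for the number of satisfying assignments
    of a DNF. Each clause is a list of signed ints: +v means variable
    v is true, -v means variable v is false."""
    rng = random.Random(seed)
    # assignments satisfying clause i in isolation: 2^(n - |clause i|)
    weights = [2 ** (n_vars - len(c)) for c in clauses]
    total = sum(weights)

    def satisfies(assign, clause):
        return all(assign[abs(l)] == (l > 0) for l in clause)

    hits = 0
    for _ in range(samples):
        # sample a clause proportionally to its weight, then a uniform
        # assignment among those satisfying it
        i = rng.choices(range(len(clauses)), weights=weights)[0]
        assign = {abs(l): l > 0 for l in clauses[i]}
        for v in range(1, n_vars + 1):
            assign.setdefault(v, rng.random() < 0.5)
        # count the sample only if i is the FIRST clause it satisfies;
        # this corrects for assignments covered by several clauses
        if next(j for j, c in enumerate(clauses) if satisfies(assign, c)) == i:
            hits += 1
    return total * hits / samples

est = approx_dnf_count([[1], [2]], 2)           # x1 or x2: 3 satisfying assignments
est2 = approx_dnf_count([[1, 2], [-1, 3]], 3)   # disjoint clauses: exactly 4
```

The neural approach in the paper learns to produce such counts directly instead of sampling, trading the per-instance sampling loop for a single forward pass.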
50

Cao, Feilong, Shaobo Lin, and Zongben Xu. "Constructive approximate interpolation by neural networks in the metric space." Mathematical and Computer Modelling 52, no. 9-10 (November 2010): 1674–81. http://dx.doi.org/10.1016/j.mcm.2010.06.035.
