
Journal articles on the topic 'Neural ODEs'


Consult the top 50 journal articles for your research on the topic 'Neural ODEs.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Filici, Cristian. "On a Neural Approximator to ODEs." IEEE Transactions on Neural Networks 19, no. 3 (March 2008): 539–43. http://dx.doi.org/10.1109/tnn.2007.915109.

2

Zhou, Fan, and Liang Li. "Forecasting Reservoir Inflow via Recurrent Neural ODEs." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 17 (May 18, 2021): 15025–32. http://dx.doi.org/10.1609/aaai.v35i17.17763.

Abstract:
Forecasting reservoir inflow is critical for making many policies, ranging from flood control and agriculture irrigation to water ecology management, hydropower generation, and landslide prevention. Prior studies mainly exploit autoregressive models -- e.g., recurrent neural networks (RNN) and their many variants -- to model the temporal pattern of the inflow time series. However, existing approaches rely on regular and accurate inflow observations, and either fail to predict multi-scale inflow (e.g., an hour, a day, or a month ahead) or ignore the uncertainty of observations due to confounding factors such as snowmelt and precipitation. To address these limitations, we propose a novel inflow forecasting model that incorporates the uncertainty of the observations into the RNN model and captures the continuous-time dynamics of the latent states with neural ordinary differential equations (ODEs). Our method, called FlowODE, explicitly encodes the stochasticity of hidden conditions in addition to the temporal dependencies among inflow observations. Moreover, FlowODE explores a continuum of layers instead of discrete RNN layers to model the hidden states' dynamics, allowing us to infer the inflow at any time horizon flexibly. We conduct extensive experiments on real-world datasets collected from two large-scale hydropower dams. The results show that our method consistently outperforms previous inflow forecasting models while providing adaptable predictions and a flexible balance between prediction accuracy and computational cost.
3

Cui, Wenjun, Honglei Zhang, Haoyu Chu, Pipi Hu, and Yidong Li. "On robustness of neural ODEs image classifiers." Information Sciences 632 (June 2023): 576–93. http://dx.doi.org/10.1016/j.ins.2023.03.049.

4

Fronk, Colby, and Linda Petzold. "Interpretable polynomial neural ordinary differential equations." Chaos: An Interdisciplinary Journal of Nonlinear Science 33, no. 4 (April 2023): 043101. http://dx.doi.org/10.1063/5.0130803.

Abstract:
Neural networks have the ability to serve as universal function approximators, but they are not interpretable and do not generalize well outside of their training region. Both of these issues are problematic when trying to apply standard neural ordinary differential equations (ODEs) to dynamical systems. We introduce the polynomial neural ODE, which is a deep polynomial neural network inside of the neural ODE framework. We demonstrate the capability of polynomial neural ODEs to predict outside of the training region, as well as to perform direct symbolic regression without using additional tools such as SINDy.
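The core construction can be sketched in a few lines. The example below is a toy illustration, not the authors' code: the dynamics network is a linear map over monomial features of the state, integrated here with explicit Euler (the abstract does not prescribe a solver, so that choice is an assumption), and the coefficients double as symbolic terms.

```python
import numpy as np

def poly_features(x):
    # Monomial basis up to degree 2 for a scalar state: [1, x, x**2].
    return np.array([1.0, x, x**2])

def euler_step(x, coeffs, dt):
    # One explicit-Euler step of dx/dt = coeffs . poly_features(x).
    return x + dt * coeffs @ poly_features(x)

def integrate(x0, coeffs, dt, n_steps):
    xs = [x0]
    for _ in range(n_steps):
        xs.append(euler_step(xs[-1], coeffs, dt))
    return np.array(xs)

# Hand-set coefficients [0, -1, 0] encode dx/dt = -x; in a trained
# polynomial neural ODE, reading fitted coefficients off like this is
# the direct symbolic-regression step, with no extra tool needed.
coeffs = np.array([0.0, -1.0, 0.0])
traj = integrate(x0=1.0, coeffs=coeffs, dt=0.01, n_steps=100)
print(traj[-1])  # close to exp(-1) ≈ 0.368
```

Because the model is a polynomial rather than a generic network, extrapolation outside the training region inherits the polynomial's global form, which is the interpretability argument the abstract makes.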
5

Zhou, Fan, Liang Li, Kunpeng Zhang, and Goce Trajcevski. "Urban flow prediction with spatial–temporal neural ODEs." Transportation Research Part C: Emerging Technologies 124 (March 2021): 102912. http://dx.doi.org/10.1016/j.trc.2020.102912.

6

Esteve-Yagüe, Carlos, and Borjan Geshkovski. "Sparsity in long-time control of neural ODEs." Systems & Control Letters 172 (February 2023): 105452. http://dx.doi.org/10.1016/j.sysconle.2022.105452.

7

Kuptsov, P. V., A. V. Kuptsova, and N. V. Stankevich. "Artificial Neural Network as a Universal Model of Nonlinear Dynamical Systems." Nelineinaya Dinamika 17, no. 1 (2021): 5–21. http://dx.doi.org/10.20537/nd210102.

Abstract:
We suggest a universal map capable of recovering the behavior of a wide range of dynamical systems given by ODEs. The map is built as an artificial neural network whose weights encode a modeled system. We assume that the ODEs are known and prepare training datasets using the equations directly, without computing numerical time series. Parameter variations are taken into account in the course of training so that the network model captures bifurcation scenarios of the modeled system. The theoretical benefit of this approach is that the universal model admits applying common mathematical methods without needing to develop a unique theory for each particular dynamical equation. From the practical point of view, the developed method can be considered an alternative numerical method for solving dynamical ODEs, suitable for running on contemporary neural-network-specific hardware. We consider the Lorenz system, the Rössler system, and the Hindmarsh–Rose model. For these three examples the network model is created and its dynamics compared with ordinary numerical solutions. A high similarity is observed for visual images of attractors, power spectra, bifurcation diagrams, and Lyapunov exponents.
8

Grunbacher, Sophie, Ramin Hasani, Mathias Lechner, Jacek Cyranka, Scott A. Smolka, and Radu Grosu. "On the Verification of Neural ODEs with Stochastic Guarantees." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11525–35. http://dx.doi.org/10.1609/aaai.v35i13.17372.

Abstract:
We show that Neural ODEs, an emerging class of time-continuous neural networks, can be verified by solving a set of global-optimization problems. For this purpose, we introduce Stochastic Lagrangian Reachability (SLR), an abstraction-based technique for constructing a tight Reachtube (an over-approximation of the set of reachable states over a given time-horizon), and provide stochastic guarantees in the form of confidence intervals for the Reachtube bounds. SLR inherently avoids the infamous wrapping effect (accumulation of over-approximation errors) by performing local optimization steps to expand safe regions instead of repeatedly forward-propagating them as is done by deterministic reachability methods. To enable fast local optimizations, we introduce a novel forward-mode adjoint sensitivity method to compute gradients without the need for backpropagation. Finally, we establish asymptotic and non-asymptotic convergence rates for SLR.
9

Ruiz-Balet, Domènec, Elisa Affili, and Enrique Zuazua. "Interpolation and approximation via Momentum ResNets and Neural ODEs." Systems & Control Letters 162 (April 2022): 105182. http://dx.doi.org/10.1016/j.sysconle.2022.105182.

10

Cuchiero, Christa, Martin Larsson, and Josef Teichmann. "Deep Neural Networks, Generic Universal Interpolation, and Controlled ODEs." SIAM Journal on Mathematics of Data Science 2, no. 3 (January 2020): 901–19. http://dx.doi.org/10.1137/19m1284117.

11

Zakwan, M., L. Di Natale, B. Svetozarevic, P. Heer, C. N. Jones, and G. Ferrari Trecate. "Physically Consistent Neural ODEs for Learning Multi-Physics Systems*." IFAC-PapersOnLine 56, no. 2 (2023): 5855–60. http://dx.doi.org/10.1016/j.ifacol.2023.10.079.

12

Sandoval, Ilya Orson, Panagiotis Petsagkourakis, and Ehecatl Antonio del Rio-Chanona. "Neural ODEs as Feedback Policies for Nonlinear Optimal Control." IFAC-PapersOnLine 56, no. 2 (2023): 4816–21. http://dx.doi.org/10.1016/j.ifacol.2023.10.1248.

13

Sherry, Ferdia, Elena Celledoni, Matthias J. Ehrhardt, Davide Murari, Brynjulf Owren, and Carola-Bibiane Schönlieb. "Designing stable neural networks using convex analysis and ODEs." Physica D: Nonlinear Phenomena 463 (July 2024): 134159. http://dx.doi.org/10.1016/j.physd.2024.134159.

14

Höge, Marvin, Andreas Scheidegger, Marco Baity-Jesi, Carlo Albert, and Fabrizio Fenicia. "Improving hydrologic models for predictions and process understanding using neural ODEs." Hydrology and Earth System Sciences 26, no. 19 (October 11, 2022): 5085–102. http://dx.doi.org/10.5194/hess-26-5085-2022.

Abstract:
Deep learning methods have frequently outperformed conceptual hydrologic models in rainfall-runoff modelling. Attempts to investigate such deep learning models internally are being made, but the traceability of model states and processes and their interrelations to model input and output is not yet fully understood. Direct interpretability of mechanistic processes has always been considered an asset of conceptual models that helps to gain system understanding alongside predictability. We introduce hydrologic neural ordinary differential equation (ODE) models that perform as well as state-of-the-art deep learning methods in streamflow prediction while maintaining the ease of interpretability of conceptual hydrologic models. In neural ODEs, internal processes that are represented by differential equations are substituted by neural networks. Therefore, neural ODE models enable the fusion of deep learning with mechanistic modelling. We demonstrate the basin-specific predictive performance for 569 catchments of the continental United States. For exemplary basins, we analyse the dynamics of states and processes learned by the model-internal neural networks. Finally, we discuss the potential of neural ODE models in hydrology.
15

Li, Haoxuan. "The advance of neural ordinary differential equations." Applied and Computational Engineering 6, no. 1 (June 14, 2023): 1283–87. http://dx.doi.org/10.54254/2755-2721/6/20230709.

Abstract:
Differential methods are widely used to describe complex continuous processes. The main idea of neural ordinary differential equations is to treat a specific type of neural network as the discretization of a differential equation, so that a differential equation solver can be used in the solution process of the neural network. Compared with a conventional neural network, the solution process of a neural ordinary differential equation has the advantages of high storage efficiency and adaptive computation. This paper first gives a brief review of the residual network (ResNet) and its relationship to neural ordinary differential equations. Besides, this paper lists three advantages of neural ordinary differential equations over ResNet and introduces the class of deep neural network (DNN) models that can be seen as numerical discretizations of neural ordinary differential equations (Neural ODEs). Furthermore, this paper analyzes a defect of neural ordinary differential equations that does not appear in traditional deep neural networks. Finally, this paper demonstrates how to analyze ResNet with neural ordinary differential equations and shows the main applications of neural ordinary differential equations.
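The ResNet relationship reviewed in this abstract is easy to make concrete. A minimal sketch (the layer function f(x) = -x is a toy stand-in for a trained residual block, chosen only for illustration):

```python
import numpy as np

def f(x):
    # Toy "layer" function standing in for a trained residual block.
    return -x

def resnet_forward(x0, n_layers, T=1.0):
    # Each residual block computes x_{k+1} = x_k + h * f(x_k), which is
    # exactly one explicit-Euler step of dx/dt = f(x) with h = T / n_layers.
    h = T / n_layers
    x = x0
    for _ in range(n_layers):
        x = x + h * f(x)
    return x

# As depth grows, the discrete ResNet approaches the continuous
# neural-ODE solution x(1) = exp(-1) of dx/dt = -x, x(0) = 1.
for depth in (4, 16, 64, 256):
    print(depth, resnet_forward(1.0, depth))
```

The printed values approach exp(-1) as depth grows, which is the sense in which a neural ODE can be viewed as the infinite-depth limit of a ResNet.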
16

Zheng, Bohong. "Ordinary Differential Equation and Its Application." Highlights in Science, Engineering and Technology 72 (December 15, 2023): 645–51. http://dx.doi.org/10.54097/rnnev212.

Abstract:
Ordinary Differential Equations (ODEs) serve as a foundational mathematical framework with immense versatility and applicability across various domains. This essay delves into the multifaceted applications of ODEs in two distinct but interrelated fields: the domain of deep neural networks and the analysis of economic models. By establishing a bridge between theoretical foundations and real-world implementations, this study underscores the pivotal role of ODEs in shaping contemporary advancements. In the realm of deep neural networks, ODEs have revolutionized training methodologies. They enable dynamic architectures, where the network's behavior evolves continuously over time, improving its ability to capture complex patterns and adapt to changing data. This innovation has found use in computer vision, natural language processing, and reinforcement learning, leading to more robust and efficient AI systems. Simultaneously, ODEs play a crucial role in economic modeling. They facilitate the formulation of dynamic systems that depict the evolution of economic variables over time. These models help economists analyze complex economic phenomena, make predictions, and formulate informed policy decisions, ultimately contributing to the stability and growth of economies. This essay explores the mathematics underpinning these applications, emphasizing ODEs’ pivotal role in advancing both the realms of artificial intelligence and economic analysis, highlighting their significance in shaping modern technological and economic landscape.
17

Belozyorov, Vasiliy Ye, and Danylo V. Dantsev. "Modeling of Chaotic Processes by Means of Antisymmetric Neural ODEs." Journal of Optimization, Differential Equations and Their Applications 30, no. 1 (May 5, 2022): 1. http://dx.doi.org/10.15421/142201.

18

Belozyorov, Vasiliy Ye, and Yevhen V. Koshel. "On Systems of Neural ODEs with Generalized Power Activation Functions." Journal of Optimization, Differential Equations and Their Applications 32, no. 2 (August 30, 2024): 56. https://doi.org/10.15421/142409.

19

Gerstberger, R., and P. Rentrop. "Feedforward neural nets as discretization schemes for ODEs and DAEs." Journal of Computational and Applied Mathematics 82, no. 1-2 (September 1997): 117–28. http://dx.doi.org/10.1016/s0377-0427(97)00085-x.

20

Gonzalez, Martin, Thibault Defourneau, Hatem Hajri, and Mihaly Petreczky. "Realization Theory of Recurrent Neural ODEs using Polynomial System Embeddings." Systems & Control Letters 173 (March 2023): 105468. http://dx.doi.org/10.1016/j.sysconle.2023.105468.

21

Luo, Chaoyang, Yan Zou, Wanying Li, and Nanjing Huang. "FxTS-Net: Fixed-time stable learning framework for Neural ODEs." Neural Networks 185 (May 2025): 107219. https://doi.org/10.1016/j.neunet.2025.107219.

22

Alkhezi, Yousuf, Yousuf Almubarak, and Ahmad Shafee. "Neural-network-based approximations for investigating a Pantograph delay differential equation with application in Algebra." International Journal of Mathematics and Computer Science 20, no. 1 (2024): 195–209. http://dx.doi.org/10.69793/ijmcs/01.2025/ahmad.

Abstract:
Delay differential equations (DDEs) have applications in many different scientific disciplines and show up in mathematical models of processes that evolve through time, where the rate of evolution depends on both the present and past states of the process. They also have significant applications in algebra, where they are used to solve problems involving sequences and series, as well as in the analysis of algorithms and computational methods. Extensive new studies in fields as disparate as biology, economics, and physics all point to the importance of DDEs; these methods become indispensable, in particular, when models based on ordinary differential equations (ODEs) fail. In this study, we recommend a machine learning (ML) strategy for solving ODEs. An artificial neural network (ANN) with five fully connected layers is built and trained using examples of solutions to the differential equations. The ANN prediction is then compared to the Runge-Kutta (RK) scheme solution for each ODE. Each ODE's loss function is shown for inspection purposes.
23

De Florio, Mario, Enrico Schiassi, and Roberto Furfaro. "Physics-informed neural networks and functional interpolation for stiff chemical kinetics." Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 6 (June 2022): 063107. http://dx.doi.org/10.1063/5.0086649.

Abstract:
This work presents a recently developed approach based on physics-informed neural networks (PINNs) for the solution of initial value problems (IVPs), focusing on stiff chemical kinetic problems with governing equations of stiff ordinary differential equations (ODEs). The framework developed by the authors combines PINNs with the theory of functional connections and extreme learning machines in the so-called extreme theory of functional connections (X-TFC). While regular PINN methodologies appear to fail in solving stiff systems of ODEs easily, we show how our method, with a single-layer neural network (NN), is efficient and robust in solving such challenging problems without using artifacts to reduce the stiffness of problems. The accuracy of X-TFC is tested against several state-of-the-art methods, showing its performance both in terms of computational time and accuracy. A rigorous upper bound on the generalization error of X-TFC frameworks in learning the solutions of IVPs for ODEs is provided here for the first time. A significant advantage of this framework is its flexibility to adapt to various problems with minimal changes in coding. Also, once the NN is trained, it gives us an analytical representation of the solution at any desired instant in time outside the initial discretization. Learning stiff ODEs opens up possibilities of using X-TFC in applications with large time ranges, such as chemical dynamics in energy conversion, nuclear dynamics systems, life sciences, and environmental engineering.
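The single-layer construction can be sketched for a linear test problem (a simplified illustration in the spirit of X-TFC, not the authors' implementation; the problem y' = -y, y(0) = 1 and all sizes below are assumptions for the demo). With random, fixed hidden-layer weights and a constrained expression that satisfies the initial condition by construction, the collocation residual is linear in the output weights, so a single least-squares solve replaces iterative training:

```python
import numpy as np

rng = np.random.default_rng(0)
n_basis, n_coll = 40, 100
w = rng.uniform(-2, 2, n_basis)   # fixed random input weights
b = rng.uniform(-2, 2, n_basis)   # fixed random biases

t = np.linspace(0.0, 2.0, n_coll)[:, None]
phi = np.tanh(w * t + b)           # hidden-layer activations
dphi = w * (1.0 - phi**2)          # their time derivatives
phi0 = np.tanh(b)                  # activations at t = 0

# Constrained expression y(t) = 1 + beta . (phi(t) - phi(0)) satisfies
# y(0) = 1 exactly; only the output weights beta are unknown.
# The residual of y' + y = 0 is linear in beta, so one lstsq solve suffices.
A = dphi + (phi - phi0)
rhs = -np.ones(n_coll)
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)

y = 1.0 + (phi - phi0) @ beta
print(np.max(np.abs(y - np.exp(-t.ravel()))))  # small approximation error
```

For genuinely stiff systems the paper adds adaptive step-size and continuation machinery on top of this basic idea; the example only shows the linear-in-parameters core.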
24

Fronk, Colby, Jaewoong Yun, Prashant Singh, and Linda Petzold. "Bayesian polynomial neural networks and polynomial neural ordinary differential equations." PLOS Computational Biology 20, no. 10 (October 10, 2024): e1012414. http://dx.doi.org/10.1371/journal.pcbi.1012414.

Abstract:
Symbolic regression with polynomial neural networks and polynomial neural ordinary differential equations (ODEs) are two recent and powerful approaches for equation recovery of many science and engineering problems. However, these methods provide point estimates for the model parameters and are currently unable to accommodate noisy data. We address this challenge by developing and validating the following Bayesian inference methods: the Laplace approximation, Markov Chain Monte Carlo (MCMC) sampling methods, and variational inference. We have found the Laplace approximation to be the best method for this class of problems. Our work can be easily extended to the broader class of symbolic neural networks to which the polynomial neural network belongs.
25

Tappe, Aike Aline, Moritz Schulze, and René Schenkendorf. "Neural ODEs and differential flatness for total least squares parameter estimation." IFAC-PapersOnLine 55, no. 20 (2022): 421–26. http://dx.doi.org/10.1016/j.ifacol.2022.09.131.

26

Smaoui, Nejib. "A hybrid neural network model for the dynamics of the Kuramoto-Sivashinsky equation." Mathematical Problems in Engineering 2004, no. 3 (2004): 305–21. http://dx.doi.org/10.1155/s1024123x0440101x.

Abstract:
A hybrid approach consisting of two neural networks is used to model the oscillatory dynamical behavior of the Kuramoto-Sivashinsky (KS) equation at a bifurcation parameter α = 84.25. This oscillatory behavior results from a fixed point that occurs at α = 72, having the shape of a two-humped curve, that becomes unstable and undergoes a Hopf bifurcation at α = 83.75. First, Karhunen-Loève (KL) decomposition was used to extract five coherent structures of the oscillatory behavior, capturing almost 100% of the energy. Based on the five coherent structures, a system of five ordinary differential equations (ODEs) whose dynamics is similar to the original dynamics of the KS equation was derived via KL Galerkin projection. Then, an autoassociative neural network was utilized on the amplitudes of the ODE system with the task of reducing the dimension of the dynamical behavior to its intrinsic dimension, and a feedforward neural network was used to model the dynamics at a future time. We show that by combining KL decomposition and neural networks, a reduced dynamical model of the KS equation is obtained.
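The KL-decomposition step can be illustrated with the SVD of a snapshot matrix. The field below is synthetic (two planted spatial modes, standing in for the KS data used in the paper):

```python
import numpy as np

# Snapshot matrix: column k holds the field u(x, t_k) on a spatial grid.
x = np.linspace(0.0, 2.0 * np.pi, 64)
t = np.linspace(0.0, 1.0, 50)
# Toy field built from two coherent structures (sin x and sin 2x modes);
# the real study extracts such structures from KS-equation snapshots.
U = (np.outer(np.sin(x), np.cos(2.0 * np.pi * t))
     + 0.3 * np.outer(np.sin(2.0 * x), np.sin(4.0 * np.pi * t)))

# KL (POD) modes are the left singular vectors of the snapshot matrix;
# energy fractions come from the squared singular values.
modes, s, _ = np.linalg.svd(U, full_matrices=False)
energy = s**2 / np.sum(s**2)
print(energy[:3])  # the two constructed modes carry essentially all energy
```

Truncating to the dominant modes and Galerkin-projecting the PDE onto them yields the small ODE system for the mode amplitudes that the abstract describes.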
27

Muppidi Maruthi. "Overview of Artificial Neural Network-Based Solution for Ordinary and Partial Differential Equations by Feed Forward Method Using Python." Communications on Applied Nonlinear Analysis 32, no. 3 (October 19, 2024): 512–24. http://dx.doi.org/10.52783/cana.v32.2012.

Abstract:
In the modern era, models such as neural networks are increasingly used to deal with ordinary and partial differential equations (ODEs and PDEs) because of their complexity. The main aim of the paper is to focus on artificial neural networks (ANNs) as a method to solve these equations, since they can approximate complex nonlinear relationships with great accuracy while adaptively reducing computational costs, thus enhancing the accuracy of solutions to highly dimensional and nonlinear problems. Traditional methods often break down on partial differential equations because of nonlinearity and high dimensionality; the feed-forward method implemented in Python is a powerful alternative that can approximate complicated functions and handle nonlinear equations well. This paper treats both nonlinear and linear differential equations and presents feed-forward neural network models for solving both ODEs and PDEs. Python enables the integration of AI-based algorithms for more accurate and efficient results. This work compares existing ANN-based solutions and highlights the possible advantages of using ANNs for ordinary as well as complex differential equation problems. The results indicate that ANNs have immense potential for further developing computational algorithms for solving ODEs and PDEs, particularly in real-time applications that demand high precision and adaptability.
28

Wen, Ying, Temuer Chaolu, and Xiangsheng Wang. "Solving the initial value problem of ordinary differential equations by Lie group based neural network method." PLOS ONE 17, no. 4 (April 6, 2022): e0265992. http://dx.doi.org/10.1371/journal.pone.0265992.

Abstract:
To combine a feedforward neural network (FNN) and the Lie group (symmetry) theory of differential equations (DEs), an alternative artificial NN approach is proposed to solve initial value problems (IVPs) of ordinary DEs (ODEs). Introducing the Lie group expression of the solution, the trial solution of the ODE is split into two parts. The first part is a solution of other ODEs with the initial values of the original IVP; this is easily solved using the Lie group and known symbolic or numerical methods, without any network parameters (weights and biases). The second part consists of an FNN with adjustable parameters, trained by error backpropagation: minimizing an error (loss) function and updating the parameters. The method significantly reduces the number of trainable parameters and can learn the real solution more quickly and accurately than existing similar methods. The numerical method is applied to several cases, including physical oscillation problems. The results are represented graphically, and some conclusions are drawn.
29

Bradley, William, and Fani Boukouvala. "Two-Stage Approach to Parameter Estimation of Differential Equations Using Neural ODEs." Industrial & Engineering Chemistry Research 60, no. 45 (November 8, 2021): 16330–44. http://dx.doi.org/10.1021/acs.iecr.1c00552.

30

Hu, Ran, Nan Ma, Bing Li, Kun Chen, Chen Chen, Zhanhua Huang, Fengshu Ye, and Chunpeng Pan. "Black-Box Modelling of Active Distribution Network Devices Based on Neural ODEs." Journal of Physics: Conference Series 2826, no. 1 (August 1, 2024): 012029. http://dx.doi.org/10.1088/1742-6596/2826/1/012029.

Abstract:
The morphological structure and dynamic characteristics of power distribution systems are rapidly evolving due to the widespread application of distributed renewable energy and the rapid advancement of power electronics technology. The stability analysis of these complex new distribution systems depends on electromagnetic transient (EMT) simulation. However, the intellectual property rights protection by manufacturers leads to many distribution network devices that can only use black-box models with missing parameters, which challenges the simulation. This reduces the accuracy of the dynamic analysis of these new distribution systems. To address this problem, this paper proposes a black-box modelling method based on neural Ordinary Differential Equations (ODEs) for active distribution equipment. The method uses port measurement data to construct a data-driven model that accurately captures the black-box device characteristics, transforming the uncertainties in the user-side black-box devices into observable, controllable, and analysable models. The paper also builds an IEEE-33 node system that includes the energy storage black-box model for validation, and the results confirm the accuracy of the model.
31

Ning, Xiao, Jinxing Guan, Xi-An Li, Yongyue Wei, and Feng Chen. "Physics-Informed Neural Networks Integrating Compartmental Model for Analyzing COVID-19 Transmission Dynamics." Viruses 15, no. 8 (August 16, 2023): 1749. http://dx.doi.org/10.3390/v15081749.

Abstract:
Modelling and predicting the behaviour of infectious diseases is essential for early warning and evaluating the most effective interventions to prevent significant harm. Compartmental models produce a system of ordinary differential equations (ODEs) that are renowned for simulating the transmission dynamics of infectious diseases. However, the parameters in compartmental models are often unknown, and they can even change over time in the real world, making them difficult to determine. This study proposes an advanced artificial intelligence approach based on physics-informed neural networks (PINNs) to estimate time-varying parameters from given data for the compartmental model. Our proposed PINNs method captures the complex dynamics of COVID-19 by integrating a modified Susceptible-Exposed-Infectious-Recovered-Death (SEIRD) compartmental model with deep neural networks. Specifically, we modelled the system of ODEs as one network and the time-varying parameters as another network to address significant unknown parameters and limited data. Such a structure of the PINNs method is in line with the prior epidemiological correlations and comprises the mismatch between available data and network output and the residual of the ODEs. The experimental findings on real-world reported data have demonstrated that our method robustly and accurately learns the dynamics and forecasts future states. Moreover, as more data becomes available, our proposed PINNs method can be successfully extended to other regions and infectious diseases.
32

Alsharaiah, Mohammad A., Laith H. Baniata, Omar Al Adwan, Orieb Abu Alghanam, Ahmad Adel Abu-Shareha, Laith Alzboon, Nedal Mustafa, and Mohammad Baniata. "Neural Network Prediction Model to Explore Complex Nonlinear Behavior in Dynamic Biological Network." International Journal of Interactive Mobile Technologies (iJIM) 16, no. 12 (June 21, 2022): 32–51. http://dx.doi.org/10.3991/ijim.v16i12.30467.

Abstract:
Organism network systems provide biological data with a high level of complexity. Besides, these data reflect complex activities in organisms that exhibit nonlinear behavior. Hence, mathematical modelling methods such as ordinary differential equation (ODE) models are becoming significant tools to predict and expose implicit knowledge and data. Unfortunately, the aforementioned approaches face drawbacks such as the scarcity and vagueness of the biological knowledge available for predicting protein concentration measurements. So, the main objective of this research is to present a computational model, a feedforward neural network trained with the backpropagation algorithm, to cope with imprecise and missing biological knowledge and provide more insight into biological systems in organisms. The model predicts protein concentrations and illustrates the nonlinear dynamic biological behavior in precise form. The results match those of a recent ODE model while being obtained in a simpler form than the ODEs.
33

Bailleul, Ismael, Carlo Bellingeri, Yvain Bruned, Adeline Fermanian, and Nicolas Marie. "Rough paths and SPDE." ESAIM: Proceedings and Surveys 74 (November 2023): 169–84. http://dx.doi.org/10.1051/proc/202374169.

Abstract:
After a brief survey on rough paths theory, from the seminal paper of T. Lyons to its recent developments, this proceeding provides details on C. Bellingeri, Y. Bruned and A. Fermanian’s talks during the Journées MAS 2020: a new formulation and a generalization of the signature operator, an extension of branched rough paths called Volterra branched rough paths, and recurrent neural networks investigated as a kernel method thanks to the signature operator and neural ODEs.
34

Nadar, Sreenivasan Rajamoni, and Vikas Rai. "Transient Periodicity in a Morris-Lecar Neural System." ISRN Biomathematics 2012 (July 1, 2012): 1–7. http://dx.doi.org/10.5402/2012/546315.

Abstract:
The dynamical complexity of a system of ordinary differential equations (ODEs) modeling the dynamics of a neuron that interacts with other neurons through on-off excitatory and inhibitory synapses in a neural system was investigated in detail. The model used Morris-Lecar (ML) equations with an additional autonomous variable representing the input from interaction of excitatory neuronal cells with local interneurons. Numerical simulations yielded a rich repertoire of dynamical behavior associated with this three-dimensional system, which included periodic, chaotic oscillation and rare bursts of episodic periodicity called the transient periodicity.
35

Fabiani, Gianluca, Evangelos Galaris, Lucia Russo, and Constantinos Siettos. "Parsimonious physics-informed random projection neural networks for initial value problems of ODEs and index-1 DAEs." Chaos: An Interdisciplinary Journal of Nonlinear Science 33, no. 4 (April 2023): 043128. http://dx.doi.org/10.1063/5.0135903.

Full text
Abstract:
We present a numerical method based on random projections with Gaussian kernels and physics-informed neural networks for the numerical solution of initial value problems (IVPs) of nonlinear stiff ordinary differential equations (ODEs) and index-1 differential algebraic equations (DAEs), which may also arise from spatial discretization of partial differential equations (PDEs). The internal weights are fixed to ones while the unknown weights between the hidden and output layer are computed with Newton’s iterations using the Moore–Penrose pseudo-inverse for low to medium scale and sparse QR decomposition with [Formula: see text] regularization for medium- to large-scale systems. Building on previous works on random projections, we also prove its approximation accuracy. To deal with stiffness and sharp gradients, we propose an adaptive step-size scheme and address a continuation method for providing good initial guesses for Newton iterations. The “optimal” bounds of the uniform distribution from which the values of the shape parameters of the Gaussian kernels are sampled and the number of basis functions are “parsimoniously” chosen based on bias-variance trade-off decomposition. To assess the performance of the scheme in terms of both numerical approximation accuracy and computational cost, we used eight benchmark problems (three index-1 DAEs problems, and five stiff ODEs problems including the Hindmarsh–Rose neuronal model of chaotic dynamics and the Allen–Cahn phase-field PDE). The efficiency of the scheme was compared against two stiff ODEs/DAEs solvers, namely, ode15s and ode23t solvers of the MATLAB ODE suite as well as against deep learning as implemented in the DeepXDE library for scientific machine learning and physics-informed learning for the solution of the Lotka–Volterra ODEs included in the demos of the library. A software/toolbox in Matlab (that we call RanDiffNet) with demos is also provided.
APA, Harvard, Vancouver, ISO, and other styles
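The core random-projection idea in this entry, fixed Gaussian features with the outer weights obtained by a least-squares (pseudo-inverse) solve, can be illustrated on a toy problem. Since the test ODE below (y' + y = 0, y(0) = 1) is linear, the paper's Newton iterations reduce to a single linear least-squares solve; the shape-parameter range, center grid, and initial-condition weighting here are illustrative choices, not the paper's tuned ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian feature basis on [0, 1]: centers on a grid, shape parameters
# sampled uniformly (illustrative ranges, not the paper's "optimal" bounds).
M = 40
centers = np.linspace(0.0, 1.0, M)
shapes = rng.uniform(2.0, 8.0, M)            # inverse-width parameters

def phi(t):
    """Basis matrix Phi[k, i] = exp(-(shapes_i * (t_k - c_i))**2)."""
    d = t[:, None] - centers[None, :]
    return np.exp(-(shapes[None, :] * d) ** 2)

def dphi(t):
    """Time derivative of the basis matrix."""
    d = t[:, None] - centers[None, :]
    return -2.0 * shapes[None, :] ** 2 * d * phi(t)

# Collocation for y' + y = 0, y(0) = 1: stack the ODE-residual rows and a
# heavily weighted initial-condition row, then solve by least squares
# (the pseudo-inverse step of the random-projection method).
t_col = np.linspace(0.0, 1.0, 100)
A = np.vstack([dphi(t_col) + phi(t_col), 100.0 * phi(np.array([0.0]))])
b = np.concatenate([np.zeros(len(t_col)), [100.0]])
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

y_hat = phi(t_col) @ coeffs                  # approximate solution on the grid
```

The exact solution is exp(-t), so the quality of the fit is easy to check; handling stiffness and DAE constraints, as the paper does, requires the adaptive stepping and continuation machinery described in the abstract.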
36

Arif, Muhammad Shoaib, Kamaleldin Abodayeh, and Yasir Nawaz. "Design of Finite Difference Method and Neural Network Approach for Casson Nanofluid Flow: A Computational Study." Axioms 12, no. 6 (May 27, 2023): 527. http://dx.doi.org/10.3390/axioms12060527.

Full text
Abstract:
To boost productivity, commercial strategies, and social advancement, neural network techniques are gaining popularity among engineering and technical research groups. This work proposes a numerical scheme to solve linear and non-linear ordinary differential equations (ODEs). The scheme’s primary benefit is its third-order accuracy in two stages, whereas most schemes in the literature do not provide third-order accuracy in two stages. The scheme is explicit and correct to the third order. The stability region and consistency analysis of the scheme for a linear ODE are provided in this paper. Moreover, a mathematical model of heat and mass transfer for the non-Newtonian Casson nanofluid flow is given under the effects of the induced magnetic field, which was explored quantitatively using the method of Levenberg–Marquardt backpropagation artificial neural networks. The governing equations were reduced to ODEs using suitable similarity transformations and later solved by the proposed scheme with third-order accuracy. Additionally, a neural network approach for input and output/predicted values is given: inputs for velocity, temperature, and concentration profiles were mapped to the outputs using a neural network. The results are displayed in different types of graphs. Absolute error, regression studies, mean square error, and error histogram analyses are presented to validate the suggested neural networks’ performance. The neural network technique is applied to three of these four targets. Two hundred points were utilized, with 140 samples used for training, 30 for validation, and 30 for testing. These findings demonstrate the efficacy of artificial neural networks in forecasting and optimizing complex systems.
APA, Harvard, Vancouver, ISO, and other styles
37

Tan, Chenkai, Yingfeng Cai, Hai Wang, Xiaoqiang Sun, and Long Chen. "Vehicle State Estimation Combining Physics-Informed Neural Network and Unscented Kalman Filtering on Manifolds." Sensors 23, no. 15 (July 25, 2023): 6665. http://dx.doi.org/10.3390/s23156665.

Full text
Abstract:
This paper proposes a novel vehicle state estimation (VSE) method that combines a physics-informed neural network (PINN) and an unscented Kalman filter on manifolds (UKF-M). This VSE aimed to achieve inertial measurement unit (IMU) calibration and provide comprehensive information on the vehicle’s dynamic state. The proposed method leverages a PINN to eliminate IMU drift by constraining the loss function with ordinary differential equations (ODEs). Then, the UKF-M is used to estimate the 3D attitude, velocity, and position of the vehicle more accurately using a six-degrees-of-freedom vehicle model. Experimental results demonstrate that the proposed PINN method can learn from multiple sensors and reduce the impact of sensor biases by constraining the ODEs without affecting the sensor characteristics. Compared to the UKF-M algorithm alone, our VSE can better estimate vehicle states. The proposed method has the potential to automatically reduce the impact of sensor drift during vehicle operation, making it more suitable for real-world applications.
APA, Harvard, Vancouver, ISO, and other styles
38

B, Vembu, and Loghambal S. "Pseudo-Graph Neural Networks On Ordinary Differential Equations." Journal of Computational Mathematica 6, no. 1 (March 22, 2022): 117–23. http://dx.doi.org/10.26524/cm.125.

Full text
Abstract:
In this paper, we extend the idea of continuous-depth models to pseudo graphs and present pseudo graph ordinary differential equations (PGODE), inspired by the neural ordinary differential equation (NODE) for data in the Euclidean domain. All existing graph networks have discrete depth. A pseudo graph neural network (PGNN) is used to parameterize the derivative of the hidden node states, and the output states are the solution to this ordinary differential equation (ODE). A memory-efficient framework with precise gradient estimates is then proposed for free-form ODEs. We also introduce the framework of continuous-depth pseudo graph neural networks (PGNNs) on ODEs by blending discrete structures and differential equations.
APA, Harvard, Vancouver, ISO, and other styles
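The continuous-depth construction this entry builds on can be sketched in a few lines: a small network parameterizes the hidden-state derivative, and a numerical integrator produces the output state. The sketch below uses a plain tanh MLP and fixed-step RK4 as a hypothetical minimal stand-in, not the PGNN parameterization or the memory-efficient gradient scheme of the paper:

```python
import numpy as np

def mlp_dynamics(h, t, params):
    """dh/dt = f(h, t; params): a tiny tanh MLP plays the role of the
    network that parameterizes the hidden-state derivative."""
    W1, b1, W2, b2 = params
    return np.tanh(h @ W1 + b1) @ W2 + b2

def odeint_rk4(f, h0, t_grid, params):
    """Fixed-step fourth-order Runge-Kutta: 'depth' becomes the integration
    interval, and the output states are the ODE solution at t_grid."""
    hs = [h0]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h, dt = hs[-1], t1 - t0
        k1 = f(h, t0, params)
        k2 = f(h + 0.5 * dt * k1, t0 + 0.5 * dt, params)
        k3 = f(h + 0.5 * dt * k2, t0 + 0.5 * dt, params)
        k4 = f(h + dt * k3, t1, params)
        hs.append(h + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.stack(hs)
```

A useful sanity check: with all weights set to zero the derivative vanishes, so the trajectory must remain at its initial value, which makes the integrator easy to verify in isolation before any training is attempted.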
39

Schiassi, Enrico, Mario De Florio, Andrea D’Ambrosio, Daniele Mortari, and Roberto Furfaro. "Physics-Informed Neural Networks and Functional Interpolation for Data-Driven Parameters Discovery of Epidemiological Compartmental Models." Mathematics 9, no. 17 (August 27, 2021): 2069. http://dx.doi.org/10.3390/math9172069.

Full text
Abstract:
In this work, we apply a novel and accurate Physics-Informed Neural Network Theory of Functional Connections (PINN-TFC) based framework, called Extreme Theory of Functional Connections (X-TFC), for data-physics-driven parameter discovery in problems modeled via Ordinary Differential Equations (ODEs). The proposed method merges the standard PINNs with a functional interpolation technique named Theory of Functional Connections (TFC). In particular, this work focuses on the capability of X-TFC in solving inverse problems to estimate the parameters governing the epidemiological compartmental models via a deterministic approach. The epidemiological compartmental models treated in this work are Susceptible-Infectious-Recovered (SIR), Susceptible-Exposed-Infectious-Recovered (SEIR), and Susceptible-Exposed-Infectious-Recovered-Susceptible (SEIRS). The results show the low computational times, the high accuracy, and the effectiveness of the X-TFC method in performing data-driven parameter discovery for systems modeled via parametric ODEs using unperturbed and perturbed data.
APA, Harvard, Vancouver, ISO, and other styles
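The inverse problem this entry addresses, recovering compartmental-model parameters from trajectories, has a simple flavor that can be sketched without the X-TFC machinery. The sketch below simulates SIR data and recovers beta and gamma by least squares on finite differences (a deliberately simpler stand-in for the paper's method; all parameter values are hypothetical):

```python
import numpy as np

def simulate_sir(beta, gamma, S0, I0, dt, steps):
    """RK4 integration of the SIR compartmental ODEs (population fractions):
    dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I."""
    def rhs(u):
        S, I, R = u
        return np.array([-beta * S * I, beta * S * I - gamma * I, gamma * I])
    u = np.array([S0, I0, 1.0 - S0 - I0])
    traj = [u]
    for _ in range(steps):
        k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(u)
    return np.array(traj)

def estimate_params(traj, dt):
    """Recover beta and gamma by least squares on central differences:
    dR/dt = gamma*I  and  -dS/dt = beta*S*I."""
    S, I, R = traj[:, 0], traj[:, 1], traj[:, 2]
    dS = (S[2:] - S[:-2]) / (2 * dt)
    dR = (R[2:] - R[:-2]) / (2 * dt)
    Sm, Im = S[1:-1], I[1:-1]
    gamma_hat = np.sum(dR * Im) / np.sum(Im * Im)
    beta_hat = np.sum(-dS * Sm * Im) / np.sum((Sm * Im) ** 2)
    return beta_hat, gamma_hat
```

On clean, densely sampled data this regression recovers the parameters well; the point of physics-informed approaches such as X-TFC is to remain accurate when the observations are sparse or perturbed, where naive finite differencing degrades.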
40

Zhu, Qunxi, Yifei Shen, Dongsheng Li, and Wei Lin. "Neural Piecewise-Constant Delay Differential Equations." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9242–50. http://dx.doi.org/10.1609/aaai.v36i8.20911.

Full text
Abstract:
Continuous-depth neural networks, such as the Neural Ordinary Differential Equations (ODEs), have aroused a great deal of interest from the communities of machine learning and data science in recent years, which bridge the connection between deep neural networks and dynamical systems. In this article, we introduce a new sort of continuous-depth neural network, called the Neural Piecewise-Constant Delay Differential Equations (PCDDEs). Here, unlike the recently proposed framework of the Neural Delay Differential Equations (DDEs), we transform the single delay into the piecewise-constant delay(s). The Neural PCDDEs with such a transformation, on one hand, inherit the strength of universal approximating capability in Neural DDEs. On the other hand, the Neural PCDDEs, leveraging the contributions of the information from the multiple previous time steps, further promote the modeling capability without augmenting the network dimension. With such a promotion, we show that the Neural PCDDEs do outperform the several existing continuous-depth neural frameworks on the one-dimensional piecewise-constant delay population dynamics and real-world datasets, including MNIST, CIFAR10, and SVHN.
APA, Harvard, Vancouver, ISO, and other styles
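The defining feature of a piecewise-constant delay equation is that the right-hand side also sees the state at the delayed time floor(t), which is constant on each unit interval. A scalar illustration (not the Neural PCDDE architecture itself, just the underlying equation class) can be integrated as follows:

```python
def integrate_pcdde(f, x0, T_units, steps_per_unit=128):
    """Euler integration of  dx/dt = f(x(t), x(floor(t)))  on [0, T_units].
    The delayed value x(floor(t)) is frozen over each unit interval, which
    is what distinguishes piecewise-constant delays from ordinary delays."""
    dt = 1.0 / steps_per_unit
    x = x0
    x_at_floor = x0              # x evaluated at the current integer time
    for k in range(T_units * steps_per_unit):
        if k % steps_per_unit == 0:
            x_at_floor = x       # a new unit interval begins: refresh the delay term
        x = x + dt * f(x, x_at_floor)
    return x
```

For the pure delay term dx/dt = b*x(floor(t)), the right-hand side is constant on each unit interval, so the state grows geometrically across integers: x(n+1) = (1+b)*x(n), which gives a convenient exact check for the integrator.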
41

Patsatzis, Dimitrios G., Lucia Russo, and Constantinos Siettos. "Slow Invariant Manifolds of Fast-Slow Systems of ODEs with Physics-Informed Neural Networks." SIAM Journal on Applied Dynamical Systems 23, no. 4 (December 12, 2024): 3077–122. https://doi.org/10.1137/24m1656402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Huang, Zhanhua, Ran Hu, Nan Ma, Bing Li, Chen Chen, Qiangqiang Guo, Wuping Cheng, and Chunpeng Pan. "Black-box modeling of PMSG-based wind energy conversion systems based on neural ODEs." Journal of Physics: Conference Series 2814, no. 1 (August 1, 2024): 012005. http://dx.doi.org/10.1088/1742-6596/2814/1/012005.

Full text
Abstract:
Integrating renewable energy sources like wind power into the power grid enhances the dynamic interactions among renewable energy-producing equipment, leading to new technological issues for the power grid. Modeling and simulation are essential to ensure the stability of the emerging power grid, but they require precise dynamic component modeling, which is often unavailable due to technical confidentiality and other factors. Conventional hardware-in-the-loop (HIL) simulation can accurately simulate the dynamics of a single renewable energy device, but not the complex dynamics of multiple devices. This research introduces a method that combines classical mechanism modeling and differential neural network modeling to create accurate wind turbine models utilizing equipment measurement data or HIL simulation data. A realistic wind turbine electromagnetic transient simulation model of a specific type is developed and validated by connecting it to the IEEE-39 node system, confirming the model’s accuracy.
APA, Harvard, Vancouver, ISO, and other styles
43

Puchkov, Andrey Yu, Yaroslav A. Fedulov, Vladimir S. Minin, and Alexander S. Fedulov. "Hybrid digital model based on Neural ODE in the task of increasing the economic efficiency of processing small-ore raw materials." Journal Of Applied Informatics 19, no. 4 (August 21, 2024): 107–25. http://dx.doi.org/10.37791/2687-0649-2024-19-4-107-125.

Full text
Abstract:
The results of a study are presented whose purpose was to develop the structure of a hybrid digital model for managing the processes of processing small-ore raw materials, as well as an algorithm for converting technological data in accordance with this structure, ensuring improved management quality and, as a consequence, greater economic efficiency of processing. The original idea underlying the hybrid digital model is the use of neural ordinary differential equations (Neural ODEs) to calculate the dynamics of technological objects and the processes implemented in them. Neural ODEs are a type of physics-motivated neural network that uses physical laws during training. The resulting digital intelligent machine-learning system is capable of highly accurate reconstruction of the dynamics function from observational data of a technological object or process. The proposed hybrid model provides for the joint use of Neural ODEs and Simulink simulation models of the technological processes for processing fine ore raw materials when calculating control actions. This makes it possible to quickly model and analyze the reaction of dynamic objects to control inputs and to make the necessary changes without waiting for the reaction of the physical original. Numerical experiments have shown that a Neural ODE used as part of the hybrid digital model accurately reproduces the dynamics of technological objects under various initial conditions. For comparison, experiments were carried out with a model in which an LSTM recurrent neural network was used instead of the Neural ODE. These experiments demonstrated that in the latter case the dynamics were simulated with high accuracy only under the original initial conditions, and accuracy degraded severely when the conditions changed. The Neural ODE, by contrast, showed consistently high accuracy under these changes, which will help improve the quality of control of the technological processes for processing fine ore raw materials and their economic efficiency.
APA, Harvard, Vancouver, ISO, and other styles
44

Samia Atallah. "The Numerical Methods of Fractional Differential Equations." مجلة جامعة بني وليد للعلوم الإنسانية والتطبيقية 8, no. 4 (September 25, 2023): 496–512. http://dx.doi.org/10.58916/jhas.v8i4.44.

Full text
Abstract:
Differential equations with non-integer-order derivatives have been demonstrated to be suitable models for a variety of physical phenomena in several fields, including diffusion processes, damping laws, fluid mechanics, and neural networks. In this study, I discuss two numerical methods, Diethelm's method and the Adams-Bashforth-Moulton method, for solving fractional ordinary differential equations (ODEs) with initial conditions.
APA, Harvard, Vancouver, ISO, and other styles
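The Adams-Bashforth-Moulton method for Caputo fractional ODEs has a standard predictor-corrector form (the Diethelm-Ford-Freed scheme): a product-rectangle rule predicts the new value, and a product-trapezoid rule corrects it. A sketch of that textbook form, written for clarity rather than the O(N^2) cost it incurs on long intervals:

```python
import math

def fabm_solve(f, y0, alpha, T, n_steps):
    """Adams-Bashforth-Moulton predictor-corrector for the Caputo problem
    D^alpha y(t) = f(t, y(t)),  y(0) = y0,  with 0 < alpha <= 1."""
    h = T / n_steps
    g1 = math.gamma(alpha + 1)
    g2 = math.gamma(alpha + 2)
    t = [j * h for j in range(n_steps + 1)]
    y = [y0]
    fy = [f(t[0], y0)]                       # history of f evaluations
    for n in range(n_steps):
        # Predictor: fractional forward Euler with product-rectangle weights.
        b = [(n + 1 - j) ** alpha - (n - j) ** alpha for j in range(n + 1)]
        y_p = y0 + (h ** alpha / g1) * sum(bj * fj for bj, fj in zip(b, fy))
        # Corrector: product-trapezoid weights over the full history.
        a = [n ** (alpha + 1) - (n - alpha) * (n + 1) ** alpha]
        a += [(n - j + 2) ** (alpha + 1) + (n - j) ** (alpha + 1)
              - 2 * (n - j + 1) ** (alpha + 1) for j in range(1, n + 1)]
        y_new = y0 + (h ** alpha / g2) * (f(t[n + 1], y_p)
                                          + sum(aj * fj for aj, fj in zip(a, fy)))
        y.append(y_new)
        fy.append(f(t[n + 1], y_new))
    return t, y
```

Two checks make the weights easy to trust: for alpha = 1 the scheme collapses to the classical Euler-trapezoid PECE pair, and for f(t, y) = t the piecewise-linear corrector quadrature is exact, so the computed solution matches the analytic y(t) = t^(1+alpha)/Gamma(2+alpha) to rounding error.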
45

Zaman, Muhammad Adib Uz, and Dongping Du. "A Stochastic Multivariate Irregularly Sampled Time Series Imputation Method for Electronic Health Records." BioMedInformatics 1, no. 3 (November 16, 2021): 166–81. http://dx.doi.org/10.3390/biomedinformatics1030011.

Full text
Abstract:
Electronic health records (EHRs) can be very difficult to analyze since they usually contain many missing values. To build an efficient predictive model, a complete dataset is necessary. An EHR usually contains high-dimensional longitudinal time series data. Most commonly used imputation methods do not consider the importance of temporal information embedded in EHR data. Besides, most time-dependent neural networks such as recurrent neural networks (RNNs) inherently consider the time steps to be equal, which in many cases, is not appropriate. This study presents a method using the gated recurrent unit (GRU), neural ordinary differential equations (ODEs), and Bayesian estimation to incorporate the temporal information and impute sporadically observed time series measurements in high-dimensional EHR data.
APA, Harvard, Vancouver, ISO, and other styles
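The GRU-plus-ODE pattern this entry uses for irregularly sampled EHR data can be sketched compactly: between observations the hidden state evolves under an ODE (so unequal time gaps are handled naturally), and at each observation a GRU folds the measurement in. The dynamics function and all parameter names below are illustrative stand-ins, not the paper's model, and the Bayesian estimation component is omitted:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_update(h, x, p):
    """Standard GRU cell: fold the observation x into the hidden state h."""
    z = sigmoid(x @ p['Wz'] + h @ p['Uz'] + p['bz'])
    r = sigmoid(x @ p['Wr'] + h @ p['Ur'] + p['br'])
    h_tilde = np.tanh(x @ p['Wh'] + (r * h) @ p['Uh'] + p['bh'])
    return (1.0 - z) * h + z * h_tilde

def evolve(h, dt, p, n_sub=10):
    """Euler steps of dh/dt = tanh(h @ W_ode): the hidden state keeps
    moving between observations, so gaps of any length are handled."""
    sub = dt / n_sub
    for _ in range(n_sub):
        h = h + sub * np.tanh(h @ p['W_ode'])
    return h

def encode_irregular(times, values, p, h0):
    """Alternate ODE evolution over each gap with a GRU update at each
    (irregularly spaced) observation; returns the final hidden state."""
    h, t_prev = h0, times[0]
    for t, x in zip(times, values):
        if t > t_prev:
            h = evolve(h, t - t_prev, p)
        h = gru_update(h, x, p)
        t_prev = t
    return h
```

A decoder mapping the hidden state back to measurement space would then produce the imputed values at any query time; that readout head is left out of this sketch.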
46

Niu, Haiqiang. "Evaluation of data-driven neural operators in ocean acoustic propagation modeling." Journal of the Acoustical Society of America 155, no. 3_Supplement (March 1, 2024): A44. http://dx.doi.org/10.1121/10.0026741.

Full text
Abstract:
Various types of neural networks have been employed to tackle a wide range of complex partial differential equations (PDEs) and ordinary differential equations (ODEs). Notably, neural operators such as DeepONet and FNO show promise in handling these problems, offering potential real-time prediction capabilities. In contrast to physics-informed neural network (PINN) methods, neural operators primarily derive insight from extensive, well-prepared datasets. In the context of ocean acoustic propagation modeling, where the challenge involves solving the wave or Helmholtz equation given specific boundary conditions, this study focuses on assessing the performance of data-driven neural operators in predicting sound pressure. Unlike conventional approaches that map between finite-dimensional Euclidean spaces, neural operators excel in learning mappings between infinite-dimensional function spaces—a particularly advantageous feature in sound propagation modeling tasks. This research specifically delves into evaluating the generalization capabilities of neural operators when applied to sound propagation modeling in a range-independent shallow water environment. By exploring the neural operators' effectiveness in this domain, the study aims to contribute valuable insights into their potential applications for real-world ocean acoustics simulations.
APA, Harvard, Vancouver, ISO, and other styles
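The operator-learning architecture named in this abstract, DeepONet, has a simple evaluation structure: a branch net encodes the input function u sampled at fixed sensor locations, a trunk net encodes the query coordinates y, and the operator output G(u)(y) is their inner product over a latent index. A minimal forward-pass sketch (untrained, with illustrative layer sizes):

```python
import numpy as np

def mlp(x, Ws, bs):
    """Plain tanh MLP with a linear last layer."""
    for W, b in zip(Ws[:-1], bs[:-1]):
        x = np.tanh(x @ W + b)
    return x @ Ws[-1] + bs[-1]

def deeponet(u_sensors, y_points, branch, trunk):
    """DeepONet evaluation  G(u)(y) ~ sum_k b_k(u) * t_k(y):
    the branch net sees the input function at fixed sensors, the trunk net
    sees the query coordinates, and the output is their inner product."""
    b = mlp(u_sensors, *branch)      # (p,)   latent coefficients of u
    t = mlp(y_points, *trunk)        # (n, p) latent basis at each query point
    return t @ b                     # (n,)   predicted G(u)(y)
```

Because the trunk takes arbitrary coordinates, the trained operator can be queried at any point of the domain, which is the infinite-dimensional mapping property the abstract highlights for sound-pressure prediction.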
47

Dong, Xunde, and Cong Wang. "Identification of the FitzHugh–Nagumo Model Dynamics via Deterministic Learning." International Journal of Bifurcation and Chaos 25, no. 12 (November 2015): 1550159. http://dx.doi.org/10.1142/s021812741550159x.

Full text
Abstract:
In this paper, a new method is proposed for the identification of the FitzHugh–Nagumo (FHN) model dynamics via deterministic learning. The FHN model is a classic and simple model for studying spiral waves in excitable media, such as cardiac tissue and biological neural networks. Firstly, the FHN model described by partial differential equations (PDEs) is transformed into a set of ordinary differential equations (ODEs) by using the finite difference method. Secondly, the dynamics of the ODEs is identified using deterministic learning theory. It is shown that, for the spiral waves generated by the FHN model, the dynamics underlying the recurrent trajectory corresponding to any spatial point can be accurately identified by using the proposed approach. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
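The first step of this entry, turning the FHN PDE into ODEs by finite differences (the method of lines), is easy to sketch in one dimension: the Laplacian becomes a central difference, and each grid point then obeys an ODE. Parameter values below are common textbook choices, assumed here for illustration, and the deterministic-learning identification step is not included:

```python
import numpy as np

# FitzHugh-Nagumo reaction parameters (common textbook values, assumed here).
a, b, eps, I_ext, D = 0.7, 0.8, 0.08, 0.5, 1.0

def fhn_rhs(v, w, dx):
    """Method of lines: the Laplacian is replaced by a second-order central
    difference with no-flux (Neumann) boundaries, turning the PDE into ODEs."""
    lap = (np.roll(v, 1) + np.roll(v, -1) - 2.0 * v) / dx ** 2
    lap[0] = 2.0 * (v[1] - v[0]) / dx ** 2        # Neumann boundary
    lap[-1] = 2.0 * (v[-2] - v[-1]) / dx ** 2     # Neumann boundary
    dv = D * lap + v - v ** 3 / 3.0 - w + I_ext
    dw = eps * (v + a - b * w)
    return dv, dw

def simulate_fhn(n=100, dx=0.5, dt=0.01, steps=5000):
    """Explicit Euler on the semi-discrete system (stable here since
    D*dt/dx**2 = 0.04 is well below the diffusive limit of 0.5)."""
    v = -1.2 * np.ones(n)
    v[:5] = 1.0                  # a local stimulus to launch a wave
    w = -0.6 * np.ones(n)
    for _ in range(steps):
        dv, dw = fhn_rhs(v, w, dx)
        v, w = v + dt * dv, w + dt * dw
    return v, w
```

In two spatial dimensions the same discretization yields the spiral-wave regime the paper studies; each spatial point then carries the recurrent trajectory whose dynamics deterministic learning identifies.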
48

Yang, Chengdong, Zhenxing Li, Xiangyong Chen, Ancai Zhang, and Jianlong Qiu. "Boundary Control for Exponential Synchronization of Reaction-Diffusion Neural Networks Based on Coupled PDE-ODEs." IFAC-PapersOnLine 53, no. 2 (2020): 3415–20. http://dx.doi.org/10.1016/j.ifacol.2020.12.2543.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Hopkins, Michael, Mantas Mikaitis, Dave R. Lester, and Steve Furber. "Stochastic rounding and reduced-precision fixed-point arithmetic for solving neural ordinary differential equations." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 378, no. 2166 (January 20, 2020): 20190052. http://dx.doi.org/10.1098/rsta.2019.0052.

Full text
Abstract:
Although double-precision floating-point arithmetic currently dominates high-performance computing, there is increasing interest in smaller and simpler arithmetic types. The main reasons are potential improvements in energy efficiency and memory footprint and bandwidth. However, simply switching to lower-precision types typically results in increased numerical errors. We investigate approaches to improving the accuracy of reduced-precision fixed-point arithmetic types, using examples in an important domain for numerical computation in neuroscience: the solution of ordinary differential equations (ODEs). The Izhikevich neuron model is used to demonstrate that rounding has an important role in producing accurate spike timings from explicit ODE solution algorithms. In particular, fixed-point arithmetic with stochastic rounding consistently results in smaller errors compared to single-precision floating-point and fixed-point arithmetic with round-to-nearest across a range of neuron behaviours and ODE solvers. A computationally much cheaper alternative is also investigated, inspired by the concept of dither that is a widely understood mechanism for providing resolution below the least significant bit in digital signal processing. These results will have implications for the solution of ODEs in other subject areas, and should also be directly relevant to the huge range of practical problems that are represented by partial differential equations. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
APA, Harvard, Vancouver, ISO, and other styles
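Stochastic rounding, the central idea of this entry, is simple to state: round to the fixed-point grid point below or above with probability proportional to the distance, so the rounding error is zero in expectation. A minimal sketch (the hardware schemes in the paper use pseudo-random bits rather than a software RNG, which is an assumption of this illustration):

```python
import math
import random

def stochastic_round(x, frac_bits, rng):
    """Round x onto a fixed-point grid with `frac_bits` fractional bits.
    The value rounds up with probability equal to the fractional remainder,
    so the rounding is unbiased: E[stochastic_round(x)] == x. This avoids
    the systematic loss of small increments that round-to-nearest causes,
    which matters for the small ODE-solver steps discussed in the paper."""
    scale = 1 << frac_bits
    scaled = x * scale
    low = math.floor(scaled)
    if rng.random() < scaled - low:
        low += 1
    return low / scale
```

In an explicit ODE step like `v += dt * dv`, an increment smaller than half the least significant bit is always discarded by round-to-nearest but survives on average under stochastic rounding, which is why spike timings in the Izhikevich model come out more accurately.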
50

Yin, Qiang, Juntong Cai, Xue Gong, and Qian Ding. "Local parameter identification with neural ordinary differential equations." Applied Mathematics and Mechanics 43, no. 12 (December 2022): 1887–900. http://dx.doi.org/10.1007/s10483-022-2926-9.

Full text
Abstract:
Data-driven methods extract feature information from data to build system models, which enable estimation and identification of the systems and can be utilized for prognosis and health management (PHM). However, most data-driven models are still black-box models that cannot be interpreted. In this study, we use neural ordinary differential equations (ODEs), with the inherent computational relationships of the system added to the loss-function calculation, to approximate the governing equations. In addition, a new strategy for identifying the local parameters of the system is investigated, which can be utilized for system parameter identification and damage detection. The numerical and experimental examples presented in the paper demonstrate that the strategy has high accuracy and good local parameter identification. Moreover, the proposed method has the advantage of being interpretable: it can directly approximate the underlying governing dynamics and is a worthwhile strategy for system identification and PHM.
APA, Harvard, Vancouver, ISO, and other styles