Journal articles on the topic 'Neural fields equations'

To see the other types of publications on this topic, follow the link: Neural fields equations.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Neural fields equations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Veltz, Romain, and Olivier Faugeras. "A Center Manifold Result for Delayed Neural Fields Equations." SIAM Journal on Mathematical Analysis 45, no. 3 (January 2013): 1527–62. http://dx.doi.org/10.1137/110856162.

2

Belhe, Yash, Michaël Gharbi, Matthew Fisher, Iliyan Georgiev, Ravi Ramamoorthi, and Tzu-Mao Li. "Discontinuity-Aware 2D Neural Fields." ACM Transactions on Graphics 42, no. 6 (December 5, 2023): 1–11. http://dx.doi.org/10.1145/3618379.

Abstract:
Neural image representations offer the possibility of high fidelity, compact storage, and resolution-independent accuracy, providing an attractive alternative to traditional pixel- and grid-based representations. However, coordinate neural networks fail to capture discontinuities present in the image and tend to blur across them; we aim to address this challenge. In many cases, such as rendered images, vector graphics, diffusion curves, or solutions to partial differential equations, the locations of the discontinuities are known. We take those locations as input, represented as linear, quadratic, or cubic Bézier curves, and construct a feature field that is discontinuous across these locations and smooth everywhere else. Finally, we use a shallow multi-layer perceptron to decode the features into the signal value. To construct the feature field, we develop a new data structure based on a curved triangular mesh, with features stored on the vertices and on a subset of the edges that are marked as discontinuous. We show that our method can be used to compress a 100,000²-pixel rendered image into a 25 MB file; can be used as a new diffusion-curve solver by combining with Monte-Carlo-based methods or directly supervised by the diffusion-curve energy; or can be used for compressing 2D physics simulation data.
3

Nicks, Rachel, Abigail Cocks, Daniele Avitabile, Alan Johnston, and Stephen Coombes. "Understanding Sensory Induced Hallucinations: From Neural Fields to Amplitude Equations." SIAM Journal on Applied Dynamical Systems 20, no. 4 (January 2021): 1683–714. http://dx.doi.org/10.1137/20m1366885.

4

Veltz, Romain, and Olivier Faugeras. "ERRATUM: A Center Manifold Result for Delayed Neural Fields Equations." SIAM Journal on Mathematical Analysis 47, no. 2 (January 2015): 1665–70. http://dx.doi.org/10.1137/140962279.

5

Bressloff, Paul C., and Zachary P. Kilpatrick. "Nonlinear Langevin Equations for Wandering Patterns in Stochastic Neural Fields." SIAM Journal on Applied Dynamical Systems 14, no. 1 (January 2015): 305–34. http://dx.doi.org/10.1137/140990371.

6

Scheinker, Alexander, and Reeju Pokharel. "Physics-constrained 3D convolutional neural networks for electrodynamics." APL Machine Learning 1, no. 2 (June 1, 2023): 026109. http://dx.doi.org/10.1063/5.0132433.

Abstract:
We present a physics-constrained neural network (PCNN) approach to solving Maxwell’s equations for the electromagnetic fields of intense relativistic charged particle beams. We create a 3D convolutional PCNN to map time-varying current and charge densities J(r, t) and ρ(r, t) to vector and scalar potentials A(r, t) and φ(r, t) from which we generate electromagnetic fields according to Maxwell’s equations: B = ∇ × A and E = −∇φ − ∂A/∂t. Our PCNNs satisfy hard constraints, such as ∇ · B = 0, by construction. Soft constraints push A and φ toward satisfying the Lorenz gauge.
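The hard constraint ∇ · B = 0 holds automatically whenever B is computed as the curl of a potential, because finite-difference operators along different axes commute. A minimal NumPy sketch of only that post-processing step (the PCNN architecture itself is not reproduced; the grid size and test potential below are illustrative assumptions):

```python
import numpy as np

def curl(Ax, Ay, Az, dx):
    """Finite-difference curl of a vector potential sampled on a uniform 3D grid (axes: x, y, z)."""
    Bx = np.gradient(Az, dx, axis=1) - np.gradient(Ay, dx, axis=2)
    By = np.gradient(Ax, dx, axis=2) - np.gradient(Az, dx, axis=0)
    Bz = np.gradient(Ay, dx, axis=0) - np.gradient(Ax, dx, axis=1)
    return Bx, By, Bz

def divergence(Fx, Fy, Fz, dx):
    """Finite-difference divergence of a vector field on the same grid."""
    return (np.gradient(Fx, dx, axis=0)
            + np.gradient(Fy, dx, axis=1)
            + np.gradient(Fz, dx, axis=2))

# Illustrative test potential (not from the paper): div(curl(A)) vanishes to
# floating-point precision because difference operators along different axes commute.
n, dx = 32, 1.0 / 31
x = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
Ax, Ay, Az = np.sin(Y), np.sin(Z), np.sin(X)
print(np.abs(divergence(*curl(Ax, Ay, Az, dx), dx)).max())
```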
7

Sim, Fabio M., Eka Budiarto, and Rusman Rusyadi. "Comparison and Analysis of Neural Solver Methods for Differential Equations in Physical Systems." ELKHA 13, no. 2 (October 22, 2021): 134. http://dx.doi.org/10.26418/elkha.v13i2.49097.

Abstract:
Differential equations are ubiquitous in many fields of study, yet not all equations, whether ordinary or partial, can be solved analytically. Traditional numerical methods such as time-stepping schemes have been devised to approximate these solutions. With the advent of modern deep learning, neural networks have become a viable alternative to traditional numerical methods. By reformulating the problem as an optimisation task, neural networks can be trained in a semi-supervised learning fashion to approximate nonlinear solutions. In this paper, neural solvers are implemented in TensorFlow for a variety of differential equations, namely: linear and nonlinear ordinary differential equations of the first and second order; Poisson’s equation, the heat equation, and the inviscid Burgers’ equation. Different methods, such as the naive and ansatz formulations, are contrasted, and their overall performance is analysed. Experimental data is also used to validate the neural solutions on test cases, specifically: the spring-mass system and Gauss’s law for electric fields. The errors of the neural solvers against exact solutions are investigated and found to surpass traditional schemes in certain cases. Although neural solvers will not replace the computational speed offered by traditional schemes in the near future, they remain a feasible, easy-to-implement substitute when all else fails.
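As a hedged illustration of the "ansatz" formulation contrasted in this abstract (written in PyTorch rather than the paper's TensorFlow, and applied to a toy problem rather than the authors' test cases): the trial solution u(x) = 1 + x·N(x) satisfies u(0) = 1 by construction, so training only penalizes the residual of u' = −u.

```python
import torch

# Small MLP N(x); the architecture, learning rate, and iteration count are illustrative choices.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def u_trial(x):
    # Ansatz u(x) = 1 + x * N(x) enforces the initial condition u(0) = 1 exactly.
    return 1.0 + x * net(x)

for step in range(5000):
    x = torch.rand(128, 1, requires_grad=True)            # collocation points in (0, 1)
    u = u_trial(x)
    du_dx = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    loss = ((du_dx + u) ** 2).mean()                       # residual of u' = -u
    opt.zero_grad()
    loss.backward()
    opt.step()

x_test = torch.linspace(0.0, 1.0, 5).reshape(-1, 1)
print(u_trial(x_test).detach().squeeze())                  # compare with exp(-x_test)
```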
8

ITOH, MAKOTO, and LEON O. CHUA. "IMAGE PROCESSING AND SELF-ORGANIZING CNN." International Journal of Bifurcation and Chaos 15, no. 09 (September 2005): 2939–58. http://dx.doi.org/10.1142/s0218127405013794.

Abstract:
CNN templates for image processing and pattern formation are derived from neural field equations, advection equations and reaction–diffusion equations by discretizing spatial integrals and derivatives. Many useful CNN templates are derived by this approach. Furthermore, self-organization is investigated from the viewpoint of divergence of vector fields.
9

Wennekers, Thomas. "Dynamic Approximation of Spatiotemporal Receptive Fields in Nonlinear Neural Field Models." Neural Computation 14, no. 8 (August 1, 2002): 1801–25. http://dx.doi.org/10.1162/089976602760128027.

Abstract:
This article presents an approximation method to reduce the spatiotemporal behavior of localized activation peaks (also called “bumps”) in nonlinear neural field equations to a set of coupled ordinary differential equations (ODEs) for only the amplitudes and tuning widths of these peaks. This enables a simplified analysis of steady-state receptive fields and their stability, as well as spatiotemporal point spread functions and dynamic tuning properties. A lowest-order approximation for peak amplitudes alone shows that much of the well-studied behavior of small neural systems (e.g., the Wilson-Cowan oscillator) should carry over to localized solutions in neural fields. Full spatiotemporal response profiles can further be reconstructed from this low-dimensional approximation. The method is applied to two standard neural field models: a one-layer model with difference-of-gaussians connectivity kernel and a two-layer excitatory-inhibitory network. Similar models have been previously employed in numerical studies addressing orientation tuning of cortical simple cells. Explicit formulas for tuning properties, instabilities, and oscillation frequencies are given, and exemplary spatiotemporal response functions, reconstructed from the low-dimensional approximation, are compared with full network simulations.
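For orientation, a standard single-population neural field equation of Amari/Wilson-Cowan type (the specific models above add a second layer or a difference-of-Gaussians kernel, and other papers in this list add delays or stochastic terms) reads

```latex
\tau \,\frac{\partial u(x,t)}{\partial t}
  = -u(x,t)
  + \int_{\Omega} w(x - x')\, f\bigl(u(x',t)\bigr)\,\mathrm{d}x'
  + I(x,t),
```

where u is the local activity, w a synaptic connectivity kernel, f a firing-rate nonlinearity, and I an external input; the approximation described in this article reduces the bump solutions of such equations to coupled ODEs for their amplitudes and tuning widths.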
10

Mentzer, Katherine L., and J. Luc Peterson. "Neural network surrogate models for equations of state." Physics of Plasmas 30, no. 3 (March 2023): 032704. http://dx.doi.org/10.1063/5.0126708.

Abstract:
Equation of state (EOS) data provide necessary information for accurate multiphysics modeling, which is necessary for fields such as inertial confinement fusion. Here, we suggest a neural network surrogate model of energy and entropy and use thermodynamic relationships to derive other necessary thermodynamic EOS quantities. We incorporate phase information into the model by training a phase classifier and using phase-specific regression models, which improves the modal prediction accuracy. Our model predicts energy values to 1% relative error and entropy to 3.5% relative error in a log-transformed space. Although sound speed predictions require further improvement, the derived pressure values are accurate within 10% relative error. Our results suggest that neural network models can effectively model EOS for inertial confinement fusion simulation applications.
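As a hedged sketch of the kind of thermodynamic closure such a surrogate can exploit (not necessarily the authors' exact parameterization), if the networks predict specific internal energy E(ρ, T) and specific entropy S(ρ, T), other EOS quantities follow from standard relations such as

```latex
F = E - T S, \qquad
P = \rho^{2}\left.\frac{\partial F}{\partial \rho}\right|_{T}, \qquad
c_{s}^{2} = \left.\frac{\partial P}{\partial \rho}\right|_{S},
```

with F the specific Helmholtz free energy, P the pressure, and c_s the adiabatic sound speed.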
11

Atallah, Samia. "The Numerical Methods of Fractional Differential Equations." مجلة جامعة بني وليد للعلوم الإنسانية والتطبيقية 8, no. 4 (September 25, 2023): 496–512. http://dx.doi.org/10.58916/jhas.v8i4.44.

Abstract:
Differential equations with non-integer order derivatives have been shown to be suitable models for a variety of physical phenomena in several fields, including diffusion processes, damping laws, fluid mechanics, and neural networks. In this study, I discuss two numerical methods, Diethelm's method and the Adams-Bashforth-Moulton method, for solving fractional ordinary differential equations (ODEs) with initial conditions.
12

Chu, Mengyu, Lingjie Liu, Quan Zheng, Erik Franz, Hans-Peter Seidel, Christian Theobalt, and Rhaleb Zayer. "Physics informed neural fields for smoke reconstruction with sparse data." ACM Transactions on Graphics 41, no. 4 (July 2022): 1–14. http://dx.doi.org/10.1145/3528223.3530169.

Abstract:
High-fidelity reconstruction of dynamic fluids from sparse multiview RGB videos remains a formidable challenge, due to the complexity of the underlying physics as well as the severe occlusion and complex lighting in the captured data. Existing solutions either assume knowledge of obstacles and lighting, or only focus on simple fluid scenes without obstacles or complex lighting, and thus are unsuitable for real-world scenes with unknown lighting conditions or arbitrary obstacles. We present the first method to reconstruct dynamic fluid phenomena by leveraging the governing physics (i.e., the Navier-Stokes equations) in an end-to-end optimization from a mere set of sparse video frames, without taking lighting conditions, geometry information, or boundary conditions as input. Our method provides a continuous spatio-temporal scene representation using neural networks as the ansatz of density and velocity solution functions for fluids as well as the radiance field for static objects. With a hybrid architecture that separates static and dynamic contents, fluid interactions with static obstacles are reconstructed for the first time without additional geometry input or human labeling. By augmenting time-varying neural radiance fields with physics-informed deep learning, our method benefits from the supervision of images and physical priors. Our progressively growing model with regularization further disentangles the density-color ambiguity in the radiance field, which allows for a more robust optimization from the given input of sparse views. A pretrained density-to-velocity fluid model is additionally leveraged as a data prior to avoid suboptimal velocity solutions which underestimate vorticity but trivially fulfill the physical equations. Our method exhibits high-quality results with relaxed constraints and strong flexibility on a representative set of synthetic and real flow captures. Code and sample tests are at https://people.mpi-inf.mpg.de/~mchu/projects/PI-NeRF/.
13

Guo, Yanan, Xiaoqun Cao, Bainian Liu, and Mei Gao. "Solving Partial Differential Equations Using Deep Learning and Physical Constraints." Applied Sciences 10, no. 17 (August 26, 2020): 5917. http://dx.doi.org/10.3390/app10175917.

Abstract:
The various studies of partial differential equations (PDEs) are hot topics of mathematical research. Among them, solving PDEs is a very important and difficult task. Since many partial differential equations do not have analytical solutions, numerical methods are widely used to solve PDEs. Although numerical methods have been widely used with good performance, researchers are still searching for new methods for solving partial differential equations. In recent years, deep learning has achieved great success in many fields, such as image classification and natural language processing. Studies have shown that deep neural networks have powerful function-fitting capabilities and have great potential in the study of partial differential equations. In this paper, we introduce an improved Physics Informed Neural Network (PINN) for solving partial differential equations. PINN takes the physical information that is contained in partial differential equations as a regularization term, which improves the performance of neural networks. In this study, we use the method to study the wave equation, the KdV–Burgers equation, and the KdV equation. The experimental results show that PINN is effective in solving partial differential equations and deserves further research.
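The "regularization term" referred to above is typically the mean-squared PDE residual at collocation points. For a generic PDE written as N[u] = 0 and a solution network u_θ, a hedged sketch of the composite PINN loss (the weighting λ is an assumed hyperparameter) is

```latex
\mathcal{L}(\theta)
  = \frac{1}{N_d}\sum_{j=1}^{N_d}\bigl|u_\theta(x_j,t_j) - u_j\bigr|^{2}
  \;+\; \lambda\,\frac{1}{N_c}\sum_{i=1}^{N_c}\bigl|\mathcal{N}[u_\theta](x_i,t_i)\bigr|^{2},
```

where the first sum enforces initial, boundary, or measured data and the second penalizes the residual of, e.g., the wave, KdV–Burgers, or KdV equation at collocation points.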
14

Raissi, Maziar, Alireza Yazdani, and George Em Karniadakis. "Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations." Science 367, no. 6481 (January 30, 2020): 1026–30. http://dx.doi.org/10.1126/science.aaw4741.

Abstract:
For centuries, flow visualization has been the art of making fluid motion visible in physical and biological systems. Although such flow patterns can be, in principle, described by the Navier-Stokes equations, extracting the velocity and pressure fields directly from the images is challenging. We addressed this problem by developing hidden fluid mechanics (HFM), a physics-informed deep-learning framework capable of encoding the Navier-Stokes equations into the neural networks while being agnostic to the geometry or the initial and boundary conditions. We demonstrate HFM for several physical and biomedical problems by extracting quantitative information for which direct measurements may not be possible. HFM is robust to low resolution and substantial noise in the observation data, which is important for potential applications.
15

Kwessi, Eddy. "A Consistent Estimator of Nontrivial Stationary Solutions of Dynamic Neural Fields." Stats 4, no. 1 (February 13, 2021): 122–37. http://dx.doi.org/10.3390/stats4010010.

Abstract:
Dynamics of neural fields are tools used in neurosciences to understand the activities generated by large ensembles of neurons. They are also used in network analysis and neuroinformatics, in particular to model a continuum of neural networks. They are mathematical models that describe the average behavior of these congregations of neurons, which are often in large numbers, even in small cortexes of the brain. Therefore, changes of average activity (potential, connectivity, firing rate, etc.) are described using systems of partial differential equations. In their continuous or discrete forms, these systems have a rich array of properties, among which is the existence of nontrivial stationary solutions. In this paper, we propose an estimator for nontrivial solutions of dynamical neural fields with a single layer. The estimator is shown to be consistent, and a computational algorithm is proposed to help carry out implementation. An illustration of this consistency is given based on different input functions, different kernels, and different pulse emission rate functions.
16

Di Carlo, D., D. Heitz, and T. Corpetti. "Post Processing Sparse And Instantaneous 2D Velocity Fields Using Physics-Informed Neural Networks." Proceedings of the International Symposium on the Application of Laser and Imaging Techniques to Fluid Mechanics 20 (July 11, 2022): 1–11. http://dx.doi.org/10.55037/lxlaser.20th.183.

Abstract:
This work tackles the problem of resolving high-resolution velocity fields from a set of sparse off-grid observations. We follow the framework of Physics-Informed Neural Networks, where simple multi-layer perceptrons (MLPs) are trained to solve partial differential equations (PDEs). In contrast with other state-of-the-art methods based on Convolutional Neural Networks, these models can be applied to super-resolve sparse Lagrangian velocity measurements. Moreover, such a framework can easily be extended to output divergence-free quantities and offers a simple implementation of physical priors as regularization terms. In particular, we employ a sub-grid model based on structure functions to improve the accuracy of the super-resolved velocity fields of turbulent flows. Numerical experimentation on synthetic data shows that the proposed approach can accurately reconstruct dense Eulerian velocity fields from sparse Lagrangian velocity measurements.
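One common way to obtain exactly divergence-free 2D outputs, as mentioned above (a hedged sketch, not necessarily the authors' implementation), is to predict a scalar stream function ψ_θ(x, y) and differentiate it: u = ∂ψ/∂y, v = −∂ψ/∂x satisfies ∂u/∂x + ∂v/∂y = 0 identically.

```python
import torch

# Illustrative stream-function network; the architecture is an assumption.
psi_net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

def velocity(xy):
    """Divergence-free 2D velocity derived from a learned stream function psi(x, y)."""
    xy = xy.clone().requires_grad_(True)
    psi = psi_net(xy)
    grads = torch.autograd.grad(psi, xy, torch.ones_like(psi), create_graph=True)[0]
    u = grads[:, 1:2]      #  dpsi/dy
    v = -grads[:, 0:1]     # -dpsi/dx
    return u, v

u, v = velocity(torch.rand(8, 2))
# du/dx + dv/dy = d(dpsi/dy)/dx - d(dpsi/dx)/dy = 0 by equality of mixed partials.
```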
17

BÄKER, M., T. KALKREUTER, G. MACK, and M. SPEH. "NEURAL MULTIGRID METHODS FOR GAUGE THEORIES AND OTHER DISORDERED SYSTEMS." International Journal of Modern Physics C 04, no. 02 (April 1993): 239–47. http://dx.doi.org/10.1142/s0129183193000252.

Abstract:
We present evidence that multigrid works for wave equations in disordered systems, e.g. in the presence of gauge fields, no matter how strong the disorder, but one needs to introduce a "neural computations" point of view into large scale simulations: First, the system must learn how to do the simulations efficiently, then do the simulation (fast). The method can also be used to provide smooth interpolation kernels which are needed in multigrid Monte Carlo updates.
18

Pang, Xue, Jian Wang, Faliang Yin, and Jun Yao. "Construction of elliptic stochastic partial differential equations solver in groundwater flow with convolutional neural networks." Journal of Physics: Conference Series 2083, no. 4 (November 1, 2021): 042064. http://dx.doi.org/10.1088/1742-6596/2083/4/042064.

Abstract:
Elliptic stochastic partial differential equations (SPDEs) play an indispensable role in mathematics, engineering and other fields, and solution methods for them emerge endlessly with the progress of science and technology. In this paper, we make use of convolutional neural networks (CNNs), which are widely used in machine learning, to construct a solver for SPDEs. SPDEs with Neumann boundary conditions are considered, and two CNNs are employed. One is used to deal with the essential equation, and the other satisfies the boundary conditions. With the help of the length factor, the integrated neural network model can predict the solution of the equations accurately. We show an example of groundwater flow to evaluate the proposed model with a Gaussian random field (GRF). The experimental results show that the proposed neural network solver can approximate the traditional numerical algorithm, and has high computational efficiency.
19

Aqil, Marco, Selen Atasoy, Morten L. Kringelbach, and Rikkert Hindriks. "Graph neural fields: A framework for spatiotemporal dynamical models on the human connectome." PLOS Computational Biology 17, no. 1 (January 28, 2021): e1008310. http://dx.doi.org/10.1371/journal.pcbi.1008310.

Abstract:
Tools from the field of graph signal processing, in particular the graph Laplacian operator, have recently been successfully applied to the investigation of structure-function relationships in the human brain. The eigenvectors of the human connectome graph Laplacian, dubbed “connectome harmonics”, have been shown to relate to the functionally relevant resting-state networks. Whole-brain modelling of brain activity combines structural connectivity with local dynamical models to provide insight into the large-scale functional organization of the human brain. In this study, we employ the graph Laplacian and its properties to define and implement a large class of neural activity models directly on the human connectome. These models, consisting of systems of stochastic integrodifferential equations on graphs, are dubbed graph neural fields, in analogy with the well-established continuous neural fields. We obtain analytic predictions for harmonic and temporal power spectra, as well as functional connectivity and coherence matrices, of graph neural fields, with a technique dubbed CHAOSS (shorthand for Connectome-Harmonic Analysis Of Spatiotemporal Spectra). Combining graph neural fields with appropriate observation models allows for estimating model parameters from experimental data as obtained from electroencephalography (EEG), magnetoencephalography (MEG), or functional magnetic resonance imaging (fMRI). As an example application, we study a stochastic Wilson-Cowan graph neural field model on a high-resolution connectome graph constructed from diffusion tensor imaging (DTI) and structural MRI data. We show that the model equilibrium fluctuations can reproduce the empirically observed harmonic power spectrum of resting-state fMRI data, and predict its functional connectivity, with a high level of detail. Graph neural fields natively allow the inclusion of important features of cortical anatomy and fast computations of observable quantities for comparison with multimodal empirical data. They thus appear particularly suitable for modelling whole-brain activity at mesoscopic scales, and opening new potential avenues for connectome-graph-based investigations of structure-function relationships.
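A hedged sketch of the basic object behind "connectome harmonics" (a generic toy graph here, not the authors' DTI-derived connectome): the eigenvectors of the combinatorial graph Laplacian L = D − A serve as the spatial harmonics on which graph neural fields are expanded.

```python
import numpy as np

def graph_laplacian_harmonics(A):
    """Eigen-decomposition of the combinatorial graph Laplacian L = D - A.

    A : symmetric (n, n) weighted adjacency matrix.
    Returns eigenvalues (ascending) and eigenvectors ('harmonics') as columns.
    """
    D = np.diag(A.sum(axis=1))
    L = D - A
    eigvals, eigvecs = np.linalg.eigh(L)   # symmetric matrix => real spectrum
    return eigvals, eigvecs

# Toy example (illustrative, not a connectome): a ring graph of 6 nodes;
# the eigenvector for eigenvalue 0 is the constant mode.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
vals, vecs = graph_laplacian_harmonics(A)
print(np.round(vals, 3))
```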
20

Peng, Liangrong, and Liu Hong. "Recent Advances in Conservation–Dissipation Formalism for Irreversible Processes." Entropy 23, no. 11 (October 31, 2021): 1447. http://dx.doi.org/10.3390/e23111447.

Abstract:
The main purpose of this review is to summarize the recent advances of the Conservation–Dissipation Formalism (CDF), a new way for constructing both thermodynamically compatible and mathematically stable and well-posed models for irreversible processes. The contents include but are not restricted to the CDF’s physical motivations, mathematical foundations, formulations of several classical models in mathematical physics from master equations and Fokker–Planck equations to Boltzmann equations and quasi-linear Maxwell equations, as well as novel applications in the fields of non-Fourier heat conduction, non-Newtonian viscoelastic fluids, wave propagation/transportation in geophysics and neural science, soft matter physics, etc. Connections with other popular theories in the field of non-equilibrium thermodynamics are examined too.
21

Hu, Beichao, and Dwayne McDaniel. "Applying Physics-Informed Neural Networks to Solve Navier–Stokes Equations for Laminar Flow around a Particle." Mathematical and Computational Applications 28, no. 5 (October 13, 2023): 102. http://dx.doi.org/10.3390/mca28050102.

Abstract:
In recent years, Physics-Informed Neural Networks (PINNs) have drawn great interest among researchers as a tool to solve computational physics problems. Unlike conventional neural networks, which are black-box models that “blindly” establish a correlation between input and output variables using a large quantity of labeled data, PINNs directly embed physical laws (primarily partial differential equations) within the loss function of neural networks. By minimizing the loss function, this approach allows the output variables to automatically satisfy physical equations without the need for labeled data. The Navier–Stokes equation is one of the most classic governing equations in thermal fluid engineering. This study constructs a PINN to solve the Navier–Stokes equations for a 2D incompressible laminar flow problem. Flows passing around a 2D circular particle are chosen as the benchmark case, and an elliptical particle is also examined to enrich the research. The velocity and pressure fields are predicted by the PINNs, and the results are compared with those derived from Computational Fluid Dynamics (CFD). Additionally, the particle drag force coefficient is calculated to quantify the discrepancy in the results of the PINNs as compared to CFD outcomes. The drag coefficient maintained an error within 10% across all test scenarios.
22

Shinde, Rajwardhan, Onkar Dherange, Rahul Gavhane, Hemant Koul, and Nilam Patil. "HANDWRITTEN MATHEMATICAL EQUATION SOLVER." International Journal of Engineering Applied Sciences and Technology 6, no. 10 (February 1, 2022): 146–49. http://dx.doi.org/10.33564/ijeast.2022.v06i10.018.

Abstract:
With recent developments in artificial intelligence and deep learning, every major field that uses computers is trying to ease its work using deep learning methods. Deep learning is used in a wide range of fields due to its diverse range of applications, such as health, sports, robotics, and education. In deep learning, Convolutional Neural Networks (CNNs) are used in image classification, pattern recognition, text classification, face recognition, live monitoring systems, handwriting recognition, digit recognition, etc. In this paper, we propose a system for educational use in which the recognition and solving of mathematical equations is done by machine. For recognition of equations, the system uses a Convolutional Neural Network (CNN) model. The proposed system can recognize and solve mathematical equations with basic operations (-, +, /, *) of multiple digits, as well as polynomial equations. The model is trained with the Modified National Institute of Standards and Technology (MNIST) dataset as well as a manually prepared dataset of operator symbols ("-", "+", "/", "*", "(", ")"). Further, the system uses an RNN model to solve the recognized operations.
23

Yang, Zhou, Yuwang Xu, Jionglin Jing, Xuepeng Fu, Bofu Wang, Haojie Ren, Mengmeng Zhang, and Tongxiao Sun. "Investigation of Physics-Informed Neural Networks to Reconstruct a Flow Field with High Resolution." Journal of Marine Science and Engineering 11, no. 11 (October 25, 2023): 2045. http://dx.doi.org/10.3390/jmse11112045.

Abstract:
Particle image velocimetry (PIV) is a widely used experimental technique in ocean engineering, for instance, to study the vortex fields near marine risers and the wake fields behind wind turbines or ship propellers. However, the flow fields measured using PIV in water tanks or wind tunnels always have low resolution; hence, it is difficult to accurately reveal the mechanics behind the complex phenomena sometimes observed. In this paper, physics-informed neural networks (PINNs), which introduce the Navier–Stokes equations or the continuity equation into the loss function during training to reconstruct a flow field with high resolution, are investigated. The accuracy is compared with the cubic spline interpolation method and a classic neural network in a case study of reconstructing a two-dimensional flow field around a cylinder, which is obtained through direct numerical simulation. Finally, the validated PINN method is applied to reconstruct a flow field measured using PIV and shows good performance.
24

Ta, Hoa, Shi Wen Wong, Nathan McClanahan, Jung-Han Kimn, and Kaiqun Fu. "Exploration on Physics-Informed Neural Networks on Partial Differential Equations (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16344–45. http://dx.doi.org/10.1609/aaai.v37i13.27032.

Abstract:
Data-driven related solutions are dominating various scientific fields with the assistance of machine learning and data analytics. Finding effective solutions has long been discussed in the area of machine learning. The recent decade has witnessed the promising performance of the Physics-Informed Neural Networks (PINN) in bridging the gap between real-world scientific problems and machine learning models. In this paper, we explore the behavior of PINN in a particular range of different diffusion coefficients under specific boundary conditions. In addition, different initial conditions of partial differential equations are solved by applying the proposed PINN. Our paper illustrates how the effectiveness of the PINN can change under various scenarios. As a result, we demonstrate a better insight into the behaviors of the PINN and how to make the proposed method more robust while encountering different scientific and engineering problems.
25

Liu, Xiangdong, and Yu Gu. "Study of Pricing of High-Dimensional Financial Derivatives Based on Deep Learning." Mathematics 11, no. 12 (June 11, 2023): 2658. http://dx.doi.org/10.3390/math11122658.

Abstract:
Many problems in the fields of finance and actuarial science can be transformed into the problem of solving backward stochastic differential equations (BSDE) and partial differential equations (PDEs) with jumps, which are often difficult to solve in high-dimensional cases. To solve this problem, this paper applies the deep learning algorithm to solve a class of high-dimensional nonlinear partial differential equations with jump terms and their corresponding backward stochastic differential equations (BSDEs) with jump terms. Using the nonlinear Feynman-Kac formula, the problem of solving this kind of PDE is transformed into the problem of solving the corresponding backward stochastic differential equations with jump terms, and the numerical solution problem is turned into a stochastic control problem. At the same time, the gradient and jump process of the unknown solution are separately regarded as the strategy function, and they are approximated, respectively, by using two multilayer neural networks as function approximators. Thus, the deep learning-based method is used to overcome the “curse of dimensionality” caused by high-dimensional PDE with jump, and the numerical solution is obtained. In addition, this paper proposes a new optimization algorithm based on the existing neural network random optimization algorithm, and compares the results with the traditional optimization algorithm, and achieves good results. Finally, the proposed method is applied to three practical high-dimensional problems: Hamilton-Jacobi-Bellman equation, bond pricing under the jump Vasicek model and option pricing under the jump diffusion model. The proposed numerical method has obtained satisfactory accuracy and efficiency. The method has important application value and practical significance in investment decision-making, option pricing, insurance and other fields.
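The nonlinear Feynman-Kac correspondence invoked above couples a semilinear PDE to a forward-backward stochastic system; a hedged sketch of the standard jump-free form, which this paper extends with jump terms, is

```latex
\mathrm{d}X_t = \mu(t, X_t)\,\mathrm{d}t + \sigma(t, X_t)\,\mathrm{d}W_t, \qquad
\mathrm{d}Y_t = -f(t, X_t, Y_t, Z_t)\,\mathrm{d}t + Z_t^{\top}\mathrm{d}W_t, \qquad
Y_T = g(X_T),
```

with Y_t = u(t, X_t) and Z_t = σ(t, X_t)^⊤ ∇_x u(t, X_t); in the deep-learning approach, neural networks approximate Z_t (and, with jumps, the jump integrand) at each time step, and the BSDE is stepped forward as a stochastic control problem.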
26

ATALAY, VOLKAN, and EROL GELENBE. "PARALLEL ALGORITHM FOR COLOUR TEXTURE GENERATION USING THE RANDOM NEURAL NETWORK MODEL." International Journal of Pattern Recognition and Artificial Intelligence 06, no. 02n03 (August 1992): 437–46. http://dx.doi.org/10.1142/s0218001492000266.

Abstract:
We propose a parallel algorithm for the generation of colour textures based upon the non-linear equations of the "multiple class random neural network model". A neuron is used to obtain the texture value of each pixel in the bit-map plane. Each neuron interacts with its immediate planar neighbours in order to obtain the texture for the whole plane. A model which uses at most 4(C² + C) parameters for the whole network, where C is the number of colours, is proposed. Numerical iterations of the non-linear field equations of the neural network model, starting with a randomly generated image, are shown to produce textures having different desirable features such as granularity, inclination and randomness. The experimental evaluation shows that the random network provides good results, at a computational cost which is considerably less than that of other approaches such as Markov random fields.
27

Touboul, Jonathan. "Mean-field equations for stochastic firing-rate neural fields with delays: Derivation and noise-induced transitions." Physica D: Nonlinear Phenomena 241, no. 15 (August 2012): 1223–44. http://dx.doi.org/10.1016/j.physd.2012.03.010.

28

Schaback, Robert, and Holger Wendland. "Kernel techniques: From machine learning to meshless methods." Acta Numerica 15 (May 2006): 543–639. http://dx.doi.org/10.1017/s0962492906270016.

Abstract:
Kernels are valuable tools in various fields of numerical analysis, including approximation, interpolation, meshless methods for solving partial differential equations, neural networks, and machine learning. This contribution explains why and how kernels are applied in these disciplines. It uncovers the links between them, in so far as they are related to kernel techniques. It addresses non-expert readers and focuses on practical guidelines for using kernels in applications.
29

Williams, Kyle, Stephen Rudin, Daniel Bednarek, Ammad Baig, Adnan Hussain Siddiqui, Elad I. Levy, and Ciprian Ionita. "226 Advancing Neurovascular Diagnostics for Abnormal Hemodynamic Conditions Through AI-Driven Physics-informed Neural Networks." Neurosurgery 70, Supplement_1 (April 2024): 61. http://dx.doi.org/10.1227/neu.0000000000002809_226.

Abstract:
INTRODUCTION: Many studies have explored the application of machine learning and neural networks in extracting critical diagnostic information from structural and functional medical imaging. While these methods show potential for improving efficiency, concerns arise when interpreting subtle imaging features, especially when training data is limited. Physics-informed neural networks (PINNs) address these issues by incorporating governing equations from physical and mechanical models into the analysis. METHODS: We examined the use of a PINN that enforces the convection equation, relating contrast media propagation to blood velocity, the Navier-Stokes equations for velocity and pressure distributions, and the conservation of mass requirement to calculate velocity and pressure distributions within patient-specific blood vessels. We also implemented a boundary condition that accounts for real contrast-media propagation from 1000 fps high-speed angiographic (HSA) image sequences, allowing for an assumption-free problem space within a medical imaging framework. Velocity fields and pressure gradients were calculated for flow in aneurysms and carotid bifurcations. RESULTS: Our method demonstrates comparable results to computational fluid dynamics (CFD) without requiring manual data processing, significantly improving the efficiency of calculating high-resolution velocity and pressure fields. By integrating AI with physics modeling, this novel approach holds the potential for advancing neurovascular diagnostics and providing more accurate, personalized treatment plans for patients with neurovascular pathologies. CONCLUSIONS: This innovative approach, integrating physics modeling and artificial intelligence, offers more accurate and personalized diagnostics for patients with neurovascular pathologies. The PINN method not only achieves results comparable to computational fluid dynamics without the need for manual data processing but also greatly enhances the efficiency of calculating high-resolution velocity and pressure fields.
30

ATALAY, VOLKAN, EROL GELENBE, and NESE YALABIK. "THE RANDOM NEURAL NETWORK MODEL FOR TEXTURE GENERATION." International Journal of Pattern Recognition and Artificial Intelligence 06, no. 01 (April 1992): 131–41. http://dx.doi.org/10.1142/s0218001492000072.

Abstract:
The generation of artificial textures is a useful function in image synthesis systems. The purpose of this paper is to describe the use of the random neural network (RN) model developed by Gelenbe to generate various textures having different characteristics. An eight-parameter model, based on a choice of the local interaction parameters between neighbouring neurons in the plane, is proposed. Numerical iterations of the field equations of the neural network model, starting with a randomly generated gray-level image, are shown to produce textures having different desirable features such as granularity, inclination, and randomness. The experimental evaluation shows that the random network provides good results, at a computational cost less than that of other approaches such as Markov random fields. Various examples of textures generated by our method are presented.
31

Baazeem, Amani S., Muhammad Shoaib Arif, and Kamaleldin Abodayeh. "An Efficient and Accurate Approach to Electrical Boundary Layer Nanofluid Flow Simulation: A Use of Artificial Intelligence." Processes 11, no. 9 (September 13, 2023): 2736. http://dx.doi.org/10.3390/pr11092736.

Abstract:
Engineering and technological research groups are becoming interested in neural network techniques to improve productivity, business strategies, and societal development. In this paper, an explicit numerical scheme is given for both linear and nonlinear differential equations. The scheme is correct to second order. Additionally, the scheme’s consistency and stability are guaranteed. Using Levenberg–Marquardt backpropagation, the effect of including an induced magnetic field in a mathematical model for electrical boundary layer nanofluid flow on a flat plate is quantitatively investigated with artificial neural networks. Later, the model is reduced into a set of boundary value problems, which are then resolved using the suggested scheme and a shooting strategy. The outcomes are also contrasted with earlier studies and the MATLAB solver bvp4c for validation purposes. In addition, neural networking is also employed for mapping inputs to outputs for velocity, temperature, and concentration profiles. These results prove that artificial neural networks can make precise forecasts and optimizations. Using a neural network to optimize the fluid flow in an electrical boundary layer while subjected to an induced magnetic field is a promising application of the suggested computational scheme. Fluid dynamics benefits greatly from combining numerical methods and artificial neural networks, which could lead to new developments in various fields. Results from this study may aid in optimizing fluid systems, leading to greater productivity and effectiveness in numerous technical fields.
32

Ara, Asmat, Oyoon Abdul Razzaq, and Najeeb Alam Khan. "A Single Layer Functional Link Artificial Neural Network based on Chebyshev Polynomials for Neural Evaluations of Nonlinear Nth Order Fuzzy Differential Equations." Annals of West University of Timisoara - Mathematics and Computer Science 56, no. 1 (July 1, 2018): 3–22. http://dx.doi.org/10.2478/awutm-2018-0001.

Abstract:
Bearing in mind the considerable importance of fuzzy differential equations (FDEs) in different fields of science and engineering, in this paper nonlinear nth order FDEs are approximated heuristically. The analysis is carried out using a Chebyshev neural network (ChNN), which is a type of single-layer functional link artificial neural network (FLANN). Besides, an explication of generalized Hukuhara differentiability (gH-differentiability) is also added for the nth order differentiability of fuzzy-valued functions. Moreover, the general formulation of the ChNN structure for the governing problem is described and assessed on some examples of nonlinear FDEs. In addition, a comparison analysis of the proposed method with the Runge-Kutta method is added, and error bars are portrayed that clarify the feasibility of the attained solutions and the validity of the method.
33

Chen, Simin, Zhixiang Liu, Wenbo Zhang, and Jinkun Yang. "A Hard-Constraint Wide-Body Physics-Informed Neural Network Model for Solving Multiple Cases in Forward Problems for Partial Differential Equations." Applied Sciences 14, no. 1 (December 25, 2023): 189. http://dx.doi.org/10.3390/app14010189.

Abstract:
In the fields of physics and engineering, it is crucial to understand phase transition dynamics. This field involves fundamental partial differential equations (PDEs) such as the Allen–Cahn, Burgers, and two-dimensional (2D) wave equations. In alloys, the evolution of the phase transition interface is described by the Allen–Cahn equation. Vibrational and wave phenomena during phase transitions are modeled using the Burgers and 2D wave equations. The combination of these equations gives comprehensive information about the dynamic behavior during a phase transition. Numerical modeling methods such as finite difference method (FDM), finite volume method (FVM) and finite element method (FEM) are often applied to solve phase transition problems that involve many partial differential equations (PDEs). However, physical problems can lead to computational complexity, increasing computational costs dramatically. Physics-informed neural networks (PINNs), as new neural network algorithms, can integrate physical law constraints with neural network algorithms to solve partial differential equations (PDEs), providing a new way to solve PDEs in addition to the traditional numerical modeling methods. In this paper, a hard-constraint wide-body PINN (HWPINN) model based on PINN is proposed. This model improves the effectiveness of the approximation by adding a wide-body structure to the approximation neural network part of the PINN architecture. A hard constraint is used in the physically driven part instead of the traditional practice of PINN constituting a residual network with boundary or initial conditions. The high accuracy of HWPINN for solving PDEs is verified through numerical experiments. One-dimensional (1D) Allen–Cahn, one-dimensional Burgers, and two-dimensional wave equation cases are set up for numerical experiments. The properties of the HWPINN model are inferred from the experimental data. The solution predicted by the model is compared with the FDM solution for evaluating the experimental error in the numerical experiments. HWPINN shows great potential for solving the PDE forward problem and provides a new approach for solving PDEs.
34

Jakeer, Shaik, Seethi Reddy Reddisekhar Reddy, Sathishkumar Veerappampalayam Easwaramoorthy, Hayath Thameem Basha, and Jaehyuk Cho. "Exploring the Influence of Induced Magnetic Fields and Double-Diffusive Convection on Carreau Nanofluid Flow through Diverse Geometries: A Comparative Study Using Numerical and ANN Approaches." Mathematics 11, no. 17 (August 27, 2023): 3687. http://dx.doi.org/10.3390/math11173687.

Abstract:
This current investigation aims to explore the significance of induced magnetic fields and double-diffusive convection in the radiative flow of Carreau nanofluid through three distinct geometries. To simplify the fluid transport equations, appropriate self-similarity variables were employed, converting them into ordinary differential equations. These equations were subsequently solved using the Runge–Kutta–Fehlberg (RKF) method. Through graphical representations like graphs and tables, the study demonstrates how various dynamic factors influence the fluid’s transport characteristics. Additionally, the artificial neural network (ANN) approach is considered an alternative method to handle fluid flow issues, significantly reducing processing time. In this study, a novel intelligent numerical computing approach was adopted, implementing a Levenberg–Marquardt algorithm-based MLP feed-forward back-propagation ANN. Data collection was conducted to evaluate, validate, and guide the artificial neural network model. Throughout all the investigated geometries, both velocity and induced magnetic profiles exhibit a declining trend for higher values of the magnetic parameter. An increase in the Dufour number corresponds to a rise in the nanofluid temperature. The concentration of nanofluid increases with higher values of the Soret number. Similarly, the nanofluid velocity increases with higher velocity slip parameter values, while the fluid temperature exhibits opposite behavior, decreasing with increasing velocity slip parameter values.
35

Pioch, Fabian, Jan Hauke Harmening, Andreas Maximilian Müller, Franz-Josef Peitzmann, Dieter Schramm, and Ould el Moctar. "Turbulence Modeling for Physics-Informed Neural Networks: Comparison of Different RANS Models for the Backward-Facing Step Flow." Fluids 8, no. 2 (January 26, 2023): 43. http://dx.doi.org/10.3390/fluids8020043.

Abstract:
Physics-informed neural networks (PINN) can be used to predict flow fields with a minimum of simulated or measured training data. As most technical flows are turbulent, PINNs based on the Reynolds-averaged Navier–Stokes (RANS) equations incorporating a turbulence model are needed. Several studies demonstrated the capability of PINNs to solve the Navier–Stokes equations for laminar flows. However, little work has been published concerning the application of PINNs to solve the RANS equations for turbulent flows. This study applied a RANS-based PINN approach to a backward-facing step flow at a Reynolds number of 5100. The standard k-ω model, the mixing length model, an equation-free νt and an equation-free pseudo-Reynolds stress model were applied. The results compared favorably to DNS data when provided with three vertical lines of labeled training data. For five lines of training data, all models predicted the separated shear layer and the associated vortex more accurately.
36

Portal-Porras, Koldo, Unai Fernandez-Gamiz, Ainara Ugarte-Anero, Ekaitz Zulueta, and Asier Zulueta. "Alternative Artificial Neural Network Structures for Turbulent Flow Velocity Field Prediction." Mathematics 9, no. 16 (August 14, 2021): 1939. http://dx.doi.org/10.3390/math9161939.

Abstract:
Turbulence in fluids has been a popular research topic for many years due to its influence on a wide range of applications. Computational Fluid Dynamics (CFD) tools are able to provide plenty of information about this phenomenon, but their computational cost often makes the use of these tools unfeasible. For that reason, in recent years, turbulence modelling using Artificial Neural Networks (ANNs) is becoming increasingly popular. These networks typically calculate directly the desired magnitude, having input information about the computational domain. In this paper, a Convolutional Neural Network (CNN) for predicting different magnitudes of turbulent flows around different geometries by approximating the equations of the Reynolds-Averaged Navier-Stokes (RANS)-based realizable k-ε two-layer turbulence model is proposed. Using that CNN, alternative network structures are proposed to predict the velocity fields of a turbulent flow around different geometries on a rectangular channel, with a preliminary stage to predict pressure and vorticity fields before calculating the velocity fields, and the obtained results are compared with the ones obtained with the basic structure. The results demonstrate that the proposed structures clearly outperform the basic one, especially when the flow becomes uncertain. In addition, considering the results, the best network configuration is proposed. That network is tested with a domain with multiple geometries and a domain with a narrowing of the channel, which are domains with different conditions from the training ones, showing fairly accurate predictions.
37

Abudusaimaiti, Mairemunisa, Abuduwali Abudukeremu, and Amina Sabir. "Fixed/Preassigned-Time Stochastic Synchronization of Complex-Valued Fuzzy Neural Networks with Time Delay." Mathematics 11, no. 17 (September 2, 2023): 3769. http://dx.doi.org/10.3390/math11173769.

Abstract:
Instead of the separation approach, this paper mainly centers on studying the fixed/preassigned-time (FXT/PAT) synchronization of a type of complex-valued stochastic fuzzy cellular neural networks (CVSFCNNs) with time delay based on the direct method. Firstly, some basic properties of the sign function in complex fields and some generalized FXT/PAT stability lemmas for nonlinear stochastic differential equations are introduced. Secondly, by designing two delay-dependent complex-valued controllers with/without a sign function, sufficient conditions for CVSFCNNs to achieve FXT/PAT synchronization are obtained. Finally, the feasibility of the theoretical results is verified through a numerical example.
38

Du, Mengxuan. "Analysis of Chaos Fluctuations in Atmospheric Prediction, Fluid Mechanics and Power System Load Forecasting." Highlights in Science, Engineering and Technology 72 (December 15, 2023): 594–601. http://dx.doi.org/10.54097/3kqd5952.

Abstract:
Chaos fluctuation is one of the hottest topics in differential equations and dynamics systems. On this basis, this study discusses the development of chaos theory, introduces Lorenz's discovery of the butterfly effect, Ruelle's research on strange attractors, and Mandelbrot's contributions to fractals; briefly explains the basic principles of chaos theory and bifurcation theory and analyzes them based on partial differential equations. It provides the Lyapunov exponent method for identifying chaos and illustrates the basic properties and laws of chaotic motion using the Logistic map as an example; focuses on the applications of chaos theory in atmospheric science, fluid mechanics, and load forecasting, explaining the basic principles, research methods, and roles of the theory in solving these three complex fields involving disaster prediction, turbulence analysis, SVM model and Legendre orthogonal basis neural networks. Finally, this paper analyzes the limitations of chaos theory in the study of multiple parameters, multiple scales, and unknown influencing factors. These results express hope for the development of chaos theory in more fields such as communication, quantum computing, and biomedical science, giving prospects for the future.
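A minimal, hedged illustration of the Lyapunov-exponent test mentioned above, for the logistic map x_{n+1} = r x_n (1 − x_n): the exponent is the orbit average of ln|r(1 − 2x_n)|, negative in periodic regimes and positive (about ln 2 at r = 4) in the chaotic regime. The parameters below are illustrative choices.

```python
import math

def lyapunov_logistic(r, x0=0.2, n_transient=1000, n_iter=100_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1 - x)."""
    x = x0
    for _ in range(n_transient):                   # discard transient behaviour
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))   # ln|f'(x_n)|
        x = r * x * (1.0 - x)
    return acc / n_iter

print(lyapunov_logistic(3.2))   # negative: periodic (non-chaotic) regime
print(lyapunov_logistic(4.0))   # approximately ln 2 > 0: chaotic regime
```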
39

Hu, Fujia, Weebeng Tay, Yilun Zhou, and Boocheong Khoo. "A Novel Hybrid Deep Learning Method for Predicting the Flow Fields of Biomimetic Flapping Wings." Biomimetics 9, no. 2 (January 25, 2024): 72. http://dx.doi.org/10.3390/biomimetics9020072.

Abstract:
The physics governing the fluid dynamics of bio-inspired flapping wings is effectively characterized by partial differential equations (PDEs). Nevertheless, the process of discretizing these equations at spatiotemporal scales is notably time consuming and resource intensive. Traditional PDE-based computations are constrained in their applicability, which is mainly due to the presence of numerous shape parameters and intricate flow patterns associated with bionic flapping wings. Consequently, there is a significant demand for a rapid and accurate solution to nonlinear PDEs, to facilitate the analysis of bionic flapping structures. Deep learning, especially physics-informed deep learning (PINN), offers an alternative due to its great nonlinear curve-fitting capability. In the present work, a hybrid coarse-data-driven physics-informed neural network model (HCDD-PINN) is proposed to improve the accuracy and reliability of predicting the time evolution of nonlinear PDEs solutions, by using an order-of-magnitude-coarser grid than traditional computational fluid dynamics (CFDs) require as internal training data. The architecture is devised to enforce the initial and boundary conditions, and incorporate the governing equations and the low-resolution spatiotemporal internal data into the loss function of the neural network, to drive the training. Compared to the original PINN with no internal data, the training and predicting dynamics of HCDD-PINN with different resolutions of coarse internal data are analyzed on the problem relevant to the two-dimensional unsteady flapping wing, which involves unsteady flow features and moving boundaries. Additionally, a hyper-parametrical study is conducted to obtain an optimal model for the problem under consideration, which is then utilized for investigating the effects of the snapshot and fraction of the coarse internal data on the HCDD-PINN’s performances. The results show that the proposed framework has a sufficient stability and accuracy for solving the considered biomimetic flapping-wing problem, and its great potential means that it can be considered as an alternative to accelerate or replace traditional CFD solvers in the future. The interested variables of the flow field at any instant can be rapidly obtained by the trained HCDD-PINN model, which is superior to the traditional CFD method that usually needs to be re-run. For the three-dimensional and optimization problems of flapping wings, the advantages of the proposed method are supposedly even more apparent.
40

Jenison, Rick L., Richard A. Reale, Joseph E. Hind, and John F. Brugge. "Modeling of Auditory Spatial Receptive Fields With Spherical Approximation Functions." Journal of Neurophysiology 80, no. 5 (November 1, 1998): 2645–56. http://dx.doi.org/10.1152/jn.1998.80.5.2645.

Abstract:
Jenison, Rick L., Richard A. Reale, Joseph E. Hind, and John F. Brugge. Modeling of auditory spatial receptive fields with spherical approximation functions. J. Neurophysiol. 80: 2645–2656, 1998. A spherical approximation technique is presented that affords a mathematical characterization of a virtual space receptive field (VSRF) based on first-spike latency in the auditory cortex of cat. Parameterizing directional sensitivity in this fashion is much akin to the use of difference-of-Gaussian (DOG) functions for modeling neural responses in visual cortex. Artificial neural networks and approximation techniques typically have been applied to problems conforming to a multidimensional Cartesian input space. The problem with using classical planar Gaussians is that radial symmetry and consistency on the plane actually translate into directionally dependent distortion on spherical surfaces. An alternative set of spherical basis functions, the von Mises basis function (VMBF), is used to eliminate spherical approximation distortion. Unlike the Fourier transform or spherical harmonic expansions, the VMBFs are nonorthogonal, and hence require some form of gradient-descent search for optimal estimation of parameters in the modeling of the VSRF. The optimization equations required to solve this problem are presented. Three descriptive classes of VSRF (contralateral, frontal, and ipsilateral) approximations are investigated, together with an examination of the residual error after parameter optimization. The use of the analytic receptive field model in computational models of population coding of sound direction is discussed, together with the importance of quantifying receptive field gradients. Because spatial hearing is by its very nature three dimensional or, more precisely, two dimensional (directional) on the sphere, we find that spatial receptive field models are best developed on the sphere.
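A hedged sketch of a von Mises-Fisher-style spherical basis function of the kind described (the paper's exact parameterization and gradient-descent fitting are not reproduced): each basis is φ(x) = exp(κ μᵀx) for unit direction vectors x, with centre μ and concentration κ, and a receptive field is modeled as a weighted sum of such bases.

```python
import numpy as np

def vmbf(x, mu, kappa):
    """Von Mises-Fisher-style basis exp(kappa * <mu, x>) for unit vectors x and mu."""
    return np.exp(kappa * (x @ mu))

def receptive_field(x, centres, kappas, weights):
    """Weighted sum of spherical basis functions evaluated at directions x."""
    return sum(w * vmbf(x, mu, k) for mu, k, w in zip(centres, kappas, weights))

# Example with made-up centres, concentrations, and weights (illustrative, not fitted to data).
dirs = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
centres = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])]
print(receptive_field(dirs, centres, kappas=[4.0, 2.0], weights=[1.0, 0.5]))
```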
APA, Harvard, Vancouver, ISO, and other styles
41

Champion, Kathleen, Bethany Lusch, J. Nathan Kutz, and Steven L. Brunton. "Data-driven discovery of coordinates and governing equations." Proceedings of the National Academy of Sciences 116, no. 45 (October 21, 2019): 22445–51. http://dx.doi.org/10.1073/pnas.1906995116.

Full text
Abstract:
The discovery of governing equations from scientific data has the potential to transform data-rich fields that lack well-characterized quantitative descriptions. Advances in sparse regression are currently enabling the tractable identification of both the structure and parameters of a nonlinear dynamical system from data. The resulting models have the fewest terms necessary to describe the dynamics, balancing model complexity with descriptive ability, and thus promoting interpretability and generalizability. This provides an algorithmic approach to Occam’s razor for model discovery. However, this approach fundamentally relies on an effective coordinate system in which the dynamics have a simple representation. In this work, we design a custom deep autoencoder network to discover a coordinate transformation into a reduced space where the dynamics may be sparsely represented. Thus, we simultaneously learn the governing equations and the associated coordinate system. We demonstrate this approach on several example high-dimensional systems with low-dimensional behavior. The resulting modeling framework combines the strengths of deep neural networks for flexible representation and sparse identification of nonlinear dynamics (SINDy) for parsimonious models. This method places the discovery of coordinates and models on an equal footing.
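As a rough illustration of the sparse-regression half of this approach, the sketch below implements sequentially thresholded least squares over a polynomial candidate library in Python; in the paper the inputs z would be the latent coordinates learned by the autoencoder, and the library choice, threshold, and function names here are illustrative assumptions rather than the authors' implementation.

    import numpy as np

    def library(z):
        # Candidate library Theta(z): constant, linear, and quadratic terms.
        ones = np.ones((z.shape[0], 1))
        quad = np.concatenate([z[:, i:i + 1] * z[:, j:j + 1]
                               for i in range(z.shape[1])
                               for j in range(i, z.shape[1])], axis=1)
        return np.concatenate([ones, z, quad], axis=1)

    def stlsq(theta, dz_dt, threshold=0.1, n_iter=10):
        # Sequentially thresholded least squares: sparse coefficients Xi
        # such that dz/dt ~ Theta(z) @ Xi.
        xi = np.linalg.lstsq(theta, dz_dt, rcond=None)[0]
        for _ in range(n_iter):
            small = np.abs(xi) < threshold
            xi[small] = 0.0
            for k in range(dz_dt.shape[1]):
                big = ~small[:, k]
                if big.any():
                    xi[big, k] = np.linalg.lstsq(theta[:, big], dz_dt[:, k], rcond=None)[0]
        return xi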
APA, Harvard, Vancouver, ISO, and other styles
42

Zancanaro, Matteo, Markus Mrosek, Giovanni Stabile, Carsten Othmer, and Gianluigi Rozza. "Hybrid Neural Network Reduced Order Modelling for Turbulent Flows with Geometric Parameters." Fluids 6, no. 8 (August 22, 2021): 296. http://dx.doi.org/10.3390/fluids6080296.

Full text
Abstract:
Geometrically parametrized partial differential equations are currently widely used in many different fields, such as shape optimization processes or patient-specific surgery studies. The focus of this work is a set of advances on this topic that increase accuracy with respect to previous approaches while maintaining a favorable cost-benefit ratio. The main contribution of this paper is a new technique combining a classical Galerkin-projection approach with a data-driven method to obtain a versatile and accurate algorithm for the resolution of geometrically parametrized incompressible turbulent Navier-Stokes problems. The effectiveness of this procedure is demonstrated on two different test cases: a classical academic backward-facing step problem and a shape-deformation Ahmed body application. The results provide insight into the properties of the developed architecture and point to possible future directions for this work.
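For orientation, a minimal sketch of the projection half of such a hybrid reduced-order model is given below in Python; the data-driven half (for example, a neural network mapping geometric parameters to reduced coefficients or turbulence closure terms) is only indicated in the comments, and all names are hypothetical.

    import numpy as np

    def pod_basis(snapshots, n_modes):
        # Proper orthogonal decomposition of snapshots stored column-wise:
        # returns the leading spatial modes and their singular values.
        u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        return u[:, :n_modes], s[:n_modes]

    def project(field, modes):
        # Reduced coordinates a such that field ~ modes @ a.
        return modes.T @ field

    # In a hybrid ROM, a regressor (e.g., a neural network) trained on parametrized
    # snapshots would map geometric parameters to reduced coefficients or closure
    # terms, while the Galerkin projection handles the resolved dynamics.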
APA, Harvard, Vancouver, ISO, and other styles
43

Bohner, Martin, Giuseppe Caristi, Shapour Heidarkhani, and Shahin Moradi. "Three solutions for a discrete fourth-order boundary value problem with four parameters." Boletim da Sociedade Paranaense de Matemática 42 (April 19, 2024): 1–13. http://dx.doi.org/10.5269/bspm.64229.

Full text
Abstract:
This paper presents several sufficient conditions for the existence of at least three classical solutions of a boundary value problem for a fourth-order difference equation. Fourth-order boundary value problems act as models for the bending or deforming of elastic beams. In different fields of research, such as computer science, mechanical engineering, control systems, artificial or biological neural networks, economics and many others, the mathematical modelling of important questions leads naturally to the consideration of nonlinear difference equations. Our technical approach is based on variational methods. An example is included in the paper. Numerical computations of the example confirm our theoretical results.
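For orientation only, problems of the class referred to above are typically written as a nonlinear fourth-order difference equation such as (the paper's exact equation, parameters, and boundary conditions differ):

    \Delta^{4} u(k-2) \;=\; \lambda\, f\big(k, u(k)\big), \qquad k \in \{2, \dots, T\},

subject to suitable boundary conditions at the endpoints, where \Delta u(k) = u(k+1) - u(k) is the forward difference operator and \lambda is a real parameter.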
APA, Harvard, Vancouver, ISO, and other styles
44

Todorova, Sonia, and Valérie Ventura. "Neural Decoding: A Predictive Viewpoint." Neural Computation 29, no. 12 (December 2017): 3290–310. http://dx.doi.org/10.1162/neco_a_01020.

Full text
Abstract:
Decoding in the context of brain-machine interfaces is a prediction problem, with the aim of retrieving the most accurate kinematic predictions attainable from the available neural signals. While selecting models that reduce the prediction error is done to various degrees, decoding has not received the attention that the fields of statistics and machine learning have lavished on the prediction problem in the past two decades. Here, we take a more systematic approach to the decoding prediction problem and search for risk-optimized reverse regression, optimal linear estimation (OLE), and Kalman filter models within a large model space composed of several nonlinear transformations of neural spike counts at multiple temporal lags. The reverse regression decoding framework is a standard prediction problem, where penalized methods such as ridge regression or lasso are routinely used to find minimum risk models. We argue that minimum risk reverse regression is always more efficient than OLE and also happens to be 44% more efficient than a standard Kalman filter in a particular application of offline reconstruction of arm reaches of a rhesus macaque monkey. Yet model selection for tuning-curve-based decoding models such as OLE and Kalman filtering is not a standard statistical prediction problem, and no efficient method exists to identify minimum risk models. We apply several methods to build low-risk models and show that in our application, a Kalman filter that includes multiple carefully chosen observation equations per neural unit is 67% more efficient than a standard Kalman filter, but with the drawback that finding such a model is computationally very costly.
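As a minimal illustration of the reverse-regression branch of this comparison, the sketch below fits a ridge-penalized linear map from (possibly transformed and lagged) spike counts to kinematics in Python; the variable names and the treatment of the intercept are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def ridge_decoder(spikes_lagged, kinematics, lam=1.0):
        # Reverse-regression decoder: kinematics ~ X @ W, with X the lagged
        # (and possibly nonlinearly transformed) spike counts plus an intercept,
        # and lam the ridge penalty chosen by cross-validation.
        X = np.hstack([np.ones((spikes_lagged.shape[0], 1)), spikes_lagged])
        p = X.shape[1]
        W = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ kinematics)
        return W  # predict a new sample with np.hstack([1, x_new]) @ W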
APA, Harvard, Vancouver, ISO, and other styles
45

da Silva, Severino Horácio. "Lower Semicontinuity of Global Attractors for a Class of Evolution Equations of Neural Fields Type in a Bounded Domain." Differential Equations and Dynamical Systems 26, no. 4 (August 7, 2015): 371–91. http://dx.doi.org/10.1007/s12591-015-0258-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Gajamannage, K., D. I. Jayathilake, Y. Park, and E. M. Bollt. "Recurrent neural networks for dynamical systems: Applications to ordinary differential equations, collective motion, and hydrological modeling." Chaos: An Interdisciplinary Journal of Nonlinear Science 33, no. 1 (January 2023): 013109. http://dx.doi.org/10.1063/5.0088748.

Full text
Abstract:
Classical methods of solving spatiotemporal dynamical systems include statistical approaches such as autoregressive integrated moving average models, which assume linear and stationary relationships between a system's previous outputs. Development and implementation of linear methods are relatively simple, but they often do not capture non-linear relationships in the data. Thus, artificial neural networks (ANNs) are receiving attention from researchers for analyzing and forecasting dynamical systems. Recurrent neural networks (RNNs), derived from feed-forward ANNs, use internal memory to process variable-length sequences of inputs. This makes RNNs applicable to a wide variety of problems in spatiotemporal dynamical systems. In this paper, we therefore utilize RNNs to treat some specific issues associated with dynamical systems. Specifically, we analyze the performance of RNNs applied to three tasks: reconstruction of correct Lorenz solutions for a system with a formulation error, reconstruction of corrupted collective-motion trajectories, and forecasting of streamflow time series possessing spikes, representing three fields, namely, ordinary differential equations, collective motion, and hydrological modeling, respectively. We train and test RNNs separately on each task to demonstrate their broad applicability in reconstructing and forecasting the dynamics of dynamical systems.
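A minimal sketch of a one-step-ahead recurrent forecaster of the kind described above is given below in Python (PyTorch); the paper's architectures, gating variant, and training details may differ, and all names here are hypothetical.

    import torch
    import torch.nn as nn

    class Forecaster(nn.Module):
        # One-step-ahead forecaster: maps a window of past states to the next state.
        def __init__(self, n_features, hidden=64):
            super().__init__()
            self.rnn = nn.GRU(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_features)

        def forward(self, window):           # window: (batch, time, n_features)
            out, _ = self.rnn(window)
            return self.head(out[:, -1])     # prediction for the next time step

    # Training sketch: minimize MSE between predictions and the true next state.
    # model = Forecaster(n_features=3)
    # opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # loss = nn.functional.mse_loss(model(x_window), x_next); loss.backward(); opt.step()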
APA, Harvard, Vancouver, ISO, and other styles
47

Sitte, Michael Philip, and Nguyen Anh Khoa Doan. "Velocity reconstruction in puffing pool fires with physics-informed neural networks." Physics of Fluids 34, no. 8 (August 2022): 087124. http://dx.doi.org/10.1063/5.0097496.

Full text
Abstract:
Pool fires are canonical representations of many accidental fires which can exhibit an unstable unsteady behavior, known as puffing, which involves a strong coupling between the temperature and velocity fields. Despite their practical relevance to fire research, their experimental study can be limited due to the complexity of measuring relevant quantities in parallel. In this work, we analyze the use of a recent physics-informed machine learning approach, called hidden fluid mechanics (HFM), to reconstruct unmeasured quantities in a puffing pool fire from measured quantities. The HFM framework relies on a physics-informed neural network (PINN) for this task. A PINN is a neural network that uses both the available data, here the measured quantities, and the physical equations governing the system, here the reacting Navier–Stokes equations, to infer the full fluid dynamic state. This framework is used to infer the velocity field in a puffing pool fire from measurements of density, pressure, and temperature. In this work, the dataset used for this test was generated from numerical simulations. It is shown that the PINN is able to reconstruct the velocity field accurately and to infer most features of the velocity field. In addition, it is shown that the reconstruction accuracy is robust with respect to noisy data, and a reduction in the number of measured quantities is explored and discussed. This study opens up the possibility of using PINNs for the reconstruction of unmeasured quantities from measured ones, providing the potential groundwork for their use in experiments for fire research.
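Schematically, the reconstruction described above minimizes a loss of the following form (notation simplified; the weights and the exact reacting Navier-Stokes residuals are as chosen in the paper):

    \mathcal{L}(\theta) =
    \sum_{n} \big\| (\rho, p, T)_{\theta}(\mathbf{x}_n, t_n) - (\rho, p, T)^{\mathrm{meas}}_{n} \big\|^2
    \;+\;
    \sum_{m} \big\| \mathcal{R}\big[\rho_{\theta}, p_{\theta}, T_{\theta}, \mathbf{u}_{\theta}\big](\mathbf{x}_m, t_m) \big\|^2,

where the first sum penalizes mismatch with the measured quantities, the second penalizes the governing-equation residuals \mathcal{R}, and the velocity field \mathbf{u}_{\theta} appears only through the residuals and is therefore inferred indirectly.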
APA, Harvard, Vancouver, ISO, and other styles
48

Yan, Xiaohui, Fu Du, Tianqi Zhang, Qian Cui, Zuhao Zhu, and Ziming Song. "Predicting the Flow Fields in Meandering Rivers with a Deep Super-Resolution Convolutional Neural Network." Water 16, no. 3 (January 28, 2024): 425. http://dx.doi.org/10.3390/w16030425.

Full text
Abstract:
The hydrodynamic model, based on the strict conservation of momentum and continuity equations, can accurately simulate the distribution of a flow field. However, significant computing time and storage space requirements limit real-time prediction. Machine learning is well known for its fast computing speed and powerful learning ability, but its accuracy depends on an abundance of training data, hindering its wider use in locations without sufficient measurements. Application restrictions in data-deficient areas can be addressed through transfer learning, provided that the two areas share common characteristics. In this study, a machine learning method based on a deep super-resolution convolutional neural network (DSRCNN) and transfer learning is proposed, validated, and applied to model two bend flows and one realistic test case. First, the hydrodynamic model was established and validated against measured data. The validated model was considered able to generate realistic data and was used to generate a comprehensive data set for training and validating the machine learning model. Three different turbulence models were compared and tested, with the realizable k-ε model performing better than the others in predicting the outer-bank flow distribution. DSRCNN was compared to a plain SRCNN (PSRCNN), as well as to bilinear, nearest-neighbour, and bicubic interpolation methods, and the results showed that DSRCNN had the best performance. We also compared Raw, RT, and TL (transfer-learning) approaches, finding that the TL approach performed the best overall. Therefore, the research results showed that the developed super-resolution convolutional neural network can provide more reliable predictions and serve as an ideal tool for simulating flow field distribution in bends.
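As a point of reference for the plain-SRCNN baseline mentioned above, a three-layer SRCNN-style network can be sketched in Python (PyTorch) as follows; the channel counts and kernel sizes follow the classic SRCNN recipe and are not taken from the paper, whose DSRCNN is a deeper variant of the same idea.

    import torch.nn as nn

    class PlainSRCNN(nn.Module):
        # Three-stage SRCNN-style baseline: feature extraction, nonlinear mapping,
        # and reconstruction, applied to a coarse flow field upsampled to the
        # target resolution beforehand (e.g., by bicubic interpolation).
        def __init__(self, channels=2):      # e.g., two velocity components
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, 64, kernel_size=9, padding=4), nn.ReLU(),
                nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(),
                nn.Conv2d(32, channels, kernel_size=5, padding=2),
            )

        def forward(self, coarse_field):     # (batch, channels, H, W)
            return self.net(coarse_field)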
APA, Harvard, Vancouver, ISO, and other styles
49

Hu, Yaowei, Yongkai Wu, Lu Zhang, and Xintao Wu. "A Generative Adversarial Framework for Bounding Confounded Causal Effects." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 12104–12. http://dx.doi.org/10.1609/aaai.v35i13.17437.

Full text
Abstract:
Causal inference from observational data is being widely applied in many fields. However, unidentifiable situations, where causal effects cannot be uniquely computed from observational data, pose critical barriers to applying causal inference to complicated real applications. In this paper, we develop a bounding method for estimating the average causal effect (ACE) under unidentifiable situations due to hidden confounding, based on Pearl's structural causal model. We propose to parameterize the unknown exogenous random variables and structural equations of a causal model using neural networks and implicit generative models. Then, using an adversarial learning framework, we search the parameter space to explicitly traverse causal models that agree with the given observational distribution, and find those that minimize or maximize the ACE to obtain its lower and upper bounds. The proposed method makes no assumptions about the types of structural equations and variables. Experiments using both synthetic and real-world datasets are conducted.
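At a high level, the procedure described above can be summarized as a constrained optimization over parameterized causal models (schematic form only; the paper relaxes the distribution-matching constraint with an adversarial loss):

    \mathrm{ACE}_{\min/\max} \;=\; \min_{\theta \in \Theta} \big/ \max_{\theta \in \Theta}\;
    \mathbb{E}_{\theta}\!\left[ Y \mid do(X=1) \right] - \mathbb{E}_{\theta}\!\left[ Y \mid do(X=0) \right]
    \quad \text{s.t.} \quad P_{\theta}(\mathbf{V}) = P_{\mathrm{obs}}(\mathbf{V}),

where \theta parameterizes the exogenous variables and structural equations with neural networks and implicit generative models, and \mathbf{V} denotes the observed variables.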
APA, Harvard, Vancouver, ISO, and other styles
50

Peng, Jiang-Zhou, Xianglei Liu, Zhen-Dong Xia, Nadine Aubry, Zhihua Chen, and Wei-Tao Wu. "Data-Driven Modeling of Geometry-Adaptive Steady Heat Convection Based on Convolutional Neural Networks." Fluids 6, no. 12 (December 1, 2021): 436. http://dx.doi.org/10.3390/fluids6120436.

Full text
Abstract:
Heat convection is one of the main mechanisms of heat transfer, and it involves both heat conduction and heat transport by fluid flow; as a result, solving heat convection problems usually requires numerical simulation. Although the derivation of the governing equations is not difficult, the solution process can be complicated and usually requires numerical discretization and iteration of differential equations. In this paper, based on neural networks, we developed a data-driven model for extremely fast prediction of the steady-state heat convection of a hot object with an arbitrarily complex geometry in two-dimensional space. According to the governing equations, the steady-state heat convection is dominated by convection and thermal diffusion terms; thus the distribution of the physical fields exhibits strong correlations between adjacent points. Therefore, the proposed neural network model uses convolutional neural network (CNN) layers as the encoder and deconvolutional neural network (DCNN) layers as the decoder. Compared with a fully connected (FC) network model, the CNN-based model is better at capturing and reconstructing spatial relationships in low-rank feature spaces, such as edge intersections, parallelism, and symmetry. Furthermore, we applied the signed distance function (SDF) as the network input for representing the problem geometry, which contains more information than a binary image. To demonstrate the strong learning and generalization ability of the proposed network model, the training dataset contains only hot objects with simple geometries: triangles, quadrilaterals, pentagons, hexagons, and dodecagons, while the testing cases use arbitrary and complex geometries. According to the study, the trained network model can accurately predict the velocity and temperature fields of problems with complex geometries that were never seen by the network model during training, and the prediction speed is two orders of magnitude faster than CFD. The accurate and extremely fast predictions of the network model suggest the potential of applying reduced-order network models to real-time control and fast optimization in the future.
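A minimal encoder-decoder sketch of the kind of geometry-to-field mapping described above is shown below in Python (PyTorch); the input is a single-channel signed-distance-function image and the output channels stand for the predicted velocity and temperature fields, with layer sizes that are illustrative rather than the paper's.

    import torch.nn as nn

    class GeometryToField(nn.Module):
        # SDF image in, steady-state fields out (e.g., u, v, T as three channels).
        def __init__(self, out_channels=3):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1),
            )

        def forward(self, sdf):              # sdf: (batch, 1, H, W)
            return self.decoder(self.encoder(sdf))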
APA, Harvard, Vancouver, ISO, and other styles