Selection of scientific literature on the topic "Physics-Informed neural network"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose the type of source:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Physics-Informed neural network".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are included in the metadata.

Journal articles on the topic "Physics-Informed neural network"

1

Hofmann, Tobias, Jacob Hamar, Marcel Rogge, Christoph Zoerr, Simon Erhard, and Jan Philipp Schmidt. „Physics-Informed Neural Networks for State of Health Estimation in Lithium-Ion Batteries“. Journal of The Electrochemical Society 170, no. 9 (01.09.2023): 090524. http://dx.doi.org/10.1149/1945-7111/acf0ef.

Annotation:
One of the most challenging tasks of modern battery management systems is the accurate state of health estimation. While physico-chemical models are accurate, they have high computational cost. Neural networks lack physical interpretability but are efficient. Physics-informed neural networks tackle the aforementioned shortcomings by combining the efficiency of neural networks with the accuracy of physico-chemical models. A physics-informed neural network is developed and evaluated against three different datasets: A pseudo-two-dimensional Newman model generates data at various state of health points. This dataset is fused with experimental data from laboratory measurements and vehicle field data to train a neural network in which it exploits correlation from internal modeled states to the measurable state of health. The resulting physics-informed neural network performs best with the synthetic dataset and achieves a root mean squared error below 2% at estimating the state of health. The root mean squared error stays within 3% for laboratory test data, with the lowest error observed for constant current discharge samples. The physics-informed neural network outperforms several other purely data-driven methods and proves its advantage. The inclusion of physico-chemical information from simulation increases accuracy and further enables broader application ranges.
2

Karakonstantis, Xenofon, Diego Caviedes-Nozal, Antoine Richard, and Efren Fernandez-Grande. „Room impulse response reconstruction with physics-informed deep learning“. Journal of the Acoustical Society of America 155, no. 2 (01.02.2024): 1048–59. http://dx.doi.org/10.1121/10.0024750.

Annotation:
A method is presented for estimating and reconstructing the sound field within a room using physics-informed neural networks. By incorporating a limited set of experimental room impulse responses as training data, this approach combines neural network processing capabilities with the underlying physics of sound propagation, as articulated by the wave equation. The network's ability to estimate particle velocity and intensity, in addition to sound pressure, demonstrates its capacity to represent the flow of acoustic energy and completely characterise the sound field with only a few measurements. Additionally, an investigation into the potential of this network as a tool for improving acoustic simulations is conducted. This is due to its proficiency in offering grid-free sound field mappings with minimal inference time. Furthermore, a study is carried out which encompasses comparative analyses against current approaches for sound field reconstruction. Specifically, the proposed approach is evaluated against both data-driven techniques and elementary wave-based regression methods. The results demonstrate that the physics-informed neural network stands out when reconstructing the early part of the room impulse response, while simultaneously allowing for complete sound field characterisation in the time domain.
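
The core construction described in this abstract can be illustrated with a short, generic sketch (not the authors' code): a network maps (t, x, y, z) to sound pressure, is fitted to measured impulse responses at a few microphone positions, and is additionally penalized by the residual of the acoustic wave equation; particle velocity then follows from the pressure gradient via Euler's equation. The network size, speed of sound, and all names below are assumptions.

```python
import torch
import torch.nn as nn

c = 343.0  # assumed speed of sound [m/s]

# pressure network: input (t, x, y, z), output sound pressure p
p_net = nn.Sequential(nn.Linear(4, 128), nn.Tanh(),
                      nn.Linear(128, 128), nn.Tanh(),
                      nn.Linear(128, 1))

def wave_residual(txyz):
    """Residual p_tt - c^2 (p_xx + p_yy + p_zz) at collocation points."""
    txyz = txyz.clone().requires_grad_(True)
    p = p_net(txyz)
    g = torch.autograd.grad(p, txyz, torch.ones_like(p), create_graph=True)[0]
    second = []
    for i in range(4):  # second derivatives w.r.t. t, x, y, z in turn
        gi = torch.autograd.grad(g[:, i:i + 1], txyz, torch.ones_like(g[:, i:i + 1]),
                                 create_graph=True)[0][:, i:i + 1]
        second.append(gi)
    p_tt, p_xx, p_yy, p_zz = second
    return p_tt - c ** 2 * (p_xx + p_yy + p_zz)

# Sketch of the training loss: data misfit at the microphone positions plus a weighted
# wave-equation residual at collocation points inside the aperture. Particle velocity
# can afterwards be recovered from the pressure gradient via Euler's equation.
# loss = ((p_net(mic_txyz) - rir_values) ** 2).mean() \
#        + 1e-2 * wave_residual(collocation_txyz).pow(2).mean()
```
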
3

Kenzhebek, Y., T. S. Imankulov, and D. Zh Akhmed-Zaki. „PREDICTION OF OIL PRODUCTION USING PHYSICS-INFORMED NEURAL NETWORKS“. BULLETIN Series of Physics & Mathematical Sciences 76, no. 4 (15.12.2021): 45–50. http://dx.doi.org/10.51889/2021-4.1728-7901.06.

Annotation:
In recent years, modern information technologies have been actively used in various industries. The oil industry is no exception: high-performance computing, artificial intelligence algorithms, and methods for collecting, processing and storing information are widely applied to the problem of increasing oil recovery. Deep learning has made remarkable strides in a variety of applications, but its use for solving partial differential equations has only recently emerged. In particular, a neural network that approximates the solution of a partial differential equation can replace traditional numerical methods. Physics-Informed Neural Networks (PINNs) embed partial differential equations into the neural network loss function using automatic differentiation. A numerical algorithm and a PINN have been developed for solving the one-dimensional pressure equation from the Buckley-Leverett mathematical model, and the numerical solution is compared with the prediction of the PINN.
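
As a concrete illustration of how a PDE is embedded in the loss through automatic differentiation, the following minimal sketch trains a network on a generic one-dimensional steady pressure equation d²p/dx² = f(x) (with f = 0 here) and Dirichlet boundary values, standing in for the Buckley-Leverett pressure equation of the article; the network size, collocation points, and boundary values are assumptions, not the authors' setup.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))

f = lambda x: torch.zeros_like(x)        # assumed source term
x_col = torch.rand(200, 1)               # collocation points in (0, 1)
x_bc = torch.tensor([[0.0], [1.0]])
p_bc = torch.tensor([[1.0], [0.0]])      # assumed boundary pressures

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    x = x_col.clone().requires_grad_(True)
    p = net(x)
    p_x = torch.autograd.grad(p, x, torch.ones_like(p), create_graph=True)[0]
    p_xx = torch.autograd.grad(p_x, x, torch.ones_like(p_x), create_graph=True)[0]
    loss_pde = ((p_xx - f(x)) ** 2).mean()      # PDE residual via automatic differentiation
    loss_bc = ((net(x_bc) - p_bc) ** 2).mean()  # boundary conditions
    loss = loss_pde + loss_bc
    loss.backward()
    opt.step()
```
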
4

Pu, Ruilong, and Xinlong Feng. „Physics-Informed Neural Networks for Solving Coupled Stokes–Darcy Equation“. Entropy 24, no. 8 (11.08.2022): 1106. http://dx.doi.org/10.3390/e24081106.

Annotation:
In this paper, a grid-free deep learning method based on a physics-informed neural network is proposed for solving the coupled Stokes–Darcy equations with Beavers–Joseph–Saffman interface conditions. This method avoids grid generation and can greatly reduce the amount of computation when solving complex problems. Although the original physics-informed neural network algorithm has been used to solve many differential equations, we find that its direct use for the coupled Stokes–Darcy equations does not provide accurate solutions in some cases, for example when stiff terms arise from small parameters or when the solution is discontinuous across the interface. In order to improve the approximation ability of a physics-informed neural network, we propose a loss-function weighting strategy, a parallel network structure strategy, and a local adaptive activation function strategy. In addition, the physics-informed neural network with the added strategies provides inspiration for solving other, more complicated problems of multi-physical field coupling. Finally, the effectiveness of the proposed strategies is verified by numerical experiments.
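
Two of the strategies mentioned above can be sketched generically (this is not the authors' implementation): a parallel structure with one sub-network per flow region, a locally adaptive activation with a trainable slope, and a weighted composite loss that emphasizes the interface terms. All names, sizes, and weights below are assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveTanh(nn.Module):
    """tanh with a trainable slope, a simple form of local adaptive activation."""
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.ones(1))
    def forward(self, x):
        return torch.tanh(10.0 * self.a * x)  # fixed scaling factor 10 is an assumption

# parallel structure: one sub-network per flow region
stokes_net = nn.Sequential(nn.Linear(2, 64), AdaptiveTanh(), nn.Linear(64, 3))  # (u, v, p)
darcy_net = nn.Sequential(nn.Linear(2, 64), AdaptiveTanh(), nn.Linear(64, 1))   # hydraulic head

def weighted_loss(r_stokes, r_darcy, r_interface, w=(1.0, 1.0, 10.0)):
    # r_* are residual tensors built with autograd from the governing equations and the
    # Beavers-Joseph-Saffman interface conditions; the heavier interface weight is assumed.
    return (w[0] * r_stokes.pow(2).mean()
            + w[1] * r_darcy.pow(2).mean()
            + w[2] * r_interface.pow(2).mean())
```
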
5

Yoon, Seunghyun, Yongsung Park, and Woojae Seong. „Improving mode extraction with physics-informed neural network“. Journal of the Acoustical Society of America 154, no. 4_supplement (01.10.2023): A339–A340. http://dx.doi.org/10.1121/10.0023729.

Annotation:
This study aims to enhance conventional mode extraction methods in ocean waveguides using a physics-informed neural network (PINN). Mode extraction involves estimating mode wavenumbers and corresponding mode depth functions. The approach considers a scenario with a single frequency source towed at a constant depth and measured from a vertical line array (VLA). Conventional mode extraction methods applied to experimental data face two problems. First, mode shape estimation is limited because the receivers only cover a partial waveguide. Second, the wavenumber spectrum is affected by issues such as Doppler shift and range errors. To address these challenges, we train the PINN with measured data, generating a densely sampled complex pressure field, including the unmeasured region above the VLA. We then apply the same mode extraction methods to both the raw data and the PINN-generated data for comparison. The proposed method is validated using data from the SWellEx-96, demonstrating improved mode extraction performance compared to using raw experimental data directly.
6

Stenkin, Dmitry, and Vladimir Gorbachenko. „Mathematical Modeling on a Physics-Informed Radial Basis Function Network“. Mathematics 12, no. 2 (11.01.2024): 241. http://dx.doi.org/10.3390/math12020241.

Annotation:
The article is devoted to approximate methods for solving differential equations. An approach based on neural networks with radial basis functions is presented. Neural network training algorithms adapted to radial basis function networks are proposed, in particular adaptations of the Nesterov and Levenberg-Marquardt algorithms. The effectiveness of the proposed algorithms is demonstrated for solving model problems of function approximation, differential equations, direct and inverse boundary value problems, and modeling processes in piecewise homogeneous media.
7

Schmid, Johannes D., Philipp Bauerschmidt, Caglar Gurbuz, and Steffen Marburg. „Physics-informed neural networks for characterization of structural dynamic boundary conditions“. Journal of the Acoustical Society of America 154, no. 4_supplement (01.10.2023): A99. http://dx.doi.org/10.1121/10.0022923.

Annotation:
Structural dynamics simulations are often faced with challenges arising from unknown boundary conditions, leading to considerable prediction uncertainties. Direct measurement of these boundary conditions can be impractical for certain mounting scenarios, such as joints or screw connections. In addition, conventional inverse methods face limitations in integrating measured data and solving inverse problems when the forward model is computationally expensive. In this study, we explore the potential of physics-informed neural networks that incorporate the residual of a partial differential equation into the loss function of a neural network to ensure physically consistent predictions. We train the neural network using noisy boundary displacement data of a structure from a finite element reference solution. The network learns to predict the displacement field within the structure while satisfying the Navier–Lamé equations in the frequency domain. Our results show that physics-informed neural networks accurately predict the displacement field within a three-dimensional structure using only boundary training data. Additionally, differentiating the trained network allows precise characterization of previously unknown boundary conditions and facilitates the assessment of non-measurable quantities, such as the stress tensor.
8

Zhai, Hanfeng, Quan Zhou, and Guohui Hu. „Predicting micro-bubble dynamics with semi-physics-informed deep learning“. AIP Advances 12, no. 3 (01.03.2022): 035153. http://dx.doi.org/10.1063/5.0079602.

Annotation:
Utilizing physical information to improve the performance of conventional neural networks has recently become a promising research direction in scientific computing. For multiphase flows, neural network training would require significant computational resources because of the large gradients near the interface between the two fluids. Based on the idea of physics-informed neural networks (PINNs), a modified deep learning framework, BubbleNet, is proposed in the present study to overcome this difficulty. A deep neural network (DNN) with separate sub-nets is adopted to predict the physics fields; the semi-physics-informed part encodes the continuity equation and the pressure Poisson equation for supervision, and a time-discretized normalizer normalizes the field data per time step before training. Two bubbly flows, i.e., single-bubble flow and multiple-bubble flow in a microchannel, are considered to test the algorithm. Conventional computational fluid dynamics software is applied to obtain the training dataset. A traditional DNN and the BubbleNet(s) are used to train the neural network and predict the flow fields for the two bubbly flows. Results indicate that the BubbleNet frameworks are able to successfully predict the physics fields, and the inclusion of the continuity equation significantly improves the performance of the deep NNs. The introduction of the Poisson equation also has a slightly positive effect on the prediction results. The results suggest that constructing semi-PINNs by flexibly incorporating physical information into neural networks will be helpful for learning complex flow problems.
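
The "semi-physics-informed" supervision described here can be sketched as follows (a generic illustration, not BubbleNet itself): separate sub-networks predict u, v, and p over (t, x, y), and the residual of the continuity equation du/dx + dv/dy = 0 is added to an otherwise data-driven loss. Sizes and names are assumptions.

```python
import torch
import torch.nn as nn

def mlp():
    return nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                         nn.Linear(64, 64), nn.Tanh(),
                         nn.Linear(64, 1))

u_net, v_net, p_net = mlp(), mlp(), mlp()  # separate sub-nets for u, v, p over (t, x, y)

def continuity_residual(txy):
    """du/dx + dv/dy, which should vanish for incompressible flow."""
    txy = txy.clone().requires_grad_(True)
    u, v = u_net(txy), v_net(txy)
    u_x = torch.autograd.grad(u, txy, torch.ones_like(u), create_graph=True)[0][:, 1:2]
    v_y = torch.autograd.grad(v, txy, torch.ones_like(v), create_graph=True)[0][:, 2:3]
    return u_x + v_y

# Sketch of the semi-physics-informed loss: a purely data-driven misfit against CFD
# fields (normalized per time step) plus the continuity penalty; a pressure Poisson
# penalty can be added in the same way.
# loss = data_mse + w_c * continuity_residual(collocation_txy).pow(2).mean()
```
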
9

Karakonstantis, Xenofon, and Efren Fernandez-Grande. „Advancing sound field analysis with physics-informed neural networks“. Journal of the Acoustical Society of America 154, no. 4_supplement (01.10.2023): A98. http://dx.doi.org/10.1121/10.0022920.

Annotation:
This work introduces a method that employs physics-informed neural networks to reconstruct sound fields in diverse rooms, including both typical acoustically damped meeting rooms and more spaces of cultural significance, such as concert halls or theatres. The neural network is trained using a limited set of room impulse responses, integrating the expressive capacity of neural networks with the fundamental physics of sound propagation governed by the wave equation. Consequently, the network accurately represents sound fields within an aperture without requiring extensive measurements, regardless of the complexity of the sound field. Notably, our approach extends beyond sound pressure estimation and includes valuable vectorial quantities, such as particle velocity and intensity, resembling classical holography methods. Experimental results confirm the efficacy of the proposed approach, underscoring its reconstruction accuracy and computational efficiency. Moreover, by enabling the acquisition of sound field quantities in the time domain, which were previously challenging to obtain from measurements, our method opens up new frontiers for the analysis and comprehension of sound propagation phenomena in rooms.
10

Pannekoucke, Olivier, and Ronan Fablet. „PDE-NetGen 1.0: from symbolic partial differential equation (PDE) representations of physical processes to trainable neural network representations“. Geoscientific Model Development 13, no. 7 (30.07.2020): 3373–82. http://dx.doi.org/10.5194/gmd-13-3373-2020.

Annotation:
Abstract. Bridging physics and deep learning is a topical challenge. While deep learning frameworks open avenues in physical science, the design of physically consistent deep neural network architectures is an open issue. In the spirit of physics-informed neural networks (NNs), the PDE-NetGen package provides new means to automatically translate physical equations, given as partial differential equations (PDEs), into neural network architectures. PDE-NetGen combines symbolic calculus and a neural network generator. The latter exploits NN-based implementations of PDE solvers using Keras. With some knowledge of a problem, PDE-NetGen is a plug-and-play tool to generate physics-informed NN architectures. They provide computationally efficient yet compact representations to address a variety of issues, including, among others, adjoint derivation, model calibration, forecasting and data assimilation as well as uncertainty quantification. As an illustration, the workflow is first presented for the 2D diffusion equation, then applied to the data-driven and physics-informed identification of uncertainty dynamics for the Burgers equation.

Dissertations and theses on the topic "Physics-Informed neural network"

1

Mirzai, Badi. „Physics-Informed Deep Learning for System Identification of Autonomous Underwater Vehicles : A Lagrangian Neural Network Approach“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301626.

Annotation:
In this thesis, we explore Lagrangian Neural Networks (LNNs) for system identification of Autonomous Underwater Vehicles (AUVs) with 6 degrees of freedom. One of the main challenges of AUVs is that they have limited wireless communication and navigation under water. AUVs operate under strict and uncertain conditions, where they need to be able to navigate and perform tasks in unknown ocean environments with limited and noisy sensor data. A crucial requirement for localization and adaptive control of AUVs is having an accurate and reliable model of the system’s nonlinear dynamics while taking into account the dynamic environment of the ocean. Most of these dynamics models do not incorporate data. The collection of data for AUVs is difficult, but necessary in order to have more flexibility in the model’s parameters due to the dynamic environment of the ocean. Yet, traditional system identification methods are still dominant today, despite the recent breakthroughs in Deep Learning. Therefore, in this thesis, we aim for a data-driven approach that embeds laws from physics in order to learn the state-space model of an AUV. More precisely, we explore the LNN framework for higher-dimensional systems. Furthermore, we extend the LNN to account for non-conservative forces acting upon the system, such as damping and control inputs. The networks are trained to learn from simulated data of a second-order ordinary differential equation of an AUV. The trained model is evaluated by integrating paths from different initial states and comparing them to the true dynamics. The result is a model capable of predicting the output acceleration of the state-space model, but it struggled to learn the direction of the forward motion over time.
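
The Lagrangian-neural-network idea used in the thesis can be sketched as follows, under assumed dimensions and names (this is not the thesis code): a network outputs a scalar Lagrangian L(q, q̇), and accelerations follow from the Euler-Lagrange equations via automatic differentiation, with damping and control inputs entering as generalized forces.

```python
import torch
import torch.nn as nn

n = 6  # degrees of freedom of the AUV
L_net = nn.Sequential(nn.Linear(2 * n, 128), nn.Softplus(),
                      nn.Linear(128, 128), nn.Softplus(),
                      nn.Linear(128, 1))  # scalar Lagrangian L(q, qdot)

def acceleration(q, qd, tau):
    """Solve (d2L/dqd2) qdd = dL/dq + tau - (d2L/dqd dq) qd for qdd.
    tau collects non-conservative generalized forces (damping, control inputs)."""
    x = torch.cat([q, qd]).requires_grad_(True)
    L = L_net(x).squeeze()
    g = torch.autograd.grad(L, x, create_graph=True)[0]
    dL_dq, dL_dqd = g[:n], g[n:]
    rows = [torch.autograd.grad(dL_dqd[i], x, retain_graph=True, create_graph=True)[0]
            for i in range(n)]
    H = torch.stack(rows)              # second derivatives of L w.r.t. (q, qd), row per qd_i
    A, B = H[:, n:], H[:, :n]          # A = d2L/dqd2, B = d2L/(dqd dq)
    rhs = dL_dq + tau - B @ qd
    return torch.linalg.solve(A, rhs.unsqueeze(1)).squeeze(1)

# Training (sketch): minimize ||acceleration(q, qd, tau) - qdd_simulated||^2 over a
# simulated AUV trajectory, then integrate the learned dynamics from new initial states.
```
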
2

Cedergren, Linnéa. „Physics-informed Neural Networks for Biopharma Applications“. Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185423.

Annotation:
Physics-Informed Neural Networks (PINNs) are hybrid models that incorporate differential equations into the training of neural networks, with the aim of bringing the best of both worlds. This project used a mathematical model describing a Continuous Stirred-Tank Reactor (CSTR), to test two possible applications of PINNs. The first type of PINN was trained to predict an unknown reaction rate law, based only on the differential equation and a time series of the reactor state. The resulting model was used inside a multi-step solver to simulate the system state over time. The results showed that the PINN could accurately model the behaviour of the missing physics also for new initial conditions. However, the model suffered from extrapolation error when tested on a larger reactor, with a much lower reaction rate. Comparisons between using a numerical derivative or automatic differentiation in the loss equation, indicated that the latter had a higher robustness to noise. Thus, it is likely the best choice for real applications. A second type of PINN was trained to forecast the system state one-step-ahead based on previous states and other known model parameters. An ordinary feed-forward neural network with an equal architecture was used as baseline. The second type of PINN did not outperform the baseline network. Further studies are needed to conclude if or when physics-informed loss should be used in autoregressive applications.
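
The first model type described above can be sketched generically (not the thesis code): the CSTR balance equation is known except for the reaction rate term, which is replaced by a small network and trained so that the balance matches derivatives estimated from the measured time series. The reactor parameters and names below are assumptions.

```python
import torch
import torch.nn as nn

F_over_V, C_in = 0.1, 1.0  # assumed flow-to-volume ratio [1/s] and inlet concentration
r_net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())

def physics_loss(C, T, dC_dt):
    # C, T, dC_dt: (N, 1) tensors sampled from the measured trajectory; dC_dt may come
    # from automatic differentiation of a fitted state model (more noise-robust, as the
    # thesis observes) or from a numerical derivative.
    r = r_net(torch.cat([C, T], dim=1))
    residual = dC_dt - (F_over_V * (C_in - C) - r)
    return residual.pow(2).mean()

# Once trained, r_net closes the balance equation and can be used inside a multi-step
# ODE solver: dC/dt = F_over_V * (C_in - C) - r_net([C, T]).
```
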
3

Wu, Dawen. „Solving Some Nonlinear Optimization Problems with Deep Learning“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG083.

Annotation:
This thesis considers four types of nonlinear optimization problems, namely bimatrix games, nonlinear projection equations (NPEs), nonsmooth convex optimization problems (NCOPs), and chance-constrained games (CCGs). These four classes of nonlinear optimization problems find extensive applications in various domains such as engineering, computer science, economics, and finance. We aim to introduce deep learning-based algorithms to efficiently compute the optimal solutions for these nonlinear optimization problems. For bimatrix games, we use Convolutional Neural Networks (CNNs) to compute Nash equilibria. Specifically, we design a CNN architecture where the input is a bimatrix game and the output is the predicted Nash equilibrium for the game. We generate a set of bimatrix games by a given probability distribution and use the Lemke-Howson algorithm to find their true Nash equilibria, thereby constructing a training dataset. The proposed CNN is trained on this dataset to improve its accuracy. Upon completion of training, the CNN is capable of predicting Nash equilibria for unseen bimatrix games. Experimental results demonstrate the exceptional computational efficiency of our CNN-based approach, at the cost of sacrificing some accuracy. NPEs, NCOPs, and CCGs are more complex optimization problems and cannot be directly fed into neural networks. Therefore, we resort to advanced tools, namely neurodynamic optimization and Physics-Informed Neural Networks (PINNs), for solving these problems. Specifically, we first use a neurodynamic approach to model a nonlinear optimization problem as a system of Ordinary Differential Equations (ODEs). Then, we utilize a PINN-based model to solve the resulting ODE system, where the end state of the model represents the predicted solution to the original optimization problem. The neural network is trained toward solving the ODE system, thereby solving the original optimization problem. A key contribution of our proposed method lies in transforming a nonlinear optimization problem into a neural network training problem. As a result, we can now solve nonlinear optimization problems using only PyTorch, without relying on classical convex optimization solvers such as CVXPY, CPLEX, or Gurobi.
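
The two-stage idea, neurodynamic modelling followed by a PINN solve, can be sketched with an assumed toy objective (this is not the thesis code): the gradient flow dy/dt = -grad f(y) plays the role of the neurodynamic ODE system, a network parameterizes the trajectory with the initial condition built in, and the state at the final time approximates the minimizer.

```python
import torch
import torch.nn as nn

# assumed objective f(y) = ||A y - b||^2, so -grad f(y) = -2 A^T (A y - b)
A, b = torch.randn(5, 3), torch.randn(5, 1)

y0 = torch.zeros(1, 3)  # initial state of the neurodynamic system
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 3))

def y_of_t(t):                    # trial trajectory with y(0) = y0 built in
    return y0 + t * net(t)

t_col = torch.linspace(0.0, 5.0, 200).reshape(-1, 1)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(3000):
    opt.zero_grad()
    t = t_col.clone().requires_grad_(True)
    y = y_of_t(t)
    dy_dt = torch.stack([torch.autograd.grad(y[:, i], t, torch.ones_like(y[:, i]),
                                             create_graph=True)[0][:, 0]
                         for i in range(3)], dim=1)
    rhs = -2.0 * (y @ A.T - b.T) @ A          # gradient-flow right-hand side
    loss = ((dy_dt - rhs) ** 2).mean()        # ODE residual over the whole time window
    loss.backward()
    opt.step()

y_star = y_of_t(torch.tensor([[5.0]]))        # end state approximates the minimizer
```
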
4

(8828960), Sukirt. „Physics Informed Neural Networks for Engineering Systems“. Thesis, 2020.

Annotation:
This thesis explores the application of deep learning techniques to problems in fluid mechanics, with particular focus on physics informed neural networks. Physics informed neural networks leverage the information gathered over centuries in the form of physical laws mathematically represented in the form of partial differential equations to make up for the dearth of data associated with engineering and physical systems. To demonstrate the capability of physics informed neural networks, an inverse and a forward problem are considered. The inverse problem involves discovering a spatially varying concentration field from the observations of concentration of a passive scalar. A forward problem involving conjugate heat transfer is solved as well, where the boundary conditions on velocity and temperature are used to discover the velocity, pressure and temperature fields in the entire domain. The predictions of the physics informed neural networks are compared against simulated data generated using OpenFOAM.
5

Yadav, Sangeeta. „Data Driven Stabilization Schemes for Singularly Perturbed Differential Equations“. Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6095.

Annotation:
This thesis presents a novel way of leveraging Artificial Neural Networks (ANNs) to aid conventional numerical techniques for solving Singularly Perturbed Differential Equations (SPDEs). SPDEs are challenging to solve with conventional numerical techniques such as Finite Element Methods (FEM) due to the presence of boundary and interior layers. Often the standard numerical solution shows spurious oscillations in the vicinity of these layers. Stabilization techniques are often employed to eliminate these spurious oscillations in the numerical solution. The accuracy of the stabilization technique depends on a user-chosen stabilization parameter whose optimal value is challenging to find. A few formulas for the stabilization parameter exist in the literature, but none extends well to high-dimensional and complex problems. In order to address this challenge, we have developed the following ANN-based techniques for predicting this stabilization parameter: 1) SPDE-Net: As a proof of concept, we have developed an ANN called SPDE-Net for one-dimensional SPDEs. In the proposed method, we predict the stabilization parameter for the Streamline Upwind Petrov Galerkin (SUPG) stabilization technique. The prediction task is modelled as a regression problem using equation coefficients and domain parameters as inputs to the neural network. Three training strategies have been proposed, i.e. supervised learning, L2-error minimization (global) and L2-error minimization (local). The proposed method outperforms existing state-of-the-art ANN-based partial differential equation (PDE) solvers, such as Physics Informed Neural Networks (PINNs). 2) AI-stab FEM: With the aim of extending SPDE-Net to two-dimensional problems, we have also developed an optimization scheme using another neural network called AI-stab FEM and showed its utility in solving higher-dimensional problems. Unlike SPDE-Net, it minimizes the equation residual along with the crosswind derivative term and can be classified as an unsupervised method. We have shown that the proposed approach yields stable solutions for several two-dimensional benchmark problems while being more accurate than other contemporary ANN-based PDE solvers such as PINNs and Variational Neural Networks for the Solution of Partial Differential Equations (VarNet). 3) SPDE-ConvNet: In the last phase of the thesis, we attempt to predict a cell-wise stabilization parameter to treat the interior/boundary layer regions adequately by developing an oscillations-aware neural network. We present SPDE-ConvNet, a Convolutional Neural Network (CNN), for predicting the local (cell-wise) stabilization parameter. For the network training, we feed the gradient of the Galerkin solution, which is an indirect metric for representing oscillations in the numerical solution, along with the equation coefficients, to the network. It obtains a cell-wise stabilization parameter while sharing the network parameters among all the cells for an equation. Similar to AI-stab FEM, this technique outperforms PINNs and VarNet. We conclude the thesis with suggestions for future work that can leverage our current understanding of data-driven stabilization schemes for SPDEs to develop and improve the next-generation neural network-based numerical solvers for SPDEs.
6

(10141679), Haoyang Zheng. „Quantifying implicit and explicit constraints on physics-informed neural processes“. Thesis, 2021.

Annotation:

Due to strong interactions among the various phases and between the phases and the fluid motion, multiphase flows (MPFs) are so complex that considerable effort is required to predict the sequential patterns of their phases and motions. The present work takes the physical constraints inherent in MPFs and enforces them in a physics-informed neural network (PINN) model, either explicitly or implicitly, depending on the type of constraint. To predict the unobserved order parameters (OPs), which locate the phases, in future steps, conditional neural processes combined with long short-term memory (CNP-LSTM) are applied to quickly infer the dynamics of the phases after encoding only a few observations. After that, the multiphase consistent and conservative boundedness mapping (MCBOM) algorithm is applied to correct the OPs predicted by CNP-LSTM, so that mass conservation, the requirement that the volume fractions of the phases sum to unity, consistency of reduction, and boundedness of the OPs are strictly satisfied. Next, the density of the fluid mixture is computed from the corrected OPs. The observed velocity and the density of the fluid mixture are then encoded in a physics-informed conditional neural process with long short-term memory (PICNP-LSTM), where the constraint of momentum conservation is included in the loss function. Finally, the unobserved velocity in future steps is predicted with PICNP-LSTM. The proposed physics-informed neural processes (PINPs) model (CNP-LSTM-MCBOM-PICNP-LSTM) for MPFs avoids unphysical behavior of the OPs, accelerates convergence, and requires less data. The proposed model successfully predicts several canonical MPF problems, i.e., the horizontal shear layer (HSL) and dam break (DB) problems, and its performance is validated.

7

Alhubail, Ali. „Application of Physics-Informed Neural Networks to Solve 2-D Single-phase Flow in Heterogeneous Porous Media“. Thesis, 2021. http://hdl.handle.net/10754/670174.

Annotation:
Neural networks have recently seen tremendous advancements in applicability in many areas, one of which is their utilization in solving physical problems governed by partial differential equations and the constraints of these equations. Physics-informed neural networks is the name given to such neural networks. They are different from typical neural networks in that they include loss terms that represent the physics of the problem. These terms often include partial derivatives of the neural network outputs with respect to its inputs, and these derivatives are found through the use of automatic differentiation. The purpose of this thesis is to showcase the ability of physics-informed neural networks to solve basic fluid flow problems in homogeneous and heterogeneous porous media. This is done through the utilization of the pressure equation under a set of assumptions as well as the inclusion of Dirichlet and Neumann boundary conditions. The goal is to create a surrogate model that allows for finding the pressure and velocity profiles everywhere inside the domain of interest. In the homogeneous case, minimization of the loss function that included the boundary conditions term and the partial differential equation term allowed for producing results that show good agreement with the results from a numerical simulator. However, in the case of heterogeneous media where there are sharp discontinuities in hydraulic conductivity inside the domain, the model failed to produce accurate results. To resolve this issue, extended physics-informed neural networks were used. This method involves the decomposition of the domain into multiple homogeneous sub-domains. Each sub-domain has its own physics informed neural network structure, equation parameters, and equation constraints. To allow the sub-domains to communicate, interface conditions are placed on the interfaces that separate the different sub-domains. The results from this method matched well with the results of the simulator. In both the homogeneous and heterogeneous cases, neural networks with only one hidden layer with thirty nodes were used. Even with this simple structure for the neural networks, the computations are expensive and a large number of training iterations is required to converge.
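
The domain-decomposition remedy described above (extended physics-informed neural networks) can be sketched generically: one sub-network and one conductivity per homogeneous sub-domain, with interface penalties enforcing continuity of pressure and of normal flux. The conductivities, network sizes, and names below are assumptions, not the thesis setup.

```python
import torch
import torch.nn as nn

def subnet():
    return nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))

net1, net2 = subnet(), subnet()  # one network per homogeneous sub-domain
k1, k2 = 1.0, 100.0              # assumed hydraulic conductivities with a sharp contrast

def pressure_and_flux(net, k, xy):
    """Pressure and normal flux -k dp/dx at interface points (interface normal along x)."""
    xy = xy.clone().requires_grad_(True)
    p = net(xy)
    p_x = torch.autograd.grad(p, xy, torch.ones_like(p), create_graph=True)[0][:, 0:1]
    return p, -k * p_x

def interface_loss(xy_interface):
    p1, q1 = pressure_and_flux(net1, k1, xy_interface)
    p2, q2 = pressure_and_flux(net2, k2, xy_interface)
    return ((p1 - p2) ** 2).mean() + ((q1 - q2) ** 2).mean()

# total loss = PDE residual in sub-domain 1 + PDE residual in sub-domain 2
#            + Dirichlet/Neumann boundary terms + interface_loss(points on the interface)
```
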

Book chapters on the topic "Physics-Informed neural network"

1

Madenci, Erdogan, Pranesh Roy, and Deepak Behera. „Peridynamics for Physics Informed Neural Network“. In Advances in Peridynamics, 399–418. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-97858-7_16.

2

Beniwal, Kirti, and Vivek Kumar. „Gradient-Based Physics-Informed Neural Network“. In Third Congress on Intelligent Systems, 749–61. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-9379-4_54.

3

Dhamirah Mohamad, Najwa Zawani, Akram Yousif, Nasiha Athira Binti Shaari, Hasreq Iskandar Mustafa, Samsul Ariffin Abdul Karim, Afza Shafie, and Muhammad Izzatullah. „Heat Transfer Modelling with Physics-Informed Neural Network (PINN)“. In Studies in Systems, Decision and Control, 25–35. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04028-3_3.

4

Mahesh, Ragini Bal, Jorge Leandro, and Qing Lin. „Physics Informed Neural Network for Spatial-Temporal Flood Forecasting“. In Lecture Notes in Civil Engineering, 77–91. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-5501-2_7.

5

Oh, Dong Keun. „Pure Physics-Informed Echo State Network of ODE Solution Replicator“. In Artificial Neural Networks and Machine Learning – ICANN 2023, 225–36. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44201-8_19.

6

Ibrahim, Abdul Qadir, Sebastian Götschel, and Daniel Ruprecht. „Parareal with a Physics-Informed Neural Network as Coarse Propagator“. In Euro-Par 2023: Parallel Processing, 649–63. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-39698-4_44.

Annotation:
Parallel-in-time algorithms provide an additional layer of concurrency for the numerical integration of models based on time-dependent differential equations. Methods like Parareal, which parallelize across multiple time steps, rely on a computationally cheap and coarse integrator to propagate information forward in time, while a parallelizable expensive fine propagator provides accuracy. Typically, the coarse method is a numerical integrator using lower resolution, reduced order or a simplified model. Our paper proposes to use a physics-informed neural network (PINN) instead. We demonstrate for the Black-Scholes equation, a partial differential equation from computational finance, that Parareal with a PINN coarse propagator provides better speedup than a numerical coarse propagator. Training and evaluating a neural network are both tasks whose computing patterns are well suited for GPUs. By contrast, mesh-based algorithms with their low computational intensity struggle to perform well. We show that moving the coarse propagator PINN to a GPU while running the numerical fine propagator on the CPU further improves Parareal’s single-node performance. This suggests that integrating machine learning techniques into parallel-in-time integration methods and exploiting their differences in computing patterns might offer a way to better utilize heterogeneous architectures.
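
The Parareal iteration referred to in this abstract follows a standard predictor-corrector pattern, sketched below with placeholder propagators: coarse(u, t0, t1) would be the pre-trained PINN surrogate and fine(u, t0, t1) the accurate numerical integrator. Both propagators and the state representation are assumptions here, not the chapter's code.

```python
def parareal(u0, t_grid, coarse, fine, n_iter):
    """Parareal predictor-corrector iteration over the time slices in t_grid."""
    n = len(t_grid) - 1
    u = [u0] + [None] * n
    for j in range(n):                         # serial sweep with the cheap coarse propagator
        u[j + 1] = coarse(u[j], t_grid[j], t_grid[j + 1])
    for _ in range(n_iter):
        # the expensive fine propagations of the current iterate are independent, so in a
        # real implementation they run in parallel across time slices
        f = [fine(u[j], t_grid[j], t_grid[j + 1]) for j in range(n)]
        g_old = [coarse(u[j], t_grid[j], t_grid[j + 1]) for j in range(n)]
        new = [u0] + [None] * n
        for j in range(n):                     # serial correction sweep
            g_new = coarse(new[j], t_grid[j], t_grid[j + 1])
            new[j + 1] = g_new + f[j] - g_old[j]
        u = new
    return u
```
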
7

Sadouk, Lamyaa, Mohamed ElHassan Bassir, Ibrahim Bassir, and Boujemâa Achchab. „Physics-Informed Neural Network with PDE Soft Constraint Regularization Invariance“. In Advances in Intelligent System and Smart Technologies, 315–26. Cham: Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-47672-3_31.

8

Fallah, Ali, and Mohammad Mohammadi Aghdam. „Physics-Informed Neural Network for Solution of Nonlinear Differential Equations“. In Nonlinear Approaches in Engineering Application, 163–78. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-53582-6_5.

9

Xie, Baihong, Xiujian Liu, Heye Zhang, Chenchu Xu, Tieyong Zeng, Yixuan Yuan, Guang Yang, and Zhifan Gao. „Conditional Physics-Informed Graph Neural Network for Fractional Flow Reserve Assessment“. In Lecture Notes in Computer Science, 110–20. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43990-2_11.

10

Zarzycki, Krzysztof, and Maciej Ławryńczuk. „Physics-Informed Hybrid Neural Network Model for MPC: A Fuzzy Approach“. In Advanced, Contemporary Control, 183–92. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-35170-9_17.


Conference papers on the topic "Physics-Informed neural network"

1

Qin, Jingtao, and Nanpeng Yu. „Reconfigure Distribution Network with Physics-informed Graph Neural Network“. In 2023 IEEE PES Innovative Smart Grid Technologies Europe (ISGT EUROPE). IEEE, 2023. http://dx.doi.org/10.1109/isgteurope56780.2023.10407802.

2

Nakamura, Yo, Suguru Shiratori, Hideaki Nagano, and Kenjiro Shimano. „Physics-Informed Neural Network with Variable Initial Conditions“. In 7th World Congress on Mechanical, Chemical, and Material Engineering. Avestia Publishing, 2021. http://dx.doi.org/10.11159/htff21.113.

3

Lim, Kart Leong, Rahul Dutta, and Mihai Rotaru. „Physics Informed Neural Network using Finite Difference Method“. In 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2022. http://dx.doi.org/10.1109/smc53654.2022.9945171.

4

Sha, Yanliang, Lingyun Ouyang, and Quan Chen. „A Physics-Informed Neural Network for RRAM modeling“. In 2021 International Applied Computational Electromagnetics Society (ACES-China) Symposium. IEEE, 2021. http://dx.doi.org/10.23919/aces-china52398.2021.9581858.

5

Su, Yawei, Shubin Zeng, Xuqing Wu, Yueqin Huang, and Jiefu Chen. „Physics-Informed Graph Neural Network for Electromagnetic Simulations“. In 2023 XXXVth General Assembly and Scientific Symposium of the International Union of Radio Science (URSI GASS). IEEE, 2023. http://dx.doi.org/10.23919/ursigass57860.2023.10265621.

6

Ashqar, Farah, Rakan Khoury, Caroline Wood, Yi-Hsuan Yeh, Aristeidis Seretis, and Costas D. Sarris. „Physics-Informed Convolutional Neural Network for Indoor Localization“. In 2021 IEEE International Symposium on Antennas and Propagation and USNC-URSI Radio Science Meeting (APS/URSI). IEEE, 2021. http://dx.doi.org/10.1109/aps/ursi47566.2021.9704309.

7

Nair, Siddharth, Timothy F. Walsh, Greg Pickrell, and Fabio Semperlotti. „Acoustic scattering simulations via physics-informed neural network“. In Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems 2024, edited by Maria Pina Limongelli, Ching Tai Ng, and Branko Glisic. SPIE, 2024. http://dx.doi.org/10.1117/12.3010166.

8

Pérez, José, Rafael Baez, Jose Terrazas, Arturo Rodríguez, Daniel Villanueva, Olac Fuentes, Vinod Kumar, Brandon Paez, and Abdiel Cruz. „Physics-Informed Long-Short Term Memory Neural Network Performance on Holloman High-Speed Test Track Sled Study“. In ASME 2022 Fluids Engineering Division Summer Meeting. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/fedsm2022-86953.

Annotation:
Abstract Physics Informed Neural Networks (PINNs) incorporate known physics equations into a network to reduce training time and increase accuracy. Traditional PINNs approaches are based on dense networks that do not consider the fact that simulations are a type of sequential data. Long-Short Term Memory (LSTM) networks are a modified version of Recurrent Neural Networks (RNNs) which are used to analyze sequential datasets. We propose a Physics Informed LSTM network that leverages the power of LSTMs for sequential datasets that also incorporates the governing physics equations of 2D incompressible Navier-Stokes fluid to analyze fluid flow around a stationary geometry resembling the water braking mechanism at the Holloman High-Speed Test Track. Currently, simulation data to analyze the fluid flow of the braking mechanism is generated through ANSYS and is costly, taking several days to generate a single simulation. By incorporating physics equations, our proposed Physics-Informed LSTM network was able to predict the last 20% of a simulation given the first 80% within a small margin of error in a shorter amount of time than a non-informed LSTM. This demonstrates the potential that physics-informed networks that leverage sequential information may have at speeding up computational fluid dynamics simulations and serves as a first step towards adapting PINNs for more advanced network architectures.
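
The combination described above, a sequence model plus a physics penalty, can be sketched as follows (assumed shapes and names, not the authors' network): an LSTM predicts the next flow snapshot from a window of previous ones, and a discrete continuity (divergence-free) penalty on the predicted velocity field is added to the data loss as a stand-in for the full Navier-Stokes constraints.

```python
import torch
import torch.nn as nn

class PILSTM(nn.Module):
    def __init__(self, n_cells, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2 * n_cells, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2 * n_cells)    # flattened (u, v) on the grid
    def forward(self, window):                        # window: (batch, steps, 2*n_cells)
        out, _ = self.lstm(window)
        return self.head(out[:, -1])                  # predicted next snapshot

def divergence_penalty(pred, nx, ny, dx, dy):
    """Central-difference du/dx + dv/dy on interior cells of the predicted snapshot."""
    u = pred[:, : nx * ny].reshape(-1, ny, nx)
    v = pred[:, nx * ny :].reshape(-1, ny, nx)
    du_dx = (u[:, :, 2:] - u[:, :, :-2]) / (2 * dx)
    dv_dy = (v[:, 2:, :] - v[:, :-2, :]) / (2 * dy)
    return (du_dx[:, 1:-1, :] + dv_dy[:, :, 1:-1]).pow(2).mean()

# loss = mse(pred, target_snapshot) + lambda_phys * divergence_penalty(pred, nx, ny, dx, dy)
```
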
9

Almeldein, Ahmed, and Noah Van Dam. „Accelerating Chemical Kinetics Calculations With Physics Informed Neural Networks“. In ASME 2022 ICE Forward Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/icef2022-90371.

Annotation:
Abstract Detailed chemical kinetics calculations can be very computationally expensive, and so various approaches have been used to speed up combustion calculations. Deep neural networks (DNNs) are one promising approach that has seen significant development recently. Standard DNNs, however, do not necessarily follow physical constraints such as conservation of mass. Physics Informed Neural Networks (PINNs) are a class of neural networks that have physical laws embedded within the training process to create networks that follow those physical laws. A new PINN-based DNN approach to chemical kinetics modeling has been developed to make sure mass fraction predictions adhere to the conservation of atomic species. The approach also utilizes a mixture-of-experts (MOE) architecture where the data is distributed on multiple sub-networks followed by a softmax selective layer. The MOE architecture allows the different sub-networks to specialize in different thermochemical regimes, such as early stage ignition reactions or post-flame equilibrium chemistry, then the softmax layer smoothly transitions between the sub-network predictions. This modeling approach was applied to the prediction of methane-air combustion using the GRI-Mech 3.0 as the reference mechanism. The training database was composed of data from 0D ignition delay simulations under initial conditions of 0.2–50 bar pressure, 500–2000 K temperature, an equivalence ratio between 0 and 2, and an N2-dilution percentage of up to 50%. A wide variety of network sizes and architectures of between 3 and 20 sub-networks and 6,600 to 77,000 neurons were tested. The resulting networks were able to predict 0D combustion simulations with similar accuracy and atomic mass conservation as standard kinetics solvers while having a 10–50× speedup in online evaluation time using CPUs, and on average over 200× when using a GPU.
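
Two ingredients of the approach, the mixture-of-experts blending and the conservation of atomic species, can be sketched generically (assumed shapes and placeholder thermochemical data, not the paper's model): a softmax gate blends sub-network outputs, and a penalty keeps the elemental composition implied by the predicted mass fractions equal to that of the input state.

```python
import torch
import torch.nn as nn

n_species, n_elements, n_experts = 53, 5, 4   # species count from GRI-Mech 3.0; other sizes assumed
E = torch.rand(n_elements, n_species)         # placeholder element-composition matrix
W = torch.rand(n_species) + 1.0               # placeholder molar masses

class MoE(nn.Module):
    def __init__(self, d_in, d_out, hidden=256):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_in, hidden), nn.GELU(), nn.Linear(hidden, d_out))
             for _ in range(n_experts)])
        self.gate = nn.Linear(d_in, n_experts)
    def forward(self, x):
        w = torch.softmax(self.gate(x), dim=-1)                  # (batch, n_experts)
        y = torch.stack([e(x) for e in self.experts], dim=-1)    # (batch, d_out, n_experts)
        return (y * w.unsqueeze(1)).sum(-1)                      # smooth blend of the experts

def element_conservation_penalty(Y_in, Y_out):
    """Penalize any change in elemental moles per unit mass implied by the mass fractions."""
    moles_in = (Y_in / W) @ E.T
    moles_out = (Y_out / W) @ E.T
    return (moles_in - moles_out).pow(2).mean()
```
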
10

Villanueva, Daniel, Brandon Paez, Arturo Rodriguez, Ashesh Chattopadhyay, V. M. Krushnarao Kotteda, Rafael Baez, Jose Perez, Jose Terrazas, and Vinod Kumar. „Field Predictions of Hypersonic Cones Using Physics-Informed Neural Networks“. In ASME 2022 Fluids Engineering Division Summer Meeting. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/fedsm2022-86957.

Annotation:
Abstract Physics Informed Neural Networks (PINNs) provide a way to apply deep learning to train a network using data and governing differential equations that control the physical behavior of a system. In this text, we propose using the PINNs framework to solve an inverse problem which will discover the partial differential equations for compressible flow from Mach number = 5 by coupling Navier Stokes Equations with a Deep Neural Network (DNN) based on training data generated by a CFD solver. The equations of momentum in 2-dimensions and the equation of energy will be parametrized to let Neural Networks calculate two established learnable parameters. This paper will focus on capturing physics governed by fluid flow phenomena applied to high-speed flows using PINNs, which will allow us to see disturbances such as a shock wave interaction with the free stream. Subsequently, a quantification of the predicted results of PINNs will be carried out, and it will be determined if PINNs are computationally less expensive than the Spectral Element Method codes so widely used.
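
The inverse formulation described above, in which PDE coefficients are learned jointly with the flow field, can be sketched with a simplified one-dimensional stand-in for the parametrized momentum and energy equations (this is not the authors' code; the equation form, names, and sizes are assumptions): two scalars are registered as trainable parameters and appear in the residual, so fitting the network to CFD data also identifies them.

```python
import torch
import torch.nn as nn

class InversePINN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                                 nn.Linear(64, 64), nn.Tanh(),
                                 nn.Linear(64, 1))
        self.lam1 = nn.Parameter(torch.tensor(0.0))   # learnable convection coefficient
        self.lam2 = nn.Parameter(torch.tensor(0.0))   # learnable diffusion coefficient
    def residual(self, tx):
        # viscous-Burgers-type residual u_t + lam1 * u * u_x - lam2 * u_xx
        tx = tx.clone().requires_grad_(True)
        u = self.net(tx)
        g = torch.autograd.grad(u, tx, torch.ones_like(u), create_graph=True)[0]
        u_t, u_x = g[:, 0:1], g[:, 1:2]
        u_xx = torch.autograd.grad(u_x, tx, torch.ones_like(u_x), create_graph=True)[0][:, 1:2]
        return u_t + self.lam1 * u * u_x - self.lam2 * u_xx

# loss = mse(model.net(data_points), cfd_values) + model.residual(collocation).pow(2).mean()
# Minimizing it fits the field and, at the same time, recovers lam1 and lam2.
```
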

Organizational reports on the topic "Physics-Informed neural network"

1

Ellis, Kai, Nilanjan Banerjee, and Christopher Pierce. Modeling a Thermionic Electron Source Using a Physics-Informed Neural Network. Office of Scientific and Technical Information (OSTI), October 2023. http://dx.doi.org/10.2172/2008057.

2

Pettit, Chris, and D. Wilson. A physics-informed neural network for sound propagation in the atmospheric boundary layer. Engineer Research and Development Center (U.S.), June 2021. http://dx.doi.org/10.21079/11681/41034.

Annotation:
We describe what we believe is the first effort to develop a physics-informed neural network (PINN) to predict sound propagation through the atmospheric boundary layer. PINN is a recent innovation in the application of deep learning to simulate physics. The motivation is to combine the strengths of data-driven models and physics models, thereby producing a regularized surrogate model using less data than a purely data-driven model. In a PINN, the data-driven loss function is augmented with penalty terms for deviations from the underlying physics, e.g., a governing equation or a boundary condition. Training data are obtained from Crank-Nicholson solutions of the parabolic equation with homogeneous ground impedance and Monin-Obukhov similarity theory for the effective sound speed in the moving atmosphere. Training data are random samples from an ensemble of solutions for combinations of parameters governing the impedance and the effective sound speed. PINN output is processed to produce realizations of transmission loss that look much like the Crank-Nicholson solutions. We describe the framework for implementing PINN for outdoor sound, and we outline practical matters related to network architecture, the size of the training set, the physics-informed loss function, and challenge of managing the spatial complexity of the complex pressure.
3

Nadiga, Balasubramanya, and Robert Lowrie. Physics Informed Neural Networks as Computational Physics Emulators. Office of Scientific and Technical Information (OSTI), June 2023. http://dx.doi.org/10.2172/1985825.

4

D'Elia, Marta, Michael L. Parks, Guofei Pang, and George Karniadakis. nPINNs: nonlocal Physics-Informed Neural Networks for a parametrized nonlocal universal Laplacian operator. Algorithms and Applications. Office of Scientific and Technical Information (OSTI), April 2020. http://dx.doi.org/10.2172/1614899.

5

SECOND-ORDER ANALYSIS OF BEAM-COLUMNS BY MACHINE LEARNING-BASED STRUCTURAL ANALYSIS THROUGH PHYSICS-INFORMED NEURAL NETWORKS. The Hong Kong Institute of Steel Construction, December 2023. http://dx.doi.org/10.18057/ijasc.2023.19.4.10.

Annotation:
The second-order analysis of slender steel members can be challenging, especially when large deflection is involved. This paper proposes a novel machine learning-based structural analysis (MLSA) method for second-order analysis of beam-columns, which could be a promising alternative to the prevailing solutions using over-simplified analytical equations or traditional finite-element-based methods. The effectiveness of the conventional machine learning method heavily depends on both the quality and the quantity of the provided data. However, such data are typically scarce and expensive to obtain in structural engineering practice. To address this problem, a new and explainable machine learning-based method, named Physics-informed Neural Networks (PINN), is employed, where physical information is utilized to orient the learning process and create a self-supervised learning procedure, making it possible to train the neural network with few or even no predefined datasets while achieving an accurate approximation. This research extends the PINN method to the problems of second-order analysis of steel beam-columns. Detailed derivations of the governing equations, as well as the essential physical information for the training process, are given. The PINN framework and the training procedure are provided, where an adaptive loss weight control algorithm and the transfer learning technique are adopted to improve numerical efficiency. The practicability and accuracy of the proposed method are validated by four sets of verification examples.
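
One common form of adaptive loss-weight control of the kind mentioned above is gradient-norm balancing, sketched below; the paper's exact algorithm may differ, so treat this as an assumed illustration. The weight of the boundary term is updated so that its gradient magnitude keeps pace with that of the governing-equation term.

```python
import torch

def balanced_weight(loss_pde, loss_bc, params, w_bc, alpha=0.9):
    """Update the boundary-loss weight from the ratio of gradient norms (one common scheme)."""
    g_pde = torch.autograd.grad(loss_pde, params, retain_graph=True, allow_unused=True)
    g_bc = torch.autograd.grad(loss_bc, params, retain_graph=True, allow_unused=True)
    norm = lambda gs: torch.sqrt(sum((g ** 2).sum() for g in gs if g is not None))
    w_new = norm(g_pde) / (norm(g_bc) + 1e-12)
    return alpha * w_bc + (1 - alpha) * w_new.detach()   # smoothed weight update

# inside the training loop:
# w_bc = balanced_weight(loss_pde, loss_bc, list(model.parameters()), w_bc)
# loss = loss_pde + w_bc * loss_bc
```
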