
Theses on the topic "Data-Driven reduced order modeling"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles


See the top 23 dissertations (master's and doctoral theses) for research on the topic "Data-Driven reduced order modeling".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, if it is included in the metadata.

Browse theses from many different areas of study and compile a correct bibliography.

1

Mou, Changhong. "Cross-Validation of Data-Driven Correction Reduced Order Modeling". Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/87610.

Full text
Abstract:
In this thesis, we develop a data-driven correction reduced order model (DDC-ROM) for numerical simulation of fluid flows. The general DDC-ROM involves two stages: (1) we apply ROM filtering (such as ROM projection) to the full order model (FOM) and construct the filtered ROM (F-ROM). (2) We use data-driven modeling to model the nonlinear interactions between resolved and unresolved modes, which solves the F-ROM's closure problem. In the DDC-ROM, a linear or quadratic ansatz is used in the data-driven modeling step. In this thesis, we propose a new cubic ansatz. To get the unknown coefficients in our ansatz, we solve an optimization problem that minimizes the difference between the FOM data and the ansatz. We test the new DDC-ROM in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient. Furthermore, we perform a cross-validation of the DDC-ROM to investigate whether it can be successful in computational settings that are different from the training regime.
M.S.
Practical engineering and scientific problems often require the repeated simulation of unsteady fluid flows. In these applications, the computational cost of high-fidelity full-order models can be prohibitively high. Reduced order models (ROMs) represent efficient alternatives to brute force computational approaches. In this thesis, we propose a data-driven correction ROM (DDC-ROM) in which available data and an optimization problem are used to model the nonlinear interactions between resolved and unresolved modes. In order to test the new DDC-ROM's predictability, we perform its cross-validation for the one-dimensional viscous Burgers equation and different training regimes.
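The optimization step described above (fitting the ansatz coefficients so that the ansatz matches the FOM data) can be sketched with a plain least-squares solve. Everything below is an illustrative stand-in with synthetic data and dimensions, not the thesis's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (not the thesis's data): r ROM modes, M snapshots.
r, M = 3, 200
a = rng.standard_normal((M, r))                       # reduced coordinates a(t_j)
quad = np.einsum('mi,mj->mij', a, a).reshape(M, -1)   # quadratic terms a_i * a_j

# A "true" closure term built from a quadratic ansatz, to be recovered from data.
A_true = rng.standard_normal((r, r))
B_true = rng.standard_normal((r, r * r))
tau = a @ A_true.T + quad @ B_true.T

# Least-squares fit of the ansatz coefficients: min_C || tau - Phi C ||_F
Phi = np.hstack([a, quad])
C, *_ = np.linalg.lstsq(Phi, tau, rcond=None)

residual = np.linalg.norm(Phi @ C - tau)
print(residual)  # essentially zero: the ansatz class contains the target
```

A cubic ansatz, as proposed in the thesis, would simply append the cubic monomials a_i a_j a_k as extra columns of `Phi`.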
APA, Harvard, Vancouver, ISO, and other styles
2

Koc, Birgul. "Commutation Error in Reduced Order Modeling". Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/87537.

Full text
Abstract:
We investigate the effect of spatial filtering on the recently proposed data-driven correction reduced order model (DDC-ROM). We compare two filters: the ROM projection, which was originally used to develop the DDC-ROM, and the ROM differential filter, which uses a Helmholtz operator to attenuate the small scales in the input signal. We focus on the following questions: "Do filtering and differentiation with respect to the space variable commute when filtering is applied to the diffusion term?" or, in other words, "Do we have a commutation error (CE) in the diffusion term?" and "If so, is the commutation error data-driven correction ROM (CE-DDC-ROM) more accurate than the original DDC-ROM?" If the CE exists, the DDC-ROM has two different correction terms: one comes from the diffusion term and the other from the nonlinear convection term. We investigate the DDC-ROM and the CE-DDC-ROM equipped with the two ROM spatial filters in the numerical simulation of the Burgers equation with different diffusion coefficients and two different initial conditions (smooth and non-smooth).
M.S.
We propose reduced order models (ROMs) for an efficient and relatively accurate numerical simulation of nonlinear systems. We use the ROM projection and the ROM differential filters to construct a novel data-driven correction ROM (DDC-ROM). We show that the ROM spatial filtering and differentiation do not commute for the diffusion operator. Furthermore, we show that the resulting commutation error has an important effect on the ROM, especially for low viscosity values. As a mathematical model for our numerical study, we use the one-dimensional Burgers equations with smooth and non-smooth initial conditions.
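As a toy illustration of the commutation question raised above (not the thesis's actual filters), one can check numerically that an orthogonal projection onto a handful of basis vectors, standing in here for the ROM projection filter with a random basis in place of POD modes, generally does not commute with a discrete diffusion operator:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D periodic grid and a second-derivative (diffusion) operator.
n = 64
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
D2 = (np.roll(np.eye(n), 1, 0) - 2 * np.eye(n) + np.roll(np.eye(n), -1, 0)) / dx**2

# "ROM projection" filter: orthogonal projection onto a few basis vectors
# (a random orthonormal basis stands in for POD modes).
Phi, _ = np.linalg.qr(rng.standard_normal((n, 5)))
P = Phi @ Phi.T

u = np.sin(3 * x) + 0.5 * np.cos(7 * x)
commutation_error = np.linalg.norm(P @ (D2 @ u) - D2 @ (P @ u))
print(commutation_error)  # nonzero: projection and differentiation do not commute
```

Note that a differential filter built from the same operator `D2` would commute with it exactly; the interesting commutation errors arise for projection-type filters and for boundary effects.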
3

Mou, Changhong. "Data-Driven Variational Multiscale Reduced Order Modeling of Turbulent Flows". Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103895.

Full text
Abstract:
In this dissertation, we consider two different strategies for improving the projection-based reduced order model (ROM) accuracy: (I) adding closure terms to the standard ROM; and (II) using Lagrangian data to improve the ROM basis. Following strategy (I), we propose a new data-driven reduced order model (ROM) framework that centers around the hierarchical structure of the variational multiscale (VMS) methodology and utilizes data to increase the ROM accuracy at a modest computational cost. The VMS methodology is a natural fit for the hierarchical structure of the ROM basis: In the first step, we use the ROM projection to separate the scales into three categories: (i) resolved large scales, (ii) resolved small scales, and (iii) unresolved scales. In the second step, we explicitly identify the VMS-ROM closure terms, i.e., the terms representing the interactions among the three types of scales. In the third step, we use available data to model the VMS-ROM closure terms. Thus, instead of phenomenological models used in VMS for standard numerical discretizations (e.g., eddy viscosity models), we utilize available data to construct new structural VMS-ROM closure models. Specifically, we build ROM operators (vectors, matrices, and tensors) that are closest to the true ROM closure terms evaluated with the available data. We test the new data-driven VMS-ROM in the numerical simulation of four test cases: (i) the 1D Burgers equation with viscosity coefficient $\nu = 10^{-3}$; (ii) a 2D flow past a circular cylinder at Reynolds numbers $Re=100$, $Re=500$, and $Re=1000$; (iii) the quasi-geostrophic equations at Reynolds number $Re=450$ and Rossby number $Ro=0.0036$; and (iv) a 2D flow over a backward-facing step at Reynolds number $Re=1000$. The numerical results show that the data-driven VMS-ROM is significantly more accurate than standard ROMs. Furthermore, we propose a new hybrid ROM framework for the numerical simulation of fluid flows.
This hybrid framework incorporates two closure modeling strategies: (i) A structural closure modeling component that involves the recently proposed data-driven variational multiscale ROM approach, and (ii) A functional closure modeling component that introduces an artificial viscosity term. We also utilize physical constraints for the structural ROM operators in order to add robustness to the hybrid ROM. We perform a numerical investigation of the hybrid ROM for the three-dimensional turbulent channel flow at a Reynolds number $Re = 13,750$. In addition, we focus on the mathematical foundations of ROM closures. First, we extend the verifiability concept from large eddy simulation to the ROM setting. Specifically, we call a ROM closure model verifiable if a small ROM closure model error (i.e., a small difference between the true ROM closure and the modeled ROM closure) implies a small ROM error. Second, we prove that a data-driven ROM closure (i.e., the data-driven variational multiscale ROM) is verifiable. For strategy (II), we propose new Lagrangian inner products that we use together with Eulerian and Lagrangian data to construct new Lagrangian ROMs. We show that the new Lagrangian ROMs are orders of magnitude more accurate than the standard Eulerian ROMs, i.e., ROMs that use standard Eulerian inner product and data to construct the ROM basis. Specifically, for the quasi-geostrophic equations, we show that the new Lagrangian ROMs are more accurate than the standard Eulerian ROMs in approximating not only Lagrangian fields (e.g., the finite time Lyapunov exponent (FTLE)), but also Eulerian fields (e.g., the streamfunction). We emphasize that the new Lagrangian ROMs do not employ any closure modeling to model the effect of discarded modes (which is standard procedure for low-dimensional ROMs of complex nonlinear systems). 
Thus, the dramatic increase in the new Lagrangian ROMs' accuracy is entirely due to the novel Lagrangian inner products used to build the Lagrangian ROM basis.
Doctor of Philosophy
Reduced order models (ROMs) are popular in physical and engineering applications: for example, ROMs are widely used in aircraft design, as they can greatly reduce the computational cost of an aircraft's aeroelastic predictions while retaining good accuracy. However, for high Reynolds number turbulent flows, such as blood flows in arteries, oil transport in pipelines, and ocean currents, standard ROMs may yield inaccurate results. To improve ROM accuracy for turbulent flows, this dissertation investigates three different types of ROMs. Both numerical and theoretical results show that the proposed new ROMs yield more accurate results than the standard ROM and thus can be more useful.
4

Swischuk, Renee C. (Renee Copland). "Physics-based machine learning and data-driven reduced-order modeling". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122682.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 123-128).
This thesis considers the task of learning efficient low-dimensional models for dynamical systems. To be effective in an engineering setting, these models must be predictive -- that is, they must yield reliable predictions for conditions outside the data used to train them. These models must also be able to make predictions that enforce physical constraints. Achieving these tasks is particularly challenging for the case of systems governed by partial differential equations, where generating data (either from high-fidelity simulations or from physical experiments) is expensive. We address this challenge by developing learning approaches that embed physical constraints. We propose two physics-based approaches for generating low-dimensional predictive models. The first leverages the proper orthogonal decomposition (POD) to represent high-dimensional simulation data with a low-dimensional physics-based parameterization in combination with machine learning methods to construct a map from model inputs to POD coefficients. A comparison of four machine learning methods is provided through an application of predicting flow around an airfoil. This framework also provides a way to enforce a number of linear constraints by modifying the data with a particular solution. The results help to highlight the importance of including physics knowledge when learning from small amounts of data. We also apply a data-driven approach to learning the operators of low-dimensional models. This method provides an avenue for constructing low-dimensional models of systems where the operators of discretized governing equations are unknown or too complex, while also having the ability to enforce physical constraints. The methodology is applied to a two-dimensional combustion problem, where discretized model operators are unavailable. The results show that the method is able to accurately make predictions and enforce important physical constraints.
by Renee C. Swischuk.
S.M.
S.M. Massachusetts Institute of Technology, Computation for Design and Optimization Program
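The first approach described in the abstract (POD compression plus a learned map from model inputs to POD coefficients) can be sketched as follows; the snapshot family, mode count, and polynomial regressor below are illustrative assumptions, not the thesis's airfoil setup:

```python
import numpy as np

# Synthetic snapshot matrix: each column is a "flow field" for one input mu
# (a stand-in for the thesis's airfoil-flow simulation data).
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(0.5, 2.0, 30)
S = np.stack([np.sin(mu * np.pi * x) + 0.1 * mu * x for mu in mus], axis=1)

# Step 1: POD basis from the SVD of the snapshot matrix.
U, s, Vt = np.linalg.svd(S, full_matrices=False)
r = 6
Ur = U[:, :r]
coeffs = Ur.T @ S                        # POD coefficients, one column per mu

# Step 2: learn a map mu -> POD coefficients (here, a degree-5 polynomial per mode;
# the thesis compares several machine learning regressors for this step).
fits = [np.polyfit(mus, coeffs[i], 5) for i in range(r)]

# Predict the field at an unseen parameter value and compare with the truth.
mu_new = 1.23
a_new = np.array([np.polyval(f, mu_new) for f in fits])
u_pred = Ur @ a_new
u_true = np.sin(mu_new * np.pi * x) + 0.1 * mu_new * x
max_err = np.max(np.abs(u_pred - u_true))
print(max_err)  # small: the learned low-dimensional map generalizes in mu
```

Linear constraints of the kind mentioned in the abstract can be enforced by subtracting a particular solution from the snapshots before the SVD, so that the POD subspace satisfies the homogeneous constraint by construction.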
5

Ali, Naseem Kamil. "Thermally (Un-) Stratified Wind Plants: Stochastic and Data-Driven Reduced Order Descriptions/Modeling". PDXScholar, 2018. https://pdxscholar.library.pdx.edu/open_access_etds/4634.

Full text
Abstract:
Wind energy is one of the significant sources of renewable energy, yet a number of challenges preclude optimal operation of wind plants. Research is warranted in order to minimize the power losses and improve the productivity of wind plants. Here, a framework combining turbulence theory and data mining techniques is built to elucidate physics and mechanisms driving the energy extraction of the wind plants under a number of atmospheric/operating conditions. The performance of wind turbines is subjected to adverse effects caused by wake interactions. Therefore, it is crucial to understand wake-to-wake interactions as well as wake-to-atmospheric boundary layer interactions. Experimental and numerical data sets are examined in order to provide descriptions of the wakes and extract relevant features. As wakes merge, it is of interest to observe characteristics within the turbulent velocity signal obtained via wind tunnel experiments. Higher order moments, structure functions, intermittency and multifractality analysis are investigated to distinguish the flow dynamics. In this manner, considered approaches highlight the flow deceleration induced by the wind turbines, which subsequently changes the energy transfer rate imposed by the coherent eddies, and adapt the equilibrium range in the energy cascade. Also, wind turbines induce scale interactions and cause the intermittency that lingers at large and small scales. When wind plants interact dynamically with small scales, the flow becomes highly intermittent and multifractality is increased, especially near the rotor. Multifractality parameters, including the Hurst exponent and the combination factor, show the ability to describe the flow state in terms of its development. Based on Markov theory, the time evolution of the probability density function of the velocity is described via the Fokker-Planck equation and its Kramers-Moyal coefficients. 
Stochastic analysis proves the non-universality of the turbulent cascade in the immediate vicinity of the rotor, and the impact of the generation mechanism on the flow cascade. Classifying the wake flow based on the velocity and intermittency signs emphasizes that a negative correlation is dominant downstream from the rotor. These results reflect large-scale organization of the velocity-intermittency events corresponding to a recirculation region near the hub height and bottom tip. A linear regression approach based on the Gram-Charlier series expansion of the joint probability density function successfully models the contribution of the second and fourth quadrants. Thus, the model is able to predict the imbalance in the velocity and intermittency contribution to momentum transfer. Via large eddy simulations, the structure of the turbulent flow within the array under stratified conditions is quantified through the use of the Reynolds stress anisotropy tensor, proper orthogonal decomposition and cluster-based modeling. Perturbations induced by the turbine wakes are absorbed by the background turbulence in the unstable and neutrally stratified cases. In contrast, the flow in the stably stratified case is fully dominated by the presence of turbines and strongly influenced by the Coriolis force. Also, during the unstable period the turbulent kinetic energy is maximal, leading to fast convergence of the cumulative energy with only a few modes. The Reynolds stress anisotropy tensor reveals that under unstable thermal stratification the turbulence state tends to be more isotropic. The turbulent mixing due to buoyancy determines the degree of anisotropy and the energy distribution between the flow layers. The wakes of the turbines display a large degree of anisotropy due to the correlation with the turbulent kinetic energy production. A combinatorial technique merging image segmentation via K-Means clustering and the colormap of the barycentric map is proposed.
Clustering aids in extracting identical features from the spatial distribution of anisotropy colormap images by minimizing the sum of squared error over all clusters. Clustering also highlights the wake expansion and interaction produced by the wind turbines as a function of thermal stratification. A cluster-based reduced-order dynamical model is proposed for the flow field and passive scalars; the model relies on full-state measurements. The dynamical behavior is predicted through the cluster transition matrix and modeled as a Markov process. The geometric nature of the attractor shows the ability to assess the quality of the clustering and identify transition regions. Periodic trends in the cluster transition matrix characterize the intrinsic periodic behavior of the wake. The modeling strategy points out a feasible path for future design and control that can be used to maximize power output. In addition, characterization of intermittency with a power integration model can allow for the prediction of power fluctuations in wind plants.
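The cluster-based modeling step described above (partition snapshots into clusters, then model the dynamics as a Markov chain via a cluster transition matrix) can be sketched as follows; the noisy limit-cycle data and the minimal k-means loop are stand-ins for the LES wake data and production clustering tools:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in snapshot sequence: a noisy limit cycle (not the theses' LES data).
t = np.linspace(0.0, 8 * np.pi, 400)
X = np.stack([np.cos(t), np.sin(t)], axis=1) + 0.05 * rng.standard_normal((400, 2))

# Minimal k-means (Lloyd's algorithm) to partition the snapshots into k clusters.
k = 6
centroids = X[rng.choice(len(X), k, replace=False)]
for _ in range(50):
    labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    for j in range(k):
        if np.any(labels == j):
            centroids[j] = X[labels == j].mean(axis=0)

# Cluster transition matrix: P[i, j] estimates the probability of moving from
# cluster i to cluster j between consecutive snapshots (a Markov-chain model).
P = np.zeros((k, k))
for i, j in zip(labels[:-1], labels[1:]):
    P[i, j] += 1
P /= np.maximum(P.sum(axis=1, keepdims=True), 1)
print(P.sum(axis=1))  # rows of visited clusters sum to 1
```

For a periodic wake, the dominant entries of `P` trace a cycle through the clusters, which is the "intrinsic periodical behavior" the abstract refers to.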
6

Xie, Xuping. "Large Eddy Simulation Reduced Order Models". Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/77626.

Full text
Abstract:
This dissertation uses spatial filtering to develop a large eddy simulation reduced order model (LES-ROM) framework for fluid flows. Proper orthogonal decomposition is utilized to extract the dominant spatial structures of the system. Within the general LES-ROM framework, two approaches are proposed to address the celebrated ROM closure problem. No phenomenological arguments (e.g., of eddy viscosity type) are used to develop these new ROM closure models. The first novel model is the approximate deconvolution ROM (AD-ROM), which uses methods from image processing and inverse problems to solve the ROM closure problem. The AD-ROM is investigated in the numerical simulation of a 3D flow past a circular cylinder at a Reynolds number $Re=1000$. The AD-ROM generates accurate results without any numerical dissipation mechanism. It also decreases the CPU time of the standard ROM by orders of magnitude. The second new model is the calibrated-filtered ROM (CF-ROM), which is a data-driven ROM. The available full order model results are used offline in an optimization problem to calibrate the ROM subfilter-scale stress tensor. The resulting CF-ROM is tested numerically in the simulation of the 1D Burgers equation with a small diffusion parameter. The numerical results show that the CF-ROM is more efficient than and as accurate as state-of-the-art ROM closure models.
Ph. D.
7

Bertram, Anna [Verfasser], and Ralf [Akademischer Betreuer] Zimmermann. "Data-driven variable-fidelity reduced order modeling for efficient vehicle shape optimization / Anna Bertram ; Betreuer: Ralf Zimmermann". Braunschweig : Technische Universität Braunschweig, 2018. http://d-nb.info/1175392154/34.

Full text
9

D'Alessio, Giuseppe. "Data-driven models for reacting flows simulations: reduced-order modelling, chemistry acceleration and analysis of high-fidelity data". Doctoral thesis, Université Libre de Bruxelles, 2021. https://dipot.ulb.ac.be/dspace/bitstream/2013/328064/5/contratGA.pdf.

Full text
Abstract:
Combustion science must necessarily go through a deep process of innovation, as only by improving energy efficiency and fuel flexibility will it be possible to mitigate the impact of anthropogenic activities on the climate and the environment. Because of the strong relation observed in chemically reacting flows between the fluid-dynamic conditions and the chemical kinetics, the use of Computational Fluid Dynamics (CFD) simulations with detailed kinetic mechanisms represents the best tool to optimize and develop novel combustion systems. In fact, while CFD makes it possible to retrieve information that cannot be extracted by experimental means (such as the turbulence-chemistry interaction and the local straining rates) and avoids the costs associated with the scale-up process from laboratory-scale experiments, the use of detailed kinetic mechanisms makes it possible to correctly describe process conditions that are relevant from an industrial point of view (i.e., in which the chemical and mixing time scales are comparable), as well as to predict the formation of complex chemical species, such as pollutants. Nevertheless, the use of detailed kinetic mechanisms in numerical simulations adds a considerable number of differential equations to be solved (because of the large number of species taken into account), and therefore increases the computational complexity of the CFD model. Thus, Machine Learning (ML) algorithms and Reduced-Order Models (ROMs) can be effectively included in the numerical description of chemically reacting flows. In fact, they can be used either to reduce the computational cost associated with the large number of equations in CFD simulations carried out with detailed chemistry, or to leverage the detailed information found in massive, high-fidelity data obtained from Direct Numerical Simulations (DNS) for model development and validation.
In this Thesis, unsupervised and supervised learning algorithms were employed to design a novel adaptive-chemistry approach: the Sample-Partitioning Adaptive Reduced Chemistry (SPARC). This framework can be used to reduce the computational effort required by detailed CFD simulations thanks to a kinetic reduction accomplished in light of the local conditions of the thermochemical field. Several machine-learning algorithms, such as Principal Component Analysis (PCA), Local Principal Component Analysis (LPCA), and Artificial Neural Networks (ANNs), were coupled with the Directed Relation Graph with Error Propagation (DRGEP), a graph-based tool for the automatic reduction of kinetic mechanisms. The aforementioned algorithms were compared to achieve the optimal formulation of the adaptive approach, such that the best performance, in terms of accuracy and computational speed-up with respect to the CFD simulation carried out with detailed kinetics, could be obtained. Finally, PCA-based algorithms were proposed and tested to perform feature extraction and local feature selection from high-fidelity data, obtained by means of a DNS of an n-heptane jet reacting in air. PCA, as well as two formulations of LPCA and the Procrustes analysis, were employed and compared with the aim of extracting the main features of the turbulent reacting jet in an unsupervised fashion (i.e., to perform data mining tasks), as well as aiding the formulation of local optimized ROMs. All the codes employed to perform the unsupervised and supervised machine learning tasks in the current work were also included in an open-source Python framework, called OpenMORe, designed to perform reduction, clustering and data analysis, and specifically conceived for reacting flows. In fact, although many open-source Python packages are already available, they often cannot be adapted to the user's specific needs, unlike OpenMORe.
In addition, many features such as the PCA-based clustering algorithm, or the local feature selection via PCA, are not yet available on any commercial or open-source software, to the best of the author’s knowledge.
Doctorat en Sciences de l'ingénieur et technologie
This thesis is submitted to the Université Libre de Bruxelles (ULB) and to the Politecnico di Milano for the degree of Doctor of Philosophy. This doctoral work has been performed at the Université Libre de Bruxelles, École polytechnique de Bruxelles, Aero-Thermo-Mechanics Laboratory, Bruxelles, Belgium with Professor Alessandro Parente and at the Politecnico di Milano, CRECK Modelling Lab, Department of Chemistry, Materials and Chemical Engineering, Milan, Italy with Professor Alberto Cuoci.
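A minimal sketch of the PCA-based feature extraction the abstract describes, on synthetic data standing in for the DNS thermochemical state (the dimensions and embedded low-rank structure are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in thermochemical state matrix: n observations x p variables
# (the thesis uses DNS data of an n-heptane jet; this is synthetic).
n, p = 500, 10
latent = rng.standard_normal((n, 2))        # hidden 2D structure
W = rng.standard_normal((2, p))
X = latent @ W + 0.01 * rng.standard_normal((n, p))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

q = 2
Z = Xc @ Vt[:q].T                            # reduced representation (scores)
X_rec = Z @ Vt[:q] + X.mean(axis=0)          # rank-q reconstruction
rel_err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
print(rel_err)  # small: two components capture the embedded 2D structure
```

Local PCA, as compared in the thesis, applies this same decomposition separately within each cluster of observations instead of globally.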
10

Ghosh, Rajat. "Transient reduced-order convective heat transfer modeling for a data center". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50380.

Full text
Abstract:
A measurement-based reduced-order heat transfer modeling framework is developed to optimize cooling costs of dynamic and virtualized data centers. The reduced-order model is based on a proper orthogonal decomposition-based model order reduction technique. For data center heat transfer modeling, the framework simulates air temperatures and CPU temperatures as a parametric response surface with different cooling infrastructure design variables as the input parameters. The parametric framework enables an efficient design optimization tool and is used to solve several important problems related to energy-efficient thermal design of data centers. The first of these problems is about determining optimal response time during emergencies such as power outages in data centers. To solve this problem, transient air temperatures are modeled with time as a parameter. This parametric prediction framework is useful as a near-real-time thermal prognostic tool. The second problem pertains to reducing temperature monitoring cost in data centers. To solve this problem, transient air temperatures are modeled with spatial location as the parameter. This parametric model improves spatial resolution of measured temperature data and thereby reduces sensor requisition for transient temperature monitoring in data centers. The third problem is related to determining optimal cooling set points in response to dynamically-evolving heat loads in a data center. To solve this problem, transient air temperatures are modeled with heat load and time as the parameters. This modeling framework is particularly suitable for life-cycle design of data center cooling infrastructure. The last problem is related to determining optimal cooling set points in response to dynamically-evolving computing workload in a virtualized data center. To solve this problem, transient CPU temperatures under a given computing load profile are modeled with cooling resource set-points as the parameters.
11

Rambo, Jeffrey. "Reduced order modeling of turbulent convection: application to data center thermal management". Saarbrücken: VDM Verlag Dr. Müller, 2006. http://d-nb.info/989386961/04.

Full text
12

Spottswood, Stephen Michael. "Identification of Nonlinear Parameters from Experimental Data for Reduced Order Models". University of Cincinnati / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1163016945.

Full text
13

Rambo, Jeffrey D. "Reduced-Order Modeling of Multiscale Turbulent Convection: Application to Data Center Thermal Management". Diss., Available online, Georgia Institute of Technology, 2006, 2006. http://etd.gatech.edu/theses/available/etd-03272006-080024/.

Full text
Abstract:
Thesis (Ph. D.)--Mechanical Engineering, Georgia Institute of Technology, 2006.
Marc Smith, Committee Member ; P.K. Yeung, Committee Member ; Benjamin Shapiro, Committee Member ; Sheldon Jeter, Committee Member ; Yogendra Joshi, Committee Chair.
14

Lauzeral, Nathan. "Reduced order and sparse representations for patient-specific modeling in computational surgery". Thesis, Ecole centrale de Nantes, 2019. http://www.theses.fr/2019ECDN0062.

Full text
Abstract:
This thesis investigates the use of model order reduction methods based on sparsity-related techniques for the development of real-time biophysical modeling. In particular, it focuses on the embedding of interactive biophysical simulation into patient-specific models of tissues and organs to enhance medical images and assist the clinician in the process of informed decision making. In this context, three fundamental bottlenecks arise. The first lies in the embedding of the shape parametrization into the parametric reduced order model to faithfully represent the patient's anatomy. A non-intrusive approach relying on a sparse sampling of the space of anatomical features is introduced and validated. Then, we tackle the problem of data completion and image reconstruction from partial or incomplete datasets based on physical priors. The proposed solution has the potential to perform scene registration in the context of augmented reality for laparoscopy. Quasi-real-time computations are reached by using a new hyper-reduction approach based on a sparsity-promoting technique. Finally, the third challenge concerns the representation of biophysical systems under uncertainty of the underlying parameters. It is shown that traditional model order reduction approaches are not always successful in producing a low-dimensional representation of a model, in particular in the case of electrosurgery simulation. An alternative is proposed using a metamodeling approach. To this end, we successfully extend the use of sparse regression methods to the case of systems with stochastic parameters.
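The sparse regression ingredient mentioned at the end of the abstract can be illustrated with a minimal ISTA (proximal gradient) solver for l1-regularized least squares; the data, penalty, and dimensions below are synthetic assumptions, not the thesis's setup:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in problem: recover a sparse coefficient vector from fewer samples
# than features, the core idea behind sparse-regression metamodeling.
n_samples, n_features = 60, 100
X = rng.standard_normal((n_samples, n_features))
w_true = np.zeros(n_features)
w_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(n_samples)

# ISTA: gradient step on the least-squares term, then soft-thresholding
# (the proximal operator of the l1 penalty alpha * ||w||_1).
alpha = 1.0
L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
w = np.zeros(n_features)
for _ in range(2000):
    grad = X.T @ (X @ w - y)
    z = w - grad / L
    w = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)

print(np.flatnonzero(np.abs(w) > 0.1))  # recovered support
```

The recovered support concentrates on the truly active features, which is why sparse regression is attractive for metamodels with many candidate terms but few dominant ones.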
Gli stili APA, Harvard, Vancouver, ISO e altri
15

DE STEFANO, MARCO. "Modeling and Simulation of Nonlinearly Loaded Electromagnetic Systems via Reduced Order Models - A Case Study: Energy Selective Surfaces". Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2972203.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
16

Zavar, Moosavi Azam Sadat. "Probabilistic and Statistical Learning Models for Error Modeling and Uncertainty Quantification". Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82491.

Testo completo
Abstract (sommario):
Simulations and modeling of large-scale systems are vital to understanding real world phenomena. However, even advanced numerical models can only approximate the true physics. The discrepancy between model results and nature can be attributed to different sources of uncertainty including the parameters of the model, input data, or some missing physics that is not included in the model due to a lack of knowledge or high computational costs. Uncertainty reduction approaches seek to improve the model accuracy by decreasing the overall uncertainties in models. Aiming to contribute to this area, this study explores uncertainty quantification and reduction approaches for complex physical problems. This study proposes several novel probabilistic and statistical approaches for identifying the sources of uncertainty, modeling the errors, and reducing uncertainty to improve the model predictions for large-scale simulations. We explore different computational models. The first class of models studied herein are inherently stochastic, and numerical approximations suffer from stability and accuracy issues. The second class of models are partial differential equations, which capture the laws of mathematical physics; however, they only approximate a more complex reality, and have uncertainties due to missing dynamics that are not captured by the models. The third class are low-fidelity models, which are fast approximations of very expensive high-fidelity models. The reduced-order models have uncertainty due to loss of information in the dimension reduction process. We also consider uncertainty analysis in the data assimilation framework, specifically for ensemble-based methods where the effect of sampling errors is alleviated by localization. Finally, we study the uncertainty in numerical weather prediction models coming from approximate descriptions of physical processes.
Ph. D.
Gli stili APA, Harvard, Vancouver, ISO e altri
17

Koc, Birgul. "Numerical Analysis for Data-Driven Reduced Order Model Closures". Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103202.

Testo completo
Abstract (sommario):
This dissertation contains work that addresses both theoretical and numerical aspects of reduced order models (ROMs). In an under-resolved regime, the classical Galerkin reduced order model (G-ROM) fails to yield accurate approximations. Thus, we propose a new ROM, the data-driven variational multiscale ROM (DD-VMS-ROM), built by adding a closure term to the G-ROM, aiming to increase the numerical accuracy of the ROM approximation without decreasing the computational efficiency. The closure term is constructed based on the variational multiscale framework. To model the closure term, we use data-driven modeling. In other words, by using the available data, we find ROM operators that approximate the closure term. To present the closure term's effect on the ROMs, we numerically compare the DD-VMS-ROM with other standard ROMs. In numerical experiments, we show that the DD-VMS-ROM is significantly more accurate than the standard ROMs. Furthermore, to understand the closure term's physical role, we present a theoretical and numerical investigation of the closure term's role in long-time integration. We theoretically prove and numerically show that there is energy exchange from the most energetic modes to the least energetic modes in closure terms in a long time averaging. One of the promising contributions of this dissertation is providing the numerical analysis of the data-driven closure model, which has not been studied before. At both the theoretical and the numerical levels, we investigate what conditions guarantee that the small difference between the data-driven closure model and the full order model (FOM) closure term implies that the approximated solution is close to the FOM solution. In other words, we perform theoretical and numerical investigations to show that the data-driven model is verifiable. Apart from studying the ROM closure problem, we also investigate the setting in which the G-ROM converges optimally.
We explore the ROM error bounds' optimality by considering the difference quotients (DQs). We theoretically prove and numerically illustrate that both the ROM projection error and the ROM error are suboptimal without the DQs, and optimal if the DQs are used.
Doctor of Philosophy
In many realistic applications, obtaining an accurate approximation to a given problem can require a tremendous number of degrees of freedom. Solving these large systems of equations can take days or even weeks on standard computational platforms. Thus, lower-dimensional models, i.e., reduced order models (ROMs), are often used instead. The ROMs are computationally efficient and accurate when the underlying system has dominant and recurrent spatial structures. Our contribution to reduced order modeling is adding a data-driven correction term, which carries important information and yields better ROM approximations. This dissertation's theoretical and numerical results show that the new ROM equipped with a closure term yields more accurate approximations than the standard ROM.
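The core optimization step behind such data-driven closures (fitting ansatz operators to closure data extracted from FOM snapshots by least squares) can be sketched as follows. This is a toy setup with synthetic data: the dimensions, the linear-plus-quadratic ansatz, and the data are illustrative assumptions, not the dissertation's actual configuration.

```python
import numpy as np

# Given ROM coefficients a(t) and corresponding "exact" closure data tau(t),
# solve a least-squares problem for the operators of the ansatz
# tau ~ A a + B (a ⊗ a). All data here are synthetic.
rng = np.random.default_rng(1)
r, n_snap = 2, 50

a = rng.standard_normal((n_snap, r))                  # ROM coefficients
A_true = np.array([[0.5, -0.2], [0.1, 0.3]])
B_true = rng.standard_normal((r, r * r)) * 0.1
quad = np.einsum('ti,tj->tij', a, a).reshape(n_snap, r * r)
tau = a @ A_true.T + quad @ B_true.T                  # synthetic closure data

# Assemble the regression matrix [a, a⊗a] and solve for [A, B] jointly.
X = np.hstack([a, quad])                              # (n_snap, r + r^2)
ops = np.linalg.lstsq(X, tau, rcond=None)[0].T        # (r, r + r^2)
A_fit, B_fit = ops[:, :r], ops[:, r:]

# The fitted closure reproduces the training data.
print(np.allclose(A_fit @ a.T + B_fit @ quad.T, tau.T, atol=1e-8))
```

Note that the quadratic block contains redundant (symmetric) columns, so the recovered operators are not unique, but the minimum-norm least-squares solution still reproduces the closure data; real ROM closures must additionally guard against overfitting outside the training regime.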
Gli stili APA, Harvard, Vancouver, ISO e altri
18

Benaceur, Amina. "Réduction de modèles en thermo-mécanique". Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1140/document.

Testo completo
Abstract (sommario):
Cette thèse propose trois nouveaux développements de la méthode des bases réduites (RB) et de la méthode d'interpolation empirique (EIM) pour des problèmes non-linéaires. La première contribution est une nouvelle méthodologie, la méthode progressive RB-EIM (PREIM) dont l'objectif est de réduire le coût de la phase de construction du modèle réduit tout en maintenant une bonne approximation RB finale. L'idée est d'enrichir progressivement l'approximation EIM et l'espace RB, contrairement à l'approche standard où leurs constructions sont disjointes. La deuxième contribution concerne la RB pour les inéquations variationnelles avec contraintes non-linéaires. Nous proposons une combinaison RB-EIM pour traiter la contrainte. En outre, nous construisons une base réduite pour les multiplicateurs de Lagrange via un algorithme hiérarchique qui conserve la positivité des vecteurs de cette base. Nous appliquons cette stratégie aux problèmes de contact élastique sans frottement pour les maillages non-coïncidents. La troisième contribution concerne la réduction de modèles avec assimilation de données. Une méthode dédiée a été introduite dans la littérature pour combiner un modèle numérique avec des mesures expérimentales. Nous élargissons son cadre d'application aux problèmes instationnaires en exploitant la méthode POD-greedy afin de construire des espaces réduits pour tout le transitoire temporel. Enfin, nous proposons un nouvel algorithme qui produit des espaces réduits plus représentatifs de la solution recherchée tout en minimisant le nombre de mesures nécessaires pour le problème réduit final.
This thesis introduces three new developments of the reduced basis method (RB) and the empirical interpolation method (EIM) for nonlinear problems. The first contribution is a new methodology, the Progressive RB-EIM (PREIM) which aims at reducing the cost of the phase during which the reduced model is constructed without compromising the accuracy of the final RB approximation. The idea is to gradually enrich the EIM approximation and the RB space, in contrast to the standard approach where both constructions are separate. The second contribution is related to the RB for variational inequalities with nonlinear constraints. We employ an RB-EIM combination to treat the nonlinear constraint. Also, we build a reduced basis for the Lagrange multipliers via a hierarchical algorithm that preserves the non-negativity of the basis vectors. We apply this strategy to elastic frictionless contact for non-matching meshes. Finally, the third contribution focuses on model reduction with data assimilation. A dedicated method has been introduced in the literature so as to combine numerical models with experimental measurements. We extend the method to a time-dependent framework using a POD-greedy algorithm in order to build accurate reduced spaces for all the time steps. Besides, we devise a new algorithm that produces better reduced spaces while minimizing the number of measurements required for the final reduced problem
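The snapshot-based construction of reduced spaces that POD-greedy and related RB methods build on can be illustrated compactly: assemble parametrized solution snapshots, extract an orthonormal POD basis from the SVD, and measure the projection error. The parametrized "solution" family below is a made-up stand-in, chosen so that it spans exactly a 3-dimensional space.

```python
import numpy as np

# Snapshots of a parametrized field u(x; mu); the family is a linear
# combination of three fixed spatial modes, so its exact rank is 3.
x = np.linspace(0.0, 1.0, 100)
snapshots = np.column_stack(
    [mu * np.sin(np.pi * x) + mu**2 * np.cos(np.pi * x) + np.sin(mu) * x**2
     for mu in np.linspace(1.0, 3.0, 20)]
)

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :3]                                  # 3-mode POD basis

proj = V @ (V.T @ snapshots)                  # orthogonal projection onto span(V)
err = np.linalg.norm(snapshots - proj) / np.linalg.norm(snapshots)
print(err < 1e-10)  # → True: three modes capture the whole family
```

A greedy variant would instead add, at each step, the snapshot worst approximated by the current reduced space, which is what makes POD-greedy suitable for time-dependent problems.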
Gli stili APA, Harvard, Vancouver, ISO e altri
19

Elhawary, Mohamed. "Apprentissage profond informé par la physique pour les écoulements complexes". Electronic Thesis or Diss., Paris, ENSAM, 2024. http://www.theses.fr/2024ENAME068.

Testo completo
Abstract (sommario):
Ce travail de doctorat étudie deux problèmes spécifiques concernant les turbomachines en utilisant des algorithmes d'apprentissage automatique. Le premier se concentre sur un compresseur axial, en abordant les problèmes de décrochage tournant, qui sont des phénomènes instables limitant la plage de fonctionnement des compresseurs. Les avancées récentes comprennent le développement de techniques de contrôle d'écoulement, telles que des jets au niveau du carter et du bord d’attaque du rotor, qui ont montré un potentiel pour étendre les plages de fonctionnement des compresseurs. Cependant, l’optimisation de ces stratégies de contrôle représente un défi en raison du grand nombre de paramètres et de configurations, y compris le nombre de jets, la vitesse d’injection et l’angle d’injection dans le cadre fixe. Cela soulève la question suivante : les algorithmes d'apprentissage automatique peuvent-ils aider à explorer ce vaste espace de paramètres et à optimiser la stratégie de contrôle ? À cette fin, une base de données complète des résultats expérimentaux issus de divers paramètres de contrôle et évaluations des performances du compresseur sur un compresseur axial a été utilisée, avec des tests effectués sur le banc d'essai CME2 au laboratoire LMFL. Le deuxième problème examine un diffuseur lisse radial, un composant statorique annulaire positionné en aval du rotor dans les pompes et compresseurs radiaux. Son rôle principal est de décélérer le fluide tout en augmentant la pression statique et l'enthalpie. Malgré sa fonction apparemment simple, prédire le comportement de l'écoulement à l'intérieur du diffuseur est assez difficile en raison du manque de guidage du fluide, de la structure complexe du flux de jet à l'entrée, des instabilités d'écoulement et de la nature tridimensionnelle du flux. Cela mène à la question suivante : les algorithmes d'apprentissage automatique peuvent-ils prédire efficacement cet écoulement ?
Pour cette analyse, nous utilisons une base de données composée de simulations numériques (URANS) réalisées sur une géométrie de pompe centrifuge effectuées au laboratoire LMFL. Nous avons employé deux approches d'apprentissage automatique pour étudier ces sujets distincts liés aux dispositifs de turbomachinerie. La première approche utilise des réseaux de neurones (NN) et des algorithmes génétiques (GA) pour explorer des stratégies de contrôle actif du flux dans un compresseur axial. La deuxième approche applique des réseaux de neurones informés par la physique (PINN) pour modéliser un écoulement turbulent en 2D dans le diffuseur d'une pompe radiale
This PhD work investigates two specific problems concerning turbomachinery using machine learning algorithms. The first focuses on the axial flow compressor, addressing the issues of rotating stall and surge, which are unstable phenomena that limit the operational range of compressors. Recent advancements include the development of flow control techniques, such as jets at the casing and leading edge of the rotor, which have shown promise in extending compressor operating ranges. However, optimizing these control strategies poses a challenge due to the large number of parameters and configurations, including the number of jets, the injection velocity, and the injection angle in the fixed frame. This raises the question: can ML algorithms assist in exploring this extensive parameter space and optimizing the control strategy? To this end, a comprehensive database of experimental results from various control parameters and compressor performance evaluations on an axial flow compressor has been utilized, with tests conducted on the CME2 test bench at the LMFL laboratory. The second problem examines the radial vaneless diffuser, an annular stator component positioned downstream of the rotor in radial pumps and compressors. Its primary role is to decelerate the fluid while increasing static pressure and enthalpy. Despite its seemingly straightforward function, predicting the flow behaviour within the diffuser is quite challenging due to the lack of fluid guidance, the complex jet wake flow structure at the inlet, flow instabilities, and the three-dimensional nature of the flow. This leads to the inquiry: can ML algorithms effectively predict this flow? For this analysis, we utilize a database consisting of numerical simulations (URANS) obtained on a radial flow pump geometry performed at the LMFL laboratory. We employed two machine learning approaches to investigate these distinct topics related to turbomachinery devices.
The first approach utilizes Neural Networks (NNs) and Genetic Algorithms (GAs) to explore active flow control strategies in an axial compressor. The second approach applies Physics-Informed Neural Networks (PINNs) to model 2D turbulent flow in the vaneless diffuser of a radial pump
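The GA side of the first approach can be sketched in a few lines: a genetic algorithm searches a control-parameter space (number of jets, injection velocity, injection angle) to maximize a performance objective. The objective below is a made-up quadratic stand-in for a trained surrogate model, and the bounds and GA settings are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def surrogate(p):
    # Hypothetical performance model with optimum at (12 jets, 0.6, 15 deg).
    n_jets, velocity, angle = p
    return -(n_jets - 12.0)**2 - 50.0 * (velocity - 0.6)**2 - (angle - 15.0)**2

lo = np.array([1.0, 0.0, 0.0])          # parameter bounds (assumed)
hi = np.array([20.0, 1.0, 90.0])
pop = rng.uniform(lo, hi, size=(40, 3))

for _ in range(60):
    fitness = np.array([surrogate(p) for p in pop])
    parents = pop[np.argsort(fitness)[-20:]]                 # elitist selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        child = np.where(rng.random(3) < 0.5, a, b)          # uniform crossover
        child += rng.normal(0.0, 0.05, 3) * (hi - lo)        # Gaussian mutation
        children.append(np.clip(child, lo, hi))
    pop = np.vstack([parents, children])

best = pop[np.argmax([surrogate(p) for p in pop])]
print(best)  # should approach the optimum (12, 0.6, 15)
```

In the thesis's setting the surrogate would be a neural network trained on the experimental database, and the GA would query it instead of an analytic function.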
Gli stili APA, Harvard, Vancouver, ISO e altri
20

Hammond, Janelle K. "Méthodes des bases réduites pour la modélisation de la qualité de l'air urbaine". Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1230/document.

Testo completo
Abstract (sommario):
L'objectif principal de cette thèse est le développement d'outils numériques peu coûteux pour la cartographie de concentrations de polluants à partir de mesures et de modèles déterministes avancés. Le développement mondial et l'urbanisation des populations génèrent une hausse d'émissions et d'expositions. Afin d'estimer les expositions individuelles et d'évaluer leur association à des pathologies diverses, les campagnes de mesure de qualité de l'air, et des études épidémiologiques sur les effets de santé de la pollution sont devenues plus courantes. Cependant, les concentrations de pollution de l'air sont très variables en temps et en espace. La sensibilité et la précision de ces études sont souvent détériorées par de mauvais classements des expositions dus aux estimations grossières des expositions individuelles. Les méthodes d'assimilation de données intègrent des données de mesures et des modèles mathématiques afin de mieux approximer le champ de concentration. Quand ces méthodes sont basées sur un modèle de qualité de l'air (AQM) déterministe avancé, elles sont capables de fournir des approximations détaillées et de petite échelle. Ces informations précises permettront de meilleures estimations d'exposition. Néanmoins, ces méthodes sont souvent très coûteuses. Elles nécessitent la résolution à plusieurs reprises du modèle, qui peut être coûteux lui-même. Dans ce travail nous enquêtons sur la combinaison des méthodes des bases réduites (RB) et d'assimilation de données pour des AQM avancés à l'échelle urbaine. Nous souhaitons diminuer le coût de résolution en exploitant les RB, et incorporer des données de mesure afin d'améliorer la qualité de la solution. On étend la méthode de Parameterized-Background Data-Weak (PBDW) pour des AQMs basés sur la physique. Cette méthode est capable d'estimer de façon rapide et "online" des concentrations de polluants à l'échelle du quartier.
Elle se sert des AQMs disponibles dans une procédure non intrusive et efficace par rapport aux temps de calculs pour réduire le coût de résolution par des centaines de fois. Les résultats de PBDW sont comparés à la méthode d'interpolation empirique généralisée (GEIM) et à une méthode inverse usuelle, la méthode adjointe, afin de mesurer l'efficacité de la PBDW. Cette comparaison montre la possibilité d'augmenter la précision de la solution, et d'une grande réduction en temps de calcul par rapport à des méthodes classiques. Dans nos applications sur un modèle imparfait, l'étude a fourni des estimations d'état avec erreur d'approximation de moins de 10% presque partout. Les résultats se montrent prometteurs pour la reconstruction en temps réel de champs de pollution sur de grands domaines par la PBDW.
The principal objective of this thesis is the development of low-cost numerical tools for spatial mapping of pollutant concentrations from field observations and advanced deterministic models. With increased pollutant emissions and exposure due to mass urbanization and development worldwide, air quality measurement campaigns and epidemiology studies of the association between air pollution and adverse health effects have become increasingly common. However, as air pollution concentrations are highly variable spatially and temporally, the sensitivity and accuracy of these epidemiology studies is often deteriorated by exposure misclassification due to poor estimates of individual exposures. Data assimilation methods incorporate available measurement data and mathematical models to provide improved approximations of the concentration. These methods, when based on advanced deterministic air quality models (AQMs), can provide spatially-rich small-scale approximations and enable better estimates of effects and exposures. However, these methods can be computationally expensive. They require repeated solution of the model, which could itself be costly. In this work we investigate a combined reduced basis (RB) data assimilation method for use with advanced AQMs on urban scales. We want to diminish the cost of resolution, using RB arguments, and incorporate measurement data to improve the quality of the solution. We extend the Parameterized-Background Data-Weak (PBDW) method to physically-based AQMs. This method can rapidly estimate "online" pollutant concentrations at urban scale, using available AQMs in a non-intrusive and computationally efficient manner, reducing computation times by factors up to hundreds. We apply this method in case studies representing urban residential pollution of PM2.5, and we study the stability of the method depending on the placement of air quality sensors.
Results from the PBDW are compared to the Generalized Empirical Interpolation Method (GEIM) and a standard inverse problem, the adjoint method, in order to measure the efficiency of the method. This comparison shows possible improvement in precision and great improvement in computation cost with respect to classical methods. We find that the PBDW method shows promise for the real-time reconstruction of a pollution field in large-scale problems, providing state estimation with approximation error generally under 10% when applied to an imperfect model.
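The PBDW state estimate itself reduces to a small linear-algebra problem, which can be sketched as follows. This is a Euclidean toy version under stated assumptions (grid, background modes, sensor locations, and truth field are all illustrative; the actual method is posed in a suitable Hilbert space): the estimate combines a model-based background space Z with M sensors by solving a saddle-point system.

```python
import numpy as np

n, N, M = 200, 3, 6                  # grid size, background dim, sensors
x = np.linspace(0.0, 1.0, n)
Z = np.column_stack([np.sin((k + 1) * np.pi * x) for k in range(N)])

# Pointwise sensors; columns of Q are the (Euclidean) Riesz representers
# of the observation functionals.
idx = np.array([20, 60, 100, 140, 170, 190])
Q = np.zeros((n, M))
Q[idx, np.arange(M)] = 1.0

u_true = Z @ np.array([1.0, -0.5, 0.25])   # truth lies in the background space
y = Q.T @ u_true                           # sensor readings

# PBDW saddle-point system: the update coefficients (in span(Q)) and the
# background coefficients (in span(Z)) are solved for jointly.
A = np.block([[Q.T @ Q, Q.T @ Z],
              [Z.T @ Q, np.zeros((N, N))]])
sol = np.linalg.solve(A, np.concatenate([y, np.zeros(N)]))
u_est = Q @ sol[:M] + Z @ sol[M:]

print(np.allclose(u_est, u_true))          # → True: exact recovery for a perfect model
```

When the background model is imperfect, the update component becomes nonzero and corrects the background in the directions observed by the sensors, which is the regime the thesis's case studies exercise.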
Gli stili APA, Harvard, Vancouver, ISO e altri
21

McMullen, Ryan Michael. "Aspects of Reduced-Order Modeling of Turbulent Channel Flows: From Linear Mechanisms to Data-Driven Approaches". Thesis, 2020. https://thesis.library.caltech.edu/13730/2/mcmullen_thesis_submitted.pdf.

Testo completo
Abstract (sommario):

This thesis concerns three key aspects of reduced-order modeling for turbulent shear flows. They are linear mechanisms, nonlinear interactions, and data-driven techniques. Each aspect is explored by way of example through analysis of three different problems relevant to the broad area of turbulent channel flow.

First, linear analyses are used to both describe and better understand the dominant flow structures in elastoinertial turbulence of dilute polymer solutions. It is demonstrated that the most-amplified mode predicted by resolvent analysis (McKeon and Sharma, 2010) strongly resembles these features. Then, the origin of these structures is investigated, and it is shown that they are likely linked to the classical Tollmien-Schlichting waves.

Second, resolvent analysis is again utilized to investigate nonlinear interactions in Newtonian turbulence. An alternative decomposition of the resolvent operator into Orr-Sommerfeld and Squire families (Rosenberg and McKeon, 2019b) enables a highly accurate low-order representation of the second-order turbulence statistics. The reason for its excellent performance is argued to result from the fact that the decomposition enables a competition mechanism between the Orr-Sommerfeld and Squire vorticity responses. This insight is then leveraged to make predictions about how resolvent mode weights belonging to several special classes scale with increasing Reynolds number.

The final application concerns special solutions of the Navier-Stokes equations known as exact coherent states. Specifically, we detail a proof of concept for a data-driven method centered around a neural network to generate good initial guesses for upper-branch equilibria in Couette flow. It is demonstrated that the neural network is capable of producing upper-branch solution predictions that successfully converge to numerical solutions of the governing equations over a limited range of Reynolds numbers. These converged solutions are then analyzed, with a particular emphasis on symmetries. Interestingly, they do not share any symmetries with the known equilibria used to train the network. The implications of this finding, as well as broader outlook for the scope of the proposed method, are discussed.
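The resolvent machinery used throughout this thesis can be illustrated on a minimal example: for a stable linear input-output system du/dt = A u + f, the SVD of the resolvent (i·omega·I - A)^{-1} ranks forcing directions by how strongly they are amplified at frequency omega. The 3x3 operator below is an arbitrary stable, non-normal toy, not a discretized Navier-Stokes operator.

```python
import numpy as np

A = np.array([[-0.1,  1.0,  0.0],
              [ 0.0, -0.2,  1.0],
              [ 0.0,  0.0, -0.3]])      # stable but non-normal: transient amplification
omega = 0.5

R = np.linalg.inv(1j * omega * np.eye(3) - A)   # resolvent at frequency omega
U, s, Vh = np.linalg.svd(R)

# s[0] is the largest gain; Vh[0].conj() is the optimal forcing direction and
# U[:, 0] the corresponding most-amplified response direction.
print(s[0] > s[1] > s[2] > 0.0)                 # → True: gains come out ordered
```

The non-normality of A makes the leading gain much larger than any eigenvalue-based estimate would suggest, which is the mechanism resolvent analysis is designed to expose.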

Gli stili APA, Harvard, Vancouver, ISO e altri
22

DI, ROCCO FEDERICO. "Predictive modeling analysis of a wet cooling tower - Adjoint sensitivity analysis, uncertainty quantification, data assimilation, model calibration, best-estimate predictions with reduced uncertainties". Doctoral thesis, 2018. http://hdl.handle.net/11573/1091474.

Testo completo
Abstract (sommario):
It is common practice, in the modern era, to base the process of understanding and eventually predicting the behavior of complex physical systems upon simulating operational situations through system codes. In order to provide a more thorough and accurate comprehension of the system dynamics, these numerical simulations are often and preferably flanked by experimental measurements. In practice, repeated measurements of the same physical quantity produce values differing from each other and from the true value of the measured quantity, which remains unknown; the errors leading to this variation in results can be of methodological, instrumental or personal nature. It is not feasible to obtain experimental results devoid of uncertainty, and this means that a range of values possibly representative of the true value always exists around any value stemming from experimental measurements. A quantification of this range is critical to any practical application of the measured data, whose nominal measured values are insufficient for applications unless the quantitative uncertainties associated to the experimental data are also provided. Not even numerical models can reveal the true value of the investigated quantity, for two reasons: first, any numerical model is imperfect, meaning that it constitutes an inevitable simplification of the real world system it aims to represent; second, a hypothetically perfect model would still have uncertain values for its model parameters - such as initial conditions, boundary conditions and material properties - and the stemming results would therefore still differ from the true value and from the experimental measurements of the quantity. With both computational and experimental results at hand, the final aim is to obtain a probabilistic description of possible future outcomes based on all recognized errors and uncertainties.
This operation falls within the scope of predictive modeling procedures, which rely on three key elements: model calibration, model extrapolation and estimation of the validation domain. The first step of the procedure involves the adjustment of the numerical model parameters accordingly to the experimental results; this aim is achieved by integrating computed and measured data, and the associated procedure is known as model calibration. In order for this operation to be properly executed, all errors and uncertainties at any level of the modeling path leading to numerical results have to be identified and characterized, including errors and uncertainties on the model parameters, numerical discretization errors and possible incomplete knowledge of the physical process being modeled. Calibration of models is performed through the mathematical framework provided by data assimilation procedures; these procedures strongly rely on sensitivity analysis, and for this reason are often cumbersome in terms of computational load. Generally speaking, sensitivity analyses can be conducted with two different techniques, respectively known as direct or forward methods and adjoint methods. The forward methods calculate the finite difference of a small perturbation in a parameter by means of differences between two independent calculations, and are advantageous only for systems in which the number of responses exceeds the number of model parameters; unfortunately this is seldom the case in real large-scale systems. In this work, this problem has been overcome by using the adjoint sensitivity analysis methodology (ASAM) by Cacuci: as opposed to forward methods, the ASAM is most efficient for systems in which the number of parameters is greater than the number of responses, such as the model investigated in this thesis and many others currently used for numerical simulations of industrial systems. 
This methodology has been recently extended to second-order sensitivities (2nd-ASAM) by Cacuci for linear and nonlinear systems, for computing exactly and efficiently the second-order functional derivatives of system responses to the system model parameters. Model extrapolation addresses the prediction of uncertainty in new environments or conditions of interest, including both untested parts of the parameter space and higher levels of system complexity in the validation hierarchy. Estimation of the validation domain addresses the estimation of contours of constant uncertainty in the high-dimensional space that characterizes the application of interest. The present work focuses on performing sensitivity and uncertainty analysis, data assimilation, model calibration, model validation and best-estimate predictions with reduced uncertainties on a counter-flow, wet cooling tower model developed by Savannah River National Laboratory. A cooling tower generally discharges waste heat produced by an industrial plant to the external environment. The amount of thermal energy discharged into the environment can be determined by measurements of quantities representing the external conditions, such as outlet air temperature, outlet water temperature, and outlet air relative humidity, in conjunction with computational models that simulate numerically the cooling tower behavior. Variations in the model parameters (e.g., material properties, model correlations, boundary conditions) cause variations in the model response. The functional derivatives of the model response with respect to the model parameters (called “sensitivities”) are needed to quantify such response variations. In this work, the comprehensive adjoint sensitivity analysis methodology for nonlinear systems is applied to compute the cooling tower response sensitivities to all of its model parameters.
Moreover, the utilization of the adjoint state functions allows the simultaneous computation of the sensitivities of each model response to all of the 47 model parameters by running a single adjoint model computation; obtaining the same results making use of finite-difference forward methods would have required 47 separate computations, with the relevant disadvantage of leading to approximate values of the sensitivities, as opposed to the exact ones yielded by applying the adjoint procedure. In addition, the forward cooling tower model presents nonlinearity in its state functions; the adjoint sensitivity model possesses the relevant feature of being instead linear in the adjoint state functions, whose one-to-one correspondence to the forward state functions is essential for the calculation of the adjoint sensitivities. Sensitivities are subsequently used in this work to realize many operations, such as: (i) ranking the model parameters according to the magnitude of their contribution to response uncertainties; (ii) determining the propagation of uncertainties, in the form of variances and covariances, of the parameters in the model in order to quantify the uncertainties of the model responses; (iii) allowing predictive modeling operations, such as experimental data assimilation and model parameter calibration, with the aim to yield best-estimate predicted nominal values both for model parameters and responses, with correspondingly reduced values for the associated predicted uncertainties.
The methodologies are part of two distinct mathematical frameworks: the Adjoint Sensitivity Analysis Methodology (ASAM) is used to compute the adjoint sensitivities of the model quantities of interest (called “model responses”) with respect to the model parameters; the Predictive Modeling of Coupled Multi-Physics Systems (PM_CMPS) simultaneously combines all of the available computed information and experimentally measured data to yield optimal values of the system parameters and responses, while simultaneously reducing the corresponding uncertainties in parameters and responses. In the present work, a relevantly more efficient numerical method has been applied to the cooling tower model analyzed, leading to the accurate computation of the steady-state distributions for the following quantities of interest: (i) the water mass flow rates at the exit of each control volume along the height of the fill section of the cooling tower; (ii) the water temperatures at the exit of each control volume along the height of the fill section of the cooling tower; (iii) the air temperatures at the exit of each control volume along the height of the fill section of the cooling tower; (iv) the humidity ratios at the exit of each control volume along the height of the fill section of the cooling tower; and (v) the air mass flow rates at the exit of the cooling tower. The application of the numerical method selected eliminates any convergence issue, yielding accurate results for all the control volumes of the cooling tower and for all the data set of interest. This work is organized as follows: Chapter 2 provides a description of the physical system simulated, along with presenting the mathematical model used in this work for simulating a counter-flow cooling tower operating under saturated and unsaturated conditions. The three cases analyzed in this work and their corresponding sets of governing equations are detailed in this chapter. 
Chapter 3 presents the development of the adjoint sensitivity model for the counter-flow cooling tower operating under saturated and unsaturated conditions using the general adjoint sensitivity analysis methodology (ASAM) for nonlinear systems. Using a single adjoint computation enables the efficient and exact computation of the sensitivities (functional derivatives) of the model responses to all of the model parameters, thus alleviating the need for repeated forward model computations in conjunction with finite difference methods. The mathematical framework of the “predictive modeling for coupled multi-physics systems” (PM_CMPS) is also detailed. Chapter 4 presents the results of applying the ASAM and PM_CMPS methodologies to all the cases listed in Chapter 2: after being calculated, sensitivities are subsequently used for ranking the contributions of the single model parameters to the model response variations, for computing the propagated uncertainties of the model responses, and for the application of the PM_CMPS methodology, aimed at yielding best-estimate predicted nominal values and uncertainties for model parameters and responses. This methodology simultaneously combines all of the available computed information and experimentally measured data for the counter-flow cooling tower operating under saturated and unsaturated conditions. The best-estimate results predicted by the PM_CMPS methodology reveal that the predicted values of the standard deviations for all the model responses, even those for which no experimental data have been recorded, are smaller than either the computed or the measured standard deviations for the respective responses. This work concludes with Chapter 5 by discussing the significance of these predicted results and by indicating possible further generalizations of the adjoint sensitivity analysis and PM_CMPS methodologies.
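The adjoint-versus-forward economics described in this abstract can be shown on a schematic steady linear model K(p) u = f with scalar response r = c^T u: one adjoint solve yields dr/dp for every parameter, whereas finite differences need one perturbed forward solve per parameter. The model below is an arbitrary toy, not the cooling-tower equations.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
p = rng.uniform(1.0, 2.0, size=n)                 # model parameters
f = rng.standard_normal(n)                        # fixed source term
c = rng.standard_normal(n)                        # response functional

def K(p):
    # Parameter-dependent system matrix (diagonally dominant, invertible).
    return np.diag(p) + 0.1 * np.ones((n, n))

u = np.linalg.solve(K(p), f)                      # single forward solve
lam = np.linalg.solve(K(p).T, c)                  # single adjoint solve

# dK/dp_i has a single nonzero entry (i, i) equal to 1, so the adjoint
# formula dr/dp_i = -lam^T (dK/dp_i) u collapses to -lam_i * u_i.
grad_adjoint = -lam * u

# Check against forward finite differences (n extra solves).
eps = 1e-6
grad_fd = np.empty(n)
for i in range(n):
    dp = p.copy(); dp[i] += eps
    grad_fd[i] = (c @ np.linalg.solve(K(dp), f) - c @ u) / eps

print(np.allclose(grad_adjoint, grad_fd, atol=1e-4))   # → True
```

With 47 parameters, as in the cooling-tower model, the adjoint route replaces 47 approximate perturbed solves with one exact adjoint solve, which is precisely the advantage the abstract emphasizes.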
23

Della, Santa Francesco. "Data-Driven Deep Learning Methods for Physically-Based Simulations". Doctoral thesis, 2021. http://hdl.handle.net/11583/2971158.

Abstract (summary):
In this doctoral thesis, we study and analyze Deep Learning applications for learning the results of physically-based simulations. The first type of application focuses on underground flow analysis problems modeled through Discrete Fracture Networks, training Deep Learning models as reduced models for Uncertainty Quantification. In particular, we look for trained Neural Networks able to predict the outflowing fluxes of a Discrete Fracture Network model. These Neural Networks are also exploited to define a new backbone identification method for a network of underground fractures. The second type of application deals with parametric design optimization processes; specifically, we train Deep Learning models to speed up the objective function calls. These applications are described after a novel and sound formalization of the main concepts of supervised Machine Learning and Deep Neural Networks (the quintessence of Deep Learning). A numerical approach characterizes this new formalization of the learning problem; furthermore, we describe how learning techniques evolved historically to reach the current state of the art. We test different kinds of Neural Network architectures in this thesis, such as multitask Neural Networks and residual Neural Networks. Moreover, we develop two new Neural Network layers: the Graph Informed layer and the Discontinuous layer. The first is defined to embed Neural Network architectures with graphs and improve prediction abilities on graph-structured data (such as Discrete Fracture Network models). The second introduces learnable discontinuities into Neural Networks, in order to approximate discontinuous functions and identify discontinuity interfaces. The last chapter of this thesis is dedicated to the definition and description of the Discontinuous layers.
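The abstract describes a layer that embeds graph structure into a Neural Network. The sketch below is not the thesis's Graph Informed layer: it is a generic, assumption-laden illustration of the underlying idea, a dense layer whose weight matrix is masked by the graph adjacency (plus self-loops) so that each output node aggregates information only from its neighbors. The class name and initialization scheme are invented for the example.

```python
import numpy as np

# Hypothetical sketch of a graph-aware layer (NOT the thesis's
# Graph Informed layer): a dense layer whose weights are masked
# by the adjacency matrix, so non-adjacent nodes cannot interact.

class GraphMaskedLayer:
    def __init__(self, adjacency, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        n = adjacency.shape[0]
        self.mask = adjacency + np.eye(n)          # allow self-connections
        self.weights = rng.standard_normal((n, n)) * 0.1
        self.bias = np.zeros(n)

    def forward(self, x):
        # Only edges present in the graph contribute to the output.
        return np.tanh(x @ (self.weights * self.mask) + self.bias)

# Toy 4-node path graph 0-1-2-3 (stand-in for a fracture network).
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

layer = GraphMaskedLayer(A)
out = layer.forward(np.ones(4))
print(out.shape)  # one activation per graph node
```

Masking the weights, rather than rebuilding the architecture per graph, is one simple way to bias a layer toward graph-structured data such as fracture networks.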
