
Dissertations / Theses on the topic 'Multiscale optimization'

Consult the top 43 dissertations / theses for your research on the topic 'Multiscale optimization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Lalanne, Jean-Benoît. "Multiscale dissection of bacterial proteome optimization." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/130217.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, May, 2020
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 315-348).
The quantitative composition of proteomes results from biophysical and biochemical selective pressures acting under system-level resource allocation constraints. The nature and strength of these evolutionary driving forces remain obscure. Through the development of analytical tools and precision measurement platforms spanning biological scales, we found evidence of optimization in bacterial gene expression programs. We compared protein synthesis rates across distant lineages and found tight conservation of in-pathway enzyme expression stoichiometry, suggesting generic selective pressures on expression setpoints. Beyond conservation, we used high-resolution transcriptomics to identify numerous examples of stoichiometry-preserving compensation of cis-elements in pathway operons. Genome-wide mapping of transcription termination sites also led to the discovery of a phylogenetically widespread mode of bacterial gene expression, 'runaway transcription', whereby RNA polymerases are functionally uncoupled from pioneering ribosomes on mRNAs. To delineate the biophysical rationales underlying these pressures, we formulated a parsimonious ribosome allocation model capturing the trade-off between reaction flux and protein production cost. The model correctly predicts the expression hierarchy of key translation factors. We then directly measured the quantitative relationship between expression and fitness for specific translation factors in the Gram-positive species Bacillus subtilis. These precision measurements confirmed that endogenous expression maximizes growth rate. Idiosyncratic transcriptional changes in regulons were, however, observed away from endogenous expression. The resulting physiological burdens sharpened the fitness landscapes. Spurious system-level responses to targeted expression perturbations, called 'regulatory entrenchment', thus exacerbate the requirement for precisely set expression stoichiometry.
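The flux-versus-cost trade-off invoked above is easy to illustrate with a toy allocation model. The sketch below is not the thesis's model: the saturating-benefit form, the linear cost term, and all parameter values (vmax, K, cost) are assumptions chosen only to show that an interior expression optimum emerges.

```python
import numpy as np

def growth_rate(phi, vmax=1.0, K=0.1, cost=0.5):
    # phi: fraction of the proteome allocated to a translation factor.
    flux = vmax * phi / (K + phi)   # saturating benefit of more expression
    return flux - cost * phi        # minus a linear production cost

phi = np.linspace(0.0, 1.0, 10001)
g = growth_rate(phi)
print(f"optimal allocation phi* = {phi[np.argmax(g)]:.3f}")
# The optimum is interior: past it, extra expression costs more than it
# yields, which is the qualitative logic behind 'endogenous expression
# maximizes growth rate'.
```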
by Jean-Benoît Lalanne.
Ph.D., Massachusetts Institute of Technology, Department of Physics
2

Fitriani. "Multiscale Dynamic Time and Space Warping." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45279.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008.
Includes bibliographical references (p. 149-151).
Dynamic Time and Space Warping (DTSW) is a technique used in video matching applications to find the optimal alignment between two videos. Because DTSW requires O(N⁴) time and space, it is only suitable for short, coarse-resolution videos. In this thesis, we introduce Multiscale DTSW: a modification of DTSW that has linear time and space complexity (O(N)) with good accuracy. The first step in Multiscale DTSW is to apply the DTSW algorithm to coarse-resolution input videos. In the next step, Multiscale DTSW projects the solution from coarse resolution to finer resolution. A solution for the finer resolution can be found efficiently by refining the projected solution. Multiscale DTSW then repeatedly projects a solution from the current resolution to a finer resolution and refines it until the desired resolution is reached. I have explored the linear time and space complexity (O(N)) of Multiscale DTSW both theoretically and empirically, and have shown that Multiscale DTSW achieves almost the same accuracy as DTSW. Because of its computational efficiency, Multiscale DTSW is suitable for video detection and video classification applications. We have developed a Multiscale-DTSW-based video classification framework that achieves the same accuracy as a DTSW-based video classification framework with a greater than 50 percent reduction in execution time. We have also developed a video detection application, based on the Dynamic Space Warping (DSW) and Multiscale DTSW methods, that is able to detect a query video inside a target video in a short time.
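As an illustration of the coarse-to-fine idea described in this abstract, here is a minimal 1-D sketch (plain dynamic time warping rather than the thesis's time-and-space variant for video). The function names, the pairwise-averaging coarsening, and the refinement radius are assumptions in the spirit of the projection-and-refine loop, not the thesis's implementation.

```python
import numpy as np

def dtw(x, y, window=None):
    # Classic DTW restricted to an optional set of allowed cells.
    n, m = len(x), len(y)
    if window is None:
        window = [(i, j) for i in range(n) for j in range(m)]
    D = {(-1, -1): 0.0}  # virtual start cell
    for i, j in sorted(window):
        D[(i, j)] = abs(x[i] - y[j]) + min(
            D.get((i - 1, j), np.inf),
            D.get((i, j - 1), np.inf),
            D.get((i - 1, j - 1), np.inf))
    # Backtrack the optimal alignment path from the end cell.
    path, cell = [], (n - 1, m - 1)
    while cell != (-1, -1):
        path.append(cell)
        i, j = cell
        cell = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)],
                   key=lambda c: D.get(c, np.inf))
    return path[::-1]

def expand_window(path, n, m, radius=1):
    # Project a coarse path onto the finer grid and widen it by `radius`.
    cells = set()
    for i, j in path:
        for a in range(2 * i - radius, 2 * i + radius + 2):
            for b in range(2 * j - radius, 2 * j + radius + 2):
                if 0 <= a < n and 0 <= b < m:
                    cells.add((a, b))
    return cells

def multiscale_dtw(x, y, min_size=8, radius=1):
    if len(x) <= min_size or len(y) <= min_size:
        return dtw(x, y)  # coarsest level: solve exactly
    # Coarsen by averaging adjacent samples, solve, then refine locally.
    xc = x[: len(x) // 2 * 2].reshape(-1, 2).mean(axis=1)
    yc = y[: len(y) // 2 * 2].reshape(-1, 2).mean(axis=1)
    coarse_path = multiscale_dtw(xc, yc, min_size, radius)
    return dtw(x, y, expand_window(coarse_path, len(x), len(y), radius))

t = np.linspace(0, 2 * np.pi, 64)
path = multiscale_dtw(np.sin(t), np.sin(t + 0.5))
print(len(path), path[0], path[-1])  # alignment from (0, 0) to (63, 63)
```

Because each refinement only searches a narrow corridor around the projected coarse path, the total work stays linear in the sequence length, which is the complexity argument made above.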
by Fitriani.
S.M.
3

Yourdkhani, Mostafa. "Multiscale modeling and optimization of seashell structure and material." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66991.

Abstract:
The vast majority of mollusks grow a hard shell for protection. Typical seashells are composed of two distinct layers, with an outer layer made of calcite (a hard but brittle material) and an inner layer made of a tough and ductile material called nacre. Nacre is a biocomposite material that consists of more than 95% tablet-shaped aragonite (CaCO3) with a soft organic material as the matrix. Although the brittle ceramic aragonite makes up a high volume fraction of nacre, nacre's mechanical properties are found to be surprisingly superior to those of its constituents. It has been suggested that calcite and nacre, two materials with distinct structures and properties, are arranged in an optimal fashion to defeat attacks from predators. This research aims at exploring this hypothesis by capturing the design rules of a gastropod seashell using multiscale modeling and optimization techniques. At the microscale, a representative volume element of the microstructure of nacre was used to formulate an analytical solution for the elastic modulus of nacre, and a multiaxial failure criterion as a function of the microstructure. At the macroscale, a two-layer finite element model of the seashell was developed to include shell thickness, curvature and calcite/nacre thickness ratio as geometric parameters. The maximum load that the shell can carry at its apex was obtained. A multiscale optimization approach was also employed to evaluate whether the natural seashell is optimally designed. Finally, actual penetration experiments were performed on red abalone shells to validate the results.
4

Umoh, Utibe Godwin. "Multiscale analysis of cohesive fluidization." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28988.

Abstract:
Fluidization of a granular assembly of solid particles is a process where particles are suspended by the upward flow of fluid through the bed. This process is important in industry, with a wide range of applications owing to the high mixing and mass transfer rates produced by the rapid movement of particles in the bed. The dynamics of fluidization depends heavily on particle-scale physics and the forces acting at the particle level. For particles with sizes below 100 μm and densities below 10³ kg/m³, interparticle forces such as cohesion become increasingly important to the fluidization phenomena observed, compared to larger particles whose behaviour is dominated by hydrodynamic forces. These smaller particles are increasingly in demand in industrial processes because of the larger surface area per unit volume obtained by decreasing the particle size. Decreasing particle size, however, increases the impact of the cohesive interparticle forces present between particles, thus altering fluidization phenomena. It is therefore necessary to understand better how these cohesive forces alter fluidization behaviour at both the particle and the bulk scale. This work begins with an experimental study of a fluidized bed using high-speed imaging. The applicability of particle image velocimetry to a dense bed is examined, with verification and validation studies showing that particle image velocimetry accurately captures averaged velocity profiles for particles at the front wall. A digital image analysis algorithm capable of accurately extracting particle solid-fraction data for a dense bed under non-optimal lighting conditions was also developed. Together, the two experimental techniques were used to extract averaged particle mass flux data capable of accurately capturing and probing fluidization phenomena in a dense fluidized bed. The simulation studies carried out for this work examine, using DEM-CFD simulations, the impact of cohesive forces introduced with a van der Waals cohesion model on phenomena observed at different length scales. Numerical simulations were run for Geldart A sized particles at different cohesion levels, represented by the Bond number, and at different inlet gas velocities encompassing the different fluidization regimes present. A stress analysis was used to examine the mechanical state of the expanded bed at different cohesion levels, with the vertical component of the total stress showing negative (tensile) stresses at the centre of the bed. Further analysis of the contact and cohesive components of the stress, together with a k-core and microstructural analysis focusing on the solid fraction and coordination number profiles, indicated that this negative total stress was caused by a decrease in the contact stress due to breakage of mechanical contacts as cohesive forces are introduced and increased. A pressure overshoot analysis was also conducted, with the magnitude of the overshoot seen during the pressure-drop analysis of a cohesive bed shown to be of equivalent magnitude to the gradient of the total negative stress profile. The inhomogeneous nature of the bed was probed, focusing on how introducing cohesion increases the degree of inhomogeneity present in the expanded bed and how local mesoscopic structures change with cohesion and gas velocity.
It was shown that increasing cohesion increases the degree of inhomogeneity in the bed as well as the degree of clustering between particles. A majority of particles were shown to be present in a single macroscopic cluster in the mechanical network, with distinct local mesoscopic structures forming within the macroscopic cluster. The cohesive bed also expanded as distinct dense regions with low mechanical-contact zones in between. A macroscopic cluster analysis showed that the majority of particles are in strong, enduring mechanical and cohesive contact. Increasing cohesive forces were shown not only to create a cohesive support network around the mechanical network but also to strengthen the mechanical contact network itself. The significance of the strong and weak mechanical and cohesive forces for fluidization phenomena was also examined, with the analysis showing that the weak mechanical forces act to support the strong mechanical force network. The cohesive force network, however, was non-coherent, with strong forces significantly greater than weak forces. Fluidization phenomena were shown to be driven by the magnitude of the strong cohesive forces set by the minimum particle cutoff distance. This calls into question the significance of the cohesive coordination number, which depends on the maximum cohesive cutoff; the value of the maximum cutoff was shown to be less significant, as no significant changes were observed in the stress and microstructure data as it was altered. Simulations with different ratios of cohesive and non-cohesive particles were also undertaken and showed that a disruption of the cohesive force network changes the stress state and microstructure of the bed, and hence the fluidization phenomena observed at all length scales. The nature of the strong cohesive force network thus drives the fluidization phenomena seen in the bed.
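For orientation, the Bond number used above is the ratio of the cohesive force to the particle weight. A back-of-the-envelope sketch follows, using one common sphere-sphere van der Waals estimate; the Hamaker constant, separation distance, particle size, and density are assumed typical values for a Geldart A powder, not values from the thesis.

```python
import numpy as np

# Illustrative values; A (Hamaker constant) and z0 (minimum separation)
# are assumed order-of-magnitude numbers, not taken from the thesis.
A, z0 = 1e-19, 4e-10             # J, m
R, rho, g = 35e-6, 1500.0, 9.81  # particle radius (m), density (kg/m^3), gravity

F_vdw = A * R / (12.0 * z0 ** 2)                 # sphere-sphere vdW estimate
weight = rho * (4.0 / 3.0) * np.pi * R ** 3 * g  # particle weight
print(f"Bond number Bo = {F_vdw / weight:.0f}")  # Bo >> 1: strongly cohesive
```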
5

Sorrentino, Luigi. "Simulation and optimization of crowd dynamics using a multiscale model." Doctoral thesis, Università degli Studi di Salerno, 2012. http://hdl.handle.net/10556/318.

Abstract:
2010 - 2011
In recent decades, the modeling of crowd motion and pedestrian flow has attracted the attention of applied mathematicians, because of an increasing number of applications in engineering and the social sciences dealing with this or similar complex systems, for design and optimization purposes. Crowds have caused many disasters in stadiums during major sporting events, such as the Hillsborough disaster of 15 April 1989 at Hillsborough football stadium in Sheffield, England, which resulted in the deaths of 96 people and injuries to 766 others, and remains the deadliest stadium-related disaster in British history and one of the worst international football accidents. Another example is the Heysel Stadium disaster of 29 May 1985, when escaping fans were pressed against a wall in the Heysel Stadium in Brussels, Belgium, as a result of rioting before the start of the 1985 European Cup Final between Liverpool of England and Juventus of Italy: thirty-nine Juventus fans died and 600 were injured. The case of the London Millennium Footbridge is well known: it was closed the very day of its opening due to macroscopic lateral oscillations of the structure developing while pedestrians crossed the bridge. This phenomenon renewed interest in the investigation of these issues by means of mathematical modeling techniques. Other examples are emergency situations in crowded areas such as airports or railway stations. In some cases, such as the pedestrian disaster at the Jamarat Bridge in Saudi Arabia, mathematical modeling and numerical simulation have already been successfully employed to study the dynamics of the flow of pilgrims, so as to highlight critical circumstances under which crowd accidents tend to occur and to suggest counter-measures to improve the safety of the event. In the existing literature on mathematical modeling of human crowds we can distinguish two approaches: microscopic and macroscopic models. In models at the microscopic scale, pedestrians are described individually in their motion by ordinary differential equations, and problems are usually set in two-dimensional domains delimiting the walking area under consideration, with obstacles within the domain and a target. The basic modeling framework relies on classical Newtonian laws for point masses. Models at the macroscopic scale use partial differential equations, describing the evolution in time and space of pedestrian density, supplemented either by suitable closure relations linking the velocity of the pedestrians to their density or by an analogous balance law for the momentum. Again, typical guidelines in devising this kind of model are the concepts of preferred direction of motion and discomfort at high densities. In the framework of scalar conservation laws, a macroscopic one-dimensional model has been proposed by Colombo and Rosini, resorting to ideas common in vehicular traffic modeling, with the specific aim of describing the transition from normal to panic conditions. Piccoli and Tosin propose to adopt a different macroscopic point of view, based on a measure-theoretic framework recently introduced by Canuto et al. for coordination (rendez-vous) problems of multi-agent systems. This approach consists in a discrete-time Eulerian macroscopic representation of the system via a family of measures which, pushed forward by some motion mappings, provide an estimate of the space occupancy by pedestrians at successive time steps.
From the modeling point of view, this setting is particularly suitable to treat nonlocal interactions among pedestrians, obstacles, and wall boundary conditions. A microscopic approach is advantageous when one wants to model differences among the individuals, random disturbances, or small environments. Moreover, it is the only reliable approach when one wants to track exactly the positions of a few walkers. On the other hand, it may not be convenient to use a microscopic approach to model pedestrian flow in large environments, due to the high computational effort required. A macroscopic approach may be preferable to address optimization problems and analytical issues, as well as to handle experimental data. Nonetheless, despite the fact that self-organization phenomena are often visible only in large crowds, they are a consequence of strategic behaviors developed by individual pedestrians. The two scales may reproduce the same features of the group behavior, thus providing a perfect matching between the results of the simulations for the microscopic and the macroscopic model in some test cases. This motivated the multiscale approach proposed by Cristiani, Piccoli and Tosin. Such an approach allows one to keep a macroscopic view without losing the right amount of 'granularity', which is crucial for the emergence of some self-organized patterns. Furthermore, the method allows one to introduce in a macroscopic (averaged) context some microscopic effects, such as random disturbances or differences among the individuals, in a fully justifiable manner from both the physical and the mathematical perspective. In the model, microscopic and macroscopic scales coexist and continuously share information on the overall dynamics. More precisely, the microscopic part tracks the trajectories of single pedestrians, and the macroscopic part the density of pedestrians, using the same evolution equation duly interpreted in the sense of measures. In this respect, the two scales are indivisible. Starting from the model of Cristiani, Piccoli and Tosin, we have implemented algorithms to simulate pedestrian motion toward a target in a bounded area with one or more obstacles inside. In this work, different scenarios have been analyzed in order to find the obstacle configuration which minimizes the pedestrians' average exit time. The optimization is achieved using two algorithms. The first is based on exhaustive exploration of all positions: the average exit time for all scenarios is computed and the best one is chosen. The second algorithm is of steepest descent type, in which the obstacle configuration corresponding to the minimum exit time is found by an iterative method. A variant of the algorithm has been introduced to obtain a more efficient procedure, which finds better solutions in fewer steps than the other algorithms. Finally, we performed further simulations in bounded domains such as a classical flat with five rooms and two exits, comparing the results of three different scenarios with changing positions of the exit doors. [edited by author]
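The two optimizers described above (exhaustive exploration versus steepest descent) can be sketched on a synthetic stand-in objective. The toy avg_exit_time function below replaces the pedestrian simulation entirely; its form and the rippled local minima are assumptions used only to show why a descent method can stall and why a variant is introduced.

```python
import numpy as np

def avg_exit_time(x):
    # Synthetic stand-in for the simulated average exit time with an
    # obstacle at lateral position x in [0, 1]; NOT the thesis's simulator.
    return 1.0 + 0.5 * (x - 0.62) ** 2 + 0.05 * np.cos(12 * np.pi * x)

xs = np.linspace(0.0, 1.0, 101)

# 1) Exhaustive exploration: evaluate every candidate position.
x_exh = xs[np.argmin([avg_exit_time(x) for x in xs])]

# 2) Steepest-descent-type search: move to the better neighbour until stuck.
i = 10                                   # arbitrary starting index
while True:
    j = min((j for j in (i - 1, i + 1) if 0 <= j < len(xs)),
            key=lambda j: avg_exit_time(xs[j]))
    if avg_exit_time(xs[j]) >= avg_exit_time(xs[i]):
        break                            # local minimum reached
    i = j

print(f"exhaustive: x* = {x_exh:.2f}, descent: x* = {xs[i]:.2f}")
# The descent can stall in a ripple-induced local minimum, which is why a
# more robust variant of the iterative method is useful.
```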
6

Parno, Matthew David. "A multiscale framework for Bayesian inference in elliptic problems." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/65322.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2011.
Page 118 blank. Cataloged from PDF version of thesis.
Includes bibliographical references (p. 112-117).
The Bayesian approach to inference problems provides a systematic way of updating prior knowledge with data. A likelihood function involving a forward model of the problem is used to incorporate data into a posterior distribution. The standard method of sampling this distribution is Markov chain Monte Carlo, which can become inefficient in high dimensions, wasting many evaluations of the likelihood function. In many applications the likelihood function involves the solution of a partial differential equation, so the large number of evaluations required by Markov chain Monte Carlo can quickly become computationally intractable. This work aims to reduce the computational cost of sampling the posterior by introducing a multiscale framework for inference problems involving elliptic forward problems. Through the construction of a low-dimensional prior on a coarse scale and the use of an iterative conditioning technique, the scales are decoupled and efficient inference can proceed. This work considers nonlinear mappings from a fine scale to a coarse scale based on the Multiscale Finite Element Method. Permeability characterization is the primary focus, but a discussion of other applications is also provided. After some theoretical justification, several test problems are shown that demonstrate the efficiency of the multiscale framework.
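A minimal random-walk Metropolis sketch shows why this sampling is costly when each likelihood evaluation hides a forward solve. The forward function below is a cheap stand-in, not an elliptic PDE solver, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(k):
    # Placeholder forward model: in the thesis this would be the
    # expensive elliptic PDE solve (e.g., for a permeability field).
    return np.array([np.exp(-k), k ** 2])

data = forward(1.3) + rng.normal(0.0, 0.05, size=2)  # synthetic observations
sigma, prior_mu, prior_sd = 0.05, 1.0, 1.0

def log_post(k):
    misfit = np.sum((forward(k) - data) ** 2) / (2 * sigma ** 2)
    prior = (k - prior_mu) ** 2 / (2 * prior_sd ** 2)
    return -misfit - prior

chain, k = [], 1.0
lp = log_post(k)
for _ in range(5000):               # every iteration pays one forward solve
    kp = k + 0.1 * rng.normal()     # random-walk proposal
    lpp = log_post(kp)
    if np.log(rng.uniform()) < lpp - lp:  # Metropolis accept/reject
        k, lp = kp, lpp
    chain.append(k)

print(f"posterior mean ~ {np.mean(chain[1000:]):.3f}")
```

The multiscale framework described above attacks exactly this cost: if most of the chain can run on a cheap coarse-scale model, far fewer fine-scale solves are wasted.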
by Matthew David Parno.
S.M.
7

Mejias, Tuni Jesus Alberto. "Multiscale approach applied to fires in tunnels: model optimization and development." Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2960751.

8

Chen, Minghan. "Stochastic Modeling and Simulation of Multiscale Biochemical Systems." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/90898.

Abstract:
Numerous challenges arise in modeling and simulation as biochemical networks are discovered with increasing complexities and unknown mechanisms. With the improvement in experimental techniques, biologists are able to quantify genes and proteins and their dynamics in a single cell, which calls for quantitative stochastic models for gene and protein networks at cellular levels that match well with the data and account for cellular noise. This dissertation studies a stochastic spatiotemporal model of the Caulobacter crescentus cell cycle. A two-dimensional model based on a Turing mechanism is investigated to illustrate the bipolar localization of the protein PopZ. However, stochastic simulations are often impeded by expensive computational cost for large and complex biochemical networks. The hybrid stochastic simulation algorithm is a combination of differential equations for traditional deterministic models and Gillespie's algorithm (SSA) for stochastic models. The hybrid method can significantly improve the efficiency of stochastic simulations for biochemical networks with multiscale features, which contain both species populations and reaction rates with widely varying magnitude. The populations of some reactant species might be driven negative if they are involved in both deterministic and stochastic systems. This dissertation investigates the negativity problem of the hybrid method, proposes several remedies, and tests them with several models including a realistic biological system. As a key factor that affects the quality of biological models, parameter estimation in stochastic models is challenging because the amount of empirical data must be large enough to obtain statistically valid parameter estimates. To optimize system parameters, a quasi-Newton algorithm for stochastic optimization (QNSTOP) was studied and applied to a stochastic budding yeast cell cycle model by matching multivariate probability distributions between simulated results and empirical data. Furthermore, to reduce model complexity, this dissertation simplifies the fundamental cooperative binding mechanism by a stochastic Hill equation model with optimized system parameters. Considering that many parameter vectors generate similar system dynamics and results, this dissertation proposes a general α-β-γ rule to return an acceptable parameter region of the stochastic Hill equation based on QNSTOP. Different objective functions are explored targeting different features of the empirical data.
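The stochastic half of the hybrid scheme described above is Gillespie's SSA. A minimal sketch on a birth-death gene expression toy model follows; the two reactions and their rate constants are assumptions for illustration, not a network from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
k_prod, k_deg = 10.0, 0.1          # assumed production/degradation rates
x, t, t_end = 0, 0.0, 200.0        # protein copy number, time, horizon
xs = [x]
while t < t_end:
    rates = np.array([k_prod, k_deg * x])  # propensities of the 2 reactions
    total = rates.sum()
    t += rng.exponential(1.0 / total)      # time to next reaction event
    if rng.uniform() * total < rates[0]:
        x += 1                             # production: X -> X + 1
    else:
        x -= 1                             # degradation: X -> X - 1
    xs.append(x)

# Stationary mean should approach k_prod / k_deg = 100.
print(f"mean copy number over last half: {np.mean(xs[len(xs) // 2:]):.1f}")
```

A hybrid method would integrate the fast or high-copy species with ODEs and reserve event-by-event simulation like this for the low-copy species, which is where the negative-population issue studied in the dissertation arises.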
Doctor of Philosophy
Modeling and simulation of biochemical networks faces numerous challenges as biochemical networks are discovered with increased complexity and unknown mechanisms. With improvement in experimental techniques, biologists are able to quantify genes and proteins and their dynamics in a single cell, which calls for quantitative stochastic models, or numerical models based on probability distributions, for gene and protein networks at cellular levels that match well with the data and account for randomness. This dissertation studies a stochastic model in space and time of the life cycle of a bacterium, Caulobacter. A two-dimensional model based on a natural pattern mechanism is investigated to illustrate the changes in space and time of a key protein population. However, stochastic simulations are often complicated by the expensive computational cost for large and sophisticated biochemical networks. The hybrid stochastic simulation algorithm is a combination of traditional deterministic models, or analytical models with a single output for a given input, and stochastic models. The hybrid method can significantly improve the efficiency of stochastic simulations for biochemical networks that contain both species populations and reaction rates with widely varying magnitude. The populations of some species may become negative in the simulation under some circumstances. This dissertation investigates negative population estimates from the hybrid method, proposes several remedies, and tests them with several cases including a realistic biological system. As a key factor that affects the quality of biological models, parameter estimation in stochastic models is challenging because the amount of observed data must be large enough to obtain valid results. To optimize system parameters, the quasi-Newton algorithm for stochastic optimization (QNSTOP) was studied and applied to a stochastic (budding) yeast life cycle model by matching different distributions between simulated results and observed data. Furthermore, to reduce model complexity, this dissertation simplifies the fundamental molecular binding mechanism by the stochastic Hill equation model with optimized system parameters. Considering that many parameter vectors generate similar system dynamics and results, this dissertation proposes a general α-β-γ rule to return an acceptable parameter region of the stochastic Hill equation based on QNSTOP. Different optimization strategies are explored targeting different features of the observed data.
9

Ahamad, Intan Salwani. "Multiscale line search in interior point methods for nonlinear optimization and applications." Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612762.

10

Arabnejad, Sajad. "Multiscale mechanics and multiobjective optimization of cellular hip implants with variable stiffness." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=119630.

Abstract:
Bone resorption and bone-implant interface instability are two bottlenecks of current orthopaedic hip implant designs. Bone resorption is often triggered by mechanical bio-incompatibility of the implant with the surrounding bone. It has serious clinical consequences in both primary and revision surgery of hip replacements. After primary surgery, bone resorption can cause periprosthetic fractures, leading to implant loosening. For the revision surgery, the loss of bone stock compromises the ability of bone to adequately fix the implant. Interface instability, on the other hand, occurs as a result of excessive micromotion and stress at the bone-implant interface, which prevents implant fixation. As a result, the implant fails, and revision surgery is required. Many studies have been performed to design an implant minimizing both bone resorption and interface instability. However, the results have not been effective, since minimizing one objective would penalize the other. As a result, among all designs available in the market, there is no implant that can concurrently minimize these two conflicting objectives. The goal of this thesis is to design an orthopaedic hip replacement implant that can simultaneously minimize bone resorption and implant instability. We propose a novel concept of a variable stiffness implant that is implemented through the use of graded lattice materials. A design methodology based on multiscale mechanics and multiobjective optimization is developed for the analysis and design of a fully porous implant with a lattice microstructure. The mechanical properties of the implant are locally optimized to minimize bone resorption and interface instability. Asymptotic homogenization (AH) theory is used to capture stress distribution for failure analysis throughout the implant and its lattice microstructure. For the implant lattice microstructure, a library of 2D cell topologies is developed, and their effective mechanical properties, including elastic moduli and yield strength, are computed using AH. Since orthopaedic hip implants are generally expected to support dynamic forces generated by human activities, they should also be designed against fatigue fracture to avoid progressive damage. A methodology for fatigue design of cellular materials is proposed and applied to a two-dimensional implant, with Kagome and square cell topologies. A lattice implant with an optimum distribution of material properties is proved to considerably reduce the amount of bone resorption and interface shear stress compared to a fully dense titanium implant. The manufacturability of the lattice implants is demonstrated by fabricating a set of 2D proof-of-concept prototypes using Electron Beam Melting (EBM) with Ti6Al4V powder. Optical microscopy is used to measure the morphological parameters of the cellular microstructure. The numerical analysis and the manufacturability tests performed in this preliminary study suggest that the developed methodology can be used for the design and manufacturing of novel orthopaedic implants that can significantly contribute to reducing some clinical consequences of current implants.
11

Petay, Margaux. "Multimodal and multiscale analysis of complex biomaterials: optimization and constraints of infrared nanospectroscopy measurements." Electronic Thesis or Diss., Université Paris-Saclay, 2023. http://www.theses.fr/2023UPASF092.

Abstract:
In the biomedical field, understanding the physicochemical changes at the cellular level in tissues can be crucial for unraveling the mechanisms of pathological phenomena. However, the number of techniques providing chemical descriptions at the cellular/molecular level is limited. Infrared (IR) nanospectroscopy techniques, particularly AFM-IR (Atomic Force Microscopy-Infrared), are promising as they offer chemical descriptions of materials at the nanometer scale. To date, AFM-IR has mainly been used in biology to study individual cells or micro-organisms; its direct application to biological tissues remains relatively scarce owing to the complex nature of tissue sections. Yet many applications could benefit from such a description, such as the study of mineralization phenomena in breast tissue. Breast microcalcifications (BMCs) are calcium-based deposits (such as calcium oxalate and calcium phosphate) hypothesized to be associated with some breast pathologies, including cancer. Despite increased research over the past decade, the formation process of BMCs and their connection with breast conditions remain poorly understood. Still, nanoscale chemical speciation of BMCs might offer new insights into their chemical architecture. However, breast biopsies typically range from a few millimeters to a few centimeters and contain many BMCs ranging from hundreds of nanometers to a millimeter, so a multiscale characterization strategy is required to provide both a global chemical description of the sample and a fine chemical description of the BMCs. We thus propose a new multimodal and multiscale approach to investigate BMCs' morphological properties using scanning electron microscopy and their chemical composition at the microscale using IR spectromicroscopy, extending down to the nanometer scale thanks to AFM-IR analysis. Although AFM-IR measurements of inorganic and crystalline objects can be challenging due to their specific optical and mechanical properties, we demonstrate AFM-IR's capability to characterize pathological deposits directly in biological tissues. Furthermore, implementing a multimodal and multiscale methodology comes with significant challenges in terms of sample preparation, measurement, data processing and management, and interpretation; these challenges are outlined and addressed.
12

Sato, Ayami. "A structural optimization methodology for multiscale designs considering local deformation in microstructures and rarefied gas flows in microchannels." Kyoto University, 2019. http://hdl.handle.net/2433/242495.

13

Xia, Liang. "Towards optimal design of multiscale nonlinear structures: reduced-order modeling approaches." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2230/document.

Abstract:
High-performance heterogeneous materials are increasingly used nowadays for their advantageous overall characteristics, resulting in superior structural mechanical performance. The pronounced heterogeneities of these materials have a significant impact on structural behavior, such that one needs to account for both microscopic material heterogeneities and constituent behaviors to achieve reliable structural designs. Meanwhile, the fast progress of materials science and the latest developments in 3D printing techniques make it possible to generate more innovative, lightweight, and structurally efficient designs by controlling the composition and the microstructure of the material at the microscopic scale. In this thesis, we have made first attempts towards topology optimization design of multiscale nonlinear structures, including the design of highly heterogeneous structures, material microstructural design, and simultaneous design of structure and materials. We have primarily developed a multiscale design framework constituted of two key ingredients: multiscale modeling for structural performance simulation and topology optimization for structural design. With regard to the first ingredient, we employ the first-order computational homogenization method FE2 to bridge the structural and material scales. With regard to the second ingredient, we apply the Bi-directional Evolutionary Structural Optimization (BESO) method to perform topology optimization. In contrast to the conventional nonlinear design of homogeneous structures, this design framework provides an automatic design tool for nonlinear, highly heterogeneous structures whose underlying material model is governed directly by the realistic microstructural geometry and the microscopic constitutive laws. Note that the FE2 method is extremely expensive in terms of computing time and storage requirements. The dilemma of heavy computational burden is even more pronounced when it comes to topology optimization: the time-consuming multiscale problem must be solved not just once, but for many different realizations of the structural topology. Meanwhile, the optimization process requires multiple design loops involving similar or even repeated computations at the microscopic scale. For these reasons, we introduce a third ingredient to the design framework: reduced-order modeling (ROM). We develop an adaptive surrogate model using snapshot Proper Orthogonal Decomposition (POD) and diffuse approximation to substitute for the microscopic solutions. The surrogate model is initially built in the first design iteration and updated adaptively in the subsequent design iterations. This surrogate model has shown promising performance in terms of reduced computing cost and modeling accuracy when applied to the design framework for nonlinear elastic cases. For more severe material nonlinearity, we directly employ an established method, potential-based Reduced Basis Model Order Reduction (pRBMOR). The key idea of pRBMOR is to approximate the internal variables of the dissipative material by a precomputed reduced basis computed from snapshot POD. To drastically accelerate the computing procedure, pRBMOR has been implemented with parallelization on modern Graphics Processing Units (GPUs). The implementation of pRBMOR with GPU acceleration enables us to realize the design of multiscale elastoviscoplastic structures, using the previously developed design framework, in realistic computing time and with affordable memory requirements.
We have so far assumed a fixed material microstructure at the microscopic scale. The remaining part of the thesis is dedicated to the simultaneous design of both the macroscopic structure and the microscopic material: within the previously established multiscale design framework, topology variables and volume constraints are defined at both scales, as the schematic sketch below illustrates for the BESO update.
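The sketch assumes element sensitivities are already available; in the framework above they would be recomputed by the multiscale (FE2) analysis at every iteration, and real BESO also filters and history-averages sensitivities, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)
n_elem = 1000
target_vol, er = 0.5, 0.02          # target volume fraction, evolution rate
# Stand-in sensitivities (e.g., element strain energies). In the real loop
# they are recomputed by the multiscale analysis at every iteration.
sens = rng.random(n_elem)

vol = 1.0
while vol > target_vol:
    vol = max(vol * (1.0 - er), target_vol)   # shrink the volume gradually
    n_keep = int(round(vol * n_elem))
    threshold = np.sort(sens)[::-1][n_keep - 1]
    density = (sens >= threshold).astype(float)  # keep the most efficient elements
    # (here: a new equilibrium solve would update `sens` before the next pass)

print(f"final volume fraction: {density.mean():.2f}")
```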
14

Dang, Hieu. "Adaptive multiobjective memetic optimization: algorithms and applications." Journal of Cognitive Informatics and Natural Intelligence, 2012. http://hdl.handle.net/1993/30856.

Abstract:
The thesis presents research on multiobjective optimization based on memetic computing and its applications in engineering. We have introduced a framework for adaptive multiobjective memetic optimization algorithms (AMMOA) with an information-theoretic criterion for guiding the selection, clustering, and local refinements. A robust stopping criterion for AMMOA has also been introduced to solve nonlinear and large-scale optimization problems. The framework has been implemented for different benchmark test problems with remarkable results. This thesis also presents two applications of these algorithms. First, an optimal image data hiding technique has been formulated as a multiobjective optimization problem with conflicting objectives. In particular, trade-off factors in designing an optimal image data hiding scheme are investigated to maximize the quality of watermarked images and the robustness of the watermark. With a fixed-size logo watermark, there is a conflict between these two objectives, so a multiobjective optimization problem is introduced. We propose to use a hybrid between general regression neural networks (GRNN) and the adaptive multiobjective memetic optimization algorithm (AMMOA) to solve this challenging problem. This novel image data hiding approach has been implemented for many different natural test images with remarkable robustness and transparency of the embedded logo watermark. We also introduce a perceptual measure based on the relative Rényi information spectrum to evaluate the quality of watermarked images. The second application is the problem of joint spectrum sensing and power control optimization for a multichannel, multiple-user cognitive radio network. We investigated trade-off factors in designing efficient spectrum sensing techniques to maximize the throughput and minimize the interference. To maximize the throughput of secondary users and minimize the interference to primary users, we propose a joint determination of the sensing and transmission parameters of the secondary users, such as sensing times, decision threshold vectors, and power allocation vectors. There is a conflict between these two objectives, so a multiobjective optimization problem is again used, in the form of AMMOA. This algorithm learns to find optimal spectrum sensing times, decision threshold vectors, and power allocation vectors to maximize the averaged opportunistic throughput and minimize the averaged interference to the cognitive radio network.
February 2016
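The conflicting objectives described above lead to a Pareto front rather than a single optimum. A minimal sketch of non-dominated filtering follows; the objective values are random stand-ins (e.g., one minus watermark quality and one minus robustness), not data from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
# Each row: (f1, f2) of one candidate, both to be minimized; random stand-ins.
F = rng.random((200, 2))

def pareto_mask(F):
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        # Candidate j dominates i if it is no worse in both objectives
        # and strictly better in at least one.
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return keep

front = F[pareto_mask(F)]
print(f"{len(front)} non-dominated candidates out of {len(F)}")
```

A multiobjective memetic algorithm like AMMOA maintains and refines such a set of non-dominated solutions rather than collapsing the two objectives into a single score.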
15

Carriou, Vincent. "Multiscale, multiphysic modeling of the skeletal muscle during isometric contraction." Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2376/document.

Abstract:
The neuromuscular and musculoskeletal systems are complex systems of systems (SoS) that interact perfectly to produce motion. From this interaction, muscular force is generated from the muscle activation commanded by the Central Nervous System (CNS), which pilots joint motion. In parallel, an electrical activity of the muscle is generated, driven by the same CNS command. This electrical activity can be measured at the skin surface using electrodes, yielding the surface electromyogram (sEMG). Knowing how these muscle outcomes are generated is highly important in biomechanical and clinical applications. Evaluating and quantifying the interactions arising during muscle activation is hard and complex to investigate in experimental conditions. It is therefore necessary to develop a way to describe and estimate them. The bioengineering literature provides several models of sEMG and force generation. They are principally used to describe subparts of the muscular outcomes, and they suffer from several important limitations, such as lack of physiological realism, personalization, and representability when a complete muscle is considered. In this work, we propose to construct bioreliable, personalized and fast models describing the electrical and mechanical activities of the muscle during contraction. For this purpose, we first propose a model describing the electrical activity at the skin surface of the muscle, where this electrical activity is determined from a voluntary command of the Peripheral Nervous System (PNS), activating the muscle fibers, which generate a depolarization of their membrane that is filtered by the limb volume. Once this electrical activity is computed, the recording system, i.e. the High Density sEMG (HD-sEMG) grid, is defined over the skin, and the sEMG signal is determined as a numerical integration of the electrical activity under the electrode area. In this model, the limb is considered as a multilayered cylinder in which muscle, adipose and skin tissues are described. We then propose a mechanical model described at the Motor Unit (MU) scale. The mechanical outcomes (muscle force, stiffness and deformation) are determined from the same voluntary command of the PNS, based on the Huxley sliding-filament model upscaled to the MU scale using the distribution-moment theory proposed by Zahalak. This model is validated against force profiles recorded from a subject implanted with an electrical stimulation device. Finally, we propose three applications of the proposed models to illustrate their reliability and usefulness. A global sensitivity analysis of the statistics computed over the sEMG signals according to variations of the HD-sEMG electrode grid is performed. Then, we propose, in collaboration, a new HD-sEMG/force relationship, using personalized simulated data of the Biceps Brachii from the electrical model and a twitch-based model to estimate a specific force profile corresponding to a specific sEMG sensor network and muscle configuration. To conclude, a deformable electro-mechanical model coupling the two proposed models is proposed. This deformable model updates the limb cylinder anatomy under an isovolumic assumption, respecting the incompressible property of the muscle.
APA, Harvard, Vancouver, ISO, and other styles
16

Liu, Mingyong. "Optimization of electromagnetic and acoustic performances of power transformers." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS256/document.

Full text
Abstract:
This thesis deals with the prediction of the vibration of a multilayer transformer core made of an assembly of electrical steel sheets. The magneto-mechanical coupled problem is solved by a sequential finite element approach: a magnetic resolution is followed by a mechanical resolution. A 3D Simplified Multi-Scale Model (SMSM), describing both magnetic and magnetostrictive anisotropies and accounting for magnetic and magnetostrictive nonlinearities, is used as the constitutive law of the material. The transformer core structure is modeled in 2D, and a homogenization technique is implemented to take the anisotropic behavior of each layer into consideration and define an average behavior for each element of the finite element mesh. Experimental measurements are then carried out, allowing the validation of the material constitutive law, the static structural behavior, the dynamic structural behavior, and the noise estimation. Different materials and prototype geometries are considered in this work. Structural optimizations are finally achieved by numerical simulation, based on the developed model, for lower vibration and noise emission of the transformer cores.
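As a rough illustration of the layer-wise homogenization idea (an average behaviour assigned to each finite element of a laminated core), the following sketch computes the classical Voigt and Reuss bounds for a two-layer stack. The moduli and volume fractions are invented for illustration; the thesis' actual SMSM constitutive law is not reproduced here.

```python
import numpy as np

# Young's moduli (Pa) and volume fractions of the stacked layers
# (illustrative values, not measured transformer-sheet data)
E = np.array([210e9, 5e9])       # steel sheet, insulating coating
f = np.array([0.95, 0.05])

E_voigt = np.sum(f * E)          # in-plane (iso-strain) bound
E_reuss = 1.0 / np.sum(f / E)    # through-thickness (iso-stress) bound
print(f"Voigt: {E_voigt/1e9:.1f} GPa, Reuss: {E_reuss/1e9:.1f} GPa")
```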
APA, Harvard, Vancouver, ISO, and other styles
17

Chen, Yun. "Mining Dynamic Recurrences in Nonlinear and Nonstationary Systems for Feature Extraction, Process Monitoring and Fault Diagnosis." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6072.

Full text
Abstract:
Real-time sensing brings the proliferation of big data that contains rich information about complex systems. It is well known that real-world systems show high levels of nonlinear and nonstationary behavior in the presence of extraneous noise. This brings significant challenges for human experts to visually inspect the integrity and performance of complex systems from the collected data. My research goal is to develop innovative methodologies for modeling and optimizing complex systems, and to create enabling technologies for real-world applications. Specifically, my research focuses on Mining Dynamic Recurrences in Nonlinear and Nonstationary Systems for Feature Extraction, Process Monitoring and Fault Diagnosis. This research will enable and assist in (i) sensor-driven modeling, monitoring and optimization of complex systems; (ii) integrating product design with system design of nonlinear dynamic processes; and (iii) creating better prediction/diagnostic tools for real-world complex processes. My research accomplishments include the following. (1) Feature Extraction and Analysis: I proposed a novel multiscale recurrence analysis to not only delineate recurrence dynamics in complex systems, but also resolve the computational issues for large-scale datasets. It was utilized to identify heart-failure subjects from 24-hour heart rate variability (HRV) time series and to control the quality of mobile-phone-based electrocardiogram (ECG) signals. (2) Modeling and Prediction: I proposed the design of a stochastic sensor network that allows a subset of sensors at varying locations within the network to transmit dynamic information intermittently, and a new approach of sparse particle filtering to model spatiotemporal dynamics of big data in the stochastic sensor network. It may be noted that the proposed algorithm is very general and is potentially applicable to stochastic sensor networks in a variety of disciplines, e.g., environmental sensor networks and battlefield surveillance networks. (3) Monitoring and Control: Process monitoring of dynamic transitions in complex systems is more concerned with aperiodic recurrences and heterogeneous types of recurrence variations. However, traditional recurrence analysis treats all recurrence states homogeneously, thereby failing to delineate heterogeneous recurrence patterns. I developed a new approach of heterogeneous recurrence analysis for complex systems informatics, process monitoring and anomaly detection. (4) Simulation and Optimization: Another line of research focuses on fractal-based simulation to study spatiotemporal dynamics on fractal surfaces of high-dimensional complex systems, and further optimize spatiotemporal patterns. This algorithm is applied to study reaction-diffusion modeling on fractal surfaces and real-world 3D heart surfaces.
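A minimal sketch of the recurrence analysis underlying this line of work: build a delay embedding of a time series and threshold pairwise distances into a binary recurrence matrix. The embedding dimension, delay and threshold below are illustrative assumptions, not the dissertation's settings.

```python
import numpy as np

def recurrence_matrix(x, dim=3, tau=2, eps=0.1):
    """Binary recurrence matrix of a time series after delay embedding."""
    n = x.size - (dim - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < eps * d.max()).astype(int)

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.05 * rng.standard_normal(400)
R = recurrence_matrix(x)
print("recurrence rate:", R.mean())
```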
APA, Harvard, Vancouver, ISO, and other styles
18

Pereira, Danillo Roberto 1984. "Fitting 3D deformable biological models to microscope images = Alinhamento de modelos tridimensionais usando imagens de microscopia." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275627.

Full text
Abstract:
Advisor: Jorge Stolfi
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação
Abstract: In this thesis we describe a generic algorithm (which we call MSFit) able to estimate the pose and deformations of 3D models of biological structures (bacteria, cells, etc.) in images obtained by optical and scanning electron microscopes. The algorithm uses a multiscale, outline-sensitive image comparison metric and a novel nonlinear optimization method. In our tests with models of moderate complexity (up to 12 parameters), the algorithm correctly identifies the model parameters in 60-70% of the cases with real images and in 80-90% of the cases with synthetic images.
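The thesis' outline-sensitive metric is not reproduced here, but the coarse-to-fine comparison principle behind a multiscale metric can be sketched: compare two images at several pyramid levels and sum weighted per-level errors. This toy version uses plain mean-squared differences and invented weights, as a stand-in for MSFit's contour-sensitive metric.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (image sides assumed even)."""
    return 0.25 * (img[::2, ::2] + img[1::2, ::2] + img[::2, 1::2] + img[1::2, 1::2])

def multiscale_distance(a, b, levels=3, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of mean-squared differences over an image pyramid."""
    total = 0.0
    for w in weights[:levels]:
        total += w * np.mean((a - b) ** 2)
        a, b = downsample(a), downsample(b)
    return total

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(multiscale_distance(img, img + 0.01 * rng.standard_normal((64, 64))))
```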
Doctorate
Computer Science
Doctor of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
19

Waldspurger, Irène. "Wavelet transform modulus : phase retrieval and scattering." Thesis, Paris, Ecole normale supérieure, 2015. http://www.theses.fr/2015ENSU0036/document.

Full text
Abstract:
Automatically understanding the content of a natural signal, like a sound or an image, is in general a difficult task. In their naive representation, signals are indeed complicated objects, belonging to high-dimensional spaces. With a different representation, they can however be easier to interpret. This thesis considers a representation commonly used in these cases, in particular for the analysis of audio signals: the modulus of the wavelet transform. To better understand the behaviour of this operator, we study, from a theoretical as well as algorithmic point of view, the corresponding inverse problem: the reconstruction of a signal from the modulus of its wavelet transform. This problem belongs to a wider class of inverse problems: phase retrieval problems. In a first chapter, we describe a new algorithm, PhaseCut, which numerically solves a generic phase retrieval problem. Like the similar algorithm PhaseLift, PhaseCut relies on a convex relaxation of the phase retrieval problem, which happens to be of the same form as relaxations of the widely studied problem MaxCut. We compare the performances of PhaseCut and PhaseLift in terms of precision and complexity. In the next two chapters, we study the specific case of phase retrieval for the wavelet transform. We show that any function with no negative frequencies is uniquely determined (up to a global phase) by the modulus of its wavelet transform, but that the reconstruction from the modulus is not stable to noise, for a strong notion of stability. However, we prove a local stability property. We also present a new non-convex phase retrieval algorithm, which is specific to the case of the wavelet transform, and we numerically study its performance. Finally, in the last two chapters, we study a more sophisticated representation, built from the modulus of the wavelet transform: the scattering transform. Our goal is to understand which properties of a signal are characterized by its scattering transform. We first prove that the energy of the scattering coefficients of a signal, at a given order, is upper bounded by the energy of the signal itself, convolved with a high-pass filter that depends on the order. We then study a generalization of the scattering transform, for stationary processes. We show that, in finite dimension, this generalized transform preserves the norm. In dimension one, we also show that the generalized scattering coefficients of a process characterize the tail of its distribution.
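PhaseCut itself is a semidefinite relaxation and is not reproduced here. As a hedged baseline, the classical error-reduction (alternating projections) scheme below recovers a signal from the modulus of its Fourier transform under a known-support assumption, illustrating the general shape of a phase retrieval iteration; the signal, support size and iteration count are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n, support = 128, 32
x_true = np.zeros(n); x_true[:support] = rng.standard_normal(support)
mag = np.abs(np.fft.fft(x_true))          # measured modulus (phaseless data)

x = rng.standard_normal(n)                # random initialization
for _ in range(500):
    X = np.fft.fft(x)
    X = mag * np.exp(1j * np.angle(X))    # impose the measured modulus
    x = np.real(np.fft.ifft(X))
    x[support:] = 0.0                     # impose the known support
err = np.linalg.norm(np.abs(np.fft.fft(x)) - mag) / np.linalg.norm(mag)
print(f"relative modulus misfit: {err:.2e}")
```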
APA, Harvard, Vancouver, ISO, and other styles
20

Koyeerath, Graham Danny. "Topology optimization in interfacial flows using the pseudopotential model." Electronic Thesis or Diss., Nantes Université, 2024. http://www.theses.fr/2024NANU4008.

Full text
Abstract:
The optimization of systems and processes is an exercise that is carried out taking into account one's experience and knowledge. Here we explore a mathematical approach to optimizing physical problems by utilizing various optimization algorithms. In this thesis, the preliminary objective of the optimizer is to modify the flow characteristics of the system by tweaking the capillary forces. This can be accomplished by modifying either of two sets of parameters: (a) by introducing a wetting solid material, i.e. the level-set parameter, or (b) by changing the wettability of the existing solid surfaces, i.e. the wettability parameter. We propose that the former set of parameters be modified using the topology optimization algorithm, where the gradient of the cost function is obtained by solving an adjoint-state model for the single-component multiphase Shan and Chen (SCMP-SC) model. Similarly, we propose that the latter set of parameters be modified using the wettability optimization algorithm, where we again derive an adjoint-state model for SCMP-SC. Lastly, we utilize a multiscale optimization algorithm, where we compute the gradient of the cost function using finite differences. We have succeeded in demonstrating the competence of this optimizer for maximizing the mean velocity of a 2D droplet by up to 69%.
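Where an adjoint model is unavailable, the thesis' third route, finite-difference gradients, can be sketched in a few lines. The objective below is a cheap stand-in for a lattice-Boltzmann evaluation of droplet mean velocity; everything about it (function, step sizes, learning rate) is an illustrative assumption.

```python
import numpy as np

def cost(theta):
    """Placeholder objective: stands in for the droplet mean velocity
    returned by a lattice-Boltzmann run with wettability parameters theta."""
    return -np.sum(np.sin(theta))          # illustrative only

def fd_gradient(J, theta, h=1e-6):
    """Central finite-difference gradient: two solver calls per parameter."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta); e[i] = h
        g[i] = (J(theta + e) - J(theta - e)) / (2.0 * h)
    return g

theta = np.array([0.3, 0.7, 1.1])
for _ in range(50):                        # plain gradient descent
    theta -= 0.1 * fd_gradient(cost, theta)
print(theta, cost(theta))
```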
APA, Harvard, Vancouver, ISO, and other styles
21

Billy, Frédérique. "Modélisation mathématique multi-échelle de l'angiogenèse tumorale : analyse de la réponse tumorale aux traitements anti-angiogéniques." Phd thesis, Université Claude Bernard - Lyon I, 2009. http://tel.archives-ouvertes.fr/tel-00631513.

Full text
Abstract:
Cancer is one of the leading causes of death worldwide. Tumor angiogenesis is the process by which new blood vessels form from pre-existing ones. A cancerous tumor can induce angiogenesis to secure the additional supply of oxygen and nutrients it needs to continue growing. This thesis consists in developing a multiscale mathematical model of tumor angiogenesis. The model integrates the main mechanisms acting at the tissue and molecular scales. Coupled with a tumor growth model, our model allows the effects of oxygen supply on tumor growth to be studied. From a mathematical point of view, these angiogenesis and tumor growth models rest on reaction-diffusion and advection partial differential equations governing the spatio-temporal evolution of the densities of endothelial cells (the cells forming blood vessel walls) and tumor cells, as well as the tissue concentrations of pro- and anti-angiogenic substances and oxygen. At the molecular scale, the binding of angiogenic substances to the membrane receptors of endothelial cells, a key mechanism of intercellular communication, is modeled using pharmacological laws. The model thus reproduces in silico the main mechanisms of angiogenesis and allows their role in tumor growth to be analyzed. It also makes it possible to simulate the action of different anti-angiogenic therapies and to study their efficacy on tumor development, in support of therapeutic innovation.
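The tissue-scale part of such a model boils down to reaction-diffusion equations; a minimal explicit 1D sketch for a secreted angiogenic factor is given below. The geometry, parameter values and boundary treatment are invented for illustration, not taken from the thesis.

```python
import numpy as np

# Explicit 1D reaction-diffusion step for a pro-angiogenic factor c(x, t):
# dc/dt = D d2c/dx2 + s(x) - k c   (illustrative parameter values)
D, k, dx, dt, n = 1e-3, 0.5, 0.01, 0.01, 101
assert D * dt / dx**2 <= 0.5               # explicit stability condition
c = np.zeros(n)
s = np.zeros(n); s[45:55] = 1.0            # tumour region secreting the factor

for _ in range(2000):
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    lap[0] = lap[-1] = 0.0                 # crude no-flux boundaries
    c += dt * (D * lap + s - k * c)
print(f"max concentration: {c.max():.3f}")
```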
APA, Harvard, Vancouver, ISO, and other styles
22

Wenzel, Moritz. "Development of a Metamaterial-Based Foundation System for the Seismic Protection of Fuel Storage Tanks." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/256685.

Full text
Abstract:
Metamaterials are typically described as materials with ’unusual’ wave propagation properties. Originally developed for electromagnetic waves, these materials have also spread into the field of acoustic wave guiding and cloaking, the most relevant of these ’unusual’ properties being the so-called band-gap phenomenon. A band gap signifies a frequency region where elastic waves cannot propagate through the material, which, in principle, could be used to protect buildings from earthquakes. Based on this, two relevant concepts have been proposed in the field of seismic engineering, namely metabarriers and metamaterial-based foundations. This thesis deals with the development of the Metafoundation, a metamaterial-based foundation system for the seismic protection of fuel storage tanks against excessive base shear and pipeline rupture. Note that storage tanks have proven to be highly sensitive to earthquakes and can trigger severe economic and environmental consequences in case of failure, and were therefore chosen as the superstructure for this study. Furthermore, when tanks are protected with traditional base isolation systems, the resulting horizontal displacements during seismic action may become excessively large and subsequently damage connected pipelines. A novel system that protects both tank and pipeline could significantly augment the overall safety of industrial plants. With the tank as the primary structure of interest in mind, the Metafoundation was conceived as a locally resonant metamaterial with a band gap encompassing the tank's critical eigenfrequency. The initial design comprised a continuous concrete matrix with embedded resonators and rubber inclusions, which was later reinvented as a column-based structure with steel springs for resonator suspension. After investigating the band-gap phenomenon, a parametric study of the system specifications showed that the horizontal stiffness of the overall foundation is crucial to its functionality, while the superstructure turned out to be non-negligible when tuning the resonators. Furthermore, storage tanks are commonly connected to pipeline systems, which can be damaged by the interaction between tank and pipeline during seismic events. Due to the complex and nonlinear response of pipeline systems, the coupled tank-pipeline behaviour becomes increasingly difficult to represent through numerical models, which led to the experimental study of a foundation-tank-pipeline setup. With the aid of a hybrid simulation, only the pipeline needed to be represented by a physical substructure, while both tank and Metafoundation were modelled as numerical substructures and coupled to the pipeline. The results showed that the foundation can effectively reduce the stresses in the tank and, at the same time, limit the displacements imposed on the pipeline. Building on this, an optimization algorithm was developed in the frequency domain, taking superstructure and ground motion spectrum into consideration. The advantages of optimizing in the frequency domain were, on the one hand, the reduction of computational effort and, on the other hand, the consideration of the stochastic nature of the earthquake. Based on this, two different performance indices, investigating interstory drifts and energy dissipation, revealed that neither superstructure nor ground motion can be disregarded when designing a metamaterial-based foundation.
Moreover, a 4 m tall optimized foundation, designed to remain elastic when verified with a response spectrum analysis at a return period of 2475 years (according to NTC 2018), reduced the tank's base shear on average by 30%. These results indicated that the foundation was feasible and functional in terms of construction practices and dynamic response, yet impractical from an economic point of view. In order to tackle the issue of the uneconomic system size, a negative stiffness mechanism was invented and implemented into the foundation as a periodic structure. This mechanism, based on a local instability, amplified the metamaterial-like properties and thereby enhanced the overall system performance. Note that, due to the considered instability, the device exerts a nonlinear force-displacement relationship, which had the interesting effect of reducing the band gap instead of increasing it. Furthermore, time history analyses demonstrated that with 50% of the maximum admissible negative stiffness, the foundation could be reduced to 1/3 of its original size while maintaining its performance. Last but not least, a study on wire ropes as resonator suspension was conducted. Their nonlinear behaviour was approximated with the Bouc-Wen model, subsequently linearized by means of stochastic techniques and finally optimized with the algorithm developed earlier. The conclusion was that wire ropes could be used as a more realistic suspension mechanism while maintaining the high damping values required by the optimized foundation layouts. In sum, a metamaterial-based foundation system is developed and studied herein, with the main findings being: (i) a structure of this type is feasible under common construction practices; (ii) the shear stiffness of the system has a fundamental impact on its functionality; (iii) the superstructure cannot be neglected when studying metamaterial-based foundations; (iv) the complete coupled system can be tuned with an optimization algorithm based on calculations in the frequency domain; (v) an experimental study suggests that the system could be advantageous to connected pipelines; (vi) wire ropes may serve as resonator suspension; and (vii) a novel negative stiffness mechanism can effectively improve the system performance.
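The band-gap mechanism at the heart of the Metafoundation can be illustrated with the simplest possible surrogate: a main mass with a tuned resonator attached, whose harmonic response dips near the resonator frequency. The sketch below solves the 2-DOF frequency response with invented parameter values; it is not the thesis' foundation model.

```python
import numpy as np

# Harmonic response of a main mass M (stiffness K to ground) carrying a
# resonator (m, k, c): classic tuned-absorber surrogate of one foundation cell.
M, K = 1.0e4, 4.0e8            # kg, N/m (illustrative)
m = 0.2 * M                    # resonator mass ratio assumed 20%
k = m * K / M                  # resonator tuned at the main frequency
c = 0.05 * 2.0 * np.sqrt(k * m)

omega = 2 * np.pi * np.linspace(1.0, 60.0, 2000)
amp = []
for w in omega:
    Z = np.array([[K + k - M * w**2 + 1j * w * c, -(k + 1j * w * c)],
                  [-(k + 1j * w * c), k - m * w**2 + 1j * w * c]])
    x = np.linalg.solve(Z, np.array([1.0, 0.0]))   # unit harmonic force on M
    amp.append(abs(x[0]))
amp = np.array(amp)
f_main = np.sqrt(K / M) / (2 * np.pi)
print(f"main frequency: {f_main:.1f} Hz, response dip near it: {amp.min():.2e} m")
```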
APA, Harvard, Vancouver, ISO, and other styles
24

Residori, Sara. "FABRICATION AND CHARACTERIZATION OF 3D PRINTED METALLIC OR NON-METALLIC GRAPHENE COMPOSITES." Doctoral thesis, Università degli studi di Trento, 2022. https://hdl.handle.net/11572/355324.

Full text
Abstract:
Nature develops several materials with remarkable functional properties from comparatively simple base substances. Biological materials are often composites, which optimize their conformation to their function. Synthetic materials, on the other hand, are designed a priori, structured according to the performance to be achieved. 3D printing is the most direct method for producing specific components and endows the sample with material and geometry designed ad hoc for a defined purpose, starting from a biomimetic approach to functional structures. The technique has the advantage of being quick, accurate, and of wasting little material. Sample printing occurs through layer-by-layer deposition of material. Furthermore, the material is often a composite, which combines the characteristics of components with different geometries and properties, achieving better mechanical and physical performance. This thesis analyses the mechanics of natural and custom-made composites: the spider body, and the manufacturing of metallic and non-metallic graphene composites. The spider body is investigated in different sections of the exoskeleton, and specifically the fangs. The study involves the mechanical characterization of the single components by nanoindentation, with a special focus on hardness and Young's modulus. The experimental results were mapped with the purpose of presenting an accurate comparison of the mechanical properties of the spider body. The different stiffness of the components is due to the tuning of the same basic material (the cuticle, mainly composed of chitin) to achieve different mechanical functions, which has improved the animal's adaptation to specific evolutionary requirements. The synthetic composites, suitable for 3D printing, are metallic and non-metallic matrices combined with carbon-based fillers. Non-metallic graphene composites are multiscale compounds. Specifically, the material is a blend of an acrylonitrile-butadiene-styrene (ABS) matrix and different percentages of micro carbon fibers (MCF). In the second step, nanoscale fillers of carbon nanotubes (CNT) or graphene nanoplatelets (GNP) are added to the base mixture. The production process of the composite materials followed a specific protocol for the optimal procedure and machine parameters, as also reported in the literature. This method allowed control over the percentages of the different materials and ensured a homogeneous distribution of fillers in the plastic matrix. The multiscale compounds provide the base materials for the extrusion of fused filaments suitable for 3D printing of the samples. The composites were tested as compression-moulded sheets, as reference tests, and in the corresponding 3D-printed specimens. The addition of the micro-filler to the ABS matrix caused a notable increase in stiffness and a slight increase in strength, with a significant reduction in deformation at break. Concurrently, the addition of nanofillers was very effective in improving electrical conductivity compared to pure ABS and the micro-composites, even at the lowest filler content. Composites with GNP as nano-filler had a good impact on the stiffness of the materials, while the electrical conductivity of the composites is favoured by the presence of CNTs.
Moreover, the extrusion of the filament and fused filament fabrication printing led to the creation of voids within the structure, causing a significant loss of mechanical properties and a slight improvement in electrical conductivity relative to the multiscale moulded composites. The final aim of this work is the identification of 3D-printed multiscale composites offering the best match of mechanical and electrical properties among the different compounds proposed. Since structures with a metallic matrix and high mechanical performance are suitable for aerospace and automotive applications, metallic graphene composites are studied in the additive manufacturing sector. A comprehensive study of the mechanical and electrical properties of an innovative copper-graphene oxide composite (Cu-GO) was developed in collaboration with Fondazione E. Amaldi, in Rome. An extensive campaign on the working conditions was carried out, leading to the definition of an optimal protocol of printing parameters for obtaining the samples with the highest density. The composite powders were prepared following two different routes to disperse the nanofiller into the Cu matrix and were afterwards processed by the selective laser melting (SLM) technique. Analyses of the morphology, the macroscopic and microscopic structure, and the degree of oxidation of the printed samples were performed. Samples prepared following the mechanical mixing procedure showed a better response to the 3D printing process in all tests. The mechanical characterization instead showed a clear increase in the resistance of the material prepared with the ultrasonicated-bath method, despite the greater porosity of the specimens. The interesting comparison obtained between samples from the different routes highlights the influence of powder preparation and working conditions on the printing results. We hope that this research will be useful for investigating in detail the potential applications of these composites in different technological fields and will stimulate further comparative analysis.
APA, Harvard, Vancouver, ISO, and other styles
25

Zhu, Yan. "Rational design of plastic packaging for alcoholic beverages." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLA020.

Full text
Abstract:
The view of plastic food packaging has turned from useful to a major source of contaminants in food and an environmental threat. Substituting glass by recycled or biosourced plastic containers reduces the environmental impact of bottled beverages. The thesis developed a 3D computational and optimization framework to accelerate the prototyping of eco-efficient packaging for alcoholic beverages. Shelf-life, food safety, mechanical constraints, and packaging waste are considered as a single multicriteria optimization problem. New bottles are virtually generated within an iterative three-step process involving: i) a multiresolution [E]valuation of coupled mass transfer; ii) a [D]ecision step validating technical (shape, capacity, weight) and regulatory (shelf-life, migrations) constraints; iii) a global [S]olving step seeking acceptable Pareto solutions. The capacity to predict the shelf-life of liquors in real conditions was tested successfully on ca. 500 bottle miniatures in PET (polyethylene terephthalate) over several months. The entire approach has been designed to manage any coupled mass transfer (permeation, sorption, migration). Mutual sorption is considered via a polynary Flory-Huggins formulation. A blob formulation of the free-volume theory of Vrentas and Duda was developed to predict the diffusion properties of water and organic solutes in arbitrary glassy polymers (polyesters, polyamides, polyvinyls, polyolefins). The validation set included 433 experimental diffusivities from the literature and measured in this work. The contribution of polymer relaxation in glassy PET was analyzed in binary and ternary differential sorption using a cosorption microbalance from 25 to 50°C. Part of the framework will be released as an open-source project to encourage the integration of more factors affecting the shelf-life of beverages and food products (oxidation kinetics, aroma scalping).
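The polynary Flory-Huggins formulation mentioned above reduces, in the binary solvent/polymer case with infinite chain length, to a one-line activity formula, sketched here with an illustrative interaction parameter (the value of chi is assumed, not from the thesis).

```python
import numpy as np

def flory_huggins_activity(phi_s, chi):
    """Solvent activity in a binary polymer/solvent Flory-Huggins mixture:
    ln a_s = ln(phi_s) + (1 - phi_s) + chi * (1 - phi_s)**2
    (infinite polymer chain length assumed)."""
    return np.exp(np.log(phi_s) + (1.0 - phi_s) + chi * (1.0 - phi_s) ** 2)

phi = np.linspace(0.01, 0.2, 5)               # solvent volume fraction
print(flory_huggins_activity(phi, chi=1.5))   # chi value illustrative
```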
APA, Harvard, Vancouver, ISO, and other styles
26

Derou, Dominique. "Optimisation neuronale et régularisation multiéchelle auto-organisée pour la trajectographie de particules : application à l'analyse d'écoulements en mécanique des fluides." Grenoble INPG, 1995. http://www.theses.fr/1995INPG0146.

Full text
Abstract:
Different image processing techniques, gathered under the name of particle image velocimetry, are used to extract the characteristics of fluid-mechanics flows. With the emergence of new techniques for solving combinatorial problems and for image processing, the velocimetry problem can be tackled effectively in a new light. In the present study, we address the following two points: the recovery of an instantaneous velocity field from visualization images of flows seeded with particles, and the reconstruction of a dense velocity field from the sparse measurements previously determined. To reach these objectives, in the first part of this thesis we develop a new image processing method for object tracking. Based on a model inspired by considerations of visual perception, it leads to an original formulation as a combinatorial optimization problem. To solve it, a new type of neural network is developed, whose very generic properties may be of interest for many other applications. The second part of this thesis tackles the problem of data approximation. Considering the particular problem to be solved, we built a Markovian model for reconstructing fluid-mechanics velocity fields and formulated the problem as a regularization problem. We then proposed a new multiscale relaxation method on a mesh adapted to the spatial distribution of the data. The fields touched on in this thesis are many and varied: particle image velocimetry, gestalt theory, visual perception, perceptual grouping, optimization, neural networks, Markovian modeling, data approximation, regularization theory, self-organizing neural maps, and multiscale relaxation.
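The thesis solves frame-to-frame particle correspondence as a combinatorial optimization with a dedicated neural network; as a hedged, conventional baseline, the same matching can be posed as a linear assignment problem, e.g. with scipy (synthetic particle data below).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
p1 = rng.random((30, 2))                       # particle positions, frame 1
p2 = p1 + 0.02 * rng.standard_normal((30, 2))  # displaced positions, frame 2
rng.shuffle(p2)                                # correspondence made unknown

cost = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)       # globally optimal matching
velocities = p2[cols] - p1[rows]
print(f"mean displacement: {np.linalg.norm(velocities, axis=1).mean():.4f}")
```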
APA, Harvard, Vancouver, ISO, and other styles
27

Eve, Thierry. "Optimisation decentralisee et coordonnee de la conduite de systemes electriques interconnectes." Paris 6, 1994. http://www.theses.fr/1994PA066569.

Full text
Abstract:
In this work, the optimization of the operation of interconnected electric power systems is considered from the standpoint of solving a large linear program whose complete formulation is inaccessible. Indeed, the data needed to describe the problem are distributed among different decision centers that exchange only a limited volume of information with one another. Moreover, each decision center wishes to preserve its autonomy, that is, to solve a restricted version of the base problem with its own tools. Building on the mathematical framework of decomposition-coordination, our objective is to propose different ways of obtaining an optimal solution of the global linear problem while taking this decentralized structure into account. This approach comprises two phases: modeling and solving. First, we identify the various problem-transformation techniques that allow the problem to be cast in a form compatible with decomposition-coordination mechanisms. In particular, we introduce the notion of multiscale models, which give a very precise meaning to the roles assigned to the coordinating center and to the local decision centers, respectively. The optimization problems resulting from the modeling step are convex and nondifferentiable. To solve them, we propose three original implementations of algorithms based on a regularization procedure: two implementations of the proximal algorithm, one suited to minimizing a convex nondifferentiable function under linear constraints, the other to solving saddle-point problems; and finally the barycentric method, whose application setting is decomposition by setpoints.
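The decomposition-coordination idea can be sketched on a toy problem: two decision centers with private quadratic costs share a coupling constraint, and a coordinator iterates on a price (Lagrange multiplier). This is a generic dual-decomposition sketch with invented data, not the thesis' proximal or barycentric algorithms.

```python
# Dual decomposition of  min f1(x1) + f2(x2)  s.t.  x1 + x2 = b,
# with quadratic local costs f_i(x) = a_i * x**2 so each center solves its
# subproblem in closed form; the coordinator updates the price lam.
a1, a2, b = 2.0, 1.0, 10.0      # illustrative local cost weights and demand
lam = 0.0
for it in range(200):
    x1 = lam / (2 * a1)         # argmin_x a1*x^2 - lam*x (center 1)
    x2 = lam / (2 * a2)         # argmin_x a2*x^2 - lam*x (center 2)
    lam += 0.5 * (b - x1 - x2)  # coordinator: subgradient step on the dual
print(f"x1={x1:.3f}, x2={x2:.3f}, x1+x2={x1+x2:.3f} (target {b})")
```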
APA, Harvard, Vancouver, ISO, and other styles
28

Gobé, Alexis. "Méthodes Galerkin discontinues pour la simulation de problèmes multiéchelles en nanophotonique et applications au piégeage de la lumière dans des cellules solaires." Thesis, Université Côte d'Azur, 2020. http://www.theses.fr/2020COAZ4011.

Full text
Abstract:
The objective of this thesis is the numerical study of light trapping in nanostructured solar cells. Climate change has become a major issue requiring a short-term energy transition. In this context, solar energy seems to be an ideal energy source. This resource is both globally scalable and environmentally friendly. In order to maximize its penetration, the amount of light absorbed must be increased and the costs associated with cell design reduced. Light trapping is a strategy that achieves both of these objectives. The principle is to use nanometric textures to focus the light in the absorbing semiconductor layers. In this work, the Discontinuous Galerkin Time-Domain (DGTD) method is introduced. Two major methodological developments are presented, allowing the characteristics of solar cells to be taken into account more accurately. First, the use of a local approximation order is proposed, based on a particular order distribution strategy. The second development is the use of hybrid meshes mixing structured hexahedral and unstructured tetrahedral elements. Realistic cases of solar cells from the literature and from collaborations with physicists in the field of photovoltaics illustrate the contribution of these developments. A case of inverse optimization of a diffraction grating in a solar cell is also presented, coupling the numerical solver with a Bayesian optimization algorithm. In addition, an in-depth study of the solver's performance has been carried out, with methodological modifications to counter load-balancing problems. Finally, a more prospective method, the Multiscale Hybrid-Mixed method (MHM), specialized in solving very strongly multiscale problems, is introduced. A multiscale time scheme is presented and its stability proven.
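The inverse grating optimization couples an expensive solver with Bayesian optimization; a generic sketch of that loop (Gaussian-process surrogate plus expected improvement, with a cheap stand-in for the DGTD solver) might look as follows. The objective function and search range are invented for illustration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(p):
    """Stand-in for the DGTD solver: absorbed-energy proxy vs grating period p."""
    return np.sin(3 * p) * np.exp(-0.5 * (p - 1.2) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 3.0, size=(5, 1))             # initial designs
y = np.array([objective(x[0]) for x in X])
grid = np.linspace(0.0, 3.0, 400).reshape(-1, 1)

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_new = grid[np.argmax(ei)]
    X = np.vstack([X, x_new]); y = np.append(y, objective(x_new[0]))
print(f"best period found: {X[np.argmax(y)][0]:.3f}, value {y.max():.3f}")
```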
APA, Harvard, Vancouver, ISO, and other styles
29

Rath, James Michael 1975. "Multiscale basis optimization for Darcy flow." Thesis, 2007. http://hdl.handle.net/2152/3977.

Full text
Abstract:
Simulation of flow through a heterogeneous porous medium with fine-scale features can be computationally expensive if the flow is fully resolved. Coarsening the problem gives a faster approximation of the flow but loses some detail. We propose an algorithm that obtains the fully resolved approximation but only iterates on a sequence of coarsened problems. The sequence is chosen by optimizing the shapes of the coarse finite element basis functions. As a stand-alone method, the algorithm converges globally and monotonically with a quadratic asymptotic rate. Computational experience indicates that the number of iterations needed is independent of the resolution and heterogeneity of the medium. However, an externally provided error estimate is required; the algorithm could be combined as an accelerator with another iterative algorithm. A single "inner" iteration of the other algorithm would yield an error estimate; following it with an "outer" iteration of our algorithm would give a viable method.
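For intuition on why coarsened Darcy problems can still be accurate, the 1D case is special: harmonic averaging of the permeability is the exact upscaling for flow in series. A short sketch with an illustrative permeability field (this is a textbook upscaling rule, not the dissertation's basis optimization):

```python
import numpy as np

def coarsen_permeability(k_fine, factor):
    """Harmonic-mean upscaling of a 1D permeability field: the exact
    coarse coefficient for 1D Darcy flow through cells in series."""
    k = k_fine.reshape(-1, factor)
    return factor / np.sum(1.0 / k, axis=1)

rng = np.random.default_rng(2)
k_fine = 10.0 ** rng.uniform(-3, 0, 512)   # heterogeneous permeability
k_coarse = coarsen_permeability(k_fine, 8)
print(k_fine.size, "->", k_coarse.size, "cells;",
      f"effective k: {1.0/np.mean(1.0/k_fine):.4f} vs {1.0/np.mean(1.0/k_coarse):.4f}")
```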
APA, Harvard, Vancouver, ISO, and other styles
30

Li, Xiaohai. "Multiscale simulation and optimization of copper electrodeposition /." 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3290296.

Full text
Abstract:
Thesis (Ph. D.)--University of Illinois at Urbana-Champaign, 2007.
Source: Dissertation Abstracts International, Volume: 68-11, Section: B, page: 7496. Advisers: Richard D. Braatz; Richard C. Alkire. Includes bibliographical references. Available on microfilm from ProQuest Information and Learning.
APA, Harvard, Vancouver, ISO, and other styles
31

Raimondeau, Stephanie Marie. "Development, optimization, and reduction of hierarchical multiscale models." 2003. https://scholarworks.umass.edu/dissertations/AAI3078715.

Full text
Abstract:
The objective of this dissertation is to develop various multiscale modeling methods for chemical reactors. This objective stems from the demand for more accurate and predictive reactor models and the need for miniaturization of materials. The wide disparity in scales encountered in practical industrial reactors requires a hierarchical approach, with a different model for each scale along with methods for coupling these models. These multiscale models entail dynamic, bi-directional coupling of quantum information with molecular simulations and continuum deterministic reactor models. The feasibility of such multiscale, hybrid models is demonstrated in a model system, that of the catalytic oxidation of CO and H2 on a Pt single crystal embedded in a continuous stirred tank reactor. Emphasis is placed on surface processes, such as lateral adsorbate-adsorbate interactions, proximity effects encountered in all bimolecular events, and surface diffusion. Significant differences in model responses have been observed between multiscale models and continuum, mean-field-based models when the dominant surface species is immobile (e.g., oxygen). Since multiscale models for realistic systems are currently semi-quantitative, a multistep optimization methodology has been introduced and successfully applied to the catalytic oxidation of CO on Pt, enabling model parameters to be refined. It has been found that the lower-level continuum, mean-field model can be used for preliminary optimization steps, such as identification of the important kinetic parameters and generation of initial estimates. The proper orthogonal decomposition technique has been used to obtain low-dimensional approximations of such multiscale models, with epitaxial growth of materials as an example. Both the dynamics of a stagnation fluid phase and the surface morphology are successfully described by reduced mathematical models. Towards a more practical application, flame propagation in natural gas microburners is explored using a detailed two-dimensional model with emphasis on interfacial gas-phase phenomena. While radial gradients and gas-surface temperature discontinuity are found to be unimportant, it is shown that the critical quenching diameter strongly depends on the initial heat loss and radical wall quenching. Furthermore, catalytic microburners appear to be a more promising choice compared to their homogeneous counterpart.
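The proper orthogonal decomposition step mentioned above is, in practice, an SVD of a snapshot matrix; a minimal sketch with synthetic snapshots (the field and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 200)
x = np.linspace(0, 1, 100)
# Snapshot matrix: each column is the spatial state at one time instant
snapshots = (np.outer(np.sin(np.pi * x), np.sin(2 * np.pi * t))
             + 0.3 * np.outer(np.sin(3 * np.pi * x), np.cos(4 * np.pi * t))
             + 0.01 * rng.standard_normal((100, 200)))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)   # POD modes = columns of U
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1
print(f"{r} POD modes capture {energy[r-1]*100:.2f}% of the snapshot energy")
```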
APA, Harvard, Vancouver, ISO, and other styles
32

Ngnotchouye, Jean Medard Techoukouegno. "Conservation laws models in networks and multiscale flow optimization." Thesis, 2011. http://hdl.handle.net/10413/7922.

Full text
Abstract:
The flow of fluids in a network is of practical importance in gas, oil and water transport for industrial and domestic use. When the flow dynamics are understood, one may be interested in the control of the flow, formulated as follows: given some fluid properties at a final time, can one determine the initial flow properties that lead to the desired flow properties? In this thesis, we first consider the flow of a multiphase gas, described by the drift-flux model, in a network of pipes, and that of water, modeled by the shallow water equations, in a network of rivers. These two models are systems of partial differential equations of first order, generally referred to as systems of conservation laws. In particular, our contribution in this regard can be summed up as follows. For the drift-flux model, we consider the flow in a network of pipes, seen mathematically as an oriented graph. We solve the standard Riemann problem and prove a well-posedness result for the Riemann problem at a junction. This result is obtained using coupling conditions that describe the dynamics at the intersection of the pipes. Moreover, we present numerical results for standard pipe junctions. The numerical results and the analytical results are in agreement. This is an extension to multiphase flows of some known results for single-phase flows. Thereafter, the shallow water equations are considered as a model for the flow of water in a network of canals. We analyze coupling conditions at the confluence of rivers, precisely the conservation of mass and the equality of water height at the intersection, and implement these results for some classical river confluences. We also consider the case of pooled stepped chutes, a geometry frequently utilized by dams to spill floodwater. Here we take an approach different from that of the engineering community, in the sense that we resolve the dynamics by solving a Riemann problem at the dam for the shallow water equations with suitable coupling conditions. Secondly, we consider an optimization problem constrained by the Euler equations with a flow-matching objective function. Differently from existing approaches to this problem, we consider a linear approximation of the flow equation in the form of the microscopic Lattice Boltzmann Equations (LBE). We derive an adjoint calculus and the optimality conditions from the microscopic LBE. Using multiscale analysis, we obtain an equivalent macroscopic result at the hydrodynamic limit. Our numerical results demonstrate the ability of our method to solve challenging problems in fluid mechanics.
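A hedged sketch of the kind of numerics involved on a single arc: a Lax-Friedrichs step for the 1D shallow water equations on a dam-break initial datum. The scheme and data are generic textbook choices, not the thesis' junction solver.

```python
import numpy as np

# Lax-Friedrichs scheme for 1D shallow water (dam-break on a single arc);
# g is gravity, U = [h, hu] the conserved variables.
g, n = 9.81, 200
dx = 1.0 / n
h = np.where(np.arange(n) < n // 2, 2.0, 1.0)   # dam-break initial data
hu = np.zeros(n)

def flux(h, hu):
    return np.array([hu, hu**2 / h + 0.5 * g * h**2])

for _ in range(100):
    c = np.max(np.abs(hu / h) + np.sqrt(g * h))  # max wave speed
    dt = 0.4 * dx / c                            # CFL-limited time step
    U = np.array([h, hu])
    F = flux(h, hu)
    Um, Up = np.roll(U, 1, axis=1), np.roll(U, -1, axis=1)
    Fm, Fp = np.roll(F, 1, axis=1), np.roll(F, -1, axis=1)
    U = 0.5 * (Um + Up) - 0.5 * dt / dx * (Fp - Fm)
    U[:, 0], U[:, -1] = U[:, 1], U[:, -2]        # crude outflow boundaries
    h, hu = U
print(f"h range after 100 steps: [{h.min():.3f}, {h.max():.3f}]")
```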
Thesis (Ph.D.)-University of KwaZulu-Natal, Pietermaritzburg, 2011.
APA, Harvard, Vancouver, ISO, and other styles
33

Sun, Hao. "Development of Hierarchical Optimization-based Models for Multiscale Damage Detection." Thesis, 2014. https://doi.org/10.7916/D8GQ6W2J.

Full text
Abstract:
In recent years, health monitoring of structure and infrastructure systems has become a valuable source of information for evaluating structural integrity, durability and reliability throughout the lifecycle of structures, as well as for ensuring optimal maintenance planning and operation. Important advances in sensor and computer technologies have made it possible to process a large amount of data, to extract the characteristic features of the signals, and to link those features to the current structural condition. In general, the process of data feature extraction amounts to solving an inverse problem, in either a data-driven or a model-based setting. This dissertation explores state-of-the-art hierarchical optimization-based computational algorithms for solving multiscale model-based inverse problems such as system identification and damage detection. The basic idea is to apply optimization tools to quantify an established model or system, characterized by a set of unknown governing parameters, by minimizing the discrepancy between the predicted system response and the measured data. We herein propose hierarchical optimization algorithms, such as improved artificial bee colony algorithms integrated with local search operators, to accomplish this task. In this dissertation, developments in multiscale damage detection are presented in two parts. In the first part, efficient hybrid bee algorithms in both serial and parallel schemes are proposed for time-domain input-output and output-only identification of macro-scale linear/nonlinear systems such as buildings and bridges. Solution updating strategies of the artificial bee colony algorithm are improved for faster convergence; meanwhile, the simplex method and gradient-based optimization techniques are employed as local search operators for accurate solution tuning. In the case of output-only measurements, both the system parameters and the time history of the input excitations can be identified simultaneously using a modified Newmark integration scheme. The synergy between the proposed method and Bayesian inference is exploited to quantify the uncertainties of a system. Numerical and experimental applications are investigated and presented for macro-scale system identification, finite element model updating and damage detection. In the second part, a framework combining the eXtended Finite Element Method (XFEM) and the proposed optimization algorithms is investigated for nondestructive detection of multiple flaws/defects embedded in meso-scale systems such as critical structural components like plates. The measurements are either static strains or displacements. The number of flaws as well as their locations and sizes can be identified. XFEM with circular and/or elliptical void enrichments is employed to solve the forward problem and alleviates the costly re-meshing along with the update of flaw boundaries in the identification process. Numerical investigations are presented to validate the proposed method in application to the detection of multiple flaws and damage regions. Overall, the proposed multiscale methodologies show great potential in assessing the structural integrity of building and bridge systems, critical structural components, etc., leading to a smart structure and infrastructure management system.
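The artificial bee colony backbone referenced above can be sketched compactly. The version below is a generic, minimal ABC (employed/onlooker/scout phases) on a toy objective; it deliberately omits the dissertation's simplex and gradient-based local search operators.

```python
import numpy as np

def abc_minimize(f, bounds, n_food=20, limit=20, iters=200, seed=0):
    """Minimal artificial bee colony: employed/onlooker neighbourhood moves
    plus scout re-initialization of stagnant food sources."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    X = rng.uniform(lo, hi, (n_food, dim))
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)
    for _ in range(iters):
        for phase in ("employed", "onlooker"):
            if phase == "employed":
                idx = np.arange(n_food)
            else:  # onlookers revisit sources proportionally to quality
                p = fit.max() - fit + 1e-12
                idx = rng.choice(n_food, n_food, p=p / p.sum())
            for i in idx:
                k = (i + 1 + rng.integers(n_food - 1)) % n_food  # partner != i
                j = rng.integers(dim)
                cand = X[i].copy()
                cand[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
                cand = np.clip(cand, lo, hi)
                fc = f(cand)
                if fc < fit[i]:
                    X[i], fit[i], trials[i] = cand, fc, 0
                else:
                    trials[i] += 1
        for i in np.where(trials > limit)[0]:    # scouts abandon stale sources
            X[i] = rng.uniform(lo, hi); fit[i] = f(X[i]); trials[i] = 0
    best = np.argmin(fit)
    return X[best], fit[best]

sphere = lambda x: np.sum((x - 1.0) ** 2)
x, fx = abc_minimize(sphere, (np.full(4, -5.0), np.full(4, 5.0)))
print(x, fx)
```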
APA, Harvard, Vancouver, ISO, and other styles
34

Wu, Tong. "TOPOLOGY OPTIMIZATION OF MULTISCALE STRUCTURES COUPLING FLUID, THERMAL AND MECHANICAL ANALYSIS." Thesis, 2019.

Find full text
Abstract:
The objective of this dissertation is to develop new methods in the areas of multiscale topology optimization, thermomechanical topology optimization including heat convection, and thermal-fluid topology optimization. The dissertation focuses on developing five innovative topology optimization algorithms for structures and multi-structures coupling fluid, thermal and mechanical analysis, in order to address common design requirements. Most of the algorithms are implemented as in-house MATLAB code.

In Chapter One, a brief introduction to topology optimization, a short literature review and the objective of the work are presented. The five algorithms are described in Chapters Two through Six: Chapters Two to Four present the multiscale methods, while Chapters Five and Six contribute further research on topology optimization considering heat convection. In Chapter Two, a multiphase topology optimization of thermomechanical structures is presented, in which the optimized structure is composed of several phases of prescribed lattice unit cells. Chapter Three presents a multiscale, thermomechanical topology optimization of self-supporting cellular structures, in which each lattice unit cell has an optimized porosity and a diamond shape that benefits additive manufacturing. In Chapter Four, the multiscale approach is extended to topology optimization involving fluid mechanics, to design optimized micropillar arrays in microfluidic devices; the optimized micropillars minimize the energy loss caused by local fluid drag forces. In Chapter Five, a novel thermomechanical topology optimization is developed to generate optimized multifunctional lattice heat-transfer structures; the algorithm approximates convective heat transfer by a design-dependent heat source and natural convection. In Chapter Six, an improved thermal-fluid topology optimization method is created to flexibly handle changes in thermal-fluid parameters such as the external heat source, Reynolds number, Prandtl number and thermal diffusivity; the results show that changing these parameters leads to versatile optimized topologies. Finally, a summary and recommendations are presented in Chapter Seven.
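
As a rough, hypothetical stand-in for the in-house MATLAB codes mentioned above, the following Python sketch shows the density-based (SIMP) optimality-criteria machinery that algorithms of this kind commonly build on, applied to a one-dimensional bar of springs in series; the model and every parameter value are assumptions made for the example.

    import numpy as np

    # Minimal SIMP + optimality-criteria loop on a 1D bar of springs in series.
    n, p, volfrac, k0, F = 40, 3.0, 0.5, 1.0, 1.0
    rho = np.full(n, volfrac)                     # initial uniform densities
    lengths = np.linspace(0.5, 1.5, n)            # element "volumes" (assumed)

    for it in range(50):
        dc = -p * F ** 2 / (k0 * rho ** (p + 1))  # compliance sensitivity (< 0)
        # Optimality-criteria update with bisection on the Lagrange multiplier.
        l1, l2 = 1e-9, 1e9
        while (l2 - l1) / (l1 + l2) > 1e-4:
            lmid = 0.5 * (l1 + l2)
            rho_new = np.clip(rho * np.sqrt(-dc / (lmid * lengths)), 1e-3, 1.0)
            if np.sum(rho_new * lengths) > volfrac * np.sum(lengths):
                l1 = lmid                         # volume too high: raise lambda
            else:
                l2 = lmid
        rho = rho_new

    compliance = F ** 2 * np.sum(1.0 / (k0 * rho ** p))   # springs in series
    print("final compliance:", compliance,
          "volume fraction:", np.sum(rho * lengths) / np.sum(lengths))
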

APA, Harvard, Vancouver, ISO, and other styles
35

De Maio, Raul. "Multiscale methods for traffic flow on networks." Doctoral thesis, 2019. http://hdl.handle.net/11573/1239374.

Full text
Abstract:
In this thesis we propose a model describing traffic flow on networks via the theory of measure-based equations. We first apply our approach to the initial/boundary-value problem for the measure-valued linear transport equation on a bounded interval, the prototype of an arc of the network. This simple case is the first step in building the solution of the corresponding linear problem on networks: we construct the global solution by gluing the measure-valued solutions on the arcs by means of appropriate distribution rules at the vertices. The linear case is then used to show the well-posedness of the transport equation on networks for nonlocal velocity fields, i.e. fields that depend not only on the state variable but also on the solution itself. We also study a representation formula in terms of the push-forward of the initial and boundary data along the admissible trajectories of the network, weighted by a properly defined measure on the space of curves. Moreover, we discuss an example of a nonlocal velocity field fitting our framework and illustrate the model's features with numerical simulations. In the last part, we focus on a class of optimal control problems for measure-valued nonlinear transport equations describing traffic flow problems on networks. The objective is to optimize macroscopic quantities, such as traffic volume, average speed, pollution or average time spent in a fixed area, by controlling only a few agents, for example smart traffic lights or automated cars. The measure-based approach allows us to study local and nonlocal driver interactions in the same setting, and to treat the control variables as additional measures interacting with the distribution of drivers. To complete our analysis, we propose a gradient-descent adjoint-based optimization method and present numerical experiments in the case of smart traffic lights for a 2-1 junction.
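
A minimal particle sketch, assuming a toy nonlocal velocity and a single arc, illustrates the push-forward idea underlying the measure-based approach: an empirical measure of equal-mass particles is transported along trajectories whose speed depends on the mass concentrated ahead. Nothing here reproduces the thesis's actual scheme; the window size, speeds and junction handling are placeholders.

    import numpy as np

    # Empirical-measure sketch of nonlocal transport on one arc [0, 1]: each
    # particle carries equal mass and moves with a speed that decreases with
    # the mass found in a small window ahead of it (a toy nonlocal velocity).
    n, dt, steps, window = 200, 0.002, 300, 0.05
    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(0.0, 0.3, n))         # initial measure on the arc
    mass = np.full(n, 1.0 / n)

    def velocity(x):
        # v(x, mu) = vmax * max(1 - mu([x, x + window]) / rho_max, 0)
        ahead = (x[None, :] >= x[:, None]) & (x[None, :] <= x[:, None] + window)
        return np.maximum(1.0 - (ahead @ mass) / 0.2, 0.0)

    for _ in range(steps):
        x = np.minimum(x + dt * velocity(x), 1.0)  # push forward along the arc

    # Mass reaching the endpoint would be routed to outgoing arcs according to
    # the junction distribution rules described in the abstract.
    print("mass at arc endpoint:", mass[x >= 1.0].sum())
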
APA, Harvard, Vancouver, ISO, and other styles
36

"Measurement, optimization and multiscale modeling of silicon wafer bonding interface fracture resistance." Université catholique de Louvain, 2006. http://edoc.bib.ucl.ac.be:81/ETD-db/collection/available/BelnUcetd-10272006-203227/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Hu, Nan. "Advances in Multiscale Methods with Applications in Optimization, Uncertainty Quantification and Biomechanics." Thesis, 2016. https://doi.org/10.7916/D8FX79N8.

Full text
Abstract:
Advances in multiscale methods are presented from two perspectives which address the issue of the computational complexity of optimizing and inverse-analyzing nonlinear composite materials and structures at multiple scales. The optimization algorithm provides several solutions to meet the enormous computational challenge of optimizing nonlinear structures at multiple scales, including: (i) an enhanced sampling procedure that provides superior performance of the well-known ant colony optimization algorithm, (ii) a mapping-based meshing of a representative volume element that, unlike unstructured meshing, permits sensitivity analysis on coarse meshes, and (iii) a multilevel optimization procedure that takes advantage of the possible weak coupling of certain scales. We demonstrate the proposed optimization procedure on elastic and inelastic laminated plates involving three scales. We also present an adaptive variant of the measure-theoretic approach (MTA) for stochastic characterization of micromechanical properties based on observations of quantities of interest at the coarse (macro) scale. The salient features of the proposed nonintrusive stochastic inverse solver are: identification of a nearly optimal sampling domain using the enhanced ant colony optimization algorithm for multiscale problems, an incremental Latin-hypercube sampling method, adaptive discretization of the parameter and observation spaces, and adaptive selection of the number of samples. A complete set of test data for the TORAY T700GC-12K-31E and epoxy #2510 material system from the NIAR report is employed to characterize and validate the proposed adaptive nonintrusive stochastic inverse algorithm for various unnotched and open-hole laminates. Advances in multiscale methods also provide a unique tool to study and analyze human bones, which can themselves be seen as composite materials. We use two multiscale approaches for fracture analysis of a full-scale femur: reduced order homogenization (ROH) and a novel accelerated reduced order homogenization (AROH). AROH is based on utilizing ROH, calibrated to limited data, as a training tool to calibrate a simpler, single-scale anisotropic damage model. For bone tissue orientation, we take advantage of the so-called Wolff's law. The meso-phase properties are identified from a least-squares minimization of the error between the overall cortical and trabecular bone properties and those predicted by the homogenization. The overall elastic and inelastic properties of the cortical and trabecular bone microstructure are derived from bone density, which can be estimated from Hounsfield units (HU). For model validation, we conduct ROH and AROH simulations of a full-scale finite element model of a femur created from QCT data and compare the simulation results with available experimental data.
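
As one small illustration of the sampling machinery mentioned in the abstract, the sketch below implements a plain Latin-hypercube sample over a hypothetical two-parameter micromechanical box and a crude "keep the best subdomain" refinement step; the rule-of-mixtures response, parameter ranges and observed value are all invented for the example and do not come from the NIAR data.

    import numpy as np

    def latin_hypercube(n_samples, bounds, rng):
        # One point per stratum in each dimension, strata paired at random.
        d = len(bounds)
        ranks = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
        u = (ranks + rng.uniform(size=(n_samples, d))) / n_samples
        lo, hi = np.array(bounds).T
        return lo + u * (hi - lo)

    rng = np.random.default_rng(7)
    # Hypothetical micromechanical parameters: fiber and matrix moduli (MPa).
    bounds = [(200e3, 260e3), (2.5e3, 4.0e3)]
    samples = latin_hypercube(50, bounds, rng)

    # Adaptive step in the spirit of the abstract: keep the sub-box whose
    # samples best reproduce an observed coarse-scale quantity, then resample.
    observed = 58e3                                # invented observation
    predicted = 0.25 * samples[:, 0] + 0.75 * samples[:, 1]  # toy mixing rule
    best = samples[np.argsort(np.abs(predicted - observed))[:10]]
    print("refined parameter box:", best.min(axis=0), best.max(axis=0))
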
APA, Harvard, Vancouver, ISO, and other styles
38

Park, Han-Young. "A Hierarchical Multiscale Approach to History Matching and Optimization for Reservoir Management in Mature Fields." Thesis, 2012. http://hdl.handle.net/1969.1/ETD-TAMU-2012-08-11779.

Full text
Abstract:
Reservoir management typically focuses on maximizing oil and gas recovery from a reservoir, based on facts and information, while minimizing capital and operating investments. Modern reservoir management uses history-matched simulation models to predict the range of recovery or to provide an economic assessment of different field development strategies. Geological models are becoming increasingly complex and more detailed, with several hundred thousand to a million cells, and they include large sets of subsurface uncertainties. Current issues associated with history matching therefore involve extensive computation (flow simulation) time, preserving geologic realism, and non-uniqueness. Many recent rate optimization methods utilize constrained optimization techniques, often making them inaccessible for field reservoir management. Field-scale rate optimization problems involve highly complex reservoir models, production and facilities constraints, and a large number of unknowns. We present a hierarchical multiscale calibration approach using global and local updates on coarse and fine grids. In the global update we calibrate large-scale parameters to match the global field-level energy (pressure); this is followed by a local update in which we match well-by-well performance by calibrating local cell properties. The inclusion of multiscale calibration, integrating production data on a coarse grid and then on successively finer grids, is critical for history matching high-resolution geologic models, through a significant reduction in simulation time. For rate optimization, we develop a hierarchical analytical method using streamline-assisted flood efficiency maps. The proposed approach avoids the use of complex optimization tools; instead, we emphasize the visual and intuitive appeal of the streamline method and utilize analytic solutions derived from the relationship between streamline time of flight and flow rates. The approach is analytic, easy to implement and well-suited for large-scale field applications. Finally, we present a hierarchical Pareto-based approach to history matching under conflicting information. Here we focus on the multiobjective optimization problem, particularly conflicting multiple objectives during history matching of reservoir performance. We incorporate a Pareto-based multiobjective evolutionary algorithm and a Grid Connectivity-based Transformation (GCT) to account for history matching with conflicting information. The power and effectiveness of our approaches are demonstrated using both synthetic and real field cases.
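
The global-then-local structure of the calibration can be caricatured in a few lines of Python: first a single field-wide multiplier is fit to the field-level response, then each cell is adjusted to its own well data while staying close to the global fit. The toy "reservoir", in which well response is simply proportional to local permeability, is an assumption for illustration only.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Two-stage (global-then-local) calibration on a toy 1D "reservoir" whose
    # observed well responses are simply proportional to local permeability.
    rng = np.random.default_rng(3)
    k_true = np.exp(rng.normal(0.0, 0.5, 8))       # per-cell permeability
    obs_wells = k_true.copy()                      # toy well-by-well data
    obs_field = obs_wells.sum()                    # field-level "energy"
    k_model = np.ones(8)                           # prior model

    # Global update: one field-wide multiplier matched to the field response.
    g = minimize_scalar(lambda m: (m * k_model.sum() - obs_field) ** 2,
                        bounds=(0.1, 10.0), method="bounded").x
    k_model *= g

    # Local update: tune each cell to its well, staying near the global fit.
    for i in range(k_model.size):
        k_model[i] = minimize_scalar(
            lambda ki: (ki - obs_wells[i]) ** 2 + 0.1 * (ki - k_model[i]) ** 2,
            bounds=(0.01, 20.0), method="bounded").x

    print("max well-by-well misfit:", np.abs(k_model - obs_wells).max())
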
APA, Harvard, Vancouver, ISO, and other styles
39

Varshney, Amit. "Optimization and control of multiscale process systems using model reduction: application to thin-film growth." 2007. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-1967/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Foderaro, Greg. "A Distributed Optimal Control Approach for Multi-agent Trajectory Optimization." Diss., 2013. http://hdl.handle.net/10161/8226.

Full text
Abstract:

This dissertation presents a novel distributed optimal control (DOC) problem formulation that is applicable to multiscale dynamical systems composed of numerous interacting systems, or agents, that together give rise to coherent macroscopic behaviors, or coarse dynamics, that can be modeled by partial differential equations (PDEs) on larger spatial and time scales. The DOC methodology seeks to obtain optimal agent state and control trajectories by representing the system's performance as an integral cost function of the macroscopic state, which is optimized subject to the agents' dynamics. The macroscopic state is identified as a time-varying probability density function to which the states of the individual agents can be mapped via a restriction operator. Optimality conditions for the DOC problem are derived analytically, and the optimal trajectories of the macroscopic state and control are computed using direct and indirect optimization algorithms. Feedback microscopic control laws are then derived from the optimal macroscopic description using a potential function approach.

The DOC approach is demonstrated numerically through benchmark multi-agent trajectory optimization problems, in which large systems of agents are given the objectives of traveling to goal state distributions, avoiding obstacles, maintaining formations, and minimizing energy consumption through control. Comparisons are provided between the direct and indirect optimization techniques, as well as with existing methods from the literature, and a computational complexity analysis is presented. The methodology is also applied to a track coverage optimization problem for the control of distributed networks of mobile omnidirectional sensors, where the sensors move to maximize the probability of track detection for a known distribution of mobile targets traversing a region of interest (ROI). Through extensive simulations, DOC is shown to outperform several existing sensor deployment and control strategies. Furthermore, the computation required by the DOC algorithm is shown to be far less than that of classical direct optimal control algorithms.
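
A minimal sketch of the restriction-operator idea, under purely illustrative assumptions: agent states are mapped to a macroscopic density by a histogram, an integral cost measures the distance to a goal density, and a crude potential-field control steers the agents. The one-dimensional setting and the quadratic potential are placeholders, not the DOC formulation itself.

    import numpy as np

    # Restriction-operator sketch: map agent states to a macroscopic density
    # on a 1D grid, score it against a goal density with an integral cost,
    # and steer the agents with a crude potential-field control.
    rng = np.random.default_rng(5)
    agents = rng.normal(-2.0, 0.4, 500)            # microscopic agent states
    grid = np.linspace(-4.0, 4.0, 81)              # macroscopic cell edges
    dx = grid[1] - grid[0]
    centers = grid[:-1] + 0.5 * dx

    def restrict(agents):
        # Restriction operator: empirical probability density on the cells.
        hist, _ = np.histogram(agents, bins=grid)
        return hist / (agents.size * dx)

    goal = np.exp(-0.5 * (centers - 2.0) ** 2)     # target distribution
    goal /= goal.sum() * dx

    def cost(agents):
        # Integral cost of the macroscopic state: L2 distance to the goal.
        return np.sum((restrict(agents) - goal) ** 2) * dx

    for _ in range(200):
        agents += 0.05 * (2.0 - agents)            # u = -grad V, V = (x-2)^2/2

    print("final macroscopic cost:", cost(agents))
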


APA, Harvard, Vancouver, ISO, and other styles
41

Bauman, Paul Thomas 1980. "Adaptive multiscale modeling of polymeric materials using goal-oriented error estimation, Arlequin coupling, and goals algorithms." Thesis, 2008. http://hdl.handle.net/2152/3824.

Full text
Abstract:
Scientific theories that explain how physical systems behave are described by mathematical models which provide the basis for computer simulations of events that occur in the physical universe. These models, being only mathematical characterizations of actual phenomena, are obviously subject to error because of the inherent limitations of all mathematical abstractions. In this work, new theory and methodologies are developed to quantify such modeling error in a special way that resolves a fundamental and standing issue: multiscale modeling, the development of models of events that transcend many spatial and temporal scales. Specifically, we devise the machinery for a posteriori estimates of relative modeling error between a model of fine scale and another of coarser scale, and we use this methodology as a general approach to multiscale problems. The target application is one of critical importance to nanomanufacturing: imprint lithography of semiconductor devices. The development of numerical methods for multiscale modeling has become one of the most important areas of computational science. Technological developments in the manufacturing of semiconductors hinge upon the ability to understand physical phenomena from the nanoscale to the microscale and beyond. Predictive simulation tools are critical to the advancement of nanomanufacturing semiconductor devices. In principle, they can displace expensive experiments and testing and optimize the design of the manufacturing process. The development of such tools rests on the edge of contemporary methods and high-performance computing capabilities and is a major open problem in computational science. In this dissertation, a molecular model is used to simulate the deformation of polymeric materials used in the fabrication of semiconductor devices. Algorithms are described which lead to a complex molecular model of polymer materials designed to produce an etch barrier, a critical component in imprint lithography approaches to semiconductor manufacturing. Each application of this so-called polymerization process leads to one realization of a lattice-type model of the polymer, a molecular statics model of enormous size and complexity. This is referred to as the base model for analyzing the deformation of the etch barrier, a critical feature of the manufacturing process. To reduce the size and complexity of this model, a sequence of coarser surrogate models is generated. These surrogates are the multiscale models critical to the successful computer simulation of the entire manufacturing process. The surrogate involves a combination of particle models, the molecular model of the polymer, and a coarse-scale model of the polymer as a nonlinear hyperelastic material. Coefficients for the nonlinear elastic continuum model are determined using numerical experiments on representative volume elements of the polymer model. Furthermore, a simple model of initial strain is incorporated in the continuum equations to model the inherent shrinking of the polymer. A coupled particle and continuum model is then constructed using a special algorithm designed to provide constraints on a region of overlap between the continuum and particle models. This coupled model is based on the so-called Arlequin method, originally introduced in the context of coupling two continuum models with differing levels of discretization.
It is shown that the Arlequin problem for the particle-to-continuum model is well posed in a one-dimensional setting involving linear harmonic springs coupled with a linearly elastic continuum. Several numerical examples are presented. Numerical experiments in three dimensions are also discussed, in which the polymer model is coupled to a nonlinear elastic continuum. Error estimates in local quantities of interest are constructed in order to estimate the modeling error due to the approximation of the particle model by the coupled multiscale surrogate model. The estimates of the error are computed by solving an auxiliary adjoint, or dual, problem that incorporates as data the quantity of interest or its derivatives. The solution of the adjoint problem indicates how the error in the approximation of the polymer model influences the error in the quantity of interest. The error in the quantity of interest represents the relative error between the value of the quantity evaluated for the base model, a quantity typically unavailable or intractable, and the value provided by the multiscale surrogate model. To estimate the error in the quantity of interest, a theorem is employed that establishes that the error coincides with the value of the residual functional acting on the adjoint solution, plus a higher-order remainder. For each surrogate in the sequence of surrogates generated, the residual functional acting on various approximations of the adjoint is computed. These error estimates are used to construct an adaptive algorithm whereby the model is adapted by supplying additional fine-scale data in certain subdomains in order to reduce the error in the quantity of interest. The adaptation algorithm involves partitioning the domain and selecting which subdomains are to use the particle model, which the continuum model, and where the two overlap. When the algorithm identifies that a region contributes a relatively large amount to the error in the quantity of interest, that region is scheduled for refinement by switching its model to the particle model. Numerical experiments on several configurations representative of nano-features in semiconductor device fabrication demonstrate the effectiveness of the error estimate in controlling the modeling error, as well as the ability of the adaptive algorithm to reduce the error in the quantity of interest. There are two major conclusions of this study: (1) an effective and well-posed multiscale model that couples particle and continuum models can be constructed as a surrogate to molecular statics models of polymer networks, and (2) the modeling error for such systems can be estimated with sufficient accuracy to provide the basis for very effective multiscale modeling procedures. The methodology developed in this study provides a general approach to multiscale modeling. The computational procedures, computer codes, and results could provide a powerful tool for understanding, designing, and optimizing an important class of semiconductor manufacturing processes. The study in this dissertation involves all three components of the CAM graduate program requirements: Area A, Applicable Mathematics; Area B, Numerical Analysis and Scientific Computation; and Area C, Mathematical Modeling and Applications.
The multiscale modeling approach developed here is based on the construction of continuum surrogates and their coupling to molecular statics models of polymers, as well as on a posteriori estimates of error and their adaptive control. A detailed mathematical analysis is provided for the Arlequin method in the context of coupling particle and continuum models for a class of one-dimensional model problems. Algorithms are described and implemented that solve the adaptive, nonlinear problem posed by the multiscale surrogate model. Large-scale, parallel computations for the base model are also shown. Finally, detailed studies of models relevant to applications in semiconductor manufacturing are presented.
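
As a pointer to how the adjoint-based estimate described above works in the simplest possible setting, the sketch below (an editorial illustration in Python, not code from the dissertation) computes the error in a quantity of interest for a linear base model A u = f approximated by a crude surrogate: the residual of the surrogate acting on the adjoint solution recovers the error exactly, the higher-order remainder vanishing in the linear case.

    import numpy as np

    # Adjoint (dual) error estimate for a linear base model A u = f with
    # quantity of interest q(u) = c @ u, approximated by a crude surrogate.
    rng = np.random.default_rng(2)
    n = 50
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # stiffness-like
    f = rng.uniform(size=n)
    c = np.zeros(n)
    c[n // 2] = 1.0                                # QoI: midpoint value

    u = np.linalg.solve(A, f)                      # "intractable" base solution
    u_s = f / np.diag(A)                           # crude diagonal surrogate

    lam = np.linalg.solve(A.T, c)                  # adjoint problem A^T lam = c
    estimate = lam @ (f - A @ u_s)                 # residual acting on adjoint

    print("true QoI error:  ", c @ (u - u_s))
    print("adjoint estimate:", estimate)           # exact: remainder vanishes
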
APA, Harvard, Vancouver, ISO, and other styles
42

Liu, Kai. "Concurrent topology optimization of structures and materials." Thesis, 2013. http://hdl.handle.net/1805/3755.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Topology optimization allows designers to obtain lightweight structures considering the binary distribution of a solid material. The introduction of cellular material models in topology optimization allows designers to achieve significant weight reductions in structural applications. However, the traditional topology optimization method is challenged by the use of cellular materials. Furthermore, increased material savings and performance can be achieved if the material and the structure topologies are designed concurrently. Hence, multi-scale topology optimization methodologies are introduced to fulfill this goal. The objective of this investigation is to discuss and compare design methodologies for obtaining optimal macro-scale structures and the corresponding optimal meso-scale material designs in continuum design domains. These approaches make use of homogenization theory to establish communication bridges between the material and structural scales. A periodicity constraint makes such cellular materials manufacturable, while relaxing that constraint achieves major improvements in structural performance. Penalization methods are used to obtain binary solutions at both scales. The proposed methodologies are demonstrated in the design of stiff structures and the synthesis of compliant mechanisms. The multiscale results are compared with traditional structural-level designs in the context of Pareto solutions, demonstrating the benefits of ultra-lightweight configurations. Errors involved in the multi-scale topology optimization procedure are also discussed; they are mainly classified as mesh refinement errors and homogenization errors. Comparisons between the multi-level designs and uni-level designs of solid structures, structures using periodic cellular materials, and structures using non-periodic cellular materials are provided. The error quantification also indicates the superiority of non-periodic over periodic cellular materials.
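
To indicate, very loosely, how a meso-scale density field can be bridged to a macro-scale property, the sketch below evaluates Voigt and Reuss bounds on the effective modulus of a unit cell; this is a crude stand-in for the homogenization theory the abstract relies on, and the density field and penalization exponent are assumptions.

    import numpy as np

    # Bound the effective modulus of a meso-scale unit cell from its density
    # field with Voigt/Reuss mixing rules -- a crude stand-in for the
    # homogenization theory that couples the two scales in the abstract.
    rng = np.random.default_rng(6)
    rho = rng.uniform(0.2, 1.0, size=(16, 16))     # assumed density field
    E_solid, p = 1.0, 3.0
    E_cells = E_solid * rho ** p                   # SIMP-style local stiffness

    E_voigt = E_cells.mean()                       # upper bound (uniform strain)
    E_reuss = 1.0 / np.mean(1.0 / E_cells)         # lower bound (uniform stress)
    print("effective modulus bounds:", E_reuss, "<= E_eff <=", E_voigt)
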
APA, Harvard, Vancouver, ISO, and other styles
43

Reis, Marco Paulo Seabra. "Monitorização, modelação e melhoria de processos químicos: abordagem multiescala baseada em dados" [Monitoring, modeling and improvement of chemical processes: a data-driven multiscale approach]. Doctoral thesis, 2006. http://hdl.handle.net/10316/7375.

Full text
Abstract:
Doctoral thesis in Chemical Engineering (Chemical Processes) presented to the Faculdade de Ciências e Tecnologia of the University of Coimbra.
Processes in modern chemical plants are typically very complex, and this complexity is also present in the collected data, which contain the cumulative effect of many underlying phenomena and disturbances and present different patterns in the time/frequency domain. Such characteristics motivate the development and application of data-driven multiscale approaches to process analysis, with the ability to selectively analyze the information contained at different scales. Even in these cases, however, there are a number of additional complicating features that can prevent the analysis from being completely successful. Missing and multirate data structures are two representative examples of the difficulties that can be encountered, to which we can add multiresolution data structures, among others. On the other hand, some additional requirements should be considered when performing such an analysis, in particular the incorporation of all available knowledge about the data, namely data uncertainty information. In this context, this thesis addresses the problem of developing frameworks that are able to perform the required multiscale decomposition analysis while coping with the complex features present in industrial data and, simultaneously, considering measurement uncertainty information. These frameworks are shown to be useful for conducting data analysis in these circumstances, conveniently representing data and the associated uncertainties at the different relevant resolution levels, and they are also instrumental in selecting the proper scales for the analysis. In line with these efforts, and to further explore the information processed by such frameworks, the integration of uncertainty information into common single-scale data analysis tasks is also addressed; we propose developments in the fields of multivariate linear regression, multivariate statistical process control and process optimization. The second part of the thesis is oriented towards the development of intrinsically multiscale approaches: two such methodologies are presented in the field of process monitoring, the first aiming to detect changes in the multiscale characteristics of profiles, the second focused on analyzing patterns evolving in the time domain.
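
A brief sketch of the kind of multiscale decomposition the thesis builds on, assuming the PyWavelets package and an invented process signal: the signal is split into approximation and detail coefficients across scales, and the detail coefficients are thresholded against the assumed measurement uncertainty before reconstruction. This stand-in does not carry uncertainty through the decomposition the way the thesis's frameworks do.

    import numpy as np
    import pywt  # PyWavelets

    # Decompose a noisy process signal into approximation and detail
    # coefficients across scales, then soft-threshold the details against
    # the (assumed known) measurement uncertainty before reconstructing --
    # a simple stand-in for uncertainty-aware selection of relevant scales.
    rng = np.random.default_rng(4)
    t = np.linspace(0.0, 1.0, 1024)
    sigma = 0.2                                    # assumed data uncertainty
    signal = np.sin(2 * np.pi * 3 * t) + 0.3 * (t > 0.5)   # trend + step
    noisy = signal + sigma * rng.standard_normal(t.size)

    coeffs = pywt.wavedec(noisy, "db4", level=5)
    thr = sigma * np.sqrt(2.0 * np.log(t.size))    # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(d, thr, mode="soft")
                            for d in coeffs[1:]]
    denoised = pywt.waverec(coeffs, "db4")[: t.size]

    print("rms error, raw vs multiscale-denoised:",
          np.sqrt(np.mean((noisy - signal) ** 2)),
          np.sqrt(np.mean((denoised - signal) ** 2)))
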
APA, Harvard, Vancouver, ISO, and other styles