Theses / dissertations on the topic "Multiscale optimization"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
See the 43 best theses / dissertations for research on the topic "Multiscale optimization".
You can also download the full text of the scientific publication in .pdf format and read the abstract online, when it is present in the metadata.
Browse theses / dissertations from many different scientific fields and compile an accurate bibliography.
Lalanne, Jean-Benoît. "Multiscale dissection of bacterial proteome optimization". Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/130217.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 315-348).
The quantitative composition of proteomes results from biophysical and biochemical selective pressures acting under system-level resource allocation constraints. The nature and strength of these evolutionary driving forces remain obscure. Through the development of analytical tools and precision measurement platforms spanning biological scales, we found evidence of optimization in bacterial gene expression programs. We compared protein synthesis rates across distant lineages and found tight conservation of in-pathway enzyme expression stoichiometry, suggesting generic selective pressures on expression setpoints. Beyond conservation, we used high-resolution transcriptomics to identify numerous examples of stoichiometry-preserving compensation of cis-elements in pathway operons. Genome-wide mapping of transcription termination sites also led to the discovery of a phylogenetically widespread mode of bacterial gene expression, 'runaway transcription', whereby RNA polymerases are functionally uncoupled from pioneering ribosomes on mRNAs. To delineate the biophysical rationales underlying these pressures, we formulated a parsimonious ribosome allocation model capturing the trade-off between reaction flux and protein production cost. The model correctly predicts the expression hierarchy of key translation factors. We then directly measured the quantitative relationship between expression and fitness for specific translation factors in the Gram-positive species Bacillus subtilis. These precision measurements confirmed that endogenous expression maximizes growth rate. However, idiosyncratic transcriptional changes in regulons were observed away from endogenous expression, and the resulting physiological burdens sharpened the fitness landscapes. Spurious system-level responses to targeted expression perturbations, called 'regulatory entrenchment', thus exacerbate the requirement for precisely set expression stoichiometry.
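The flux-versus-cost trade-off that such an allocation model captures can be illustrated with a toy calculation. The functional forms and numbers below are hypothetical illustrations, not the thesis's actual model: a saturating flux term is discounted by a linear production cost, and the growth-maximizing expression setpoint is found by a grid scan.

```python
import numpy as np

# Hypothetical parsimonious trade-off: expressing more of a translation
# factor (proteome fraction phi) increases reaction flux with diminishing
# returns, but every unit of expression carries a production cost.
phi = np.linspace(0.001, 0.5, 1000)   # candidate expression setpoints
flux = phi / (phi + 0.05)             # saturating flux, half-saturation 0.05
growth = flux * (1.0 - phi)           # flux discounted by production cost

phi_opt = float(phi[np.argmax(growth)])  # growth-maximizing setpoint
```

An intermediate setpoint maximizes growth: too little expression starves the flux, too much wastes ribosome capacity on production cost.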
by Jean-Benoît Lalanne.
Ph.D., Massachusetts Institute of Technology, Department of Physics
Fitriani. "Multiscale Dynamic Time and Space Warping". Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45279.
Includes bibliographical references (p. 149-151).
Dynamic Time and Space Warping (DTSW) is a technique used in video matching applications to find the optimal alignment between two videos. Because DTSW requires O(N^4) time and space, it is only suitable for short, coarse-resolution videos. In this thesis, we introduce Multiscale DTSW: a modification of DTSW that has linear time and space complexity (O(N)) with good accuracy. The first step in Multiscale DTSW is to apply the DTSW algorithm to coarse-resolution input videos. In the next step, Multiscale DTSW projects the solution from coarse resolution to finer resolution. A solution for the finer resolution can be found efficiently by refining the projected solution. Multiscale DTSW then repeatedly projects a solution from the current resolution to a finer resolution and refines it until the desired resolution is reached. I have explored the linear time and space complexity (O(N)) of Multiscale DTSW both theoretically and empirically, and have shown that Multiscale DTSW achieves almost the same accuracy as DTSW. Because of its low computational cost, Multiscale DTSW is suitable for video detection and video classification applications. We have developed a Multiscale-DTSW-based video classification framework that achieves the same accuracy as a DTSW-based framework with more than a 50 percent reduction in execution time. We have also developed a video detection application, based on Dynamic Space Warping (DSW) and Multiscale DTSW, that can detect a query video inside a target video in a short time.
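The coarse-to-fine scheme the abstract describes (solve at coarse resolution, project the solution, refine) can be sketched for ordinary 1-D dynamic time warping. This is an illustrative toy, not the thesis's DTSW implementation (which warps video in time and space); the pairwise-averaging coarsener and the band radius are assumptions:

```python
import numpy as np

def dtw(x, y):
    """Classic dynamic time warping: O(N^2) time and space for 1-D series."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(x[i - 1] - y[j - 1]) + min(
                D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

def banded_dtw(x, y, band):
    """DTW restricted to a set of admissible alignment cells (i, j)."""
    INF = float("inf")
    D = {}
    get = lambda i, j: 0.0 if (i, j) == (-1, -1) else D.get((i, j), INF)
    for i, j in sorted(band):                      # predecessors come first
        D[(i, j)] = abs(x[i] - y[j]) + min(
            get(i - 1, j - 1), get(i - 1, j), get(i, j - 1))
    path, (i, j) = [], (len(x) - 1, len(y) - 1)    # backtrack the path
    while (i, j) != (-1, -1):
        path.append((i, j))
        _, (i, j) = min((get(a, b), (a, b)) for a, b in
                        [(i - 1, j - 1), (i - 1, j), (i, j - 1)])
    return D[(len(x) - 1, len(y) - 1)], path[::-1]

def multiscale_dtw(x, y, min_len=8, radius=2):
    """Coarse-to-fine DTW: solve at half resolution, project the coarse
    path up, and refine inside a narrow band around the projection."""
    n, m = len(x), len(y)
    if n <= min_len or m <= min_len:               # coarsest level: full DTW
        return banded_dtw(x, y, {(i, j) for i in range(n) for j in range(m)})
    xc = x[: n // 2 * 2].reshape(-1, 2).mean(axis=1)   # halve the resolution
    yc = y[: m // 2 * 2].reshape(-1, 2).mean(axis=1)
    _, coarse_path = multiscale_dtw(xc, yc, min_len, radius)
    band = {(2 * ci + di, 2 * cj + dj)             # project path and dilate
            for ci, cj in coarse_path
            for di in range(-radius, radius + 2)
            for dj in range(-radius, radius + 2)}
    band = {(i, j) for i, j in band if 0 <= i < n and 0 <= j < m}
    return banded_dtw(x, y, band)

t = np.linspace(0.0, 6.28, 64)
cost_same, path = multiscale_dtw(np.sin(t), np.sin(t))
cost_exact = dtw(np.sin(t), np.sin(t + 0.3))
cost_band, _ = multiscale_dtw(np.sin(t), np.sin(t + 0.3))
```

On identical inputs the refined alignment has zero cost; in general the banded refinement touches only O(N) cells per level instead of the full N x N grid, which is the source of the linear complexity claimed for Multiscale DTSW.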
by Fitriani.
S.M.
Yourdkhani, Mostafa. "Multiscale modeling and optimization of seashell structure and material". Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66991.
The vast majority of mollusks grow a hard shell for protection. A typical shell consists of two distinct layers: an outer layer of calcite (a hard but brittle material) and an inner layer of nacre, a tougher and more ductile material. Nacre is a biocomposite made of more than 95% aragonite in tablet form, bound by a soft organic matrix. Although the aragonite ceramic makes up most of nacre, its mechanical properties are surprisingly higher than those of its constituents. Calcite and nacre, two materials with different properties and structures, are presumably optimally tuned to defeat attacks from predators. This study seeks to determine the construction rules of a gastropod shell using multiscale modeling and optimization techniques. At the microscale, a representative volume of the nacre microstructure was used to formulate an analytical solution for its elastic modulus and a multiaxial fracture criterion as functions of the microstructural dimensions. At the macroscale, a two-layer finite element model of the shell was used to represent the curvature and the calcite/nacre ratio as functions of geometric parameters. The maximum load the shell can withstand at its apex was determined. A multiscale optimization approach was also employed to evaluate the optimal reconstruction of the natural shell. Finally, several tests were performed on a red abalone shell to validate the results.
Umoh, Utibe Godwin. "Multiscale analysis of cohesive fluidization". Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28988.
Sorrentino, Luigi. "Simulation and optimization of crowd dynamics using a multiscale model". Doctoral thesis, Universita degli studi di Salerno, 2012. http://hdl.handle.net/10556/318.
In the last decades, the modeling of crowd motion and pedestrian flow has attracted the attention of applied mathematicians, because of an increasing number of applications in engineering and the social sciences dealing with this or similar complex systems, for design and optimization purposes. Crowds have caused many disasters in stadiums during major sporting events, such as the "Hillsborough disaster" of 15 April 1989 at Hillsborough, a football stadium in Sheffield, England, which resulted in the deaths of 96 people and 766 injured, and remains the deadliest stadium-related disaster in British history and one of the worst international football accidents ever. Another example is the "Heysel Stadium disaster" of 29 May 1985, when escaping fans were pressed against a wall in the Heysel Stadium in Brussels, Belgium, as a result of rioting before the start of the 1985 European Cup Final between Liverpool of England and Juventus of Italy; thirty-nine Juventus fans died and 600 were injured. The case of the London Millennium Footbridge is well known: it was closed the very day of its opening due to macroscopic lateral oscillations of the structure that developed while pedestrians crossed the bridge. This phenomenon renewed interest in investigating these issues by means of mathematical modeling techniques. Other examples are emergency situations in crowded areas such as airports or railway stations. In some cases, as in the pedestrian disaster at the Jamarat Bridge in Saudi Arabia, mathematical modeling and numerical simulation have already been successfully employed to study the dynamics of the flow of pilgrims, so as to highlight critical circumstances under which crowd accidents tend to occur and to suggest counter-measures to improve the safety of the event. In the existing literature on mathematical modeling of human crowds we can distinguish two approaches: microscopic and macroscopic models.
In models at the microscopic scale, pedestrians are described individually in their motion by ordinary differential equations, and problems are usually set in two-dimensional domains delimiting the walking area under consideration, with obstacles within the domain and a target. The basic modeling framework relies on classical Newtonian point mechanics. Models at the macroscopic scale instead use partial differential equations, describing the evolution in time and space of the pedestrian density, supplemented either by suitable closure relations linking the velocity of the pedestrians to their density or by an analogous balance law for the momentum. Again, typical guidelines in devising this kind of model are the concepts of preferred direction of motion and discomfort at high densities. In the framework of scalar conservation laws, a macroscopic one-dimensional model has been proposed by Colombo and Rosini, resorting to ideas common in vehicular traffic modeling, with the specific aim of describing the transition from normal to panic conditions. Piccoli and Tosin propose a different macroscopic point of view, based on a measure-theoretic framework recently introduced by Canuto et al. for coordination (rendez-vous) problems of multi-agent systems. This approach consists in a discrete-time Eulerian macroscopic representation of the system via a family of measures which, pushed forward by some motion mappings, provide an estimate of the space occupancy by pedestrians at successive time steps. From the modeling point of view, this setting is particularly suitable for treating nonlocal interactions among pedestrians, obstacles, and wall boundary conditions. A microscopic approach is advantageous when one wants to model differences among individuals, random disturbances, or small environments. Moreover, it is the only reliable approach when one wants to track exactly the positions of a few walkers.
On the other hand, it may not be convenient to use a microscopic approach to model pedestrian flow in large environments, due to the high computational effort required. A macroscopic approach may be preferable for addressing optimization problems and analytical issues, as well as for handling experimental data. Nonetheless, despite the fact that self-organization phenomena are often visible only in large crowds, they are a consequence of strategic behaviors developed by individual pedestrians. The two scales may reproduce the same features of the group behavior, thus providing a perfect matching between the results of the simulations for the microscopic and the macroscopic model in some test cases. This motivated the multiscale approach proposed by Cristiani, Piccoli and Tosin. Such an approach allows one to keep a macroscopic view without losing the right amount of "granularity", which is crucial for the emergence of some self-organized patterns. Furthermore, the method allows one to introduce into a macroscopic (averaged) context some microscopic effects, such as random disturbances or differences among the individuals, in a manner fully justifiable from both the physical and the mathematical perspective. In the model, microscopic and macroscopic scales coexist and continuously share information on the overall dynamics. More precisely, the microscopic part tracks the trajectories of single pedestrians and the macroscopic part the density of pedestrians, using the same evolution equation duly interpreted in the sense of measures. In this respect, the two scales are indivisible. Starting from the model of Cristiani, Piccoli and Tosin, we have implemented algorithms to simulate pedestrian motion toward a target in a bounded area with one or more obstacles inside. In this work, different scenarios have been analyzed in order to find the obstacle configuration which minimizes the average pedestrian exit time. The optimization is achieved using two algorithms.
The first one is based on the exhaustive exploration of all positions: the average exit time for all scenarios is computed and then the best one is chosen. The second algorithm is of steepest descent type, whereby the obstacle configuration corresponding to the minimum exit time is found by an iterative method. A variant of this algorithm has been introduced to obtain a more efficient procedure, which finds better solutions in fewer steps. Finally, we performed further simulations in bounded domains, such as a classical flat with five rooms and two exits, comparing the results of three different scenarios obtained by changing the positions of the exit doors. [edited by author]
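The two optimization strategies compared here, exhaustive exploration of all obstacle positions versus iterative steepest descent, can be sketched on a toy surrogate. The quadratic objective and the 10x10 grid below are hypothetical stand-ins for the expensive crowd simulation that would supply the average exit time in the thesis:

```python
import itertools

# Toy surrogate for the average exit time as a function of a single
# obstacle's grid position (hypothetical; the real value comes from
# running the multiscale crowd simulation for that configuration).
def exit_time(pos):
    x, y = pos
    return (x - 3) ** 2 + (y - 5) ** 2 + 10.0  # best position: (3, 5)

cells = set(itertools.product(range(10), range(10)))

# Strategy 1: exhaustive exploration of all obstacle positions.
best_exhaustive = min(cells, key=exit_time)

# Strategy 2: steepest descent over neighbouring grid positions.
def steepest_descent(start):
    current = start
    while True:
        neighbours = [(current[0] + dx, current[1] + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (current[0] + dx, current[1] + dy) in cells]
        best = min(neighbours, key=exit_time)
        if exit_time(best) >= exit_time(current):
            return current  # no neighbour improves: local minimum
        current = best

best_local = steepest_descent((9, 0))
```

Exhaustive search evaluates every configuration (100 simulations here), while steepest descent reaches the same minimizer on this convex toy landscape after evaluating only a handful of neighbourhoods, which is the efficiency argument made in the thesis.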
Parno, Matthew David. "A multiscale framework for Bayesian inference in elliptic problems". Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/65322.
Page 118 blank. Cataloged from PDF version of thesis.
Includes bibliographical references (p. 112-117).
The Bayesian approach to inference problems provides a systematic way of updating prior knowledge with data. A likelihood function involving a forward model of the problem is used to incorporate data into a posterior distribution. The standard method of sampling this distribution is Markov chain Monte Carlo, which can become inefficient in high dimensions, wasting many evaluations of the likelihood function. In many applications the likelihood function involves the solution of a partial differential equation, so the large number of evaluations required by Markov chain Monte Carlo can quickly become computationally intractable. This work aims to reduce the computational cost of sampling the posterior by introducing a multiscale framework for inference problems involving elliptic forward problems. Through the construction of a low-dimensional prior on a coarse scale and the use of an iterative conditioning technique, the scales are decoupled and efficient inference can proceed. This work considers nonlinear mappings from a fine scale to a coarse scale based on the Multiscale Finite Element Method. Permeability characterization is the primary focus, but a discussion of other applications is also provided. After some theoretical justification, several test problems demonstrate the efficiency of the multiscale framework.
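The baseline sampler the abstract refers to can be sketched in its simplest random-walk Metropolis form. This is a generic one-parameter illustration with a Gaussian toy posterior, not the thesis's elliptic forward model; note that every iteration pays for one likelihood (forward model) evaluation, which is exactly the cost the multiscale framework targets:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observations; the forward model here is the identity, a
# hypothetical stand-in for an expensive elliptic PDE solve.
data = rng.normal(2.0, 1.0, size=50)

def log_posterior(theta):
    log_prior = -0.5 * theta ** 2                  # standard normal prior
    log_like = -0.5 * np.sum((data - theta) ** 2)  # Gaussian likelihood
    return log_prior + log_like

# Random-walk Metropolis: each step costs one forward-model evaluation.
theta, lp = 0.0, log_posterior(0.0)
samples = []
for _ in range(5000):
    proposal = theta + 0.5 * rng.normal()
    lp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:       # accept / reject
        theta, lp = proposal, lp_prop
    samples.append(theta)

post_mean = float(np.mean(samples[1000:]))         # discard burn-in
```

With 50 observations drawn around 2.0 and a standard normal prior, the posterior mean is close to 2, which the chain recovers after burn-in; the inefficiency in high dimensions comes from the many rejected proposals, each of which still required a full likelihood evaluation.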
by Matthew David Parno.
S.M.
Mejias Tuni, Jesus Alberto. "Multiscale approach applied to fires in tunnels: model optimization and development". Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2960751.
Chen, Minghan. "Stochastic Modeling and Simulation of Multiscale Biochemical Systems". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/90898.
Doctor of Philosophy
Modeling and simulation of biochemical networks faces numerous challenges as biochemical networks are discovered with increasing complexity and unknown mechanisms. With improvements in experimental techniques, biologists are able to quantify genes and proteins and their dynamics in a single cell, which calls for quantitative stochastic models (numerical models based on probability distributions) for gene and protein networks at the cellular level that match the data well and account for randomness. This dissertation studies a stochastic model, in space and time, of the life cycle of the bacterium Caulobacter. A two-dimensional model based on a natural pattern mechanism is investigated to illustrate the changes in space and time of a key protein population. However, stochastic simulations are often complicated by the expensive computational cost of large and sophisticated biochemical networks. The hybrid stochastic simulation algorithm combines traditional deterministic models (analytical models with a single output for a given input) and stochastic models. The hybrid method can significantly improve the efficiency of stochastic simulations for biochemical networks that contain both species populations and reaction rates of widely varying magnitude. Under some circumstances, the populations of some species may become negative in the simulation. This dissertation investigates negative population estimates from the hybrid method, proposes several remedies, and tests them on several cases including a realistic biological system. As a key factor that affects the quality of biological models, parameter estimation in stochastic models is challenging because the amount of observed data must be large enough to obtain valid results.
To optimize system parameters, the quasi-Newton algorithm for stochastic optimization (QNSTOP) was studied and applied to a stochastic (budding) yeast life cycle model by matching different distributions between simulated results and observed data. Furthermore, to reduce model complexity, this dissertation simplifies the fundamental molecular binding mechanism by the stochastic Hill equation model with optimized system parameters. Considering that many parameter vectors generate similar system dynamics and results, this dissertation proposes a general α-β-γ rule to return an acceptable parameter region of the stochastic Hill equation based on QNSTOP. Different optimization strategies are explored targeting different features of the observed data.
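The discrete stochastic simulation underlying such models is typically Gillespie's stochastic simulation algorithm (SSA). A minimal sketch for a one-species production/degradation network follows; the rate constants and horizon are illustrative choices, not parameters from the dissertation:

```python
import random

def ssa_birth_death(k_prod=10.0, k_deg=0.1, x0=0, t_end=200.0):
    """Gillespie SSA for the network  0 -> X  (propensity k_prod) and
    X -> 0  (propensity k_deg * x); returns the copy number at t_end."""
    t, x = 0.0, x0
    while True:
        a_prod, a_deg = k_prod, k_deg * x
        a_total = a_prod + a_deg
        t += random.expovariate(a_total)       # waiting time to next event
        if t > t_end:
            return x
        if random.random() * a_total < a_prod:
            x += 1                             # production fires
        else:
            x -= 1                             # degradation fires

random.seed(0)
runs = [ssa_birth_death() for _ in range(20)]
mean_x = sum(runs) / len(runs)   # steady-state mean is k_prod / k_deg = 100
```

Each trajectory fluctuates around the deterministic steady state k_prod / k_deg; the cost of simulating every single reaction event is what motivates the hybrid deterministic/stochastic method described above.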
Ahamad, Intan Salwani. "Multiscale line search in interior point methods for nonlinear optimization and applications". Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612762.
Arabnejad, Sajad. "Multiscale mechanics and multiobjective optimization of cellular hip implants with variable stiffness". Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=119630.
Texto completo da fonteLa résorption osseuse et l'instabilité de l'interface os-implant sont deux goulots d'étranglement de modèles actuels d'implants orthopédiques de hanche. La résorption osseuse est souvent déclenchée par une bio-incompatibilité mécanique de l'implant avec l'os environnant. Il en résulte de graves conséquences cliniques à la fois en chirurgie primaire et en chirurgie de révision des arthroplasties de la hanche. Après la chirurgie primaire, la résorption osseuse peut entraîner des fractures périprothétiques, conduisant au descellement de l'implant. Pour la chirurgie de révision, la perte de substance osseuse compromet la capacité de l'os à bien fixer l'implant. L'instabilité de l'interface, d'autre part, se produit à la suite d'un stress excessif et de micromouvements à l'interface os-implant, ce qui empêche la fixation des implants. De ce fait, l'implant échoue, et la chirurgie de révision est nécessaire.De nombreuses études ont été réalisées pour concevoir un implant qui minimise la résorption osseuse et l'instabilité de l'interface. Cependant, les résultats n'ont pas été efficaces, car minimiser un objectif pénaliserait l'autre. En conséquence, parmi tous les modèles disponibles sur le marché, il n'y a pas d'implant qui puisse en même temps réduire ces deux objectifs contradictoires. L'objectif de cette thèse est de concevoir une prothèse orthopédique de la hanche qui puisse simultanément réduire la résorption osseuse et l'instabilité de l'implant. Nous proposons un nouveau concept d'implant à raideur variable qui est mis en œuvre grâce à l'utilisation de matériaux assemblés en treillis.Une méthodologie de conception basée sur la mécanique multi-échelle et l'optimisation multiobjectif est développé pour l'analyse et la conception d'un implant totalement poreux avec une microstructure en treillis. Les propriétés mécaniques de l'implant sont localement optimisés pour minimiser la résorption osseuse et l'instabilité d'interface. 
La théorie de l'homogénéisation asymptotique (HA) est utilisée pour capturer la distribution des contraintes pour l'analyse des défaillances tout le long de l'implant et de sa microstructure en treillis. Concernant cette microstructure en treillis, une bibliothèque de topologies de cellules 2D est développée, et leurs propriétés mécaniques efficaces, y compris les modules d'élasticité et la limite d'élasticité, sont calculées en utilisant le théorie HA. Puisque les prothèses orthopédiques de hanche sont généralement censées soutenir les forces dynamiques générées par les activités humaines, elles doivent être également conçues contre les fractures de fatigue pour éviter des dommages progressifs. Une méthodologie pour la conception en fatigue des matériaux cellulaires est proposée et appliquée à un implant en deux dimensions, et aux topologies de cellules carrées et de Kagome. Il est prouvé qu'un implant en treillis avec une répartition optimale des propriétés des matériaux réduit considérablement la quantité de la résorption osseuse et la contrainte de cisaillement de l'interface par rapport à un implant en titane totalement dense. La fabricabilité des implants en treillis est démontrée par la fabrication d'un ensemble de concepts de prototypes utilisant la fusion par faisceau d'électronsde poudre Ti6Al4V. La microscopie optique est utilisée pour mesurer les paramètres morphologiques de la microstructure cellulaire. L'analyse numérique et les tests de fabricabilité effectués dans cette étude préliminaire suggèrent que la méthodologie développée peut être utilisée pour la conception et la fabrication d'implants orthopédiques innovants qui peuvent contribuer de manière significative à la réduction des conséquences cliniques des implants actuels.
Petay, Margaux. "Multimodal and multiscale analysis of complex biomaterials : optimization and constraints of infrared nanospectroscopy measurements". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASF092.
In the biomedical field, understanding physicochemical changes at the cellular level in tissues can be crucial for unraveling the mechanisms of pathological phenomena. However, the number of techniques providing chemical descriptions at the cellular/molecular level is limited. Infrared (IR) nanospectroscopy techniques, particularly AFM-IR (Atomic Force Microscopy-infrared), are promising as they offer chemical descriptions of materials at the nanometer scale. To date, AFM-IR has mainly been used in biology to study individual cells or micro-organisms, and its direct application to biological tissues is relatively scarce due to the complex nature of tissue sections. Yet many applications could benefit from such a description, such as mineralization phenomena in breast tissue. Breast microcalcifications (BMCs) are calcium-based deposits (such as calcium oxalate and calcium phosphate) hypothesized to be associated with some breast pathologies, including cancer. Despite increased research over the past decade, the formation process of BMCs and their connection with breast conditions remain poorly understood. Still, nanoscale chemical speciation of BMCs might offer new insights into their chemical architecture. However, breast biopsies typically range from a few millimeters to a few centimeters and contain many BMCs ranging from hundreds of nanometers to a millimeter, so a multiscale characterization strategy is required to provide both a global chemical description of the sample and a fine chemical description of the BMCs. We therefore propose a new multimodal and multiscale approach that investigates BMCs' morphological properties using scanning electron microscopy and their chemical composition at the microscale using IR spectromicroscopy, extending down to the nanometer scale thanks to AFM-IR analysis.
Although AFM-IR measurements of inorganic and crystalline objects can be challenging due to their specific optical and mechanical properties, we demonstrate AFM-IR's ability to characterize pathological deposits directly in biological tissues. Furthermore, implementing a multimodal and multiscale methodology comes with significant challenges in terms of sample preparation, measurement, data processing, data management, and interpretation; these challenges are outlined and addressed.
Sato, Ayami. "A structural optimization methodology for multiscale designs considering local deformation in microstructures and rarefied gas flows in microchannels". Kyoto University, 2019. http://hdl.handle.net/2433/242495.
Xia, Liang. "Towards optimal design of multiscale nonlinear structures : reduced-order modeling approaches". Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2230/document.
High-performance heterogeneous materials are increasingly used nowadays for their advantageous overall characteristics, resulting in superior structural mechanical performance. The pronounced heterogeneities of such materials have a significant impact on structural behavior, so one needs to account for both the material's microscopic heterogeneities and the constituent behaviors to achieve reliable structural designs. Meanwhile, the fast progress of material science and the latest developments in 3D printing techniques make it possible to generate more innovative, lightweight, and structurally efficient designs by controlling the composition and microstructure of the material at the microscopic scale. In this thesis, we make first attempts towards topology optimization of multiscale nonlinear structures, including the design of highly heterogeneous structures, material microstructural design, and simultaneous design of structure and materials. We primarily develop a multiscale design framework constituted of two key ingredients: multiscale modeling for structural performance simulation, and topology optimization for structural design. For the first ingredient, we employ the first-order computational homogenization method FE2 to bridge the structural and material scales. For the second ingredient, we apply the Bi-directional Evolutionary Structural Optimization (BESO) method to perform topology optimization. In contrast to conventional nonlinear design of homogeneous structures, this design framework provides an automatic design tool for nonlinear, highly heterogeneous structures in which the underlying material model is governed directly by the realistic microstructural geometry and the microscopic constitutive laws. Note that the FE2 method is extremely expensive in terms of computing time and storage requirements.
The dilemma of heavy computational burden is even more pronounced when it comes to topology optimization: the time-consuming multiscale problem must be solved not just once, but for many different realizations of the structural topology. Meanwhile, the optimization process requires multiple design loops involving similar or even repeated computations at the microscopic scale. For these reasons, we introduce a third ingredient into the design framework: reduced-order modeling (ROM). We develop an adaptive surrogate model using snapshot Proper Orthogonal Decomposition (POD) and Diffuse Approximation to substitute for the microscopic solutions. The surrogate model is initially built in the first design iteration and updated adaptively in subsequent design iterations. It has shown promising performance in terms of computing cost and modeling accuracy when applied to the design framework for nonlinear elastic cases. For more severe material nonlinearity, we directly employ an established method, potential-based Reduced Basis Model Order Reduction (pRBMOR). The key idea of pRBMOR is to approximate the internal variables of the dissipative material by a precomputed reduced basis obtained from snapshot POD. To drastically accelerate the computing procedure, pRBMOR has been implemented with parallelization on modern Graphics Processing Units (GPUs). The implementation of pRBMOR with GPU acceleration enables us to realize the design of multiscale elastoviscoplastic structures, using the previously developed design framework, in realistic computing time and with affordable memory requirements. We have so far assumed a fixed material microstructure at the microscopic scale. The remaining part of the thesis is dedicated to simultaneous design of both the macroscopic structure and the microscopic material, with topology variables and volume constraints defined at both scales.
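Snapshot POD, the building block of the surrogate described above, can be sketched in a few lines: collect solutions as columns of a snapshot matrix, take an SVD, and keep the modes carrying most of the energy. The synthetic rank-3 data below is a hypothetical stand-in for the microscopic (FE2-type) solutions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical snapshot matrix: each column is one microscopic solution.
# The synthetic data is exactly rank 3, standing in for PDE solutions.
n_dof, n_snap = 200, 30
true_modes = rng.normal(size=(n_dof, 3))
snapshots = true_modes @ rng.normal(size=(3, n_snap))

# Snapshot POD: SVD of the snapshot matrix, truncated by an energy criterion.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999)) + 1  # smallest basis with 99.9% energy
basis = U[:, :r]

# Reduced-order approximation of a new solution: project onto the POD basis.
u_new = true_modes @ rng.normal(size=3)
u_rom = basis @ (basis.T @ u_new)
rel_err = np.linalg.norm(u_new - u_rom) / np.linalg.norm(u_new)
```

Because the snapshots span a low-dimensional subspace, a handful of POD modes reproduces new solutions almost exactly; the adaptive surrogate in the thesis additionally enriches the snapshot set as the design iterations explore new configurations.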
Dang, Hieu. "Adaptive multiobjective memetic optimization: algorithms and applications". Journal of Cognitive Informatics and Natural Intelligence, 2012. http://hdl.handle.net/1993/30856.
February 2016
Carriou, Vincent. "Multiscale, multiphysic modeling of the skeletal muscle during isometric contraction". Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2376/document.
The neuromuscular and musculoskeletal systems are complex Systems of Systems (SoS) that interact closely to produce motion. Through this interaction, muscular force is generated from the muscle activation commanded by the Central Nervous System (CNS), which pilots joint motion. In parallel, an electrical activity of the muscle is generated, driven by the same CNS command. This electrical activity can be measured at the skin surface using electrodes, yielding the surface electromyogram (sEMG). Knowing how these muscle outcomes are generated is highly important in biomechanical and clinical applications. The interactions arising during muscle activation are hard to evaluate and quantify in experimental conditions, so it is necessary to develop a way to describe and estimate them. The bioengineering literature provides several models of sEMG and force generation, principally used to describe subparts of the muscular outcomes. These models suffer from important limitations, such as lack of physiological realism, of personalization, and of representability when a complete muscle is considered. In this work, we propose to construct bioreliable, personalized and fast models describing the electrical and mechanical activities of the muscle during contraction. For this purpose, we first propose a model describing the electrical activity at the skin surface of the muscle, where this electrical activity is determined from a voluntary command of the Peripheral Nervous System (PNS) activating the muscle fibers, which generate a depolarization of their membrane that is filtered by the limb volume. Once this electrical activity is computed, the recording system, i.e. the High Density sEMG (HD-sEMG) grid, is defined over the skin, and the sEMG signal is determined as a numerical integration of the electrical activity under the electrode area.
In this model, the limb is considered as a multilayered cylinder in which muscle, adipose and skin tissues are described. We then propose a mechanical model described at the Motor Unit (MU) scale. The mechanical outcomes (muscle force, stiffness and deformation) are determined from the same voluntary command of the PNS, based on the Huxley sliding filament model upscaled to the MU scale using the distribution-moment theory proposed by Zahalak. This model is validated against force profiles recorded from a subject implanted with an electrical stimulation device. Finally, we propose three applications of the models to illustrate their reliability and usefulness. A global sensitivity analysis of the statistics computed over the sEMG signals with respect to variations of the HD-sEMG electrode grid is performed. We then propose, in collaboration, a new HD-sEMG/force relationship, using personalized simulated data of the Biceps Brachii from the electrical model and a twitch-based model to estimate a specific force profile corresponding to a specific sEMG sensor network and muscle configuration. To conclude, a deformable electro-mechanical model coupling the two proposed models is presented. This deformable model updates the cylindrical limb anatomy under an isovolumic assumption, respecting the incompressibility of the muscle.
Liu, Mingyong. "Optimization of electromagnetic and acoustic performances of power transformers". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS256/document.
Texto completo da fonte
This thesis deals with the prediction of the vibration of a multi-layer transformer core made of an assembly of electrical sheets. This coupled magneto-mechanical problem is solved by a time-stepping finite element method in a sequential approach: magnetic resolution is followed by mechanical resolution. A 3D Simplified Multi-Scale Model (SMSM) describing both magnetic and magnetostrictive anisotropies is used as the constitutive law of the material. The transformer core structure is modeled in 2D, and a homogenization technique is implemented to take the anisotropic behavior of each layer into consideration and define an average behavior at each element of the finite element mesh. Experimental measurements are then carried out, allowing the validation of the material constitutive law, the static and dynamic structural behavior, and the noise estimation. Different material geometries are considered in this work. Structural optimizations are finally achieved by numerical simulation for lower vibration and noise emission of the transformer cores.
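The homogenization step above defines an average behavior for the layered core; the actual scheme is tied to the SMSM constitutive law. As a generic, hedged illustration of layer averaging only (classical Voigt and Reuss bounds, not the author's method, with made-up stiffness values):

```python
import numpy as np

def homogenize_layers(fractions, stiffnesses):
    """Average stiffness of a layered stack.

    Voigt (uniform strain, loading in the layer plane) and Reuss
    (uniform stress, loading through the thickness) bounds.
    """
    f = np.asarray(fractions, dtype=float)
    e = np.asarray(stiffnesses, dtype=float)
    assert np.isclose(f.sum(), 1.0), "volume fractions must sum to 1"
    voigt = float(np.sum(f * e))        # arithmetic mean: upper bound
    reuss = float(1.0 / np.sum(f / e))  # harmonic mean: lower bound
    return voigt, reuss

# Two alternating sheet materials, 50/50 stack (hypothetical moduli in Pa)
voigt, reuss = homogenize_layers([0.5, 0.5], [200e9, 100e9])
```

Any admissible homogenized modulus of the stack lies between the Reuss and Voigt values; a full scheme (as in the thesis) also handles anisotropy and magnetostrictive coupling.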
Chen, Yun. "Mining Dynamic Recurrences in Nonlinear and Nonstationary Systems for Feature Extraction, Process Monitoring and Fault Diagnosis". Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6072.
Texto completo da fonte
Pereira, Danillo Roberto 1984. "Fitting 3D deformable biological models to microscope images = Alinhamento de modelos tridimensionais usando imagens de microscopia". [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275627.
Texto completo da fonte
Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Computação
Made available in DSpace on 2018-08-23T12:30:57Z (GMT). No. of bitstreams: 1 Pereira_DanilloRoberto_D.pdf: 2771811 bytes, checksum: 6d5b092c08b5c011636be5fc2661e4a0 (MD5) Previous issue date: 2013
Resumo: Nesta tese descrevemos um algoritmo genérico (que denominamos MSFit) capaz de estimar a pose e as deformações de modelos 3D de estruturas biológicas (bactérias, células e etc.) em imagens obtidas por meio de microscópios óticos ou de varredura eletrônica. O algoritmo usa comparação multi-escala de imagens utilizando uma métrica sensível ao contorno; e um método original de otimização não-linear. Nos nossos testes com modelos de complexidade moderada (até 12 parâmetros) o algoritmo identifica corretamente os parâmetros do modelo em 60-70% dos casos com imagens reais e entre 80-90% dos casos com imagens sintéticas
Abstract: In this thesis we describe a generic algorithm (which we call MSFit) able to estimate the pose and deformations of 3D models of biological structures (bacteria, cells, etc.) in images obtained by optical and scanning electron microscopes. The algorithm uses a multi-scale image comparison metric that is outline-sensitive, and a novel nonlinear optimization method. In our tests with models of moderate complexity (up to 12 parameters), the algorithm correctly identifies the model parameters in 60-70% of the cases with real images and in 80-90% of the cases with synthetic images.
Doutorado
Ciência da Computação
Doutor em Ciência da Computação
Waldspurger, Irène. "Wavelet transform modulus : phase retrieval and scattering". Thesis, Paris, Ecole normale supérieure, 2015. http://www.theses.fr/2015ENSU0036/document.
Texto completo da fonte
Automatically understanding the content of a natural signal, like a sound or an image, is in general a difficult task. In their naive representation, signals are indeed complicated objects, belonging to high-dimensional spaces. With a different representation, they can however be easier to interpret. This thesis considers a representation commonly used in these cases, in particular for the analysis of audio signals: the modulus of the wavelet transform. To better understand the behaviour of this operator, we study, from a theoretical as well as algorithmic point of view, the corresponding inverse problem: the reconstruction of a signal from the modulus of its wavelet transform. This problem belongs to a wider class of inverse problems: phase retrieval problems. In a first chapter, we describe a new algorithm, PhaseCut, which numerically solves a generic phase retrieval problem. Like the similar algorithm PhaseLift, PhaseCut relies on a convex relaxation of the phase retrieval problem, which happens to be of the same form as relaxations of the widely studied problem MaxCut. We compare the performances of PhaseCut and PhaseLift, in terms of precision and complexity. In the next two chapters, we study the specific case of phase retrieval for the wavelet transform. We show that any function with no negative frequencies is uniquely determined (up to a global phase) by the modulus of its wavelet transform, but that the reconstruction from the modulus is not stable to noise, for a strong notion of stability. However, we prove a local stability property. We also present a new non-convex phase retrieval algorithm, which is specific to the case of the wavelet transform, and we numerically study its performances. Finally, in the last two chapters, we study a more sophisticated representation, built from the modulus of the wavelet transform: the scattering transform.
Our goal is to understand which properties of a signal are characterized by its scattering transform. We first prove that the energy of scattering coefficients of a signal, at a given order, is upper bounded by the energy of the signal itself, convolved with a high-pass filter that depends on the order. We then study a generalization of the scattering transform, for stationary processes. We show that, in finite dimension, this generalized transform preserves the norm. In dimension one, we also show that the generalized scattering coefficients of a process characterize the tail of its distribution
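The reconstruction-from-modulus problem described in this abstract admits simple iterative baselines. As a hedged illustration only (using the Fourier modulus as a stand-in for the wavelet-transform modulus, and the classical error-reduction scheme, not the thesis's PhaseCut or non-convex algorithm):

```python
import numpy as np

def error_reduction(modulus, x0, n_iter=200):
    """Alternate two projections: impose the measured Fourier modulus,
    then project back onto real-valued signals (Fienup error reduction)."""
    x = x0.copy()
    for _ in range(n_iter):
        X = np.fft.fft(x)
        # Keep the current phase, replace the modulus by the measured one
        x = np.real(np.fft.ifft(modulus * np.exp(1j * np.angle(X))))
    return x

rng = np.random.default_rng(0)
true_signal = rng.standard_normal(64)
modulus = np.abs(np.fft.fft(true_signal))      # only the modulus is observed
x0 = rng.standard_normal(64)                   # random initial guess
initial_err = np.linalg.norm(np.abs(np.fft.fft(x0)) - modulus)
recovered = error_reduction(modulus, x0)
final_err = np.linalg.norm(np.abs(np.fft.fft(recovered)) - modulus)
```

The modulus residual is non-increasing across iterations, but convergence to the true signal is not guaranteed (global phase and other ambiguities remain), which is precisely why stability questions like those studied in the thesis matter.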
Koyeerath, Graham Danny. "Topology optimization in interfacial flows using the pseudopotential model". Electronic Thesis or Diss., Nantes Université, 2024. http://www.theses.fr/2024NANU4008.
Texto completo da fonte
The optimization of systems and processes is an exercise usually carried out based on one's experience and knowledge. Here we explore a mathematical approach to optimizing physical problems by utilizing various optimization algorithms. In this thesis, the primary objective of the optimizer is to modify the flow characteristics of the system by tweaking the capillary forces. This can be accomplished by modifying either of two sets of parameters: (a) by introducing a wetting solid material, i.e. the level-set parameter, or (b) by changing the wettability of the existing solid surfaces, i.e. the wettability parameter. We propose that the former set of parameters be modified using the topology optimization algorithm, where the gradient of the cost function is obtained by solving an adjoint-state model for the single-component multiphase Shan and Chen (SCMP-SC) model. Similarly, we propose that the latter set of parameters be modified using the wettability optimization algorithm, where we again derive an adjoint-state model for SCMP-SC. Lastly, we utilize a multiscale optimization algorithm, where we compute the gradient of the cost function using finite differences. We have succeeded in demonstrating the competence of this optimizer by maximizing the mean velocity of a 2D droplet by up to 69%.
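In the pseudopotential (Shan-Chen) model referred to above, capillary forces emerge from a density-dependent interaction between neighboring lattice sites. A minimal sketch of the standard SCMP-SC interaction force on a D2Q9 lattice, F(x) = -G ψ(x) Σᵢ wᵢ ψ(x+eᵢ) eᵢ, with a common but here assumed choice of pseudopotential ψ (the thesis's adjoint-state machinery is not shown):

```python
import numpy as np

# Standard D2Q9 lattice velocities and weights
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

def shan_chen_force(rho, G=-1.0, rho0=1.0):
    """Shan-Chen interaction force with psi = rho0 * (1 - exp(-rho/rho0))."""
    psi = rho0 * (1.0 - np.exp(-rho / rho0))
    fx = np.zeros_like(rho)
    fy = np.zeros_like(rho)
    for (ex, ey), w in zip(E, W):
        # psi evaluated at the neighbor x + e_i (periodic boundaries)
        shifted = np.roll(np.roll(psi, -ex, axis=0), -ey, axis=1)
        fx += w * shifted * ex
        fy += w * shifted * ey
    return -G * psi * fx, -G * psi * fy

# Uniform density: neighbor contributions cancel, so the force vanishes
rho = np.ones((8, 8))
fx, fy = shan_chen_force(rho)
```

With a negative G the force is attractive, which is what allows phase separation and droplets to form in the full lattice Boltzmann scheme.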
Billy, Frédérique. "Modélisation mathématique multi-échelle de l'angiogenèse tumorale : analyse de la réponse tumorale aux traitements anti-angiogéniques". Phd thesis, Université Claude Bernard - Lyon I, 2009. http://tel.archives-ouvertes.fr/tel-00631513.
Texto completo da fonteWenzel, Moritz. "Development of a Metamaterial-Based Foundation System for the Seismic Protection of Fuel Storage Tanks". Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/256685.
Texto completo da fonte
Residori, Sara. "FABRICATION AND CHARACTERIZATION OF 3D PRINTED METALLIC OR NON-METALLIC GRAPHENE COMPOSITES". Doctoral thesis, Università degli studi di Trento, 2022. https://hdl.handle.net/11572/355324.
Texto completo da fonteZhu, Yan. "Rational design of plastic packaging for alcoholic beverages". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLA020.
Texto completo da fonte
The view of plastic food packaging has turned from useful material to a major source of contaminants in food and an environmental threat. Substituting glass by recycled or biosourced plastic containers reduces the environmental impact of bottled beverages. This thesis developed a 3D computational and optimization framework to accelerate the prototyping of eco-efficient packaging for alcoholic beverages. Shelf-life, food safety, mechanical constraints, and packaging waste are combined into a single multicriteria optimization problem. New bottles are virtually generated within an iterative three-step process involving: i) a multiresolution [E]valuation of coupled mass transfer; ii) a [D]ecision step validating technical (shape, capacity, weight) and regulatory (shelf-life, migrations) constraints; iii) a global [S]olving step seeking acceptable Pareto solutions. The capacity to predict the shelf-life of liquors in real conditions was tested successfully on ca. 500 bottle miniatures in PET (polyethylene terephthalate) over several months. The entire approach has been designed to manage any coupled mass transfer (permeation, sorption, migration). Mutual sorption is considered via a polynary Flory-Huggins formulation. A blob formulation of the free-volume theory of Vrentas and Duda was developed to predict the diffusion properties of water and organic solutes in arbitrary glassy polymers (polyesters, polyamides, polyvinyls, polyolefins). The validation set included 433 experimental diffusivities from the literature and measured in this work. The contribution of polymer relaxation in glassy PET was analyzed in binary and ternary differential sorption using a cosorption microbalance from 25 to 50°C. Part of the framework will be released as an open-source project to encourage the integration of more factors affecting the shelf-life of beverages and food products (oxidation kinetics, aroma scalping).
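Mutual sorption is handled in the thesis with a polynary Flory-Huggins formulation. As a hedged sketch of the binary limiting case only (infinite polymer chain length, a single interaction parameter χ; the polynary version used in the thesis couples several solutes):

```python
import math

def solvent_activity(phi_solvent, chi):
    """Binary Flory-Huggins solvent activity for an infinitely long polymer:
    ln a1 = ln(phi1) + (1 - phi1) + chi * (1 - phi1)**2."""
    phi2 = 1.0 - phi_solvent          # polymer volume fraction
    return math.exp(math.log(phi_solvent) + phi2 + chi * phi2 ** 2)

a_pure = solvent_activity(1.0, 0.5)   # pure solvent: activity is exactly 1
a_mid = solvent_activity(0.5, 1.2)    # less favorable mixing raises activity
```

Larger χ (poorer solvent-polymer affinity) increases the activity at a fixed composition, i.e. less solute is retained in the polymer at equilibrium.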
Derou, Dominique. "Optimisation neuronale et régularisation multiéchelle auto-organisée pour la trajectographie de particules : application à l'analyse d'écoulements en mécanique des fluides". Grenoble INPG, 1995. http://www.theses.fr/1995INPG0146.
Texto completo da fonteEve, Thierry. "Optimisation decentralisee et coordonnee de la conduite de systemes electriques interconnectes". Paris 6, 1994. http://www.theses.fr/1994PA066569.
Texto completo da fonteGobé, Alexis. "Méthodes Galerkin discontinues pour la simulation de problèmes multiéchelles en nanophotonique et applications au piégeage de la lumière dans des cellules solaires". Thesis, Université Côte d'Azur, 2020. http://www.theses.fr/2020COAZ4011.
Texto completo da fonte
The objective of this thesis is the numerical study of light trapping in nanostructured solar cells. Climate change has become a major issue requiring a short-term energy transition, and in this context solar energy seems an ideal energy source: it is both globally scalable and environmentally friendly. To maximize its penetration, it is necessary to increase the amount of light absorbed and to reduce the costs associated with cell design. Light trapping is a strategy that achieves both of these objectives. The principle is to use nanometric textures to focus the light in the absorbing semiconductor layers. In this work, the Discontinuous Galerkin Time-Domain (DGTD) method is introduced. Two major methodological developments are presented, allowing the characteristics of solar cells to be better taken into account. First, the use of a local approximation order is proposed, based on a particular order-distribution strategy. The second development is the use of hybrid meshes mixing structured hexahedral and unstructured tetrahedral elements. Realistic cases of solar cells from the literature and collaborations with physicists in the field of photovoltaics illustrate the contribution of these developments. A case of inverse optimization of a diffraction grating in a solar cell is also presented, coupling the numerical solver with a Bayesian optimization algorithm. In addition, an in-depth study of the solver's performance has been carried out, with methodological modifications to counter load-balancing problems. Finally, a more prospective method, the Multiscale Hybrid-Mixed (MHM) method, specialized in solving highly multiscale problems, is introduced; a multiscale time scheme is presented and its stability is proven.
Rath, James Michael 1975. "Multiscale basis optimization for Darcy flow". Thesis, 2007. http://hdl.handle.net/2152/3977.
Texto completo da fonte
Li, Xiaohai. "Multiscale simulation and optimization of copper electrodeposition /". 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3290296.
Texto completo da fonte
Source: Dissertation Abstracts International, Volume: 68-11, Section: B, page: 7496. Advisers: Richard D. Braatz; Richard C. Alkire. Includes bibliographical references. Available on microfilm from ProQuest Information and Learning.
Raimondeau, Stephanie Marie. "Development, optimization, and reduction of hierarchical multiscale models". 2003. https://scholarworks.umass.edu/dissertations/AAI3078715.
Texto completo da fonte
Ngnotchouye, Jean Medard Techoukouegno. "Conservation laws models in networks and multiscale flow optimization". Thesis, 2011. http://hdl.handle.net/10413/7922.
Texto completo da fonte
Thesis (Ph.D.)-University of KwaZulu-Natal, Pietermaritzburg, 2011.
Sun, Hao. "Development of Hierarchical Optimization-based Models for Multiscale Damage Detection". Thesis, 2014. https://doi.org/10.7916/D8GQ6W2J.
Texto completo da fonte
(5930414), Tong Wu. "TOPOLOGY OPTIMIZATION OF MULTISCALE STRUCTURES COUPLING FLUID, THERMAL AND MECHANICAL ANALYSIS". Thesis, 2019.
Encontre o texto completo da fonte
DE, MAIO RAUL. "Multiscale methods for traffic flow on networks". Doctoral thesis, 2019. http://hdl.handle.net/11573/1239374.
Texto completo da fonte
"Measurement, optimization and multiscale modeling of silicon wafer bonding interface fracture resistance". Université catholique de Louvain, 2006. http://edoc.bib.ucl.ac.be:81/ETD-db/collection/available/BelnUcetd-10272006-203227/.
Texto completo da fonte
Hu, Nan. "Advances in Multiscale Methods with Applications in Optimization, Uncertainty Quantification and Biomechanics". Thesis, 2016. https://doi.org/10.7916/D8FX79N8.
Texto completo da fonte
Park, Han-Young. "A Hierarchical Multiscale Approach to History Matching and Optimization for Reservoir Management in Mature Fields". Thesis, 2012. http://hdl.handle.net/1969.1/ETD-TAMU-2012-08-11779.
Texto completo da fonte
Varshney, Amit. "Optimization and control of multiscale process systems using model reduction: application to thin-film growth". 2007. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-1967/index.html.
Texto completo da fonte
Foderaro, Greg. "A Distributed Optimal Control Approach for Multi-agent Trajectory Optimization". Diss., 2013. http://hdl.handle.net/10161/8226.
Texto completo da fonte
This dissertation presents a novel distributed optimal control (DOC) problem formulation that is applicable to multiscale dynamical systems comprised of numerous interacting systems, or agents, that together give rise to coherent macroscopic behaviors, or coarse dynamics, that can be modeled by partial differential equations (PDEs) on larger spatial and time scales. The DOC methodology seeks to obtain optimal agent state and control trajectories by representing the system's performance as an integral cost function of the macroscopic state, which is optimized subject to the agents' dynamics. The macroscopic state is identified as a time-varying probability density function to which the states of the individual agents can be mapped via a restriction operator. Optimality conditions for the DOC problem are derived analytically, and the optimal trajectories of the macroscopic state and control are computed using direct and indirect optimization algorithms. Feedback microscopic control laws are then derived from the optimal macroscopic description using a potential function approach.
The DOC approach is demonstrated numerically through benchmark multi-agent trajectory optimization problems, where large systems of agents were given the objectives of traveling to goal state distributions, avoiding obstacles, maintaining formations, and minimizing energy consumption through control. Comparisons are provided between the direct and indirect optimization techniques, as well as existing methods from the literature, and a computational complexity analysis is presented. The methodology is also applied to a track coverage optimization problem for the control of distributed networks of mobile omnidirectional sensors, where the sensors move to maximize the probability of track detection of a known distribution of mobile targets traversing a region of interest (ROI). Through extensive simulations, DOC is shown to outperform several existing sensor deployment and control strategies. Furthermore, the computation required by the DOC algorithm is proven to be far reduced compared to that of classical, direct optimal control algorithms.
Dissertation
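The restriction operator that maps individual agent states to the macroscopic density is central to the DOC formulation described above. A minimal sketch, assuming a Gaussian kernel estimate as one possible choice of restriction operator (the dissertation's exact operator may differ):

```python
import numpy as np

def restriction_operator(positions, grid, bandwidth=0.1):
    """Map microscopic agent positions (1D) to a macroscopic probability
    density on a grid via a Gaussian kernel estimate."""
    x = np.asarray(positions, dtype=float)[:, None]   # (N, 1) agents
    g = np.asarray(grid, dtype=float)[None, :]        # (1, M) grid points
    kernels = np.exp(-0.5 * ((g - x) / bandwidth) ** 2)
    kernels /= bandwidth * np.sqrt(2.0 * np.pi)       # each kernel integrates to 1
    return kernels.mean(axis=0)                       # average over agents

grid = np.linspace(-1.0, 2.0, 301)
density = restriction_operator([0.0, 0.1, 0.9, 1.0], grid)
mass = density.sum() * (grid[1] - grid[0])            # ~1 if the grid covers the support
```

The macroscopic cost functional in a DOC-style scheme would then be evaluated on `density` rather than on the individual agent states, which is what makes the approach scale with the number of agents.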
Bauman, Paul Thomas 1980. "Adaptive multiscale modeling of polymeric materials using goal-oriented error estimation, Arlequin coupling, and goals algorithms". Thesis, 2008. http://hdl.handle.net/2152/3824.
Texto completo da fonte
Liu, Kai. "Concurrent topology optimization of structures and materials". Thesis, 2013. http://hdl.handle.net/1805/3755.
Texto completo da fonte
Topology optimization allows designers to obtain lightweight structures considering the binary distribution of a solid material. The introduction of cellular material models in topology optimization allows designers to achieve significant weight reductions in structural applications. However, the traditional topology optimization method is challenged by the use of cellular materials. Furthermore, increased material savings and performance can be achieved if the material and the structure topologies are concurrently designed. Hence, multi-scale topology optimization methodologies are introduced to fulfill this goal. The objective of this investigation is to discuss and compare design methodologies for obtaining optimal macro-scale structures and the corresponding optimal meso-scale material designs in continuum design domains. These approaches make use of homogenization theory to establish communication bridges between the material and structural scales. A periodicity constraint makes such cellular materials manufacturable, while relaxing it can achieve major improvements in structural performance. Penalization methods are used to obtain binary solutions at both scales. The proposed methodologies are demonstrated in the design of stiff structures and in compliant mechanism synthesis. The multiscale results are compared with traditional structural-level designs in the context of Pareto solutions, demonstrating the benefits of ultra-lightweight configurations. Errors involved in the multi-scale topology optimization procedure are also discussed; they are mainly classified as mesh-refinement errors and homogenization errors. Comparisons between the multi-level designs and uni-level designs of solid structures, structures using periodic cellular materials, and non-periodic cellular materials are provided.
Error quantifications also indicate the superiority of using non-periodic cellular materials rather than periodic cellular materials.
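The penalization mentioned in this abstract is commonly implemented with the SIMP scheme (Solid Isotropic Material with Penalization). A minimal sketch with generic, assumed parameters (penalty p = 3, illustrative moduli `e0` and `e_min`), not necessarily the thesis's exact settings:

```python
import numpy as np

def simp_stiffness(rho, p=3.0, e0=1.0, e_min=1e-9):
    """SIMP interpolation of element stiffness versus density.

    Intermediate densities are penalized (rho**p with p > 1), so the
    optimizer is pushed toward crisp 0/1 (void/solid) designs; e_min
    keeps the stiffness matrix non-singular in void regions.
    """
    rho = np.asarray(rho, dtype=float)
    return e_min + rho ** p * (e0 - e_min)

# A half-dense element contributes only ~12.5% stiffness at p = 3,
# so it is "inefficient" and tends to become either void or solid.
e_half = simp_stiffness(0.5)
```

In a concurrent multi-scale setting, the same penalization idea can be applied both to the macro-scale density field and to the meso-scale unit-cell design.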
Reis, Marco Paulo Seabra. "Monitorização, modelação e melhoria de processos químicos : abordagem multiescala baseada em dados". Doctoral thesis, 2006. http://hdl.handle.net/10316/7375.
Texto completo da fonte
Processes going on in modern chemical processing plants are typically very complex, and this complexity is also present in the collected data, which contain the cumulative effect of many underlying phenomena and disturbances, presenting different patterns in the time/frequency domain. Such characteristics motivate the development and application of data-driven multiscale approaches to process analysis, with the ability to selectively analyze the information contained at different scales; but, even in these cases, there are a number of additional complicating features that can prevent the analysis from being completely successful. Missing and multirate data structures are two representatives of the difficulties that can be found, to which we can add multiresolution data structures, among others. On the other hand, some additional requisites should be considered when performing such an analysis, in particular the incorporation of all available knowledge about the data, namely data uncertainty information. In this context, this thesis addresses the problem of developing frameworks that are able to perform the required multiscale decomposition analysis while coping with the complex features present in industrial data and, simultaneously, considering measurement uncertainty information. These frameworks are proven useful for conducting data analysis in these circumstances, conveniently representing data and the associated uncertainties at the different relevant resolution levels, and are also instrumental for selecting the proper scales at which to conduct data analysis. In line with the efforts described above, and to further explore the information processed by such frameworks, the integration of uncertainty information into common single-scale data analysis tasks is also addressed. We propose developments in this regard in the fields of multivariate linear regression, multivariate statistical process control and process optimization.
The second part of this thesis is oriented towards the development of intrinsically multiscale approaches, where two such methodologies are presented in the field of process monitoring, the first aiming to detect changes in the multiscale characteristics of profiles, while the second is focused on analysing patterns evolving in the time domain.
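The multiscale decompositions underlying these frameworks build on wavelet analysis. As a hedged illustration of the basic building block only (a single orthonormal Haar analysis/synthesis step; the thesis's frameworks add uncertainty propagation and handling of missing/multirate data, which this sketch omits):

```python
import numpy as np

def haar_decompose(x):
    """One Haar analysis step: split a signal (even length) into a coarse
    approximation and a detail sequence; orthonormal weights preserve energy."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (coarse scale)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (fine scale)
    return a, d

def haar_reconstruct(a, d):
    """Invert one Haar step exactly."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.array([4.0, 2.0, 5.0, 7.0])
a, d = haar_decompose(x)
```

Iterating the analysis step on successive approximations yields the dyadic scale hierarchy on which scale selection, denoising, and uncertainty-aware monitoring can operate.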