Dissertations / Theses on the topic 'Optimization-based modeling'

To see the other types of publications on this topic, follow the link: Optimization-based modeling.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Optimization-based modeling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Akhlagi, Ali. "A Modelica-based framework for modeling and optimization of microgrids." Thesis, KTH, Energiteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263037.

Full text
Abstract:
Microgrids have lately drawn much attention due to their considerable financial benefits and the increasing concerns about environmental issues. A solution that can address different engineering problems - from design to operation - is desired for practical reasons and to ensure consistency of the analyses. In this thesis, the capabilities of a Modelica-based framework are investigated for various microgrid optimization problems. Various sizing and scheduling problems are successfully formulated and optimized using nonlinear and physical component models, covering both electrical and thermal domains. Another focus of the thesis is to test the optimization platform when varying the problem formulation; performance and robustness tests have been performed with different boundary conditions and system setups. The results show that the technology can effectively handle complex scheduling strategies such as Model Predictive Control and Demand Charge Management. In sizing problems, although the platform can efficiently size the components while simultaneously solving for the economic load dispatch over short horizons (weekly or monthly), the implemented approach would require adaptations to become efficient on longer horizons (yearly).
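The economic load dispatch mentioned in this abstract can be illustrated, outside Modelica, with a toy linear program. All figures below (demand profile, tariffs, generator data) are invented for the sketch:

```python
import numpy as np
from scipy.optimize import linprog

# Invented hourly demand, tariffs and generator data for a 4-hour horizon.
demand = np.array([40.0, 55.0, 70.0, 60.0])       # kW
gen_cost = 0.12                                    # $/kWh, on-site generator
grid_price = np.array([0.10, 0.15, 0.25, 0.15])    # $/kWh, time-of-use tariff
gen_cap = 50.0                                     # kW generator limit
T = len(demand)

# Decision vector x = [gen_0..gen_3, grid_0..grid_3]; minimize total cost
# subject to gen_t + grid_t = demand_t each hour.
c = np.concatenate([np.full(T, gen_cost), grid_price])
A_eq = np.hstack([np.eye(T), np.eye(T)])
bounds = [(0, gen_cap)] * T + [(0, None)] * T

res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds)
gen_schedule = res.x[:T]   # generator runs only when grid power is pricier
```

In this instance the solver idles the generator in the cheap off-peak hour and runs it at capacity whenever the tariff exceeds its marginal cost, which is the qualitative behaviour an economic dispatch should show.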
APA, Harvard, Vancouver, ISO, and other styles
2

Yaoumi, Mohamed. "Energy modeling and optimization of protograph-based LDPC codes." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0224.

Full text
Abstract:
There are different types of error correction codes (ECCs), each of which offers different trade-offs in terms of decoding performance and energy consumption. We propose to address this problem for Low-Density Parity Check (LDPC) codes. In this work, we considered LDPC codes constructed from protographs together with a quantized Min-Sum decoder, for their good performance and efficient hardware implementation. We used a method based on Density Evolution to evaluate the finite-length performance of the decoder for a given protograph. Then, we introduced two models to estimate the energy consumption of the quantized Min-Sum decoder. From these models, we developed an optimization method in order to select protographs that minimize the decoder energy consumption while satisfying a given performance criterion. The proposed optimization method was based on a genetic algorithm called differential evolution. In the second part of the thesis, we considered a faulty LDPC decoder, and we assumed that the circuit introduces some faults in the memory units used by the decoder. We then updated the memory energy model so as to take into account the noise in the decoder. Therefore, we proposed an alternate method to optimize the model parameters so as to minimize the decoder energy consumption for a given protograph.
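The abstract names differential evolution as the search engine for protograph selection. As a rough illustration of that engine only (the real design space is discrete protographs; the energy and error-rate expressions below are invented stand-ins, not the thesis models):

```python
import numpy as np
from scipy.optimize import differential_evolution

def decoder_energy(x):
    """Invented stand-in models: energy grows with quantization bits and
    iterations, while a mock error rate must stay below 1e-3 (soft penalty)."""
    bits, iters = x
    energy = bits * iters
    error_rate = np.exp(-0.3 * bits * np.sqrt(iters))
    penalty = 1e6 * max(0.0, error_rate - 1e-3)   # performance criterion
    return energy + penalty

# Minimize energy over (quantization bits, decoding iterations).
result = differential_evolution(decoder_energy,
                                bounds=[(2, 8), (1, 50)], seed=0)
```

The optimizer drives the design onto the boundary of the performance constraint, trading iterations against quantization, which mirrors the energy-versus-performance trade-off the thesis explores.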
3

Moore, Roxanne Adele. "Value-based global optimization." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44750.

Full text
Abstract:
Computational models and simulations are essential system design tools that allow for improved decision making and cost reductions during all phases of the design process. However, the most accurate models are often computationally expensive and can therefore only be used sporadically. Consequently, designers are often forced to choose between exploring many design alternatives with less accurate, inexpensive models and evaluating fewer alternatives with the most accurate models. To achieve both broad exploration of the alternatives and accurate determination of the best alternative with reasonable costs incurred, surrogate modeling and variable accuracy modeling are used widely. A surrogate model is a mathematically tractable approximation of a more expensive model based on a limited sampling of that model, while variable accuracy modeling involves a collection of different models of the same system with different accuracies and computational costs. As compared to using only very accurate and expensive models, designers can determine the best solutions more efficiently using surrogate and variable accuracy models because obviously poor solutions can be eliminated inexpensively using only the less expensive, less accurate models. The most accurate models are then reserved for discerning the best solution from the set of good solutions. In this thesis, a Value-Based Global Optimization (VGO) algorithm is introduced. The algorithm uses kriging-like surrogate models and a sequential sampling strategy based on Value of Information (VoI) to optimize an objective characterized by multiple analysis models with different accuracies. It builds on two primary research contributions. The first is a novel surrogate modeling method that accommodates data from any number of analysis models with different accuracies and costs. The second contribution is the use of Value of Information (VoI) as a new metric for guiding the sequential sampling process for global optimization. 
In this manner, the cost of further analysis is explicitly taken into account during the optimization process. Results characterizing the algorithm show that VGO outperforms Efficient Global Optimization (EGO), a similar global optimization algorithm that is considered to be the current state of the art. It is shown that when cost is taken into account in the final utility, VGO achieves a higher utility than EGO with statistical significance. In further experiments, it is shown that VGO can be successfully applied to higher dimensional problems as well as practical engineering design examples.
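EGO-style sequential sampling is driven by an acquisition function; the classic choice is expected improvement under a Gaussian surrogate prediction (VGO replaces this with a Value of Information criterion, which is not reproduced here). A minimal sketch with invented candidate predictions:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI for minimization: E[max(f_best - Y, 0)] with Y ~ N(mu, sigma^2)."""
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (f_best - mu) / sigma
        ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    # At sigma == 0 the prediction is certain: EI is the plain improvement.
    return np.where(sigma > 0, ei, np.maximum(f_best - mu, 0.0))

# Three candidates: certain-but-poor, slightly better, and highly uncertain.
mu = np.array([1.0, 0.5, 1.0])
sigma = np.array([0.0, 0.1, 1.0])
ei = expected_improvement(mu, sigma, f_best=0.9)   # sample where EI is largest
```

EI balances exploitation (low predicted mean) against exploration (high surrogate uncertainty); VoI-based criteria additionally weigh the cost of querying each analysis model.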
4

Clough, Joshua Alan. "Modeling and optimization of turbine-based combined-cycle engine performance." College Park, Md. : University of Maryland, 2004. http://hdl.handle.net/1903/2094.

Full text
Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2004.
Thesis research directed by: Dept. of Aerospace Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
5

Lam, Remi Roger Alain Paul. "Surrogate modeling based on statistical techniques for multi-fidelity optimization." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90673.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 71-74).
Designing and optimizing complex systems generally requires the use of numerical models. However, it is often too expensive to evaluate these models at each step of an optimization problem. Instead, surrogate models can be used to explore the design space, as they are much cheaper to evaluate. Constructing a surrogate becomes challenging when different numerical models are used to compute the same quantity, but with different levels of fidelity (i.e., different levels of uncertainty in the models). In this work, we propose a method based on statistical techniques to build such a multi-fidelity surrogate. We introduce a new definition of fidelity in the form of a variance metric. This variance is characterized by expert opinion and can vary across the design space. Gaussian processes are used to create an intermediate surrogate for each model. The uncertainty of each intermediate surrogate is then characterized by a total variance, combining the posterior variance of the Gaussian process and the fidelity variance. Finally, a single multi-fidelity surrogate is constructed by fusing all the intermediate surrogates. One of the advantages of the approach is the multi-fidelity surrogate's ability to integrate models whose fidelity changes over the design space, thus relaxing the common assumption of hierarchical relationships among models. The proposed approach is applied to two aerodynamic examples: the computation of the lift coefficient of a NACA 0012 airfoil in the subsonic regime and of a biconvex airfoil in both the subsonic and the supersonic regimes. In these examples, the multi-fidelity surrogate mimics the behavior of the higher fidelity samples where available, and uses the lower fidelity points elsewhere. The proposed method is also able to quantify the uncertainty of the multi-fidelity surrogate and identify whether the fidelity or the sampling is the principal source of this uncertainty.
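The fusion step described here can be sketched with a simple precision-weighted combination, where each model's total variance is its GP posterior variance plus its expert-assigned fidelity variance. This is one plausible fusion rule consistent with the abstract, not necessarily the exact one used in the thesis:

```python
import numpy as np

def fuse_surrogates(means, post_vars, fidelity_vars):
    """Precision-weighted fusion of per-model surrogate predictions, with
    each model's total variance = GP posterior variance + fidelity variance."""
    means = np.asarray(means, float)
    total_var = np.asarray(post_vars, float) + np.asarray(fidelity_vars, float)
    precision = 1.0 / total_var
    fused_mean = (precision * means).sum(axis=0) / precision.sum(axis=0)
    fused_var = 1.0 / precision.sum(axis=0)
    return fused_mean, fused_var

# High-fidelity prediction (no fidelity variance) vs a low-fidelity one:
mean, var = fuse_surrogates(means=[1.0, 2.0],
                            post_vars=[0.1, 0.1],
                            fidelity_vars=[0.0, 0.9])
```

The fused mean lands close to the trusted (low total variance) model, and the fused variance is smaller than either input's, which is the qualitative behaviour described for the multi-fidelity surrogate.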
by Rémi Lam.
S.M.
6

Paul, Ratnadeep. "Modeling and Optimization of Powder Based Additive Manufacturing (AM) Processes." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1378113813.

Full text
7

Bracey, Marcus J. "Dynamic Modeling of Thermal Management System with Exergy Based Optimization." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1503682474459341.

Full text
8

Oremland, Matthew Scott. "Techniques for mathematical analysis and optimization of agent-based models." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/25138.

Full text
Abstract:
Agent-based models are computer simulations in which entities (agents) interact with each other and their environment according to local update rules. Local interactions give rise to global dynamics. These models can be thought of as in silico laboratories that can be used to investigate the system being modeled. Optimization problems for agent-based models are problems concerning the optimal way of steering a particular model to a desired state. Given that agent-based models have no rigorous mathematical formulation, standard analysis is difficult, and traditional mathematical approaches are often intractable. This work presents techniques for the analysis of agent-based models and for solving optimization problems with such models. Techniques include model reduction, simulation optimization, conversion to systems of discrete difference equations, and a variety of heuristic methods. The proposed strategies are novel in their application; results show that for a large class of models, these strategies are more effective than existing methods.
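The idea of converting an agent-based model into a system of difference equations can be illustrated with a minimal well-mixed infection ABM and its mean-field reduction; all parameters are invented:

```python
import random

def abm_infected_fraction(n_agents=2000, n_seed_infected=20, beta=0.3,
                          steps=20, seed=1):
    """Minimal well-mixed infection ABM: each susceptible agent becomes
    infected with probability beta * (current infected fraction) per step."""
    rng = random.Random(seed)
    infected = [k < n_seed_infected for k in range(n_agents)]
    for _ in range(steps):
        frac = sum(infected) / n_agents
        for k in range(n_agents):
            if not infected[k] and rng.random() < beta * frac:
                infected[k] = True
    return sum(infected) / n_agents

def mean_field(i0=0.01, beta=0.3, steps=20):
    """The same local update rule reduced to a single difference equation."""
    i = i0
    for _ in range(steps):
        i = i + beta * i * (1 - i)
    return i

abm = abm_infected_fraction()
ode = mean_field()
```

For a well-mixed population the stochastic simulation tracks the deterministic reduction closely, and the reduced model is then cheap enough for the optimization techniques the thesis discusses.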
Ph. D.
9

Steffensen, Martin-Alexander. "Maritime fleet size and mix problems : An optimization based modeling approach." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for marin teknikk, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18759.

Full text
Abstract:
This master's thesis addresses the maritime fleet size and mix problem (MFSMP). Finding the optimal fleet size and mix of ships for future needs is arguably the single most important decision of a ship owner. This thesis has examined how accurately a developed mathematical formulation of the problem predicts fleet demand under various conditions. The FSM model that has been studied is an extension of a model already established by the MARFLIX project. Because of the thesis's link to the MARFLIX project, the considered shipping segment is deep-sea Ro-Ro. To test how accurately the FSM model creates a fleet that can handle complex routing constraints, a deployment model has been developed. The consistency of the model under different time frames, varying bunker costs, and the effects of using continuous instead of integer variables in the FSM model were also tested. The major finding of the work was that the fleet proposed by the FSM model, in its current form, is often undersized. The fleet size and mix problem is usually considered a strategic problem, with time horizons up to several years. However, this particular model performed better for shorter time frames. Using continuous variables for the different trips undertaken by the fleet proved to have little impact on the fleet composition, although the loss of a vessel could occur. The method proved, however, to be significantly faster than using integer variables. Changes in the cost of fuel had an immense impact on the fleet composition, and one should always be clear on the effects that fluctuations in fuel costs have on a fleet. In general, when the price increased, the fleet got larger and a larger portion of it was slow steamed. Further work should be done on improving the routing capabilities of the FSM model. In its present form the model cannot be relied upon as the only means of establishing the actual optimal fleet. It can, however, be used as guidance.
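A fleet size and mix model of this kind is, at its core, an integer program. A deliberately tiny sketch (invented capacities, costs and demand) also shows why the continuous-versus-integer question studied in the thesis matters: here the continuous relaxation charters 2.4 large vessels for 16.8, while the integer optimum costs 17:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Invented data: two vessel types, one aggregate demand to cover.
capacity = np.array([10.0, 25.0])   # cargo capacity per vessel
cost = np.array([3.0, 7.0])         # charter cost per vessel
demand = 60.0

res = milp(c=cost,
           constraints=LinearConstraint(capacity[np.newaxis, :],
                                        lb=[demand], ub=[np.inf]),
           integrality=np.ones_like(cost, dtype=int),
           bounds=Bounds(0, np.inf))
fleet = res.x   # integer number of each vessel type to charter
```

Real MFSMP formulations add routes, time periods and deployment variables, but the covering structure is the same.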
10

Valentine, Jane E. "Modeling and optimization of a MEMS membrane-based acoustic-wave biosensor." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/227.

Full text
Abstract:
Rapid, reliable, and inexpensive detection of biological and chemical species is highly advantageous in numerous situations. The ability to simultaneously detect multiple targets, for example in medical or environmental testing settings, in areas where modern laboratory equipment is not widely available, is especially desirable. The combination of acoustic wave sensing and MicroElectroMechanical Systems (MEMS) technology leads to a sensor with these capabilities. In this thesis we describe the modeling and optimization of such a membrane-based acoustic wave MEMS biosensor. Starting from an analytical model of the vibration behavior of an unloaded membrane, we model the vibration behavior of a mass-loaded membrane both computationally (using Finite Element Methods) and by using matrix perturbation analysis to develop a computationally efficient approximate analytical solution. Comparing the two methods, we find that our two models show excellent agreement for the range of mass loadings we expect to see. We then note that we can alter sensor performance by controlling the placement of chemically or biologically functionalized regions on the membrane. Our approximate analytical model lets us efficiently predict the effects of functionalization geometries, and so we can optimize performance according to a number of metrics. We develop several optimization objectives to take advantage of our ability to control sensitivity and to multiplex. We develop precise formulations for the objective functions and for constraints, both physical and design-related. We then solve our optimization problems using two complementary methods. The first is an analytical approach we developed, which is feasible for simpler problems, while the second is a stochastic optimization routine using genetic algorithms for more complex problems. Using this method we were able to confirm the solutions given by our analytical approach, and find solutions for more complicated optimization problems. 
Our solutions allow us to examine the tradeoffs involved in deciding where to place regions of added mass, including tradeoffs between patches and between modes. This helps to elucidate the dynamics of our system, and raises questions for further research. Finally we discuss future research directions, including further optimization possibilities for single sensors as well as for systems of multiple sensors.
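The matrix perturbation analysis described above rests on the standard first-order result that, for K·φ = λ·M·φ with M-orthonormal modes, a small mass change ΔM shifts each eigenvalue by Δλ_i ≈ -λ_i · φ_iᵀ ΔM φ_i. A numerical check on an arbitrary small system (random matrices, not a membrane model):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)      # symmetric positive-definite "stiffness"
M = np.eye(n)                    # "mass" matrix

lam, phi = eigh(K, M)            # K phi = lam M phi, with phi.T @ M @ phi = I

dM = np.zeros((n, n))
dM[2, 2] = 1e-5                  # small added point mass at one dof (hypothetical)

# First-order perturbation estimate of each eigenvalue shift:
dlam_pred = np.array([-lam[i] * phi[:, i] @ dM @ phi[:, i] for i in range(n)])

# Exact shifts for comparison:
dlam_true = eigh(K, M + dM, eigvals_only=True) - lam
```

Added mass can only lower the natural frequencies, and the first-order estimate matches the exact shift to second order in the loading, which is why such approximations are cheap enough to drive the functionalization-placement optimization.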
11

Stanko, Milan Edvard Wolf. "Topics in Production Systems Modeling: Separation, Pumping and Model Based Optimization." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for petroleumsteknologi og anvendt geofysikk, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-26826.

Full text
Abstract:
This thesis addresses three distinct topics within oilfield production technology: 1) Inline oil-water separation for subsea applications, 2) Model based constrained optimization for production networks of high water cut wells boosted by ESPs (Downhole Electric Submersible Pumps), and 3) Hydraulic analysis of a novel configured hexagonal positive displacement pump. While each of the three topics in the thesis is investigated and discussed in a stand-alone manner, they all share a common industry objective; increasing the yield and prolonging the viable production period of hydrocarbon producing fields. More specifically, they reside within two important classes of production technology challenges; (a) boosting the deliverability and the flow of wells with high water content, and (b) separating and removing of water from hydrocarbon streams as close as possible to the source in a production gathering system. Numerical modeling is the main methodology employed in the three topics, where modeling results are substantiated by field scale or laboratory generated data. The inline oil-water separation technology addressed in this thesis is based on a controlled and distributed tapping from the lower side of a water rich stream flowing in an inclined pipe spool. The long term objective is to develop a capability for seabed separation near the subsea wells in mature offshore fields with high water production and declining reservoir pressure. The intention is to reduce the backpressure on the wells and increase or maintain their production level. The production gain is achieved by harnessing and hydraulically manipulating the energy of the inlet mixture stream to reduce the backpressure exerted by the outlet streams. Important and unique features of the concept are; the separation and phase splitting do not consume external energy, there are no major moving parts, and there is inherent performance tolerance to deviations from the design set-points. 
The thesis expands an earlier IPT/NTNU concept verification research project (sponsored by the Research Council program DEMO 2000) which involved experimenting with a low pressure full scale separator test facility. This thesis progresses the relevant previous knowledge and information from a concept validation level to establishing and validating a more detailed design strategy and a more focused performance design for the separator. The thesis brings the investigated separation approach to a mature level where the fluid mechanics design aspects are largely clear and understood and are ready as an input for the mechanical design of a separator prototype. The separation was analyzed from the multiphase hydraulic design point of view using numerical experimentation as the primary tool. The research methodology comprised the following tasks: (a) developing a procedure to assess the potential production gain of installing the inline separator in a subsea production system and to identify the design requirements for obtaining a specified separator performance, (b) introducing and demonstrating concepts to quantify the drainage performance of single and multiple tapping points, (c) validating the usefulness of 3D CFD (Computational Fluid Dynamics) methods to represent the fluid dynamics details of an oil-in-water dispersion and separation, (d) employing the same 3D CFD model to reproduce the laboratory experimental results. The other two topics in the thesis constitute a response to emerging field-scale problems where the industry has called for immediate and sound model-based diagnostics and model-based investigative design. The second topic addresses an optimization strategy for large oil production systems consisting of clusters of high water cut, low GOR oil wells producing by ESPs.
The production streams of the wells converge through a multi-branched surface gathering system into a system of main flow conduits leading to a single processing plant. The objective is to perform a model-based numerical optimization to maximize oil production and reduce lift costs by modifying ESP rotor rotation frequency while complying with multiple operational constraints. While industry currently possesses tools to perform such tasks, the outcome is inconsistent and yields poor optimization results when modeling large systems with many wells, complex networks and a large number of constraints. An investigative task to clarify the source of the difficulties was deemed necessary. The optimization technique is described in the thesis and employed to quantify the achievable production gains. It also identifies the computational hurdles encountered in computing the global production optimum. The thesis reports and discusses modeling and optimization using three cases: two are scaled-down synthetic cases to establish the fundamentals of the computational process, and one is a field-scale production system used to capture the impact of system complexity. The observed outcome and the conclusions of the investigation provide bases for a robust and consistent production optimization program of a large field. The details of this industrial scale project are beyond the scope of this thesis. The third topic deals with modeling and critical analysis of a novel design of a positive displacement pump for drilling mud circulation. The concept has been commercialized and launched to the offshore market in recent years (commercially called "Hex pump"). The obvious attractiveness of the pump is its compactness and its small footprint when mounted on congested offshore platforms. However, the pumping performance of the pilot installation was very poor, exhibiting excessive pulsation, vibration, mechanical failures and noise.
These problems brought expensive and critical offshore drilling operations to a halt. It was recognized at this stage that the unique and innovative design features of the pump, together with the criticality of its good and safe performance, warranted a thorough model-based concept analysis and verification. The thesis describes the hydraulic performance modeling and its use to identify the concept's inherent pulsation-generating source. The conducted modeling and its interpretation are of a novel nature, and the results revealed a fundamental conceptual flaw. The research outcome had a prompt and immediate impact on the industry's decision on deploying this novel pump type.
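The ESP network optimization in the second topic can be caricatured as follows: liquid rate assumed affine in pump frequency, oil maximized under a shared liquid-handling limit. This is an invented toy instance, far simpler than the field-scale nonlinear networks discussed above:

```python
import numpy as np
from scipy.optimize import linprog

# Invented data for two high-water-cut ESP wells sharing one facility.
wc = np.array([0.90, 0.80])       # water cut per well
slope = np.array([40.0, 35.0])    # liquid rate per Hz of pump frequency (m3/d)
f_min, f_max = 40.0, 60.0         # admissible ESP frequencies (Hz)
liquid_cap = 4000.0               # facility liquid-handling limit (m3/d)

# Oil rate = slope_i * f_i * (1 - wc_i); linprog minimizes, so negate.
c = -(slope * (1.0 - wc))
res = linprog(c, A_ub=slope[np.newaxis, :], b_ub=[liquid_cap],
              bounds=[(f_min, f_max)] * 2)
frequencies = res.x   # the well with the lower water cut is pushed to its limit
```

Even this linear caricature reproduces the expected allocation: scarce liquid-handling capacity goes first to the well with the best oil-per-liquid ratio.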
12

Maragno, Donato. "Optimization with machine learning-based modeling: an application to humanitarian food aid." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21621/.

Full text
Abstract:
In this thesis, we propose a machine-learning-based optimization methodology to build (part of) an optimization model in a data-driven way. This approach is useful whenever one or more relations between the decisions and their impact on the system must be modeled. Such relationships can be challenging to model manually, so machine learning is used to learn them from data. We demonstrate the potential of this method through a case study in which a predictive model is used to approximate the palatability scoring function in a typical diet problem formulation. The performance of this approach is analyzed first by embedding a Linear Regression model and then by embedding a Fully Connected Neural Network.
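The two-step recipe (learn a relation from data, then embed it in the optimization model) can be sketched for the linear-regression case of the diet/palatability study; all data, coefficients and thresholds below are synthetic:

```python
import numpy as np
from scipy.optimize import linprog

# Step 1: fit a palatability model on synthetic ration data.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))          # quantities of 3 foods
w_true = np.array([0.2, 0.9, -0.4])               # hidden "true" scoring
y = X @ w_true + rng.normal(0.0, 0.01, size=200)  # noisy observed scores
w, *_ = np.linalg.lstsq(X, y, rcond=None)         # learned linear model

# Step 2: embed the learned model as a constraint in a diet-style LP:
# minimize cost s.t. nutrient content >= 2 and predicted palatability >= 0.3.
cost = np.array([1.0, 3.0, 0.5])
nutrient = np.array([2.0, 1.0, 1.0])
res = linprog(cost,
              A_ub=np.vstack([-nutrient, -w]),
              b_ub=[-2.0, -0.3],
              bounds=[(0.0, 1.0)] * 3)
ration = res.x
```

Because the learned model is linear, it embeds directly as an LP row; embedding a neural network instead requires a mixed-integer encoding of the activations, which is part of what makes the thesis's general methodology non-trivial.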
13

Windmann, Andreas [Verfasser], and Petra [Akademischer Betreuer] Wagner. "Optimization-based modeling of suprasegmental speech timing / Andreas Windmann ; Betreuer: Petra Wagner." Bielefeld : Universitätsbibliothek Bielefeld, 2016. http://d-nb.info/112372718X/34.

Full text
14

Agarwal, Neeraj 1975. "Neural network based modeling and simulation for the optimization of safety logic." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/84313.

Full text
15

Hariri, Mahdiar. "A study of optimization-based predictive dynamics method for digital human modeling." Diss., University of Iowa, 2012. https://ir.uiowa.edu/etd/2886.

Full text
Abstract:
This study develops theorems which generalize or improve the existing predictive dynamics method and implements them to simulate several motion tasks of a human model. Specifically, the problem of determination of contact forces (non-adhesive) between the environment and the digital human model is addressed. Determination of accurate contact forces is used in the calculation of joint torques and is important to account for human strength limitations in simulation of various tasks. It is shown that calculation of the contact forces based on the distance of the contact areas from the Zero Moment Point (ZMP) leads to unrealistic values for some of the forces. This is the approach that has been used in the past. In this work, necessary and sufficient constraints for modeling the non-adhesiveness of a contact area are presented through the definition of NCM (Normal Contact Moment) concepts. The NCM point, constraints and stability margins are the new theoretical concepts introduced. When there is only one contact area between the body and the environment, the ZMP and the NCM point coincide. In this case, the contact forces and moments are deterministic. When there is more than one contact area, the contact forces and moments are indeterminate. In this case, an optimization problem is defined based on the NCM constraints, where contact forces and moments are treated as the unknown design variables. Here, the kinematics of the motion is assumed to be known. It is shown that this approach leads to more realistic values for the contact forces and moments for a human motion task, as opposed to the ZMP-based approach. The proposed approach appears to be quite promising and needs to be fully integrated into the predictive dynamics approach of human motion simulation. Some other insights are obtained for the predictive dynamics approach of human motion simulation.
For example, it is mathematically proved and also validated that there is a need for an individual constraint to ensure that the normal component of the resultant global forces remains compressive for non-adhesive contacts between the body and the environment. Also, the ZMP constraints and stability margins are applicable for the problems where all the contacts between the environment and the body are in one plane; however, the NCM constraints and stability margins are applicable for all types of arbitrary contacts between the body and the environment. The ZMP and NCM methods are used to model the motion of a human (soldier) performing several military tasks: Aiming, Kneeling, Going Prone and Aiming in Prone Position. New collision avoidance theorems are also presented and used in these simulations.
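For the simplified case where all contacts lie in one plane and only vertical force components act (a reduction of the general force-and-moment balance, not the thesis's NCM formulation), the ZMP is just the load-weighted centroid of the contact points:

```python
import numpy as np

def zmp(points, fz):
    """ZMP of coplanar contacts under vertical force components only:
    the Fz-weighted centroid of the contact points (non-adhesive => Fz >= 0)."""
    points = np.asarray(points, float)
    fz = np.asarray(fz, float)
    if np.any(fz < 0):
        raise ValueError("non-adhesive contact requires compressive forces")
    return (points * fz[:, None]).sum(axis=0) / fz.sum()

# Two contact points one metre apart, the second carrying three times the load:
p = zmp([[0.0, 0.0], [1.0, 0.0]], [200.0, 600.0])
```

Stability margins then measure the distance of this point from the boundary of the support polygon; the NCM concepts above extend the idea to arbitrary, non-coplanar contacts.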
16

Garcés, Monge Luis. "Knowledge-based configuration : a contribution to generic modeling, evaluation and evolutionary optimization." Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2019. http://www.theses.fr/2019EMAC0003/document.

Full text
Abstract:
In a context of mass customization, the concurrent configuration of the product and its production process constitutes an important industrial challenge: numerous options or alternatives, numerous links or constraints, and a need to optimize the choices made must all be taken into account. This problem is called O-CPPC (Optimization of Concurrent Product and Process Configuration). We consider this problem as a CSP (Constraint Satisfaction Problem) and optimize it with evolutionary algorithms. A state of the art shows that: i) most studies are illustrated with examples specific to an industrial or academic case and not representative of the existing diversity; ii) optimization performance needs to be improved in order to gain interactivity and face larger problems. In response to the first point, this thesis proposes a generic model of the O-CPPC problem. This generic model is used to generate a realistic benchmark for evaluating optimization algorithms. This benchmark is then used to analyze the performance of the CFB-EA evolutionary approach. One of the strengths of this approach is to quickly propose a Pareto front near the optimum. To answer the second point, an improvement of this method is proposed and evaluated. The idea is, from a first approximate Pareto front, to ask the user to choose an area of interest and to restrict the search for solutions to this area only. This improvement results in significant computing time savings.
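The Pareto front at the heart of this workflow is simply the set of non-dominated solutions; a minimal filter, with invented two-objective (e.g. cost vs. lead-time) data:

```python
import numpy as np

def pareto_front(costs):
    """Boolean mask of non-dominated points, minimizing all objectives:
    a point is dominated if another point is <= in every objective and
    strictly < in at least one."""
    costs = np.asarray(costs, float)
    keep = np.ones(len(costs), dtype=bool)
    for i in range(len(costs)):
        dominated = (np.all(costs <= costs[i], axis=1)
                     & np.any(costs < costs[i], axis=1))
        if dominated.any():
            keep[i] = False
    return keep

# Five candidate product/process configurations (hypothetical objectives):
pts = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0], [2.5, 3.0]])
mask = pareto_front(pts)
```

Restricting the evolutionary search to a user-chosen zone of this front, as proposed above, simply means re-running the optimizer with objective bounds taken from that zone.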
APA, Harvard, Vancouver, ISO, and other styles
17

Abdallah, Zeina. "Microwave sources based on high quality factor resonators : modeling, optimization and metrology." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30267/document.

Full text
Abstract:
RF photonics technology offers an attractive alternative to classical electronic approaches in several microwave systems for military, space and civil applications. One original architecture, dubbed the optoelectronic oscillator (OEO), allows the generation of spectrally pure microwave reference frequencies when a microwave photonic link is used as a feedback loop. Various studies have been conducted during this thesis on the OEO, especially the version based on fiber ring resonators, in order to optimize and improve its phase noise performance and its long-term stability. Precise characterization and modeling of the optical resonator are the first step towards overall system design. The resonator metrology is performed using an original approach, known as RF spectral characterization. The experimental results have demonstrated that this technique is helpful both for identifying the resonator's coupling regime and for accurately determining the main resonator parameters, such as the intrinsic and extrinsic quality factors or the coupling coefficients. A second study was directed toward implementing a reliable nonlinear model of the system. In such a model, the fast photodiode requires an accurate description in order to reduce the conversion of optical amplitude noise into RF noise. A new nonlinear equivalent circuit model of a fast photodiode has been implemented in a microwave circuit simulator, Agilent ADS. This new model is able to describe the conversion of the laser relative intensity noise (RIN) into microwave phase noise at the photodiode output. An optimal optical power at the photodiode's input has been identified, at which the contribution of the laser RIN to the RF phase noise is negligible.
When it comes to practical applications, the desired performance of an OEO is threatened by various disturbances that may shift both the laser frequency and the transmission peak of the resonator, causing a malfunction of the OEO. Therefore, it is desirable to use a stabilization system to control the difference between the laser frequency and the resonator frequency. A series of tests and experiments has been carried out to investigate the possibility, on one hand, of replacing the commercial servo controller used until now in the Pound-Drever-Hall loop with a low-noise homemade one and, on the other hand, of using a semiconductor laser to reduce the system size. A detailed review of these approaches is presented.
APA, Harvard, Vancouver, ISO, and other styles
18

Zhao, Yongjun. "An Integrated Framework for Gas Turbine Based Power Plant Operational Modeling and Optimization." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/10580.

Full text
Abstract:
The deregulation of the electric power market introduced a strong element of competition, and power plant operators strive to develop advanced operational strategies to maximize profitability in this dynamic market. New methodologies for gas turbine power plant operational modeling and optimization are needed to enhance operational decision making, and thereby maximize power plant profitability by reducing operations and maintenance cost and increasing revenue. In this study, a profit-based, lifecycle-oriented, and unit-specific methodology for gas-turbine-based power plant operational modeling was developed, with power plant performance, reliability, maintenance, and market dynamics considered simultaneously. The generic methodology is applicable to a variety of optimization problems, and several applications for operational optimization were implemented using this method. A multiple time-scale method was developed for long-term generation scheduling of gas turbine power plants. This multiple time-scale approach combines the detailed granularity of day-to-day operations with global (seasonal) trends, while keeping the resulting optimization model relatively compact. Using the multiple time-scale optimization method, a profit-based outage departure planning method was developed; the key factors in this profit-based approach include power plant aging, performance degradation, reliability deterioration, and energy market dynamics. A novel approach for sequential preventive maintenance scheduling of gas-turbine-based power plants was also introduced. Finally, methods to evaluate the impact of upgrade packages on gas turbine power plant performance, reliability, and economics were developed, and the TIES methodology was applied for effective evaluation and selection of gas turbine power plant upgrade packages.
APA, Harvard, Vancouver, ISO, and other styles
19

Wolf, Ailco, and Ailco Wolf. "Comprehensive geostatistical based parameter optimization and inverse modeling of North Avra Valley, Arizona." Thesis, The University of Arizona, 2002. http://hdl.handle.net/10150/626825.

Full text
Abstract:
Geostatistics-based optimization was applied to the North Avra Valley ground water model to estimate the transmissivity field and boundary conditions that minimize the difference between the modeled and measured heads. The Sequential Self-Calibration (SSC) method was used for the inverse modeling and optimization. SSC is an iterative technique that combines geostatistics with an optimization routine to condition both transmissivity and head fields to measured data. Two calibration methodologies were compared. In the first, the inflow and outflow boundary conditions are adjusted to minimize head residuals using the uniform geometric-mean transmissivity field, and the subsequent SSC-calibrated transmissivity field is based on those initial boundary conditions. The second method ran the model-independent optimization software PEST in series with SSC. This approach calibrates the inflow and outflow boundary conditions and the transmissivity field iteratively against the head residuals; as a consequence, the inflow and outflow boundary conditions are optimized against the final geostatistics-based transmissivity field used in the model. The serial PEST-SSC calibration method produces consistently better results with respect to head residuals, by an average of 27.1 percent. The calibrated transmissivity fields resulting from both methods were compared using stochastic error analysis, showing similar results. A final model run employing the PEST-SSC method was done for a more detailed analysis, resulting in a relative error (standard deviation of head residuals divided by the head range) of only 1.5 percent.
APA, Harvard, Vancouver, ISO, and other styles
20

Bither, Cheryl Ann, and Julie A. Dougherty. "A modeling strategy for large-scale optimization based on analysis and visualization principles." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/28372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Magato, James. "Process Model and Sensor Based Optimization of Polyimide Prepreg Compaction During Composite Cure." University of Dayton / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1533144776251201.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Zhao, Liang. "Reliability-based design optimization using surrogate model with assessment of confidence level." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/1194.

Full text
Abstract:
The objective of this study is to develop an accurate surrogate modeling method for constructing surrogate models that represent the performance measures of compute-intensive simulation models in reliability-based design optimization (RBDO). In addition, an assessment method for the confidence level of the surrogate model, and a conservative surrogate model that accounts for the uncertainty of predictions on the untested design domain when the number of samples is limited, are developed and integrated into the RBDO process to ensure confidence in satisfying the probabilistic constraints at the optimal design. The effort involves: (1) developing a new surrogate modeling method that outperforms existing surrogate modeling methods in terms of accuracy for reliability analysis in RBDO; (2) developing a sampling method that efficiently and effectively inserts samples into the design domain for accurate surrogate modeling; (3) generating a surrogate model to approximate the probabilistic constraint and its sensitivity with respect to the design variables in most-probable-point-based RBDO; (4) using the sampling method with the surrogate model to approximate the performance function in sampling-based RBDO; and (5) generating a conservative surrogate model to conservatively approximate the performance function in sampling-based RBDO and ensure that the obtained optimum satisfies the probabilistic constraints. In applying RBDO to large-scale, complex engineering applications, a surrogate model is commonly used to represent the compute-intensive simulation model of the performance function. However, the accuracy of the surrogate model remains challenging for highly nonlinear, high-dimensional applications. In this work, a new method, the Dynamic Kriging (DKG) method, is proposed to construct the surrogate model accurately.
In the DKG method, a generalized pattern search algorithm is used to find an accurate optimum for the correlation parameter, and the optimal mean structure is set using basis functions selected by a genetic algorithm from candidate basis functions based on a new accuracy criterion. In addition, a sequential sampling strategy based on the confidence interval of the DKG surrogate model is proposed. By combining the sampling method with the DKG method, both efficiency and accuracy can be achieved. Using the accurate surrogate model, both most-probable-point (MPP)-based RBDO and sampling-based RBDO can be carried out. In applying the surrogate models to MPP-based RBDO and sampling-based RBDO, several efficiency strategies are proposed: (1) using a local window for surrogate modeling; (2) adapting the window size to different design candidates; (3) reusing samples in the local window; (4) using violated constraints for the surrogate model accuracy check; and (5) adapting the initial point for correlation parameter estimation. To ensure the accuracy of the surrogate model when the number of samples is limited, and to ensure that the obtained optimum design satisfies the probabilistic constraints, a conservative surrogate model using the weighted Kriging variance is developed and implemented for sampling-based RBDO.
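The sequential, confidence-interval-driven sampling idea in this abstract can be illustrated with a much simpler stand-in: zero-mean kriging with a fixed Gaussian correlation parameter, where each new sample is placed at the query point of highest predictive variance (a sketch only, not the DKG method; the test function and `theta` value are hypothetical):

```python
import numpy as np

def gp_fit_predict(X, y, Xq, theta=10.0, noise=1e-10):
    """Zero-mean kriging with Gaussian correlation R(a, b) = exp(-theta*(a-b)**2).
    Returns the predictive mean and variance at the query points Xq."""
    def corr(A, B):
        return np.exp(-theta * (A[:, None] - B[None, :]) ** 2)
    R = corr(X, X) + noise * np.eye(len(X))
    r = corr(X, Xq)
    w = np.linalg.solve(R, r)              # kriging weights
    mean = w.T @ y
    var = 1.0 - np.sum(w * r, axis=0)      # process variance normalized to 1
    return mean, np.maximum(var, 0.0)

def f(x):
    return np.sin(3 * x)                   # stand-in for an expensive simulation

Xq = np.linspace(0.0, 1.0, 101)            # query grid over the design domain
X = np.array([0.0, 0.5, 1.0])              # initial samples
y = f(X)
for _ in range(3):                         # sequential, variance-driven sampling
    mean, var = gp_fit_predict(X, y, Xq)
    x_new = Xq[np.argmax(var)]             # sample where the model is least certain
    X = np.append(X, x_new)
    y = np.append(y, f(x_new))
```

Each added sample shrinks the predictive variance where the surrogate was weakest, which is the intuition behind refining the model only as far as the reliability analysis requires.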
APA, Harvard, Vancouver, ISO, and other styles
23

Lu, Tao. "A Metrics-based Sustainability Assessment of Cryogenic Machining Using Modeling and Optimization of Process Performance." UKnowledge, 2014. http://uknowledge.uky.edu/me_etds/47.

Full text
Abstract:
The development of a sustainable manufacturing process requires a comprehensive evaluation method and a fundamental understanding of the process. Coolant application is a critical sustainability concern in the widely used machining process, and cryogenic machining is considered a candidate for sustainable coolant application. However, the lack of comprehensive evaluation methods leaves significant uncertainties about the overall sustainability performance of cryogenic machining, and the lack of practical application guidelines based on a scientific understanding of its heat transfer mechanism keeps process optimization from achieving the most sustainable performance. In this dissertation, based on the proposed Process Sustainability Index (ProcSI) methodology, the sustainability performance of the cryogenic machining process is optimized, with application guidelines established by scientific modeling of the heat transfer mechanism in the process. Based on the experimental results, the process optimization is carried out with a Genetic Algorithm (GA). The metrics-based ProcSI method considers all three major aspects of sustainable manufacturing, namely economy, environment and society, based on the 6R concept and the total life-cycle perspective. There are sixty-five metrics, categorized into six major clusters. Data for all relevant metrics are collected, normalized, weighted, and then aggregated to form the ProcSI score as an overall judgment of the sustainability performance of the process. The ProcSI method focuses on process design from the manufacturer's perspective, aiming to improve the sustainability performance of the manufactured products and the manufacturing system. A heat transfer analysis of cryogenic machining with flank-side liquid nitrogen jet delivery is carried out, based on micro-scale high-speed temperature measurement experiments.
The experimental results are processed with an innovative inverse heat transfer solution method to calculate the surface heat transfer coefficient at various locations over a wide temperature range. Based on the results, application guidelines, including a minimal but sufficient coolant flow rate, are established. Cryogenic machining experiments are carried out, and the ProcSI evaluation is applied to the experimental scenario. Based on the ProcSI evaluation, the optimization process implemented with GA provides optimal machining process parameters for minimum manufacturing cost, minimal energy consumption, or the best overall sustainability performance.
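The collect-normalize-weight-aggregate step described in this abstract can be sketched generically (an illustration only, not the actual ProcSI formula or its sixty-five metrics; the metric names, reference values and weights below are hypothetical):

```python
def aggregate_score(values, best, worst, weights):
    """Normalize each metric to [0, 1] against its best/worst reference
    values (1 = best), then combine with a weighted average."""
    total = sum(weights)
    score = 0.0
    for v, b, w, wt in zip(values, best, worst, weights):
        norm = (v - w) / (b - w)            # linear normalization
        score += wt * min(max(norm, 0.0), 1.0)
    return score / total

# Two hypothetical metrics: energy use (lower is better, so best=0)
# and tool life (higher is better, so best=100).
score = aggregate_score(values=[30.0, 75.0],
                        best=[0.0, 100.0],
                        worst=[60.0, 0.0],
                        weights=[2.0, 1.0])
```

A GA-based optimizer like the one in the dissertation would then treat such a score as the fitness to maximize over the machining process parameters.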
APA, Harvard, Vancouver, ISO, and other styles
24

Barceló, Adrover Salvador. "An advanced Framework for efficient IC optimization based on analytical models engine." Doctoral thesis, Universitat de les Illes Balears, 2013. http://hdl.handle.net/10803/128968.

Full text
Abstract:
Based on the challenges arising from technology scaling, this thesis develops and evaluates a complete framework for assessing sensitivity to SET propagation. The framework comprises a number of processing tools capable of handling highly complex circuits efficiently. Various SET propagation metrics have been proposed, considering the impact of logic, electric and combined logic-electric masking. Such metrics provide a valuable vehicle for ranking either the in-circuit regions most susceptible to propagating SETs toward the circuit outputs or the circuit outputs most susceptible to receiving them. An efficient and customizable true-path-finding algorithm with a specific logic system has been constructed and its efficacy demonstrated on large benchmark circuits. It has been shown that the delay of a path depends on the sensitization vectors applied to the gates within the path; in some cases, this variation is comparable to the one caused by process parameter variations.
APA, Harvard, Vancouver, ISO, and other styles
25

Zurek, Eduardo. "System optimization for micron and sub-micron particle identification using spectroscopy-based techniques." [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001635.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Agca, Esra. "Optimization-based Logistics Planning and Performance Measurement for Hospital Evacuation and Emergency Management." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/51551.

Full text
Abstract:
This dissertation addresses the development of optimization models for hospital evacuation logistics, as well as analyses of various resource management strategies in terms of the equity of the evacuation plans generated. We first formulate the evacuation transportation problem of a hospital as an integer programming model that minimizes the total evacuation risk, consisting of the threat risk necessitating evacuation and the transportation risk experienced en route. Patients, categorized based on medical conditions and care requirements, are allocated to a limited fleet of vehicles with various medical capabilities and capacities to be transported to receiving beds, categorized much like patients, at alternative facilities. We demonstrate structural properties of the underlying transportation network that enable the model to be used for both strategic planning and operational decision making. Next, we examine the resource management and equity issues that arise when multiple hospitals in a region are evacuated. The efficiency and equity of the allocation of resources, including a fleet of vehicles, receiving beds, and each hospital's loading capacity, determine the performance of the optimal evacuation plan. We develop an equity modeling framework in which we consider equity among evacuating hospitals and among patients. The range of equity of optimal solutions is investigated, and properties of optimal and equitable solutions based on risk-based utility functions are analyzed. Finally, we study the integration of the transportation problem with the preceding hospital building evacuation. Since, in practice, the transportation plan depends on the pace of building evacuation, we develop a model that generates the transportation plan subject to the output of the hospital building evacuation. The optimal evacuation plans are analyzed with respect to resource utilization and patient prioritization schemes.
Parametric analysis of the resource constraints is provided along with managerial insights into the assessment of evacuation requirements and resource allocation. In order to demonstrate the performance of the proposed models, computational results are provided using case studies with real data obtained from the second largest hospital in Virginia.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
27

Ploé, Patrick. "Surrogate-based optimization of hydrofoil shapes using RANS simulations." Thesis, Ecole centrale de Nantes, 2018. http://www.theses.fr/2018ECDN0012/document.

Full text
Abstract:
This thesis presents a practical hydrodynamic optimization framework for hydrofoil shape design. Automated simulation-based optimization of hydrofoils is a challenging process: it may involve conflicting optimization objectives, but also imposes a trade-off between the cost of numerical simulations and the limited budgets available for ship design. The optimization framework is based on sequential sampling and surrogate modeling. Gaussian Process Regression (GPR) is used to build a predictive model from data issued from fluid simulations of selected hydrofoil geometries. The GPR model is then combined with other criteria into an acquisition function that is evaluated over the design space to define new query points, which are added to the data set in order to improve the model. A custom acquisition function is developed, based on GPR variance and cross-validation of the data. A hydrofoil geometric modeler is also developed to automatically create the hydrofoil shapes from the parameters determined by the optimizer. To complete the optimization loop, FINE/Marine, a RANS flow solver, is embedded into the framework to perform the fluid simulations. Optimization capabilities are tested on analytical test cases. The results show that the custom acquisition function is more robust than other existing acquisition functions when tested on difficult functions. The entire optimization framework is then tested on 2D hydrofoil sections and on 3D hydrofoil optimization cases with free surface. In both cases, the optimization process performs well, resulting in optimized hydrofoil shapes and confirming the results obtained on the analytical test cases. However, the optimum is shown to be sensitive to operating conditions.
APA, Harvard, Vancouver, ISO, and other styles
28

Cheng, Zhanping. "Value based management of supplier relationships and supply contracts : quantitative modeling, valuation and portfolio optimization based on financial investment theories /." Lohmar : Eul, 2009. http://d-nb.info/997314826/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Thomas, George L. "Biogeography-Based Optimization of a Variable Camshaft Timing System." Cleveland State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=csu1419775790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Degenhardt, Richard Kennedy III. "Self-collision avoidance through keyframe interpolation and optimization-based posture prediction." Thesis, University of Iowa, 2014. https://ir.uiowa.edu/etd/1446.

Full text
Abstract:
Simulating realistic human behavior on a virtual avatar is a difficult task. Because the simulated environment does not adhere to the same physical principles as the real world, the avatar is capable of achieving infeasible postures. In an attempt to obtain realistic human simulation, real-world constraints are imposed on the non-sentient being. One such constraint, and the topic of this thesis, is self-collision avoidance. For the purposes of this topic, a posture is defined solely as a collection of angles formed by each joint on the avatar. The goal of self-collision avoidance is to eliminate any posture in which multiple body parts attempt to occupy the same space. My work extends this definition to also include collision avoidance with objects attached to the body, such as a backpack or armor. In order to prevent these collisions from occurring, I have implemented an effort-based approach for correcting afflicted postures. This technique specifically pertains to postures that are sequenced together to animate the avatar, so the animation's coherence and defining characteristics must be preserved. My approach to this problem is unique in that it strategically blends the concept of keyframe interpolation with an optimization-based strategy for posture prediction. Although considerable work has been done on methods for keyframe interpolation, there has been minimal progress toward integrating a realistic collision response strategy. Additionally, I test this optimization-based approach using a complex kinematic human model and investigate the use of the results as input to an existing dynamic motion prediction system.
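Given the abstract's definition of a posture as a collection of joint angles, plain keyframe interpolation, the starting point that the collision-avoidance correction then adjusts, reduces to blending angles (a minimal sketch; production animation systems typically interpolate rotations more carefully, e.g. with quaternions):

```python
def lerp_posture(pose_a, pose_b, t):
    """Linearly interpolate two keyframe postures (lists of joint
    angles in degrees) at parameter t in [0, 1]."""
    return [(1.0 - t) * a + t * b for a, b in zip(pose_a, pose_b)]

# Halfway between two keyframes of a hypothetical 3-joint chain:
mid = lerp_posture([0.0, 90.0, 45.0], [90.0, 0.0, 45.0], 0.5)
# -> [45.0, 45.0, 45.0]
```

The thesis's contribution is what happens when such an interpolated posture collides with itself: an optimization-based posture predictor replaces the offending frame while preserving the animation's character.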
APA, Harvard, Vancouver, ISO, and other styles
31

Kloß, Sebastian. "Simulation-Optimization of the Management of Sensor-Based Deficit Irrigation Systems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-188762.

Full text
Abstract:
Current research concentrates on ways to investigate and improve water productivity (WP), as agriculture is today's predominant freshwater consumer, accounting for 70% of use on average and up to 93% in some regions. A growing world population will require more food and thus more water for cultivation. Regions that are already affected by physical water scarcity and which depend on irrigation for growing crops will face even greater challenges regarding their water supply. Other problems in such regions are a variable water supply, inefficient irrigation practices, and over-pumping of available groundwater resources, with further adverse effects on the ecosystem. To face those challenges, strategies are needed that use the available water resources more efficiently and allow farming in a more sustainable way. This work focused on the management of sensor-based deficit irrigation (DI) systems and improvements of WP through a combined approach of simulation-optimization and irrigation experiments. In order to improve irrigation control, a new sensor called the pF-meter was employed, which extends the measurement range of the commonly used tensiometers from pF 2.9 to pF 7. The following research questions were raised: (i) Is this approach a suitable strategy to improve WP? (ii) Is the sensor suitable for irrigation control? (iii) Which crop growth models are suitable to be part of this approach? (iv) Can the combined application with experiments prove an increase of WP? The stochastic simulation-optimization approach allowed deriving parameter values for optimal irrigation control under sensor-based full and deficit irrigation strategies, with the objective of achieving high WP with high reliability. The parameters for irrigation control included soil-water-potential thresholds, reflecting the working principle of plant transpiration, in which pressure gradients are transmitted from the air through the plant into the root zone.
Optimal parameter values for full and deficit irrigation strategies were tested in container-based irrigation experiments with drip-irrigated maize in a vegetation hall and compared to schedule-based irrigation strategies with regard to WP and water consumption. Observation data from one of the treatments was afterwards used in a simulation study to systematically investigate the parameters for implementing effective setups of DI systems. The combination of simulation-optimization and irrigation experiments proved to be a suitable approach for investigating and improving WP, as well as for deriving optimal parameter values for different irrigation strategies. This was verified in the irrigation experiment and shown through overall high WP, equally high WP between deficit and full irrigation strategies, and the water savings achieved. Irrigation thresholds beyond the measurement range of tensiometers are feasible and applicable. The pF-meter performed satisfactorily and is a promising candidate for irrigation control. Crop models suitable for this approach were identified and their required properties formulated. The factors that define the behavior of DI systems regarding WP and water consumption were investigated and assessed. Through its systematic investigation of such systems, this research allowed first conclusions to be drawn about the potential operating range of sensor-based DI systems for achieving high WP with high reliability. However, the study still needs validation and is therefore limited with regard to the exact values of the derived thresholds.
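The sensor-based control rule at the core of such a DI system reduces to a threshold test on the measured soil-water potential (a sketch with hypothetical pF readings and trigger values; on the pF scale, larger readings mean drier soil):

```python
def irrigation_events(pf_series, pf_trigger):
    """Time steps at which irrigation would start: whenever the measured
    soil-water potential (pF) exceeds the trigger threshold."""
    return [t for t, pf in enumerate(pf_series) if pf >= pf_trigger]

# A deficit strategy simply uses a drier (higher-pF) trigger than full
# irrigation, so water is applied later and less often.
readings = [2.0, 2.5, 3.1, 2.2, 4.0]
full_events = irrigation_events(readings, pf_trigger=3.0)
deficit_events = irrigation_events(readings, pf_trigger=3.5)
```

Deriving the trigger values themselves (including values beyond the pF 2.9 limit of ordinary tensiometers) is what the thesis's simulation-optimization step does.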
APA, Harvard, Vancouver, ISO, and other styles
32

Arroyo, Campos Ismael. "Optimization of Display-Wall Aware Applications on Cluster Based Systems." Doctoral thesis, Universitat de Lleida, 2017. http://hdl.handle.net/10803/405579.

Full text
Abstract:
Nowadays, information and communication systems that work with a high volume of data require infrastructures that allow an understandable representation of it from the user's point of view. This thesis analyzes Cluster Display Wall platforms, used to visualize massive amounts of data, and specifically studies the Liquid Galaxy platform, developed by Google. Using the Liquid Galaxy platform, a performance study of representative visualization applications was performed, identifying the most relevant aspects of performance and possible bottlenecks. Specifically, we study in greater depth a representative visualization application, Google Earth. The system behavior while running Google Earth was analyzed through different kinds of tests with real users. For this, a new performance metric was defined, based on the visualization ratio, and the usability of the system was assessed through the traditional attributes of effectiveness, efficiency and satisfaction. Additionally, the system performance was analytically modeled and the accuracy of the model was tested by comparing it with actual results.
APA, Harvard, Vancouver, ISO, and other styles
33

Parsons, Mark Allen. "Network-Based Naval Ship Distributed System Design and Mission Effectiveness using Dynamic Architecture Flow Optimization." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/104198.

Full text
Abstract:
This dissertation describes the development and application of a naval ship distributed system architectural framework, Architecture Flow Optimization (AFO), and Dynamic Architecture Flow Optimization (DAFO) to naval ship Concept and Requirements Exploration (CandRE). The architectural framework decomposes naval ship distributed systems into physical, logical, and operational architectures representing the spatial, functional, and temporal relationships of distributed systems, respectively. This decomposition greatly simplifies the Mission, Power, and Energy System (MPES) design process for use in CandRE. AFO and DAFO are network-based linear programming optimization methods used to design and analyze MPES at a sufficient level of detail to understand system energy flow, define MPES architecture and sizing, model operations, reduce system vulnerability, and improve system reliability. AFO incorporates system topologies, energy coefficient component models, preliminary arrangements, and (nominal and damaged) steady-state scenarios to minimize the energy flow cost required to satisfy all operational scenario demands and constraints. DAFO applies the same principles as AFO and adds a second commodity, data flow. DAFO also integrates with a warfighting model, operational model, and capabilities model that quantify tasks and capabilities through system measures of performance at specific capability nodes. This enables the simulation of operational situations, including MPES configuration and operation, during CandRE. This dissertation provides an overview of the design tools developed to implement this process and these methods, including objective attribute metrics for cost, effectiveness and risk, a ship synthesis model, and hullform and MPES explorations using design of experiments (DOEs) and response surface models.
Doctor of Philosophy
This dissertation describes the development and application of a warship system architectural framework, Architecture Flow Optimization (AFO), and Dynamic Architecture Flow Optimization (DAFO) to warship Concept and Requirements Exploration (CandRE). The architectural framework decomposes warship systems into physical, logical, and operational architectures representing the spatial, functional, and time-based relationships of systems, respectively. This decomposition greatly simplifies the Mission, Power, and Energy System (MPES) design process for use in CandRE. AFO and DAFO are network-based linear programming optimization methods used to design and analyze MPES at a sufficient level of detail to understand system energy usage, define MPES connections and sizing, model operations, reduce system vulnerability, and improve system reliability. AFO incorporates system templates, simple physics- and energy-based component models, preliminary arrangements, and simple undamaged/damaged scenarios to minimize the energy flow usage required to satisfy all operational scenario demands and constraints. DAFO applies the same principles and adds a second commodity, data flow, representing system operation. DAFO also integrates with a warfighting model, operational model, and capabilities model that quantify tasks and capabilities through system measures of performance. This enables the simulation of operational situations, including MPES configuration and operation, during CandRE. This dissertation provides an overview of the design tools developed to implement this process and these methods, including optimization objective attribute metrics for cost, effectiveness and risk.
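The core of an AFO-style formulation is a minimum-cost network flow problem: meet every demand while minimizing total energy-flow cost. As a hedged, much-simplified stand-in (not the dissertation's tool), the single-load case reduces to merit-order dispatch, where filling the cheapest sources first is exactly the linear-programming optimum. The sources and numbers are illustrative assumptions.

```python
# Toy AFO-style dispatch: satisfy a load demand from several sources with
# different per-unit costs and limited capacities, minimizing total cost.
# Greedy merit-order dispatch solves this one-load linear program exactly.
def dispatch(demand, sources):
    """sources: list of (cost_per_unit, capacity). Returns (flows, total_cost)."""
    flows, total = [], 0.0
    remaining = demand
    for cost, cap in sorted(sources):      # cheapest sources first
        f = min(cap, remaining)
        flows.append((cost, f))
        total += cost * f
        remaining -= f
    if remaining > 1e-9:
        raise ValueError("demand cannot be met under the given capacities")
    return flows, total

# Demand of 8 units; a cheap 6-unit source and an expensive 5-unit source.
flows, total_cost = dispatch(8.0, [(3.0, 5.0), (1.0, 6.0)])
```

A full AFO/DAFO model would add node-balance constraints over the whole network topology, damaged-scenario variants, and (for DAFO) a second data-flow commodity, which is why the dissertation uses a general linear programming solver rather than a greedy rule.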
APA, Harvard, Vancouver, ISO, and other styles
34

Lim, Jung Youl. "A distributed multi-level current modeling method for design analysis and optimization of permanent magnet electromechanical actuators." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53990.

Full text
Abstract:
This thesis has been motivated by the growing need for multi-degree-of-freedom (M-DOF) electromagnetic actuators capable of smooth and accurate multi-dimensional driving motions. Because high-coercivity rare-earth permanent magnets (PMs) are widely available at low cost, their use in developing compact, energy-efficient M-DOF actuators has been widely researched. To facilitate design analysis and optimization, this thesis research seeks to develop a general method based on distributed source models to characterize M-DOF PM-based actuators and optimize their designs to achieve high torque-to-weight performance with compact structures. To achieve the above-stated objective, a new method, referred to here as distributed multi-level current (DMC) modeling, which utilizes geometrically defined point sources, has been developed to model electromagnetic components and phenomena, including PMs, electromagnets (EMs), iron paths, and induced eddy currents. Unlike existing numerical methods (such as FEM, FDM, or MLM), which solve for the magnetic fields from Maxwell’s equations and boundary conditions, the DMC-based method develops closed-form solutions to the magnetic field and force problems on the basis of electromagnetic point currents in a multi-level structure, while allowing a trade-off between computational speed and accuracy. Since the multi-level currents can be directly defined at the geometrically decomposed volumes and surfaces of the components (such as electric conductors and magnetic materials) that make up the electromagnetic system, the DMC model has been effectively incorporated in topology optimization to maximize the torque-to-weight ratio of an electromechanical actuator. To demonstrate the above advantages, the DMC optimization has been employed to optimize several designs, ranging from conventional single-axis actuators and 2-DOF linear-rotary motors to 3-DOF spherical motors.
The DMC modeling method has been experimentally validated and compared against published data. While the DMC model offers an efficient means for the design analysis and optimization of electromechanical systems with improved computational accuracy and speed, it can be extended to a broad spectrum of emerging and creative applications involving electromagnetic systems.
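The underlying idea of distributed point-source field models like DMC, superposing the contributions of finitely many current elements in closed form, can be illustrated with a minimal sketch (not the thesis's method): discretizing a circular loop into arc elements and summing their Biot-Savart contributions reproduces the analytic center field mu0*I/(2R). All parameter values are illustrative.

```python
# Superposition of discrete current-element contributions (Biot-Savart) at the
# center of a circular loop. Each arc element of length dl, perpendicular to
# the radius vector, contributes mu0*I*dl / (4*pi*R^2) to the axial field.
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def center_field_of_loop(radius, current, n_elements):
    """|B| at the loop center, summed over n discrete current elements."""
    dl = 2 * math.pi * radius / n_elements   # arc length of one element
    bz = 0.0
    for _ in range(n_elements):
        bz += MU0 * current * dl / (4 * math.pi * radius**2)
    return bz

analytic = MU0 * 1.0 / (2 * 0.05)            # I = 1 A, R = 5 cm
numeric = center_field_of_loop(0.05, 1.0, 64)
```

A DMC-style model generalizes this superposition to point currents distributed over the decomposed volumes and surfaces of PMs, coils, and iron, with the discretization level controlling the speed/accuracy trade-off.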
APA, Harvard, Vancouver, ISO, and other styles
35

Wong, Ka In. "Machine-learning-based modeling of biofuel engine systems with applications to optimization and control of engine performance." Thesis, University of Macau, 2017. http://umaclib3.umac.mo/record=b3691886.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Mena, Rodrigo. "Risk–based modeling, simulation and optimization for the integration of renewable distributed generation into electric power networks." Thesis, Châtenay-Malabry, Ecole centrale de Paris, 2015. http://www.theses.fr/2015ECAP0034/document.

Full text
Abstract:
Renewable distributed generation (DG) is expected to continue playing a fundamental role in the development and operation of sustainable, efficient and reliable electric power systems, by virtue of offering a practical alternative to diversify and decentralize the overall power generation, benefiting from cleaner and safer energy sources. The integration of renewable DG in the existing electric power networks poses socio-techno-economical challenges, which have attracted substantial research and advancement. In this context, the focus of the present thesis is the design and development of a modeling, simulation and optimization framework for the integration of renewable DG into electric power networks. The specific problem considered is that of selecting the technology, size and location of renewable generation units, under technical, operational and economic constraints. Within this problem, the key research questions to be addressed are: (i) the representation and treatment of the uncertain physical variables (like the availability of diverse primary renewable energy sources, bulk-power supply, power demands and occurrence of component failures) that dynamically determine the DG-integrated network operation, (ii) the propagation of these uncertainties onto the system operational response and the control of the associated risk, and (iii) the intensive computational efforts resulting from the complex combinatorial optimization problem of renewable DG integration. For the evaluation of the system with a given plan of renewable DG, a non-sequential Monte Carlo simulation and optimal power flow (MCS-OPF) computational model has been designed and implemented, which emulates the DG-integrated network operation.
Random realizations of operational scenarios are generated by sampling from the distributions of the different uncertain variables, and for each scenario the system performance is evaluated in terms of economics and reliability of power supply, represented by the global cost (CG) and the energy not supplied (ENS), respectively. To measure and control the risk relative to system performance, two indicators are introduced, the conditional value-at-risk (CVaR) and the CVaR deviation (DCVaR). For the optimal selection of the technology, size and location of the renewable DG units, two distinct multi-objective optimization (MOO) approaches have been implemented by heuristic optimization (HO) search engines. The first approach is based on the fast non-dominated sorting genetic algorithm (NSGA-II) and aims at the concurrent minimization of the expected values of CG and ENS, denoted ECG and EENS, respectively, combined with their corresponding CVaR(CG) and CVaR(ENS) values; the second approach carries out a MOO differential evolution (DE) search to simultaneously minimize ECG and its associated deviation DCVaR(CG). Both optimization approaches embed the MCS-OPF computational model to evaluate the performance of each DG-integrated network proposed by the HO search engine. The challenge posed by the large computational efforts required by the proposed simulation and optimization frameworks has been addressed by introducing an original technique, which nests hierarchical clustering analysis (HCA) within a DE search engine. Examples of application of the proposed frameworks have been worked out, regarding an adaptation of the IEEE 13-bus distribution test feeder and a realistic setting of the IEEE 30-bus sub-transmission and distribution test system. The results show that these frameworks are effective in finding optimal DG-integrated network solutions, while controlling risk from two distinct perspectives: directly through the use of CVaR and indirectly by targeting uncertainty in the form of DCVaR.
Moreover, CVaR acts as an enabler of trade-offs between optimal expected performance and risk, and DCVaR also integrates uncertainty into the analysis, providing a wider spectrum of information for well-supported and confident decision making.
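The two risk indicators named above can be estimated directly from Monte Carlo samples of a cost-like output. The sketch below uses the standard sample estimator of CVaR (mean of the worst (1-alpha) tail) and, as a hedge, one common definition of the CVaR deviation, namely CVaR of the mean-centered variable; the thesis's exact convention may differ. The sample data are illustrative.

```python
# Sample estimators of CVaR and (one definition of) DCVaR from MC outputs,
# where larger values mean worse outcomes (e.g. global cost CG).
def cvar(samples, alpha=0.95):
    """Mean of the worst (1 - alpha) fraction of the samples."""
    s = sorted(samples)
    tail = s[int(alpha * len(s)):]
    return sum(tail) / len(tail)

def dcvar(samples, alpha=0.95):
    """CVaR of the mean-centered samples (a deviation-type risk measure)."""
    mean = sum(samples) / len(samples)
    return cvar([x - mean for x in samples], alpha)

# 90 ordinary scenarios at cost 10, 10 severe scenarios at cost 50:
costs = [10.0] * 90 + [50.0] * 10
c = cvar(costs, alpha=0.90)    # mean of the worst 10% of scenarios
d = dcvar(costs, alpha=0.90)   # same tail, measured relative to the mean
```

Minimizing the expected cost alongside CVaR (or DCVaR), as the two MOO approaches above do, trades average performance against exposure to this high-cost tail.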
APA, Harvard, Vancouver, ISO, and other styles
37

Lindhorst, Henning [Verfasser], and Achim [Gutachter] Kienle. "Modeling and simulation of enzyme controlled metabolic networks using optimization based methods / Henning Lindhorst ; Gutachter: Achim Kienle." Magdeburg : Universitätsbibliothek Otto-von-Guericke-Universität, 2020. http://d-nb.info/1220036501/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Dai, Lei. "An Open Platform of Parameterized Shape Optimization based-on CAD/CAE Integration Technique." Reims, 2006. http://theses.univ-reims.fr/exl-doc/GED00000752.pdf.

Full text
Abstract:
The main content of this research is the development of an open platform for parameterized shape optimization based on a CAD/CAE integration technique. Through this integration, structural analysis and design optimization are seamlessly combined with parametric geometry modeling and embedded into the CAD system. POSHAPE provides a parameterized shape optimization method for 3D solid structures, spatial shell structures, and cell structures of composite materials. To realize such a general method, integration is the most essential part. In this platform, integration is realized in three respects: 1) integrating structural analysis tools from different disciplines with structural shape optimization, so that structural responses from different disciplines can be studied as functions of the structure's shape; 2) integrating finite element modeling with parametric geometry modeling through the boundary representation tree (B-Rep) used in solid modeling, so that the finite element model is parameterized and dynamically regenerated during the optimization design steps; 3) extending parametric solid modeling to parameterized surface modeling by integrating the surface model definition with the solid model. Parameterized finite element modeling of shell structures, similar to that of solid structures, is also achieved.
APA, Harvard, Vancouver, ISO, and other styles
40

Eltoukhy, Moataz. "Implementation and Validation of a Detailed 3D Inverse Dynamics Lower Extremity Model for Gait Analysis Applications Based on Optimization Technique." Scholarly Repository, 2011. http://scholarlyrepository.miami.edu/oa_dissertations/558.

Full text
Abstract:
The goal of this research work was to introduce the whole process of developing and validating a 3D lower extremity musculoskeletal model and to test the ability of the model to predict the recruitment of the different muscles involved in human locomotion, as well as to determine the corresponding forces and moments generated around the different joints of the lower extremity. The model can therefore be applied in one of the important fields of orthopaedics, joint replacement; the case study used in this application is total knee replacement. The knee reaction forces were compared to the pattern obtained by Harrington (1992), while the hip moment components (flexion/extension, internal/external rotation, and abduction/adduction) were all compared to the patterns obtained from the Hip98 database. The different graphs of joint forces and moments show that the model produced results very close in pattern and magnitude to the literature data. Thus, this 3D biomechanical model is sophisticated enough to be used for surgery evaluation, such as in total knee replacement, where the damaged cartilage and bone are removed from the surface of the knee joint and replaced with a man-made implant. The case study in the second part of the research work involved the comparison of gait patterns between two main knee joint types, metallic and allograft knee joints, against normal subjects (control group). A total of fifteen subjects participated in this study, five in each group. It was concluded, based on the study conducted and the statistical evidence obtained, that the introduced model can be used for applications involving joint surgeries, such as knee replacement, and can ultimately be utilized in surgery evaluation.
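Optimization-based muscle recruitment of the kind the abstract describes is commonly posed as: distribute a required joint moment across muscles while minimizing a cost such as the sum of squared activations. For that quadratic cost the Lagrange solution is closed-form, as the hedged sketch below shows; the moment arms and muscle strengths are illustrative, not values from the thesis, and activation bounds are not enforced here.

```python
# Static-optimization muscle recruitment for one joint:
#   minimize sum(a_i^2)  subject to  sum(r_i * F_i * a_i) = M
# Closed form: a_i = M * c_i / sum(c_j^2), where c_i = r_i * F_i is the
# moment each muscle produces per unit activation.
def recruit(moment, muscles):
    """muscles: list of (moment_arm_m, max_force_N). Returns activations."""
    caps = [r * f for r, f in muscles]
    denom = sum(c * c for c in caps)
    return [moment * c / denom for c in caps]

muscles = [(0.05, 1000.0), (0.03, 500.0)]     # illustrative arm/strength pairs
acts = recruit(30.0, muscles)                 # share a 30 N*m joint moment
produced = sum(a * r * f for a, (r, f) in zip(acts, muscles))
```

Note how the stronger, better-levered muscle takes the larger activation; full musculoskeletal models solve the same kind of problem at every time step of the gait cycle, with bounds and many more muscles.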
APA, Harvard, Vancouver, ISO, and other styles
41

Nejadpak, Arash. "Development of Physics-based Models and Design Optimization of Power Electronic Conversion Systems." FIU Digital Commons, 2013. http://digitalcommons.fiu.edu/etd/824.

Full text
Abstract:
The main objective of physics-based modeling of power converter components is to design the whole converter with respect to physical and operational constraints. Therefore, all the elements and components of the energy conversion system are modeled numerically and combined to obtain a behavioral model of the whole system. Previously proposed high-frequency (HF) models of power converters are based on circuit models that relate only to the parasitic inner parameters of the power devices and the connections between the components. This dissertation aims to obtain appropriate physics-based models for power conversion systems, which can not only represent the steady-state behavior of the components but also predict their high-frequency characteristics. The developed physics-based model represents the physical device with a high level of accuracy in predicting its operating condition. The proposed physics-based model enables us to accurately develop components such as effective EMI filters, switching algorithms, and circuit topologies [7]. One application of the developed modeling technique is the design of new topologies for high-frequency, high-efficiency converters for variable speed drives. The main advantage of the modeling method presented in this dissertation is the practical design of an inverter for high-power applications with the ability to overcome the blocking voltage limitations of available power semiconductor devices. Another advantage is the selection of the best-matching topology, with an inherent reduction of switching losses that can be utilized to improve the overall efficiency. The physics-based modeling approach in this dissertation makes it possible to design any power electronic conversion system to meet electromagnetic standards and design constraints.
This includes physical characteristics such as decreasing the size and weight of the package, optimized interactions with neighboring components, and higher power density. In addition, the electromagnetic behaviors and signatures can be evaluated, including the study of conducted and radiated EMI interactions, as well as the design of attenuation measures and enclosures.
APA, Harvard, Vancouver, ISO, and other styles
42

Rusticali, Valeria. "Numerical investigations on slope stability problems through mathematical optimization-based Finite Element approach." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/22124/.

Full text
Abstract:
The finite element method (FEM) is one of the prevailing approaches for assessing slope stability problems. The equations of the elastoplastic problem, based on a multi-field functional principle, are treated as a standard second-order cone programming (SOCP) problem and solved with the optimization software MOSEK. In this study, a previously developed MATLAB code (Wang et al. 2019) is extended to a wider range of stability problems. More specifically, a modified Davis formula is implemented, improving the accuracy of the code when applied to non-associated flow rules. In addition, the code is used to study layered slopes, and seismic loading is treated with the classical pseudo-static method. Comparing our results with published ones yields satisfactory agreement. To include the effects of heterogeneity, we implemented random field theory within the computational framework. This was done through the random finite element method (RFEM), which is applied to a homogeneous slope. Finally, the developed method is applied to the Vogelsberg mountain case study. The factor of safety (FS) is evaluated at different piezometric levels and seismic loads. The statistical analysis in this case focuses on the probability of failure and on the distribution of the probabilistic FS.
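The pseudo-static treatment of seismic loading mentioned above can be illustrated far more simply than with the FEM-SOCP code: for a planar slip surface, a horizontal seismic coefficient k_h adds a driving force component and reduces the normal force, lowering the factor of safety. The sketch below is a textbook planar-wedge calculation with illustrative parameters, not the thesis's model.

```python
# Pseudo-static factor of safety on a planar slip surface at angle beta:
#   FS = [c*L + N * tan(phi)] / T
# with N = W*(cos(beta) - kh*sin(beta)) and T = W*(sin(beta) + kh*cos(beta)).
import math

def pseudo_static_fs(weight, beta_deg, phi_deg, cohesion_force=0.0, kh=0.0):
    """Factor of safety = resisting / driving forces along the slip plane."""
    b, p = math.radians(beta_deg), math.radians(phi_deg)
    normal = weight * (math.cos(b) - kh * math.sin(b))
    driving = weight * (math.sin(b) + kh * math.cos(b))
    return (cohesion_force + normal * math.tan(p)) / driving

fs_static = pseudo_static_fs(100.0, beta_deg=20, phi_deg=30)           # kh = 0
fs_seismic = pseudo_static_fs(100.0, beta_deg=20, phi_deg=30, kh=0.1)  # shaking
```

For the cohesionless static case this reduces to the classical FS = tan(phi)/tan(beta); the FEM approach generalizes the same safety-factor question to arbitrary geometry, layering, and flow rules.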
APA, Harvard, Vancouver, ISO, and other styles
43

Willey, Landon Clark. "A Systems-Level Approach to the Design, Evaluation, and Optimization of Electrified Transportation Networks Using Agent-Based Modeling." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8532.

Full text
Abstract:
Rising concerns related to the effects of traffic congestion have led to the search for alternative transportation solutions. Advances in battery technology have resulted in an increase in electric vehicles (EVs), which serve to reduce the impact of many of the negative consequences of congestion, including pollution and the cost of wasted fuel. Furthermore, the energy efficiency and quiet operation of electric motors have made feasible concepts such as Urban Air Mobility (UAM), in which electric aircraft transport passengers in dense urban areas prone to severe traffic slowdowns. Electrified transportation may be the solution needed to combat urban gridlock, but many logistical questions related to the design and operation of the resultant transportation networks remain to be answered. This research begins by examining the near-term effects of EV charging networks. Stationary plug-in methods have been the traditional approach to recharging electric ground vehicles; however, dynamic charging technologies that can charge vehicles while they are in motion have recently been introduced and have the potential to eliminate the inconvenience of long charging wait times and the high cost of large batteries. Using an agent-based model verified with traffic data, different network designs incorporating these dynamic chargers are evaluated based on the predicted benefit to EV drivers. A genetic optimization is designed to optimally locate the chargers. Heavily used highways are found to be much more effective than arterial roads as locations for these chargers, even when installation cost is taken into consideration. This work also explores the potential long-term effects of electrified transportation on urban congestion by examining the implementation of a UAM system. Interdependencies between potential electric air vehicle ranges and speeds are explored in conjunction with desired network structure and size in three different regions of the United States.
A method is developed to take all these considerations into account, thus allowing for the creation of a network optimized for UAM operations when vehicle or topological constraints are present. Because the optimization problem is NP-hard, five heuristic algorithms are developed to find potential solutions with acceptable computation times, and are found to be within 10% of the optimal value for the test cases explored. The results from this exploration are used in a second agent-based transportation model that analyzes operational parameters associated with UAM networks, such as service strategy and dispatch frequency, in addition to the considerations associated with network design. General trends between the effectiveness of UAM networks and the various factors explored are identified and presented.
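The charger-placement search above uses a genetic algorithm; as a much simpler stand-in, the sketch below greedily places dynamic chargers on the road segments with the highest benefit (here, traffic volume) per unit cost, subject to a budget. The segment names, volumes, and costs are illustrative assumptions, not data from the thesis.

```python
# Greedy budget-constrained charger placement: rank segments by benefit/cost
# and take them in order while the budget allows.
def place_chargers(segments, budget):
    """segments: {name: (benefit, cost)}. Returns the chosen segment names."""
    chosen, spent = [], 0.0
    ranked = sorted(segments.items(),
                    key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    for name, (benefit, cost) in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

roads = {
    "I-15":  (9000, 4.0),   # heavily used highway: high benefit per cost
    "US-89": (2500, 2.0),
    "Main":  (400, 1.0),    # arterial road
}
picked = place_chargers(roads, budget=5.0)
```

Greedy ratio ranking is only a heuristic (it can miss the optimal knapsack combination), which is one reason population-based searches like the genetic optimization above are used for the real, interacting network problem.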
APA, Harvard, Vancouver, ISO, and other styles
44

Hertz, Erik M. "Thermal and EMI Modeling and Analysis of a Boost PFC Circuit Designed Using a Genetic-based Optimization Algorithm." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/34234.

Full text
Abstract:
The boost power factor correction (PFC) circuit is a common circuit in power electronics. Through years of experience, many designers have optimized the design of these circuits for particular applications. In this study, a new design procedure is presented that guarantees optimal results for any application. The algorithm used incorporates the principles of evolution in order to find the best design. This new design technique requires a rethinking of the traditional design process. Electrical models have been developed specifically for use with the optimization tool. One of the main focuses of this work is the implementation and verification of computationally efficient thermal and electromagnetic interference (EMI) models for the boost PFC circuit. The EMI model presented can accurately predict noise levels into the hundreds-of-kilohertz range. The thermal models presented provide very fast predictions, and they have been adjusted to account for different thermal flows within the layout. This tuning procedure results in thermal predictions within 10% of actual measurement data. In order to further reduce the amount of analysis that the optimization tool must perform, some of the converter design has been carried out using traditional methods. This part of the design is discussed in detail. Additionally, a per-unit analysis of EMI and thermal levels is introduced. This new analysis method allows EMI and thermal levels to be compared on the same scale, thus highlighting the tradeoffs between the two behaviors.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
45

Sen, Padmanava. "Estimation and optimization of layout parasitics for silicon-based millimeter-wave integrated circuits." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/26585.

Full text
Abstract:
Thesis (Ph.D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Dr. Joy Laskar; Committee Member: Dr. Chang-Ho Lee; Committee Member: Dr. Federico Bonetto; Committee Member: Dr. John D. Cressler; Committee Member: Dr. John Papapolymerou; Committee Member: Dr. Linda S. Milor. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
46

Hendrickson, Eric B. "Morphologically simplified conductance based neuron models: principles of construction and use in parameter optimization." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33905.

Full text
Abstract:
The dynamics of biological neural networks are of great interest to neuroscientists and are frequently studied using conductance-based compartmental neuron models. For speed and ease of use, neuron models are often reduced in morphological complexity. This reduction may affect input processing and prevent the accurate reproduction of neural dynamics, but such effects are not yet well understood. Therefore, for my first aim I analyzed the processing capabilities of 'branched' and 'unbranched' reduced models created by collapsing the dendritic tree of a morphologically realistic 'full' globus pallidus neuron model while maintaining all other model parameters. Branched models preserved the original detailed branching structure of the full model, while unbranched models did not. I found that full-model responses to somatic inputs were generally preserved by both types of reduced model, but that branched reduced models were better able to maintain responses to dendritic inputs. However, inputs that caused dendritic sodium spikes, for instance, could not be accurately reproduced by any reduced model. Based on my analyses, I provide recommendations on how to construct reduced models and indicate suitable applications for different levels of reduction. In particular, I recommend that unbranched reduced models be used for fast searches of parameter space given somatic input-output data. The intrinsic electrical properties of neurons depend on the modifiable behavior of their ion channels. Obtaining a quality match between recorded voltage traces and the output of a conductance-based compartmental neuron model depends on accurate estimates of the kinetic parameters of the channels in the biological neuron. Indeed, mismatches in channel kinetics may be detectable as failures to match somatic neural recordings when tuning model conductance densities. In my first aim, I showed that this is a task for which unbranched reduced models are ideally suited.
Therefore, for my second aim I optimized unbranched reduced model parameters to match three experimentally characterized globus pallidus neurons by performing two stages of automated searches. In the first stage, I set conductance densities free and found that even the best matches to experimental data exhibited unavoidable problems. I hypothesized that these mismatches were due to limitations in channel model kinetics. To test this hypothesis, I performed a second stage of searches with free channel kinetics and observed decreases in the mismatches from the first stage. Additionally, some kinetic parameters consistently shifted to new values in multiple cells, suggesting the possibility for tailored improvements to channel models. Given my results and the potential for cell specific modulation of channel kinetics, I recommend that experimental kinetic data be considered as a starting point rather than as a gold standard for the development of neuron models.
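The automated searches themselves are not specified in the abstract; as a hypothetical illustration of a stage-one conductance-density search, a naive random search over density bounds could look like the following (the `simulate` stand-in, the parameter names `gNa`/`gK`, and the bounds are invented for the example):

```python
import random

def fit_conductances(simulate, target, bounds, n_samples=200, seed=0):
    """Hypothetical stage-one search: sample conductance densities at random
    and keep the set whose simulated voltage trace best matches the target."""
    rng = random.Random(seed)

    def mismatch(trace):
        # Sum-of-squares error between simulated and recorded traces.
        return sum((a - b) ** 2 for a, b in zip(trace, target))

    best, best_err = None, float("inf")
    for _ in range(n_samples):
        g = {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        err = mismatch(simulate(g))
        if err < best_err:
            best, best_err = g, err
    return best, best_err

# Toy "model": the trace is just the two densities themselves.
g, err = fit_conductances(simulate=lambda g: [g["gNa"], g["gK"]],
                          target=[120.0, 36.0],
                          bounds={"gNa": (0, 200), "gK": (0, 100)})
```

A second stage with free kinetic parameters, as described above, would simply widen the search dictionary to include rate constants as well as densities; in practice evolutionary or gradient-free optimizers replace pure random sampling.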
APA, Harvard, Vancouver, ISO, and other styles
47

Ylimäki, M. (Markus). "Methods for image-based 3-D modeling using color and depth cameras." Doctoral thesis, Oulun yliopisto, 2017. http://urn.fi/urn:isbn:9789526217352.

Full text
Abstract:
Abstract This work addresses problems related to the three-dimensional modeling of scenes and objects and to model evaluation. The work is divided into four main parts: the first concentrates on purely image-based reconstruction, the second presents a modeling pipeline based on an active depth sensor, the third introduces methods for producing surface meshes from point clouds, and the fourth presents a novel approach to model evaluation. In the first part, this work proposes a multi-view stereo (MVS) reconstruction method that takes a set of images as input and outputs a model represented as a point cloud. The method is based on match propagation, where a set of initial corresponding points between images is expanded iteratively into larger regions by searching for new correspondences in the spatial neighborhood of the existing ones. The expansion is implemented using a best-first strategy, where the most reliable match is always expanded first. The method produces results comparable with the state of the art, but significantly faster. In the second part, this work presents a method that merges a sequence of depth maps into a single non-redundant point cloud. In areas where the depth maps overlap, the method fuses points together by giving more weight to points that seem more reliable. The method outperforms its predecessor in both accuracy and robustness. In addition, this part introduces a method for depth-camera calibration that builds on an existing calibration approach originally designed for the first-generation Microsoft Kinect device. The third part of the thesis addresses the problem of converting point clouds to surface meshes. The work briefly reviews two well-known approaches and compares their ability to produce sparse mesh models without sacrificing accuracy. Finally, the fourth part describes the development of a novel approach for the performance evaluation of reconstruction algorithms.
In addition to accuracy and completeness, the metrics commonly used in existing evaluation benchmarks, the method also takes the compactness of the models into account. The metric enables evaluation of the accuracy-compactness trade-off of the models.
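The best-first expansion described above is essentially a priority-queue loop: pop the most reliable pending match, accept it, and push its neighbors. A minimal, dimension-agnostic sketch (the 1-D toy problem, threshold `tau`, and reliability model are illustrative assumptions, not the thesis implementation):

```python
import heapq

def propagate(seeds, neighbors, score, tau=0.2):
    """Best-first match propagation: always expand the most reliable
    pending match, pruning candidates whose reliability falls below tau."""
    heap = [(-score(m), m) for m in seeds]   # max-heap via negated scores
    heapq.heapify(heap)
    accepted = set()
    while heap:
        _, m = heapq.heappop(heap)
        if m in accepted:
            continue
        accepted.add(m)
        for n in neighbors(m):
            if n not in accepted and score(n) >= tau:
                heapq.heappush(heap, (-score(n), n))
    return accepted

# Toy 1-D stand-in: "matches" are pixel indices, reliability decays with index.
grown = propagate(seeds=[0],
                  neighbors=lambda m: [i for i in (m - 1, m + 1) if 0 <= i < 10],
                  score=lambda m: 1.0 / (1 + m))
```

In a real MVS setting a match would be a pair of image coordinates, `neighbors` would enumerate candidate correspondences in a small spatial window, and `score` would be a photometric similarity such as normalized cross-correlation.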
APA, Harvard, Vancouver, ISO, and other styles
48

Davidson, James. "A Distributed Surrogate Methodology for Inverse Most Probable Point Searches in Reliability Based Design Optimization." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1440695264.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Rechkemmer, Sabrina [Verfasser]. "Lifetime modeling and model-based lifetime optimization of Li-ion batteries for use in electric two-wheelers / Sabrina Rechkemmer." Düren : Shaker, 2020. http://d-nb.info/1213627621/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Werner, Quentin. "Model-based optimization of electrical system in the early development stage of hybrid drivetrains." Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0109.

Full text
Abstract:
This work analyses the challenges faced by electric traction components in hybrid drivetrains. It investigates the components and their interactions as an independent entity in order to refine the scope of investigation and to find the best combination of components rather than a combination of the best components. Hybrid vehicles are currently a topic of high interest because they represent a suitable short-term solution on the path toward zero-emission vehicles. Despite their advantages, they are a challenging topic because the electric components need to be integrated into a conventional drivetrain architecture. Therefore, the focus of this work is set on determining the right methods to investigate only the electric components for traction purposes. The aim and contributions of this work lie in the resolution of the following statement: determine the sufficient level of detail for modeling electric components at the system level, and develop models and tools to perform dynamic simulations of these components and their interactions in a global system analysis, in order to identify ideal designs of the various drivetrain electric components during the design process. To address these challenges, this work is divided into four main parts across six chapters. First, the current status of hybrid vehicles, electric components, and the associated optimization and simulation methods is presented (first chapter). Then, for each component, the right modeling approach is defined to investigate its electrical, mechanical, and thermal behavior, together with methods to evaluate its integration in the drivetrain (second to fourth chapters). After this, a suitable method is defined to evaluate the global system and to investigate the interactions between the components, based on a review of relevant previous works (fifth chapter).
Finally, the last chapter presents the optimization approach considered in this work and the results obtained by analyzing different systems and cases (sixth chapter). Thanks to the analysis of the current status and previous works, and to the development of the simulation tools, this work investigates the relationships between voltage, current, and power in different cases. Under the stated assumptions, the results determine the influence of these parameters on the components and the influence of the industrial environment on the optimization results. Considering the current legislative frame, all the results converge toward the same observation relative to the reference systems: a reduction of the voltage and an increase of the current lead to an improvement of the integration and the performance of the system. These observations are linked to the considered architecture, driving cycle, and development environment, but the developed methods and approaches lay the basis for extending knowledge about the optimization of electric traction systems. Beside the main optimization, special cases are investigated to show the influence of additional parameters (increase of the power, 48V system, machine technology, boost converter, etc.). In conclusion, this work has laid the basis for further investigations of the electric components for traction purposes in more electrified vehicles.
Because of the constantly changing environment, new technologies, and varying legislative frames, this topic remains of high interest, and the following challenges still need deeper investigation: * application of the methods to other drivetrain architectures (series hybrid, power-split hybrid, fuel-cell vehicle, full electric vehicle); * investigation of new technologies such as silicon carbide for the power electronics, lithium-sulfur batteries, or switched reluctance machines; * investigation of other driving cycles and legislative frames; * integration of additional power-electronics structures; * further validation of the modeling approaches with additional components
APA, Harvard, Vancouver, ISO, and other styles