Dissertations / Theses on the topic 'Multi-scale design'

To see the other types of publications on this topic, follow the link: Multi-scale design.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Multi-scale design.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Suberu, Bolaji A. "Multi-scale Composite Materials with Increased Design Limits." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1377868507.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hari, Boštjan. "Multi-Scale Modelling, Simulation, Design and Analysis of Microreactors." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.511850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Holcombe, Evan W. "Multi-Scale Approach to Design Sustainable Asphalt Paving Materials." Ohio University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1493805362392927.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ferrer Ferré, Àlex. "Multi-scale topological design of structural materials : an integrated approach." Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/406354.

Full text
Abstract:
The present dissertation aims at addressing multiscale topology optimization problems. For this purpose, the concept of the topological derivative in conjunction with the computational homogenization method is considered. In this study, the topological derivative algorithm, which is clearly non-standard in topology optimization, and the optimality conditions are first introduced in order to provide better insight. Then, a precise treatment of the interface elements is proposed to reduce the numerical instabilities and the time-consuming computations that appear when using the topological derivative algorithm. The resulting strategy is examined and compared with current methodologies collected in the literature by means of numerical tests of different nature. Then, a closed formula for the anisotropic topological derivative is obtained by solving the exterior elastic problem analytically. To this aim, complex variable theory and symbolic computation are used. The resulting expression is validated through numerical tests. In addition, different anisotropic topology optimization problems are solved to show the macroscopic topological implications of considering anisotropic materials. Finally, the two-scale topology optimization problem is tackled. As a first approach, an increase in structural stiffness is achieved by considering the microscopic topologies as design variables of the problem. An alternating direction algorithm is proposed to address the strong non-linearities of the problem. In addition, to mitigate otherwise unaffordable computations, a reduction technique is presented by pre-computing the optimal microscopic topologies in a computational material catalogue. As an extension of the first approach, besides designing the microscopic topologies, the macroscopic topology is also considered as a design variable, leading to even better solutions. In addition, the proposed algorithms are modified in order to obtain manufacturable optimal designs. Two-scale topology optimization examples exhibit the potential of the proposed methodology.
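For readers unfamiliar with the central quantity used in this thesis, the topological derivative of a shape functional measures the sensitivity of that functional to the nucleation of an infinitesimal hole. The expression below is the standard textbook definition, not a formula taken from the dissertation itself.

```latex
% Standard definition of the topological derivative D_T\psi at a point \hat{x}:
% \psi is a shape functional, \Omega_\epsilon = \Omega \setminus \overline{B_\epsilon(\hat{x})}
% is the domain perforated by a small ball of radius \epsilon, and f(\epsilon) -> 0
% is a positive normalizing function (typically the measure of the ball).
\[
  D_T\psi(\hat{x}) \;=\; \lim_{\epsilon \to 0}
  \frac{\psi(\Omega_\epsilon) - \psi(\Omega)}{f(\epsilon)}
\]
```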
APA, Harvard, Vancouver, ISO, and other styles
5

Kalua, Amos. "Framework for Integrated Multi-Scale CFD Simulations in Architectural Design." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/105013.

Full text
Abstract:
An important aspect in the process of architectural design is the testing of solution alternatives in order to evaluate them on their appropriateness within the context of the design problem. Computational Fluid Dynamics (CFD) analysis is one of the approaches that have gained popularity in the testing of architectural design solutions especially for purposes of evaluating the performance of natural ventilation strategies in buildings. Natural ventilation strategies can reduce the energy consumption in buildings while ensuring the good health and wellbeing of the occupants. In order for natural ventilation strategies to perform as intended, a number of factors interact and these factors must be carefully analysed. CFD simulations provide an affordable platform for such analyses to be undertaken. Traditionally, these simulations have largely followed the direction of Best Practice Guidelines (BPGs) for quality control. These guidelines are built around certain simplifications due to the high computational cost of CFD modelling. However, while the computational cost has increasingly fallen and is predicted to continue to drop, the BPGs have largely remained without significant updates. The need to develop a CFD simulation framework that leverages the contemporary and anticipates the future computational cost and capacity can, therefore, not be overemphasised. When conducting CFD simulations during the process of architectural design, the variability of the wind flow field including the wind direction and its velocity constitute an important input parameter. Presently, however, in many simulations, the wind direction is largely used in a steady state manner. It is assumed that the direction of flow downwind of a meteorological station remains constant. This assumption may potentially compromise the integrity of CFD modelling as in reality, the wind flow field is bound to be dynamic from place to place. In order to improve the accuracy of the CFD simulations for architectural design, it is therefore necessary to adequately account for this variability. This study was a two-pronged investigation with the ultimate objective of improving the accuracy of the CFD simulations that are used in the architectural design process, particularly for the design and analysis of natural ventilation strategies. Firstly, a framework for integrated meso-scale and building scale CFD simulations was developed. Secondly, the newly developed framework was then implemented by deploying it to study the variability of the wind flow field between a reference meteorological station, the Virginia Tech Airport, and a selected localized building scale site on the Virginia Tech campus. The findings confirmed that the wind flow field varies from place to place and showed that the newly developed framework was able to capture this variation, ultimately, generating a wind flow field characterization representative of the conditions prevalent at the localized building site. This framework can be particularly useful when undertaking de-coupled CFD simulations to design and analyse natural ventilation strategies in the building design process.
Doctor of Philosophy
The use of natural ventilation strategies in building design has been identified as one viable pathway toward minimizing energy consumption in buildings. Natural ventilation can also reduce the prevalence of the Sick Building Syndrome (SBS) and enhance the productivity of building occupants. This research study sought to develop a framework that can improve the usage of Computational Fluid Dynamics (CFD) analyses in the architectural design process for purposes of enhancing the efficiency of natural ventilation strategies in buildings. CFD is a branch of computational physics that studies the behaviour of fluids as they move from one point to another. The usage of CFD analyses in architectural design requires the input of wind environment data such as direction and velocity. Presently, this data is obtained from a weather station and there is an assumption that this data remains the same even for a building site located at a considerable distance away from the weather station. This potentially compromises the accuracy of the CFD analyses as studies have shown that due to a number of factors such as urban built form, vegetation, terrain and others, the wind environment is bound to vary from one point to another. This study sought to develop a framework that quantifies this variation and provides a way for translating the wind data obtained from a weather station to data that more accurately characterizes a local building site. With this accurate site wind data, the CFD analyses can then provide more meaningful insights into the use of natural ventilation in the process of architectural design. This newly developed framework was deployed on a study site at Virginia Tech. The findings showed that the framework was able to demonstrate that the wind flow field varies from one place to another and it also provided a way to capture this variation, ultimately, generating a wind flow field characterization that was more representative of the local conditions.
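As a purely illustrative aside on how station wind data are commonly adjusted for height, the sketch below applies the neutral-stability logarithmic wind profile. The roughness length and heights are made-up example values, and the actual framework in this dissertation uses coupled meso-scale and building-scale CFD rather than this analytical shortcut.

```python
import numpy as np

def log_law_speed(u_ref, z_ref, z, z0):
    """Neutral-stability log law: scale a reference wind speed measured at
    height z_ref over roughness length z0 to another height z (same roughness)."""
    return u_ref * np.log(z / z0) / np.log(z_ref / z0)

# Hypothetical example: 5 m/s measured at a 10 m airport mast (open terrain,
# z0 ~ 0.03 m), rescaled to 30 m. A real site transfer also needs the change
# of roughness and terrain, which is what the CFD framework addresses.
u_site = log_law_speed(u_ref=5.0, z_ref=10.0, z=30.0, z0=0.03)
print(f"log-law estimate at 30 m: {u_site:.2f} m/s")
```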
APA, Harvard, Vancouver, ISO, and other styles
6

Koop, Matthew J. "High-Performance Multi-Transport MPI Design for Ultra-Scale InfiniBand Clusters." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1243581928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Song, Huaguang. "Multi-scale data sketching for large data analysis and visualization." Scholarly Commons, 2012. https://scholarlycommons.pacific.edu/uop_etds/832.

Full text
Abstract:
Analysis and visualization of large data sets is time consuming and sometimes can be a very difficult process, especially for 3D data sets. Therefore, data processing and visualization techniques have often been used for different kinds of massive data analysis for efficiency and accuracy purposes. This thesis presents a multi-scale data sketching solution, specifically for large 3D scientific data, with the goal of supporting collaborative data management, analysis and visualization. The idea is to allow users to quickly identify interesting regions and observe significant patterns without directly accessing the raw data, since most of the information in raw form is not useful. This solution provides a fast way for users to choose the regions they are interested in and saves time. By preprocessing the data, our solution can sketch out the general regions of the 3D data, and users can decide whether they are interested in going further to analyze the current data. The key issue is to find efficient and accurate algorithms to detect boundary or region information for large 3D scientific data. Specific techniques and performance analysis are also discussed.
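To make the idea of multi-scale sketching concrete, here is a minimal sketch (not the author's algorithm) that reduces a 3D volume to progressively coarser block-averaged summaries, which a viewer could inspect before touching the raw data. The volume and block sizes are hypothetical.

```python
import numpy as np

def block_mean(volume, k):
    """Coarsen a 3D array by averaging non-overlapping k x k x k blocks.
    Assumes each dimension is divisible by k."""
    nx, ny, nz = volume.shape
    return volume.reshape(nx // k, k, ny // k, k, nz // k, k).mean(axis=(1, 3, 5))

# Hypothetical 128^3 volume sketched at two coarser scales.
rng = np.random.default_rng(0)
vol = rng.normal(size=(128, 128, 128))
sketch_8 = block_mean(vol, 8)     # 16^3 summary
sketch_32 = block_mean(vol, 32)   # 4^3 summary
print(sketch_8.shape, sketch_32.shape)
```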
APA, Harvard, Vancouver, ISO, and other styles
8

Zentner, John Marc. "A Design Space Exploration Process for Large Scale, Multi-Objective Computer Simulations." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11572.

Full text
Abstract:
The primary contributions of this thesis are associated with the development of a new method for exploring the relationships between inputs and outputs for large scale computer simulations. Primarily, the proposed design space exploration procedure uses a hierarchical partitioning method to help mitigate the curse of dimensionality often associated with the analysis of large scale systems. Closely coupled with the use of a partitioning approach is the problem of how to partition the system. This thesis also introduces and discusses a quantitative method developed to aid the user in finding a set of good partitions for creating partitioned metamodels of large scale systems. The new hierarchically partitioned metamodeling scheme, the lumped parameter model (LPM), was developed to address two primary limitations of current partitioning methods for large scale metamodeling. First, the LPM was formulated to negate the need to rely on variable redundancies between partitions to account for potentially important interactions. By using a hierarchical structure, the LPM addresses the impact of neglected, direct interactions by indirectly accounting for these interactions via the interactions that occur between the lumped parameters in intermediate- to top-level mappings. Secondly, the LPM was developed to allow for hierarchical modeling of black-box analyses that do not have available intermediaries around which to partition the system. The second contribution of this thesis is a graph-based partitioning method for large scale, black-box systems. The graph-based partitioning method combines the graph and sparse matrix decomposition methods used by the electrical engineering community with the results of a screening test to create a quantitative method for partitioning large scale, black-box systems. An ANOVA analysis of the results of a screening test can be used to determine the sparse nature of the large scale system. With this information known, the sparse matrix and graph theoretic partitioning schemes can then be used to create potential sets of partitions to use with the lumped parameter model.
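As an illustration of the graph-theoretic idea (a simplified stand-in, not the dissertation's actual decomposition method), the sketch below builds an input-interaction graph from a hypothetical screening result and groups inputs that share significant effects into partitions using connected components. All data in it are invented.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Hypothetical screening result: significance[i, j] = True if input i has a
# significant effect on output j (e.g., from an ANOVA of a screening design).
significance = np.array([
    [1, 1, 0, 0],   # input 0
    [1, 0, 0, 0],   # input 1
    [0, 0, 1, 1],   # input 2
    [0, 0, 0, 1],   # input 3
], dtype=bool)

# Two inputs interact (should live in the same partition) if they both drive
# at least one common output.
sig = significance.astype(int)
adjacency = ((sig @ sig.T) > 0).astype(int)
np.fill_diagonal(adjacency, 0)

n_parts, labels = connected_components(csr_matrix(adjacency), directed=False)
print(n_parts, labels)   # e.g. 2 partitions: {0, 1} and {2, 3}
```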
APA, Harvard, Vancouver, ISO, and other styles
9

Samadiani, Emad. "Energy efficient thermal management of data centers via open multi-scale design." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/37218.

Full text
Abstract:
Data centers are computing infrastructure facilities that house arrays of electronic racks containing high power dissipation data processing and storage equipment whose temperature must be maintained within allowable limits. In this research, the sustainable and reliable operation of the electronic equipment in data centers is shown to be possible through the Open Engineering Systems paradigm. A design approach is developed to bring adaptability and robustness, two main features of open systems, into multi-scale convective systems such as data centers. The presented approach is centered on the integration of three constructs: a) Proper Orthogonal Decomposition (POD) based multi-scale modeling, b) the compromise Decision Support Problem (cDSP), and c) robust design, to overcome the challenges in thermal-fluid modeling, handling multiple objectives, and managing inherent variability, respectively. Two new POD based reduced order thermal modeling methods are presented to simulate the multi-parameter dependent temperature field in multi-scale thermal/fluid systems such as data centers. The methods are verified to achieve an adaptable, robust, and energy efficient thermal design of an air-cooled data center cell subject to an annual increase in power consumption over the next ten years. Also, a simpler reduced order modeling approach centered on the POD technique with modal coefficient interpolation is validated against experimental measurements in an operational data center facility.
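For orientation, a POD basis is typically extracted from simulation snapshots via a singular value decomposition; the sketch below shows that generic step only (illustrative, not the specific reduced-order formulations developed in the dissertation), with invented snapshot data.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Compute a POD basis from a snapshot matrix whose columns are
    temperature fields sampled at different design/operating points."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1   # modes needed for target energy
    return U[:, :r], s

# Hypothetical snapshot matrix: 5000 grid points x 20 CFD snapshots.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 20))
basis, singular_values = pod_basis(X)
print(basis.shape)  # (5000, r) dominant spatial modes
```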
APA, Harvard, Vancouver, ISO, and other styles
10

Crowe, Robert A. "Design, construction and testing of a reduced-scale cascaded multi-level converter." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FCrowe.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, June 2003.
Thesis advisor(s): Robert W. Ashton, John G. Ciezki, Douglas J. Fouts. Includes bibliographical references (p. 125-126). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
11

Morin, Jeffrey W. (Jeffrey William). "Design, fabrication and mechanical optimization of multi-scale anisotropic feet for terrestrial locomotion." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/65314.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 67-69).
Multi-scale surface interaction methods have been studied to achieve optimal locomotion over surface features of differing length scales. It has been shown that anisotropy is a convenient way of transferring an undirected force to a preferred direction of movement. In this thesis, the fundamentals of friction were studied to achieve a better understanding of how to design multi-scaled robotic feet that use anisotropy for terrestrial locomotion. Static and kinetic friction coefficients were found for novel test geometries under varying load conditions. The test geometries were manufactured with materials of variable durometer and were tested using unconventional rheometry methodology. Test results were then compared to standard friction laws. As predicted, contact area was shown to have an effect on the friction forces experienced by the softer materials. The contact area effects were then modeled as Hertzian contacts for a given material. Verification of the area dependencies for the materials with adhesive effects was performed for the samples used in the friction tests. The samples were subjected to varying compressive force and images of the corresponding contact areas were obtained using an inverted microscope. The microscope images were then processed using MATLAB's image processing toolbox to find the actual contact area for the samples. The contact area results were shown to be in accordance with Hertzian contact principles. The effects of varying surface roughness were also studied for a given anisotropic arrangement of bristles. The array of bristles was used to provide propulsion to a controllable robot called BristleBot. The untethered nature of the robot allowed for unhindered velocity and force measurements that were used to analyze the effects of surface roughness. The force input for the robot was provided by two vibration motors that created an excitation which was then translated to horizontal movement by the anisotropic formation of the bristles. It was found that the BristleBot was able to achieve optimal locomotion when roughness conditions were minimized. Results of the anisotropic friction and adhesion tests were used to improve footpad development for soft robotic platforms.
by Jeffrey W. Morin.
S.M.
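For reference, the Hertzian sphere-on-flat relations mentioned in the abstract above predict contact radius and area from load, tip radius, and effective modulus; the numbers below are invented for illustration and are not measurements from the thesis.

```python
import numpy as np

def hertz_contact_area(force, radius, E1, nu1, E2, nu2):
    """Hertzian sphere-on-flat contact: returns contact radius a and area pi*a^2."""
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)  # effective modulus
    a = (3 * force * radius / (4 * E_star)) ** (1.0 / 3.0)   # contact radius
    return a, np.pi * a**2

# Hypothetical soft elastomer hemisphere (R = 2 mm, E = 1 MPa) on rigid glass.
a, area = hertz_contact_area(force=0.1, radius=2e-3, E1=1e6, nu1=0.49, E2=70e9, nu2=0.22)
print(f"contact radius = {a*1e3:.3f} mm, area = {area*1e6:.3f} mm^2")
```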
APA, Harvard, Vancouver, ISO, and other styles
12

Tartibu, Lagouge K. "A multi-objective optimisation approach for small-scale standing wave thermoacoustic coolers design." Thesis, Cape Peninsula University of Technology, 2014. http://hdl.handle.net/20.500.11838/1307.

Full text
Abstract:
Thesis submitted in fulfilment of the requirements for the degree Doctor of Technology: Mechanical Engineering in the Faculty of Engineering at the Cape Peninsula University of Technology 2014
Thermoacoustic heat engines provide a practical solution to the problem of heat management where heat can be pumped or spot cooling can be induced. This is a new and emerging technology with strong potential for the development of sustainable and renewable energy systems by utilising solar energy or waste heat. The most inhibiting characteristic of current thermoacoustic cooling devices is the lack of efficiency. Although simple to fabricate, the designing of thermoacoustic coolers involves significant technical challenges. The stack has been identified as the heart of the device where the heat transfer takes place. Improving its performance will make thermoacoustic technology more attractive. Existing efforts have not taken thermal losses to the surroundings into account in the derivation of the models. Although thermal losses can be neglected for large-scale applications, these losses need to be adequately covered for small-scale applications. This work explores the use of a multi-objective optimisation approach to model and to optimise the performance of a simple thermoacoustic engine. This study aims to optimise its geometrical parameters—namely the stack length, the stack height, the stack position, the number of channels and the plate spacing—involved in designing thermoacoustic engines. System parameters and constraints that capture the underlying thermoacoustic dynamics have been used to define the models. Acoustic work, viscous loss, conductive heat loss, convective heat loss and radiative heat loss have been used to measure the performance of the thermoacoustic engine. The optimisation task is formulated as a five-criterion mixed-integer nonlinear programming problem. Since we optimise multiple objectives simultaneously, each objective component has been given a weighting factor to provide appropriate user-defined emphasis. A practical example is provided to illustrate the approach. We have determined a design statement of a stack describing how the design would change if emphasis is placed on one objective in particular. We also considered optimisation of multiple objective components simultaneously and identified global optimal solutions describing the stack geometry using the augmented ε-constraint method. This approach has been implemented in GAMS (General Algebraic Modelling System). In addition, this work develops a novel mathematical programming model to optimise the performance of a simple thermoacoustic refrigerator. This study aims to optimise its geometrical parameters—namely the stack position, the stack length, the blockage ratio and the plate spacing—involved in designing thermoacoustic refrigerators. System parameters and constraints that capture the underlying thermoacoustic dynamics have been used to define the models. The cooling load, the coefficient of performance and the acoustic power loss have been used to measure the performance of the device. The optimisation task is formulated as a three-criterion nonlinear programming problem with discontinuous derivatives (DNLPs). Since we optimise multiple objectives simultaneously, each objective component has been given a weighting factor to provide appropriate user-defined emphasis. A practical example is provided to illustrate the approach. We have determined a design statement of a stack describing how the geometrical parameters described would change if emphasis is placed on one objective in particular.
We also considered optimisation of multiple objective components simultaneously and identified global optimal solutions describing the stack geometry using a lexicographic multi-objective optimisation scheme. The unique feature of the present mathematical programming approach is to compute the stack geometrical parameters describing thermoacoustic refrigerators for maximum cooling or maximum coefficient of performance. The present study highlights the importance of thermal losses in the modelling of small-scale thermoacoustic engines using a multi-objective approach. The proposed modelling approach for thermoacoustic engines provides a fast estimate of the geometry and position of the stack for maximum performance of the device. The use of a lexicographic method introduced in this study improves the modelling and the computation of optimal solutions and avoids subjectivity in the aggregation of weights to objective functions in the formulation of mathematical models. The unique characteristic of this research is the computing of all efficient non-dominated Pareto optimal solutions, allowing the decision maker to select the most efficient solution. The present research experimentally examines the influence of the stack geometry and position on the performance of thermoacoustic engines and thermoacoustic refrigerators. Thirty-six different cordierite honeycomb ceramic stacks are studied in this research. The influence of the geometry and the stack position has been investigated. The temperature difference across the stack and radiated sound pressure level at steady state are considered indicators of the performance of the devices. The general trends of the proposed mathematical programming approach results show satisfactory agreement with the experiment. One important aspect revealed by this study is that geometrical parameters are interdependent and can be treated as such when optimising the device to achieve its highest performance. The outcome of this research has direct application in the search for efficient stack configurations of small-scale thermoacoustic devices for electronics cooling.
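To illustrate the ε-constraint idea behind the multi-objective formulations (a generic sketch of the plain ε-constraint method, not the augmented variant implemented in GAMS for this thesis), one objective is minimised while the others are bounded; the toy objectives below are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins for two competing objectives, e.g. acoustic work (to be
# maximised, so its negative is minimised) and viscous loss (bounded by eps).
def f1(x):
    return -(x[0] * (1.0 - x[1]))       # pseudo "negative acoustic work"

def f2(x):
    return x[0] ** 2 + 0.5 * x[1] ** 2  # pseudo "viscous loss"

def eps_constraint(eps, x0=(0.5, 0.5)):
    """Minimise f1 subject to f2(x) <= eps; sweeping eps traces a Pareto front."""
    cons = [{"type": "ineq", "fun": lambda x: eps - f2(x)}]
    res = minimize(f1, x0, bounds=[(0.0, 1.0), (0.0, 1.0)],
                   constraints=cons, method="SLSQP")
    return res.x, f1(res.x), f2(res.x)

for eps in (0.1, 0.3, 0.6):
    x, v1, v2 = eps_constraint(eps)
    print(f"eps={eps:.1f} -> x={np.round(x, 3)}, f1={v1:.3f}, f2={v2:.3f}")
```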
APA, Harvard, Vancouver, ISO, and other styles
13

Delucia, Marco. "Développement de modèles multi-échelle et multi-physiques pour la conception de composites à base de liège." Thesis, Paris, HESAM, 2022. http://www.theses.fr/2022HESAE036.

Full text
Abstract:
This thesis focuses on the development of multi-scale and multi-physics models of cork agglomerate with the aim of providing a first contribution to characterising the behaviour of cork-based composites. This work contributes to the development of new design methods based on systematic approaches. The work focuses on white agglomerates. A first 2D numerical model of the agglomerate at the mesoscopic scale has been developed in order to understand the local thermomechanical behaviour of cork granules. This made it possible to study the influence of some design parameters on the thermoelastic properties of the agglomerate at the macroscopic scale. Following this first work, the 2D numerical model of the agglomerate has been generalised and a new 3D numerical model has been proposed. A homogenisation strategy, based on a stochastic approach, has been introduced to study the influence of the variability of the elastic properties of natural cork on the elastic properties of the agglomerate as a function of temperature. The results obtained through these works can be useful in the design of the agglomerate by the trial-and-error method. Finally, the pre-stress state in the agglomerate manufactured by compression moulding has been evaluated by simulating the compression process. The proposed approach is based on a simulation of the mould filling phase followed by a compression phase. The time-dependent stress-strain curves of the granulate, at different strain states, represent a first useful result as input data in a design approach that takes into account the effects of the manufacturing process.
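As a toy illustration of the stochastic homogenisation idea (not the finite-element scheme developed in the thesis), the sketch below samples granule stiffness from an assumed distribution and reports Voigt/Reuss bounds on the effective modulus of a binder-granule mixture; all material values are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def voigt_reuss_bounds(E_granule, E_binder, vf_granule):
    """Upper (Voigt) and lower (Reuss) bounds on the effective Young's modulus
    of a two-phase mixture with granule volume fraction vf_granule."""
    E_voigt = vf_granule * E_granule + (1 - vf_granule) * E_binder
    E_reuss = 1.0 / (vf_granule / E_granule + (1 - vf_granule) / E_binder)
    return E_voigt, E_reuss

# Hypothetical inputs: granule modulus lognormally distributed around 20 MPa,
# binder modulus 5 MPa, granule volume fraction 0.85.
E_granules = rng.lognormal(mean=np.log(20e6), sigma=0.2, size=10_000)
bounds = np.array([voigt_reuss_bounds(E, 5e6, 0.85) for E in E_granules])
print("Voigt bound: mean = %.1f MPa" % (bounds[:, 0].mean() / 1e6))
print("Reuss bound: mean = %.1f MPa" % (bounds[:, 1].mean() / 1e6))
```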
APA, Harvard, Vancouver, ISO, and other styles
14

Abdelaziz, Omar Abdelaziz Ahmed. "Development of multi-scale, multi-physics, analysis capability and its application to novel heat exchanger design and optimization." College Park, Md.: University of Maryland, 2009. http://hdl.handle.net/1903/9566.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2009.
Thesis research directed by: Dept. of Mechanical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
15

Gomez, Connie. "A unit cell based multi-scale modeling and design approach for tissue engineered scaffolds." Philadelphia, Pa. : Drexel University, 2007. http://hdl.handle.net/1860/1766.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Suwanpinij, Piyada [Verfasser]. "Multi-scale modelling of hot rolled dual-phase steels for process design / Piyada Suwanpinij." Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2013. http://d-nb.info/1030517010/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Reichl, John Vincent. "Design Optimization of Hybrid Switch Soft-Switching Inverters using Multi-Scale Electro-Thermal Simulation." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/64169.

Full text
Abstract:
The development of a fully automated tool that is used to optimize the design of a hybrid switch soft-switching inverter using a library of dynamic electro-thermal component models parameterized in terms of electrical, structural and material properties is presented. A multi-scale electro-thermal simulation approach is developed allowing a large number of parametric studies involving multiple design variables to be considered, drastically reducing simulation time. Traditionally, electro-thermal simulation and analysis have been used to predict the behavior of pre-existing designs. While the traditional approach to electro-thermal analysis can help shape cooling requirements and heat sink designs to maintain certain junction temperatures, there is no guarantee that the design under study is optimal. This dissertation uses electro-thermal simulation to guarantee an optimal design, thereby truly minimizing cooling requirements and improving device reliability. The proposed optimization tool is used to provide a step-by-step design optimization of a two-coupled magnetic hybrid soft-switching inverter. The soft-switching inverter uses a two-coupled magnetic approach for the transformer reset condition [1], a variable timing control for achieving ZVS over the entire load range [2], and utilizes a hybrid switch approach for the main device [3]. Design parameters such as device chip area, gate drive timing control and external resonant capacitor and inductor are used to minimize device loss subject to design constraints such as converter minimum on-time, maximum device chip area, and transformer reset condition. Since the amount of heat that is dissipated has been minimized, the optimal cooling requirements can be determined by reducing the cooling convection coefficients until desired junction temperatures are achieved. The optimized design is then compared and contrasted with an already existing design from the Virginia Tech FreedomCAR project using the Generation II module. It will be shown that the proposed tool improves the baseline design by 16% in loss and reduces the cooling requirements by 42%. Validation of the device model against measured data along with the procedures for device parameter extraction is also provided. Validation of the thermal model against measured data is also provided.
Ph. D.
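As a generic illustration of why device chip area is a natural design variable (a textbook-style tradeoff, not the dissertation's loss model), conduction loss scales roughly as 1/A while switching and charge-related losses grow with A, giving an interior optimum. All coefficients below are invented.

```python
import numpy as np

def device_loss(area_cm2, I_rms=50.0, r_sp=0.002, k_sw=1.0):
    """Toy loss model: conduction loss ~ I^2 * (specific on-resistance / area),
    switching loss ~ k_sw * area (charge-related losses grow with die size)."""
    p_cond = I_rms**2 * r_sp / area_cm2
    p_sw = k_sw * area_cm2
    return p_cond + p_sw

areas = np.linspace(0.2, 5.0, 200)
losses = device_loss(areas)
best = areas[np.argmin(losses)]
print(f"loss-minimising area (toy model): {best:.2f} cm^2")
# Analytical optimum of a/x + b*x is sqrt(a/b): here sqrt(I_rms^2 * r_sp / k_sw).
```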
APA, Harvard, Vancouver, ISO, and other styles
18

Becar, Joseph Samuel. "A Collaborative Conceptual Aircraft Design Environment for the Design of Small-Scale UAVs in a Multi-University Setting." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5857.

Full text
Abstract:
In today's competitive global market, there is an ever-increasing demand for highly skilled engineers equipped to perform in teams dispersed by geography over several time zones. Aerospace Partners for the Advancement of Collaborative Engineering (AerosPACE) is a senior design capstone program co-developed by academia and industry to help students develop the necessary skills to excel in the aerospace industry by challenging them to design, build, and fly a unique unmanned aerial vehicle (UAV). Students with little to no experience designing UAVs are put together in teams with their peers from geographically dispersed universities. This presents a significant challenge for the students in assimilating and applying aircraft design principles, using and interpreting output from analysis tools in multiple disciplines, and communicating their findings with their team members in an effective way. This thesis documents the development of a collaborative design tool for the generation and evaluation of small-scale electric-powered UAV concepts in AerosPACE. The integrated design and optimization software CCADE (Collaborative Conceptual Aircraft Design Environment) enables the immersion of team members from different universities in a software environment which shares design information and analysis results in a central database. Input files for use by open-source analysis tools are automatically generated, and output files are read in and displayed in a user-friendly graphical interface. Analysis codes for initial sizing, geometry, airfoil selection, aerodynamics, propulsion, stability and control, and structures are included in the software. Optimization methods are proposed for implementation in future versions of CCADE to explore the breadth of the design space and help students understand the sensitivity of their design to certain key parameters. Testing of CCADE by students during the 2014-2015 AerosPACE course showed an increased volume of explored concepts and prompted questions from students to fill gaps in understanding of fundamental principles. Suggestions for increased student acceptance and use of the software are given. Through its unique architecture and application, CCADE aims to increase productivity and teamwork among AerosPACE participants by increasing the number of concepts which can be fully analyzed, enabling broader exploration of the feasible design space to produce unique and innovative aircraft configurations, and allowing teammates to share thoughts and learning via a shared design and analysis workspace.
APA, Harvard, Vancouver, ISO, and other styles
19

Dasiyici, Mehmet Celal. "Multi-Scale Cursor: Optimizing Mouse Interaction for Large Personal Workspaces." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/32706.

Full text
Abstract:
As increasingly large displays are integrated into personal workspaces, mouse-based interaction becomes more problematic. Users must repeatedly "clutch" the mouse for long distance movements [61]. The visibility of the cursor is also problematic on large screens, since the percentage of the screen space that the cursor takes from the whole display gets smaller. We test multi-scale approaches to mouse interaction that utilize dynamic speed and size techniques to grow the cursor larger and faster for long movements. Using Fitts' Law methods, we experimentally compare different implementations to optimize the mouse design for large displays and to test how they scale to large displays. We also compare them to techniques that integrate absolute pointing with head tracking. Results indicate that with some implementation level modifications the mouse device can scale well up to even a 100 megapixel display, with lower mean movement times as compared to integrating absolute pointing techniques with mouse input, while maintaining the fast performance of the typical mouse configuration on small screens for short distance movements. Designs that have multiple acceleration levels and 4x maximum acceleration reduced the average number of clutching actions to less than one per task on a 100 megapixel display. Dynamic size cursors statistically improve pointing performance. Results also indicated that dynamic speed transitions should be as smooth as possible, without steps of more than a 2x increase in speed.
Master of Science
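For context, the Fitts' Law model referenced in the abstract above predicts movement time from target distance and width; the sketch below uses the common Shannon formulation with made-up regression coefficients purely for illustration.

```python
import math

def fitts_movement_time(distance, width, a=0.2, b=0.15):
    """Shannon formulation of Fitts' Law: MT = a + b * log2(D/W + 1).
    a and b are device- and user-specific regression constants (the values
    here are invented, not taken from the thesis experiments)."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A short traversal on a laptop screen vs. a long one on a wall-sized display
# (distances and width in pixels).
print(f"short move : {fitts_movement_time(200, 20):.2f} s")
print(f"long move  : {fitts_movement_time(8000, 20):.2f} s")
```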
APA, Harvard, Vancouver, ISO, and other styles
20

Escamez, Guillaume. "AC losses in superconductors : a multi-scale approach for the design of high current cables." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT087/document.

Full text
Abstract:
The work reported in this PhD thesis deals with AC losses in superconducting materials for large-scale applications such as cables or magnets. Numerical models involving FEM and integral methods have been developed to solve the time-transient electromagnetic distributions of field and current density, with the peculiarity of the superconducting constitutive E-J equation. Two main conductors have been investigated for two ranges of superconducting cables. First, REBCO superconductors working at 77 K are studied and a new architecture of conductor (round wires) is proposed for 3 kA cables. Secondly, for very high current cables, 3-D simulations on MgB2 wires are approached and solved using FEM modelling. The following chapter introduces new developments used for the calculation of AC losses in DC cables. The thesis ends with the use of the developed numerical model on a practical example from the BEST-PATHS project: a 10 kA MgB2 demonstrator.
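For readers unfamiliar with the constitutive law mentioned above, superconductor AC-loss models commonly use a power-law E-J relation; the form below is the standard one found in the literature, not a formula quoted from the thesis.

```latex
% Standard power-law constitutive relation used in superconductor AC-loss models:
% E_c is the electric-field criterion (typically 1 uV/cm), J_c the critical
% current density, and n the power-law index (n -> infinity recovers the
% critical-state model).
\[
  \mathbf{E}(\mathbf{J}) \;=\; E_c \left( \frac{\lVert \mathbf{J} \rVert}{J_c} \right)^{n}
  \frac{\mathbf{J}}{\lVert \mathbf{J} \rVert}
\]
```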
APA, Harvard, Vancouver, ISO, and other styles
21

Duro, Royo Jorge. "Towards Fabrication Information Modeling (FIM) : workflow and methods for multi-scale trans-disciplinary informed design." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101843.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 67-70).
This thesis sets the stage for Fabrication Information Modeling (FIM); a design approach for enabling seamless design-to-production workflows that can derive complex designs fusing advanced digital design technologies associated with analysis, engineering and manufacturing. Present day digital fabrication platforms enable the design and construction of high-resolution and complex material distribution structures. However, virtual-to-physical workflows and their associated software environments are yet to incorporate such capabilities. As preliminary methods towards FIM I have developed four computational strategies for the design and digital construction of custom systems. These methods are presented in this thesis in the context of specific design challenges and include a biologically driven fiber construction algorithm; an anatomically driven shell-to-wearable translation protocol; an environmentally-driven swarm printing system; and a manufacturing-driven hierarchical fabrication platform. I discuss and analyze these four challenges in terms of their capabilities to integrate design across media, disciplines and scales through the concepts of multidimensionality, media-informed computation and trans-disciplinary data in advanced digital design workflows. With FIM I aim to contribute to the field of digital design and fabrication by enabling feedback workflows where materials are designed rather than selected; where the question of how information is passed across spatiotemporal scales is central to design generation itself; where modeling at each level of resolution and representation is based on various methods and carried out by various media or agents within a single environment; and finally, where virtual and physical considerations coexist as equals.
by Jorge Duro Royo.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
22

Sharma, Roopali. "Design of a pixel scale sample-and-hold circuit suitable for integration in multi-technology FPGA." Cincinnati, Ohio : University of Cincinnati, 2006. http://www.ohiolink.edu/etd/view.cgi?acc%5Fnum=ucin1141328237.

Full text
Abstract:
Thesis (M.S.)--University of Cincinnati, 2006.
Title from electronic thesis title page (viewed Apr. 18, 2006). Includes abstract. Keywords: Photodetector, Sample and Hold Circuit, Multi-technology Devices. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
23

Patel, Prerna D. "Design of a pixel scale optical power meter suitable for incorporation in a multi-technology FPGA." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1066421274.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Navarro, Rosa Jennifer. "Framework for sustainability assessment of industrial processes with multi-scale technology at design level: microcapsules production process." Doctoral thesis, Universitat Rovira i Virgili, 2009. http://hdl.handle.net/10803/8572.

Full text
Abstract:
In a world with limited resources and serious environmental, social and economic impacts, a more sustainable lifestyle becomes more important every day. Therefore, the general objective of this work is to develop a methodological procedure for eco-efficiency and sustainability assessment of industrial processes with multi-scale technology at the design level. The methodology developed follows the ISO 14040 series standard for environmental LCA. To integrate the three pillars of sustainability, the analytical hierarchical process was used. The results are represented in a triple bottom line framework. The methodology was applied to the case study "production of perfume-containing microcapsules" and different scenarios were assessed and compared. Several sustainability indicators were chosen to analyze the impacts. The results showed that this methodology can be used as a decision-making tool for sustainability reporting. It can be applied to any process by choosing in each case the corresponding set of inventory data and sustainability impact indicators.
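As an illustration of the hierarchical weighting step (commonly known as the analytic hierarchy process; this is a generic sketch with an invented pairwise-comparison matrix, not data from the thesis), criterion weights are taken from the principal eigenvector of the comparison matrix.

```python
import numpy as np

# Hypothetical pairwise comparison of the three sustainability pillars
# (environmental, economic, social) on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                     # normalised priority vector

consistency_index = (eigvals.real[k] - len(A)) / (len(A) - 1)
print("weights:", np.round(weights, 3))
print("consistency index:", round(consistency_index, 4))
```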
APA, Harvard, Vancouver, ISO, and other styles
25

Sharma, Roopali. "Design of a pixel scale optical sample-and-hold circuit suitable for integration in multi-technology FPGA." University of Cincinnati / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1141328237.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Qiu, Fengjing. "Analog very large scale integrated circuits design of two-phase and multi-phase voltage doublers with frequency regulation." Ohio : Ohio University, 1999. http://www.ohiolink.edu/etd/view.cgi?ohiou1175632756.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Kapil, Ankur. "Multi-scale simulation tools for design and decision making of sorption enhanced reaction processes in energy and biochemical systems." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.507227.

Full text
Abstract:
Process intensification by integration of catalytic and adsorptive functionalities has tremendous potential in energy and biochemical systems. In-situ or integrated catalytic adsorption can lead to higher yield, purity and selectivity compared to conventional reaction processes. The aim of this work is to develop a) multi-scale methodologies to determine the optimal process variables for maximum performance, and b) new tools for the design and decision making of high performance sorption enhanced reaction processes. In this work, a unified framework has been developed that integrates a continuum model at the bulk scale with a particle-level diffusion-reaction-sorption model for a fixed bed reactor with multifunctional particles. At the bulk scale the system is sensitive to various operating parameters such as wall temperature, bed voidage and feed compositions. Two important particle level characteristics are identified: the distribution of catalyst and sorbent inside particles and the geometry of particle pores, such as the ratio of pore radius to tortuosity. It has been demonstrated that considering an effective diffusivity at the particle pore level gives better control over the intensification of sorption enhanced reaction processes. The adsorption capacity of a sorption enhanced reaction bed is limited. The yield and purity from the fixed bed decrease once the adsorption capacity of the bed is exhausted. The regeneration of the used bed along with recycle of the products can lead to continuous production of high purity products. The simulated moving bed reactor is one such process for the production of high quality products by continuous regeneration of a series of fixed bed reactor-adsorbers. In this work, we have developed rigorous dynamic simulation frameworks to achieve efficient operation of industrially relevant energy generation processes, such as steam methane reforming (SMR) for hydrogen production and esterification reactions for biodiesel production. The effect of various operating conditions such as switching time, feed flow rate, eluent flow rate, and length of the unit on the purity and conversion was systematically investigated. Thus optimal operating conditions for high conversion and purity from these processes are achieved. Multi-scale simulation is an emerging discipline that spans different scales, varying from the surface level to the continuum level. A multi-scale model can predict the properties of catalyst and adsorbent for optimum operation of the sorption enhanced reaction processes discussed above. In addition to the dynamic simulation methodology, two novel multi-scale methodologies have been proposed: a) a coarse-graining method based on the wavelet transform of surface-level Monte Carlo simulations, and b) a hybrid methodology combining continuum equations at the bulk scale and Monte Carlo simulations at the surface scale. The coarse-grained MC simulations are applied to example problems of CO oxidation on the surface of a catalyst and to a generic set of sequential linear reactions with three components, while the hybrid Monte Carlo mean-field methodology is applied to cell cycle data to study the activation of cancer via the mitogen-activated protein kinase pathway. Furthermore, the latter methodology was applied to study the mechanisms of biodiesel transesterification reactions.
APA, Harvard, Vancouver, ISO, and other styles
28

Hossain, Md Amjad. "Design of crowd-scale multi-party telepresence system with distributed multipoint control unit based on peer to peer network." Kent State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=kent1606570495229229.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

DiBiasio, Christopher M. (Christopher Michael). "Concept synthesis and design optimization of meso-scale, multi-degree-of-freedom precision flexure motion systems with integrated strain-based sensors." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61518.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 171-178).
The purpose of this research was to generate the knowledge required to 1) identify where and how to best place strain-based sensors in multi-degree-of-freedom (MDOF) flexure systems and 2) design a flexure system with optimal topology/size/shape for precision equipment and instrumentation. The success of many application areas (e.g. probe-based nanomanufacturing) hinges on the ability to design and realize low-cost, high-performance MDOF nanopositioners. The repeatability and accuracy of precision flexure-based instruments depend upon the performance of the flexure mechanism (e.g. bearings, actuators, and structural elements) and a metrology system (e.g. sensors). In meso-scale MDOF nanopositioners the sensing system must be integrated into the structure of the nanopositioner. The only viable candidate for small-scale, low-cost sensing is strain-based sensors, specifically piezoresistive sensors. Strain-based sensing introduces strong coupling and competition between the metrology and mechanical subsystems because these subsystems share a load path. Traditional tools for flexure system and compliant mechanism synthesis are not capable of simultaneously optimizing the mechanical and sensing subsystems. The building block synthesis approach developed in this work is the only tool capable of designing compliant mechanisms with integrated strain-based sensing. Building block modeling allows for rapid synthesis and vetting of concepts. This approach also allows the designer to check concept feasibility, identify performance limits and tradeoffs, and obtain 1st order estimates of beam geometry. In short, this enables one to find an optimal design and set first order design parameters. The utility of the preceding is demonstrated via a case study. A meso-scale 6-DOF nanopositioner was designed via the building block synthesis approach. Polysilicon piezoresistors were surface micromachined onto a microfabricated silicon nanopositioner. The nanopositioner was actuated with moving magnet Lorentz force actuators. The final prototype costs less than $300 US and was found to have tens of µm of range, nm-level resolution, and a 100 Hz first mode. The accuracy of the sensing system as determined by existing metrology equipment is better than 17% in-plane and better than 30% out-of-plane.
by Christopher M. DiBiasio.
Ph.D.
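As background on strain-based sensing (a generic small-strain relation, not the dissertation's sensor model), a piezoresistor's fractional resistance change is approximately the gauge factor times the strain; the values below are illustrative only.

```python
def delta_resistance(gauge_factor, strain, R0):
    """Small-strain piezoresistive response: dR = GF * strain * R0."""
    return gauge_factor * strain * R0

# Hypothetical polysilicon piezoresistor: GF ~ 25, R0 = 2 kOhm, 100 microstrain.
dR = delta_resistance(gauge_factor=25.0, strain=100e-6, R0=2000.0)
print(f"resistance change: {dR:.2f} ohm ({dR / 2000.0 * 100:.3f} %)")
```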
APA, Harvard, Vancouver, ISO, and other styles
30

Ramos, Jubierre Javier [Verfasser], André [Akademischer Betreuer] [Gutachter] Borrmann, and Christian [Gutachter] Koch. "Consistency preservation methods for multi-scale design of subway infrastructure facilities / Javier Ramos Jubierre ; Gutachter: André Borrmann, Christian Koch ; Betreuer: André Borrmann." München : Universitätsbibliothek der TU München, 2017. http://d-nb.info/1127728598/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Pröbstl, Alma [Verfasser], Samarjit [Akademischer Betreuer] Chakraborty, Andreas [Gutachter] Jossen, Qi [Gutachter] Zhu, and Samarjit [Gutachter] Chakraborty. "Multi-Scale System Design and Management for Battery Health Optimization / Alma Pröbstl ; Gutachter: Andreas Jossen, Qi Zhu, Samarjit Chakraborty ; Betreuer: Samarjit Chakraborty." München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/1220320706/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Coull, Renee Katherine. "The impact of multi-stakeholder planning and design processes on large-scale residential developments : an evaluation of the Rodgers Creek Area development, British Columbia." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/23490.

Full text
Abstract:
This thesis evaluates the planning and design process for the Rodgers Creek development in West Vancouver, British Columbia. The site spans 215 acres and is part of a series of residential projects by British Pacific Properties (BPP). The planning and design process for the Rodgers Creek Area Development Plan (ADP) began in 2005 and the plan was approved in 2008. The approval of the development within the lifecycle of Council (3 years) was a critical underlying factor in the process. The ADP accommodates 736 dwelling units in six neighbourhoods connected by a unique mountain pathway, allowing for the preservation of 55% of green space on the site. The analysis and evaluation of the planning and design process is based on a variety of qualitative and quantitative data, including academic and practitioner sources as well as in-depth interviews with key stakeholders. The Rodgers Creek planning and design process is path-breaking and inventive in its use of a high level of public participation. The process was multi-layered and included detailed background work, integrated technical sessions, a unique project-specific working group comprising unpaid industry advisors, and innovative review, evaluation and implementation tools. The process was based on transparency, openness, trust, flexibility, inclusiveness and collaboration. By involving all stakeholders, it bridged the gap between the community, local government and the private sector. The participatory approach led to the development of a plan that produced a more sustainable model of development that was accepted by the community and easy to endorse for elected officials. Urban development problems arising from conventional processes call for new planning and design processes. The results of the evaluation suggest that better processes may lead to better outcomes. The Rodgers Creek development points to the importance of engaging stakeholders and forming an integrated team. The concept of a tailored working group to respond to the characteristics of the development could be transported to other large-scale residential developments to produce better forms of development. The success of the Rodgers Creek planning and design process should provide greater confidence for future stakeholders to engage in a similar process.
APA, Harvard, Vancouver, ISO, and other styles
33

Rodriguez, Pila Ernesto. "Contribution aux choix de modélisations pour la conception de structures multi-échelle sous incertitudes." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0222/document.

Full text
Abstract:
The design of multi-scale structures is based on experimental and predictive modelling. To reach high levels of precision, this modelling rests on numerous experimental tests and on sophisticated analytical or numerical developments that integrate knowledge about the quantities of interest. Integrating knowledge reduces the uncertainty on the quantities of interest but significantly increases the modelling cost, a major component of the design cost. The designer must therefore be able to assess how relevant each piece of added knowledge is to the prediction of the quantities of interest and what it costs. The research carried out in this work develops a methodology for design assistance under uncertainty that lets the designer choose combinations of predictive and experimental models, called modelling paths, offering different compromises between modelling cost and uncertainty on the quantities of interest. The work is based on a pyramidal representation of experimental and predictive modelling. Aleatory and epistemic uncertainties related to materials, models and geometrical tolerances are aggregated and propagated up the pyramid to the quantities of interest of the structure. An adaptive method based on fuzzy logic is proposed to estimate the modelling cost. The multi-objective problem of minimizing both the uncertainty on the quantities of interest and the modelling cost is solved with the NSGA-II genetic algorithm, which identifies robust optimized modelling paths. The methodology is applied to a thick composite vessel for hydrogen storage. It demonstrates that the experimental and predictive modelling carried out to obtain the burst pressure of the vessel can be rationalized with controlled precision. In a second step, the methodology is used to produce redesign solutions for vessels with larger or smaller volumes and different target burst pressures. The robust modelling paths obtained deliver design solutions adapted to the redesign requirements, with a controlled modelling cost and a managed level of uncertainty.
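
The abstract frames modelling paths as compromises between modelling cost and uncertainty, resolved with the NSGA-II genetic algorithm. As a minimal illustrative sketch (not the thesis model), the Python snippet below shows the kind of non-dominated front such a multi-objective search returns to the designer; the path names, costs and uncertainty values are entirely hypothetical.

```python
# Hypothetical modelling paths: each combines experimental and predictive
# models and carries an estimated modelling cost (arbitrary units) and a
# resulting relative uncertainty on the quantity of interest (e.g. the
# burst pressure). All names and numbers below are invented.
paths = {
    "coupon tests + analytical laminate model":  (1.0, 0.30),
    "coupon tests + 2D finite element model":    (2.5, 0.18),
    "coupon tests + 3D finite element model":    (7.0, 0.15),  # dominated below
    "subscale tests + 2D finite element model":  (4.0, 0.12),
    "subscale tests + 3D finite element model":  (6.5, 0.08),
    "full-scale test + 3D finite element model": (9.0, 0.07),
}

def dominates(a, b):
    """a dominates b if it is no worse on both objectives (cost, uncertainty)
    and strictly better on at least one; both objectives are minimised."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Non-dominated modelling paths: the cost/uncertainty trade-off front that a
# multi-objective search such as NSGA-II would present to the designer.
pareto = [name for name, obj in paths.items()
          if not any(dominates(other, obj)
                     for other_name, other in paths.items() if other_name != name)]

for name in pareto:
    cost, uncertainty = paths[name]
    print(f"{name}: cost = {cost}, uncertainty = {uncertainty}")
```
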
APA, Harvard, Vancouver, ISO, and other styles
34

Pindat, Cyprien. "A Content-Aware Design Approach to Multiscale Navigation." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-01016710.

Full text
Abstract:
Computer screens are very small compared to the large information spaces that arise in many domains. Visualizing such datasets requires multiscale navigation capabilities, enabling users to switch between zoomed-in detailed views and zoomed-out contextual views of the data. Designing interfaces that allow users to quickly identify objects of interest, get detailed views of those objects, relate them and put them in a broader spatial context raises challenging issues. Multi-scale interfaces have been the focus of much research effort over the last twenty years. There are several design approaches to address multiscale navigation issues. In this thesis, we review and categorize these approaches according to their level of content awareness. We identify two main approaches: content-driven, which optimizes interfaces for navigation in specific content; and content-agnostic, which applies to any type of data. We introduce the content-aware design approach, which dynamically adapts the interface to the content. This design approach can be used to design multiscale navigation techniques in both 2D and 3D spaces. We introduce Arealens and Pathlens, two content-aware fisheye lenses that dynamically adapt their shape to the underlying content to better preserve the visual aspect of objects of interest. We describe the techniques and their implementation, and report on a controlled experiment that evaluates the usability of Arealens compared to regular fisheye lenses, showing clear performance improvements with the new technique for a multiscale visual search task. We introduce a new distortion-oriented presentation library enabling the design of fisheye lenses featuring several foci of arbitrary shapes. Then, we introduce Gimlens, a multi-view detail-in-context visualization technique that enables users to navigate complex 3D models by drilling holes into their outer layers to reveal objects buried in the scene. Gimlens adapts to the geometry of objects of interest so as to better manage visual occlusion, selection mechanisms and the coordination of lenses.
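
For readers unfamiliar with distortion-oriented views, the sketch below gives a minimal radial magnification profile of an ordinary, content-agnostic fisheye lens, the baseline that content-aware lenses such as Arealens refine by adapting the lens shape to the underlying objects. The radii and the magnification factor are illustrative values, not parameters from the thesis.

```python
def fisheye_magnification(d, focus_radius=50.0, lens_radius=150.0, max_mag=4.0):
    """Magnification applied to a point at distance d (pixels) from the lens
    centre: uniform magnification inside the focus, linear fall-off to 1 at
    the lens rim, and no distortion outside the lens."""
    if d <= focus_radius:
        return max_mag
    if d >= lens_radius:
        return 1.0
    t = (d - focus_radius) / (lens_radius - focus_radius)  # 0 at focus rim, 1 at lens rim
    return max_mag + t * (1.0 - max_mag)

for d in (0, 50, 100, 150, 200):
    print(f"distance {d:>3}: magnification {fisheye_magnification(d):.2f}")
```
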
APA, Harvard, Vancouver, ISO, and other styles
35

Harmes, Riccardo Lucian Paul. "Localism and the design of political systems." Thesis, University of Exeter, 2017. http://hdl.handle.net/10871/30140.

Full text
Abstract:
Localism places a special value on the local, and is increasingly prominent as a political doctrine. The literature suggests localism operates in three ways: bottom-up, top down and mutualistic. To assess its impact, localism needs to be seen within the broader context of multi-level governance. Here localism is examined in relation to three major themes: place, public value (PV), and institutional design. Regarding place, a key distinction is drawn between old and new localism. Old localism is about established local government, while new localism highlights the increasing room for manoeuvre that localities have in contemporary politics. This enables them to influence wider power structures, for example through trans-local organizing. With regard to public value, localist thinking makes a key contribution to core PV domains such as sustainability, wellbeing and democracy, as well as to others like territorial cohesion and intergovernmental mutuality. As for institutional design, the study is particularly concerned with ‘sub-continental’ political systems. A set of principles for the overall design of such systems is proposed, together with a framework of desirable policy outcomes at the local level. This can be used to evaluate how effective political systems are at creating public value in local settings. The thesis presents a comparative study of localism in two significant, sub-continental clusters: India/Kerala/Kollam and the EU/UK/England/Cornwall. Both can be seen as contrasting ‘exemplars’ of localism in action. In India, localism was a major factor in the nationwide local self-government reforms of 1993 and their subsequent enactment in the state of Kerala. In the EU, localism has been pursued through an economic federalism based on regions and sub-regions. This is at odds with the top-down tradition in British politics. The tension between the two approaches is being played out currently in the peripheral sub-region of Cornwall/Isles of Scilly. Cornwall’s dilemma has been sharpened by Britain’s recent decision to leave the EU. The thesis considers the wider implications of the case studies, and presents some proposals for policymakers and legislators to consider, together with suggestions for further research.
APA, Harvard, Vancouver, ISO, and other styles
36

Basirat, Farzad. "Process Models for CO2 Migration and Leakage : Gas Transport, Pore-Scale Displacement and Effects of Impurities." Doctoral thesis, Uppsala universitet, Luft-, vatten och landskapslära, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-315490.

Full text
Abstract:
Geological Carbon Storage (GCS) is considered one of the key techniques for reducing the rate of atmospheric CO2 emissions and thereby contributing to the control of global warming. A successful GCS project requires the formation to be capable of trapping CO2 over the long term. In this context, the processes related to CO2 trapping, as well as possible leakage of CO2 to the near-surface environment, need to be understood. The overall aim of this thesis is to understand the flow and transport of CO2 through porous media in the context of geological storage. The entire range of scales, from the pore scale through the laboratory and field-experiment scales to the industrial scale of a CO2 injection operation, is addressed, and some of the key processes are investigated by means of experiments and modeling. First, a numerical model and a laboratory experimental setup were developed to investigate CO2 gas flow under near-surface conditions, mimicking the system should a leak from the storage formation occur. The setup specifically addressed the coupled flow and mass transport of gaseous CO2 in both the porous domain and the free-flow domain above it. The comparison of experiments and modelling results showed very good agreement, indicating that the model developed can be applied to evaluate monitoring and surface detection of potential CO2 leakage. Second, the field-scale CO2 injection test carried out in a shallow aquifer in Maguelone, France, was analyzed and modeled. The results showed that Monte Carlo simulations accounting for the heterogeneity of the permeability field captured the key observations of the monitoring data, while a homogeneous model could not represent them. Third, a numerical model based on the phase-field method was developed, and simulations were carried out addressing the effect of wettability on CO2-brine displacement at the pore scale. The results show that strongly water-wet reservoirs provide a better potential for dissolution trapping, owing to the larger interface between CO2 and brine at very low contact angles. The results further showed that strongly water-wet conditions also imply a strong capillary effect, which is important for residual trapping of CO2. Finally, model development and simulations addressed large-scale geological storage of CO2 in the presence of impurity gases in the CO2-rich phase. The results showed that the impurity gases N2 and CH4 affected the spatial distribution of the gas (the supercritical CO2-rich phase), so that a larger reservoir volume is needed in comparison to the pure CO2 injection scenario. In addition, solubility trapping increased significantly in the presence of N2 and CH4.
APA, Harvard, Vancouver, ISO, and other styles
37

Wagner, Julie. "A body-centric framework for generating and evaluating novel interaction techniques." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00772138.

Full text
Abstract:
This thesis introduces BodyScape, a body-centric framework that accounts for how users coordinate their movements within and across their own limbs in order to interact with a wide range of devices, across multiple surfaces. It introduces a graphical notation that describes interaction techniques in terms of (1) motor assemblies responsible for performing a control task (input motor assembly) or bringing the body into a position to visually perceive output (output motor assembly), and (2) the movement coordination of motor assemblies, relative to the body or fixed in the world, with respect to the interactive environment. This thesis applies BodyScape to 1) investigate the role of support in a set of novel bimanual interaction techniques for hand-held devices, 2) analyze the competing effect across multiple input movements, and 3) compare twelve pan-and-zoom techniques on a wall-sized display to determine the roles of guidance and interference on performance. Using BodyScape to characterize interaction clarifies the role of device support on the user's balance and subsequent comfort and performance. It allows designers to identify situations in which multiple body movements interfere with each other, with a corresponding decrease in performance. Finally, it highlights the trade-offs among different combinations of techniques, enabling the analysis and generation of a variety of multi-surface interaction techniques. I argue that including a body-centric perspective when defining interaction techniques is essential for addressing the combinatorial explosion of interactive devices in multi-surface environments.
APA, Harvard, Vancouver, ISO, and other styles
38

Hecht, Martin. "Optimierung von Messinstrumenten im Large-scale Assessment." Doctoral thesis, Humboldt-Universität zu Berlin, Lebenswissenschaftliche Fakultät, 2015. http://dx.doi.org/10.18452/17270.

Full text
Abstract:
Measurement instruments are essential elements in the acquisition of knowledge in scientific research. A special feature of measurement instruments in large-scale assessments of student achievement is that they are usually constructed anew for each study and that test takers receive different versions of the test. This gives rise to potential threats to the accuracy and validity of the measurement. To minimize such threats, (a) sources of potential measurement bias and (b) strategies for optimizing the measurement instruments should be explored. The present dissertation therefore investigates several specific questions within these two research areas.
APA, Harvard, Vancouver, ISO, and other styles
39

Rodosik, Sandrine. "Etude de l'impact d'architectures fluidiques innovantes sur la gestion, la performance et la durabilité de systèmes de pile à combustible PEMFC pour les transports." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAI090.

Full text
Abstract:
Although hydrogen is booming, fuel cell electric vehicles are still rare on the market. Their volume and complexity remain major hurdles to the development of PEM (Proton Exchange Membrane) fuel cell systems for transport applications. This PhD work studies two new fluidic architectures that both simplify the system and reduce its volume: cathodic air recirculation, and the Ping-Pong, a fluidic architecture that alternates the fuel feed location in the stack during operation. The performance of both architectures was studied experimentally under automotive conditions on a 5 kW system. A multi-scale analysis was conducted to compare the performance of the system and the stack, and the homogeneity of the cell voltages within the stack, against other known architectures. The study was completed with a durability test in Ping-Pong operation to evaluate the impact of this new operating mode on the stack. Again, the experimental data were analyzed at different scales, down to the post-mortem examination of the membrane-electrode assemblies.
APA, Harvard, Vancouver, ISO, and other styles
40

Yang, Bin. "Contribution to a kernel of symbolic asymptotic modeling software." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2055/document.

Full text
Abstract:
This thesis is dedicated to developing a kernel of MEMSALab, a symbolic asymptotic modeling software package that will be used for the automatic generation of asymptotic models for arrays of micro- and nanosystems. Unlike traditional software packages aimed at numerical simulation with pre-built models, the purpose of MEMSALab is to derive asymptotic models for input partial differential equations by taking their own features into account. An approach called "by extension-combination" is first proposed for the derivation of homogenization models; it allows models to be constructed incrementally, so that the desired features are included step by step. It relies on a combination of asymptotic methods from the theory of partial differential equations with term-rewriting techniques from computer science, and focuses on model derivation for families of PDEs rather than for each equation separately. A homogenization model of the electrothermoelastic equation defined in a multi-layered thin domain is derived by applying the mathematical method used in this approach. Finally, an optimization tool has been developed by combining SIMBAD, an in-house optimization software toolbox, with COMSOL-MATLAB simulation; it has been applied to study the optimal design of a class of scanning thermal microscopy (SThM) probes and to establish general rules for their design.
APA, Harvard, Vancouver, ISO, and other styles
41

Lauzet, Nicolas. "Prise en compte cumulée du réchauffement climatique et des surchauffes urbaines en phase amont de conception frugale des bâtiments centrée sur le confort des occupants : des propositions méthodologiques." Thesis, Lorient, 2019. http://www.theses.fr/2019LORIS551.

Full text
Abstract:
While the IPCC's climate projections are increasingly refined and the phenomena related to the Urban Heat Island (UHI) are increasingly well understood, neither is yet taken into account in current building design. How can global warming and urban overheating be taken into account during the building design phase? What are the impacts on the thermal behaviour of buildings? What comfort criteria can guide the choices of designers motivated by frugality? The first part presents how building thermal simulation "sees" the climate. Currently, design offices use weather files averaged over 10 or 30 years of measurements; such files are representative of the climate of an area but remove extreme conditions such as heat waves, which are and will increasingly be a health risk for vulnerable people. A methodology is proposed for choosing a real meteorological year and repositioning it with respect to the IPCC climate projections. In the second part, the influence of the type of weather file used on dynamic thermal simulation results is studied for a residential building located in the Confluence district in Lyon. This study focuses on the analysis of summer comfort, which is the major issue for adaptation to current and future climates, and also contains methodological proposals for analysing health risks in indoor environments during extreme heat-wave events. Finally, in the third part, we study the possibility of using the results of urban microclimate tools as weather input for building energy models. A chaining experiment between the CitySim and CIM tools, developed at EPFL Lausanne, is conducted on the same Confluence district in Lyon.
APA, Harvard, Vancouver, ISO, and other styles
42

Ryu, Kyeong Keol. "Automated Bus Generation for Multi-processor SoC Design." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5076.

Full text
Abstract:
In the design of a multi-processor System-on-a-Chip (SoC), the bus architecture typically comes to the forefront because the system performance is not dependent only on the speed of the Processing Elements (PEs) but also on the bus architecture in the system. An efficient bus architecture with effective arbitration for reducing contention on the bus plays an important role in maximizing performance. Therefore, among many issues of multi-processor SoC research, we focus on two issues related to the bus architecture in this dissertation. One issue is how to quickly and easily design an efficient bus architecture for an SoC. The second issue is how to quickly explore the design space across performance influencing factors to achieve a high performance bus system. The objective of this research is to provide a Computer-Aided Design (CAD) tool with which the user can quickly explore System-on-a-Chip (SoC) bus design space in search of a high performance SoC bus system. From a straightforward description of the numbers and types of Processing Elements (PEs), non-PEs, memories and buses (including, for example, the address and data bus widths of the buses and memories), our Bus Synthesis tool, called BusSynth, generates a Register-Transfer Level (RTL) Verilog Hardware Description Language (HDL) description of the specified bus system. The user can utilize this RTL Verilog in bus-accurate simulations to more quickly arrive at an efficient bus architecture for a multi-processor SoC. The methodology we propose gives designers a great benefit in fast design space exploration of bus systems across a variety of performance influencing factors such as bus types, PE types and software programming styles (e.g., pipelined parallel fashion or functional parallel fashion). We also show that BusSynth can efficiently generate bus systems in a matter of seconds as opposed to weeks of design effort to integrate together each system component by hand. Moreover, unlike the previous related work, BusSynth can support a wide variety of PEs, memory types and bus architectures (including a hybrid bus architecture) in search of a high performance SoC.
APA, Harvard, Vancouver, ISO, and other styles
43

Yin, Weiwei. "The role and regulatory mechanisms of nox1 in vascular systems." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44833.

Full text
Abstract:
As an important endogenous source of reactive oxygen species (ROS), NADPH oxidase 1 (Nox1) has received tremendous attention in the past few decades. It has been identified to play a key role as the initial "kindle," whose activation is crucial for amplifying ROS production through several propagation mechanisms in the vascular system. As a consequence, Nox1 has been implicated in the initiation and genesis of many cardiovascular diseases and has therefore been the subject of detailed investigations. The literature on experimental studies of the Nox1 system is extensive. Numerous investigations have identified essential features of the Nox1 system in vasculature and characterized key components, possible regulatory signals and/or signaling pathways, potential activation mechanisms, a variety of Nox1 stimuli, and its potential physiological and pathophysiological functions. While these experimental studies have greatly enhanced our understanding of the Nox1 system, many open questions remain regarding the overall functionality and dynamic behavior of Nox1 in response to specific stimuli. Such questions include the following. What are the main regulatory and/or activation mechanisms of Nox1 systems in different types of vascular cells? Once Nox1 is activated, how does the system return to its original, unstimulated state, and how will its subunits be recycled? What are the potential disassembly pathways of Nox1? Are these pathways equally important for effectively reutilizing Nox1 subunits? How does Nox1 activity change in response to dynamic signals? Are there generic features or principles within the Nox1 system that permit optimal performance? These types of questions have not been answered by experiments, and they are indeed quite difficult to address with experiments. I demonstrate in this dissertation that one can pose such questions and at least partially answer them with mathematical and computational methods. Two specific cell types, namely endothelial cells (ECs) and vascular smooth muscle cells (VSMCs), are used as "templates" to investigate distinct modes of regulation of Nox1 in different vascular cells. By using a diverse array of modeling methods and computer simulations, this research identifies different types of regulation and their distinct roles in the activation process of Nox1. In the first study, I analyze ECs stimulated by mechanical stimuli, namely shear stresses of different types. The second study uses different analytical and simulation methods to reveal generic features of alternative disassembly mechanisms of Nox1 in VSMCs. This study leads to predictions of the overall dynamic behavior of the Nox1 system in VSMCs as it responds to extracellular stimuli, such as the hormone angiotensin II. The studies and investigations presented here improve our current understanding of the Nox1 system in the vascular system and might help us to develop potential strategies for manipulation and controlling Nox1 activity, which in turn will benefit future experimental and clinical studies.
APA, Harvard, Vancouver, ISO, and other styles
44

Sa, Shibasaki Rui. "Lagrangian Decomposition Methods for Large-Scale Fixed-Charge Capacitated Multicommodity Network Design Problem." Thesis, Université Clermont Auvergne‎ (2017-2020), 2020. http://www.theses.fr/2020CLFAC024.

Full text
Abstract:
Typically present in the logistics and telecommunications domains, the Fixed-Charge Multicommodity Capacitated Network Design Problem remains challenging, especially in large-scale contexts. In this case, the ability to produce good-quality solutions in a reasonable amount of time depends on the availability of efficient algorithms. In that sense, this thesis proposes Lagrangian approaches that are able to provide relatively sharp bounds for large-scale instances of the problem. The efficiency of the methods depends on the algorithm applied to solve the Lagrangian duals, so we choose between two of the most efficient solvers in the literature, the Volume Algorithm and the Bundle Method, and provide a comparison between them. The results showed that the Volume Algorithm is more efficient in the present context, and it was the one kept for further research. A first Lagrangian heuristic was devised to produce good-quality feasible solutions for the problem, obtaining far better results than Cplex on the largest instances. Concerning lower bounds, a Relax-and-Cut algorithm was implemented, embedding sensitivity analysis and constraint scaling, which improved the results. The improvements in the lower bounds reached 11%, but on average they remained under 1%. The Relax-and-Cut algorithm was then included in a Branch-and-Cut scheme to solve the linear programs at each node of the search tree. Moreover, a Feasibility Pump heuristic using the Volume Algorithm as the linear programming solver was implemented to accelerate the search for good feasible solutions in large-scale cases. The results showed that the proposed scheme is competitive with the best algorithms in the literature and provides the best results in large-scale contexts. Finally, a heuristic version of the Branch-and-Cut algorithm based on the Lagrangian Feasibility Pump was tested, providing the best results overall when compared with efficient heuristics in the literature.
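
The thesis relies on Lagrangian relaxation whose dual is solved with the Volume Algorithm, a subgradient-type method. The snippet below is only a minimal sketch, on an invented toy instance, of the mechanism such methods share: dualise a coupling constraint, solve the resulting easy subproblem, and move the multiplier along a subgradient. It is neither the Volume Algorithm itself (which additionally averages primal iterates) nor the network design model of the thesis.

```python
# Toy instance (invented): pick items x_j in {0,1} at minimum cost subject to
# a single coupling constraint sum_j a[j] * x[j] >= b, which is dualised.
c = [4.0, 3.0, 6.0, 5.0]   # item costs
a = [2.0, 1.0, 4.0, 3.0]   # contributions to the coupling constraint
b = 5.0                    # right-hand side of the dualised constraint

def lagrangian_subproblem(lam):
    """For a multiplier lam >= 0, minimise sum_j (c_j - lam*a_j) x_j + lam*b
    over x in {0,1}^n; the optimal value is a valid lower bound."""
    x = [1 if cj - lam * aj < 0 else 0 for cj, aj in zip(c, a)]
    value = sum((cj - lam * aj) * xj for cj, aj, xj in zip(c, a, x)) + lam * b
    return value, x

lam, best_bound = 0.0, float("-inf")
for k in range(1, 51):
    bound, x = lagrangian_subproblem(lam)
    best_bound = max(best_bound, bound)
    g = b - sum(aj * xj for aj, xj in zip(a, x))   # subgradient: constraint violation
    lam = max(0.0, lam + (1.0 / k) * g)            # diminishing step, projected on lam >= 0

print(f"best Lagrangian lower bound after 50 iterations: {best_bound:.3f}")
```
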
APA, Harvard, Vancouver, ISO, and other styles
45

Polakowski, Matthew Ryan. "An Improved Lightweight Micro Scale Vehicle Capable of Aerial and Terrestrial Locomotion." Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1334600182.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Vadlmudi, Tripurasuparna. "A nano-CMOS based universal voltage level converter for multi-VDD SoCs." Thesis, University of North Texas, 2007. https://digital.library.unt.edu/ark:/67531/metadc3602/.

Full text
Abstract:
Power dissipation of integrated circuits is the most demanding issue for very large scale integration (VLSI) design engineers, especially for portable and mobile applications. The use of multiple supply voltage systems, which employ level converters between voltage islands, is one of the most effective ways to reduce power consumption. In this thesis work, a unique level converter known as the universal level converter (ULC), capable of four distinct level-converting operations, is proposed. The schematic and layout of the ULC are built and simulated using CADENCE. The ULC is characterized through three analyses (parametric, power and load), which show that the design reduces average power consumption by about 85-97% and is capable of producing a stable output at voltages as low as 0.45 V, even under varying load conditions.
APA, Harvard, Vancouver, ISO, and other styles
47

Wang, Jialing Yang Xiaojun. "Multi-scale forest landscape pattern characterization." Diss., 2005. http://etd.lib.fsu.edu/theses/available/etd-08142005-151446.

Full text
Abstract:
Thesis (Ph. D.)--Florida State University, 2005.
Advisor: Xiaojun Yang, Florida State University, College of Social Sciences, Dept. of Geography. Title and description from dissertation home page (viewed Jan. 24, 2006). Document formatted into pages; contains xiv, 213 pages. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
48

JONG-HARN, YWI CHI, and 尉遲仲涵. "Design of SDN based Large-Scale Multi-Tenant Data Center Networks." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/04065276713127203383.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Communications Engineering
Academic year 103
In this thesis, we propose an architecture for realizing a reliable large-scale data center network. The network is a cost-effective hybrid SDN-Ethernet network. We take full advantage of the automatic learning capability of Ethernet switches to simplify state management. By employing SDN switches in the core of the network, the system is able to perform route changes to support VM live migration and fast failure recovery. Using the proposed architecture, a single data center can support hundreds of thousands of physical machines (PMs) and up to a million virtual machines (VMs). In addition, more than 64K tenants can share a single data center, and each tenant can individually own up to 4K VLANs in its network. Each tenant is allowed to freely assign IP addresses and VLAN IDs to its VMs. Unlike conventional IP networks, which employ packet broadcasting to handle ARP queries, our system treats ARP packets as control packets: they are processed in the system controller so as to mitigate the impact of ARP broadcasts in a large data center. To enhance reliability, dual SDN controllers manage the network through in-band control channels. The controllers detect the network topology and computing servers automatically, fully supporting plug-and-play operation to reduce configuration errors introduced by humans. We apply MPTCP to provide multiple subflows and enhance the throughput of TCP connections. In this work, a novel congestion-aware routing scheme is proposed for network load balancing. We have conducted experiments and simulations to evaluate the performance of the proposed network. The experimental results show that the system is able to handle heavy connection requests and to resume connections within a short time after VM migrations. Our simulation results also reveal that the proposed congestion-aware routing outperforms ECMP and improves total MPTCP throughput significantly.
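
The abstract contrasts the proposed congestion-aware routing with ECMP. The snippet below is a minimal, hypothetical sketch of the general idea only: the controller places a new flow on the candidate path whose most loaded link has the lowest utilisation, instead of hashing flows onto paths irrespective of load as plain ECMP does. The topology, load values and function names are invented, not taken from the thesis.

```python
# Hypothetical link utilisations (0..1) that a controller could learn from
# switch port counters, and the candidate core paths between two edge switches.
link_load = {
    ("e1", "c1"): 0.70, ("c1", "e2"): 0.65,
    ("e1", "c2"): 0.30, ("c2", "e2"): 0.40,
    ("e1", "c3"): 0.55, ("c3", "e2"): 0.20,
}
candidate_paths = [
    ["e1", "c1", "e2"],
    ["e1", "c2", "e2"],
    ["e1", "c3", "e2"],
]

def bottleneck(path):
    """Congestion metric of a path: utilisation of its most loaded link."""
    return max(link_load[(u, v)] for u, v in zip(path, path[1:]))

def congestion_aware_choice(paths):
    """Select the candidate path with the lowest bottleneck utilisation."""
    return min(paths, key=bottleneck)

best = congestion_aware_choice(candidate_paths)
print("selected path:", " -> ".join(best), "| bottleneck load:", bottleneck(best))
```
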
APA, Harvard, Vancouver, ISO, and other styles
49

"Protein Folding & Dynamics Using Multi-scale Computational Methods." Doctoral diss., 2014. http://hdl.handle.net/2286/R.I.25005.

Full text
Abstract:
This thesis explores a wide array of topics related to the protein folding problem, ranging from the folding mechanism, ab initio structure prediction and protein design to the mechanism of protein functional evolution, using multi-scale approaches. To investigate the role of native topology in the folding mechanism, the native topology is dissected into non-local and local contacts. The number of non-local contacts and the non-local contact order are both negatively correlated with folding rates, suggesting that non-local contacts dominate the barrier-crossing process. Local contact orders, however, show a positive correlation with folding rates, indicating the role of a diffusive search in the denatured basin. Additionally, the folding rate distributions of the E. coli and Yeast proteomes are predicted from native topology. The distribution is fitted well by a diffusion-drift population model and is also directly compared with experimentally measured half-lives. The results indicate that proteome folding kinetics is limited by protein half-life. The crucial role of local contacts in protein folding is further explored through simulations of WW domains using the Zipping and Assembly Method. The correct formation of the N-terminal β-turn turns out to be important for the folding of WW domains. A classification model based on the contact probabilities of five critical local contacts is constructed to predict the foldability of WW domains with 81% accuracy. By introducing mutations that stabilize those critical local contacts, a new protein design approach is developed to re-design unfoldable WW domains and make them foldable. After folding, proteins exhibit inherent conformational dynamics in order to be functional. Using molecular dynamics simulations in conjunction with Perturbation Response Scanning, it is demonstrated that the divergence of functions can occur through the modification of conformational dynamics within an existing fold for β-lactamases and GFP-like proteins: i) the modern TEM-1 lactamase shows a comparatively rigid active-site region, likely reflecting adaptation for efficient degradation of a specific substrate, while the resurrected ancient lactamases show enhanced active-site flexibility, which likely allows the binding and subsequent degradation of different antibiotic molecules; ii) the chromophore and attached peptides of the photoconversion-competent GFP-like protein exhibit higher flexibility than those of the photoconversion-incompetent one, consistent with the evolution of photoconversion capacity.
Dissertation/Thesis
Ph.D. Physics 2014
APA, Harvard, Vancouver, ISO, and other styles
50

Chang, Fu-Hsing, and 張富雄. "Development of Innovative Product Design Process Using Patent Multi-Scale Analysis and TRIZ Methodology." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/97arf3.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Master's Program, Department of Industrial Engineering and Management
Academic year 100
In recent years, competition between enterprises has become more intense and consumer preferences have changed dramatically. Rapid and diverse changes in product styles have shortened today's product life cycles. The number of enterprises concerned with intellectual property rights and with the professional techniques embedded in their production processes is increasing year by year, so it is important to avoid patent issues during product development. In the past, searching and reading patents and evaluating how to improve products with patent-avoiding designs took a great deal of manpower and resources. In this study, we developed an innovative product design process that integrates patent multi-scale analysis and the TRIZ methodology. First, we sorted out the key patents by their International Patent Classification codes, and then identified the patents similar to our target patents through multi-scale analysis. On top of this integrated approach, we used text mining techniques and combined a back-propagation neural network with the user's problem description to provide design directions that avoid patent issues via TRIZ methods. Finally, the study integrates these steps into a systematic process and demonstrates its application to innovation in LED design patents.
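
The process described in the abstract starts from text mining over patents grouped by IPC codes. As a hedged, minimal sketch of that retrieval step only, the snippet below ranks candidate patents by bag-of-words cosine similarity to a target patent; the patent texts are invented, and the thesis itself may rely on a different text representation and on the back-propagation neural network mentioned above for the later classification step.

```python
import math
import re
from collections import Counter

# Invented patent texts: one target patent and two candidates to be ranked
# by textual similarity before any deeper (e.g. neural-network) analysis.
target = "LED lamp with heat sink and driver circuit for dimming control"
candidates = {
    "P1": "heat sink design for a high power LED lamp with driver circuit",
    "P2": "bicycle frame with adjustable seat post and damping element",
}

def vectorise(text):
    """Bag-of-words term-frequency vector of a lower-cased text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(u, v):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(u[t] * v[t] for t in u.keys() & v.keys())
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

target_vec = vectorise(target)
ranked = sorted(candidates.items(), key=lambda kv: cosine(target_vec, vectorise(kv[1])), reverse=True)
for patent_id, text in ranked:
    print(patent_id, round(cosine(target_vec, vectorise(text)), 3))
```
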
APA, Harvard, Vancouver, ISO, and other styles