Dissertations / Theses on the topic 'Tolerance optimization'


Consult the top 50 dissertations / theses for your research on the topic 'Tolerance optimization.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Shehabi, Murtaza Kaium. "Cost tolerance optimization for piecewise continuous cost tolerance functions." Ohio: Ohio University, 2002. http://www.ohiolink.edu/etd/view.cgi?ohiou1174937670.

2

Yue, Junping. "A computerized optimization method for tolerance control." Thesis, Virginia Tech, 1993. http://scholar.lib.vt.edu/theses/available/etd-07112009-040315/.

3

Jrad, Mohamed. "Multidisciplinary Optimization and Damage Tolerance of Stiffened Structures." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/52276.

Abstract:
The structural optimization of a cantilever aircraft wing with curvilinear spars, ribs, and stiffeners is described. The design concept of reinforcing the wing structure with curvilinear stiffening members has been explored owing to the development of novel manufacturing technologies such as electron beam freeform fabrication (EBF3). For the optimization of a complex wing, a common strategy is to divide the procedure into two subsystems: a global wing optimization, which optimizes the geometry of the spars, ribs, and wing skins, and a local panel optimization, which optimizes the design variables of the local panels bordered by spars and ribs. Stiffeners are placed on the local panels to increase their stiffness and buckling resistance, and the panel thickness and the size and shape of the stiffeners are optimized to minimize the structural weight. The geometry of the spars and ribs greatly influences the design of the stiffened panels. During the local panel optimization, the response of the global model is applied as a displacement boundary condition on the panel edges, using the so-called "Global-Local Approach". The aircraft design is characterized by multiple disciplines: structures, aeroelasticity, and buckling. Particle swarm optimization is used in the integrated global/local optimization to optimize the SpaRibs. Because the interaction between the global wing optimization and the local panel optimization is computationally expensive, a parallel computing framework has been developed in Python to reduce the CPU time; the license cycle-check method and the memory self-adjustment method are two approaches applied within this framework to make better use of resources, by reducing license and memory limitations, and to make the code robust. The integrated global-local optimization approach has been applied to the subsonic NASA Common Research Model (CRM) wing, demonstrating that the methodology scales to medium-fidelity FEM analysis. Both the global wing design variables and the local panel design variables are optimized to minimize the wing weight at an acceptable computational cost: the structural weight of the wing was reduced by 40%, and the parallel implementation reduced the CPU time by 89%. The aforementioned Global-Local Approach is then investigated and applied to a composite panel with a crack at its center. Because of the heterogeneity of composite laminates, their accurate analysis requires considerable computation time and storage, and in the presence of structural discontinuities such as cracks, delaminations, and cutouts the computational complexity increases significantly. A possible alternative for reducing this complexity is global-local analysis, which involves an approximate analysis of the whole structure followed by a detailed analysis of a significantly smaller region of interest. We investigate the performance of the finite-element-based global-local scheme by comparing it to the traditional finite element method: we conduct a 2D structural analysis of a composite square plate, with a thin rectangular notch at its center, subjected to a uniform transverse pressure, using the commercial software ABAQUS. We show that the presence of the thin notch affects only the local response of the structure and that the size of the affected area depends on the notch length; we also investigate the effect of the notch shape on the response of the structure.
Stiffeners attached to composite panels may significantly increase the overall buckling load of the resulting stiffened structure. A buckling analysis of a composite panel with attached longitudinal stiffeners under compressive loads is performed using the Ritz method with trigonometric functions, and the results are compared with ABAQUS FEA results for different shell elements. The cases of a composite panel with one, two, and three stiffeners are investigated, along with the effect of the distance between the stiffeners on the buckling load and the variation of the buckling load and buckling modes with stiffener height. It is shown that there is an optimum stiffener height beyond which the structural response of the stiffened panel does not improve and the buckling load does not increase; furthermore, there exist critical values of stiffener height at which the buckling mode of the structure changes. Next, buckling analyses of a composite panel with two straight stiffeners and a crack at the center, and of a composite panel with curvilinear stiffeners and a crack at the center, are performed in ABAQUS; the results show that panels with a larger crack have a reduced buckling load, and that the buckling load decreases slightly when higher-order 2D shell elements are used. A damage tolerance framework, EBF3PanelOpt, has been developed to design and analyze curvilinearly stiffened panels. The framework is written in the scripting language Python and interacts with the commercial software MSC Patran (for geometry and mesh creation), MSC Nastran (for finite element analysis), and MSC Marc (for damage tolerance analysis). The crack location is set to the location of the maximum major principal stress, and its orientation is set normal to the major principal axis. The effective stress intensity factor is calculated using the virtual crack closure technique and compared with the fracture toughness of the material to decide whether the crack will propagate; the ratio of these two quantities is used as a constraint, along with the buckling factor, the Kreisselmeier-Steinhauser criterion, and the crippling factor. The EBF3PanelOpt framework is integrated within a two-step particle swarm optimization (PSO) that minimizes the weight of the panel while satisfying the aforementioned constraints, using all the shape and thickness parameters as design variables. The PSO result is then used as an initial guess for a gradient-based optimization (GBO) over the thickness parameters only, performed with the commercial software VisualDOC.
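As an illustration of the two-step optimization the abstract describes, a particle swarm search whose best point seeds a gradient-based refinement of the thickness variable, here is a minimal sketch; the weight and buckling models, bounds, and coefficients are invented placeholders, not taken from the thesis.

```python
# Hypothetical two-step scheme: PSO over all design variables, then a
# gradient-based refinement of the thickness only. Toy models throughout.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def weight(x):                       # toy panel weight: thickness t, stiffener height h
    t, h = x
    return 2.0 * t + 0.8 * h

def buckling_factor(x):              # toy buckling measure, must stay >= 1
    t, h = x
    return 4.0 * t**2 + 1.5 * h - 0.5

def penalized(x):                    # constraint folded in as a quadratic penalty
    return weight(x) + 100.0 * max(0.0, 1.0 - buckling_factor(x))**2

# Step 1: particle swarm over both variables.
pos = rng.uniform(0.05, 1.0, size=(30, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([penalized(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.05, 1.0)
    f = np.array([penalized(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

# Step 2: gradient-based refinement of the thickness, stiffener height frozen.
h_fixed = gbest[1]
res = minimize(lambda t: penalized(np.array([t[0], h_fixed])),
               x0=[gbest[0]], bounds=[(0.05, 1.0)], method="L-BFGS-B")
print("PSO seed:", gbest, "refined thickness:", res.x[0])
```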
4

Arenbeck, Henry. "Efficient Reliability-Based Tolerance Optimization for Multibody Systems." Thesis, The University of Arizona, 2007. http://hdl.handle.net/10150/190380.

5

Barraja, Mathieu. "Tolerance Allocation for Kinematic Systems." UKnowledge, 2004. http://uknowledge.uky.edu/gradschool_theses/315.

Abstract:
A method for allocating tolerances to exactly constrained assemblies is developed. The procedure is formulated as an optimization subject to constraints: the objective is to minimize the manufacturing cost of the assembly while maintaining an acceptable level of performance. This method is particularly interesting for exactly constrained components that are to be mass-produced. The thesis presents the concepts used to develop the method, describing exact constraint theory, manufacturing variations, optimization concepts, and the related mathematical tools, and then explains how these topics relate in order to perform a tolerance allocation. The developed method is applied to two relevant exactly constrained examples: multi-fiber connectors and kinematic couplings. In each case, a mathematical model of the system and its corresponding manufacturing variations is established; an optimization procedure then uses this model to minimize the manufacturing cost of the system while respecting its functional requirements. The results of the tolerance allocation are verified with Monte Carlo simulation.
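The kind of formulation the abstract outlines can be sketched as follows; the reciprocal cost-tolerance model, the RSS stack-up, the coefficients, and the 0.1 mm requirement are illustrative assumptions rather than the thesis's data, with a Monte Carlo check in the same spirit as the verification step described above.

```python
# Illustrative tolerance allocation: minimize a reciprocal cost model
# subject to a stack-up limit, then verify by Monte Carlo simulation.
import numpy as np
from scipy.optimize import minimize

a = np.array([1.0, 2.0, 1.5])            # hypothetical cost coefficients per feature

def cost(t):
    return np.sum(a / t)                  # cost rises as tolerances tighten

def stackup(t):                           # RSS stack-up, t read as +/- 3-sigma bands
    return 3.0 * np.sqrt(np.sum((t / 3.0)**2))

req = 0.1                                 # functional requirement on the assembly (mm)
res = minimize(cost, x0=[0.05, 0.05, 0.05],
               constraints=[{"type": "ineq", "fun": lambda t: req - stackup(t)}],
               bounds=[(1e-4, 0.2)] * 3)
t_opt = res.x

# Monte Carlo verification of the allocated tolerances.
rng = np.random.default_rng(1)
samples = rng.normal(0.0, t_opt / 3.0, size=(100_000, 3)).sum(axis=1)
print("tolerances:", t_opt,
      "fraction within spec:", np.mean(np.abs(samples) <= req))
```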
6

Chen, Jack Szu-Shen. "Distortion-free tolerance-based layer setup optimization for layered manufacturing." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/27268.

Abstract:
Layered manufacturing has emerged as a highly versatile process for producing complex parts that conventional manufacturing processes are either too costly to make or simply cannot produce. However, this relatively new manufacturing process is characterized by a few outstanding issues that have kept it from being widely applied. The most detrimental is the lack of a reliable method, at the computational geometry level, for predicting the resulting part error. Layer setup, with regard to the contour profile and thickness of each layer, is often left to what the operator deems best. As a result, the accuracy of the manufactured part is not guaranteed and the build time is not easily optimized; even with a scheme to predict the resulting finished part, the optimal layer setup cannot be determined. Current practice generates the layer contours by simply intersecting a set of parallel planes with the computer model of the design part. The volumetric geometry of each layer is then constructed by extruding the layer contour by the layer thickness in the part-building direction. This practice often leads to distorted part geometry due to the unidirectional bias of the extruded layers, and excessive layers are often employed to alleviate the distortion. This form of distortion, referred to as systematic distortion, needs to be removed during layer setup. This thesis proposes methods to first remove the systematic distortion and then determine the optimal layer setup based on a tolerance measure. A scheme to emulate the final polished part geometry is also presented. Case studies are performed to validate the proposed method. The proposed scheme is shown to significantly reduce the number of layers needed to construct an LM part while satisfying a user-specified error bound. Accuracy is thus better guaranteed through explicit error measurement and control, and efficiency is greatly increased.
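A hedged sketch of tolerance-bounded layer setup in the spirit of this abstract (the part profile, error bound, and thickness limits are all invented): each layer takes the largest thickness whose profile deviation stays within the user-specified bound.

```python
# Adaptive layer setup: thin each layer until its chordal deviation
# from the part profile is within the error bound. Toy profile.
import numpy as np

def profile(z):                        # part radius as a function of height
    return 10.0 + 3.0 * np.sin(z / 5.0)

def deviation(z, t):                   # worst profile change across one layer
    zs = np.linspace(z, z + t, 20)
    r = profile(zs)
    return float(r.max() - r.min())

bound, height = 0.05, 30.0             # user error bound and part height
layers, z = [], 0.0
while z < height:
    t = 2.0                            # start from the maximum thickness
    while deviation(z, t) > bound and t > 0.05:
        t *= 0.8                       # thin the layer until within tolerance
    layers.append(t)
    z += t
print(len(layers), "layers; thinnest:", min(layers))
```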
7

Burlyaev, Dmitry. "Design, Optimization, and Formal Verification of Circuit Fault-Tolerance Techniques." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM058/document.

Abstract:
Technology shrinking and voltage scaling increase the risk of fault occurrences in digital circuits. To address this challenge, engineers use fault-tolerance techniques to mask or, at least, to detect faults. These techniques are especially needed in safety-critical domains (e.g., aerospace, medical, nuclear), where ensuring circuit functionality and fault tolerance is crucial. However, the verification of functional and fault-tolerance properties is a complex problem that cannot be solved with simulation-based methodologies due to the need to check a huge number of executions and fault-occurrence scenarios. The optimization of the overheads imposed by fault-tolerance techniques also requires proof that the circuit keeps its fault-tolerance properties after the optimization. In this work, we propose a verification-based optimization of existing fault-tolerance techniques, as well as the design of new techniques and their formal verification using theorem proving. We first investigate how some majority voters can be removed from triple-modular redundant (TMR) circuits without violating their fault-tolerance properties. The developed methodology clarifies how to take into account the circuit's native error-masking capabilities, which may exist due to the structure of the combinational part or due to the way the circuit is used and communicates with the surrounding device. Second, we propose a family of time-redundant fault-tolerance techniques as automatic circuit transformations. They require less hardware than TMR alternatives and can be easily integrated in EDA tools. The transformations are based on the novel idea of dynamic time redundancy, which allows the redundancy level to be changed "on-the-fly" without interrupting the computation. Therefore, time redundancy can be used only in critical situations (e.g., above the Earth's poles, where the radiation level is higher), during the processing of crucial data (e.g., the encryption of selected data), or during critical processes (e.g., a satellite computer reboot). Third, merging dynamic time redundancy with a micro-checkpointing mechanism, we have created a double-time-redundancy transformation capable of masking transient faults. Our technique makes the recovery procedure transparent, and the circuit's input/output behavior remains unchanged even under faults. Due to the complexity of this method and the need to provide full assurance of its fault-tolerance capabilities, we have formally certified the technique using the Coq proof assistant. The developed proof methodology can be applied to certify other fault-tolerance techniques implemented through circuit transformations at the netlist level.
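For reference, the TMR baseline the abstract starts from can be modeled in a few lines; the circuit function below is an arbitrary stand-in, and the thesis's voter-removal analysis is of course far more involved.

```python
# Minimal model of TMR masking: three copies of a combinational
# function feed a bitwise majority voter; a single upset is masked.
def circuit(x):                     # stand-in combinational function
    return (x ^ (x >> 1)) & 0xF

def voter(a, b, c):                 # bitwise majority of three copies
    return (a & b) | (a & c) | (b & c)

x = 0b1011
outs = [circuit(x) for _ in range(3)]
outs[1] ^= 0b0100                   # inject a single-event upset in one copy
assert voter(*outs) == circuit(x)   # the fault is masked by the voter
print(format(voter(*outs), "04b"))
```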
8

Morales, Reyes Alicia. "Fault tolerant and dynamic evolutionary optimization engines." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/4882.

Abstract:
Mimicking natural evolution to solve hard optimization problems has played an important role in the artificial intelligence arena. Such techniques are broadly classified as Evolutionary Algorithms (EAs) and have been investigated for around four decades, during which important contributions and advances have been made. One evolutionary technique that has been widely investigated is the Genetic Algorithm (GA). GAs are stochastic search techniques that follow the Darwinian principle of evolution, and their application to hard optimization problems has been very successful: multi-dimensional problems with difficult search spaces exhibiting multi-modality, epistasis, non-regularity, deceptiveness, and so on have all been effectively tackled by GAs. In this research, a competitive form of GAs known as fine-grained or cellular GAs (cGAs) is investigated because of its suitability for System-on-Chip (SoC) implementation when tackling real-time problems. Cellular GAs have also attracted the attention of researchers due to their high performance, ease of implementation, and massive parallelism. In addition, cGAs inherently possess a number of structural configuration parameters which make them capable of sustaining diversity during evolution and therefore of promoting an adequate balance between the exploitative and explorative stages of the search. The fast technological development of Integrated Circuits (ICs) has allowed a considerable increase in compactness and therefore in density; it is nowadays possible to have millions of gates and transistors in very small silicon areas. Operational complexity has also significantly increased, and other setbacks have consequently emerged, such as the presence of faults that commonly appear in the form of single or multiple bit flips. Harsh environmental or time-dependent operating conditions can trigger faults in registers and memory allocations due to induced radiation, electromigration, and dielectric breakdown. These kinds of faults are known as Single Event Effects (SEEs). Research has shown that an effective way of dealing with SEEs consists of a combination of hardware and software mitigation techniques. Permanent faults known as Single Hard Errors (SHEs) and temporary faults known as Single Event Upsets (SEUs) are common SEEs. This thesis investigates the inherent abilities of cellular GAs to deal with SHEs and SEUs at the algorithmic level. A hard real-time application is targeted: calculating the attitude parameters for navigation in vehicles using Global Positioning System (GPS) technology. Faulty critical data, which can cause a system's functionality to fail, are evaluated. The proposed mitigation techniques show the ability of cGAs to deal with up to 40% stuck-at-zero and 30% stuck-at-one faults in chromosome bits and fitness score cells. Due to the non-deterministic nature of GAs, dynamic on-the-fly algorithmic and parametric configuration has also attracted the attention of researchers. In this respect, the structural properties of cellular GAs provide a valuable means of influencing their selection pressure, which helps to maintain an adequate exploitation-exploration tradeoff, either from a purely topological perspective or through genetic operations that also make use of the structural characteristics of cGAs. These properties, unique to cGAs, are further investigated in this thesis through a set of middle- to high-difficulty benchmark problems.
Experimental results show that the proposed dynamic techniques enhance the overall performance of cGAs in most benchmark problems. Finally, since cGAs are defined on a structure, their dimensionality is another line of investigation. 1D and 2D structures have normally been used to test cGAs at the algorithm and implementation levels. Although 3D-cGAs are an immediate extension, not enough attention has been paid to them, and so a comparative study on the dimensionality of cGAs is carried out. Having shorter radii, 3D-cGAs present a faster dissemination of solutions and have denser neighbourhoods. Empirical results reported in this thesis show that 3D-cGAs achieve better efficiency when solving multi-modal and epistatic problems. In the future, the performance improvements of 3D-cGAs will merge with the latest demonstrated benefits of 3D integration technology, such as reductions in routing length, interconnection delays, and power consumption.
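A minimal sketch of the cellular-GA structure discussed above, assuming a toroidal 2D grid, von Neumann neighborhoods, and the OneMax toy fitness (all placeholders, not the thesis's GPS application):

```python
# Cellular GA: individuals live on a torus and mate only within a small
# neighborhood, which is what softens the selection pressure of cGAs.
import numpy as np

rng = np.random.default_rng(2)
GRID, NBITS = 8, 32
pop = rng.integers(0, 2, size=(GRID, GRID, NBITS))

def fitness(ind):                            # OneMax: count of ones
    return ind.sum()

for _ in range(100):
    new = pop.copy()
    for i in range(GRID):
        for j in range(GRID):
            # von Neumann neighborhood on a torus
            nbrs = [pop[(i - 1) % GRID, j], pop[(i + 1) % GRID, j],
                    pop[i, (j - 1) % GRID], pop[i, (j + 1) % GRID]]
            mate = max(nbrs, key=fitness)              # local selection
            cut = rng.integers(1, NBITS)               # one-point crossover
            child = np.concatenate([pop[i, j][:cut], mate[cut:]])
            flip = rng.random(NBITS) < 1.0 / NBITS     # bit-flip mutation
            child = np.where(flip, 1 - child, child)
            if fitness(child) >= fitness(pop[i, j]):   # replace if no worse
                new[i, j] = child
    pop = new
print("best fitness:",
      max(fitness(pop[i, j]) for i in range(GRID) for j in range(GRID)))
```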
9

Kansara, Sharad Mahendra. "An Efficient Sequential Integer Optimization Technique for Process Planning and Tolerance Allocation." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1069798466.

10

Izosimov, Viacheslav. "Scheduling and Optimization of Fault-Tolerant Embedded Systems." Licentiate thesis, Linköping University, ESLAB - Embedded Systems Laboratory, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7654.

Abstract:

Safety-critical applications have to function correctly even in the presence of faults. This thesis deals with techniques for tolerating the effects of transient and intermittent faults. Re-execution, software replication, and rollback recovery with checkpointing are used to provide the required level of fault tolerance. These techniques are considered in the context of distributed real-time systems with non-preemptive static cyclic scheduling.

Safety-critical applications have strict time and cost constraints, which means that faults not only have to be tolerated but the constraints must also be satisfied. Hence, efficient system design approaches that take fault tolerance into consideration are required.

The thesis proposes several design optimization strategies and scheduling techniques that take fault tolerance into account. The design optimization tasks addressed include, among others, process mapping, fault tolerance policy assignment, and checkpoint distribution.

Dedicated scheduling techniques and mapping optimization strategies are also proposed to handle customized transparency requirements associated with processes and messages. By providing fault containment, transparency can potentially improve the testability and debuggability of fault-tolerant applications.

The efficiency of the proposed scheduling techniques and design optimization strategies is evaluated with extensive experiments conducted on a number of synthetic applications and a real-life example. The experimental results show that considering fault tolerance during system-level design optimization is essential when designing cost-effective fault-tolerant embedded systems.
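As a worked instance of the checkpoint-distribution trade-off mentioned above, the following sketch uses the standard worst-case schedule-length model for a job with checkpoint and recovery overheads under at most k transient faults; the numbers are illustrative, not from the thesis.

```python
# Worst-case schedule length for a job of length C with n checkpoints,
# checkpoint overhead O, recovery overhead R, and at most k faults:
# each fault re-executes at most one segment plus recovery.
import math

def worst_case_length(C, O, R, k, n):
    return C + n * O + k * (C / n + R)

C, O, R, k = 100.0, 2.0, 1.0, 2
n_opt = max(1, round(math.sqrt(k * C / O)))   # minimizer of the model above
for n in (1, n_opt, 20):
    print(n, worst_case_length(C, O, R, k, n))
```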

11

Wang, Pei. "Simultaneously solving process selection, machining parameter optimization and tolerance design problems: A bi-criterion approach." Thesis, University of Ottawa (Canada), 2003. http://hdl.handle.net/10393/26544.

Abstract:
The selection of the right process, the use of optimal machining parameters, and the specification of the best tolerance parameters have been recognized by industry as key issues in ensuring product quality and reducing production cost. The three issues have thus attracted a great deal of attention over the last several decades. However, they are often addressed separately in existing publications. In reality, the three issues are closely interrelated, and analyzing them in isolation will inevitably lead to inconsistent, infeasible, or conflicting decisions. To avoid these drawbacks, an integrated approach is proposed to jointly solve the process selection, machining parameter optimization, and tolerance design problems. The integrated problem is formulated as a bi-criterion model to handle both tangible and intangible costs. The model is solved using a modified Chebyshev goal programming method to achieve a preferred compromise between the two conflicting criteria. The application of the proposed bi-criterion approach is first demonstrated on the single-component, single-part-feature case. The integrated approach is then extended to the multiple-components, multiple-part-features case (the assembly case). Examples are provided to illustrate the application of the two models and the solution procedure. The results show that decisions on process selection, machining parameter selection, and tolerance design can be made simultaneously using the models.
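A minimal sketch of a Chebyshev goal-programming scalarization of the kind the abstract applies; the two cost criteria, goals, and weights below are invented placeholders, not the thesis's process models.

```python
# Chebyshev goal programming: minimize the largest weighted deviation
# of the two (conflicting) criteria from their aspiration levels.
import numpy as np
from scipy.optimize import minimize

def tangible(x):                     # e.g., machining plus tolerance cost
    return (x[0] - 1.0)**2 + 0.5 * x[1]

def intangible(x):                   # e.g., a quality-loss surrogate
    return 2.0 / (x[1] + 0.1) + 0.2 * x[0]

goals = np.array([0.2, 2.0])         # aspiration levels for each criterion
w = np.array([1.0, 1.0])             # relative weights

def chebyshev(x):
    devs = w * np.maximum(0.0, np.array([tangible(x), intangible(x)]) - goals)
    return devs.max()                # min-max over one-sided deviations

res = minimize(chebyshev, x0=[0.5, 0.5], bounds=[(0.0, 2.0)] * 2,
               method="Nelder-Mead")
print(res.x, tangible(res.x), intangible(res.x))
```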
12

Diril, Abdulkadir Utku. "Circuit Level Techniques for Power and Reliability Optimization of CMOS Logic." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6929.

Abstract:
Technology scaling trends lead to shrinking of individual elements like transistors and wires in digital systems. The main driving force behind this is cutting the cost of systems while filling them with extra functionality; this is why a 3 GHz Intel processor is now priced lower than a 50 MHz processor was 10 years ago. As in most cases, this comes at a price: a complex design process and problems that stem from the reduction in physical dimensions. As transistors became smaller and systems became faster, issues like power consumption, signal integrity, soft error tolerance, and testing became serious challenges, and there is an increasing demand for CAD tools in the design flow to address these issues at every step of the design process. The first part of this research investigates circuit-level techniques to reduce power consumption in digital systems. In the second part, improving the soft error tolerance of digital systems is treated as a trade-off problem between power and reliability, and a power-aware dynamic soft error tolerance control strategy is developed. The objective of this research is to provide CAD tools and circuit design techniques to optimize power consumption and to increase the soft error tolerance of digital circuits. Multiple supply and threshold voltages are used to reduce power consumption, and variable supply and threshold voltages are used together with variable capacitances to develop a dynamic soft error tolerance control scheme.
13

Rajagopalan, Mohan. "Optimizing System Performance and Dependability Using Compiler Techniques." Diss., Tucson, Arizona: University of Arizona, 2006. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1439%5F1%5Fm.pdf&type=application/pdf.

14

Lima, Alice Medeiros de. "Nonlinear constrained optimization with flexible tolerance method: improvement and application in systems synthesis of mass integration." Universidade Federal de São Carlos, 2015. https://repositorio.ufscar.br/handle/ufscar/3967.

Abstract:
This work focuses on constrained nonlinear optimization using the Flexible Tolerance Method (FTM) and on its application to the synthesis of mass integration systems. Mass integration is a technique that allows a global understanding of the mass flow within a process and employs that knowledge to identify performance improvements and to optimize the generation and mapping of species throughout the process. Mass integration is based on the fundamental principles of chemical engineering combined with system analysis using graphical and optimization tools. In this context, this direct optimization method was used as the basis for improvements that make its application to process synthesis problems, especially mass integration, possible. The Flexible Tolerance Method is a direct optimization method with some notable advantages: simplicity and the ability to handle equality and inequality constraints without employing derivative calculations. The method uses two searches to satisfy the feasibility constraint. The external search is a variation of the Nelder-Mead method (the Flexible Polyhedron Method, FPM) and minimizes the objective function. The internal search minimizes a nonnegative function formed from all equality and/or inequality constraints of the problem, and it can be performed by any unconstrained nonlinear optimization method. In this work, the Flexible Tolerance Method was hybridized with different unconstrained methods for the inner search: BFGS (the Broyden-Fletcher-Goldfarb-Shanno method) and modified Powell. The stochastic Particle Swarm Optimization (PSO) method was also employed to initialize and generate a feasible starting point for the sequential application of the deterministic method (FTM and its modifications). Other modifications tested were the scaling of variables, the use of Nelder-Mead adaptive parameters, and the addition of a barrier. The algorithms proposed in this work were applied to a benchmark set of constrained nonlinear problems comprising real-world optimization problems. The best-performing codes were the Modified Flexible Tolerance Method Scaled (MFTMS) and the hybrid FTMS-PSO (the Flexible Tolerance Method with scaling of variables, hybridized with PSO). These codes were successfully applied to the solution of mass integration problems. The results demonstrate the capacity of simple, direct methods to deal with complex optimization problems such as mass integration. Additionally, a new mass integration problem proposed in this work, the mass integration of a 1G, 2G, and 3G sugarcane biorefinery, was successfully solved with the proposed methods (MFTMS and FTMS-PSO). The first generation (1G) includes ethanol production from sugarcane juice and the production of steam and electricity through cogeneration; the second generation (2G) includes ethanol production from lignocellulosic biomass via the biochemical route; and the third generation (3G) includes the use of algae for the production of biofuels (ethanol and biodiesel). The findings of this case study indicate an economically viable way of achieving substantial advances in water consumption and pollution reduction.
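A compact sketch of the flexible-tolerance idea described above, assuming an illustrative two-variable problem: an outer Nelder-Mead search on the objective alternates with an inner unconstrained minimization of the constraint-violation function whenever a shrinking tolerance is exceeded.

```python
# Flexible-tolerance sketch: outer search on f, inner search on the
# violation T, with a tolerance phi that tightens as the search ages.
import numpy as np
from scipy.optimize import minimize

def f(x):                                 # objective to minimize
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

def T(x):                                 # nonnegative total constraint violation
    h = x[0]**2 + x[1]**2 - 2.0           # equality constraint h(x) = 0
    g = min(0.0, x[1])                    # inequality constraint x[1] >= 0
    return float(np.hypot(h, g))

def flexible_tolerance(x0, phi0=1.0, iters=60):
    x, phi = np.asarray(x0, float), phi0
    for _ in range(iters):
        # outer search: a short Nelder-Mead burst on the objective
        x = minimize(f, x, method="Nelder-Mead", options={"maxiter": 5}).x
        if T(x) > phi:                    # tolerance exceeded:
            x = minimize(T, x).x          # inner search restores near-feasibility
        phi *= 0.8                        # tighten the flexible tolerance
    return x

x = flexible_tolerance([0.0, 0.5])
print(x, "f =", f(x), "violation =", T(x))
```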
15

Petruccioli, Andrea. "Development of a Computer-Based methodology for tolerance selection and optimization applied to the automotive sector." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10322/1/PhD%20Thesis%20Andrea%20Petruccioli.pdf.

Abstract:
The design optimization of industrial products has always been an essential activity for improving product quality while reducing time-to-market and production costs. Although cost management is very complex and spans all phases of the product life cycle, the control of geometrical and dimensional variations, known as Dimensional Management (DM), allows compliance with product and process requirements. Tolerance-cost optimization thus becomes the main practice for the effective application of Design for Tolerancing (DfT) and Design to Cost (DtC) approaches, by connecting product tolerances with the associated manufacturing costs. However, despite the growing interest in this topic, profitable industrial application of these techniques is hampered by their complexity: the definition of a systematic framework is the key element for improving design optimization, enhancing the concurrent use of Computer-Aided tools and Model-Based Definition (MBD) practices. This doctoral research aims to define and develop an integrated methodology for product/process design optimization that better exploits the capabilities of advanced simulations and tools. By implementing predictive models and multi-disciplinary optimization, a Computer-Aided Integrated framework for tolerance-cost optimization is proposed that integrates the DfT and DtC approaches and applies them directly to the design of automotive components. Several case studies have been considered, with a final application of the integrated framework to a high-performance V12 engine assembly, achieving both the functional targets and cost reduction. From a scientific point of view, the proposed methodology improves the tolerance-cost optimization of industrial components: the integration of theoretical approaches and Computer-Aided tools makes it possible to analyse the influence of tolerances on both product performance and manufacturing costs. The case studies proved the methodology suitable for application in the industrial field and identified further areas for improvement and refinement.
16

Islam, Ziaul. "A design of experiment approach to tolerance allocation." Ohio: Ohio University, 1995. http://www.ohiolink.edu/etd/view.cgi?ohiou1179428292.

17

Moneghan, Matthew John. "Microstructural Deformation Mechanisms and Optimization of Selectively Laser Melted 316L Steel." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/104170.

Abstract:
In this work, a novel approach is used to investigate deformation mechanisms at the microstructural level in 3D-printed alloys. The complex in-situ heat treatments during 3D printing leave a unique and complicated microstructure in as-built 3D-printed metals, particularly alloys. The microstructure is a hierarchical stacking of interconnected geometrical shapes, namely meltpools, grains, and cells, connected to each other by boundaries that may have different element compositions, and consequently different material properties, compared to the interior of each geometrical unit. Deformation mechanisms in this microstructure remain largely unexplored, mainly because of the challenges of performing experiments at the micrometer length scale. In this work, we establish an image-processing framework that directly converts SEM images of the microstructure of 3D-printed 316L stainless steel alloys into CAD models. The model of the complicated microstructure is then scaled up, and the scaled model is 3D printed using polymeric materials. For 3D printing these samples, two polymers with contrasting mechanical properties are used; the distribution of the two polymers mimics the arrangement of soft and stiff regions in the microstructure of 3D-printed alloys. These representative samples are subjected to mechanical loads, and digital image correlation is used to investigate the deformation mechanisms, particularly the delocalization of stress concentration and crack propagation, at the microstructural level of 3D-printed metals. Besides the experiments, computational modeling using the finite element method is performed to study the same deformation mechanisms in the microstructure of 3D-printed 316L stainless steel. Our results show that the hierarchical arrangement of stiff and soft phases in 3D-printed alloys delocalizes stress concentration and has the potential to produce microstructures with significantly improved damage tolerance.
Many researchers have studied the impact of laser parameters on the bulk material properties of SLM-printed parts; few, if any, have studied how these parts break at the microstructural level. In this work we show how SLM-printed parts with complex microstructures, including grains, meltpools, and cells, deform and break. The cellular network that occurs in some SLM-printed parts leads to a multi-material hierarchical structure, with a stiff network of thin boundaries and a bulk "matrix" of soft cell material. This yields properties similar to those of some composites, whereby the stiff network of cell boundaries leads to increased damage tolerance. We show both computationally, through finite element analysis, and experimentally, through multi-material 3D fabrication, that the microstructure leads to increased crack length at failure, as well as lower toughness and strength loss in the event of a crack. Essentially, the complex formation of these parts (high heating and cooling rates from laser melting) produces a microstructure beneficial to damage tolerance that has not been studied from this perspective before.
18

Nielsen, Mark. "Design of aerospace laminates for multi-axis loading and damage tolerance." Thesis, University of Bath, 2018. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.760971.

Abstract:
Acknowledging the goal of reduced aircraft weight, there is a need to improve on the conservative design techniques used in industry. Minimization of laminate in-plane elastic energy is used as an in-plane performance marker to assess the weight-saving potential of new design techniques. MATLAB optimizations using a genetic algorithm were used to find the optimal laminate variables for minimum in-plane elastic energy and/or damage tolerance for all possible loadings. The use of non-standard angles was able to offer equivalent, if not better, in-plane performance than standard angles, and non-standard angles are shown to be useful for improving ease of manufacture. Any standard-angle laminate stiffness was shown to be matchable by a range of designs using two non-standard ply angles; this non-uniqueness of designs was explored. Balancing plus and minus plies about the principal loading axes, instead of the manufacturing axes, was shown to offer considerable weight-saving potential, as the stiffness is better aligned to the load. Designing directly for an uncertain design load showed little benefit over the 10% ply-percentage rule in maintaining in-plane performance, suggesting the current rule does a sufficient job of keeping laminate performance robust; the technique is seen as useful for non-standard-angle design, which lacks an equivalent 10% rule. The current use of conservative damage tolerance strain limits for design reveals the need for more accurate prediction of damage propagation. Damage tolerance modelling was carried out using fracture mechanics for multi-axial loading, considering the full 2D strain energy and improving on current uni-axial models. The non-conservativeness of the model was shown to arise from assumptions of zero post-buckled stiffness. Preliminary work on conservative multi-axial damage tolerance design, independent of thickness, is yet to be confirmed by experiments.
19

Guenoun, Pierre. "Design optimization of advanced PWR SiC/SiC fuel cladding for enhanced tolerance of loss of coolant conditions." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/103649.

Abstract:
Limited data have been published, especially experimental data, on integrated multilayer SiC/SiC prototypical fuel cladding. In this work, the mechanical performance of three unique architectures of three-layer silicon carbide (SiC) composite cladding is investigated experimentally under conditions associated with the loss-of-coolant accident (LOCA), and analytically under various conditions. Specifically, this work investigates SiC cladding mechanical performance after exposure to 1,400°C steam for 48 hours and after thermal shock induced by quenching from 1,200°C into either 100°C or 90°C water. Mechanical performance characteristics are then correlated with sample architecture through void characterization and ceramography. The series with reduced thickness did not have a pseudo-ductile regime, due to overloading of the composite layer. The presence of the axial tow did not yield a significant difference in mechanical behavior, most likely because samples were tested in the hoop direction. While as-received and quenched samples behaved similarly (pseudo-ductile failure, except for one series), non-frangible brittle failure (single-crack failure with no release of debris) was systematically observed after oxidation, due to silica buildup in the inner voids of the ceramic matrix composite (CMC) layer. Overall, thermal shock had limited influence on the samples' mechanical characteristics, while oxidation resulted in the formation of silica on the inner walls of the CMC voids, weakening the monolith matrix and leading to brittle fracture. The stress field in the cladding design is simulated by finite element analysis under service and shutdown conditions, both at the core's middle height and at the end of the fuel rod. Stresses in the fuel region are driven by the thermal gradient and arise predominantly from irradiation-induced swelling; at the endplug, the constraints are mainly mechanical. The stress calculations show high sensitivity to the scatter in the data, especially in swelling and thermal conductivity. No cladding with the design studied here can survive either service or shutdown conditions, because of the high irradiation-induced tensile stresses that develop in the hot inner monolith layer. It is shown that this peak tensile stress can be alleviated by adjusting the swelling level of the different layers: the addition of an under-swelling material such as PyC or Si can reduce the monolith tensile stress by 10%, and with a composite that swells 10% less than the monolith, the stress is reduced by 20%.
20

Pendse, Nachiket Vishwas. "An effective dimensional inspection method based on zone fitting." Texas A&M University, 2004. http://hdl.handle.net/1969.1/3239.

Abstract:
Coordinate measuring machines are widely used to generate data points from an actual surface. The generated measurement data must be analyzed to yield the critical geometric deviations of the measured part according to the requirements specified by the designer. However, ANSI standards do not specify the methods that should be used to evaluate the tolerances, and coordinate measuring machines employ different verification algorithms that may yield different results. Functional requirements or assembly conditions on a manufactured part are normally translated into geometric constraints to which the part must conform. The minimum zone evaluation technique is used when the measured data are regarded as an exact copy of the actual surface and the tolerance zone is represented as geometric constraints on the data. In the present study, a new zone-fitting algorithm is proposed. The algorithm evaluates the minimum zone that encompasses the set of points measured from the actual surface; the search for the rigid-body transformation that places the set of points in the zone is modeled as a nonlinear optimization problem. The algorithm is employed to find the form tolerance of 2-D (line, circle) as well as 3-D (cylinder) geometries. It is also used to propose an inspection methodology for turbine blades: by constraining the transformation parameters, the proposed methodology determines whether the points measured at the 2-D cross-sections fit in the corresponding tolerance zones simultaneously.
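A small instance of the zone-fitting formulation described above: finding the orientation that minimizes the width of the band containing measured 2-D points (minimum-zone straightness). The CMM data here are synthetic.

```python
# Minimum-zone straightness: minimize the spread of signed offsets of
# the points from a line, over the line's orientation angle.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 50)
y = 0.3 * x + rng.normal(0.0, 0.02, x.size)     # synthetic straightness data

def zone_width(theta):
    # signed distance of each point to a line at angle theta; the zone
    # is the spread between the extreme offsets (line offset cancels)
    d = y * np.cos(theta) - x * np.sin(theta)
    return d.max() - d.min()

res = minimize_scalar(zone_width, bounds=(0.0, np.pi / 2), method="bounded")
print("minimum zone width:", res.fun,
      "vs least-squares-style guess:", zone_width(np.arctan(0.3)))
```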
21

Zhou, Yao. "Study on genetic algorithm improvement and application." Thesis, Worcester Polytechnic Institute, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-050306-211907/.

22

Albandes, Iuri. "Use of Approximate Triple Modular Redundancy for Fault Tolerance in Digital Circuits." Doctoral thesis, Universidad de Alicante, 2018. http://hdl.handle.net/10045/88248.

Abstract:
Triple modular redundancy (TMR) is a well-known fault-mitigation technique that provides strong protection against single faults, but at a high cost in area and power consumption. For this reason, partial redundancy is often applied to alleviate these overheads. In this context, approximate TMR (ATMR), which implements the triple redundancy with approximate versions of the circuit to be protected, has emerged in recent years as an alternative to partial replication, with the advantage of achieving better trade-offs between fault coverage and overheads. Several techniques for generating approximate circuits have already been proposed in the literature, each with its pros and cons. This work studies the ATMR technique, evaluating the cost-benefit relation between the increase in resources (area) and fault coverage. The first contribution is a new ATMR approach in which all redundant modules are approximate versions of the original design, enabling the generation of ATMR circuits with very low area overhead; this technique is called Full-ATMR (FATMR). The work also presents a second approach that implements ATMR automatically by combining a library of approximate gates (ApxLib) and a multi-objective genetic algorithm (MOOGA). The algorithm performs a blind search over the immense solution space, jointly optimizing fault coverage and area overhead. Experiments comparing our approach with state-of-the-art techniques show improved trade-offs for different benchmark circuits.
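A toy fault-injection experiment in the spirit of ATMR, assuming one exact and two approximate modules for simplicity (the thesis's FATMR makes all three approximate); the functions and the fault model are invented.

```python
# Estimate how often a random single-bit fault is still masked when two
# of the three redundant modules are cheap approximations of the design.
import random

def exact(x):                       # stand-in exact module (4-bit output)
    return (x * 7 + 3) % 16

def apx1(x):                        # approximation, wrong at x = 5
    return exact(x) if x != 5 else 0

def apx2(x):                        # approximation, wrong at x = 9
    return exact(x) if x != 9 else 0

def voter(a, b, c):                 # bitwise majority vote
    return (a & b) | (a & c) | (b & c)

random.seed(5)
masked, TRIALS = 0, 10_000
for _ in range(TRIALS):
    x = random.randrange(16)
    outs = [exact(x), apx1(x), apx2(x)]
    outs[random.randrange(3)] ^= 1 << random.randrange(4)  # single-bit fault
    masked += voter(*outs) == exact(x)
print("masking rate:", masked / TRIALS)
```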
23

Väyrynen, Mikael. "Fault-Tolerant Average Execution Time Optimization for General-Purpose Multi-Processor System-On-Chips." Thesis, Linköping University, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-17705.

Abstract:

Due to developments in semiconductor technology, fault tolerance is important not only for safety-critical systems but also for general-purpose (non-safety-critical) systems. However, instead of guaranteeing that deadlines are always met, for general-purpose systems it is important to minimize the average execution time (AET) while ensuring fault tolerance. For a given job and a soft (transient) no-error probability, we define mathematical formulas for the AET using voting (active replication), rollback recovery with checkpointing (RRC), and a combination of these (CRV), with bus communication overhead included. And, for a given multi-processor system-on-chip (MPSoC), we define integer linear programming (ILP) models that minimize the AET, including bus communication overhead, when: (1) selecting the number of checkpoints when using RRC or a combination that includes RRC, (2) finding the number of processors and the job-to-processor assignment when using voting or a combination that includes voting, and (3) defining the fault-tolerance scheme (voting, RRC, or CRV) to use for each job. Experiments demonstrate significant savings in AET.
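A toy version of the AET trade-off the abstract formalizes, for rollback recovery with checkpointing only and without the bus-overhead terms; the error model and the numbers are illustrative, not the thesis's.

```python
# AET for RRC with n checkpoints: more checkpoints shorten re-execution
# after an error but add checkpointing overhead.
C, O, R = 100.0, 1.5, 0.5        # job length, checkpoint and recovery overheads
lam = 0.002                      # error rate per time unit (illustrative)

def aet_rrc(n):
    seg = C / n + O              # one segment plus its checkpoint
    p = lam * (C / n)            # chance a segment is hit by an error
    # geometric retries: each failed attempt pays recovery plus re-execution
    return n * (seg + (p / (1.0 - p)) * (seg + R))

best = min(range(1, 51), key=aet_rrc)
print("best n:", best, "AET:", aet_rrc(best), "vs n=1:", aet_rrc(1))
```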

24

Keresztes, Janos C., R. John Koshel, Karlien D’huys, Bart De Ketelaere, Jan Audenaert, Peter Goos, and Wouter Saeys. "Augmented design and analysis of computer experiments: a novel tolerance embedded global optimization approach applied to SWIR hyperspectral illumination design." Optical Society of America, 2016. http://hdl.handle.net/10150/622951.

Abstract:
A novel meta-heuristic approach for minimizing nonlinear constrained problems is proposed, which offers tolerance information during the search for the global optimum. The method is based on the concept of design and analysis of computer experiments combined with a novel two-phase design augmentation (DACEDA), which models the entire merit space using a Gaussian process, with iteratively increased resolution around the optimum. The algorithm is introduced through a series of case studies of increasing complexity for optimizing the uniformity of a short-wave infrared (SWIR) hyperspectral imaging (HSI) illumination system (IS). The method is first demonstrated on a two-dimensional problem consisting of the positioning of analytical isotropic point sources. It is then applied to two-dimensional (2D) and five-dimensional (5D) SWIR HSI IS versions using close- and far-field measured source models within the non-sequential ray-tracing software FRED, including inherent stochastic noise. The proposed method is compared to other heuristic approaches such as simplex and simulated annealing (SA). It is shown that DACEDA converges towards a minimum with a 1% improvement compared to simplex and SA and, more importantly, requires only half the number of simulations. Finally, a concurrent tolerance analysis is done within DACEDA for the five-dimensional case, such that further simulations are not required.
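A sketch of the surrogate loop behind the DACEDA idea reported above: fit a Gaussian process to an initial design, then iteratively augment the design near the incumbent optimum, whose predictive variance also carries tolerance information. The merit function stands in for the expensive ray-traced uniformity metric.

```python
# Gaussian-process surrogate with design augmentation around the optimum.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)

def merit(x):                               # stand-in for an expensive simulation
    return np.sin(3.0 * x) + 0.5 * (x - 0.6)**2

X = rng.uniform(0.0, 2.0, size=(8, 1))      # initial space-filling design
y = merit(X[:, 0])
for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X, y)
    grid = np.linspace(0.0, 2.0, 400)[:, None]
    mu, sd = gp.predict(grid, return_std=True)
    x_best = grid[mu.argmin(), 0]
    # augmentation phase: sample a new point clustered around the optimum
    x_new = np.clip(x_best + rng.normal(0.0, 0.05), 0.0, 2.0)
    X = np.vstack([X, [[x_new]]])
    y = np.append(y, merit(x_new))

print("optimum near x =", x_best, "GP sigma there:", sd[mu.argmin()])
```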
25

Shrimal, Shubhendra. "Maximizing Parallelization Opportunities by Automatically Inferring Optimal Container Memory for Asymmetrical Map Tasks." Bowling Green State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1468011920.

26

Samii, Soheil. "Quality-Driven Synthesis and Optimization of Embedded Control Systems." Doctoral thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-68641.

Abstract:
This thesis addresses several synthesis and optimization issues for embedded control systems. Examples of such systems are automotive and avionics systems in which physical processes are controlled by embedded computers through sensor and actuator interfaces. The execution of multiple control applications, spanning several computation and communication components, leads to a complex temporal behavior that affects control quality. The relationship between system timing and control quality is a key issue to consider across the control design and computer implementation phases in an integrated manner. We present such an integrated framework for scheduling, controller synthesis, and quality optimization for distributed embedded control systems. At runtime, an embedded control system may need to adapt to environmental changes that affect its workload and computational capacity. Examples of such changes, which inherently increase the design complexity, are mode changes, component failures, and resource usages of the running control applications. For these three cases, we present trade-offs among control quality, resource usage, and the time complexity of design and runtime algorithms for embedded control systems. The solutions proposed in this thesis have been validated by extensive experiments. The experimental results demonstrate the efficiency and importance of the presented techniques.
27

Lee, Abraham. "A Hybrid Method for Sensitivity Optimization With Application to Radio-Frequency Product Design." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4358.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A method for performing robust optimal design that combines the efficiency of experimental designs with the accuracy of nonlinear programming (NLP) has been developed, called Search-and-Zoom. Two case studies from the RF and communications industry, a high-frequency micro-strip band-pass filter (BPF) and a rectangular directional patch antenna, were used to show that sensitivity optimization could be effectively performed in this industry and to compare the computational efficiency of traditional NLP methods (using the fmincon solver in MATLAB R2013a) with the hybrid method Search-and-Zoom. The sensitivity of the BPF's S11 response was reduced from 0.06666 at the (non-robust) nominal optimum to 0.01862 at the sensitivity optimum. Design feasibility was improved by reducing the likelihood of violating constraints from 20% to nearly 0%, assuming RSS (i.e., normally-distributed) input tolerances, and from 40% to nearly 0%, assuming WC (i.e., uniformly-distributed) input tolerances. The sensitivity of the patch antenna's S11 function was also improved, from 0.02068 at the nominal optimum to 0.0116 at the sensitivity optimum. Feasibility at the sensitivity optimum was estimated to be 100%, and thus did not need to be improved. In both cases, the computational effort to reach the sensitivity optima, as well as the sensitivity optima with RSS and WC feasibility robustness, was reduced by more than 80% (on average) by using Search-and-Zoom, compared to the NLP solver.
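A hedged sketch of the two-stage idea behind a search-then-zoom method: a coarse factorial design locates a promising region, and a gradient-based NLP solve refines it. scipy.optimize.minimize stands in for MATLAB's fmincon, and the response and sensitivity metric below are invented for illustration.

    # Hedged two-stage "search then zoom" sketch: a coarse grid search locates
    # a promising region, then a gradient-based NLP solver (a stand-in for
    # MATLAB's fmincon) refines it. The response f and the sensitivity metric
    # are illustrative assumptions, not the thesis's RF models.
    import numpy as np
    from itertools import product
    from scipy.optimize import minimize

    f = lambda x: (x[0] - 1.2)**2 * (1 + np.sin(3*x[1])**2)   # toy S11-like response

    def sensitivity(x, h=1e-3):
        # sensitivity metric: norm of the finite-difference gradient of f at x
        g = [(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(2)]
        return float(np.hypot(*g))

    # Phase 1: coarse experimental design (3-level full factorial) over the box
    levels = np.linspace(0.0, 2.0, 3)
    x0 = min((np.array(p) for p in product(levels, levels)), key=sensitivity)

    # Phase 2: zoom with a bounded NLP solve around the best grid point
    res = minimize(sensitivity, x0, bounds=[(0.0, 2.0)]*2, method="L-BFGS-B")
    print("sensitivity optimum:", res.x, "sensitivity:", res.fun)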
28

Sebaey, Abdella Tamer Ali Abdella. "Characterization and optimization of dispersed composite laminates for damage resistant aeronautical structures." Doctoral thesis, Universitat de Girona, 2013. http://hdl.handle.net/10803/98393.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The main objective of the thesis is to assess the damage resistance and damage tolerance of non-conventional dispersed laminates and to compare their response with that of conventional ones. Part of the effort is also devoted to understanding the delamination behavior of multidirectional laminates. In the first part of the thesis, delamination in multidirectional laminates is studied. The objective is to design a proper stacking sequence, capable of avoiding intralaminar damage (crack jumping), to enable fracture toughness characterization under pure mode I. The results of this study show that the higher the crack arm bending stiffness, the lower the tendency to crack jumping. This phenomenon is also studied experimentally, and the same conclusion is drawn.
29

Martin, Ivo [Verfasser], Dieter [Gutachter] Bestle, and Arnold [Gutachter] Kühhorn. "Automated process for robust airfoil design-optimization incorporating critical eigenmode identification and production-tolerance evaluation / Ivo Martin ; Gutachter: Dieter Bestle, Arnold Kühhorn." Cottbus : BTU Cottbus - Senftenberg, 2019. http://d-nb.info/1180389603/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Psiakis, Rafail. "Performance optimization mechanisms for fault-resilient VLIW processors." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S095/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Embedded processors in critical domains require a combination of reliability, performance and low energy consumption. Very Long Instruction Word (VLIW) processors provide performance improvements through Instruction Level Parallelism (ILP) exploitation, while keeping cost and power at low levels. Since ILP is highly application dependent, the processor does not use all its resources constantly, and these resources can therefore be utilized for redundant instruction execution. This thesis presents a fault injection methodology for VLIW processors and three hardware mechanisms to deal with soft, permanent and long-term faults, leading to four contributions. The first contribution presents an Architectural Vulnerability Factor (AVF) and Instruction Vulnerability Factor (IVF) analysis schema for VLIW processors. A fault injection methodology targeting different memory structures is proposed to extract the architectural/instruction masking capabilities of the processor, and a high-level failure classification schema is presented to categorize the output of the processor. The second contribution explores heterogeneous idle resources at run-time, both inside and across consecutive instruction bundles. To achieve this, a hardware-optimized instruction scheduling technique is applied in parallel with the pipeline to efficiently control the replication and the scheduling of the instructions. Following the trend of increasing parallelization, a cluster-based design is also proposed to tackle scalability issues while maintaining a reasonable area/power overhead. The proposed technique achieves a speed-up of 43.68% in performance with a ~10% area and power overhead over existing approaches; the AVF and IVF analyses evaluate the vulnerability of the processor with the proposed mechanism. The third contribution deals with persistent faults. A hardware mechanism is proposed which replicates instructions at run-time and schedules them in the idle slots considering the resource constraints. If a resource becomes faulty, the proposed approach efficiently rebinds both the original and replicated instructions during execution. Early performance evaluation results show up to a 49% performance gain over existing techniques. In order to further decrease the performance overhead and to support single and multiple Long-Duration Transient (LDT) error mitigation, a fourth contribution is presented: a hardware mechanism which detects the faults that are still active during execution and re-schedules the instructions to use not only the healthy function units, but also the fault-free components of the affected function units. When the fault disappears, the affected function unit components can be reused. The scheduling window of the proposed mechanism spans two instruction bundles, making it possible to explore mitigation solutions in both the current and the next instruction execution. The obtained fault injection results show that the proposed approach can mitigate a large number of faults with low performance, area, and power overhead.
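As a rough software analogy of the idle-slot replication idea (the actual mechanism is hardware working alongside the pipeline), the toy Python sketch below fills empty slots of fixed-width VLIW bundles with duplicated operations from the current and next bundle; the issue width and the program are assumptions.

    # Hedged toy model of idle-slot replication in VLIW bundles: each bundle
    # has a fixed issue width, and empty slots are filled with duplicates of
    # instructions from the same or the next bundle so results can later be
    # compared. An illustration of the scheduling idea only, not the thesis's
    # hardware mechanism.
    ISSUE_WIDTH = 4

    def replicate(bundles):
        out = []
        for i, bundle in enumerate(bundles):
            slots = list(bundle)
            pending = list(bundle) + (bundles[i + 1] if i + 1 < len(bundles) else [])
            for op in pending:                   # fill idle slots across bundles
                if len(slots) >= ISSUE_WIDTH:
                    break
                slots.append(("dup", op))        # replicated copy for checking
            out.append(slots)
        return out

    program = [["add r1", "mul r2"], ["sub r3"], ["ld r4", "add r5", "or r6"]]
    for slots in replicate(program):
        print(slots)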
31

Zheng, Yexin. "Circuit Design Methods with Emerging Nanotechnologies." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/30000.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
As complementary metal-oxide semiconductor (CMOS) technology faces more and more severe physical barriers on the path of continuous feature-size scaling, innovative nano-scale devices and other post-CMOS technologies have been developed to enhance future circuit design and computation. These nanotechnologies have shown promising potential to achieve order-of-magnitude improvements in performance and integration density. The substitution of CMOS transistors with nano-devices is expected not only to continue the exponential projection of Moore's Law, but also to raise significant challenges and opportunities, especially in the field of electronic design automation. The major obstacles that designers are experiencing with emerging nanotechnology design include: i) the existing computer-aided design (CAD) approaches in the context of conventional CMOS Boolean design cannot be directly employed in the nanoelectronic design process, because the intrinsic electrical characteristics of many nano-devices are not best suited for Boolean implementations but demonstrate strong capability for implementing non-conventional logic such as threshold logic and reversible logic; ii) due to the density and size factors of nano-devices, the defect rate of nanoelectronic systems is much higher than that of conventional CMOS systems, so existing design paradigms cannot guarantee design quality and lead to even worse results with high failure ratios. Motivated by the compelling potential and design challenges of emerging post-CMOS technologies, this dissertation focuses on fundamental design methodologies to effectively and efficiently achieve high-quality nanoscale designs. A novel programmable logic element (PLE) is first proposed to explore the versatile functionalities of threshold gates (TGs) and multi-threshold threshold gates (MTTGs). This PLE structure can realize all three- or four-variable logic functions through configuring binary control bits, and is the first single threshold logic structure that provides complete Boolean logic implementation. Based on the PLEs, a reconfigurable architecture is constructed to offer dynamic reconfigurability with little or no reconfiguration overhead, due to the intrinsic self-latching property of nanopipelining. Our reconfiguration data generation algorithm can further reduce the reconfiguration cost. To fully take advantage of such threshold logic design using emerging nanotechnologies, we also developed a combinational equivalence checking (CEC) framework for threshold logic design. Based on the features of threshold logic gates and circuits, different techniques for formulating a given threshold logic circuit in conjunctive normal form (CNF) are introduced to facilitate efficient SAT-based verification. Evaluated with mainstream benchmarks, our hybrid algorithm, which takes into account both input symmetry and input weight order of threshold gates, can efficiently generate CNF formulas in terms of both SAT solving time and CNF generation time. The reversible logic synthesis problem is then considered, with a focus on efficient synthesis heuristics that provide high-quality synthesis results within a reasonable computation time. We have developed a weighted directed graph model for function representation and complexity measurement, and an atomic transformation is constructed to associate the function complexity variation with reversible gates.
The efficiency of our heuristic lies in maximally decreasing the function complexity during the synthesis steps, as well as in its capability to climb out of local optima. Thereafter, swarm intelligence, a machine learning technique, is employed to search the solution space for reversible logic synthesis, achieving further performance improvement. To tackle the high defect rates of emerging nanotechnology manufacturing processes, we have developed a novel defect-aware logic mapping framework for nanowire-based PLA architectures via Boolean satisfiability (SAT). PLA defects of various types are formulated as covering and closure constraints, and the defect-aware logic mapping is then solved efficiently using available SAT solvers. This approach can generate valid logic mappings with defect rates as high as 20%. The proposed method is universally suitable for various nanoscale PLAs, including AND/OR, NOR/NOR structures, etc. In summary, this work provides initial attempts to address two major problems confronting future nanoelectronic system designs: the development of electronic design automation tools and reliability issues. However, many challenging open questions remain in this emerging and promising area. We hope our work lays stepping stones for nano-scale circuit design optimization through exploiting the distinctive characteristics of emerging nanotechnologies.
Ph. D.
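The threshold logic underlying the proposed PLE can be illustrated in a few lines of Python; the gate model (fire iff the weighted input sum reaches the threshold) is standard, while the specific weights and thresholds below are our examples, not the dissertation's PLE configuration encoding.

    # Hedged sketch of threshold logic, the gate model behind TGs and PLEs:
    # the gate fires iff the weighted input sum reaches the threshold T.
    def threshold_gate(inputs, weights, T):
        return int(sum(w * x for w, x in zip(weights, inputs)) >= T)

    # A 2-of-3 majority gate as a threshold function: weights (1,1,1), T = 2
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                assert threshold_gate((a, b, c), (1, 1, 1), 2) == int(a + b + c >= 2)
    # AND and OR are also threshold functions: T = n and T = 1 respectively
    print(threshold_gate((1, 1), (1, 1), 2), threshold_gate((0, 1), (1, 1), 1))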
32

Rau, Anand V. "Processing of toughened cyanate ester matrix composites." Diss., This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-06062008-151604/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

García, Gómez Sonia C. "Allocation et optimisation de tolérances géométriques par des polyédres prismatiques." Electronic Thesis or Diss., Bordeaux, 2023. http://www.theses.fr/2023BORD0504.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Geometric and dimensional deviations of mechanical components can cause problems of assemblability and/or functionality in mechanisms. Geometric and dimensional specifications represent the limits of the manufacturing defects of a given surface. Tolerance specification is not an easy task because (i) the assigned tolerance values affect the functionalities of a system and the manufacturing cost of its parts, and (ii) design tolerances are often interrelated and contribute to a resultant tolerance. Tolerance analysis and tolerance synthesis are the two typical ways to approach the problem of tolerance design. Tolerance synthesis is traditionally seen as a "constrained optimization problem" in which the objective function is usually a cost function, a quality function, or a combined cost-quality function. In the case of over-constrained mechanisms, the interaction of the tolerances is complex and cannot be described by an analytical function. Hence, it is typical to perform tolerance allocation instead of tolerance synthesis. The objective of tolerance allocation is then to complete or augment the tolerance specification, originally made from experience or empirical knowledge, by incorporating heuristic or optimization methods. In this work we show how to perform tolerance allocation using the prismatic polyhedral approach as the tolerance model and simulated annealing as the heuristic optimization algorithm. To do this, some intermediate problems are discussed, such as (i) the quality of the operands and (ii) the computational time required to run a simulation; we also develop (iii) an indicator to quantify the compliance of a mechanism with its functional condition.
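A minimal sketch of tolerance allocation by simulated annealing, in the spirit described above but with a linear worst-case stack standing in for the prismatic polyhedral operands; the sensitivities, the functional limit and the annealing schedule are invented.

    # Hedged sketch of tolerance allocation by simulated annealing: maximize
    # the (cheapness-favoring) sum of tolerances subject to a functional
    # condition. A linear worst-case stack bound stands in here for the
    # thesis's prismatic polyhedral operands.
    import math, random
    random.seed(1)

    a = [1.0, 2.0, 1.5]          # sensitivities of the functional condition
    FC = 0.30                    # allowed resultant deviation (assumed)

    def feasible(t):             # compliance indicator for the stand-in model
        return sum(ai * ti for ai, ti in zip(a, t)) <= FC

    t = [0.02] * 3               # initial allocation "from experience"
    best, T = list(t), 1.0
    for k in range(5000):
        cand = [max(1e-4, ti + random.gauss(0, 0.002)) for ti in t]
        if not feasible(cand):
            continue
        d = sum(cand) - sum(t)   # objective: total tolerance (larger = cheaper)
        if d > 0 or random.random() < math.exp(d / T):
            t = cand
            if sum(t) > sum(best):
                best = list(t)
        T *= 0.999               # cooling schedule
    print("allocated tolerances:", [round(x, 4) for x in best])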
34

Guilbert, Damien. "Tolérance aux défauts et optimisation des convertisseurs DC/DC pour véhicules électriques à pile à combustible." Thesis, Belfort-Montbéliard, 2014. http://www.theses.fr/2014BELF0245/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Over the last years, reliability and continuity of service of powertrains have become major challenges so that fuel cell electric vehicles (FCEVs) can access the mass automotive market. Indeed, the presence of faults in powertrains can lead to malfunctions in the vehicle and consequently reduce its performance compared with conventional vehicles. In the case of electrical faults, powertrains of FCEVs have to include fault-tolerant topologies and/or controls for the different DC/DC and DC/AC converters. Within the framework of this research work, the study is focused on the DC/DC converter combined with a Proton Exchange Membrane Fuel Cell (PEMFC). The DC/DC converter must respond to challenging issues in FCEV applications, namely: low weight and small volume, high energy efficiency, fuel cell current ripple reduction, and reliability. Based on a thorough bibliographical study of non-isolated and isolated DC/DC converter topologies, an interleaved DC/DC boost converter has been chosen, meeting the FCEV requirements. The purpose of this thesis has then been to size and control the chosen fault-tolerant DC/DC converter topology for FCEVs. Algorithms for degraded mode management of this converter have been developed and implemented experimentally. As part of this, the interaction between the PEMFC and the interleaved DC/DC boost converter has been investigated. Theoretical analysis, simulation, and experiments have been combined to carry out this work.
35

Goka, Edoh. "Analyse des tolérances des systèmes complexes – Modélisation des imperfections de fabrication pour une analyse réaliste et robuste du comportement des systèmes." Thesis, Paris, ENSAM, 2019. http://www.theses.fr/2019ENAM0019/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Tolerance analysis aims at verifying the impact of individual tolerances on the assembly and functional requirements of a mechanical system. Manufactured products have several types of contacts and their geometry is imperfect, which may lead to assembly and functional failures. Traditional methods for tolerance analysis do not consider form defects. This thesis proposes a new procedure for tolerance analysis which considers form defects and the different types of contact in its geometrical behavior modeling. A method is first proposed to model form defects in order to make the simulations more realistic. Thereafter, form defects are integrated into the geometrical behavior modeling of an overconstrained mechanical system, considering the different types of contacts; indeed, these contacts behave differently once the imperfections are taken into account. Monte Carlo simulation coupled with an optimization technique is chosen as the method to perform the tolerance analysis. Nonetheless, this method requires excessive computational effort. To overcome this problem, probabilistic models built with the Kernel Density Estimation method are proposed, which reduce the computation time significantly.
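The acceleration idea can be sketched as follows: run the expensive gap computation (here a cheap linear stand-in for the optimization-based one) on a modest Monte Carlo sample, fit a kernel density estimate to the resulting functional characteristic, and read the defect probability from the fitted density; distributions and coefficients are assumptions.

    # Hedged sketch: a modest Monte Carlo sample of part deviations feeds a
    # kernel density estimate of the functional characteristic, and the
    # defect probability is read from the fitted density instead of running
    # millions of expensive model evaluations.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(2)

    def functional_gap(dev):
        # Stand-in for the gap obtained via optimization in the real method
        return 0.05 + dev @ np.array([0.8, -0.5, 0.3])

    devs = rng.normal(0.0, 0.02, size=(2000, 3))      # form/position deviations
    gaps = np.array([functional_gap(d) for d in devs])

    kde = gaussian_kde(gaps)
    p_defect = kde.integrate_box_1d(-np.inf, 0.0)     # P(gap < 0): non-functional
    print("estimated defect probability:", p_defect)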
36

Кудряшов, В. С. "Удосконалення технологічного процесу виготовлення шестерні 1412.1820.1334 редуктора Ц2У-100 шляхом структурно-параметричної оптимізації зубофрезерної операції." Master's thesis, Сумський державний університет, 2020. https://essuir.sumdu.edu.ua/handle/123456789/81973.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The master's thesis comprises 104 pages, including twenty figures, twenty tables, a bibliography of 23 sources, and 11 appendices on 26 pages. The purpose of the work is to improve the efficiency of the technological process for manufacturing the gear through structural and parametric optimization of the gear milling operation. To achieve this goal, the following tasks were set and solved: 1) analyze the baseline technological process for manufacturing the part; 2) develop an improved technological process for manufacturing the part; 3) design a special fixture for milling the gear teeth; 4) design a special inspection device for checking the radial and axial runout of the part's surfaces; 5) study the designed machine-tool fixture using static and dynamic analysis methods and formulate recommendations for improving its design. The object of the research is the technological process of manufacturing the gear, in particular the gear milling operation. The subject of the research is the structural and parametric optimization of the gear milling operation and the design of the fixture for milling the teeth. Scientific novelty: as a result of theoretical and experimental research, static and dynamic analyses of the proposed tooth-milling fixture were performed, which made it possible to formulate proposals and recommendations aimed at improving the design, ensuring its stable operation, and increasing the efficiency of the gear milling operation as a whole.
37

Tierno, Antonio. "Automatic Design Space Exploration of Fault-tolerant Embedded Systems Architectures." Doctoral thesis, Università degli studi di Trento, 2023. https://hdl.handle.net/11572/364571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Embedded systems may have competing design objectives, such as maximizing reliability, increasing functional safety, minimizing product cost, and minimizing energy consumption. The architectures must therefore be configured to meet varied requirements and multiple design objectives. In particular, reliability and safety are receiving increasing attention; consequently, the configuration of fault-tolerant mechanisms is a critical design decision. This work proposes a method for automatic selection of appropriate fault-tolerant design patterns, simultaneously optimizing multiple objective functions. First, we present an exact method that leverages the power of Satisfiability Modulo Theories to encode the problem with a symbolic technique. It is based on a novel assessment of reliability which is part of the evaluation of alternative designs. Afterwards, we empirically evaluate the performance of a near-optimal approximate variant that allows us to solve the problem even when the instance size makes it intractable in terms of computing resources. The efficiency and scalability of this method are validated with a series of experiments of different sizes and characteristics, and by comparing it with existing methods on a test problem that is widely used in the reliability optimization literature.
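As a brute-force stand-in for the SMT encoding (illustration only), the sketch below enumerates assignments of a fault-tolerance pattern to each component and keeps the Pareto front over cost and failure probability; the pattern catalogue and the series reliability model are invented.

    # Hedged brute-force stand-in for the SMT-based selection: enumerate the
    # assignment of a fault-tolerance pattern to each component and keep the
    # Pareto front over (cost, failure probability). Pattern data are invented.
    from itertools import product

    patterns = {                 # name: (cost, component failure probability)
        "none":  (1.0, 1e-2),
        "TMR":   (3.2, 3e-4),    # triple modular redundancy
        "retry": (1.4, 2e-3),
    }
    N_COMPONENTS = 3

    def evaluate(assign):
        cost = sum(patterns[p][0] for p in assign)
        p_ok = 1.0
        for p in assign:
            p_ok *= 1.0 - patterns[p][1]         # series reliability model
        return cost, 1.0 - p_ok

    points = {a: evaluate(a) for a in product(patterns, repeat=N_COMPONENTS)}
    pareto = [a for a, (c, f) in points.items()
              if not any(c2 <= c and f2 <= f and (c2, f2) != (c, f)
                         for c2, f2 in points.values())]
    for a in pareto:
        print(a, points[a])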
38

Hays, Joseph T. "Parametric Optimal Design Of Uncertain Dynamical Systems." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/28850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This research effort develops a comprehensive computational framework to support the parametric optimal design of uncertain dynamical systems. Uncertainty comes from various sources, such as system parameters, initial conditions, sensor and actuator noise, and external forcing. Treatment of uncertainty in design is of paramount practical importance because all real-life systems are affected by it; not accounting for uncertainty may result in poor robustness, sub-optimal performance, and higher manufacturing costs. Contemporary methods for the quantification of uncertainty in dynamical systems are computationally intensive, which has so far made a robust design optimization methodology prohibitive. Some existing algorithms address uncertainty in sensors and actuators during an optimal design; however, a comprehensive design framework that can treat all kinds of uncertainty with diverse distribution characteristics in a unified way has been unavailable. The computational framework uses the Generalized Polynomial Chaos methodology to quantify the effects of various sources of uncertainty found in dynamical systems; a Least-Squares Collocation Method is used to solve the corresponding uncertain differential equations. This technique is significantly faster computationally than traditional sampling methods and makes the construction of a parametric optimal design framework for uncertain systems feasible. The novel framework allows uncertainty to be treated directly in the parametric optimal design process. Specifically, the following design problems are addressed: motion planning of fully-actuated and under-actuated systems; multi-objective robust design optimization; and optimal uncertainty apportionment concurrently with robust design optimization. The framework advances the state of the art and enables engineers to produce more robust and optimally performing designs at an optimal manufacturing cost.
Ph. D.
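A minimal sketch of generalized polynomial chaos with least-squares collocation: a response depending on one standard Gaussian parameter is expanded in probabilists' Hermite polynomials and the coefficients are fitted at collocation points. The response g is a toy stand-in for an uncertain dynamical system.

    # Hedged sketch of generalized polynomial chaos with least-squares
    # collocation: expand a response of a Gaussian parameter in probabilists'
    # Hermite polynomials, fit the coefficients at collocation points, then
    # read mean and variance directly from the coefficients.
    import numpy as np
    from numpy.polynomial.hermite_e import hermevander
    from math import factorial

    g = lambda xi: np.exp(0.3 * xi)        # toy response of the uncertain system
    DEG = 6

    xi = np.linspace(-3.0, 3.0, 40)        # collocation points in standard space
    A = hermevander(xi, DEG)               # He_0..He_DEG evaluated at xi
    c, *_ = np.linalg.lstsq(A, g(xi), rcond=None)

    mean = c[0]                            # E[He_0] = 1, higher modes zero-mean
    var = sum(c[k]**2 * factorial(k) for k in range(1, DEG + 1))
    print("gPC mean:", mean, "exact:", np.exp(0.045))
    print("gPC std :", np.sqrt(var))

For this g the exact mean is exp(0.045), which the truncated expansion reproduces closely; mean and variance fall out of the coefficients because the Hermite modes are orthogonal with E[He_k^2] = k!.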
39

Mitropoulou, Konstantina. "Performance optimizations for compiler-based error detection." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10473.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The trend towards smaller transistor technologies and lower operating voltages stresses the hardware and makes transistors more susceptible to transient errors. In future systems, performance and power gains will come at the cost of unreliable areas on the chip. For this reason, there is an increased need for low-overhead, highly-reliable error detection methodologies. In recent years, several techniques have been proposed, the majority of them based on redundancy, which can be implemented at several levels (e.g., hardware, instruction, thread, or process). In instruction-level error detection approaches, the compiler replicates the instructions of the program and inserts checks wherever they are needed. The checks evaluate code correctness and decide whether or not an error has occurred. This type of error detection is more flexible than the hardware alternatives: it allows the programmer to choose the protected area of the program, and it can be applied without any hardware modifications. On the other hand, the replicated instructions and the checks cause a large slowdown, making software techniques less appealing. In this thesis, we propose two techniques that aim at reducing the error detection overhead of compiler-based approaches and improving the system's performance without sacrificing fault coverage. The first technique, DRIFT, achieves this by decoupling the execution of the code (original and replicated) from the checks. The checks are compare-and-jump instructions; they tend to make the code sequential and prohibit the compiler from performing aggressive instruction scheduling optimizations, a phenomenon we call basic-block fragmentation. DRIFT reduces the impact of basic-block fragmentation by breaking the synchronized execute-check-confirm-execute cycle. In this way, DRIFT generates scheduler-friendly code with more instruction-level parallelism (ILP). As a result, it reduces the performance overhead down to 1.29× (on average) and outperforms the state of the art by up to 29.7%, retaining the same fault coverage. Next, CASTED focuses on reducing the impact of error detection overhead on single-chip scalable architectures that are composed of tightly-coupled cores. The proposed compiler methodology adaptively distributes the error detection overhead to the available resources across multiple cores, fully exploiting the abundant ILP of these architectures. CASTED adapts to a wide range of architecture configurations (issue-width, inter-core communication). The results show that CASTED matches the performance of, and often outperforms, sometimes by as much as 21.2%, the best fixed state-of-the-art approach while maintaining the same fault coverage.
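The decoupling idea of DRIFT can be caricatured in a few lines: both the original and the replicated stream run ahead, and the comparison is clustered at the end of a region instead of after every instruction. Python stands in for compiler-generated code here; this illustrates check placement only, not the thesis's implementation.

    # Hedged toy illustration of decoupled checking: rather than a
    # compare-and-jump after every replicated instruction (which fragments
    # basic blocks), both streams execute ahead and a single comparison
    # cluster closes the region.
    def region(values, flip_index=None):
        orig, repl = [], []
        for i, v in enumerate(values):       # both streams execute ahead
            r = v * 2 + 1
            orig.append(r)
            repl.append(r ^ 1 if i == flip_index else r)  # optional injected bit flip
        # decoupled check: one comparison cluster per region, not one per op
        if orig != repl:
            raise RuntimeError("transient error detected at region end")
        return orig

    print(region([1, 2, 3]))                 # clean run
    try:
        region([1, 2, 3], flip_index=1)      # simulated transient fault in the replica
    except RuntimeError as e:
        print(e)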
40

Dumas, Antoine. "Développement de méthodes probabilistes pour l'analyse des tolérances des systèmes mécaniques sur-contraints." Thesis, Paris, ENSAM, 2014. http://www.theses.fr/2014ENAM0054/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Tolerance analysis of mechanisms aims at evaluating product quality during the design stage. The technique consists of computing the defect probability of mechanisms in large-series production, where an assembly condition and a functional condition are checked. The current method couples Monte Carlo simulation with an optimization algorithm, which is far too time consuming. The objective of this thesis is to develop more efficient methods, based on probabilistic approaches, for the tolerance analysis of overconstrained mechanisms. First, a linearization procedure is proposed to simplify the optimization step, and the impact of this procedure on the accuracy of the probability is studied. To minimize the approximation error, two iterative procedures are proposed for the assembly problem; they compute accurate defect probabilities in a reduced computing time. Besides, a new resolution method based on the system reliability method FORM (First Order Reliability Method) for systems was developed for the functional problem. In order to apply this method, a new system formulation of the tolerance analysis problem is elaborated: the formulation splits the overconstrained mechanism into several isoconstrained configurations, the goal being to consider only the dominant configurations which lead to a failure situation. The proposed method greatly reduces the computing time, providing results within minutes; low probabilities can also be reached, and their order of magnitude does not influence the computing time.
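A hedged sketch of FORM for a single condition via the classical Hasofer-Lind / Rackwitz-Fiessler iteration; the thesis's contribution is the system-level extension over isoconstrained configurations, which this toy (with an assumed linear limit state) does not capture.

    # Hedged sketch of FORM via the Hasofer-Lind / Rackwitz-Fiessler fixed
    # point iteration for one limit state g in standard normal space.
    import numpy as np
    from scipy.stats import norm

    def g(u):                      # assumed linear limit state (failure: g < 0)
        return 0.9 - 0.4*u[0] - 0.3*u[1]

    def grad_g(u, h=1e-6):
        return np.array([(g(u + h*e) - g(u - h*e)) / (2*h) for e in np.eye(2)])

    u = np.zeros(2)
    for _ in range(50):            # HL-RF fixed point iteration
        gr = grad_g(u)
        u = (gr @ u - g(u)) / (gr @ gr) * gr
    beta = np.linalg.norm(u)       # reliability index
    print("beta:", beta, "P_f ~", norm.cdf(-beta))

For the linear limit state above the iteration converges in one step to beta = 0.9/0.5 = 1.8, i.e. a defect probability of about 3.6%.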
41

Nilsson, Johan. "Accurate description of heterogeneous tumors for biologically optimized radiation therapy." Doctoral thesis, Stockholm : Division of medical radiation physics, Department of oncology-pathology, Stockholm University and Karolinska Institutet, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Benichou, Sami. "Intégration des effets des dilatations thermiques dans le tolérancement." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00749566.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Functional tolerancing must guarantee the assemblability and correct operation of a mechanism by imposing the functional specifications to be met by its parts. These specifications are expressed using the ISO tolerancing standards and must be verified at 20°C. For mechanisms subjected to high temperatures, the influence of the tolerances must be accumulated with that of the thermal expansions at the various thermal regimes. After formulating hypotheses on the behavior of joints with contact or with clearances affected by thermal deformations, and on the influence of temperature uncertainties, the proposed methodology separates the thermal computation from the tolerancing. The thermal analysis office determines the temperature fields and the displacements of the mesh nodes by the finite element method, starting from the nominal models of the parts. The accumulation of tolerances and expansions is based on the analysis-line method. For each requirement, the terminal surface is discretized into several analysis points. In each joint, transfer relations determine the contact points and the influence, on the requirement, of the expansions and thermal deviations at these points. An application to an industrial mechanism demonstrates the value of optimizing the nominal dimensions of the models in order to maximize the tolerances while satisfying all the requirements.
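A one-dimensional illustration of accumulating tolerances and thermal expansions along an analysis line: each link contributes its nominal dimension, its tolerance, and an alpha*L*dT expansion at the operating regime. The dimensions, CTEs and temperature rise below are invented, not the thesis's case study.

    # Hedged 1D stack-up combining tolerances with thermal expansion at an
    # operating temperature above the 20 degC reference. Values are invented.
    links = [
        # (nominal mm, tolerance +/- mm, CTE 1/K, direction sign)
        (120.0, 0.05, 23e-6, +1),   # aluminium housing
        (119.6, 0.04, 12e-6, -1),   # steel shaft
    ]
    DT = 80.0                       # K above the 20 degC reference

    nominal = sum(s * L for L, t, a, s in links)
    worst = sum(t for L, t, a, s in links)                 # worst-case tolerances
    thermal = sum(s * a * L * DT for L, t, a, s in links)  # expansion at regime

    gap_min = nominal + thermal - worst
    gap_max = nominal + thermal + worst
    print(f"gap at {20 + DT:.0f} degC in [{gap_min:.3f}, {gap_max:.3f}] mm")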
43

Thakral, Garima. "Process-Voltage-Temperature Aware Nanoscale Circuit Optimization." Thesis, University of North Texas, 2010. https://digital.library.unt.edu/ark:/67531/metadc67943/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Embedded systems targeted at portable applications are required to have low power consumption because such portable devices are typically powered by batteries. The memory accesses of such battery-operated portable systems, including laptops, cell phones and other devices, consume a significant amount of power, which directly affects battery life. Therefore, efficient, leakage-saving cache designs are needed for longer operation of battery-powered applications. Design engineers have limited control over many design parameters of the circuit and hence face many challenges due to inherent process technology variations, particularly in static random access memory (SRAM) circuit design. As CMOS process technologies scale down deeper into the nanometer regime, the push for high-performance and reliable systems becomes even more challenging. As a result, developing low-power designs while maintaining good circuit performance becomes a very difficult task. Furthermore, accurate analysis and optimization of the various forms of total power dissipation and of performance in nanoscale CMOS technologies, particularly in SRAMs, is another critical issue to be considered. This dissertation proposes power-leakage and static noise margin (SNM) analysis and methodologies to achieve optimized static random access memories (SRAMs). Alternate SRAM topologies, mainly a 7-transistor SRAM, are taken as a case study throughout this dissertation. The optimized cache designs are process-voltage-temperature (PVT) tolerant and consider individual cells as well as memory arrays.
44

Izosimov, Viacheslav. "Scheduling and optimization of fault-tolerant distributed embedded systems." Doctoral thesis, Linköping : Department of Computer and Information Science, Linköping University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-51727.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Umaras, Eduardo. "Tolerâncias dimensionais em conjuntos mecânicos: estudo e proposta para otimização." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-20102010-153205/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This work covers the concepts needed for the study of dimensional tolerances in a mechanical assembly and proposes an effective method for specifying tolerances in the detailing phase of product design, by means of an optimization algorithm based on manufacturing costs. Quality loss concepts developed by Genichi Taguchi are also applied to the specification of functional constraints, which aim to assure an adequate quality level for specified values of the functional criteria. Comments on, and comparisons with, other dimensional tolerance optimization works are also made, through which specific features of the proposed method can be observed. An application example of the method is presented through a case study based on a belt power transmission system driving the ancillary equipment of an internal combustion engine. Results from the optimization algorithm are compared with those of conventional tolerance synthesis methods, showing its effectiveness.
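A minimal sketch of cost-based tolerance synthesis with a Taguchi quality-loss term, in the spirit of the method described above: manufacturing cost falls as tolerances widen, the quadratic quality loss grows, and an RSS stack-up bound acts as the functional constraint. The cost coefficients, loss constant and limit are invented.

    # Hedged sketch of cost-based tolerance synthesis with a Taguchi loss
    # term under an RSS stack-up functional constraint.
    import numpy as np
    from scipy.optimize import minimize, NonlinearConstraint

    k_cost = np.array([0.8, 1.2, 0.5])     # cost-tolerance curve coefficients
    k_loss = 400.0                         # Taguchi loss coefficient
    T_FUNC = 0.12                          # allowed RSS of the assembly

    def total_cost(t):
        manufacturing = np.sum(k_cost / t)         # cheaper when t is large
        quality_loss = k_loss * np.sum(t**2)       # Taguchi quadratic loss
        return manufacturing + quality_loss

    rss = NonlinearConstraint(lambda t: np.sqrt(np.sum(t**2)), 0.0, T_FUNC)
    res = minimize(total_cost, x0=np.full(3, 0.05),
                   bounds=[(1e-3, 0.2)]*3, constraints=[rss])
    print("optimal tolerances:", np.round(res.x, 4), "cost:", round(res.fun, 2))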
46

Schott, Jason R. (Jason Ramon). "Fault tolerant design using single and multicriteria genetic algorithm optimization." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11582.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Panhalkar, Neeraj. "Hierarchical Data Structures for Optimization of Additive Manufacturing Processes." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439310812.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Pollet, Félix. "Conception optimale de drones électriques : une approche multidisciplinaire avec analyse des incertitudes, de la tolérance aux pannes et des impacts environnementaux." Electronic Thesis or Diss., Toulouse, ISAE, 2024. http://www.theses.fr/2024ESAE0013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Unmanned aerial vehicles (UAVs) have undergone intensive development in recent years. Owing to their cost-effectiveness and versatility, UAVs are expected to gain popularity in a wide range of applications, such as parcel delivery, power line monitoring and precision farming. Concurrently, the development of new technologies and their integration into various drone concepts is expanding the range of design alternatives. This is driving the need for holistic design approaches with better technology integration, faster development time and greater modularity. The thesis develops and implements a methodology for the conceptual design of electric multirotor, fixed-wing and hybrid vertical take-off and landing (VTOL) UAVs. The framework enables the optimal sizing of a UAV from arbitrary specifications on the mission, technological choices and architecture, using a comprehensive multidisciplinary approach. Starting from a set of analytical models including scaling laws and regressions, a generic sizing methodology is developed. The proposed methodology relies on an efficient multidisciplinary design and optimization (MDO) formulation, which enables fast convergence to the UAV candidate with the best performance. In particular, this approach makes it possible to rapidly assess the effects of changes in the requirements. Next, the uncertainties surrounding the models and the availability of optimal components on the market are assessed. To mitigate critical uncertainties in UAV performance, the sizing methodology is extended to allow the design to be optimized using catalogues of existing components instead of models. Finally, the thesis develops two specific aspects of UAV design related to regulatory and societal challenges. On the one hand, recent regulations issued by the European Union Aviation Safety Agency (EASA) impose a level of safety for specific categories of UAVs. To this end, an approach is proposed to assess the controllability of various architectures in the event of rotor or control surface failures. The assessment is further linked to the design framework to achieve fault-tolerant sizing of the rotors and control surfaces. On the other hand, societal acceptance of UAVs is strongly related to environmental concerns, including but not limited to climate change and resource consumption. This challenge is addressed by developing and integrating an environmental discipline into the design framework. This novel approach makes it possible to assess the sensitivity of environmental impacts to mission requirements and technological assumptions, as well as to minimize environmental burdens at the earliest design stages. The thesis thus contributes to the development of a unified framework for optimizing the design of electric UAVs with a holistic approach. As such, it is relevant to future UAVs designed for applications subject to market, regulatory and environmental constraints.
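The core of such an MDO sizing framework can be hinted at with a fixed-point iteration in which take-off mass drives the battery and propulsion masses, which feed back into take-off mass; the regressions below are toy assumptions, not the thesis's scaling laws.

    # Hedged sketch of a multirotor sizing fixed point: take-off mass drives
    # hover power, hence battery and propulsion masses, which feed back into
    # the take-off mass. All coefficients are invented for illustration.
    M_PAYLOAD = 0.5          # kg
    T_HOVER = 15 * 60        # s of hover endurance required
    E_SPEC = 650e3           # J/kg usable battery specific energy (assumed)
    P_PER_KG = 180.0         # W/kg hover power per unit take-off mass (assumed)

    m = 1.0                                  # initial guess for take-off mass, kg
    for _ in range(100):
        p_hover = P_PER_KG * m               # propulsion discipline
        m_batt = p_hover * T_HOVER / E_SPEC  # energy discipline
        m_prop = 0.08 * p_hover / 100.0      # motors/ESC mass regression
        m_new = M_PAYLOAD + 0.25 * m + m_batt + m_prop   # structure ~ 25% of MTOM
        if abs(m_new - m) < 1e-6:
            break
        m = m_new
    print("converged take-off mass: %.2f kg" % m)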
49

Sheldon, Karl Edward. "Analysis Methods to Control Performance Variability and Costs in Turbine Engine Manufacturing." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/32290.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Few aircraft engine manufacturers are able to consistently achieve high levels of performance reliability in newly manufactured engines. Much of the variation in performance reliability is due to the combined effect of the tolerances of key engine components, including tip clearances of rotating components and flow areas in turbine nozzles. This research presents system analysis methods for determining the maximum possible tolerances of these key components that will allow a turbine engine to pass a number of specified performance constraints at a selected level of reliability. Through the combined use of a state-of-the-art engine performance code, component clearance loss models, and stochastic simulations, regions of feasible design space can be explored that allow for a pre-determined level of engine reliability. As expected, constraints such as spool speed and fuel consumption that are highly sensitive to certain component tolerances can significantly limit the feasible design space of the component in question. Methods are discussed for determining the bounds of any component's feasible design space and for selecting the most economical combinations of component tolerances. Unique to this research is the method that determines the tolerances of engine components as a system while maintaining the geometric constraints of individual components. The methods presented in this work allow any number of component tolerances to be varied or held fixed while providing solutions that satisfy all performance criteria. The algorithms presented in this research also allow for an individual specification of reliability on any number of performance parameters and geometric constraints. This work also serves as a foundation for an even larger algorithm that can include stochastic simulations and reliability prediction of an engine over its entire life cycle. By incorporating information such as time-dependent performance data, known mission profiles, and the influence of maintenance into the component models, it would be possible to predict the reliability of an engine over time. Ultimately, a time-variant simulation such as this could help predict the timing and levels of maintenance that will maximize the life of an engine at a minimum cost.
Master of Science
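The stochastic feasibility idea in this abstract can be sketched in a few lines of Python. Everything below is invented for illustration: the loss model, the interpretation of tolerances as ±3σ manufacturing scatter, and the SFC limit stand in for the engine performance code, clearance loss models and performance constraints used in the thesis.

```python
# Hedged sketch of Monte Carlo tolerance feasibility (toy loss model, not the
# thesis's engine code): sample manufacturing scatter within given tolerances
# and check a reliability target on a performance constraint.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                 # Monte Carlo sample size
RELIABILITY_TARGET = 0.95   # required fraction of engines passing
SFC_LIMIT = 1.015           # hypothetical limit: at most 1.5% SFC penalty

def pass_rate(clearance_tol_mm, area_tol_pct):
    """Fraction of simulated engines meeting the SFC constraint."""
    # Tolerances interpreted as +/-3-sigma bands of normal scatter (assumption)
    clearance = rng.normal(0.0, clearance_tol_mm / 3.0, N)
    area_dev = rng.normal(0.0, area_tol_pct / 3.0, N)
    # Toy loss model: SFC penalty grows with tip clearance and nozzle area error
    sfc = 1.0 + 0.02 * np.abs(clearance) + 0.004 * np.abs(area_dev)
    return np.mean(sfc <= SFC_LIMIT)

# Scan candidate tolerance combinations for feasibility at the target reliability
for tol_mm in (0.2, 0.4, 0.6):
    for tol_pct in (1.0, 2.0):
        r = pass_rate(tol_mm, tol_pct)
        verdict = "feasible" if r >= RELIABILITY_TARGET else "infeasible"
        print(f"clearance +/-{tol_mm} mm, area +/-{tol_pct}%: R = {r:.3f} ({verdict})")
```

Tolerance combinations whose simulated pass rate meets the reliability target map out the feasible design space; the loosest, and hence cheapest, feasible combination can then be selected.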
50

Hiscox, Briana (Briana Diane). "Analysis and optimization of a new accident tolerant fuel called fuel-in-fibers." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 64-68).
The 2011 Fukushima Daiichi accident highlighted the weakness of the current nuclear fuel and motivated R&D of accident tolerant fuels. Accident tolerant fuels (ATF) are fuels that can tolerate loss of active cooling in the core of light water reactors (LWRs) for a considerably longer period of time while maintaining or improving fuel performance during normal operations. Fully Ceramic Microencapsulated (FCM) fuel is an ATF concept aimed at significantly increasing the fission product retention capability of nuclear fuel at high temperatures. The FCM concept is made up of fuel particles surrounded by multilayers of ceramic material, similar to the TRISO fuel concept. The fuel particles are embedded in a SiC matrix in cylindrical pellet geometry, which gives the fuel its high temperature corrosion resistance. However, when implementing the FCM concept in a conventional PWR fuel geometry, it is not possible to maintain an 18-month fuel cycle length and remain below the proliferation enrichment limit of 20 w/o ²³⁵U. This is a critical challenge that needs to be overcome in order to benefit from the high temperature fission product retention capability of FCM-type ATF concepts. Therefore, this work investigates the potential benefits of a new accident tolerant fuel, the Fuel-in-Fibers (F-in-F) concept. The Fuel-in-Fibers concept was created by Free Form Fibers, a laser chemical vapor deposition direct manufacturing company. It aims to combine the robust fission product retention and high temperature stability of the FCM fuel concept while drastically decreasing the necessary fuel enrichment. This is done by designing a fuel fiber in cylindrical geometry, as opposed to spherical particles, to increase the packing fraction within a cylindrical pellet. The direct manufacturing allows for minimization of the volume occupied by the SiC matrix as well as direct deposition of high-density fuels like uranium nitride (UN). Assembly-level calculations in the Monte Carlo code SERPENT determined that the Fuel-in-Fibers concept could maintain a typical PWR cycle length with less than 20 w/o ²³⁵U (LEU) enrichment. The fibers in the fuel pellet were then homogenized for use in the lattice physics code CASMO and the core simulator SIMULATE-3. The SIMULATE-3 full-core simulation showed that the Fuel-in-Fibers design required enrichments of 8% and 6% for UO₂ and UN fuels, respectively. Overall, the full core analysis of a standard 4-loop Westinghouse PWR showed that the Fuel-in-Fibers concept behaves similarly to conventional fuel. Due to the high fissile enrichments, the calculated radial power peaking factors were higher for the Fuel-in-Fibers concept; this may require decreasing the coolant outlet temperature by 5 K in order to maintain safety margins. The shutdown margin analysis showed that B₄C control rods are needed instead of AgInCd. A design optimization was also performed to calculate the ideal geometry for the Fuel-in-Fibers concept. An in-house MATLAB single-channel code, built to evaluate PWR thermal-hydraulic and structural performance, was used to vary the fuel pin pitch and pitch-to-diameter (P/D) ratio. The results showed that a smaller pitch and larger diameter of 13.2 mm and 12 mm, respectively, will improve the Fuel-in-Fibers concept's enrichment requirements. A simplified economic analysis, based on highly uncertain fabrication cost estimates, determined that the Fuel-in-Fibers design is estimated to cost 1.25 to 15 times more than current UO₂ fuel due to the increased enrichment and fabrication costs, but this may be offset by the additional safety margins provided by the concept.
by Briana Hiscox.
S.M.
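The geometric argument behind the Fuel-in-Fibers concept, namely that aligned cylindrical fibers carry more fuel per unit pellet volume than coated spherical particles, can be checked with a back-of-the-envelope calculation. The dimensions and packing limits below are assumptions chosen for illustration, not the actual fiber or TRISO geometry from the thesis.

```python
# Back-of-the-envelope fuel volume fraction: coated spheres vs. coated fibers
# (all dimensions and packing limits are illustrative assumptions).
import math

# Spherical TRISO-like particle: fuel kernel inside coating layers
r_kernel, r_particle = 0.25, 0.42   # mm, hypothetical radii
sphere_packing = 0.45               # assumed random-packing limit of particles
fuel_frac_spheres = sphere_packing * (r_kernel / r_particle) ** 3

# Cylindrical fiber: fuel core inside a thin SiC sheath, hexagonally packed
r_core, r_fiber = 0.25, 0.30                 # mm, hypothetical radii
hex_packing = math.pi / (2 * math.sqrt(3))   # ~0.907, ideal parallel-cylinder limit
fuel_frac_fibers = hex_packing * (r_core / r_fiber) ** 2

print(f"fuel volume fraction, spheres: {fuel_frac_spheres:.2f}")  # ~0.09
print(f"fuel volume fraction, fibers:  {fuel_frac_fibers:.2f}")   # ~0.63
```

With these illustrative numbers the fibers hold several times more fuel per pellet volume, which is the lever that lets the concept reach a full PWR cycle length at LEU enrichments.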
