
Dissertations / Theses on the topic 'Surrogate methods'


1

Conradie, Tanja. "Modelling of nonlinear dynamic systems : using surrogate data methods." Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51834.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2000.
ENGLISH ABSTRACT: This study examined nonlinear modelling techniques as applied to dynamic systems, paying specific attention to the Method of Surrogate Data and its possibilities. Within the field of nonlinear modelling, we examined the following areas of study: attractor reconstruction, general model building techniques, cost functions, description length, and a specific modelling methodology. The Method of Surrogate Data was initially applied in a more conventional application, i.e. testing a time series for nonlinear, dynamic structure. Thereafter, it was used in a less conventional application, i.e. testing the residual vectors of a nonlinear model for membership of identically and independently distributed (i.i.d.) noise. The importance of the initial surrogate analysis of a time series (determining whether the apparent structure of the time series is due to nonlinear, possibly chaotic behaviour) was illustrated. This study confirmed that omitting this crucial step could lead to a flawed conclusion. If evidence of nonlinear structure in the time series was identified, a radial basis model was constructed, using sophisticated software based on a specific modelling methodology. The model is built by an iterative algorithm using minimum description length as the stop criterion. The residual vectors of the models generated by the algorithm were tested for membership of the dynamic class described as i.i.d. noise. The results of this surrogate analysis illustrated that, as the model captures more of the underlying dynamics of the system (description length decreases), the residual vector comes to resemble i.i.d. noise. It also verified that the minimum description length criterion leads to models that capture the underlying dynamics of the time series, with the residual vector resembling i.i.d. noise. In the case of the "worst" model (largest description length), the residual vector could be distinguished from i.i.d. noise, confirming that it is not the "best" model.
The residual vector of the "best" model (smallest description length) resembled i.i.d. noise, confirming that the minimum description length criterion selects a model that captures the underlying dynamics of the time series. These applications were illustrated through analysis and modelling of three time series: a time series generated by the Lorenz equations, a time series generated from an electroencephalographic (EEG) signal, and a series representing the percentage change in the daily closing price of the S&P500 index.
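The surrogate-data test at the heart of this thesis can be illustrated with a minimal sketch: build phase-randomized (Fourier-transform) surrogates that preserve a series' linear correlations, then compare a nonlinearity statistic between the original series and its surrogates. Everything below (the logistic-map series, the time-reversal-asymmetry statistic, the 99-surrogate count) is an illustrative assumption, not the thesis's actual data or software:

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Fourier-transform surrogate: keep the amplitude spectrum,
    randomize the phases, so linear correlations are preserved."""
    n = len(x)
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                      # keep the mean
    if n % 2 == 0:
        phases[-1] = 0.0                 # Nyquist bin must stay real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n)

def nonlinear_stat(x):
    """Simple nonlinearity statistic: time-reversal asymmetry."""
    return np.mean((x[1:] - x[:-1]) ** 3)

rng = np.random.default_rng(0)
# A nonlinear (chaotic logistic map) series as the test subject
x = np.empty(1000)
x[0] = 0.4
for i in range(1, 1000):
    x[i] = 3.9 * x[i - 1] * (1 - x[i - 1])

t0 = abs(nonlinear_stat(x))
surrogate_stats = [abs(nonlinear_stat(phase_randomized_surrogate(x, rng)))
                   for _ in range(99)]
# Reject the linear-stochastic null if the original statistic is extreme
p_rank = sum(s >= t0 for s in surrogate_stats)
print("original statistic:", t0, "| surrogates exceeding it:", p_rank)
```

The same machinery applies to the thesis's second, less conventional use: feed a model's residual vector into the test and ask whether it can be distinguished from i.i.d. noise.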
2

Asritha, Kotha Sri Lakshmi Kamakshi. "Comparing Random forest and Kriging Methods for Surrogate Modeling." Thesis, Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20230.

Full text
Abstract:
The issue with conducting real experiments in design engineering is the cost of finding an optimal design that fulfils all design requirements and constraints. An alternative to real experiments is computer-aided design modelling combined with computer-simulated experiments. These simulations are conducted to understand functional behaviour and to predict possible failure modes in design concepts. However, a single simulation may take minutes, hours, or days to finish. In order to reduce the time consumption and the number of simulations required for design space exploration, surrogate modelling is used.

The motive of surrogate modelling is to replace the original system with an approximation of the simulation response that can be computed quickly. The process of surrogate model generation includes sample selection, model generation, and model evaluation. Using surrogate models in design engineering can help reduce design cycle times and cost by enabling rapid analysis of alternative designs.

Selecting a suitable surrogate modelling method for a given function with specific requirements is possible by comparing different surrogate modelling methods. These methods can be compared using different application problems and evaluation metrics. In this thesis, we compare the random forest model and the kriging model based on prediction accuracy. The comparison is performed using mathematical test functions, and quantitative experiments are conducted to investigate the performance of both methods. The experimental analysis shows that the kriging models achieve higher accuracy than the random forests, while the random forest models have lower execution time than kriging for the studied mathematical test problems.
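The kriging half of such a comparison can be sketched in a few lines: ordinary kriging (Gaussian process interpolation with a Gaussian correlation model) fitted to samples of a one-dimensional test function. The test function, length scale, and sample counts below are assumptions for illustration; a random forest regressor (e.g. from a library such as scikit-learn) would be fitted to the same samples to complete the comparison:

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Gaussian (squared-exponential) correlation between 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def kriging_fit_predict(x_train, y_train, x_test, nugget=1e-8):
    """Simple kriging predictor: solve for weights, then interpolate."""
    K = rbf(x_train, x_train) + nugget * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf(x_test, x_train) @ alpha

f = lambda x: np.sin(3 * x) + 0.5 * x          # toy "mathematical test function"
x_train = np.linspace(0, 2, 12)
x_test = np.linspace(0, 2, 101)
y_pred = kriging_fit_predict(x_train, f(x_train), x_test)
rmse = np.sqrt(np.mean((y_pred - f(x_test)) ** 2))
print(f"kriging RMSE on the toy test function: {rmse:.4f}")
```

Prediction accuracy (here RMSE against the known function) is exactly the kind of metric the thesis uses to rank the two surrogate types.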
3

Kamath, Atul Krishna. "Surrogate-assisted optimisation-based verification & validation." Thesis, University of Exeter, 2014. http://hdl.handle.net/10871/15637.

Full text
Abstract:
This thesis deals with the application of optimisation-based Validation and Verification (V&V) analysis to aerospace vehicles in order to determine their worst case performance metrics. To this end, three aerospace models relating to satellite and launcher vehicles, provided by the European Space Agency (ESA) on various projects, are utilised. As a means to quicken the process of optimisation-based V&V analysis, surrogate models are developed using the polynomial chaos method. Surrogate models provide a quick way to ascertain the worst case directions, as the computation time required for evaluating them is very small: a single evaluation of a surrogate model takes less than a second. Another contribution of this thesis is the evaluation of the operational safety margin metric with the help of surrogate models. The operational safety margin is a metric defined in the uncertain parameter space and is related to the distance between the nominal parameter value and the first instance of performance criteria violation. This metric can help to gauge the robustness of the controller, but it requires the evaluation of the model in the constraint function and hence could be computationally intensive. As surrogate models are computationally very cheap, they are utilised to rapidly compute the operational safety margin metric. This metric, however, focuses only on finding a safe region around the nominal parameter value; the possibility of other disjoint safe regions is not explored. In order to find other safe or failure regions in the parameter space, the Bernstein expansion method is applied to the surrogate polynomial models to help characterise the uncertain parameter space into safe and failure regions. Furthermore, binomial failure analysis is used to assign failure probabilities to failure regions, which might help the designer to determine whether a re-design of the controller is required.
The methodologies of optimisation-based V&V, surrogate modelling, the operational safety margin, the Bernstein expansion method, and risk assessment have been combined to form the WCAT-II MATLAB toolbox.
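The appeal of surrogate-based worst-case search can be sketched as follows, with a cheap stand-in for the expensive simulation and a plain cubic polynomial surrogate (both hypothetical; the thesis builds polynomial chaos surrogates of ESA vehicle models):

```python
import numpy as np

# Stand-in for an expensive simulation: a performance metric as a function
# of one uncertain parameter p (hypothetical, for illustration only).
def expensive_metric(p):
    return 1.0 + 0.8 * p - 1.5 * p**2 + 0.3 * p**3

# Build a cheap polynomial surrogate from a handful of exact samples
p_samples = np.linspace(-1, 1, 7)
coeffs = np.polyfit(p_samples, expensive_metric(p_samples), deg=3)
surrogate = np.poly1d(coeffs)

# Worst-case search is now cheap: densely evaluate the surrogate and take
# the parameter value that maximises the (undesirable) metric
p_grid = np.linspace(-1, 1, 10001)
worst_p = p_grid[np.argmax(surrogate(p_grid))]
print("worst-case parameter found on the surrogate:", worst_p)
```

Each surrogate evaluation is a polynomial evaluation, which is why a dense sweep over the uncertain parameter space (and, in the thesis, Bernstein-expansion bounding of the same polynomials) becomes affordable.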
4

Heap, Ryan C. "Real-Time Visualization of Finite Element Models Using Surrogate Modeling Methods." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/6536.

Full text
Abstract:
Finite element analysis (FEA) software is used to obtain linear and non-linear solutions to one, two, and three-dimensional (3-D) geometric problems that will see a particular load and constraint case when put into service. Parametric FEA models are commonly used in iterative design processes in order to obtain an optimum model given a set of loads, constraints, objectives, and design parameters to vary. In some instances it is desirable for a designer to obtain some intuition about how changes in design parameters affect the FEA solution of interest before simply sending the model through the optimization loop. This could be accomplished by running the FEA on the parametric model for a set of part family members, but doing so can be very time-consuming and only gives snapshots of the model's real behavior. The purpose of this thesis is to investigate a method of visualizing the FEA solution of the parametric model as design parameters are changed in real-time by approximating the FEA solution using surrogate modeling methods. The tools this research utilizes are parametric FEA modeling, surrogate modeling methods, and visualization methods. A parametric FEA model can be developed that includes mesh morphing algorithms that allow the mesh to change parametrically along with the model geometry. This allows the surrogate models assigned to each individual node to use the nodal solutions of multiple finite element analyses as regression points to approximate the FEA solution. The surrogate models can then be mapped to their respective geometric locations in real-time. Solution contours display the results of the FEA calculations and are updated in real-time as the parameters of the design model change.
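The per-node surrogate idea can be sketched with synthetic data: a few "full" runs at sampled design parameters provide regression points, and a small polynomial fitted per node then yields a contour-ready field at interactive rates. The node count, parameter samples, and stand-in "FEA" response below are all illustrative assumptions:

```python
import numpy as np

n_nodes = 50
params = np.array([0.5, 1.0, 1.5, 2.0, 2.5])      # sampled design parameter values

rng = np.random.default_rng(1)
base = rng.uniform(1.0, 2.0, n_nodes)
# Stand-in for FEA nodal solutions: here stress scales quadratically with p
fea_runs = np.array([base * (1 + 0.4 * p + 0.1 * p**2) for p in params])

# Fit a degree-2 polynomial surrogate per node, vectorised over all nodes
V = np.vander(params, 3)                          # columns: p^2, p, 1
coeffs, *_ = np.linalg.lstsq(V, fea_runs, rcond=None)

def nodal_field(p):
    """Real-time surrogate evaluation: one value per node for any p."""
    return (np.vander([p], 3) @ coeffs)[0]

approx = nodal_field(1.25)
exact = base * (1 + 0.4 * 1.25 + 0.1 * 1.25**2)
print("max relative error at p=1.25:", np.max(np.abs(approx / exact - 1)))
```

The returned field would be mapped onto the morphed mesh and rendered as a solution contour; only the cheap polynomial evaluation happens per slider movement, not an FEA solve.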
5

Lee, Chang-Hwa 1957. "Analysis of approaches to synchronous faults simulation by surrogate propagation." Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276771.

Full text
Abstract:
This thesis describes a new simulation technique, Synchronous Faults Simulation by Surrogate with Exception (SFSSE), first proposed by Dr. F. J. Hill and initiated under the direction of Xiolin Wang; this paper reports early results of that project. The Sequential Circuit Test Sequence System (SCIRTSS), an automatic test generation system developed at the University of Arizona, is used as the baseline against which the results of the new simulator are compared. The major objective of this research is to analyze the results obtained using the new simulator SFSSE against those obtained using the parallel simulator in SCIRTSS. The results listed in this paper verify the superiority of the new simulation technique.
6

Shashidhar, Akhil. "Generalized Volterra-Wiener and surrogate data methods for complex time series analysis." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/41619.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (leaves 133-150).
This thesis describes the current state-of-the-art in nonlinear time series analysis, bringing together approaches from a broad range of disciplines including nonlinear dynamical systems, nonlinear modeling theory, time-series hypothesis testing, information theory, and self-similarity. We stress mathematical and qualitative relationships between key algorithms in the respective disciplines in addition to describing new robust approaches to solving classically intractable problems. Part I presents a comprehensive review of various classical approaches to time series analysis from both deterministic and stochastic points of view. We focus on using these classical methods for quantification of complexity, in addition to proposing a unified approach to complexity quantification that encapsulates several previous approaches. Part II presents robust modern tools for time series analysis including surrogate data and Volterra-Wiener modeling. We describe new algorithms that converge the two approaches, providing both a sensitive test for nonlinear dynamics and a noise-robust metric of chaos intensity.
by Akhil Shashidhar.
M.Eng.
7

Bilicz, Sandor. "Application of Design-of-Experiment Methods and Surrogate Models in Electromagnetic Nondestructive Evaluation." PhD thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00601753.

Full text
Abstract:
Electromagnetic nondestructive evaluation (ENDE) is applied in a variety of domains to explore hidden defects affecting structures. In general terms, the principle can be stated as follows: an unknown object perturbs a given host medium illuminated by a known electromagnetic signal, and the response is measured at one or more receivers at known positions. This response contains information about the electromagnetic and geometric parameters of the objects of interest, and the whole difficulty of the problem treated here lies in extracting this information from the measured signal. Better known as an "inverse problem", this work relies on an appropriate solution of Maxwell's equations. The "inverse problem" is often paired with the complementary "forward problem", which consists of determining the perturbed electromagnetic field given the full set of geometric and electromagnetic parameters of the configuration, defect included. In practice, this is done via mathematical modelling and numerical methods for solving such problems. The corresponding simulators can deliver highly accurate results, but at a significant computational cost. Since solving an inverse problem typically requires a large number of successive forward solutions, the inversion becomes very demanding in terms of computation time and computing resources. To overcome these challenges, "surrogate models" that imitate the exact model can be an attractive alternative. One way to build such surrogate models is to run a certain number of exact simulations and then approximate the model from the data obtained. The choice of simulations ("prototypes") is normally controlled by a strategy drawn from the toolbox of "design of numerical experiments" methods.
In this thesis, the use of surrogate modelling and design-of-experiment techniques for ENDE applications is examined. Three independent approaches are presented in detail: an inversion method based on the optimisation of an objective function, and two more general approaches for building surrogate models using adaptive sampling. The approaches proposed in this thesis are applied to eddy-current NDE examples.
8

Peesapati, Lakshmi Narasimham. "Methods To evaluate the effectiveness of certain surrogate measures to assess safety of opposing left-turn interactions." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52324.

Full text
Abstract:
Highway safety evaluation has traditionally been performed using crash data. However, crash-data-based safety analysis has limitations in terms of timeliness and efficiency. Previous studies show that the use of surrogate safety data allows for earlier evaluation of safety in comparison to the significantly longer time horizon required for collecting crash data. However, the predictive capability of surrogate measures is an area of ongoing research, and previous studies have often produced inconsistent findings on the relationship between surrogates and crashes, one of the primary reasons being inconsistent definitions of a conflict. This study evaluated the effectiveness of certain surrogate measures (acceleration-deceleration profile, intersection entering speed of through vehicles, and Post Encroachment Time (PET)) in assessing the safety of opposing left-turn interactions at 4-legged signalized intersections through collection of time-resolved video from eighteen selected intersections throughout Georgia. Overall, this research demonstrated that surrogate measures can be effective in safety evaluation, specifically demonstrating the use of PET as a surrogate for crashes between left-turning vehicles and opposing through vehicles. The analysis found that the selected surrogate threshold is critical to the effectiveness of any surrogate measure. For example, the required PET threshold was found to be as low as 1 second to identify high crash intersections, significantly lower than the commonly reported 3-second threshold. Non-parametric rank analysis methods and generalized linear modeling techniques were used to model PET with other intersection and traffic characteristics to demonstrate the degree to which these surrogates can be used to identify potential high-crash intersections without resorting to a crash history.
Finally, the effectiveness of PET and its usefulness to decision makers has also been demonstrated through an example that helped uncover errors in reported crash data.
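A PET calculation is simple enough to state directly; the sketch below uses hypothetical timestamps together with the 1-second threshold the study reports (in practice the timestamps come from time-resolved video of the conflict area):

```python
def post_encroachment_time(t_first_exits, t_second_enters):
    """PET: gap between the first vehicle leaving the conflict area
    and the second vehicle entering it (seconds)."""
    return t_second_enters - t_first_exits

# Hypothetical event: a left-turning vehicle clears the conflict point at
# t = 12.4 s and the opposing through vehicle arrives at t = 13.1 s.
pet = post_encroachment_time(12.4, 13.1)
threshold = 1.0          # the low threshold the study found effective
flag = "conflict" if pet <= threshold else "no conflict"
print(f"PET = {pet:.1f} s -> {flag}")
```

A smaller PET means a narrower miss, so lowering the threshold from the commonly used 3 seconds to 1 second makes the surrogate far more selective about which interactions count as conflicts.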
9

Thomas, Sarah Nichole. "Decisions to Seek and Share: A Mixed Methods Approach to Understanding Caregivers Surrogate Information Acquisition Behaviors." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1595545894518707.

Full text
10

Isaacs, Amitay. "Development of optimization methods to solve computationally expensive problems." Awarded by: University of New South Wales - Australian Defence Force Academy, Engineering & Information Technology, 2009. http://handle.unsw.edu.au/1959.4/43758.

Full text
Abstract:
Evolutionary algorithms (EAs) are population based heuristic optimization methods used to solve single and multi-objective optimization problems. They can simultaneously search multiple regions to find global optimum solutions. As EAs do not require gradient information for the search, they can be applied to optimization problems involving functions of real, integer, or discrete variables. One of the drawbacks of EAs is that they require evaluations of numerous candidate solutions for convergence. Most real life engineering design optimization problems involve highly nonlinear objective and constraint functions arising out of computationally expensive simulations. For such problems, the computation cost of optimization using EAs can become quite prohibitive. This has stimulated the research into improving the efficiency of EAs reported herein. In this thesis, two major improvements are suggested for EAs. The first is the use of spatial surrogate models to replace the expensive simulations for the evaluation of candidate solutions; the other is a novel constraint handling technique. These modifications to EAs are tested on a number of numerical benchmarks and engineering examples using a fixed number of evaluations, and the results are compared with the basic EA. In addition, the spatial surrogates are used in a truss design application. A generic framework for spatial surrogate modeling is proposed. Multiple types of surrogate models are used for better approximation performance, and a prediction-accuracy-based validation is used to ensure that the approximations do not misguide the evolutionary search. Two EAs are proposed using spatial surrogate models for evaluation and evolution. For numerical benchmarks, the spatial surrogate assisted EAs obtain significantly better (even orders of magnitude better) results than the basic EA, and on average 5-20% improvements in the objective value are observed for the engineering examples.
Most EAs use constraint handling schemes that prefer feasible solutions over infeasible solutions. In the proposed infeasibility driven evolutionary algorithm (IDEA), a few infeasible solutions are maintained in the population to augment the evolutionary search through the infeasible regions along with the feasible regions to accelerate convergence. The studies on single and multi-objective test problems demonstrate the faster convergence of IDEA over EA. In addition, the infeasible solutions in the population can be used for trade-off studies. Finally, a discrete structures optimization (DSO) algorithm is proposed for sizing and topology optimization of trusses. In DSO, topology optimization and sizing optimization are separated to speed up the search for the optimum design. The optimum topology is identified using a strain-energy-based material removal procedure. The topology optimization process correctly identifies the optimum topology for 2-D and 3-D trusses using fewer than 200 function evaluations. The sizing optimization is performed later to find the optimum cross-sectional areas of the structural elements. In surrogate assisted DSO (SDSO), spatial surrogates are used to accelerate the sizing optimization. The truss designs obtained using SDSO are very close (within 7% of the weight) to the best reported in the literature using only a fraction of the function evaluations (less than 7%).
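The general surrogate-assisted idea (though not the thesis's specific algorithms) can be sketched as an evolutionary loop in which offspring are pre-screened on a cheap quadratic surrogate fitted to an archive of past evaluations, so that only the most promising candidate is evaluated exactly. The toy objective, archive threshold, and all numbers below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive(x):                      # stand-in for a costly simulation
    return np.sum((x - 0.3) ** 2, axis=-1)

dim, lam = 2, 20
parent = rng.uniform(-1, 1, dim)
archive_x = [parent.copy()]
archive_y = [expensive(parent)]

for gen in range(30):
    offspring = parent + 0.2 * rng.standard_normal((lam, dim))
    if len(archive_y) >= 8:
        # Quadratic surrogate via least squares on the evaluation archive
        feats = lambda A: np.hstack([A**2, A, np.ones((len(A), 1))])
        X, y = np.array(archive_x), np.array(archive_y)
        w, *_ = np.linalg.lstsq(feats(X), y, rcond=None)
        best = offspring[np.argmin(feats(offspring) @ w)]   # pre-screen
    else:
        best = offspring[0]            # not enough data for a surrogate yet
    if expensive(best) < expensive(parent):
        parent = best                  # exact evaluation guards the search
    archive_x.append(best.copy())
    archive_y.append(expensive(best))

print("best point:", parent, "objective:", expensive(parent))
```

Only one exact evaluation per generation is spent on the surrogate's pick, which is the efficiency gain the thesis targets; the exact-evaluation guard plays the validation role of ensuring the approximation cannot misguide the search.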
11

Bon, Joshua J. "Advances in sequential Monte Carlo methods." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/235897/1/Joshua%2BBon%2BThesis%284%29.pdf.

Full text
Abstract:
Estimating parameters of complex statistical models and their uncertainty from data is a challenging task in statistics and data science. This thesis developed novel statistical algorithms for efficiently performing statistical estimation, established the validity of these algorithms, and explored their properties with mathematical analysis. The new algorithms and their associated analysis are significant since they permit principled and robust fitting of statistical models that were previously intractable and will thus facilitate new scientific discoveries.
12

Berggren, Mathias, and Daniel Sonesson. "Design Optimization in Gas Turbines using Machine Learning : A study performed for Siemens Energy AB." Thesis, Linköpings universitet, Programvara och system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-173920.

Full text
Abstract:
In this thesis, the authors investigate how machine learning can be utilized to speed up the design optimization process of gas turbines. The Finite Element Analysis (FEA) steps of the design process are examined to determine whether they can be replaced with machine learning algorithms. The study is done using a component with given constraints provided by Siemens Energy AB. With this component, two approaches to using machine learning are tested. One utilizes design parameters, i.e. raw floating-point numbers such as the height and width. The other technique uses a high-dimensional mesh as input. It is concluded that using design parameters with surrogate models is a viable way of performing design optimization, while mesh input currently is not. Results from using different amounts of data samples are presented and evaluated.
13

Drzisga, Daniel. "Accelerating Isogeometric Analysis and Matrix-free Finite Element Methods Using the Surrogate Matrix Methodology." Supervisor: Barbara Wohlmuth; examiners: Matthias Möller, Barbara Wohlmuth, Giancarlo Sangalli. München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/122693434X/34.

Full text
14

Nowak Vila, Alex. "Structured prediction with theoretical guarantees." Electronic Thesis or Diss., Université Paris sciences et lettres, 2021. http://www.theses.fr/2021UPSLE059.

Full text
Abstract:
Classification is the branch of supervised learning that aims at estimating a discrete valued mapping from data made of input-output pairs. The most classical and well studied setting is binary classification, where the discrete predictor takes zero or one as value. However, most practical classification settings deal with large structured output spaces such as sequences, grids, graphs, permutations, matchings, etc. There are many fundamental differences between structured prediction and vanilla binary or multi-class classification, such as the exponentially large size of the output space with respect to the natural dimension of the output objects and the cost-sensitive nature of the learning task. This thesis focuses on surrogate methods for structured prediction, whereby the typically intractable discrete problem is approached using a convex continuous surrogate problem which, in turn, can be addressed using techniques from regression.
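The surrogate recipe can be sketched on a toy problem with three "structures": a convex least-squares (ridge) surrogate is fitted from inputs to output encodings, and prediction decodes by maximising the learned score over the (here tiny) output space. The data, prototype directions, and encodings below are illustrative assumptions, not the thesis's constructions:

```python
import numpy as np

rng = np.random.default_rng(3)
outputs = np.eye(3)                    # one-hot encodings of 3 "structures"

# Synthetic data whose true label is the structure best aligned with x
X = rng.standard_normal((300, 2))
prototypes = np.array([[1.0, 0.0], [-0.5, 0.9], [-0.5, -0.9]])
labels = (X @ prototypes.T).argmax(axis=1)
Y = outputs[labels]

# Convex surrogate step: ridge regression onto the output encodings,
# replacing the intractable discrete loss by a continuous one
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ Y)

# Decoding step: pick the structure whose encoding maximises the score
scores = X @ W
pred = scores.argmax(axis=1)
print("training accuracy after decoding:", (pred == labels).mean())
```

With a genuinely combinatorial output space the decoding argmax becomes a structured optimisation problem (e.g. dynamic programming over sequences), but the two-step pattern of regression followed by decoding is the same.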
15

Bamdad, Masouleh Keivan. "Building energy optimisation using machine learning and metaheuristic algorithms." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/120281/1/Keivan_Bamdad%20Masouleh_Thesis.pdf.

Full text
Abstract:
The focus of this research is on the development of new methods for Building Optimisation Problems (BOPs) and deploying them on realistic case studies to evaluate their performance and utility. First, a new optimisation algorithm based on Ant Colony Optimisation was developed for simulation-based optimisation. Secondly, a new surrogate-model optimisation method was developed using active learning approaches to accelerate the optimisation process. Both proposed methods demonstrated better performance than benchmark methods. Finally, a multi-objective scenario-based optimisation was introduced to address uncertainty in BOPs. Results demonstrated the capability of the proposed uncertainty methodology to find a robust design.
16

Laughlin, Trevor William. "A parametric and physics-based approach to structural weight estimation of the hybrid wing body aircraft." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45829.

Full text
Abstract:
Estimating the structural weight of a Hybrid Wing Body (HWB) aircraft during conceptual design has proven to be a significant challenge due to its unconventional configuration. Aircraft structural weight estimation is critical during the early phases of design because inaccurate estimations could result in costly design changes or jeopardize the mission requirements and thus degrade the concept's overall viability. The tools and methods typically employed for this task are inadequate since they are derived from historical data generated by decades of tube-and-wing style construction. In addition to the limited applicability of these empirical models, the conceptual design phase requires that any new tools and methods be flexible enough to enable design space exploration without consuming a significant amount of time and computational resources. This thesis addresses these challenges by developing a parametric and physics-based modeling and simulation (M&S) environment for the purpose of HWB structural weight estimation. The tools in the M&S environment are selected based on their ability to represent the unique HWB geometry and model the physical phenomena present in the centerbody section. The new M&S environment is used to identify key design parameters that significantly contribute to the variability of the HWB centerbody structural weight and also used to generate surrogate models. These surrogate models can augment traditional aircraft sizing routines and provide improved structural weight estimations.
17

Forsman, Niclas. "Method for quality assurance of mine-surrogates." Thesis, KTH, Maskinkonstruktion (Inst.), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168744.

Full text
Abstract:
One of the projects run by the Swedish Defence Materiel Administration (FMV) aims to quality-assure the surrogate mines used to evaluate the mine protection of combat vehicles. The surrogate mines are cast in TNT by the company Nammo LIAB and are then used in a test process regulated by the standard STANAG 4569 (Edition 2) Protection Levels for Occupants of Armoured Vehicles and AEP55 Procedures for evaluating the protection level of armoured vehicles. The test process is very expensive, and it is therefore of great importance to minimise uncertainties in the mines' effect. No standard for the quality of the mines exists today. Work towards such a standard proceeds in two steps: first, ensuring traceability in manufacturing and standardising its allowed variations; second, verifying quality with a method in which samples from deliveries can be test-detonated to assure the quality of the batch. The purpose of this report is to develop a method for step two: verifying the charges. After a background study covering the fundamentals of blast effects, a QFD matrix was created in which the requirements on the method were set against various technical properties, yielding guideline values for continued idea generation. A brainstorming process then generated four concepts, which were compared against each other in a Pugh matrix. After a number of design assumptions, the winning concept was modelled in the FEM program ANSYS, where a number of design parameters were examined with respect to stresses, deformations and vibrations. The safety factor for dimensioning the materials of the components was obtained using Pugsley's method. Weaknesses in the design were identified, and the modifications necessary for the concept to be realised are presented.
The Swedish defense administration (FMV) is working on a project with the goal of a quality assurance method for the surrogate mines used in evaluating the mine protection level of armoured vehicles on behalf of customers. The mines are molded in TNT by Nammo LIAB and are tested according to the standard STANAG 4569 (Edition 2) Protection Levels for Occupants of Armoured Vehicles and the related document AEP55 Procedures for evaluating the protection level of armoured vehicles. This is an expensive process that needs to produce repeatable results, something that could be achieved in two steps. The first is to obtain traceability in the manufacturing process and to standardize the variations allowed in it. Step two is to be able to employ a verification method in which samples out of delivery batches can be tested to quality-assure the batch. The purpose of this thesis is to develop and evaluate a method for step two. After a background study in which the fundamentals of the explosive process were examined, a QFD matrix was created in which the demands from FMV were set against various technical properties of the method. The QFD generated design guidelines that aided a brainstorming process in which four different concepts were generated. These concepts were then put against each other in a Pugh matrix. The winning concept was then modeled in the FEM program ANSYS, where a number of design parameters were examined with respect to stresses, deformations and vibrations. The safety factor for dimensioning the material of the components was obtained with the help of Pugsley's method. Weaknesses in the design were identified, and the modifications necessary for the concept to be realized were presented.
APA, Harvard, Vancouver, ISO, and other styles
18

Lamba, Nishtha. "Psychological well-being, maternal-foetal bonding and experiences of Indian surrogates." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/271335.

Full text
Abstract:
Over the past two decades, India has become an international hub of cross-border surrogacy. The extreme economic and cultural differences between international couples seeking surrogacy and the surrogates themselves, clinics compromising the health of surrogates for profit, the stigmatisation of surrogacy in India, and the constant surveillance of these women living in a ‘surrogate house’ have raised concerns regarding the potentially negative psychological impact of surrogacy on Indian surrogates. The primary aims of the thesis were (i) to conduct a longitudinal assessment of surrogates’ psychological problems (anxiety, depression and stress) from pregnancy until several months after relinquishing the baby to the intended parents, (ii) to examine the nature of the bond formed between surrogates and the unborn baby and establish whether this prenatal bond contributes to their psychological problems, and (iii) to explore the experiences of surrogates during and after surrogacy. Fifty surrogates were compared with a matched group of 69 expectant mothers during pregnancy. Of these, 45 surrogates and 49 comparison mothers were followed up 4-6 months after the birth. All surrogates were hosting pregnancies for international intended parents and had at least one child of their own. Data were obtained using standardised questionnaires and in-depth interviews and were analysed using quantitative and qualitative methods. Indian surrogates were found to be more depressed than the comparison group of mothers, both during pregnancy and after the birth. However, giving up the newborn did not appear to add to surrogates’ levels of depression. There were no differences between the surrogates and the expectant mothers in anxiety or stress during either phase of the study. 
The examination of risk factors for psychological problems among the surrogates showed that anticipation of stigma, experiences of social humiliation and receiving insufficient support during pregnancy were associated with higher levels of depression following the birth. With respect to bonding with the unborn child, surrogates experienced lower levels of emotional bonding (e.g. they interacted less with, and wondered less about, the foetus), but exhibited higher levels of instrumental bonding (e.g. they adopted better eating habits and avoided unhealthy practices during pregnancy), than women who were carrying their own babies. Contrary to concerns, greater bonding with the unborn child was not associated with increased psychological problems post-relinquishment. All surrogates were able to give up the child. Meeting the intended parents after the birth positively contributed towards surrogates’ satisfaction with relinquishment whereas meeting the baby did not. The qualitative findings on surrogates’ experiences showed that the majority lacked basic medical information regarding surrogacy pregnancy; hid surrogacy from most people; felt positive and supported at the surrogate house; lived in uncertainty regarding whether or not they would be allowed to meet the intended parents and the baby; and did not actually get to meet them. These findings have important implications for policy and practice on surrogacy in the Global South.
APA, Harvard, Vancouver, ISO, and other styles
19

Lelièvre, Nicolas. "Développement des méthodes AK pour l'analyse de fiabilité. Focus sur les évènements rares et la grande dimension." Thesis, Université Clermont Auvergne‎ (2017-2020), 2018. http://www.theses.fr/2018CLFAC045/document.

Full text
Abstract:
Engineers increasingly use numerical models to reduce the physical experiments needed to design new products. With the growth of computing power, these models are ever more complex and computationally expensive for a better representation of reality. Real mechanical problems are in practice subject to uncertainties, which can cause difficulties when admissible and/or optimal design solutions are sought. Reliability is a useful measure of the risk of failure of the designed product due to uncertainties. Estimating the reliability measure, the probability of failure, requires a large number of calls to the expensive models and therefore becomes unusable in practice. To overcome this problem, metamodelling is used here, in particular the AK methods, which construct a mathematical model representative of the expensive model with a much lower evaluation time. The first objective of this thesis work is to discuss the mathematical formulations of design problems under uncertainty. This formulation is a crucial point in the design of new products since it determines how the results obtained are to be understood. A definition of the two concepts of reliability and robustness is also proposed. This work led to a publication in the international journal Structural and Multidisciplinary Optimization (Lelièvre, et al. 2016). The second objective is to propose a new AK method for estimating probabilities of failure associated with rare events. 
This new method, named AK-MCSi, offers three improvements over the AK-MCS method: sequential Monte Carlo simulations to reduce the metamodel evaluation time, a new, stricter learning stopping criterion to ensure the correct classification of the Monte Carlo population, and a multipoint enrichment enabling parallelisation of the expensive model computations. This work was published in the journal Structural Safety (Lelièvre, et al. 2018). The final objective is to propose new methods for estimating probabilities of failure in high dimension, that is, for a problem defined both by an expensive model and by a very large number of random input variables. Two new methods, AK-HDMR1 and AK-PCA, are proposed to address this problem, based respectively on a functional decomposition and on a dimension reduction technique. The AK-HDMR1 method is the subject of a publication submitted to the journal Reliability Engineering and Structural Safety on 1 October 2018.
Engineers increasingly use numerical models to replace experimentation during the design of new products. With the increase of computer performance and numerical power, these models are more and more complex and time-consuming for a better representation of reality. In practice, optimization is very challenging when considering real mechanical problems since they exhibit uncertainties. Reliability is an interesting metric of the failure risks of designed products due to uncertainties. The estimation of this metric, the failure probability, requires a high number of evaluations of the time-consuming model and thus becomes intractable in practice. To deal with this problem, surrogate modeling is used here, and more specifically AK-based methods, to enable the approximation of the physical model with far fewer time-consuming evaluations. The first objective of this thesis work is to discuss the mathematical formulations of design problems under uncertainties. This formulation has a considerable impact on the solution identified by the optimization during the design process of new products. A definition of both concepts of reliability and robustness is also proposed. These works are presented in a publication in the international journal Structural and Multidisciplinary Optimization (Lelièvre, et al. 2016). The second objective of this thesis is to propose a new AK-based method to estimate failure probabilities associated with rare events. This new method, named AK-MCSi, presents three enhancements of AK-MCS: (i) sequential Monte Carlo simulations to reduce the time associated with the evaluation of the surrogate model, (ii) a new, stricter stopping criterion on learning evaluations to ensure the correct classification of the Monte Carlo population and (iii) a multipoint enrichment permitting the parallelization of the evaluation of the time-consuming model. This work has been published in Structural Safety (Lelièvre, et al. 2018). 
The last objective of this thesis is to propose new AK-based methods to estimate the failure probability of a high-dimensional reliability problem, i.e. a problem defined by both a time-consuming model and a high number of input random variables. Two new methods, AK-HDMR1 and AK-PCA, are proposed to deal with this problem, based respectively on a functional decomposition and a dimension reduction technique. AK-HDMR1 was submitted to Reliability Engineering and Structural Safety on 1 October 2018.
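The core AK idea, classifying a large Monte Carlo population with a cheap surrogate instead of the expensive model, can be illustrated with a minimal sketch. The limit state `g_expensive`, the quadratic surrogate family, and the sample sizes below are all invented for illustration; the thesis uses Kriging with adaptive enrichment, which is not reproduced here.

```python
import random

random.seed(0)

# Hypothetical "expensive" limit state: failure when g(x) < 0.
def g_expensive(x):
    return 3.0 - x[0] ** 2 - 0.5 * x[1]

# Fit a cheap surrogate g_hat(x) = a + b*x0 + c*x1 + d*x0^2 by ordinary
# least squares on a small design of experiments (stdlib-only normal equations).
def fit_surrogate(samples, values):
    rows = [[1.0, x[0], x[1], x[0] ** 2] for x in samples]
    n = 4
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    aty = [sum(r[i] * y for r, y in zip(rows, values)) for i in range(n)]
    # Gaussian elimination with partial pivoting, then back-substitution.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (aty[r] - sum(ata[r][c] * w[c] for c in range(r + 1, n))) / ata[r][r]
    return lambda x: w[0] + w[1] * x[0] + w[2] * x[1] + w[3] * x[0] ** 2

# Only 30 "expensive" evaluations are spent on the design of experiments...
doe = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(30)]
g_hat = fit_surrogate(doe, [g_expensive(x) for x in doe])

# ...while the large Monte Carlo population is classified by the surrogate alone.
mc = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]
pf = sum(1 for x in mc if g_hat(x) < 0) / len(mc)
print(f"estimated failure probability: {pf:.4f}")
```

Note that only the sign of the surrogate matters for the failure probability, which is why AK-type methods concentrate their enrichment near the limit state rather than seeking global accuracy.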
APA, Harvard, Vancouver, ISO, and other styles
20

Weaver, Brian Lee. "A methodology for ballistic missile defense systems analysis using nested neural networks." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24675.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Sung, Woong Je. "A neural network construction method for surrogate modeling of physics-based analysis." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43721.

Full text
Abstract:
A connectivity-adjusting learning algorithm, Optimal Brain Growth (OBG), was proposed. In contrast to conventional training methods for Artificial Neural Networks (ANNs), which focus on weight-only optimization, the OBG method trains both the weights and the connectivity of a network in a single training process. The standard Back-Propagation (BP) algorithm was extended to exploit the error gradient information of a latent connection whose current weight has zero value. Based on this, the OBG algorithm makes a rational decision between further adjustment of an existing connection weight and creation of a new connection having zero weight. The training efficiency of a growing network is maintained by freezing stabilized connections in the further optimization process. A stabilized computational unit is also decomposed into two units, and a particular set of decomposition rules guarantees a seamless local re-initialization of the training trajectory. The OBG method was tested on multiple canonical regression and classification problems and on a surrogate modeling task for the pressure distribution on transonic airfoils. The OBG method showed improved learning capability in a computationally efficient manner compared to conventional weight-only training using connectivity-fixed Multilayer Perceptrons (MLPs).
APA, Harvard, Vancouver, ISO, and other styles
22

Chakraborty, Prithwish. "Data-Driven Methods for Modeling and Predicting Multivariate Time Series using Surrogates." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/81432.

Full text
Abstract:
Modeling and predicting multivariate time series data has been of prime interest to researchers for many decades. Traditionally, time series prediction models have focused on finding attributes that have consistent correlations with target variable(s). However, diverse surrogate signals, such as news data and Twitter chatter, are increasingly available and can provide real-time information, albeit with inconsistent correlations. Intelligent use of such sources can lead to early and real-time warning systems such as Google Flu Trends. Furthermore, the target variables of interest, such as public health surveillance data, can be noisy. Thus models built for such data sources should be flexible as well as adaptable to changing correlation patterns. In this thesis we explore various methods of using surrogates to generate more reliable and timely forecasts for noisy target signals. We primarily investigate three key components of the forecasting problem, viz. (i) short-term forecasting, where surrogates can be employed in a now-casting framework, (ii) long-term forecasting, where surrogates act as forcing parameters to model system dynamics, and (iii) robust drift models that detect and exploit 'changepoints' in the surrogate-target relationship to produce robust models. We explore various 'physical' and 'social' surrogate sources to study these sub-problems, primarily to generate real-time forecasts for endemic diseases. On the modeling side, we employed matrix factorization and generalized linear models to detect short-term trends and explored various Bayesian sequential analysis methods to model long-term effects. Our research indicates that, in general, a combination of surrogates can lead to more robust models. Interestingly, our findings indicate that under specific scenarios particular surrogates can decrease overall forecasting accuracy, thus providing an argument for 'good data' over 'big data'.
Ph. D.
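The now-casting component described above can be illustrated with a minimal sketch: a noisy target is regressed on two real-time surrogate signals, and the fitted model then "now-casts" weeks where the target is not yet observed. The synthetic `trend`, `news` and `chatter` series are hypothetical stand-ins, and plain least squares stands in for the thesis's matrix factorization and generalized linear models.

```python
import math
import random

def lstsq(rows, ys):
    """Ordinary least squares via normal equations and Gaussian elimination."""
    n = len(rows[0])
    a = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (b[r] - sum(a[r][c] * w[c] for c in range(r + 1, n))) / a[r][r]
    return w

random.seed(1)
weeks, split = 60, 40
trend = [50 + 30 * math.sin(t / 6.0) for t in range(weeks)]
target = [v + random.gauss(0, 4) for v in trend]        # noisy surveillance signal
news = [v + random.gauss(0, 8) for v in trend]          # weak real-time surrogate
chatter = [v + random.gauss(0, 3) for v in trend]       # stronger real-time surrogate

# Fit target ~ w0 + w1*news + w2*chatter on the first 40 weeks, then
# now-cast the remaining weeks from the surrogates alone.
w = lstsq([[1.0, news[t], chatter[t]] for t in range(split)], target[:split])
nowcast = [w[0] + w[1] * news[t] + w[2] * chatter[t] for t in range(split, weeks)]

def rmse(pred, true):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(pred, true)) / len(pred))

baseline = [sum(target[:split]) / split] * (weeks - split)
print(rmse(nowcast, target[split:]), rmse(baseline, target[split:]))
```

The combination of a weak and a strong surrogate beats the training-mean baseline by a wide margin here, echoing the abstract's point that combining surrogates tends to yield more robust forecasts.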
APA, Harvard, Vancouver, ISO, and other styles
23

Koch, Christiane. "Quantum dissipative dynamics with a surrogate Hamiltonian." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2002. http://dx.doi.org/10.18452/14816.

Full text
Abstract:
This dissertation investigates condensed-phase quantum systems that interact with their environment and are excited by ultrashort laser pulses. In such problems the timescales of the various processes involved cannot be separated, so the standard methods for treating open quantum systems cannot be applied. The Surrogate Hamiltonian method is one example of new approaches to quantum dissipative dynamics. The further development of the method and its application to phenomena currently under experimental investigation are the focus of this work. In the first part, the individual dissipative processes are classified and discussed. In particular, a model of dephasing is introduced into the Surrogate Hamiltonian method. This is important for future applications of the method, e.g. to coherent control or quantum computing. In this regard, the Surrogate Hamiltonian has a great advantage over other available methods in that it is based on a spin bath, i.e. a fully quantum mechanical description of the environment. In the next step, the Surrogate Hamiltonian is applied to a standard problem of condensed-phase charge transfer: two nonadiabatically coupled harmonic oscillators embedded in a bath. This model is a great simplification of, for example, a molecule in solution, but it serves here as a test case for the theoretical description of a prototypical charge transfer event. All qualitative features of such an experiment can be reproduced, and deficiencies of earlier treatments are identified. Ultrafast experiments monitor reaction dynamics on the femtosecond timescale. This is captured particularly well by the Surrogate Hamiltonian as a method based on a time-dependent description. 
Combining the numerical solution of the time-dependent Schrödinger equation with the Wigner function, which enables the visualisation of a quantum state in phase space, makes it possible to follow the charge transfer cycle intuitively, step by step. The utility of the Surrogate Hamiltonian is further enhanced by combining it with the Filter Diagonalization method. This makes it possible to obtain frequency-domain results from expectation values that are converged with the Surrogate Hamiltonian only for relatively short times. The second part of the thesis deals with the theoretical description of laser-induced desorption of small molecules from metal oxide surfaces. This problem is an example in which all aspects are described with the same methodological rigour, i.e. ab initio potential energy surfaces are combined with a microscopic model for the excitation and relaxation processes. The model for the interaction between the excited adsorbate-substrate system and electron-hole pairs of the substrate is based on a simplified representation of the electron-hole pairs as a bath of dipoles and on a dipole-dipole interaction between system and bath. All parameters can be estimated from electronic structure calculations. Desorption probabilities and desorption velocities are obtained independently within the experimentally observed range. The Surrogate Hamiltonian thus enables, for the first time, a complete description of photodesorption dynamics on an ab initio basis.
This thesis investigates condensed phase quantum systems which interact with their environment and which are subject to ultrashort laser pulses. For such systems the timescales of the involved processes cannot be separated, and standard approaches to treat open quantum systems fail. The Surrogate Hamiltonian method represents one example of a number of new approaches to address quantum dissipative dynamics. Its further development and application to phenomena under current experimental investigation are presented. The single dissipative processes are classified and discussed in the first part of this thesis. In particular, a model of dephasing is introduced into the Surrogate Hamiltonian method. This is of importance for future work in fields such as coherent control and quantum computing. In regard to these subjects, it is a great advantage of the Surrogate Hamiltonian over other available methods that it relies on a spin bath, i.e. a fully quantum mechanical description of the bath. The Surrogate Hamiltonian method is applied to a standard model of charge transfer in condensed phase, two nonadiabatically coupled harmonic oscillators immersed in a bath. This model is still an oversimplification of, for example, a molecule in solution, but it serves as a testing ground for the theoretical description of a prototypical ultrafast pump-probe experiment. All qualitative features of such an experiment are reproduced and shortcomings of previous treatments are identified. Ultrafast experiments attempt to monitor reaction dynamics on a femtosecond timescale. This can be captured particularly well by the Surrogate Hamiltonian as a method based on a time-dependent picture. The combination of the numerical solution of the time-dependent Schrödinger equation with the phase space visualization given by the Wigner function allows the sequence of events in a charge transfer cycle to be followed step by step in a very intuitive way. 
The utility of the Surrogate Hamiltonian is furthermore significantly enhanced by the incorporation of the Filter Diagonalization method. This makes it possible to obtain frequency-domain results from dynamics that can be converged within the Surrogate Hamiltonian approach only for comparatively short times. The second part of this thesis is concerned with the theoretical treatment of laser-induced desorption of small molecules from oxide surfaces. This is an example which allows for a description of all aspects of the problem with the same level of rigor, i.e. ab initio potential energy surfaces are combined with a microscopic model for the excitation and relaxation processes. This model of the interaction between the excited adsorbate-substrate complex and substrate electron-hole pairs relies on a simplified description of the electron-hole pairs as a bath of dipoles, and a dipole-dipole interaction between system and bath. All parameters are connected to results from electronic structure calculations. The obtained desorption probabilities and desorption velocities are simultaneously found to lie in the experimentally observed range. The Surrogate Hamiltonian approach therefore allows for a complete description of the photodesorption dynamics on an ab initio basis for the first time.
APA, Harvard, Vancouver, ISO, and other styles
24

Hammoudeh, Ismail. "Qualitative nichtlineare Zeitreihenanalyse mit Anwendung auf das Problem der Polbewegung." Phd thesis, [S.l. : s.n.], 2002. http://pub.ub.uni-potsdam.de/2003/0003/hammoud.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Arsenyev, Ilya [Verfasser]. "Efficient Surrogate-based Robust Design Optimization Method : Multi-disciplinary Design for Aero-turbine Components / Ilya Arsenyev." Aachen : Shaker, 2018. http://d-nb.info/1166507599/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Charest, Abigail J. "Investigation of Physical Characteristics Impacting Fate and Transport of Viral Surrogates in Water Systems." Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-dissertations/54.

Full text
Abstract:
A multi-scale approach was used to investigate the occurrence and physical characteristics of viral surrogates in water systems. This approach resulted in a methodology to quantify the dynamics and physical parameters of viral surrogates, including bacteriophages and nanoparticles. Physical parameters impacting the occurrence and survival of viruses can be incorporated into models that predict the levels of viral contamination in specific types of water. Multiple full-scale water systems (U.S., Italy and Australia) were tested, including surface water, drinking water, stormwater and wastewater systems. Water quality parameters assessed included viral markers (TTV, polyomavirus, microviridae and adenovirus), bacteriophages (MS2 and ΦX-174), and coliforms (total coliforms and E. coli). In this study, the lack of correlation between adenovirus and bacterial indicators suggests that these bacterial indicators are not suitable as indicators of viral contamination. In the wastewater samples, microviridae were correlated with adenovirus, polyomavirus, and TTV. While TTV may have some qualities consistent with an indicator, such as physical similarity to enteric viruses and occurrence in populations worldwide, the use of TTV as an indicator may be limited by its detection occurrence. The limitations of TTV may impede further analysis, and other markers, such as coliphages and microviridae, may be easier to study in the near future. Batch-scale adsorption tests were conducted. Protein-coated latex nanospheres were used to model bacteriophages (MS2 and ΦX-174), and their zeta potentials were compared in lab water and in two artificial groundwaters with monovalent and divalent electrolytes. This research shows that protein-coated particles have higher average log10 removals than uncoated particles, although the method of fluorescently labeling nanoparticles may not provide consistent data at the nanoscale. 
The results show both that research on viruses at any scale can be difficult and that new methodologies are needed to analyze virus characteristics in water systems. A new dynamic light scattering methodology, the area recorded generalized optical scattering (ARGOS) method, was developed for observing the dynamics of nanoparticles, including bacteriophages MS2 and ΦX-174. This method should be further utilized to predict virus fate and transport in environmental systems and through treatment processes. While the concentration of MS2 is higher than that of ΦX-174, as demonstrated by relative total intensity, the RMSD shows that the dynamics are greater and more variable in ΦX-174 than in MS2, which may be a result of the hydrophobic nature of ΦX-174. Relationships such as these should be further explored and may reflect properties such as particle bonding or hydrophobicity.
APA, Harvard, Vancouver, ISO, and other styles
27

Hawkins, Alicia. "DECISION-MAKER TRADE-OFFS IN MULTIPLE RESPONSE SURFACE OPTIMIZATION." Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2787.

Full text
Abstract:
The focus of this dissertation is on improving decision-maker trade-offs and the development of a new constrained methodology for multiple response surface optimization. There are three key components of the research: development of the necessary conditions and assumptions associated with constrained multiple response surface optimization methodologies; development of a new constrained multiple response surface methodology; and demonstration of the new method. The necessary conditions for and assumptions associated with constrained multiple response surface optimization methods were identified and found to be less restrictive than requirements previously described in the literature. The conditions and assumptions required for a constrained method to find the most preferred non-dominated solution are to generate non-dominated solutions and to generate solutions consistent with decision-maker preferences among the response objectives. Additionally, if a Lagrangian constrained method is used, the preservation of convexity is required in order to be able to generate all non-dominated solutions. The conditions required for constrained methods are significantly fewer than those required for combined methods. Most of the existing constrained methodologies do not incorporate any provision for a decision-maker to explicitly determine the relative importance of the multiple objectives. Research into the larger area of multi-criteria decision-making identified the interactive surrogate worth trade-off (ISWT) algorithm as a potential methodology that would provide that capability in multiple response surface optimization problems. The ISWT algorithm uses an ε-constraint formulation to guarantee a non-dominated solution, and then interacts with the decision-maker after each iteration to determine the decision-maker's preference in trading off the value of the primary response for an increase in the value of a secondary response. 
The current research modified the ISWT algorithm to develop a new constrained multiple response surface methodology that explicitly accounts for decision-maker preferences. The new Modified ISWT (MISWT) method maintains the essence of the original method while taking advantage of the specific properties of multiple response surface problems to simplify the application of the method. The MISWT is an accessible computer-based implementation of the ISWT. Five test problems from the multiple response surface optimization literature were used to demonstrate the new methodology. It was shown that this methodology can handle a variety of types and numbers of responses and independent variables. Furthermore, it was demonstrated that the methodology can succeed using a priori information from the decision-maker about bounds or targets, or can use the extreme values obtained from the region of operability. In all cases, the methodology explicitly considered decision-maker preferences and provided non-dominated solutions. The contribution of this method is that it removes implicit assumptions and includes the decision-maker in explicit trade-offs among multiple objectives or responses.
Ph.D.
Department of Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering PhD
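The ε-constraint step at the heart of the ISWT algorithm can be illustrated on a toy problem: the primary response is maximized subject to a lower bound ε on a secondary response, and sweeping ε traces out non-dominated solutions. The two responses below are invented for illustration, and the interactive surrogate-worth elicitation step of the full algorithm is omitted.

```python
# Toy two-response problem on one design variable x in [0, 1]:
#   y1(x) = 1 - x**2   (primary response, maximize)
#   y2(x) = x          (secondary response, maximize)
# The eps-constraint subproblem  max y1  s.t.  y2 >= eps  is solved by brute
# force on a grid; sweeping eps traces the non-dominated (Pareto) frontier.

def y1(x): return 1 - x * x
def y2(x): return x

grid = [i / 1000 for i in range(1001)]

def eps_constraint(eps):
    feasible = [x for x in grid if y2(x) >= eps]
    return max(feasible, key=y1)

frontier = [eps_constraint(e / 10) for e in range(11)]
print([(round(x, 3), round(y1(x), 3)) for x in frontier])
```

Each solution is non-dominated by construction: along the frontier, y2 can only be increased at the cost of y1, which is exactly the trade-off the ISWT algorithm asks the decision-maker to assess at each iteration.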
APA, Harvard, Vancouver, ISO, and other styles
28

Dupuis, Romain. "Surrogate models coupled with machine learning to approximate complex physical phenomena involving aerodynamic and aerothermal simulations." Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0017/document.

Full text
Abstract:
Numerical simulations are a central element of the aircraft design process, complementing physical tests and flight tests. They can benefit in particular from innovative methods such as artificial intelligence, which is spreading widely in aviation. Simulating a complete flight mission for several disciplines poses significant problems because of computational costs and changing operating conditions. Moreover, complex phenomena can occur. For example, shocks can appear on the wing in aerodynamics, while the mixing between the engine flows and the outside air strongly affects the aerothermal field around the nacelle and pylon. Surrogate models can be used to replace high-fidelity simulations with mathematical approximations in order to reduce the computational cost and to provide a method built around simulation data. Two developments are proposed in this thesis: surrogate models using machine learning to approximate aerodynamic computations, and the integration of classical surrogate models into an industrial aerothermal process. The first approach separates the solutions into subsets according to their shapes using machine learning. In addition, a resampling method completes the training base by adding information in specific subsets. The second development focuses on the sizing of the engine pylon by replacing aerothermal simulations with surrogate models. Both developments are applied to aircraft configurations in order to bridge the gap between academic and industrial methods. Significant improvements in cost and accuracy have been achieved.
Numerical simulations provide a key element in the aircraft design process, complementing physical tests and flight tests. They can take advantage of innovative methods, such as the artificial intelligence technologies spreading through aviation. Simulating the full flight mission for various disciplines poses significant problems due to the high computational cost coupled with varying operating conditions. Moreover, complex physical phenomena can occur. For instance, the aerodynamic field on the wing takes different shapes and can encounter shocks, while aerothermal simulations around the nacelle and pylon are sensitive to the interaction between engine flows and external flows. Surrogate models can be used to substitute expensive high-fidelity simulations with mathematical and statistical approximations in order to reduce the overall computational cost and to provide a data-driven approach. In this thesis, we propose two developments: (i) machine learning-based surrogate models capable of approximating aerodynamic simulations and (ii) the integration of more classical surrogate models into an industrial aerothermal process. The first approach mitigates aerodynamic issues by separating solutions with very different shapes into several subsets using machine learning algorithms. Moreover, a resampling technique takes advantage of the subdomain decomposition by adding extra information in relevant regions. The second development focuses on pylon sizing by building surrogate models substituting aerothermal simulations. The two approaches are applied to aircraft configurations in order to bridge the gap between academic methods and real-world applications. Significant improvements are highlighted in terms of accuracy and cost gains.
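The first development in this abstract — separating solutions with very different shapes into subsets and building one surrogate per subset — can be sketched in a toy 1D setting. Here a simple threshold predicate stands in for the machine-learning clustering of solution shapes, and a nearest-neighbour rule classifies new inputs; all names and data are hypothetical, not the thesis code:

```python
def fit_subset_surrogates(xs, ys, split):
    # Separate training samples into regimes with the `split` predicate
    # (a stand-in for the machine-learning clustering of solution shapes),
    # then fit one local linear surrogate per regime.
    groups = {}
    for x, y in zip(xs, ys):
        groups.setdefault(split(y), []).append((x, y))
    models = {}
    for label, pts in groups.items():
        n = len(pts)
        mx = sum(x for x, _ in pts) / n
        my = sum(y for _, y in pts) / n
        var = sum((x - mx) ** 2 for x, _ in pts)
        slope = sum((x - mx) * (y - my) for x, y in pts) / var if var else 0.0
        models[label] = (slope, my - slope * mx)

    def predict(x):
        # Classify the new input via its nearest training sample, then
        # evaluate that regime's linear surrogate.
        _, y_near = min(zip(xs, ys), key=lambda p: abs(p[0] - x))
        slope, intercept = models[split(y_near)]
        return slope * x + intercept

    return predict

# Toy data with a shock-like jump at x = 0.5
xs = [i / 10 for i in range(11)]
ys = [x if x < 0.5 else x + 10.0 for x in xs]
surrogate = fit_subset_surrogates(xs, ys, split=lambda y: y > 5.0)
```

A single global linear fit would smear the jump across the whole domain; the per-subset fits recover both branches exactly, which is the point of the subdomain decomposition.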
APA, Harvard, Vancouver, ISO, and other styles
29

Song, Hyeongjin. "Efficient sampling-based Rbdo by using virtual support vector machine and improving the accuracy of the Kriging method." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/1504.

Full text
Abstract:
The objective of this study is to propose an efficient sampling-based RBDO using a new classification method to reduce the computational cost. In addition, accuracy improvement strategies for the Kriging method are proposed to reduce the number of expensive computer experiments. The current research effort involves: (1) developing a new classification method that is more efficient than conventional surrogate modeling methods while maintaining the required accuracy level; (2) developing a sequential adaptive sampling method that inserts samples near the limit state function; (3) improving the efficiency of the RBDO process by using a fixed hyper-spherical local window with an efficient uniform sampling method and identification of active/violated constraints; and (4) improving the accuracy of the Kriging method by introducing several strategies. In sampling-based RBDO, only accurate classification information is needed instead of an accurate response surface. On the other hand, surrogates are in general constructed using all available DoE samples instead of focusing on the limit state function. Therefore, the computational cost of surrogates can be relatively high, and the accuracy of the limit state (or decision) function can be sacrificed in return for reducing the error on unnecessary regions away from the limit state function. On the contrary, the support vector machine (SVM), which is a classification method, only uses support vectors, which are located near the limit state function, to focus on the decision function. Therefore, the SVM is very efficient and ideally applicable to sampling-based RBDO, provided the accuracy of the SVM is improved by inserting virtual samples near the limit state function. The proposed sequential sampling method inserts new samples near the limit state function so that the number of DoE samples is minimized.
In many engineering problems, expensive computer simulations are used, and thus the total computational cost needs to be reduced by using fewer DoE samples. Several efficiency strategies, such as (1) launching the RBDO at a deterministic optimum design, (2) hyper-spherical local windows with an efficient uniform sampling method, (3) filtering of constraints, (4) sample reuse, and (5) improved virtual sample generation, are used for the proposed sampling-based RBDO using the virtual SVM. The number of computer experiments is also reduced by implementing accuracy improvement strategies for the Kriging method. Since the Kriging method is used for generating virtual samples and for generating the response surface of the cost function, the number of computer experiments can be reduced by introducing: (1) accurate correlation parameter estimation, (2) penalized maximum likelihood estimation (PMLE) for small sample sizes, (3) correlation model selection by MLE, and (4) mean structure selection by cross-validation (CV) error.
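As a rough illustration of the Kriging predictor at the centre of these strategies, a minimal zero-mean Kriging interpolator with a fixed Gaussian correlation model can be written in a few lines. The thesis estimates the correlation parameters by (penalized) maximum likelihood; here `theta` is simply a hypothetical fixed value:

```python
import math

def gauss_solve(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
        # (forward elimination done for this column)
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kriging_predict(xs, ys, x_new, theta=10.0, nugget=1e-10):
    # Zero-mean Kriging with a Gaussian correlation model:
    #   R_ij = exp(-theta * (x_i - x_j)^2)
    # The prediction is r(x)^T R^{-1} y, a weighted sum of the data that
    # interpolates the training points (up to the tiny nugget).
    n = len(xs)
    R = [[math.exp(-theta * (xs[i] - xs[j]) ** 2) + (nugget if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    w = gauss_solve(R, ys)  # weights R^{-1} y
    r = [math.exp(-theta * (x_new - xs[i]) ** 2) for i in range(n)]
    return sum(r[i] * w[i] for i in range(n))
```

The predictor reproduces the training data at the DoE sites, which is the interpolation property the accuracy-improvement strategies build on.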
APA, Harvard, Vancouver, ISO, and other styles
30

Oliveira, Lilian Kátia de. "Métodos exatos baseados em relaxação lagrangiana e surrogate para o problema de carregamento de paletes do produtor." Universidade Federal de São Carlos, 2004. https://repositorio.ufscar.br/handle/ufscar/3407.

Full text
Abstract:
Universidade Federal de Sao Carlos
The purpose of this work is to develop exact methods, based on Lagrangean and Surrogate relaxation, with good performance to solve the manufacturer's pallet loading problem. This problem consists of orthogonally arranging the maximum number of rectangles of sizes (l,w) and (w,l) into a larger rectangle (L,W) without overlapping. Such methods involve a tree search procedure of the branch and bound type and use, in each node of the branch and bound tree, bounds derived from Lagrangean and/or Surrogate relaxations of a 0-1 linear programming formulation. Subgradient optimization algorithms are used to optimize such bounds. Problem reduction tests and Lagrangean and Surrogate heuristics are also applied in the subgradient optimization to obtain good feasible solutions. Computational experiments were performed with instances from the literature and also real instances obtained from a carrier. The results show that the methods are able to solve these instances, on average, more quickly than other exact methods, including the software GAMS/CPLEX.
O objetivo deste trabalho é desenvolver métodos exatos, baseados em relaxação Lagrangiana e Surrogate, com bom desempenho para resolver o problema de carregamento de paletes do produtor. Tal problema consiste em arranjar ortogonalmente e sem sobreposição o máximo número de retângulos de dimensões (l, w) ou (w, l) sobre um retângulo maior (L, W). Tais métodos exatos são procedimentos de busca em árvore do tipo branch and bound que, em cada nó, utilizam limitantes derivados de relaxações Lagrangiana e/ou Surrogate de uma formulação de programação linear 0-1. Algoritmos de otimização do subgradiente são usados para otimizar estes limitantes. São aplicados ainda testes de redução do problema e heurísticas Lagrangiana e Surrogate na otimização do subgradiente para obter boas soluções factíveis. Testes computacionais foram realizados utilizando exemplos da literatura e exemplos reais, obtidos de uma transportadora. Os resultados mostram que os métodos são capazes de resolvê-los, em média, mais rapidamente do que outros métodos exatos, incluindo o software GAMS/CPLEX.
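The bounding machinery this abstract describes — a Lagrangean relaxation whose dual bound is minimized by a subgradient algorithm, with a Lagrangean heuristic producing feasible solutions along the way — can be sketched on a toy 0-1 knapsack rather than the pallet loading formulation. The step rule and instance below are illustrative assumptions:

```python
def lagrangean_dual_knapsack(c, a, b, iters=100):
    # Toy problem:  max sum c_i x_i   s.t.  sum a_i x_i <= b,  x_i in {0,1}.
    # Relax the capacity constraint with a multiplier lam >= 0:
    #   L(lam) = max_x sum_i (c_i - lam*a_i) x_i + lam*b
    # (inner maximum solved by the sign of the reduced costs), then
    # minimise the dual bound L(lam) with subgradient steps on lam.
    n = len(c)
    lam, best_bound, best_feasible = 0.0, float("inf"), 0.0
    for k in range(1, iters + 1):
        x = [1 if c[i] - lam * a[i] > 0 else 0 for i in range(n)]
        bound = sum((c[i] - lam * a[i]) * x[i] for i in range(n)) + lam * b
        best_bound = min(best_bound, bound)
        # Lagrangean heuristic: repair x into a feasible solution greedily.
        load, val = 0.0, 0.0
        for i in sorted(range(n), key=lambda j: -c[j] / a[j]):
            if x[i] and load + a[i] <= b:
                load, val = load + a[i], val + c[i]
        best_feasible = max(best_feasible, val)
        g = b - sum(a[i] * x[i] for i in range(n))  # subgradient of L at lam
        lam = max(0.0, lam - (1.0 / k) * g)         # diminishing step size
        if best_bound - best_feasible < 1e-9:
            break
    return best_feasible, best_bound

best_val, dual_bound = lagrangean_dual_knapsack([10.0, 7.0, 5.0], [4.0, 3.0, 2.0], 5.0)
```

In a branch and bound tree, the dual bound prunes nodes while the heuristic value updates the incumbent; the gap between the two drives the search, exactly the role the relaxations play in the thesis.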
APA, Harvard, Vancouver, ISO, and other styles
31

Iwata, Curtis. "A representation method for large and complex engineering design datasets with sequential outputs." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50266.

Full text
Abstract:
This research addresses the problem of creating surrogate models of high-level operations and sustainment (O&S) simulations with time sequential (TS) outputs. O&S is a continuous process of using and maintaining assets, such as a fleet of aircraft, and the infrastructure to support this process is the O&S system. To track the performance of the O&S system, metrics such as operational availability are recorded and reported as a time history. Modeling and simulation (M&S) is often used as a preliminary tool to study the impact of implementing changes to O&S systems, such as investing in new technologies and changing the inventory policies. A visual analytics (VA) interface is useful for navigating the data from the M&S process so that these options can be compared, and surrogate modeling enables some key features of the VA interface, such as interpolation and interactivity. Fitting a surrogate model to TS data is difficult because of its size and nonlinear behavior. The Surrogate Modeling and Regression of Time Sequences (SMARTS) methodology was proposed to address this problem. An intermediate domain Z was calculated from the simulation output data in such a way that a point in Z corresponds to a unique TS shape or pattern. A regression was then fit to capture the entire range of possible TS shapes using Z as the input, and a separate regression was fit to transform the inputs into Z. The method was tested on output data from an O&S simulation model and compared against other regression methods for statistical accuracy and visual consistency. The proposed methodology was shown to be conditionally better than the other methodologies.
APA, Harvard, Vancouver, ISO, and other styles
32

Yeilaghi, Tamijani Ali. "Vibration and Buckling Analysis of Unitized Structure Using Meshfree Method and Kriging Model." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/37817.

Full text
Abstract:
The Element Free Galerkin (EFG) method, which is based on the Moving Least Squares (MLS) approximation, is developed here for vibration, buckling and static analysis of homogeneous and FGM plates with curvilinear stiffeners. Numerical results for different stiffener configurations and boundary conditions are presented. All results are verified using the commercial finite element software ANSYS® and other results available in the literature. In addition, the vibration analysis of plates with curvilinear stiffeners is carried out using the Ritz method. A 24 by 28 in. curvilinear stiffened panel was machined from 2219-T851 aluminum for experimental validation of the Ritz and meshfree methods of vibration mode shape prediction. Results were obtained for this panel mounted vertically to a steel clamping bracket using acoustic excitation and a laser vibrometer. Experimental results appear to correlate well with the meshfree and Ritz method results. In reality, many engineering structures are subjected to random pressure loads and cannot be assumed to be deterministic. Typical examples, including buildings and towers, offshore structures, vehicles, and ships, are subjected to random pressure. The vibrations induced by gust loads, engine noise, and other auxiliary electrical systems can also produce noise inside aircraft. Consequently, all flight vehicles operate in a random vibration environment. These random loads can be modeled by using their statistical properties. The dynamic responses of structures subjected to random excitations are very complicated. To investigate their dynamic responses under random loads, the meshfree method is developed for random vibration analysis of curvilinearly stiffened plates.
Extensive efforts have been devoted to the buckling and vibration analysis of stiffened panels in order to maximize their natural frequencies and critical buckling loads, yet these structures are often subjected to in-plane loading while vibrating. In such cases the natural frequencies calculated by neglecting the in-plane compression are usually over-predicted. In order to obtain more accurate results it might be necessary to take the effects of in-plane load into account, since they can change the natural frequency of the plate considerably. To provide a better view of the free vibration behavior of the plate with curvilinear stiffeners subjected to axial/biaxial or shear stresses, several numerical examples are studied. The FEM analysis of a curvilinearly stiffened plate is quite computationally expensive, and the meshfree method is a suitable substitute for reducing the CPU time. However, it still requires many simulations. Because of the number of simulations that may be required to solve an engineering optimization problem, many researchers have sought approaches and techniques that reduce the number of function evaluations. In these problems, surrogate models for analysis and optimization can be very efficient. The basic idea of a surrogate model is to reduce the computational cost while giving a better understanding of the influence of the design variables on the different objectives and constraints. To exploit the advantages of both the meshfree method and the surrogate model in reducing CPU time, the meshfree method is used to generate the sample points, and a combination of Kriging (a surrogate model) and genetic algorithms is used for the design of the curvilinearly stiffened plate. The meshfree and Kriging results and CPU times were compared with those obtained using EBF3PanelOpt.
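The Moving Least Squares approximation on which the EFG method rests can be sketched in 1D with a linear basis p(x) = [1, x] and a Gaussian weight; the support radius `h` is a hypothetical choice, and the 2x2 normal equations are solved in closed form:

```python
import math

def mls_approx(xs, ys, x, h=0.6):
    # 1D moving least squares: at the evaluation point x, solve the
    # weighted least squares problem
    #   min_{a,b} sum_i w_i (a + b*x_i - y_i)^2,
    # with Gaussian weights w_i = exp(-((x - x_i)/h)^2), then return a + b*x.
    s0 = s1 = s2 = t0 = t1 = 0.0
    for xi, yi in zip(xs, ys):
        w = math.exp(-((x - xi) / h) ** 2)
        s0 += w; s1 += w * xi; s2 += w * xi * xi
        t0 += w * yi; t1 += w * xi * yi
    det = s0 * s2 - s1 * s1          # moment-matrix determinant
    a = (t0 * s2 - t1 * s1) / det
    b = (s0 * t1 - s1 * t0) / det
    return a + b * x

# MLS with a linear basis reproduces linear fields exactly (consistency),
# the property EFG shape functions inherit.
nodes = [0.0, 0.5, 1.0, 1.5, 2.0]
values = [2.0 * x + 1.0 for x in nodes]
```

Unlike FEM, no mesh connectivity is needed: the approximation at any point only uses the scattered nodes within the weight's support.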
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
33

Hinkle, Kurt Berlin. "An Automated Method for Optimizing Compressor Blade Tuning." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/6230.

Full text
Abstract:
Because blades in jet engine compressors are subject to dynamic loads based on the engine's speed, it is essential that the blades are properly "tuned" to avoid resonance at those frequencies to ensure safe operation of the engine. The tuning process can be time consuming for designers because there are many parameters controlling the geometry of the blade and, therefore, its resonance frequencies. Humans cannot easily optimize design spaces consisting of multiple variables, but optimization algorithms can effectively optimize a design space with any number of design variables. Automated blade tuning can reduce design time while increasing the fidelity and robustness of the design. Using surrogate modeling techniques and gradient-free optimization algorithms, this thesis presents a method for automating the tuning process of an airfoil. Surrogate models are generated to relate airfoil geometry to the modal frequencies of the airfoil. These surrogates enable rapid exploration of the entire design space. The optimization algorithm uses a novel objective function that accounts for the contribution of every mode's value at a specific operating speed on a Campbell diagram. When the optimization converges on a solution, the new blade parameters are output to the designer for review. This optimization guarantees a feasible solution for tuning of a blade. With 21 geometric parameters controlling the shape of the blade, the geometry for an optimally tuned blade can be determined within 20 minutes.
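The novel objective described here accounts for every mode's value at the operating speed on a Campbell diagram. A minimal sketch of such an objective is below; the penalty form, the engine orders, and all numbers are illustrative assumptions, not the thesis formulation:

```python
def campbell_margin_objective(mode_freqs_hz, speed_rpm, engine_orders=(1, 2, 3, 4)):
    # Penalise every mode whose frequency sits close to an engine-order
    # excitation line EO * (speed in rev/s) at the operating speed: the
    # smaller the separation margin, the larger the contribution.  Summing
    # over all modes and orders (rather than taking only the worst pair)
    # keeps the objective informative for a gradient-free optimiser.
    rev_hz = speed_rpm / 60.0
    cost = 0.0
    for f in mode_freqs_hz:
        for eo in engine_orders:
            excitation = eo * rev_hz
            margin = abs(f - excitation) / excitation  # relative separation
            cost += 1.0 / (margin + 1e-6)              # blows up near resonance
    return cost
```

Minimising this cost over the blade's geometric parameters (via the surrogates relating geometry to modal frequencies) pushes every crossing away from the operating speed, which is the tuning goal.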
APA, Harvard, Vancouver, ISO, and other styles
34

Deshpande, Shubhangi Govind. "Advances in aircraft design: multiobjective optimization and a markup language." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/25142.

Full text
Abstract:
Today's modern aerospace systems exhibit strong interdisciplinary coupling and require a multidisciplinary, collaborative approach. Analysis methods that were once considered feasible only for advanced and detailed design are now available and even practical at the conceptual design stage. This changing philosophy for conducting conceptual design poses additional challenges beyond those encountered in a low-fidelity design of aircraft. This thesis takes some steps towards bridging the gaps in existing technologies and advancing the state of the art in aircraft design. The first part of the thesis proposes a new Pareto front approximation method for multiobjective optimization problems. The method employs a hybrid optimization approach using two derivative-free direct search techniques, and is intended for solving blackbox simulation-based multiobjective optimization problems with possibly nonsmooth functions, where the analytical form of the objectives is not known and/or the evaluation of the objective function(s) is very expensive (very common in multidisciplinary design optimization). A new adaptive weighting scheme is proposed to convert a multiobjective optimization problem to a single objective optimization problem. Results show that the method achieves an arbitrarily close approximation to the Pareto front with a good collection of well-distributed nondominated points. The second part deals with the interdisciplinary data communication issues involved in a collaborative multidisciplinary aircraft design environment. Efficient transfer, sharing, and manipulation of design and analysis data in a collaborative environment demand a formal, structured representation of data. XML, a W3C recommendation, is one such standard, with a number of powerful capabilities that alleviate interoperability issues.
A compact, generic, and comprehensive XML schema for an aircraft design markup language (ADML) is proposed here to provide a common language for data communication, and to improve efficiency and productivity within a multidisciplinary, collaborative environment. An important feature of the proposed schema is its very expressive and efficient low-level schemata. As a proof of concept, the schema is used to encode an entire Convair B-58. As the complexity of models and the number of disciplines increase, the reduction in effort to exchange data models and analysis results in ADML also increases.
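The adaptive weighting scheme of the first part — converting the multiobjective problem into a sequence of single-objective ones — can be sketched for two objectives. In this deliberately simplified stand-in, each new weight bisects the widest gap in weight space; the thesis scheme is derivative-free, black-box, and adapts differently:

```python
def adaptive_weight_pareto(candidates, f1, f2, n_rounds=5):
    # Scalarise the bi-objective problem as w*f1 + (1-w)*f2, minimise over
    # a finite candidate set for each weight, and after each round place the
    # next weight in the middle of the widest gap between weights used so far.
    front = {}
    weights = [0.0, 1.0]
    for _ in range(n_rounds):
        for w in weights:
            if w not in front:
                front[w] = min(candidates,
                               key=lambda x: w * f1(x) + (1 - w) * f2(x))
        ws = sorted(front)
        # bisect the widest interval in weight space
        _, i = max((ws[j + 1] - ws[j], j) for j in range(len(ws) - 1))
        weights = ws + [(ws[i] + ws[i + 1]) / 2.0]
    return sorted({(f1(x), f2(x)) for x in front.values()})

# Toy problem: minimise f1(x) = x^2 and f2(x) = (x - 1)^2 over a grid;
# every x in [0, 1] is Pareto optimal.
grid = [i / 20 for i in range(21)]
pareto = adaptive_weight_pareto(grid, lambda x: x * x, lambda x: (x - 1) ** 2)
```

Fixed uniform weights tend to cluster points on convex parts of the front; adapting the weights after each solve is what spreads the nondominated points out.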
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
35

Dugelas, Loic. "Stratégies probabilistes appliquées à la modélisation numérique discrète : le cas des filets pare-pierres." Thesis, Université Grenoble Alpes, 2020. https://tel.archives-ouvertes.fr/tel-02498238.

Full text
Abstract:
L'objectif de ce travail de recherche est le développement d'outils numériques pour l'aide à la conception d'écrans souples de protection contre les chutes de blocs.Des modèles, basés sur la Méthode des Éléments Discrets (DEM), ont tout d'abord été développés à partir d'approches de modélisation issues de la littérature, pour deux écrans souples. La principale différence entre les deux écrans étudiés réside dans la nature de la nappe de filet : nappe à anneaux ou nappe ELITE. Pour l'écran équipé d'une nappe à anneaux, le modèle de structure développé s'est avéré suffisant pour obtenir un compromis en termes de pertinence, précision et efficacité. Par contre, pour l'écran équipé d'une nappe ELITE, de nouveaux développements numériques ont été nécessaires pour obtenir un tel compromis.L'obtention d'un modèle d'ouvrage complet efficace pour l'écran ELITE a nécessité la prise en compte du glissement entre les câbles constitutifs de la nappe. Deux approches ont été proposées pour l'intégration de ce glissement. La première approche intègre le glissement sans prendre en compte le frottement à l’interface entre les câbles, alors que la seconde permet cette prise en compte. Cependant, cette seconde approche ne peut être utilisée à l’échelle d’un ouvrage en raison de durées de calcul trop importantes.Les modèles d'écran développés ont été intégrés dans un outil d'aide à la conception, basé sur des approches de méta-modélisation. Cet outil permet de réaliser des études paramétriques et de sensibilité sur la réponse de l'ouvrage, ainsi que d'identifier les configurations optimales de l'ouvrage
This research aims at developing numerical tools to help the design of flexible fences against rockfall. Models based on the Discrete Element Method (DEM) are developed for two flexible fences, using modeling approaches taken from the literature. The main difference between the two flexible fences investigated is their interception structure: a ring net or an ELITE net. For the ring net flexible fence, the DEM model proved sufficient to reach a compromise between relevance, accuracy, and efficiency. On the other hand, for the flexible fence with an ELITE net, new numerical developments were necessary to reach such a compromise. In order to obtain an efficient DEM model for the ELITE flexible fence, the sliding between the cables of the net had to be taken into account. Two approaches are proposed to integrate this sliding. In the first approach, the sliding is considered without friction between the cables, while in the second approach the friction is considered. However, the calculation time of the second approach was too long to allow its integration into a complete fence model. The developed models have been integrated into a design assistance tool for flexible fences, based on surrogate modeling. Parametric and sensitivity analyses are carried out with this tool, and the optimal configurations of the fence are identified.
APA, Harvard, Vancouver, ISO, and other styles
36

Lu, Ruijin. "Scalable Estimation and Testing for Complex, High-Dimensional Data." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/93223.

Full text
Abstract:
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, etc. These data provide a rich source of information on disease development, cell evolution, engineering systems, and many other scientific phenomena. To achieve a clearer understanding of the underlying mechanism, one needs a fast and reliable analytical approach to extract useful information from the wealth of data. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex data, powerful testing of functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a wavelet-based approximate Bayesian computation approach that is likelihood-free and computationally scalable. This approach will be applied to two applications: estimating mutation rates of a generalized birth-death process based on fluctuation experimental data and estimating the parameters of targets based on foliage echoes. The second part focuses on functional testing. We consider using multiple testing in basis-space via p-value guided compression. Our theoretical results demonstrate that, under regularity conditions, the Westfall-Young randomization test in basis space achieves strong control of the family-wise error rate and asymptotic optimality. Furthermore, appropriate compression in basis space leads to improved power as compared to point-wise testing in the data domain or basis-space testing without compression.
The effectiveness of the proposed procedure is demonstrated through two applications: the detection of regions of spectral curves associated with pre-cancer using 1-dimensional fluorescence spectroscopy data and the detection of disease-related regions using 3-dimensional Alzheimer's Disease neuroimaging data. The third part focuses on analyzing data measured on the cortical surfaces of monkeys' brains during their early development, and subjects are measured on misaligned time markers. In this analysis, we examine the asymmetric patterns and increase/decrease trend in the monkeys' brains across time.
Doctor of Philosophy
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, and biological measurements. These data provide a rich source of information on disease development, engineering systems, and many other scientific phenomena. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex biological and engineering data, powerful testing of high-dimensional functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a computation-based statistical approach that achieves efficient parameter estimation scalable to high-dimensional functional data. The second part focuses on developing a powerful testing method for functional data that can be used to detect important regions. We will show nice properties of our approach. The effectiveness of this testing approach will be demonstrated using two applications: the detection of regions of the spectrum that are related to pre-cancer using fluorescence spectroscopy data and the detection of disease-related regions using brain image data. The third part focuses on analyzing brain cortical thickness data, measured on the cortical surfaces of monkeys’ brains during early development. Subjects are measured on misaligned time-markers. By using functional data estimation and testing approach, we are able to: (1) identify asymmetric regions between their right and left brains across time, and (2) identify spatial regions on the cortical surface that reflect increase or decrease in cortical measurements over time.
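The likelihood-free estimation in the first part builds on approximate Bayesian computation. Stripped of the wavelet machinery that makes the thesis approach scalable, textbook rejection ABC can be sketched as follows; the exponential-rate forward model and every constant here are hypothetical toy choices:

```python
import random
import statistics

def abc_rejection(observed, simulate, prior_draw, n_trials=10000, tol=0.05, seed=1):
    # Likelihood-free rejection ABC: draw a parameter from the prior, run
    # the forward simulator, and keep the draw whenever the simulated
    # summary statistic lands within `tol` of the observed one.
    rng = random.Random(seed)
    accepted = [theta for theta in (prior_draw(rng) for _ in range(n_trials))
                if abs(simulate(theta, rng) - observed) < tol]
    return statistics.mean(accepted), len(accepted)

# Toy forward model: the summary statistic is the mean of 50 exponential
# waiting times with rate theta, whose expectation is 1/theta.
def simulate(theta, rng):
    return statistics.mean(rng.expovariate(theta) for _ in range(50))

post_mean, n_acc = abc_rejection(
    observed=0.5,                                  # consistent with theta = 2
    simulate=simulate,
    prior_draw=lambda rng: rng.uniform(0.5, 4.0),  # flat prior on the rate
)
```

No likelihood is ever evaluated — only forward simulations — which is exactly what makes the approach applicable to the birth-death and foliage-echo models above, where the likelihood is intractable.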
APA, Harvard, Vancouver, ISO, and other styles
37

Valenzuela-Del, Rio Jose Eugenio. "Bayesian adaptive sampling for discrete design alternatives in conceptual design." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50263.

Full text
Abstract:
The number of technology alternatives has lately grown to satisfy the increasingly demanding goals in modern engineering. These technology alternatives are handled in the design process as either concepts or categorical design inputs. Additionally, designers desire to bring into early design more and more accurate, but also computationally burdensome, simulation tools to obtain better performing initial designs that are more valuable in subsequent design stages. This constrains the computational budget available to optimize the design space. These two factors unveil the need for a conceptual design methodology that uses sophisticated tools more efficiently on engineering problems with several concept solutions and categorical design choices. Enhanced initial designs and discrete alternative selection are pursued. Advances in computational speed and the development of Bayesian adaptive sampling techniques have enabled the industry to move from the use of look-up tables and simplified models to complex physics-based tools in conceptual design. These techniques focus computational resources on promising design areas. Nevertheless, the vast majority of the work has been done on problems with continuous spaces, where concepts and categories are treated independently. However, observations show that engineering objectives experience similar topographical trends across many engineering alternatives. In order to address these challenges, two meta-models are developed. The first one borrows the Hamming distance and function space norms from machine learning and functional analysis, respectively. These distances allow defining categorical metrics that are used to build a unique probabilistic surrogate whose domain includes not only continuous and integer variables, but also categorical ones. The second meta-model is based on a multi-fidelity approach that enhances a concept prediction with previous concept observations.
These methodologies leverage the similar trends seen in observations and make better use of sample points, increasing the quality of the output in discrete alternative selection and initial designs for a given analysis budget. An extension of stochastic mixed-integer optimization techniques to include the categorical dimension is developed by adding appropriate generation, mutation, and crossover operators. The resulting stochastic algorithm is employed to adaptively sample mixed-integer-categorical design spaces. The proposed surrogates are compared against traditional independent methods for a set of canonical problems and a physics-based rotor-craft model on a screened design space. Next, adaptive sampling algorithms on the developed surrogates are applied to the same problems. These tests provide evidence of the merit of the proposed methodologies. Finally, a multi-objective rotor-craft design application is performed in a large domain space. This thesis provides several novel academic contributions. The first contribution is the development of new efficient surrogates for systems with categorical design choices. Secondly, an adaptive sampling algorithm is proposed for systems with mixed-integer-categorical design spaces. Finally, previously sampled concepts can be brought in to construct efficient surrogates of novel concepts. With engineering judgment, the design community could apply these contributions to discrete alternative selection and initial design assessment when similar topographical trends are observed across different categories and/or concepts. They could also be crucial in overcoming the current cost of carrying a set of concepts and wider design spaces in the categorical dimension forward into preliminary design.
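The first meta-model's idea — a categorical metric built from the Hamming distance, combined with a continuous distance, so that one probabilistic surrogate spans continuous and categorical inputs — can be sketched as a mixed correlation function. The functional form and parameters below are illustrative assumptions:

```python
import math

def mixed_kernel(x1, c1, x2, c2, theta_x=1.0, theta_c=1.0):
    # Correlation between two designs mixing a squared-exponential term on
    # the continuous coordinates with a (normalised) Hamming-distance term
    # on the categorical choices, so observations made under one technology
    # alternative still inform predictions for another.
    d2 = sum((a - b) ** 2 for a, b in zip(x1, x2))       # Euclidean part
    ham = sum(a != b for a, b in zip(c1, c2)) / len(c1)  # Hamming part
    return math.exp(-theta_x * d2 - theta_c * ham)

# Two designs at nearby continuous coordinates: same categorical choice
# correlates more strongly than a different one, but the cross-category
# correlation stays positive -- information is shared, not discarded.
k_same = mixed_kernel([0.1, 0.3], ["rotor_A"], [0.2, 0.3], ["rotor_A"])
k_cross = mixed_kernel([0.1, 0.3], ["rotor_A"], [0.2, 0.3], ["rotor_B"])
```

Treating each category independently corresponds to `theta_c` going to infinity (zero cross-category correlation); the surrogate above interpolates between that and full pooling.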
APA, Harvard, Vancouver, ISO, and other styles
38

Mohammadian, Saeed. "Freeway traffic flow dynamics and safety: A behavioural continuum framework." Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/227209/1/Saeed_Mohammadian_Thesis.pdf.

Full text
Abstract:
Congestion and rear-end crashes are two undesirable phenomena of freeway traffic flows, which are interrelated and highly affected by human psychological factors. Since congestion is an everyday problem, and crashes are rare events, congestion management and crash risk prevention strategies are often implemented through separate research directions. However, overwhelming evidence has underscored the inter-relation between rear-end crashes and freeway traffic flow dynamics in recent decades. This dissertation develops novel mathematical models for freeway traffic flow dynamics and safety to integrate them into a unifiable framework. The outcomes of this PhD can enable moving towards faster and safer roads.
APA, Harvard, Vancouver, ISO, and other styles
39

Giacoma, Anthony. "Efficient acceleration techniques for non-linear analysis of structures with frictional contact." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0095.

Full text
Abstract:
Computational mechanics is an essential tool for mechanical engineering. Driven by the demand for realism, numerical models nowadays have to account for increasingly complex physical phenomena and keep growing in size. As a consequence, substantial computing capacities are required to tackle problems that are both non-linear and large-scale. For that purpose, not only computers but also numerical methods have to be developed to solve such problems efficiently. In recent years, model reduction methods have shown great promise for meeting these challenges. The frictional contact problem between elastic solids is particularly well known for its difficulty: because its governing laws are highly non-linear (non-smooth), computational times can become prohibitive. In this dissertation, model reduction methods (both a posteriori and a priori approaches) are deployed to build efficient numerical methods for solving the frictional contact problem in the finite element framework. First, the small-perturbation hypothesis with a quasi-static evolution is assumed. The reducibility of several solutions involving frictional contact is then demonstrated and discussed using the singular value decomposition, and a scale-separability phenomenon is highlighted. The non-incremental, non-linear large time increment method (LATIN) is then introduced. Secondly, an accelerated LATIN method is proposed by drawing an analogy between the scale-separability observations and the non-linear multigrid full approximation scheme (FAS). This accelerated non-linear solver relies essentially on an a posteriori model reduction approach. A strategy for precomputing modes from a surrogate model is also proposed. Next, the proper generalized decomposition (PGD) is used to build a non-linear solver relying fundamentally on an a priori model reduction method. Finally, some extensions are given to handle parametric studies and additional non-linearities such as elastoplastic constitutive laws.
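The reducibility assessment described in this abstract rests on taking the singular value decomposition of a matrix of solution snapshots and checking how quickly the singular values decay. The following is an illustrative sketch of that idea, not code from the thesis; the snapshot matrix, its sizes, and the energy tolerance are all invented.

```python
import numpy as np

# Toy snapshot matrix: each column stands in for a displacement field at one
# quasi-static load increment. It is built from 3 hidden spatial modes plus
# a little noise (all values are illustrative).
rng = np.random.default_rng(0)
modes = rng.standard_normal((200, 3))        # 3 dominant spatial modes
amplitudes = rng.standard_normal((3, 50))    # their weights over 50 increments
snapshots = modes @ amplitudes + 1e-6 * rng.standard_normal((200, 50))

# Singular values measure how much each orthogonal mode contributes, so a
# fast decay means the solution set is "reducible" to a few modes.
s = np.linalg.svd(snapshots, compute_uv=False)
energy = np.cumsum(s**2) / np.sum(s**2)
rank = int(np.searchsorted(energy, 0.9999)) + 1
print(rank)  # only a few modes carry virtually all of the energy
```

A rapidly decaying spectrum like this one is what justifies building a reduced basis for the solver.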
APA, Harvard, Vancouver, ISO, and other styles
40

Novák, Lukáš. "Pravděpodobnostní modelování smykové únosnosti předpjatých betonových nosníků: Citlivostní analýza a semi-pravděpodobnostní metody návrhu." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2018. http://www.nusl.cz/ntk/nusl-372051.

Full text
Abstract:
This diploma thesis focuses on advanced reliability analysis of structures solved by non-linear finite element analysis. Specifically, it describes semi-probabilistic methods for the determination of the design value of resistance, sensitivity analysis, and a surrogate model created by polynomial chaos expansion. The described methods are applied to a prestressed reinforced concrete roof girder.
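As a toy illustration of the surrogate technique named in this abstract, the sketch below fits a polynomial chaos expansion to samples of a contrived "resistance" response and reads off its mean and variance from the coefficients. The response model, sample size, and coefficients are invented; the thesis's actual model is a non-linear finite element analysis of the girder.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Contrived resistance model R(xi) driven by one standard-normal variable xi;
# it is exactly a 2nd-order polynomial chaos expansion, so the fit is exact.
rng = np.random.default_rng(1)
xi = rng.standard_normal(500)
resistance = 100.0 + 5.0 * xi + 1.5 * (xi**2 - 1.0)

# Least-squares fit of a degree-2 expansion in the probabilists' Hermite
# basis He_0..He_2, which is orthogonal w.r.t. the standard normal density.
V = hermevander(xi, 2)                       # columns: He_0, He_1, He_2
coef, *_ = np.linalg.lstsq(V, resistance, rcond=None)

# For a Hermite chaos expansion: mean = c_0, variance = sum_k k! * c_k^2 (k>=1).
mean, var = coef[0], 1 * coef[1]**2 + 2 * coef[2]**2
print(round(mean, 2), round(var, 2))  # → 100.0 29.5
```

The convenience of the surrogate is visible here: once the coefficients are known, statistics of the response come for free instead of requiring many model runs.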
APA, Harvard, Vancouver, ISO, and other styles
41

Ducasse, Quentin. "Etude de la méthode de substitution à partir de la mesure simultanée des probabilités de fission et d'émission gamma des actinides 236U, 238U, 237Np et 238Np." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0109/document.

Full text
Abstract:
Neutron-induced cross sections of short-lived nuclei play an important role in various fields such as fundamental physics, astrophysics, and nuclear energy. Unfortunately, these cross sections are often extremely difficult to measure because of the high radioactivity of the targets involved. The surrogate-reaction method is an indirect technique for determining neutron-induced cross sections of short-lived nuclei that circumvents these constraints. In order to study the validity of the method, we have measured, for the very first time in a surrogate-reaction experiment, fission and gamma-decay probabilities simultaneously for the actinides 236U, 238U, 237Np and 238Np. This is challenging because the gamma rays emitted by the fission fragments have to be removed. The measurement was performed at the Oslo cyclotron. Our results show that, at a fixed excitation energy of the compound nucleus, the surrogate fission probabilities are in good agreement with neutron-induced data, whereas the measured gamma-decay probabilities are several times higher than the neutron-induced ones. These deviations can be attributed to differences between the spin distributions populated by the two types of reactions. Statistical-model calculations with standard parameters cannot reproduce the weak sensitivity of the fission probabilities to the spin of the nucleus. The experimental observations can, however, be reproduced by considering a moment of inertia of the fissioning nucleus that increases more strongly with deformation than standard parameters prescribe. Further theoretical efforts are needed to improve the understanding of our results.
APA, Harvard, Vancouver, ISO, and other styles
42

Rokas, Konstantinos. "L'assistance médicale à la procréation en droit international privé comparé." Thesis, Paris 1, 2016. http://www.theses.fr/2016PA01D051/document.

Full text
Abstract:
Medically assisted reproduction radically transforms human procreation. Surrogacy and assisted reproduction for same-sex couples, or performed post mortem, profoundly change our concept of parentage. The cross-border dimension of this phenomenon raises difficulties, especially regarding the recognition of parent-child relationships established abroad. The study of liberal foreign legislation, as well as of the case law on the circulation of legal parent-child relationships, indicates that the conflict-of-laws rule on parentage has lost ground. Nor does the recognition method seem able to facilitate considerably the recognition of parentage bonds established in a foreign country. Nonetheless, the protection of family life constitutes a legal basis, common to European states, that can be invoked in favour of such recognition. Recognition can furthermore be promoted by adopting a substantive private international law rule and by strengthening the reasoning behind recourse to the public policy exception mechanism. This reinforcement of the reasoning, combined with the influence of European law on the circulation of personal status, would better serve the objectives of legal certainty and foreseeability. Ultimately, addressing the risks posed by cross-border medically assisted reproduction requires the adoption of substantive rules at both the national and international levels, and better cooperation among the Member States of the European Union.
APA, Harvard, Vancouver, ISO, and other styles
43

Kessedjian, Grégoire. "Mesures de sections efficaces d'actinides mineurs d'intérêt pour la transmutation." Thesis, Bordeaux 1, 2008. http://www.theses.fr/2008BOR13672/document.

Full text
Abstract:
Existing reactors produce two kinds of nuclear waste whose management and fate raise problems: certain fission products, and heavy nuclei beyond uranium (americium and curium isotopes) called minor actinides. Two options are considered: storage in a deep geological repository and/or incineration of this waste in a fast neutron flux, that is, transmutation by fission. These studies require a large amount of neutron data. Unfortunately, the data bases still have many shortcomings when it comes to achieving reliable results. The aim of this work is to update these nuclear data and complement them. We have measured the fission cross section of 243Am (7370 y) relative to (n,p) elastic scattering, in order to provide data independent of existing measurements in the fast-neutron range (1 to 8 MeV). The 243Am(n,f) reaction was analysed using a statistical model describing the decay channels of the 244Am compound nucleus; from these calculations, the radiative capture 243Am(n,γ) and inelastic scattering 243Am(n,n') cross sections could also be evaluated. The direct measurement of neutron cross sections of minor actinides is often a real challenge, given their high activity. To overcome this problem, an indirect (surrogate) method using transfer reactions was developed in order to study several curium isotopes. The reactions 243Am(3He,d)244Cm, 243Am(3He,t)243Cm and 243Am(3He,alpha)242Am allowed us to measure the fission probabilities of 243,244Cm and 242Am. The fission cross sections of 242,243Cm (162.9 d, 28.5 y) and of 241Am (431 y) were then obtained by multiplying these probabilities by the calculated compound-nucleus formation cross sections. For each measurement, an accurate assessment of the errors was carried out through a variance-covariance analysis of the presented results. For the 243Am(n,f) measurements, an analysis of the error correlations made it possible to assess how these data compare with the existing measurements.
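The final step of the surrogate method described above is simple arithmetic: a measured fission probability is multiplied by a calculated compound-nucleus formation cross section. The numbers below are invented placeholders, not evaluated data, and the uncertainty combination is only a first-order sketch in the spirit of the variance-covariance analysis the abstract mentions.

```python
# Surrogate-method estimate: sigma(n,f) ≈ P_fission(E*) * sigma_CN(E).
# Both input values are invented placeholders, not evaluated data.
sigma_cn_barns = 3.0   # calculated compound-nucleus formation cross section
p_fission = 0.65       # fission probability measured via the transfer reaction
sigma_nf_barns = p_fission * sigma_cn_barns

# First-order propagation of independent relative uncertainties
# (5% assumed on P_fission, 8% assumed on sigma_CN).
rel_unc = (0.05**2 + 0.08**2) ** 0.5
print(round(sigma_nf_barns, 3), round(rel_unc, 3))  # → 1.95 0.094
```

In practice the correlations between energy bins matter too, which is why the thesis analyses full variance-covariance matrices rather than independent error bars.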
APA, Harvard, Vancouver, ISO, and other styles
44

Abid, Fatma. "Contribution à la robustesse et à l'optimisation fiabiliste des structures. Uncertainty of shape memory alloy micro-actuator using generalized polynomial chaos method. Numerical modeling of shape memory alloy problem in presence of perturbation : application to Cu-Al-Zn-Mn specimen. An approach for the reliability-based design optimization of shape memory alloy structure. Surrogate models for uncertainty analysis of micro-actuator." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMIR24.

Full text
Abstract:
The design of cost-effective structures has driven many advances in modelling and optimization, allowing the analysis of increasingly complex structures. However, designs optimized without considering parameter uncertainties may fail to meet certain reliability criteria. To ensure the proper functioning of a structure, it is important to take uncertainty into account from the design stage onwards. Several theories exist in the literature for handling uncertainties. Structural reliability theory defines the probability of failure of a structure as the probability that its serviceability conditions are not met; this study is called reliability analysis. The integration of reliability analysis into optimization problems constitutes a new discipline that introduces reliability criteria into the search for the optimal configuration of structures: the field of reliability-based design optimization (RBDO). The RBDO methodology thus aims to account for the propagation of uncertainties in mechanical performance through a probabilistic model of the fluctuations of the input parameters. In this context, this thesis focuses on robust analysis and reliability-based optimization of complex mechanical problems. It is important to consider the uncertain parameters of the system to ensure a robust design. The objective of the RBDO method is to design a structure that establishes a good compromise between cost and reliability assurance. Several methods, such as the hybrid method and the optimum safety factor method, have been developed to achieve this goal. To address the complexity of mechanical problems with uncertain parameters, dedicated methodologies such as meta-modelling have been developed in order to build a mechanical surrogate model that satisfies both the efficiency and the precision requirements of the model.
APA, Harvard, Vancouver, ISO, and other styles
45

Theroine, Camille. "Etude de la réaction de capture neutronique radiative pour le noyau instable du ¹⁷³Lu par méthode directe et par réaction de substitution." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00797443.

Full text
Abstract:
The objective of this document is to present a study of the radiative neutron capture reaction on the unstable nucleus ¹⁷³Lu in order to determine its (n,γ) cross section. While, on the whole, abundant information is available for stable nuclei, there remains a real lack of data for radioactive nuclei. The first part of this thesis presents the different formalisms involved in the calculation of a cross section, together with the use of the TALYS code, which is based on these models. TALYS allowed us to evaluate the ¹⁷³Lu(n,γ) cross section by relying on the knowledge of the capture reaction on ¹⁷⁵Lu. In a second step, the ¹⁷³Lu(n,γ) cross section was measured at the LANSCE facility with the 4π detector DANCE. This experiment proved to be a real challenge, both for the fabrication of the ¹⁷³Lu target and for the data taking, owing to the high radioactivity of this isotope. We were able to extract valuable information, such as the total capture yield, to identify and characterize new resonances, and to determine parameters such as the mean level spacing, the neutron widths, the γ widths, and the value of the strength function. All this new information allowed us to reconstruct the ¹⁷³Lu(n,γ) cross section up to 200 eV. Using these results, we estimated a correction to be applied to the cross section evaluated with TALYS, ultimately yielding a new evaluation of this quantity. The third part of this thesis provides additional information on the ¹⁷³Lu(n,γ)¹⁷⁴Lu reaction using the surrogate method. We first tested the validity of this method on a known reaction, ¹⁷⁵Lu(n,γ)¹⁷⁶Lu, using the reaction ¹⁷⁴Yb(³He,p)¹⁷⁶Lu, and then turned to ¹⁷³Lu(n,γ)¹⁷⁴Lu via the reaction ¹⁷⁴Yb(³He,t)¹⁷⁴Lu. To this end, the γ-emission probability in both channels was measured and compared with a TALYS calculation. The comparison revealed large discrepancies between the transfer reactions used and the neutron-induced reactions. Our investigation then focused on the spin distribution of the compound nucleus formed, which could be extracted by fitting the measured γ-emission probability. This distribution shows that the (³He,X) reactions actually populate much higher spins than the (n,γ) reaction. The measured ratios of γ-transition intensities corroborate this result, even though the populated spins appear somewhat lower. The experiment clearly showed that a reaction induced by an ³He beam cannot substitute for an (n,γ) reaction. Throughout this document we have shown, from several angles, how difficult it can be to obtain information on radioactive nuclei such as ¹⁷³Lu, and that many experimental and theoretical challenges remain for the future.
APA, Harvard, Vancouver, ISO, and other styles
46

Boutoux, Guillaume. "Sections efficaces neutroniques via la méthode de substitution." Phd thesis, Bordeaux 1, 2011. http://tel.archives-ouvertes.fr/tel-00654677.

Full text
Abstract:
Neutron-induced cross sections of short-lived nuclei are crucial data for fundamental and applied physics in fields such as reactor physics or nuclear astrophysics. In general, the extreme radioactivity of these nuclei prevents direct neutron-induced measurements. However, the surrogate method makes it possible to determine these neutron cross sections indirectly, through transfer reactions or inelastic scattering reactions. Its main advantage is that it allows the use of less radioactive targets, giving access to neutron cross sections that could not be measured directly. The method is based on the hypothesis that a compound nucleus is formed, and on the fact that its decay depends essentially only on the excitation energy and on the spin and parity of the populated compound state. However, the angular-momentum and parity distributions populated in transfer reactions and in neutron-induced reactions are likely to differ. This work reviews the state of the art of the surrogate method and its validity. In general, the surrogate method works very well for extracting fission cross sections. By contrast, the surrogate method applied to radiative capture fares poorly when compared with neutron-induced reactions. We performed an experiment to determine the gamma-decay probabilities of 176Lu and 173Yb from the surrogate reactions 174Yb(3He,p)176Lu* and 174Yb(3He,alpha)173Yb*, respectively, and compared them with the radiative-capture probabilities of the corresponding, well-known 175Lu(n,gamma) and 172Yb(n,gamma) reactions. This experiment made it possible to understand why, in the case of gamma decay, the surrogate method shows large deviations from the corresponding neutron-induced reaction. This work in the rare-earth region allowed us to assess to what extent the surrogate method can be applied to extract capture probabilities in the actinide region. Previous fission experiments could also be reinterpreted. This work therefore sheds new light on the surrogate method.
APA, Harvard, Vancouver, ISO, and other styles
47

Sun, Wan-Na, and 孫婉娜. "Explore of Decision-Conflict Among Surrogate of Cancer Patient in Intensive Care Unit: Mixed-Methods Research." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/y9k238.

Full text
Abstract:
Master's thesis
Kaohsiung Medical University
Master's Program, Department of Nursing
106
Background: This study examines the decision-making process of medical surrogates of cancer patients in the intensive care unit, the factors that can easily cause decision conflict, and the experience of this process. Methods: The study used a mixed qualitative and quantitative design, divided into two stages, with convenience sampling, in an adult intensive care unit of a medical center in southern Taiwan. Participants were medical decision makers at least 20 years old who acted as surrogates of cancer patients admitted to the intensive care unit. The first stage was a cross-sectional, predictive quantitative study; data were collected twice via structured questionnaires, within three days of the patient entering and of leaving the intensive care unit. The second stage was a qualitative study with a phenomenological approach, exploring the surrogates' experience of the decision-making process through in-depth interviews and content analysis. Results: A total of 115 surrogates were enrolled in the quantitative study; most were female (57.4%), married (70.4%), employed (64.3%), educated to university level or above (47.8%), and seniors of the patients (31.3%). The study found that the age (r=0.278, p=0.003) and stress (r=0.290, p<0.01) of surrogates were positively correlated with decision conflict, while the degree of support from medical staff was negatively correlated with it (r=-0.363, p<0.01). Stepwise multiple regression analysis was used to calculate the variance in decision conflict explained by each of the variables age (sr2=5%), stress (sr2=8%), and medical staff support (sr2=16%); the total variance explained was 29%. A total of 8 surrogates took part in the qualitative interviews on the medical decision-making process.

Based on the interview content, four major themes were identified: “Use love to resist: a quiet scream”, “Dilemmas with love: Disqualification behind bars”, “Allow love to spread: An angel among us”, and “Suffocating love, difficult decision: Conjoined twin's elegy”. The first two themes stem from visiting-time restrictions in the ICU, which prevent surrogates from staying by the patient's side. The surrogates' restless speculation, worry, and suspicion about the direction and effectiveness of medical treatment, together with the significant changes in the patient's physical appearance caused by high-precision equipment and tubing, create an impact on surrogates that is hard to imagine. The third theme is unique to Chinese culture, in which expressions of love and emotion are restrained and implicit. Relationships between family members and spouses are complex and carry profound meaning; whether the emotional connections are optimistic or pessimistic, these feelings all shape what surrogates actually experience during decision making. The fourth theme concerns the sources of conflict at the moment of decision: the difficulty of measuring or imagining the prognosis and the changes that follow the decision, the gaps in communication and knowledge with the medical team, and the struggle between sibling responsibility and guilt. Conclusion: Surrogates are required to help determine medical treatment for their loved ones under circumstances in which their medical knowledge and information are relatively insufficient. The enormous physical and mental stress they bear cannot easily be understood by others, so the relationship between surrogates and the medical team tends to be tense in these situations.

The medical team can provide appropriate and sufficient support for high-conflict populations, explain the treatment regimen concisely, and proactively offer flexible visiting times. This can reduce the impact and negative feelings experienced by surrogates during the decision-making process.
APA, Harvard, Vancouver, ISO, and other styles
48

Bouffin, Nicolas. "Net pay evaluation: a comparison of methods to estimate net pay and net-to-gross ratio using surrogate variables." Thesis, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1953.

Full text
Abstract:
Net pay (NP) and net-to-gross ratio (NGR) are often crucial quantities for characterizing a reservoir and assessing the amount of hydrocarbons in place. Numerous methods have been developed in the industry to evaluate NP and NGR, depending on the intended purpose. These methods usually involve the use of cut-off values of one or more surrogate variables to discriminate non-reservoir from reservoir rocks. This study investigates statistical issues related to the selection of such cut-off values by considering the specific case of using porosity as the surrogate. Four methods are applied to permeability-porosity datasets to estimate porosity cut-off values. All the methods assume that a permeability cut-off value has been previously determined, and each method is based on minimizing the prediction error when particular assumptions are satisfied. The results show that delineating NP and evaluating NGR require different porosity cut-off values. In the case where porosity and the logarithm of permeability are jointly normally distributed, NP delineation requires the use of the Y-on-X regression line to estimate the optimal porosity cut-off, while the reduced major axis (RMA) line provides the optimal porosity cut-off value for evaluating NGR. Alternatives to the RMA and regression lines are also investigated, such as discriminant analysis and a data-oriented method using a probabilistic analysis of the porosity-permeability crossplots. Jointly normal datasets are generated to test the ability of the methods to predict accurately the optimal porosity cut-off value for sampled sub-datasets. These different methods have been compared to one another on the basis of the bias, standard error, and robustness of the estimates. A set of field data from the Travis Peak formation was used to test the performance of the methods. The conclusions of the study were confirmed when applied to the field data: as long as the initial assumptions concerning the distribution of the data are verified, it is recommended to use the Y-on-X regression line to delineate NP, while either the RMA line or discriminant analysis should be used for evaluating NGR. In the case where the assumptions on the data distribution are not verified, the quadrant method should be used.
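The central distinction in this abstract, that the Y-on-X regression line and the reduced major axis line map the same permeability cut-off to different porosity cut-offs, can be sketched on synthetic joint-normal data. The dataset, slopes, and cut-off below are invented for illustration, and conventions for which variable plays X and Y vary; here porosity is regressed on log-permeability.

```python
import numpy as np

# Synthetic joint-normal porosity / log10-permeability data (illustrative).
rng = np.random.default_rng(2)
phi = rng.normal(0.12, 0.03, 2000)                         # porosity, fraction
logk = 1.0 + 30.0 * (phi - 0.12) + rng.normal(0, 0.4, 2000)

logk_cut = 0.0                        # assumed permeability cut-off (1 mD)
C = np.cov(phi, logk)                 # 2x2 sample covariance matrix

# Regression of porosity on log-permeability: slope r * s_phi / s_logk.
b_reg = C[0, 1] / C[1, 1]
phi_cut_reg = phi.mean() + b_reg * (logk_cut - logk.mean())

# Reduced major axis line: slope s_phi / s_logk, steeper whenever |r| < 1,
# so it maps the same permeability cut-off to a different porosity cut-off.
b_rma = np.sign(C[0, 1]) * np.sqrt(C[0, 0] / C[1, 1])
phi_cut_rma = phi.mean() + b_rma * (logk_cut - logk.mean())

print(round(phi_cut_reg, 4), round(phi_cut_rma, 4))  # two different cut-offs
```

The gap between the two cut-offs grows as the porosity-permeability correlation weakens, which is why the choice of line matters for scattered field data.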
APA, Harvard, Vancouver, ISO, and other styles
49

"Evaluation of Testing Methods for Suction-Volume Change of Natural Clay Soils." Master's thesis, 2017. http://hdl.handle.net/2286/R.I.46336.

Full text
Abstract:
abstract: Design and mitigation of infrastructure on expansive soils require an understanding of unsaturated soil mechanics and consideration of two stress variables (net normal stress and matric suction). Although numerous breakthroughs have allowed geotechnical engineers to study expansive soil response to varying suction-based stress scenarios (i.e. partial wetting), such studies are not practical on typical projects because of the difficulties and long equilibration times associated with the necessary laboratory testing. Current practice relies on saturated "conventional" soil mechanics testing, with numerous empirical correlations and approximations applied to obtain an estimate of the true field response. However, full wetting rarely occurs in the field, and ignoring partial wetting conditions leads to over-conservatism in a design. Many researchers have sought to improve the estimation of soil heave/shrinkage through intensive study of the suction-based response of reconstituted clay soils. However, the natural behavior of an undisturbed clay soil sample tends to differ significantly from that of a remolded sample of the same material. In this study, laboratory techniques for the determination of soil suction were evaluated, a methodology for determining the in-situ matric suction of a soil specimen was explored, and the mechanical response of natural clay specimens to changes in matric suction was measured. Suction-controlled laboratory oedometer devices were used to impose partial wetting conditions similar to those experienced in a natural setting. The undisturbed natural soils tested in the study were obtained from Denver, CO and San Antonio, TX. Key differences between the soil water characteristic curves of the undisturbed specimens and those of conventional reconstituted specimens are highlighted. The Perko et al. (2000) and PTI (2008) methods for estimating the relationship between volume change and changes in matric suction (i.e. the suction compression index) were evaluated by comparison to the directly measured values. Lastly, the directly measured partial wetting swell strain was compared to the fully saturated, one-dimensional oedometer test (ASTM D4546) and the Surrogate Path Method (Singhal, 2010) to evaluate the estimation of partial wetting heave.
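The suction compression index mentioned above relates volumetric strain to the change in matric suction on a log scale. As a minimal sketch of that idea only (the Perko et al. and PTI formulations differ in detail, and the index and suction values below are purely hypothetical):

```python
import math

def swell_strain(gamma_h, suction_initial_kpa, suction_final_kpa):
    """Strain from a matric suction change, using a suction compression
    index gamma_h expressed as strain per log10-cycle of suction.
    Wetting (a suction decrease) yields positive (swell) strain."""
    return gamma_h * math.log10(suction_initial_kpa / suction_final_kpa)

# Hypothetical values: gamma_h = 0.05, wetting from 10 MPa to 100 kPa.
strain = swell_strain(0.05, 10000.0, 100.0)  # two log cycles of wetting
print(f"{strain:.3f}")  # 0.100
```

The log-cycle form is what makes partial wetting matter: stopping the wetting path short of saturation removes whole log cycles of suction change, and with them a proportional share of the predicted heave.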
Dissertation/Thesis
Masters Thesis Engineering 2017
50

Alves, Ana Maria da Rocha de Sousa Guedes. "Is the Iberian electricity market chaotic? Characterization and prediction with nonlinear methods." Doctoral thesis, 2013. http://hdl.handle.net/10071/6927.

Full text
Abstract:
Classification: C01, C02, C63, G17, Q41, Q47
With the paradigm shift in power systems from regulation to liberalization, the study and forecasting of electricity prices and demand have become a new topic of interest for researchers. Owing to the peculiarities of electricity, electricity markets have very specific rules that must be understood before studying them. This thesis presents a study of the Iberian Electricity Market, represented by the series of demand and prices, in the framework of nonlinear deterministic chaos. The goal of this research was to verify whether the series of demand and prices have chaotic features, reconstructing their attractors and estimating some invariants of the system such as the correlation dimension, the Kolmogorov-Sinai entropy and the Lyapunov exponents. The forecast for the next 24 hours can then be made using deterministic tools such as the method of time-delay coordinates and artificial neural networks. As a result of this research, we identified evidence that both the demand series and the electricity price series are governed by a chaotic dynamical system, and their predictions were successfully achieved.
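The attractor reconstruction step this abstract refers to is the method of time-delay coordinates. A minimal sketch follows; the embedding dimension, delay, and the noisy-sine stand-in for hourly demand data are illustrative choices, not the values used in the thesis.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct an attractor from a scalar series by time-delay
    coordinates: row t is [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Illustrative series: a noisy sine stands in for hourly demand data.
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.01 * np.random.default_rng(1).normal(size=t.size)

emb = delay_embed(x, dim=3, tau=10)
print(emb.shape)  # (1980, 3)
```

Each row of `emb` is one point of the reconstructed attractor; invariants such as the correlation dimension and Lyapunov exponents, and the 24-hour-ahead predictions, are then computed from this embedded point cloud rather than from the raw scalar series.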
APA, Harvard, Vancouver, ISO, and other styles
