Dissertations / Theses on the topic 'Surrogate Function'


1

Eisner, Mariah Claire. "Comparing Logit and Hinge Surrogate Loss Functions in Outcome Weighted Learning." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1585657996755039.
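The two surrogates named in the title are standard convex relaxations of the 0-1 classification loss. As a reading aid, a minimal sketch of both, with an illustrative outcome-weighted empirical risk (the reward/propensity-style weighting is a generic outcome weighted learning form assumed here, not necessarily the thesis' exact estimator):

    import numpy as np

    def hinge_surrogate(margin):
        # Hinge relaxation of the 0-1 loss: max(0, 1 - y*f(x))
        return np.maximum(0.0, 1.0 - margin)

    def logit_surrogate(margin):
        # Logistic relaxation of the 0-1 loss: log(1 + exp(-y*f(x)))
        return np.log1p(np.exp(-margin))

    def owl_empirical_risk(surrogate, margins, weights):
        # Outcome weighted learning reweights each observation's surrogate
        # loss, e.g. by observed reward divided by treatment propensity.
        return np.mean(weights * surrogate(margins))

    margins = np.array([0.8, -0.2, 1.5])   # y * f(x), illustrative values
    weights = np.array([2.0, 0.5, 1.0])
    print(owl_empirical_risk(hinge_surrogate, margins, weights))
    print(owl_empirical_risk(logit_surrogate, margins, weights))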

2

Smith, Nicola Marianne Godwin. "Characterisation of T cell surface phenotype and effector function in a surrogate model of rheumatoid arthritis." Thesis, Imperial College London, 2009. http://hdl.handle.net/10044/1/4391.

Abstract:
TNFα plays a pivotal role in the pathogenesis of rheumatoid arthritis (RA); however, the mechanisms underlying its dysregulation are not completely understood. TNFα production by macrophages is dependent on their contact with synovial T cells. In an in vitro model of RA, peripheral blood lymphocytes stimulated with a cocktail of cytokines mimic this RA T cell effector function. This thesis defines and characterises the effector population of cytokine-activated human T cells through two different approaches. Studies presented here show that, within a population of cytokine-activated T cells, CD4+CD45RO+CCR7− cells induce the highest levels of TNFα when co-cultured with monocytes. Cytokine-activated CD4+ memory T cells phenotypically and functionally resemble lymphocytes isolated from RA synovial tissue. The cytokine cocktail induces proliferation and differentiation of peripheral blood T cells into highly potent effectors. These cells upregulate specific activation markers, adhesion molecules and chemokine receptors, such as CD25, CD69, CD62L, VLA-4, LFA-1 and CXCR4, which directly or indirectly contribute to the induction of TNFα. By defining the phenotype of the lymphocytes most capable of inducing TNFα in our model, I isolated a population of T cells on which to focus my studies. The molecular nature of contact-dependent monocyte activation by cytokine-activated T cells was further investigated through proteomic profiling of the T cell surface. Plasma membrane protein-enriched samples were resolved in one and two dimensions. Subsequent mass spectrometry identified two molecules of interest. CD97 was found to be highly expressed by cytokine-activated CD4+ memory T cells, and contributed to both the induction of monocyte TNFα and spontaneous TNFα release from rheumatoid synovial tissue. Expression of CD81 and other tetraspanin family members increased on cytokine activation and was observed in synovial tissue. The results presented in this thesis provide further insight into the contribution of T cells in RA.
3

Sarmiento Alam, Natalia Catalina. "Structure and function of the surrogate light chain." Doctoral thesis, supervised by Johannes Buchner; reviewers: Bernd Reif and Johannes Buchner. München: Universitätsbibliothek der TU München, 2015. http://d-nb.info/1133261825/34.

4

Yu, Jiaqian. "Minimisation du risque empirique avec des fonctions de perte nonmodulaires." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLC012/document.

Abstract:
This thesis addresses the problem of learning with non-modular losses. In prediction problems where multiple outputs are predicted simultaneously, viewing the outcome as a joint set prediction is essential in order to better incorporate real-world circumstances. In empirical risk minimization, we aim at minimizing an empirical sum over the losses incurred on a finite sample, with some loss function that penalizes the prediction given the ground truth. In this thesis, we propose tractable and efficient methods for dealing with non-modular loss functions, with correctness and scalability validated by empirical results. First, we present the hardness of incorporating supermodular loss functions into the inference term when they have different graphical structures. We then introduce an alternating direction method of multipliers (ADMM) based decomposition method for loss-augmented inference, which depends only on two individual solvers, one for the loss function term and one for the inference term, treated as two independent subproblems. Second, we propose a novel surrogate loss function for submodular losses, the Lovász hinge, which leads to O(p log p) complexity with O(p) oracle accesses to the loss function to compute a subgradient or cutting plane. Finally, we introduce a novel convex surrogate operator for general non-modular loss functions, which provides for the first time a tractable solution for loss functions that are neither supermodular nor submodular. This surrogate is based on a canonical submodular-supermodular decomposition.
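As an illustration of the construction named above, a minimal sketch of a Lovász-extension-based hinge for a submodular set loss; the sorting convention, the thresholding of margins and the example set loss are assumptions drawn from the abstract, not the thesis' exact definitions:

    import numpy as np

    def lovasz_extension(set_loss, s):
        # Lovász extension of a set function (with set_loss(empty) = 0),
        # evaluated at s >= 0; costs O(p log p) plus p oracle calls.
        order = np.argsort(-s)                 # decreasing violations
        mask = np.zeros(len(s), dtype=bool)
        value, prev = 0.0, 0.0
        for i in order:
            mask[i] = True
            cur = set_loss(mask)               # oracle call on the growing set
            value += s[i] * (cur - prev)
            prev = cur
        return value

    def lovasz_hinge(set_loss, scores, labels):
        # Convex surrogate: extension evaluated at thresholded hinge margins.
        margins = np.maximum(1.0 - labels * scores, 0.0)
        return lovasz_extension(set_loss, margins)

    # Example submodular loss: 1 if any item is mispredicted (worst-case loss).
    any_error = lambda mask: float(mask.any())
    print(lovasz_hinge(any_error, np.array([0.5, -2.0, 3.0]),
                       np.array([1.0, -1.0, 1.0])))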
5

Alinejad, Farhad. "Development of advanced criteria for blade root design and optimization." Doctoral thesis, Politecnico di Torino, 2018. http://hdl.handle.net/11583/2711560.

Abstract:
In gas and steam turbine engines, blade root attachments are considered critical components that require special attention in design. The traditional method of root design demanded highly experienced engineers, and even then the strength of the material was not fully exploited in most cases. In this thesis, different methodologies for automatic design and optimization of the blade root are evaluated, and methods for reducing the computational time are proposed. First, a simplified analytical model of the fir-tree was developed to evaluate the mean stress in different sections of the blade root and disc groove. Then, a more detailed two-dimensional shape of the attachment, suitable for finite element (FE) analysis, was developed for the dovetail and the fir-tree. The model was kept general so as to include all possible shapes of the attachment. The projection of the analytical model onto the 2D model was then performed to compare the results obtained from the analytical and FE methods; this comparison is essential for the later use of the analytical evaluation of the fir-tree as a technique for reducing the optimization search domain. Moreover, the possibility of predicting the contact normal stress of the blade and disc attachment by means of a punch test was evaluated. A puncher composed of a flat surface and a rounded edge was simulated as equivalent to a sample dovetail case, and the contact stress profiles obtained analytically and from 2D and 3D models of the puncher and the dovetail were compared. The genetic algorithm (GA) used as the optimizer is described, and the different rules affecting this algorithm are introduced. To reduce the number of calls to the high-fidelity finite element method, surrogate functions were evaluated, and among them the Kriging function was selected for use in this study; its efficiency was evaluated in a numerical optimization of a single lobe. The surrogate model is not used on its own to find the optimum attachment shape, as it may provide low accuracy. Instead, to benefit from its fast evaluation while mitigating its low accuracy, the Kriging function (KRG) was used within the GA as a pre-evaluation of each candidate before performing FE analysis. Moreover, the feasible and non-feasible regions of the multi-dimensional, complex search domain of the attachment geometry are described, and the challenge of a multi-district domain is tackled with a new mutation operation. To handle the non-continuous domain, an adaptive penalty method based on Latin hypercube sampling (LHS) is proposed, which successfully improved the optimization convergence. Furthermore, different contact topologies of a dovetail were assessed: four different types of contact were modeled and optimized under the same loading and boundary conditions, the punch test was assessed with different contact shapes, and the state of stress of the dovetail at different rotational speeds with different types of contact was evaluated. In the results and discussion, an optimization of a dovetail with the analytical approach was performed and the optimum was compared with the one obtained by FE analysis. The analytical approach was found to have the advantage of fast evaluation, and if the constraints are well defined its results are comparable to the FE solution. Then, a Kriging function was embedded within the GA optimization and the approach was evaluated in an optimization of a dovetail. The results revealed that the low computational cost of the surrogate model is an advantage and that its low accuracy can be mitigated through the collaboration of FE and surrogate models. The capability of employing the analytical approach in a fir-tree optimization was then assessed; since the fir-tree geometry has a more complex working domain than the dovetail, results obtained there should also hold for the dovetail. Different methods were assessed and compared. In a first attempt, the analytical approach was adopted as a filter to screen out the least promising candidates; this method provided a 7% improvement in convergence. In another attempt, the proposed adaptive penalty method was added to the optimization, which successfully found a reasonable optimum with a 47% reduction in computational cost. Next, a combination of analytical and FE models was used in a multi-objective, multi-level optimization, which provided a 32% improvement with less error than the previous method. In the last evaluation of this type, the analytical approach was used on its own in a multi-objective optimization whose results were checked by an FE evaluation of the fittest candidates; although this approach provided an 86% reduction in computational time, it depends highly on the case under investigation and provides low accuracy in the final solution. Furthermore, a robust optimum was found for both the dovetail and the fir-tree in a multi-objective optimization; in this trial, the proposed adaptive penalty method was combined with the surrogate model.
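A schematic sketch of the Kriging-as-pre-evaluation idea described above, with scikit-learn standing in for the thesis' Kriging implementation; the stress function, design dimensions and the keep fraction are invented placeholders:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_fe_stress(x):
        # Stand-in for a full finite element analysis of one root geometry.
        return float(np.sum((x - 0.3) ** 2))

    rng = np.random.default_rng(0)
    X = rng.random((30, 4))                         # designs already run in FE
    y = np.array([expensive_fe_stress(x) for x in X])
    kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                       normalize_y=True).fit(X, y)

    def pre_screen(candidates, keep=0.25):
        # Rank GA offspring by the cheap Kriging prediction and pass only
        # the most promising fraction on to the expensive FE solver.
        mu = kriging.predict(candidates)
        best = np.argsort(mu)[: max(1, int(keep * len(candidates)))]
        return candidates[best]

    survivors = pre_screen(rng.random((40, 4)))
    fe_values = [expensive_fe_stress(x) for x in survivors]  # FE on survivors only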
6

Hinkle, Kurt Berlin. "An Automated Method for Optimizing Compressor Blade Tuning." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/6230.

Abstract:
Because blades in jet engine compressors are subject to dynamic loads based on the engine's speed, it is essential that the blades are properly "tuned" to avoid resonance at those frequencies to ensure safe operation of the engine. The tuning process can be time consuming for designers because there are many parameters controlling the geometry of the blade and, therefore, its resonance frequencies. Humans cannot easily optimize design spaces consisting of multiple variables, but optimization algorithms can effectively optimize a design space with any number of design variables. Automated blade tuning can reduce design time while increasing the fidelity and robustness of the design. Using surrogate modeling techniques and gradient-free optimization algorithms, this thesis presents a method for automating the tuning process of an airfoil. Surrogate models are generated to relate airfoil geometry to the modal frequencies of the airfoil. These surrogates enable rapid exploration of the entire design space. The optimization algorithm uses a novel objective function that accounts for the contribution of every mode's value at a specific operating speed on a Campbell diagram. When the optimization converges on a solution, the new blade parameters are output to the designer for review. This optimization guarantees a feasible solution for tuning of a blade. With 21 geometric parameters controlling the shape of the blade, the geometry for an optimally tuned blade can be determined within 20 minutes.
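A toy sketch of the kind of tuning objective the abstract describes: every mode's proximity to integer engine-order excitation lines at the operating speed contributes a penalty on the Campbell diagram. The margin measure and the penalty shape are invented, not the thesis' actual objective:

    import numpy as np

    def campbell_objective(modal_freqs_hz, speed_rpm, max_order=10):
        # Penalize modes that sit near an engine-order line N * rev frequency;
        # a well-tuned blade maximizes each mode's margin to the nearest line.
        rev_hz = speed_rpm / 60.0
        orders = np.arange(1, max_order + 1) * rev_hz
        penalty = 0.0
        for f in modal_freqs_hz:
            margin = np.min(np.abs(orders - f)) / rev_hz   # in order units
            penalty += 1.0 / (margin + 1e-6)               # large when resonant
        return penalty

    # The modal frequencies would come from the surrogate mapping the 21
    # geometric blade parameters to frequencies; dummy values here:
    print(campbell_objective([480.0, 950.0, 1530.0], speed_rpm=12000))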
7

Tancred, James Anderson. "Aerodynamic Database Generation for a Complex Hypersonic Vehicle Configuration Utilizing Variable-Fidelity Kriging." University of Dayton / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1543801033672049.

8

Guo, Xiao. "Bayesian surrogates for functional response modeling and metamaterial rapid design." HKBU Institutional Repository, 2017. http://repository.hkbu.edu.hk/etd_oa/418.

Abstract:
In many scientific and engineering research areas, Bayesian surrogate models are utilized to handle nonlinear data in regression and classification tasks. In this thesis, we consider a real-life problem, functional response modeling of metamaterials and their rapid design, for which we establish and test such models. To familiarize the reader with this subject, some fundamental electromagnetic physics is provided. Noticing that the dispersive data are usually in rational form, a two-stage modeling approach is proposed: in the first stage, a universal link function is formulated to rationally approximate the data with a few discrete parameters, namely poles and residues; these are then used to synthesize equivalent circuits, and surrogate models are applied to the circuit elements in the second stage. To start with a regression scheme, the classical Gaussian process (GP) is introduced, which proceeds by parameterizing a covariance function of the continuous inputs and inferring its hyperparameters from the training data. Two metamaterial prototypes illustrate the methodology of model building, with results demonstrating the efficiency and precision of the probabilistic predictions. One well-known problem with metamaterial functionality is its great variability in resonance identities, which shows up as discrepancies in the approximation orders required to fit the data with rational functions. In order to give accurate predictions, both the approximation order and the circuit elements present should be inferred, by classification and regression respectively. An augmented Bayesian surrogate model, which integrates GP multiclass classification and Bayesian treed GP regression, is formulated to deal systematically with this unique physical phenomenon, while keeping the nonstationarity and computational complexity well under control. Finally, probabilistic assessment of the underlying uncertainties, one of the most advantageous properties of the Bayesian perspective, is also discussed and demonstrated with detailed formulations and examples.
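The "rational form" with poles and residues mentioned above is the standard partial-fraction model of a dispersive response; a minimal evaluation sketch with placeholder values (generic notation, not the thesis' link function):

    import numpy as np

    def rational_response(freqs_hz, poles, residues, d=0.0):
        # H(s) = d + sum_k r_k / (s - p_k), evaluated on the j*omega axis.
        # Complex poles and residues come in conjugate pairs for real data.
        s = 2j * np.pi * np.asarray(freqs_hz)
        H = np.full(s.shape, d, dtype=complex)
        for p, r in zip(poles, residues):
            H += r / (s - p)
        return H

    # One conjugate pole pair giving a resonance near 1 GHz (placeholders):
    poles = [-1e8 + 2j * np.pi * 1e9, -1e8 - 2j * np.pi * 1e9]
    residues = [5e8, 5e8]
    print(np.abs(rational_response([0.9e9, 1.0e9, 1.1e9], poles, residues)))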
9

Riley, Mike J. W. "Evaluating cascade correlation neural networks for surrogate modelling needs and enhancing the Nimrod/O toolkit for multi-objective optimisation." Thesis, Cranfield University, 2011. http://dspace.lib.cranfield.ac.uk/handle/1826/6796.

Abstract:
Engineering design often requires the optimisation of multiple objectives, and becomes significantly more difficult and time consuming when the response surfaces are multimodal rather than unimodal. A surrogate model, also known as a metamodel, can be used to replace expensive computer simulations, accelerating single- and multi-objective optimisation and the exploration of new design concepts. The main research focus of this work is to investigate the use of a neural network surrogate model to improve optimisation of multimodal surfaces. Several significant contributions derive from evaluating the Cascade Correlation neural network as the basis of a surrogate model; the contributions to the neural network community ultimately outnumber those to the optimisation community. The effects of training this surrogate on multimodal test functions are explored. The Cascade Correlation neural network is shown to map such response surfaces poorly. A hypothesis for this weakness is formulated and tested, and a new subdivision technique is created that addresses the problem; however, this new technique requires excessively large datasets upon which to train. The primary conclusion of this work is that Cascade Correlation neural networks form an unreliable basis for a surrogate model, despite successes reported in the literature. A further contribution of this work is the enhancement of an open source optimisation toolkit, achieved by the first integration of a truly multi-objective optimisation algorithm.
10

Wikström, Jonas. "3D Model of Fuel Tank for System Simulation : A methodology for combining CAD models with simulation tools." Thesis, Linköpings universitet, Maskinkonstruktion, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-71370.

Abstract:
Engineering aircraft systems is a complex task. Models and computer simulations are therefore needed to test the functions and behaviours of systems that do not yet exist, reduce testing time and cost, reduce the risks involved, and detect problems early, which reduces the number of implementation errors. At the section Vehicle Simulation and Thermal Analysis at Saab Aeronautics in Linköping, every basic aircraft system is designed and simulated, for example the fuel system. Currently, 2-dimensional rectangular blocks are used in the simulation model to represent the fuel tanks. However, this is too simplistic to allow a more detailed analysis; the model needs to be extended with a more complex description of the tank geometry in order to become more accurate. This report explains the steps of the developed methodology for combining 3-dimensional geometry models of any fuel tank created in CATIA with dynamic simulation of the fuel system in Dymola. The new 3-dimensional representation of the tank in Dymola should be able to calculate the fuel surface location during simulation of a maneuvering aircraft. The first step of the methodology is to create a solid model of the fuel contents in the tank. Then the area of validity for the model is specified; in this step, all possible orientations of the fuel acceleration vector within the area of validity are generated. All these orientations are used in the automated volume analysis in CATIA. For each orientation, CATIA splits the fuel body into a specified number of volumes and records the volume, the location of the fuel surface and the location of the center of gravity. This recorded data is then approximated using radial basis functions implemented in MATLAB, where a surrogate model is created and subsequently implemented in Dymola. In this way, any fuel surface location and center of gravity can be calculated efficiently, based on the orientation of the fuel acceleration vector and the amount of fuel. The new 3-dimensional tank model is simulated in Dymola and the results are compared with measurements from the model in CATIA and with the results from the simulation of the old 2-dimensional tank model. The results show that the 3-dimensional tank gives a better approximation of reality and a large improvement over the 2-dimensional tank model. The downside is that it takes approximately 24 hours to develop this model.
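A minimal sketch of the surrogate step described above: radial basis interpolation from (fuel acceleration direction, fuel volume) to surface and center-of-gravity outputs. The parametrization and the placeholder data are assumptions; in the methodology the outputs would come from the automated CATIA analysis:

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    # inputs: [accel azimuth, accel elevation, fuel volume fraction]
    inputs = rng.uniform([-np.pi, -0.5, 0.0], [np.pi, 0.5, 1.0], (200, 3))
    # outputs: [surface height, cg_x, cg_y, cg_z]; random placeholders here,
    # recorded by the CATIA volume analysis in the real workflow.
    outputs = rng.normal(size=(200, 4))

    surrogate = RBFInterpolator(inputs, outputs)   # thin plate spline default

    # During a Dymola simulation, the surrogate replaces CATIA look-ups:
    state = np.array([[0.1, 0.0, 0.42]])
    surface_height, cg_x, cg_y, cg_z = surrogate(state)[0]
    print(surface_height, cg_x, cg_y, cg_z)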
11

Olofsson, Karl-Johan. "Black-box optimization of simulated light extraction efficiency from quantum dots in pyramidal gallium nitride structures." Thesis, Linköpings universitet, Matematiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162235.

Abstract:
Microsized hexagonal gallium nitride pyramids show promise as next-generation light emitting diodes (LEDs) due to certain quantum properties within the pyramids. One metric for evaluating the efficiency of an LED device is its light extraction efficiency (LEE). To calculate the LEE for different pyramid designs, simulations can be performed using the FDTD method. Maximizing the LEE is treated as a black-box optimization problem with an interpolation method that utilizes radial basis functions. A simple heuristic is implemented and tested for various pyramid parameters. The LEE is shown to be highly dependent on the pyramid size, the source position and the polarization. Under certain circumstances, an LEE over 17% is found above the pyramid. In some situations, however, the results are very sensitive to the simulation parameters and do not converge properly; establishing convergence for all simulation evaluations requires further care. The results imply that a high LEE is possible for the pyramids, which motivates the need for further research.
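A compact sketch of a generic surrogate-assisted black-box loop of the kind alluded to above: fit an RBF interpolant to the simulator samples, minimize the interpolant, evaluate the true simulator at the proposal, and refit. The specific heuristic used in the thesis may differ, and the objective below is a cheap stand-in for an FDTD run:

    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize

    def expensive_lee(x):
        # Stand-in for one FDTD simulation returning negative LEE (minimized).
        return -np.exp(-np.sum((x - 0.6) ** 2))

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, (8, 2))              # initial pyramid designs
    y = np.array([expensive_lee(x) for x in X])

    for _ in range(15):
        rbf = RBFInterpolator(X, y)
        res = minimize(lambda x: rbf(x[None])[0], x0=X[np.argmin(y)],
                       bounds=[(0, 1), (0, 1)])
        x_new = res.x + 1e-6 * rng.standard_normal(2)  # avoid duplicate points
        X = np.vstack([X, x_new])              # one new FDTD run per iteration
        y = np.append(y, expensive_lee(x_new))

    print("best design:", X[np.argmin(y)], "LEE:", -y.min())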
12

Fowkes, Jaroslav Mrazek. "Bayesian numerical analysis : global optimization and other applications." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:ab268fe7-f757-459e-b1fe-a4a9083c1cba.

Abstract:
We present a unifying framework for the global optimization of functions which are expensive to evaluate. The framework is based on a Bayesian interpretation of radial basis function interpolation which incorporates existing methods such as Kriging, Gaussian process regression and neural networks. This viewpoint enables the application of Bayesian decision theory to derive a sequential global optimization algorithm which can be extended to include existing algorithms of this type in the literature. By posing the optimization problem as a sequence of sampling decisions, we optimize a general cost function at each stage of the algorithm. An extension to multi-stage decision processes is also discussed. The key idea of the framework is to replace the underlying expensive function by a cheap surrogate approximation. This enables the use of existing branch and bound techniques to globally optimize the cost function. We present a rigorous analysis of the canonical branch and bound algorithm in this setting as well as newly developed algorithms for other domains including convex sets. In particular, by making use of Lipschitz continuity of the surrogate approximation, we develop an entirely new algorithm based on overlapping balls. An application of the framework to the integration of expensive functions over rectangular domains and spherical surfaces in low dimensions is also considered. To assess performance of the framework, we apply it to canonical examples from the literature as well as an industrial model problem from oil reservoir simulation.
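In the Kriging/GP instance of the framework described above, the sequential sampling decision is often an expected improvement step; a minimal generic sketch in that spirit (the thesis' general cost function and branch-and-bound machinery are not reproduced here, and the objective is a cheap stand-in):

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expected_improvement(gp, X_cand, y_best):
        # Expected one-step decrease below the incumbent under the GP posterior.
        mu, sd = gp.predict(X_cand, return_std=True)
        z = (y_best - mu) / np.maximum(sd, 1e-12)
        return (y_best - mu) * norm.cdf(z) + sd * norm.pdf(z)

    f = lambda x: np.sin(3 * x[:, 0]) + 0.1 * x[:, 0] ** 2   # stand-in objective
    X = np.array([[0.0], [1.5], [3.0]]); y = f(X)
    for _ in range(10):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        cand = np.linspace(-2, 4, 200)[:, None]
        i = np.argmax(expected_improvement(gp, cand, y.min()))
        X = np.vstack([X, cand[i]])            # sample where EI is largest
        y = np.append(y, f(cand[i:i + 1]))

    print("best point:", X[np.argmin(y)], "value:", y.min())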
13

Marque-Pucheu, Sophie. "Gaussian process regression of two nested computer codes." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC155/document.

Abstract:
This thesis deals with the Gaussian process emulation (metamodeling) of two coupled computer codes. "Two coupled codes" here means a system of two chained codes: the output of the first code is one of the inputs of the second code. Both codes are expensive, and in order to carry out a sensitivity analysis of the output of the coupled system, we seek to build a surrogate model of this output from a small number of observations. Three types of observations of the system exist: those of the chained code, those of the first code only and those of the second code only. The surrogate model has to be accurate in the most likely regions of the input domain of the nested code. In this work, the surrogate models are constructed using the universal Kriging framework, with a Bayesian approach. First, the case when there is no information about the intermediary variable (the output of the first code) is addressed. An innovative parametrization of the mean function of the Gaussian process modeling the nested code is proposed, based on the coupling of two polynomials. Then, the case with intermediary observations is addressed. A stochastic predictor based on the coupling of the predictors associated with the two codes is proposed, along with methods for computing its mean and variance quickly. Finally, the methods obtained for codes with scalar outputs are extended to codes with high-dimensional vectorial outputs. We propose an efficient dimension reduction method for the high-dimensional vectorial input of the second code, in order to facilitate the Gaussian process regression of this code. All the proposed methods are applied to numerical examples.
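A schematic of one generic way to couple the two predictors: sample the intermediate variable from the first GP's posterior and push the samples through the second GP, combining by the law of total variance. The thesis develops faster and more refined formulas; the two codes below are cheap stand-ins:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(2)
    code1 = lambda x: np.sin(x).ravel()             # expensive code 1 (stand-in)
    code2 = lambda t: (t ** 2 + 0.5 * t).ravel()    # expensive code 2 (stand-in)

    X1 = rng.uniform(0, 3, (15, 1))
    gp1 = GaussianProcessRegressor(normalize_y=True).fit(X1, code1(X1))
    T2 = rng.uniform(-1, 1, (15, 1))
    gp2 = GaussianProcessRegressor(normalize_y=True).fit(T2, code2(T2))

    def nested_predict(x, n_mc=500):
        # Monte Carlo coupling: propagate the posterior of the intermediate
        # variable through the second emulator.
        mu1, sd1 = gp1.predict(np.atleast_2d(x), return_std=True)
        t = rng.normal(mu1[0], sd1[0], (n_mc, 1))
        mu2, sd2 = gp2.predict(t, return_std=True)
        return mu2.mean(), np.sqrt(np.mean(sd2 ** 2) + mu2.var())

    print(nested_predict(1.2))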
14

Amouzgar, Kaveh. "Metamodel based multi-objective optimization." Licentiate thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH. Forskningsmiljö Produktutveckling - Simulering och optimering, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-28432.

Abstract:
As a result of the increased accessibility of computational resources and the increased power of computers during the last two decades, designers are able to create computer models to simulate the behavior of complex products. To address global competitiveness, companies are forced to optimize their designs and products. Optimizing a design requires many runs of computationally expensive simulation models, so using metamodels as efficient and sufficiently accurate approximations of the simulation model is necessary. Radial basis functions (RBF) constitute one of several metamodeling methods found in the literature. The established approach is to add a bias to the RBF in order to obtain robust performance: the a posteriori bias is treated as unknown at the outset and is determined by imposing extra orthogonality constraints. In this thesis, a new approach to constructing RBF metamodels is proposed in which the bias is set a priori by using the normal equation. The performance of the suggested approach is compared to the classic RBF with a posteriori bias, and a further comprehensive comparison study covering several modeling criteria, such as problem dimension, sampling technique and sample size, is conducted. The studies demonstrate that the suggested approach with a priori bias performs, in general, as well as RBF with a posteriori bias. With the a priori RBF, it is clear that the global response is modeled by the bias and that the details are captured by the radial basis functions. Multi-objective optimization and the approaches used to solve such problems are briefly described in this thesis. One method that has proved efficient in solving multi-objective optimization problems (MOOPs) is the strength Pareto evolutionary algorithm (SPEA2). A multi-objective optimization of the disc brake system of a heavy truck using SPEA2 and RBF with a priori bias is performed; as a result, the possibility of reducing the weight of the system without extensive compromise of the other objectives is demonstrated. A multi-objective optimization of the material model parameters of an adhesive layer, aimed at improving the results of a previous study, is also carried out. The result of the original study is improved and a clear insight into the nature of the problem is gained.
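A minimal sketch of the a priori bias idea as described: fix a (here linear) bias first by solving the normal equation, then let radial basis functions interpolate the residual detail. The exact basis and bias used in the thesis may differ:

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(4)
    X = rng.uniform(-1, 1, (40, 2))
    y = 1.0 + 2 * X[:, 0] - X[:, 1] + 0.3 * np.sin(4 * X[:, 0])

    # A priori bias: solve the normal equation for the global linear trend.
    A = np.hstack([np.ones((len(X), 1)), X])
    beta = np.linalg.solve(A.T @ A, A.T @ y)

    # The radial basis functions then only capture the local detail.
    rbf = RBFInterpolator(X, y - A @ beta)

    def predict(Xq):
        return np.hstack([np.ones((len(Xq), 1)), Xq]) @ beta + rbf(Xq)

    print(predict(np.array([[0.2, -0.3]])))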
15

Postel-Vinay, Sophie. "Synthetic lethality and functional study of DNA repair defects in ERCC1-deficient non-small-cell lung cancer." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA11T094/document.

Abstract:
Excision Repair Cross-Complementation group 1 (ERCC1) is a DNA repair enzyme that is frequently deficient in non-small cell lung cancer (NSCLC). Although low ERCC1 expression correlates with platinum sensitivity, the clinical effectiveness of platinum therapy is limited, mainly by toxicities and the occurrence of resistance, highlighting the need for alternative treatment strategies. In addition, the lack of a reliable assay evaluating ERCC1 functionality in the clinical setting currently precludes personalising therapy based on ERCC1 status. To discover new synthetic lethality-based therapeutic strategies for ERCC1-defective tumours, high-throughput drug and siRNA screens were performed in an isogenic NSCLC model of ERCC1 deficiency. This approach identified multiple clinical poly(ADP-ribose) polymerase 1 and 2 (PARP1/2) inhibitors, such as olaparib (AZD-2281), niraparib (MK-4827) and BMN 673, as being selective for ERCC1 deficiency. The mechanism underlying these ERCC1-selective effects was dissected by studying molecular biomarkers of tumour cell response, and revealed that: (i) ERCC1-deficient cells displayed a significant delay in double-strand break repair associated with a profound and prolonged G2/M arrest following PARP1/2 inhibitor treatment; (ii) ERCC1 isoform 202, which has recently been shown to mediate platinum sensitivity, also modulated PARP1/2 inhibitor sensitivity; (iii) ERCC1 deficiency was epistatic with homologous recombination deficiency, although ERCC1-deficient cells did not display a defect in RAD51 foci formation, suggesting that ERCC1 might be required to process PARP1/2 inhibitor-induced DNA lesions prior to DNA strand invasion; and (iv) PARP1 silencing restored PARP1/2 inhibitor resistance in ERCC1-deficient cells but had no effect in ERCC1-proficient cells, supporting the hypothesis that PARP1 might be required for the ERCC1 selectivity of PARP1/2 inhibitors. This study indicated that PARP1/2 inhibitors as a monotherapy could represent a novel therapeutic strategy for NSCLC patients with ERCC1-deficient tumours, and a clinical protocol is being written to evaluate this hypothesis. To investigate whether a surrogate biomarker of ERCC1 functionality could be developed, four parallel approaches were undertaken in the ERCC1-isogenic NSCLC model: (i) UV irradiation, to evaluate the nucleotide excision repair (NER) pathway; (ii) whole exome sequencing, to look for an ERCC1-associated genomic scar at the DNA level; (iii) transcriptomic analysis, to investigate changes at the RNA expression level; and (iv) SILAC (Stable Isotope Labeling by Amino acids in Cell culture) analysis, to compare proteomic profiles between ERCC1-proficient and ERCC1-deficient cells. These approaches allowed the identification of a putative genomic signature and potential metabolic surrogate biomarkers, guanine deaminase (GDA) and nicotinamide phosphoribosyltransferase (NAMPT). Further validation and mechanistic investigation of these preliminary observations are warranted.
16

Lu, Ruijin. "Scalable Estimation and Testing for Complex, High-Dimensional Data." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/93223.

Abstract:
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, etc. These data provide a rich source of information on disease development, cell evolvement, engineering systems, and many other scientific phenomena. To achieve a clearer understanding of the underlying mechanism, one needs a fast and reliable analytical approach to extract useful information from the wealth of data. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex data, powerful testing of functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a wavelet-based approximate Bayesian computation approach that is likelihood-free and computationally scalable. This approach will be applied to two applications: estimating mutation rates of a generalized birth-death process based on fluctuation experimental data and estimating the parameters of targets based on foliage echoes. The second part focuses on functional testing. We consider using multiple testing in basis-space via p-value guided compression. Our theoretical results demonstrate that, under regularity conditions, the Westfall-Young randomization test in basis space achieves strong control of family-wise error rate and asymptotic optimality. Furthermore, appropriate compression in basis space leads to improved power as compared to point-wise testing in data domain or basis-space testing without compression. The effectiveness of the proposed procedure is demonstrated through two applications: the detection of regions of spectral curves associated with pre-cancer using 1-dimensional fluorescence spectroscopy data and the detection of disease-related regions using 3-dimensional Alzheimer's Disease neuroimaging data. The third part focuses on analyzing data measured on the cortical surfaces of monkeys' brains during their early development, and subjects are measured on misaligned time markers. In this analysis, we examine the asymmetric patterns and increase/decrease trend in the monkeys' brains across time.
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, and biological measurements. These data provide a rich source of information on disease development, engineering systems, and many other scientific phenomena. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex biological and engineering data, powerful testing of high-dimensional functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a computation-based statistical approach that achieves efficient parameter estimation scalable to high-dimensional functional data. The second part focuses on developing a powerful testing method for functional data that can be used to detect important regions. We will show nice properties of our approach. The effectiveness of this testing approach will be demonstrated using two applications: the detection of regions of the spectrum that are related to pre-cancer using fluorescence spectroscopy data and the detection of disease-related regions using brain image data. The third part focuses on analyzing brain cortical thickness data, measured on the cortical surfaces of monkeys’ brains during early development. Subjects are measured on misaligned time-markers. By using functional data estimation and testing approach, we are able to: (1) identify asymmetric regions between their right and left brains across time, and (2) identify spatial regions on the cortical surface that reflect increase or decrease in cortical measurements over time.
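A small sketch of the single-step Westfall-Young maxT permutation adjustment that the testing part builds on, applied to basis-space coefficients. The two-sample t statistic and the synthetic data are illustrative placeholders, and the dissertation's p-value-guided compression step is omitted:

    import numpy as np

    def westfall_young_maxT(coef_a, coef_b, n_perm=1000, seed=0):
        # FWER-adjusted p-values: compare each coefficient's statistic with
        # the permutation distribution of the maximum statistic.
        rng = np.random.default_rng(seed)
        pooled = np.vstack([coef_a, coef_b]); n_a = len(coef_a)
        def tstat(a, b):
            se = np.sqrt(a.var(0) / len(a) + b.var(0) / len(b)) + 1e-12
            return np.abs(a.mean(0) - b.mean(0)) / se
        t_obs = tstat(coef_a, coef_b)
        max_null = np.empty(n_perm)
        for i in range(n_perm):
            idx = rng.permutation(len(pooled))
            max_null[i] = tstat(pooled[idx[:n_a]], pooled[idx[n_a:]]).max()
        return (max_null[None, :] >= t_obs[:, None]).mean(axis=1)

    # Rows: subjects; columns: basis (e.g. wavelet) coefficients per group.
    rng = np.random.default_rng(1)
    p_adj = westfall_young_maxT(rng.normal(0, 1, (20, 50)),
                                rng.normal(0.8, 1, (25, 50)))
    print((p_adj < 0.05).sum(), "coefficients flagged")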
17

Silva, Lidiane Cristina da. "A comunidade zooplanctônica de rios amazônicos na área de influência da Usina Hidrelétrica de Santo Antônio do Madeira, RO: diferentes abordagens no monitoramento." Universidade Federal de São Carlos, 2015. https://repositorio.ufscar.br/handle/ufscar/1848.

Abstract:
The growing worldwide interest in the Amazon region has raised a wide range of issues, among them issues of ecological importance aimed at promoting the conservation and maintenance of the region's natural resources. This study analyzed the structure of the zooplankton community and its patterns of spatial and temporal distribution in the Madeira River and its tributaries in the area of influence of the Santo Antônio HPP, before, during and after its construction, in order to verify the occurrence of possible changes. Quarterly samplings were carried out over four years for the physical and chemical variables and for zooplankton. The zooplankton community structure varied widely depending on hydrological events, but changed little after the filling of the reservoir, probably because the HPP is of the run-of-river type. Changes in zooplankton density were smaller than those observed in the large storage reservoirs previously built in the Amazon; however, there were changes in the biomass proportions of the different zooplankton groups. Considering the functional approach, in all the rivers analyzed it was observed that before the filling there was greater selection for r-strategist and smaller species, whereas after the filling a greater number of functional groups coexisted. In relation to the components of diversity, greater evenness values were recorded in the last years of sampling. For the other functional and taxonomic indices (richness, Shannon, FDis and FEev), no differences related to the impoundment of the Madeira River were detected. Considering the surrogate approach, the concordance values obtained between zooplankton groups, and also among taxonomic classification levels, were of low significance, which prevents them from being used separately in biomonitoring studies in the region. We conclude that the Madeira River and its tributaries have high zooplankton diversity and that, so far, the changes in the community have been small, presumably owing to the maintenance of the short residence time. The results show that different approaches, both functional and taxonomic, evaluated together, can be a valuable step toward understanding the relationship between ecological patterns, management practices and the production of ecosystem services.
18

Boutoux, Guillaume. "Sections efficaces neutroniques via la méthode de substitution." PhD thesis, Bordeaux 1, 2011. http://tel.archives-ouvertes.fr/tel-00654677.

Abstract:
Neutron-induced cross sections of short-lived nuclei are crucial data for fundamental and applied physics, in fields such as reactor physics or nuclear astrophysics. In general, the extreme radioactivity of these nuclei does not allow direct neutron-induced measurements. However, there exists a surrogate method which makes it possible to determine these neutron cross sections via transfer reactions or inelastic scattering reactions. Its main advantage is that less radioactive targets can be used, giving access to neutron cross sections that could not be measured directly. The method is based on the hypothesis of compound-nucleus formation and on the fact that the decay of the compound nucleus depends essentially only on the excitation energy and on the spin and parity of the populated compound state. However, the angular-momentum and parity distributions populated in transfer reactions and in neutron-induced reactions are likely to differ. This work reviews the state of the art of the surrogate method and its validity. In general, the surrogate method works very well for extracting fission cross sections; by contrast, the surrogate method applied to radiative capture compares poorly with neutron-induced reactions. We performed an experiment to determine the gamma-decay probabilities of 176Lu and 173Yb from the surrogate reactions 174Yb(3He,p)176Lu* and 174Yb(3He,alpha)173Yb*, respectively, and compared them with the radiative-capture probabilities corresponding to the well-known 175Lu(n,gamma) and 172Yb(n,gamma) reactions. This experiment made it possible to understand why, in the case of gamma decay, the surrogate method shows large deviations from the corresponding neutron-induced reaction. This work in the rare-earth region allowed us to assess to what extent the surrogate method can be applied to extract capture probabilities in the actinide region. Previous fission experiments could also be reinterpreted. This work therefore sheds new light on the surrogate method.
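In practice the surrogate method combines a calculated compound-nucleus formation cross section with the measured decay probability; a one-line sketch of this Weisskopf-Ewing-style combination, with placeholder numbers rather than data from the thesis:

    import numpy as np

    def surrogate_capture_cross_section(sigma_cn_mb, p_gamma):
        # sigma(n,gamma)(E) ~ sigma_CN(E) * P_gamma(E): the compound-nucleus
        # formation cross section (from an optical-model calculation) times
        # the gamma-decay probability measured in the surrogate reaction.
        return np.asarray(sigma_cn_mb) * np.asarray(p_gamma)

    # Placeholder values over a few neutron energies:
    print(surrogate_capture_cross_section([2500.0, 2200.0, 1900.0],
                                          [0.60, 0.45, 0.30]), "mb")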
19

Vogel, Adam P. "Speech as a surrogate marker of central nervous system function: practical, experimental and statistical considerations." 2010. http://repository.unimelb.edu.au/10187/7682.

Abstract:
The speech of an individual conveys a great deal of information about how their central nervous system (CNS) is performing. Whether they are tired, distressed or suffering from a degenerative disease affecting the brainstem, speech can change as a function of an individual's condition. Yet, when assessing the speech of an individual on the first occasion, it is often difficult to determine whether their performance differs from a pre-morbid level. Therefore, the repeated acquisition and analysis of a set of brief and simple speech measures could provide information on changes in a patient's performance over time. This could ultimately lead to the inclusion of objective markers of change in trials of conditions and disorders that currently rely on subjective, clinician-derived measures of severity or patient self-report, such as pain, depression or fatigue. Furthermore, the information could be used to track patient performance in treatment trials for degenerative disorders, such as Friedreich ataxia or Huntington's disease.
This thesis aimed to evaluate the practical, experimental and statistical requirements of speech assessment protocols designed to monitor patient performance over time. The research involved a number of studies evaluating methods for acquiring and analysing data, studies examining the stability and sensitivity of speech stimuli, and finally, the functionality of these findings in an experimental model known to induce change in CNS function (i.e., sustained wakefulness).
Methods for acquiring and analysing speech data were designed to provide a balance between the concurrent demands for precision and useability inherent in repeated assessment protocols. Data from these studies provided evidence that techniques offering high levels of useability (e.g., easy to use, automated) are capable of offering adequate precision on broad acoustic measures of timing and frequency. Moreover, these methods could be standardised and automated, allowing non-expert users to collect and analyse data in a controlled and time efficient manner. The second series of experiments systematically documented the stability and responsiveness of speech stimuli within a variety of experimental conditions. These studies were designed to establish the suitability of select speech measures for monitoring change in individuals over time, as stimuli that proved to be both stable (across several re-test intervals) and sensitive to change or impairment were ideal candidates. Finally, a proof of concept study designed to evaluate the efficiency and sensitivity of the proposed methodology was initiated in an experimental model known to induce changes in psychomotor functioning in healthy adults (sustained wakefulness). Significant changes from baseline were observed in speech production as a function of increasing levels of fatigue. These findings are important as they demonstrate the potential of speech as a valid, reliable and sensitive marker of change in conditions where the CNS is subject to stress.
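A numpy-only sketch of one "broad timing measure" of the kind meant above: the fraction of low-energy frames in a recording, the sort of automatable statistic usable in a repeated-assessment protocol. The frame size and threshold are arbitrary placeholders, not the thesis' protocol:

    import numpy as np

    def pause_fraction(signal, sr, frame_ms=25, threshold_ratio=0.1):
        # Fraction of frames whose RMS energy falls below a fraction of the
        # median frame energy: a crude, automatable speech-timing measure.
        frame = int(sr * frame_ms / 1000)
        n = len(signal) // frame
        rms = np.sqrt(np.mean(signal[:n * frame].reshape(n, frame) ** 2, axis=1))
        return float(np.mean(rms < threshold_ratio * np.median(rms)))

    sr = 16000
    t = np.linspace(0, 2, 2 * sr, endpoint=False)
    toy = np.sin(2 * np.pi * 150 * t) * (t < 1.2)   # 'speech' then silence
    print(pause_fraction(toy, sr))                   # about 0.4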
20

Marques, Inês Pereira Dias. "PROGRESS – Progression of Diabetic Retinopathy. Identification of Signs and Surrogate outcomes." Doctoral thesis, 2021. http://hdl.handle.net/10316/96391.

Abstract:
Thesis within the Doctoral Programme in Health Sciences, Medicine branch, supervised by Professor Maria da Conceição Lobo Fonseca and Professor João Pereira Figueira, and presented to the Faculty of Medicine of the University of Coimbra.
Diabetic retinopathy (DR) is the most frequent complication of diabetes mellitus and the leading cause of legal blindness in the active populations of industrialized countries. Progression of DR does not occur at the same rate in all patients: some never develop vision loss, whereas others rapidly progress to macular edema or neovascularization leading to vision loss. The known risk factors are unable to predict which patients will progress and develop complications. The main goal in diabetes is to prevent the development of DR; when DR lesions develop, early intervention should be attempted in order to preserve vision. It is essential to understand the mechanisms by which diabetes affects the retina, improve the methods for early disease detection and find new molecules for targeted treatment. Understanding the mechanisms that tip this balance in different directions is the main objective of this thesis. This thesis presents the results of an observational longitudinal clinical study, the PROGRESS study (NCT03010397), which followed 212 type 2 diabetes mellitus patients with no or mild DR over a 5-year period, with annual visits. The overall purpose of this research was to characterize the initial stages of DR both functionally and morphologically. We found that different DR pathways of disease may be identified in different eyes, representing ischemia, neurodegeneration and edema. We also wanted to further characterize the already identified DR phenotypes that may be used as biomarkers of progression. Furthermore, understanding the extent of neuroretinal abnormalities and characterizing the progression of neurodegeneration in patients with or without detectable microvascular damage was a main goal of the study. I start, in CHAPTER 1, with a general introduction covering the epidemiology and pathophysiology of DR, the principal risk factors, the different classification systems and the main pathways of disease progression. In CHAPTERS 2, 3 and 4, I present the results of the 5-year follow-up study, with a description of the demographic and systemic characteristics of the study population, the 5-year progression to vision-threatening complications (VTC) and ETDRS level progression. The predictive value of systemic and ocular risk factors was explored, and imaging biomarkers were identified. Phenotype C patients present higher HbA1c levels and higher triglyceride values when compared to the other phenotypes. Phenotype C was identified mainly in eyes with ETDRS grade 35, suggesting that ETDRS grade 35 may be the turning point in the progression of DR. Different retinopathy phenotypes in T2D show different five-year risks for the development of VTC: CSME, CIME and PDR. Phenotype C identifies eyes at higher risk for the development of vision-threatening complications (CSME or PDR); in contrast, phenotype A identifies eyes at very low risk of developing such complications. Microaneurysm turnover and phenotype C correlate well with changes in ETDRS severity levels, independently of CRT values, validating their use as simple biomarkers of DR progression. Phenotypes A and B, representing 70% of the entire cohort, have a very low risk (2%) of 2-or-more-step ETDRS worsening. In CHAPTER 5, we present a cross-sectional analysis of a cohort of NPDR patients, grouped according to the ETDRS grading protocol into levels 10-20, 35, and 43-47.
Three different pathways of disease were identified, neurodegeneration, ischemia and edema, with different prevalence in different patients, indicating that the predominant mechanism of retinal disease may differ between individuals. They appear to occur independently of each other. Only the metrics of vessel density, indicating ischemia, appear to be associated with the ETDRS level. In the next two chapters, CHAPTERS 6 and 7, I present the results of 2-year and 3-year follow-up studies, performed in a subset of patients, characterizing the evolution of the three identified retinal pathways occurring in DR. In each ETDRS group, the values of capillary dropout (reduced vessel density), edema and neurodegeneration covered a wide range, identifying different levels of damage in different eyes. Vessel density remained the only metric significantly different between ETDRS groups, even after adjusting for multiple baseline factors. During the 2-year follow-up period, vessel density decreased in all retinal plexuses, particularly in the superficial capillary plexus, and the decrease was more pronounced in eyes whose ETDRS level worsened than in eyes that maintained their DR severity, whereas edema and neurodegeneration remained stable. In the 3-year follow-up study, a decrease in GCL+IPL thickness (representing neurodegeneration) was evident during the follow-up; however, this decrease did not discriminate between eyes that worsened and eyes that maintained their ETDRS severity level. In the last paper presented, in CHAPTER 8, we evaluated 105 eyes with the innovative swept-source OCTA (SS-OCTA, PlexElite, Carl Zeiss Meditec), which allows a larger area of the retina to be explored, using 15x9 mm and 3x3 mm protocols. We observe that capillary closure in the midperiphery of a diabetic retina is indicative of an advanced stage of retinopathy, whereas capillary closure limited to the perifovea suggests a milder stage of the disease. In the last chapter, I briefly discuss the results obtained. This information has a crucial impact on DR management, contributing to individualized monitoring and care, and opens new perspectives on new therapies to be used in the early phase, before clinically significant complications occur.
APA, Harvard, Vancouver, ISO, and other styles
21

Talukder, A. K. M. K. A. "On the Pareto-Following Variation Operator for fast converging Multiobjective Evolutionary Algorithms." 2008. http://repository.unimelb.edu.au/10187/3604.

Full text
Abstract:
The focus of this research is to provide an efficient approach for dealing with computationally expensive Multiobjective Optimization Problems (MOPs). Typically, approximation- or surrogate-based techniques are adopted to reduce the computational cost. In such cases, the original expensive objective function is replaced by a cheaper mathematical model that mimics its input-output (i.e., design variable to objective value) behavior. However, it is difficult to model an exact substitute for the targeted objective function. Furthermore, if this kind of approach is used in an evolutionary search, the number of function evaluations does not actually decrease: each original function evaluation is simply replaced by a surrogate/approximate function evaluation. Hence, if a large number of individuals is considered, the surrogate model fails to offer a smaller computational cost.
To tackle this problem, we have reformulated the concept of surrogate modeling in a way that is more suitable for the Multiobjective Evolutionary Algorithm (MOEA) paradigm. In our approach, we do not approximate the objective function; rather, we model the input-output behavior of the underlying MOEA itself. The model attempts to identify the search path (in both the design-variable and objective spaces), and from this trajectory it generates solutions for the next iterations of the MOEA that are non-dominated with respect to the current solutions (especially during the initial iterations of the underlying MOEA). The MOEA can therefore avoid re-evaluating dominated solutions and thus save a large amount of the computational cost of expensive function evaluations. We have designed our approximation model as a variation operator that follows the trajectory of the fronts and can be "plugged in" to any MOEA that uses non-domination-based selection; hence it is termed the "Pareto-Following Variation Operator (PFVO)". This approach has the added advantage that the original objective function is still used, which makes the search procedure robust and suitable, in particular, for dynamic problems.
We have integrated the model into three baseline MOEAs: the "Non-dominated Sorting Genetic Algorithm II (NSGA-II)", the "Strength Pareto Evolutionary Algorithm II (SPEA-II)", and the recently proposed "Regularity Model Based Estimation of Distribution Algorithm (RM-MEDA)". We have also conducted an exhaustive simulation study using several benchmark MOPs. Detailed performance and statistical analysis reveals promising results. As an extension, we have implemented our idea for dynamic MOPs. We have also integrated PFVO into diffusion-based/cellular MOEAs in a distributed/Grid environment. Most experimental results and analyses reveal that PFVO can be used as a performance-enhancement tool for any kind of MOEA.
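To make the trajectory-following idea concrete, here is a minimal Python sketch, not the author's implementation: the linear extrapolation step, the [0, 1] bounds, the toy bi-objective function, and all names are assumptions for illustration. The operator extrapolates each individual's displacement between two consecutive generations, and a non-domination filter then keeps only the promising offspring.

import numpy as np

def pfvo_like_variation(prev_pop, curr_pop, step=1.0):
    # Follow each individual's movement between two consecutive generations
    # and extrapolate one step further along that trajectory (assumed form).
    return np.clip(curr_pop + step * (curr_pop - prev_pop), 0.0, 1.0)

def non_dominated(F):
    # Boolean mask of the non-dominated rows of objective matrix F (minimization).
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        mask[i] = not dominated.any()
    return mask

def objectives(X):
    # Toy bi-objective test problem (ZDT1-like); both objectives are minimized.
    f1 = X[:, 0]
    g = 1.0 + 9.0 * X[:, 1:].mean(axis=1)
    return np.column_stack([f1, g * (1.0 - np.sqrt(f1 / g))])

rng = np.random.default_rng(0)
pop_prev = rng.random((20, 5))
pop_curr = np.clip(pop_prev * 0.9, 0.0, 1.0)   # stand-in for one MOEA generation
offspring = pfvo_like_variation(pop_prev, pop_curr)
print(offspring[non_dominated(objectives(offspring))].shape)

In an actual MOEA the filtered offspring would be merged into the next population, saving the expensive evaluations that dominated candidates would otherwise consume.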
APA, Harvard, Vancouver, ISO, and other styles
22

Razavi, Seyed Saman. "Developing Efficient Strategies for Automatic Calibration of Computationally Intensive Environmental Models." Thesis, 2013. http://hdl.handle.net/10012/7443.

Full text
Abstract:
Environmental simulation models have been playing a key role in civil and environmental engineering decision-making processes for decades. The utility of an environmental model depends on how well the model is structured and calibrated. Model calibration is typically automated: the simulation model is linked to a search mechanism (e.g., an optimization algorithm) that iteratively generates many parameter sets (e.g., thousands) and evaluates them by running the model, in an attempt to minimize differences between observed data and the corresponding model outputs. The challenge arises when the environmental model is computationally intensive to run (with run-times of minutes to hours, for example), as any automatic calibration attempt then imposes a large computational burden. Such a challenge may force model users to accept sub-optimal solutions and forgo the best model performance. The objective of this thesis is to develop innovative strategies to circumvent the computational burden associated with automatic calibration of computationally intensive environmental models. The first main contribution of this thesis is a strategy called "deterministic model preemption", which opportunistically evades unnecessary model evaluations in the course of a calibration experiment and can save a significant portion of the computational budget (as much as 90% in some cases). Model preemption monitors the intermediate simulation results while the model is running and terminates (i.e., pre-empts) the simulation early if it recognizes that running the model further would not guide the search mechanism. This strategy is applicable to a range of automatic calibration algorithms (i.e., search mechanisms) and is deterministic in that it leads to exactly the same calibration results as when preemption is not applied. Another main contribution of this thesis is developing and utilizing the concept of "surrogate data", a reasonably small but representative portion of a full set of calibration data. This concept is inspired by existing surrogate modelling strategies, in which a surrogate model (also called a metamodel) is developed and utilized as a fast-to-run substitute for an original computationally intensive model. A framework is developed to efficiently calibrate hydrologic models to the full set of calibration data while running the original model only on the surrogate data for the majority of candidate parameter sets, a strategy that leads to considerable computational savings. To this end, mapping relationships are developed to approximate the model performance on the full data from the model performance on the surrogate data. This framework is applicable to the calibration of any environmental model for which appropriate surrogate data and mapping relationships can be identified. As another main contribution, this thesis critically reviews and evaluates the large body of literature on surrogate modelling strategies from various disciplines, as these are the most commonly used methods for relieving the computational burden associated with computationally intensive simulation models. To reliably evaluate these strategies, a comparative assessment and benchmarking framework is developed that presents a clear, computational-budget-dependent definition of the success or failure of surrogate modelling strategies.
Two large families of surrogate modelling strategies are critically scrutinized and evaluated: "response surface surrogate" modelling, which involves statistical or data-driven function approximation techniques (e.g., kriging, radial basis functions, and neural networks), and "lower-fidelity physically-based surrogate" modelling strategies, which develop and utilize simplified models of the original system (e.g., a groundwater model with a coarse mesh). This thesis raises fundamental concerns about response surface surrogate modelling and demonstrates that, although they may be less efficient, lower-fidelity physically-based surrogates are generally more reliable, as they preserve, to some extent, the physics involved in the original model. Five different surface water and groundwater models are used across this thesis to test the performance of the developed strategies and to support the discussions. However, the strategies developed are typically simulation-model-independent and can be applied to the calibration of any computationally intensive simulation model that has the required characteristics. This thesis leaves the reader with a suite of strategies for efficient calibration of computationally intensive environmental models, while providing guidance on how to select, implement, and evaluate the appropriate strategy for a given environmental model calibration problem.
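As a loose illustration of the preemption logic (not the thesis's code), the following Python sketch assumes a time-stepping model calibrated against a sum of squared errors accumulated over time; simulate_step and all other names are hypothetical. Because that partial sum can only grow, a candidate whose partial SSE already exceeds the best full-run SSE found so far can be terminated with no effect on the final calibration result.

def preemptive_sse(simulate_step, params, observed, best_sse):
    # Accumulate the SSE step by step; pre-empt once the partial SSE exceeds
    # the best objective found so far (safe because the SSE never decreases).
    sse = 0.0
    for t, obs in enumerate(observed):
        pred = simulate_step(params, t)   # one expensive model time step
        sse += (pred - obs) ** 2
        if sse > best_sse:                # further steps cannot change the outcome
            return float("inf"), t + 1    # pre-empted after t + 1 steps
    return sse, len(observed)

# Hypothetical use inside a calibration loop:
#   sse, steps = preemptive_sse(model_step, candidate, obs_series, best_so_far)
#   best_so_far = min(best_so_far, sse)

The determinism rests on the monotonicity of the accumulated objective: the search mechanism only needs to know that the pre-empted candidate cannot beat the current best.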
APA, Harvard, Vancouver, ISO, and other styles
23

Domingos, Célia Margarida Viegas. "Characterization of a wearable monitoring system of physical activity as a surrogate of brain structure and function in older populations." Doctoral thesis, 2021. http://hdl.handle.net/1822/76084.

Full text
Abstract:
Doctoral thesis in Health Sciences
Healthy brain aging is one of the most important determinants of health status and quality of life in the older population. There is increasing evidence that physical activity (PA) contributes to brain health. However, the lack of integrative approaches combining objective PA measures with multimodal neuroimaging has limited knowledge of the mechanisms underlying this association. In this thesis, we aimed to conduct an observational cross-sectional study with community-dwelling older adults to (i) compare objective PA estimates with data obtained from self-report questionnaires; (ii) assess PA profiles and their associations with health outcomes; (iii) test the acceptability, usability, and user satisfaction of a commercially available wearable PA monitoring system (Xiaomi Mi Band 2®); and (iv) explore the associations between objectively measured PA and brain structure and function. This approach was complemented by a systematic review of observational neuroimaging studies that have examined the relationship between PA and brain structure and function in older adults without cognitive disease or neuropathology. Altogether, the findings highlighted the large variation between subjective (self-reported) and objectively measured (Xiaomi Mi Band 2®) PA and sedentary-time parameters, with the largest difference found for sedentary time. Additionally, self-reported and objectively measured PA were differently associated with health outcomes (cognitive and mood profiles, anthropometric, body composition, and physical performance measures). The findings also demonstrated that the Xiaomi Mi Band® has an excellent level of acceptability, usability, and satisfaction among older adults, suggesting that this device is suitable for this population. Moreover, usability was noted to be an important factor influencing user satisfaction. Finally, a significant positive correlation was found between time spent in vigorous PA and the volumes of the left parahippocampal gyrus and right hippocampus. The findings also revealed higher functional connectivity (FC) between the frontal gyrus, cingulate gyrus, and inferior occipital lobe for light, moderate, and total PA time, and lower FC in the same networks associated with sedentary time. In conclusion, these results suggest that wearables with characteristics similar to the Xiaomi Mi Band® may be feasible PA monitoring systems. Thus, these types of devices may be used in PA promotion strategies for the adoption of a physically active lifestyle in older populations. Moreover, the benefits of a physically active lifestyle may translate into better brain health.
Financial support was provided by FEDER funds through the Operational Programme Competitiveness Factors – COMPETE and national funds through FCT under the projects POCI-01-0145-FEDER-007038, UIDB/50026/2020, and UIDP/50026/2020; by the project MEDPERSYST [POCI-01-0145-FEDER-016428, supported by the Operational Programme Competitiveness and Internationalization (COMPETE 2020) and the Regional Operational Program of Lisbon and national funding through the Portuguese Foundation for Science and Technology (FCT, Portugal)]; and by the Portuguese North Regional Operational Programme. The work was also developed under the scope of the 2CA-Braga Grant of the 2017 Clinical Research Projects. CD was supported by a combined Ph.D. scholarship from FCT and the company iCognitus4ALL – IT Solutions, Lda, Braga, Portugal (grant number PD/BDE/127831/2016).
APA, Harvard, Vancouver, ISO, and other styles
24

Shan, Songqing. "Metamodeling strategies for high-dimensional simulation-based design problems." 2010. http://hdl.handle.net/1993/4271.

Full text
Abstract:
Computational tools such as finite element analysis and simulation are commonly used for system performance analysis and validation. It is often impractical to rely exclusively on a high-fidelity simulation model for design activities because of the high computational costs. Mathematical models are typically constructed to approximate the simulation model and help with design activities; such a model is referred to as a "metamodel", and the process of constructing it is called "metamodeling". Metamodeling, however, faces significant challenges arising from the high dimensionality of the underlying problems, in addition to the high computational costs and unknown function properties (that is, black-box functions) of the analysis/simulation. The combination of these three challenges defines the so-called high-dimensional, computationally-expensive, and black-box (HEB) problems. Currently there is a lack of practical methods for dealing with HEB problems. This dissertation, by means of surveying existing techniques, has found that the major deficiency of current metamodeling approaches lies in the separation of the metamodeling from the properties of the underlying functions. The survey also identified two promising approaches, mapping and decomposition, for solving HEB problems. A new analytic methodology, radial basis function high-dimensional model representation (RBF-HDMR), has been proposed to model HEB problems. RBF-HDMR decomposes the effects of variables or variable sets on system outputs. Compared with other metamodels, RBF-HDMR has three distinct advantages: 1) it fundamentally reduces the number of calls to the expensive simulation needed to build a metamodel, thereby breaking or alleviating the exponentially increasing computational difficulty; 2) it reveals the functional form of the black-box function; and 3) it discloses the intrinsic characteristics (for instance, linearity/nonlinearity) of the black-box function. RBF-HDMR has been intensively tested on mathematical and practical problems chosen from the literature. The methodology has also been successfully applied to the power transfer capability analysis of the Manitoba-Ontario Electrical Interconnections, a problem with 50 variables. The test results demonstrate that RBF-HDMR is a powerful tool for modeling large-scale simulation-based engineering problems. The RBF-HDMR model and its construction approach therefore represent a breakthrough in modeling HEB problems and make it possible to optimize high-dimensional simulation-based design problems.
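A first-order cut-HDMR of this kind can be sketched compactly; the Python toy below (Gaussian RBF components, a single cut center, an additive test function, all chosen for illustration rather than taken from the dissertation) shows why the construction needs only 1 + d*n model runs instead of an exponential grid.

import numpy as np

def fit_rbf_1d(x, y, eps=8.0):
    # One-dimensional Gaussian RBF interpolation: solve K w = y for the weights.
    K = np.exp(-eps * (x[:, None] - x[None, :]) ** 2)
    w = np.linalg.solve(K + 1e-10 * np.eye(len(x)), y)
    return lambda xq: np.exp(-eps * (np.atleast_1d(xq)[:, None] - x[None, :]) ** 2) @ w

def first_order_rbf_hdmr(f, center, lo, hi, n=7):
    # Build f(x) ~ f0 + sum_i f_i(x_i), sampling the expensive f only along
    # axis cuts through `center`: 1 + d*n calls in total.
    d = len(center)
    f0 = f(center)
    comps = []
    for i in range(d):
        xi = np.linspace(lo[i], hi[i], n)
        pts = np.tile(center, (n, 1))
        pts[:, i] = xi
        comps.append(fit_rbf_1d(xi, np.array([f(p) for p in pts]) - f0))
    return lambda x: f0 + sum(comps[i](x[i])[0] for i in range(d))

f = lambda x: x[0] ** 2 + np.sin(x[1])           # additive toy "simulation"
model = first_order_rbf_hdmr(f, np.array([0.5, 0.5]), [0.0, 0.0], [1.0, 1.0])
print(model(np.array([0.3, 0.8])), f(np.array([0.3, 0.8])))

Because the toy function is additive, the first-order model is accurate up to interpolation error; for real black-box functions, higher-order component terms can be added only where the first-order fit proves insufficient.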
APA, Harvard, Vancouver, ISO, and other styles
25

"Calibration of Flush Air Data Sensing Systems Using Surrogate Modeling Techniques." Thesis, 2011. http://hdl.handle.net/1911/70450.

Full text
Abstract:
In this work the problem of calibrating Flush Air Data Sensing (FADS) systems has been addressed: the inverse problem of extracting freestream wind speed and angle of attack from pressure measurements has been solved. The aim of this work was to develop machine learning and statistical tools to optimize the design and calibration of FADS systems. Experimental and Computational Fluid Dynamics (EFD and CFD) solve the forward problem of determining the pressure distribution given the wind velocity profile and the bluff-body geometry. Three ways are presented in which machine learning techniques can improve the calibration of FADS systems. First, a scattered data approximation scheme, called Sequential Function Approximation (SFA), was developed that successfully solved the inverse problem at hand. The proposed scheme is a greedy and self-adaptive technique that constructs reliable and robust estimates without any user interaction. Wind speed and direction prediction algorithms were developed for two FADS problems: one where pressure sensors are installed on a surface vessel, and another where sensors are installed on the Runway Assisted Landing Site (RALS) control tower. Second, a Tikhonov-regularization-based data-model fusion technique with SFA was developed to fuse low-fidelity CFD solutions with noisy and sparse wind tunnel data. The purpose of this data-model fusion approach was to obtain high-fidelity, smooth, and noiseless flow field solutions using only a few discrete experimental measurements and a low-fidelity numerical solution. This physics-based regularization technique gave better flow field solutions than smoothness-based approaches when the wind tunnel data were sparse and incomplete. Third, a sequential design strategy was developed with SFA, using active learning techniques from machine learning and optimal design of experiments from statistics, for regression and classification problems. Uncertainty sampling was used with SFA to demonstrate the effectiveness of active versus passive learning on a cavity flow classification problem. A sequential G-optimal design procedure was also developed with SFA for regression problems. The effectiveness of this approach was demonstrated on a simulated problem and on the above-mentioned FADS problem.
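The SFA algorithm itself is more elaborate, but its greedy, self-adaptive flavor can be suggested with a toy Python sketch: repeatedly place a Gaussian RBF at the worst-fit sample and refit all weights by least squares. The kernel, the width, and the pressure-to-flow-angle framing are illustrative assumptions, not the dissertation's method.

import numpy as np

def greedy_rbf_regression(X, y, n_terms=20, eps=2.0):
    # Greedily grow an RBF expansion: each pass adds a center at the sample
    # with the largest residual, then refits the weights by least squares.
    centers, residual = [], y.astype(float).copy()
    for _ in range(n_terms):
        centers.append(X[np.argmax(np.abs(residual))])
        C = np.array(centers)
        Phi = np.exp(-eps * ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1))
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        residual = y - Phi @ w
    return lambda Xq: np.exp(-eps * ((Xq[:, None, :] - C[None, :, :]) ** 2).sum(-1)) @ w

rng = np.random.default_rng(1)
X = rng.random((200, 4))                    # e.g., pressures at four flush ports
y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2    # stand-in for an angle of attack
predict = greedy_rbf_regression(X, y)
print(np.abs(predict(X) - y).max())

A real FADS calibration would map the full port-pressure vector to both wind speed and angle of attack and would, as in the dissertation, adapt the basis and stop automatically rather than use a fixed term count.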
APA, Harvard, Vancouver, ISO, and other styles
26

Abdul, Jameel Abdul Gani. "A functional group approach for predicting fuel properties." Diss., 2019. http://hdl.handle.net/10754/631722.

Full text
Abstract:
Experimental measurement of fuel properties is expensive, requires sophisticated instrumentation, and is time consuming. Mathematical models and approaches for predicting fuel properties can help reduce time and costs. A new approach for characterizing petroleum fuels, called the functional group approach, was developed by disassembling the innumerable fuel molecules into a finite number of molecular fragments or 'functional groups'. This thesis proposes and tests the following hypothesis: can a fuel's functional groups be used to predict its combustion properties? Analytical techniques such as NMR spectroscopy, which are ideally suited to identifying and quantifying the various functional groups present in fuels, were used. A branching index (BI), a new parameter that quantifies the degree and quality of branching in a molecule, was defined. The proposed hypothesis was tested on three classes of fuels, namely gasolines, diesels, and heavy fuel oil. Five key functional groups, namely paraffinic CH3, paraffinic CH2, paraffinic CH, naphthenic CH-CH2, and aromatic C-CH groups, along with the BI, were used as matching targets to formulate simple surrogates of one or two molecules that reproduce the combustion characteristics. Using this approach, termed the minimalist functional group (MFG) approach, surrogates were formulated for a number of standard gasoline, diesel, and jet fuels. The surrogates were experimentally validated using measurements from an ignition quality tester (IQT), a rapid compression machine (RCM), and a smoke point (SP) apparatus. The functional group approach was also employed to predict the research octane number (RON) and motor octane number (MON) of fuels blended with ethanol using artificial neural networks (ANN), and a multiple linear regression (MLR) based model for predicting the derived cetane number (DCN) of hydrocarbon fuels was developed. The functional group approach was further extended to study heavy fuel oil (HFO), a viscous residual fuel that contains heteroatoms such as S, N, and O and is used as marine fuel in ships and in boilers for electricity generation. 1H NMR and 13C NMR measurements were made to analyze the average molecular parameters (AMP) of HFO molecules. The fuel was divided into 19 different functional groups, and their concentrations were calculated from the AMP values. A surrogate molecule representing the average structure of HFO was then formulated, and its properties were predicted using QSPR approaches.
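As a toy of the MLR side of this approach only, the sketch below regresses DCN on a few functional-group fractions plus a branching index; the feature set, every number, and the resulting coefficients are fabricated for illustration and are not values from the thesis.

import numpy as np

# Invented samples: columns are paraffinic CH3, paraffinic CH2, paraffinic CH
# fractions and a branching index (BI); the targets are invented DCN values.
X = np.array([
    [0.22, 0.55, 0.05, 2.1],
    [0.30, 0.40, 0.10, 3.5],
    [0.15, 0.65, 0.04, 1.2],
    [0.25, 0.50, 0.08, 2.8],
    [0.18, 0.60, 0.05, 1.6],
    [0.28, 0.45, 0.09, 3.1],
])
dcn = np.array([52.0, 44.0, 58.0, 47.0, 55.0, 45.0])

A = np.column_stack([np.ones(len(X)), X])          # intercept + features
coef, *_ = np.linalg.lstsq(A, dcn, rcond=None)
predict_dcn = lambda x: coef[0] + x @ coef[1:]
print(predict_dcn(np.array([0.20, 0.58, 0.05, 1.8])))

The thesis's actual models use the full five-group NMR characterization (and, for RON/MON, artificial neural networks); the point here is only the shape of a group-contribution regression.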
APA, Harvard, Vancouver, ISO, and other styles
27

Bajer, Lukáš. "Metody evoluční optimalizace založené na modelech." Doctoral thesis, 2018. http://www.nusl.cz/ntk/nusl-387210.

Full text
Abstract:
Model-based black-box optimization is a topic that has been intensively studied both in academia and industry. Real-world optimization tasks, in particular, are often characterized by expensive or time-demanding objective functions, for which statistical models can save resources or speed up the optimization. Each of the three parts of the thesis concerns one such model: first, copulas are used instead of a graphical model in estimation-of-distribution algorithms; second, RBF networks serve as surrogate models in mixed-variable genetic algorithms; and third, Gaussian processes are employed in Bayesian optimization algorithms as a sampling model and in the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as a surrogate model. The last combination, described in the core part of the thesis, resulted in the Doubly Trained Surrogate CMA-ES (DTS-CMA-ES). This algorithm uses the uncertainty prediction of a Gaussian process to select only a part of the CMA-ES population for evaluation with the expensive objective function, while the mean prediction is used for the rest. The DTS-CMA-ES improves upon state-of-the-art surrogate continuous optimizers in several benchmark tests.
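The selection step at the heart of this idea can be sketched in a few lines of Python; gp_predict and true_f are assumed interfaces, the criterion is simplified to the GP predictive standard deviation, and the second GP training pass that gives the method its name is omitted for brevity.

import numpy as np

def dts_like_evaluation(population, gp_predict, true_f, eval_fraction=0.25):
    # gp_predict(X) -> (mean, std) per individual; both are assumed interfaces.
    mean, std = gp_predict(population)
    k = max(1, int(eval_fraction * len(population)))
    chosen = np.argsort(-std)[:k]        # the most uncertain individuals
    fitness = mean.copy()
    fitness[chosen] = np.array([true_f(x) for x in population[chosen]])
    # In DTS-CMA-ES the GP would now be re-trained on the newly evaluated
    # points and the remaining predictions refreshed ("doubly trained").
    return fitness, chosen

CMA-ES then ranks the population with this mixed fitness vector, so only a fraction of each generation touches the expensive objective.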
APA, Harvard, Vancouver, ISO, and other styles
28

Coropulis, Stefano. "Safety assessment in future scenarios with Automated Vehicles." Doctoral thesis, 2023. https://hdl.handle.net/11589/247061.

Full text
Abstract:
Nowadays, several Advanced Driving Assistance Systems (ADAS) are installed in vehicles, helping drivers with several tasks. Thanks to this technological help, human drivers are less and less involved in driving. Depending on the automation level of the vehicle and the degree of human involvement, vehicles can be considered partially or fully automated. Partially automated vehicles (AVs) belong to SAE levels 2-3 and follow a cautious behavior because human drivers still control some tasks. Fully automated vehicles belong to SAE levels 4-5, and their behavior is expected to be more aggressive, since there is no need for human drivers to take over maneuvers or manage driving tasks: the technology is considered more reliable than humans at managing and reacting to changes in traffic conditions, so driving behavior is more assertive, headways between vehicles are reduced, and accelerations and decelerations are greater. Starting from these assumptions, this thesis studies three different vehicle typologies, regular vehicles (RVs), partially automated vehicles (SAE levels 2-3), and fully automated vehicles (SAE levels 4-5), for crash assessment in future scenarios (short-term, mid-term, and long-term). This work aims to provide a methodological framework, usable in any context and for any road type, for safety assessments that account for the introduction of these technologies into traffic. This aspect is crucial since, practically speaking, mobility plans and road design procedures require safety assessments projected over long time horizons, during which the mix of vehicle types circulating on roads is likely to change drastically. Not considering new vehicles and their interactions with RVs in future scenarios can lead to misestimations of safety. The methodological framework was applied to a real-world case, in the context of the SUMP for the Province of Bari. The main results of this study highlight the importance of automation in traffic: traffic composed solely of fully automated vehicles drastically decreases crash frequency, whereas mixed traffic increases crash occurrence compared with the current scenario. To foresee the impact of such changes in traffic, an ad hoc Safety Performance Function (SPF) for AVs was developed, with the intent of predicting future crashes involving AVs.
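Although the fitted SPF itself is not reproduced here, SPFs are commonly specified in a log-linear form; the Python sketch below shows that standard form with purely invented coefficients and an assumed AV-share term, as a hedged illustration of how such a function predicts crash frequency.

import numpy as np

def spf_expected_crashes(aadt, length_km, av_share, beta=(-7.5, 0.85, 1.0, -0.6)):
    # Generic log-linear SPF: N = exp(b0) * AADT^b1 * L^b2 * exp(b3 * AV share).
    # Every coefficient is an illustrative placeholder, not a fitted value.
    b0, b1, b2, b3 = beta
    return np.exp(b0) * aadt ** b1 * length_km ** b2 * np.exp(b3 * av_share)

# e.g., a 2 km segment with an AADT of 15,000 vehicles/day and 40% fully
# automated vehicles:
print(spf_expected_crashes(15000, 2.0, 0.4))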
APA, Harvard, Vancouver, ISO, and other styles
29

ATTL, Karel. "Aktuální otázky rodiny z právního hlediska." Doctoral thesis, 2012. http://www.nusl.cz/ntk/nusl-118391.

Full text
APA, Harvard, Vancouver, ISO, and other styles