Dissertations / Theses on the topic 'Sensitivity analysis'

To see the other types of publications on this topic, follow the link: Sensitivity analysis.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Sensitivity analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Fruth, Jana. "Sensitivity analysis and graph-based methods for black-box functions with an application to sheet metal forming." Thesis, Saint-Etienne, EMSE, 2015. http://www.theses.fr/2015EMSE0779/document.

Full text
Abstract:
Le domaine général de la thèse est l'analyse de sensibilité de fonctions boîte noire. L'analyse de sensibilité étudie comment la variation d'une sortie peut être reliée à la variation des entrées. C'est un outil important dans la construction, l'analyse et l'optimisation des expériences numériques (computer experiments). Nous présentons tout d'abord l'indice d'interaction total, qui est utile pour le criblage d'interactions. Plusieurs méthodes d'estimation sont proposées. Leurs propriétés sont étudiées au plan théorique et par des simulations. Le chapitre suivant concerne l'analyse de sensibilité pour des modèles avec des entrées fonctionnelles et une sortie scalaire. Une approche séquentielle très économique est présentée, qui permet non seulement de retrouver la sensibilité des entrées fonctionnelles globalement, mais aussi d'identifier les régions d'intérêt dans leur domaine de définition. Un troisième concept est proposé, les support index functions, mesurant la sensibilité d'une entrée sur tout le support de sa loi de probabilité. Finalement, les trois méthodes sont appliquées avec succès à l'analyse de sensibilité de modèles d'emboutissage.
The general field of the thesis is the sensitivity analysis of black-box functions. Sensitivity analysis studies how the variation of the output can be apportioned to the variation of input sources. It is an important tool in the construction, analysis, and optimization of computer experiments. The total interaction index is presented, which can be used for the screening of interactions. Several variance-based estimation methods are suggested. Their properties are analyzed theoretically as well as on simulations. A further chapter concerns the sensitivity analysis for models that can take functions as input variables and return a scalar value as output. A very economical sequential approach is presented, which not only discovers the sensitivity of those functional variables as a whole but also identifies relevant regions in the functional domain. As a third concept, support index functions, functions of sensitivity indices over the input distribution support, are suggested. Finally, all three methods are successfully applied in the sensitivity analysis of sheet metal forming models.
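The variance-based estimation the abstract refers to can be illustrated with a standard total-effect Sobol estimator. This is a generic Jansen-style Monte Carlo sketch on the classic Ishigami test function, not code from the thesis; the thesis's total interaction index is a related but distinct quantity.

```python
import numpy as np

def total_sobol_indices(f, d, n=20000, seed=0):
    """Jansen-style Monte Carlo estimate of total-effect Sobol indices
    for inputs uniform on [-pi, pi]^d (an assumption of this sketch)."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-np.pi, np.pi, (n, d))
    B = rng.uniform(-np.pi, np.pi, (n, d))
    fA = f(A)
    ST = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # resample only input i
        ST[i] = 0.5 * np.mean((fA - f(ABi)) ** 2) / fA.var()
    return ST

def ishigami(X):
    # standard sensitivity-analysis benchmark function
    return (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2
            + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))

ST = total_sobol_indices(ishigami, d=3)
```

For Ishigami the analytical total-effect indices are approximately 0.56, 0.44, and 0.24, so the estimate ranks the inputs correctly even at modest sample sizes.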
APA, Harvard, Vancouver, ISO, and other styles
2

Nilsson, Martina. "Mitochondrial DNA in Sensitive Forensic Analysis." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7458.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Galanopoulos, Aristides. "Analysis of frequency sensitive competitive learning /." The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487941504294837.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wan, Din Wan Ibrahim. "Sensitivity analysis in tolerance allocation." Thesis, Queen's University Belfast, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.675480.

Full text
Abstract:
In a Computer Aided Design (CAD) model, the shape is usually defined as a boundary representation, a cell complex of faces, edges, and vertices. The boundary representation is generated from a system of geometric constraints, with parameters as degrees of freedom. The dimensions of the boundary representation are determined by the parameters in the CAD system used to model the part, and every single parametric perturbation may generate different changes in the part model's shape and dimensions. Thus, one can compute dimensional sensitivity to parameter perturbations. A "Sensitivity Analysis" method is proposed to automatically quantify the dependencies of the Key Characteristic dimensions on each of the feature parameters in a CAD model. Once the sensitivities of the feature parameters to Key Characteristic dimensions have been determined, the appropriate perturbations of each parameter to cause a desired change in a critical dimension can be determined. This methodology is then applied to real tolerancing applications in mechanical assembly models to show the efficiency of the newly developed strategy. The approach can identify where specific tolerances need to be applied to a Key Control Characteristic dimension, the range of part tolerances that could be used to achieve the desired Key Product Characteristic dimension tolerances, and also whether existing part tolerances make it impossible to achieve the desired Key Product Characteristic dimension tolerances. This thesis provides an explanation of a novel automated tolerance allocation process for an assembly model based on the parametric CAD sensitivity method. The objective of this research is to expose the relationship between the parameter sensitivity of CAD designs in mechanical assembly products and tolerance design.
This exposes potential new avenues of research into how to develop a standard process and methodology for geometric dimensioning and tolerancing (GD&T) in a digital design tools environment known as Digital MockUp (DMU).
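The dependency of Key Characteristic dimensions on feature parameters described above is, in essence, a Jacobian. A minimal finite-difference sketch follows; the model function and parameter values are hypothetical stand-ins, not taken from the thesis.

```python
import numpy as np

def sensitivity_matrix(model, params, h=1e-6):
    """Finite-difference Jacobian: row i, column j holds the sensitivity
    of key dimension i to feature parameter j."""
    params = np.asarray(params, dtype=float)
    y0 = np.asarray(model(params))
    J = np.empty((y0.size, params.size))
    for j in range(params.size):
        p = params.copy()
        p[j] += h                      # perturb one parameter at a time
        J[:, j] = (np.asarray(model(p)) - y0) / h
    return J

# toy stand-in for a CAD evaluation: two key dimensions, three parameters
A = np.array([[2.0, 0.5, 0.0],
              [0.0, 1.0, 3.0]])
J = sensitivity_matrix(lambda p: A @ p, [1.0, 2.0, 3.0])
```

Pseudo-inverting such a matrix yields the parameter perturbations needed to achieve a desired change in a critical dimension, which is the allocation step the abstract describes.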
APA, Harvard, Vancouver, ISO, and other styles
5

Munster, Drayton William. "Sensitivity Enhanced Model Reduction." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/23169.

Full text
Abstract:
In this study, we numerically explore methods of coupling sensitivity analysis to the reduced model in order to increase the accuracy of a proper orthogonal decomposition (POD) basis across a wider range of parameters. Various techniques based on polynomial interpolation and basis alteration are compared. These techniques are performed on a 1-dimensional reaction-diffusion equation and 2-dimensional incompressible Navier-Stokes equations solved using the finite element method (FEM) as the full scale model. The expanded model formed by expanding the POD basis with the orthonormalized basis sensitivity vectors achieves the best mixture of accuracy and computational efficiency among the methods compared.
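The basis construction described can be sketched as follows. This is a generic POD-via-SVD sketch using random stand-in snapshot and sensitivity data, not the thesis's FEM models.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Leading r POD modes of an (n_dof x n_snapshots) snapshot matrix."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def expand_basis(Phi, sens_vectors):
    """Append sensitivity vectors and re-orthonormalize via QR, in the
    spirit of the 'expanded basis' approach the abstract favours."""
    Q, _ = np.linalg.qr(np.hstack([Phi, sens_vectors]))
    return Q

rng = np.random.default_rng(1)
snapshots = rng.standard_normal((100, 20))   # stand-in solution snapshots
dstates = rng.standard_normal((100, 2))      # stand-in parameter sensitivities
Phi = pod_basis(snapshots, r=5)
Phi_exp = expand_basis(Phi, dstates)
```

The expanded basis keeps the original POD modes (up to sign) while adding directions that capture how the solution moves as parameters change.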
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
6

Edlund, Hanna. "Sensitive Identification Tools in Forensic DNA Analysis." Doctoral thesis, Uppsala universitet, Institutionen för genetik och patologi, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-131904.

Full text
Abstract:
DNA as forensic evidence is valuable in criminal investigations. Implementation of new, sensitive and fast technologies is an important part of forensic genetic research. This thesis aims to evaluate new sensitive methods to apply in forensic DNA analysis, including analysis of old skeletal remains. In Paper I and II, two novel systems for analysis of STRs, based on the Pyrosequencing technology, are presented. In Paper I, Y chromosomal STRs are analysed. Markers on the male-specific Y chromosome are especially useful in analysis of DNA mixtures. In Paper II, ten autosomal STRs are genotyped. The systems are based on sequencing of STR loci instead of size determination of STR fragments as in routine analysis. This provides a higher resolution, since sequence variants within the repeats can be detected. Determination of alleles is based on a termination recognition base. This is the base in the template strand that is excluded from the dispensation order in the sequencing of the complementary strand and therefore terminates the reaction. Furthermore, skeletal remains are often difficult to analyse, due to damaging effects from the surrounding environment on the DNA and the high risk of exogenous contamination. Analysis of mitochondrial DNA is useful on degraded samples, and in Paper III, mtDNA analysis of 700-year-old skeletal remains is performed to investigate a maternal relationship. The quantity and quality of DNA are essential in forensic genetics. In Paper IV the efficiency of DNA isolation is investigated. Soaking skeletal remains in bleach is efficient for decontamination but results in a lower DNA yield, especially on pulverised skull samples. In conclusion, this thesis presents novel sequencing systems for accurate and fast analysis of STR loci that can be useful in evaluation of new loci and database assembly, as well as the utility of mtDNA in forensic genetics.
APA, Harvard, Vancouver, ISO, and other styles
7

Mazanec, Josef. "Exploratory market structure analysis. Topology-sensitive methodology." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 1999. http://epub.wu.ac.at/928/1/document.pdf.

Full text
Abstract:
Given the recent abundance of brand choice data from scanner panels, market researchers have neglected the measurement and analysis of perceptions. Heterogeneity of perceptions is still a largely unexplored issue in market structure and segmentation studies. Over the last decade various parametric approaches toward modelling segmented perception-preference structures, such as combined MDS and Latent Class procedures, have been introduced. These methods, however, are not tailored for qualitative data describing consumers' redundant and fuzzy perceptions of brand images. A completely different method is based on topology-sensitive vector quantization (VQ) for consumers-by-brands-by-attributes data. It maps the segment-specific perceptual structures into bubble-pie-bar charts with multiple brand positions demonstrating perceptual distinctiveness or similarity. Though the analysis proceeds without any distributional assumptions, it allows for significance testing. The application of exploratory and inferential data processing steps to the same database is statistically sound and particularly attractive for market structure analysts. A brief outline of the VQ method is followed by a sample study with travel market data which proved to be particularly troublesome for conventional processing tools. (author's abstract)
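As a rough illustration of the vector-quantization idea, here is a plain winner-take-all competitive learner on synthetic two-segment data. The actual method is topology-sensitive and adds neighbourhood structure that this sketch omits.

```python
import numpy as np

def vector_quantize(X, k, epochs=20, lr=0.1, seed=0):
    """Online competitive learning: each sample pulls its nearest
    prototype toward itself by a fraction lr."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            w = np.argmin(((W - x) ** 2).sum(axis=1))  # winning prototype
            W[w] += lr * (x - W[w])
    return W

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.0, 0.0], 0.1, (200, 2)),    # segment 1
               rng.normal([5.0, 5.0], 0.1, (200, 2))])   # segment 2
W = vector_quantize(X, k=2)
```

After training, each prototype settles near one segment's centroid, which is the segment-specific perceptual position the charts in the abstract visualize.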
Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
APA, Harvard, Vancouver, ISO, and other styles
8

Boccardo, Davidson Rodrigo [UNESP]. "Context-sensitive analysis of x86 obfuscated executables." Universidade Estadual Paulista (UNESP), 2009. http://hdl.handle.net/11449/100286.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Ofuscação de código tem por finalidade dificultar a detecção de propriedades intrínsecas de um algoritmo através de alterações em sua sintaxe, entretanto preservando sua semântica. Desenvolvedores de software usam ofuscação de código para defender seus programas contra ataques de propriedade intelectual e para aumentar a segurança do código. Por outro lado, programadores maliciosos geralmente ofuscam seus códigos para esconder comportamento malicioso e para evitar detecção pelos antivírus. Nesta tese, é introduzido um método para realizar análise com sensitividade ao contexto em binários com ofuscamento de chamada e retorno de procedimento. Para obter semântica equivalente, estes binários utilizam operações diretamente na pilha ao invés de instruções convencionais de chamada e retorno de procedimento. No estado da arte atual, a definição de sensitividade ao contexto está associada com operações de chamada e retorno de procedimento, assim, análises interprocedurais clássicas não são confiáveis para analisar binários cujas operações não podem ser determinadas. Uma nova definição de sensitividade ao contexto é introduzida, baseada no estado da pilha em qualquer instrução. Enquanto mudanças em contextos à chamada de procedimento são intrinsecamente relacionadas com transferência de controle, assim, podendo ser obtidas em termos de caminhos em um grafo de controle de fluxo interprocedural, o mesmo não é aplicável para mudanças em contextos à pilha. Um framework baseado em interpretação abstrata é desenvolvido para avaliar contexto baseado no estado da pilha e para derivar métodos baseados em contextos à chamada de procedimento para uso com contextos baseados no estado da pilha. O método proposto não requer o uso explícito de instruções de chamada e retorno de procedimento, porém depende do...
A code obfuscation intends to confuse a program in order to make it more difficult to understand while preserving its functionality. Programs may be obfuscated to protect intellectual property and to increase security of code. Programs may also be obfuscated to hide malicious behavior and to evade detection by anti-virus scanners. We introduce a method for context-sensitive analysis of binaries that may have obfuscated procedure call and return operations. These binaries may use direct stack operators instead of the native call and ret instructions to achieve equivalent behavior. Since the definition of context-sensitivity and algorithms for context-sensitive analysis have thus far been based on the specific semantics associated to procedure call and return operations, classic interprocedural analyses cannot be used reliably for analyzing programs in which these operations cannot be discerned. A new notion of context-sensitivity is introduced that is based on the state of the stack at any instruction. While changes in calling-context are associated with transfer of control, and hence can be reasoned about in terms of paths in an interprocedural control flow graph (ICFG), the same is not true for changes in stack-context. An abstract interpretation based framework is developed to reason about stack-context and to derive analogues of call-strings based methods for context-sensitive analysis using stack-context. This analysis requires knowledge of how the stack, rather the stack pointer, is represented and of the operators that manipulate the stack pointer. The method presented is used to create a context-sensitive version of Venable et al.'s algorithm for detecting obfuscated calls. Experimental results show that the context-sensitive version of the algorithm generates more precise results and is also computationally more efficient than its context-insensitive counterpart.
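The stack-context notion can be illustrated with a toy interpreter that records the abstract stack state at each instruction. The instruction format here is entirely hypothetical and for illustration only; the thesis works on real x86 binaries via abstract interpretation.

```python
def stack_contexts(trace):
    """Record the stack state (the 'stack-context') after each abstract
    instruction. A push of a return address followed by a jmp behaves
    like an obfuscated call: the context changes with no call instruction."""
    stack, contexts = [], []
    for op, *args in trace:
        if op == "push":
            stack.append(args[0])
        elif op == "pop" and stack:
            stack.pop()
        # any other opcode (e.g. 'jmp') leaves the stack unchanged
        contexts.append(tuple(stack))
    return contexts

trace = [("push", "ret_addr"), ("jmp", "f"), ("push", 1), ("pop",), ("pop",)]
contexts = stack_contexts(trace)
```

Because the context is read off the stack itself rather than from call/ret instructions, the analysis still distinguishes contexts when calls are obfuscated as push-plus-jmp sequences.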
APA, Harvard, Vancouver, ISO, and other styles
9

Boccardo, Davidson Rodrigo. "Context-Sensitive Analysis of x86 Obfuscated Executables." Ilha Solteira : [s.n.], 2009. http://hdl.handle.net/11449/100286.

Full text
Abstract:
Orientador: Aleardo Manacero Junior
Banca: Sergio Azevedo de Oliveira
Banca: Francisco Villarreal Alvarado
Banca: Rodolfo Jardim Azevedo
Banca: André Luiz Moura dos Santos
Resumo: Ofuscação de código tem por finalidade dificultar a detecção de propriedades intrínsecas de um algoritmo através de alterações em sua sintaxe, entretanto preservando sua semântica. Desenvolvedores de software usam ofuscação de código para defender seus programas contra ataques de propriedade intelectual e para aumentar a segurança do código. Por outro lado, programadores maliciosos geralmente ofuscam seus códigos para esconder comportamento malicioso e para evitar detecção pelos antivírus. Nesta tese, é introduzido um método para realizar análise com sensitividade ao contexto em binários com ofuscamento de chamada e retorno de procedimento. Para obter semântica equivalente, estes binários utilizam operações diretamente na pilha ao invés de instruções convencionais de chamada e retorno de procedimento. No estado da arte atual, a definição de sensitividade ao contexto está associada com operações de chamada e retorno de procedimento, assim, análises interprocedurais clássicas não são confiáveis para analisar binários cujas operações não podem ser determinadas. Uma nova definição de sensitividade ao contexto é introduzida, baseada no estado da pilha em qualquer instrução. Enquanto mudanças em contextos à chamada de procedimento são intrinsecamente relacionadas com transferência de controle, assim, podendo ser obtidas em termos de caminhos em um grafo de controle de fluxo interprocedural, o mesmo não é aplicável para mudanças em contextos à pilha. Um framework baseado em interpretação abstrata é desenvolvido para avaliar contexto baseado no estado da pilha e para derivar métodos baseados em contextos à chamada de procedimento para uso com contextos baseados no estado da pilha. O método proposto não requer o uso explícito de instruções de chamada e retorno de procedimento, porém depende do... (Resumo completo, clicar acesso eletrônico abaixo)
Abstract: A code obfuscation intends to confuse a program in order to make it more difficult to understand while preserving its functionality. Programs may be obfuscated to protect intellectual property and to increase security of code. Programs may also be obfuscated to hide malicious behavior and to evade detection by anti-virus scanners. We introduce a method for context-sensitive analysis of binaries that may have obfuscated procedure call and return operations. These binaries may use direct stack operators instead of the native call and ret instructions to achieve equivalent behavior. Since the definition of context-sensitivity and algorithms for context-sensitive analysis have thus far been based on the specific semantics associated to procedure call and return operations, classic interprocedural analyses cannot be used reliably for analyzing programs in which these operations cannot be discerned. A new notion of context-sensitivity is introduced that is based on the state of the stack at any instruction. While changes in calling-context are associated with transfer of control, and hence can be reasoned about in terms of paths in an interprocedural control flow graph (ICFG), the same is not true for changes in stack-context. An abstract interpretation based framework is developed to reason about stack-context and to derive analogues of call-strings based methods for context-sensitive analysis using stack-context. This analysis requires knowledge of how the stack, rather the stack pointer, is represented and of the operators that manipulate the stack pointer. The method presented is used to create a context-sensitive version of Venable et al.'s algorithm for detecting obfuscated calls. Experimental results show that the context-sensitive version of the algorithm generates more precise results and is also computationally more efficient than its context-insensitive counterpart.
Doutor
APA, Harvard, Vancouver, ISO, and other styles
10

Gasparini, Luca. "Severity sensitive norm analysis and decision making." Thesis, University of Aberdeen, 2017. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=231873.

Full text
Abstract:
Normative systems have been proposed as a useful abstraction to represent ideals of behaviour for autonomous agents in a social context. They specify constraints that agents ought to follow, but may sometimes be violated. Norms can increase the predictability of a system and make undesired situations less likely. When designing normative systems, it is important to anticipate the effects of possible violations and understand how robust these systems are to violations. Previous research on robustness analysis of normative systems builds upon simplistic norm formalisms, lacking support for the specification of complex norms that are often found in real world scenarios. Furthermore, existing approaches do not consider the fact that compliance with different norms may be more or less important in preserving some desirable properties of a system; that is, norm violations may vary in severity. In this thesis we propose models and algorithms to represent and reason about complex norms, where their violation may vary in severity. We build upon existing preference-based deontic logics and propose mechanisms to rank the possible states of a system according to what norms they violate, and their severity. Further, we propose mechanisms to analyse the properties of the system under different compliance assumptions, taking into account the severity of norm violations. Our norm formalism supports the specification of norms that regulate temporally extended behaviour and those that regulate situations where other norms have been violated. We then focus on algorithms that allow coalitions of agents to coordinate their actions in order to minimise the risk of severe violations. We propose offline algorithms and heuristics for pre-mission planning in stochastic scenarios where there is uncertainty about the current state of the system. 
We then develop online algorithms that allow agents to maintain a certain degree of coordination and to use communication to improve their performance.
APA, Harvard, Vancouver, ISO, and other styles
11

Poveda, David. "Sensitivity analysis of capital projects." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/27990.

Full text
Abstract:
This thesis presents a very generalized model useful in the economic evaluation of capital projects. Net Present Value and the Internal Rate of Return are used as the performance measures. A theoretical framework to perform sensitivity analysis including bivariate analysis and sensitivity to functional forms is presented. During the development of the model, emphasis is given to the financial mechanisms available to fund large capital projects. Also, mathematical functions that can be used to represent cash flow profiles generated in each project phase are introduced. These profiles are applicable to a number of project types including oil and gas, mining, real estate and chemical process projects. Finally, a computer program has been developed which treats most of the theoretical concepts explored in this thesis, and an example of its application is presented. This program constitutes a useful teaching tool.
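The two performance measures named above are easy to make concrete. The sketch below is generic; the cash-flow numbers are invented, and the bisection assumes NPV decreases with the rate, as it does for a conventional investment profile with an initial outflow.

```python
import numpy as np

def npv(rate, cashflows):
    """Net Present Value: discounted sum of per-period cash flows."""
    t = np.arange(len(cashflows))
    return float(np.sum(np.asarray(cashflows, dtype=float) / (1 + rate) ** t))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    """Internal Rate of Return: the rate at which NPV = 0, by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

cf = [-1000.0, 500.0, 500.0, 500.0]   # invented example project
rate = irr(cf)
```

One-at-a-time sensitivity then amounts to recomputing NPV under perturbed cash flows or discount rates, and bivariate analysis to perturbing two such inputs jointly.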
Applied Science, Faculty of
Civil Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
12

Faria, Jairo Rocha de. "Second order topological sensitivity analysis." Laboratório Nacional de Computação Científica, 2008. http://www.lncc.br/tdmc/tde_busca/arquivo.php?codArquivo=141.

Full text
Abstract:
The topological derivative provides the sensitivity of a given shape functional with respect to an infinitesimal non-smooth domain perturbation (insertion of a hole or inclusion, for instance). Classically, this derivative comes from the second term of the topological asymptotic expansion, dealing only with infinitesimal perturbations. However, for practical applications, we need to insert perturbations of finite sizes. Therefore, we consider other terms in the expansion, leading to the concept of higher-order topological derivatives. In particular, we observe that the topological-shape sensitivity method can be naturally extended to calculate these new terms, resulting in a systematic methodology to obtain higher-order topological derivatives. In order to present these ideas, initially we apply this technique to some problems with exact solutions, where the topological asymptotic expansion is obtained up to third order. Later, we calculate the first- as well as the second-order topological derivative for the total potential energy associated with the Laplace equation in a two-dimensional domain perturbed by the insertion of a hole, considering homogeneous Neumann or Dirichlet boundary conditions, or an inclusion with a thermal conductivity coefficient different from that of the bulk material. With these results, we present some numerical experiments showing the influence of the second-order topological derivative in the topological asymptotic expansion, which has two main features: it allows us to deal with perturbations of finite sizes and provides a better descent direction in optimization and reconstruction algorithms.
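The expansion the abstract refers to has the standard form (notation assumed here, not quoted from the thesis): for a shape functional ψ and a domain perturbed by a hole of radius ε centred at a point x̂,

```latex
\psi(\Omega_\varepsilon) \;=\; \psi(\Omega)
  \;+\; f_1(\varepsilon)\, D_T^{(1)}\psi(\hat{x})
  \;+\; f_2(\varepsilon)\, D_T^{(2)}\psi(\hat{x})
  \;+\; o(f_2(\varepsilon)),
```

where f_1 and f_2 are positive gauge functions with f_2(ε) = o(f_1(ε)) as ε → 0, D_T^{(1)}ψ is the classical (first-order) topological derivative, and D_T^{(2)}ψ is the second-order term whose influence for finite-sized perturbations the thesis studies.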
A derivada topológica fornece a sensibilidade de uma dada função custo quando uma perturbação não suave e infinitesimal (furo ou inclusão, por exemplo) é introduzida. Classicamente, esta derivada vem do segundo termo da expansão assintótica topológica considerando-se apenas perturbações infinitesimais. No entanto, em aplicações práticas, é necessário considerar perturbações de tamanho finito. Motivado por este fato, o presente trabalho tem como objetivo fundamental introduzir o conceito de derivadas topológicas de ordens superiores, o que permite considerar mais termos na expansão assintótica topológica. Em particular, observa-se que o topological-shape sensitivity method pode ser naturalmente estendido para o cálculo destes novos termos, resultando em uma metodologia sistemática de análise de sensibilidade topológica de ordem superior. Para se apresentar essas ideias, inicialmente essa técnica é verificada através de alguns problemas que admitem solução exata, onde se calcula explicitamente a expansão assintótica topológica até terceira ordem. Posteriormente, considera-se a equação de Laplace bidimensional, cujo domínio é topologicamente perturbado pela introdução de um furo com condição de contorno de Neumann ou de Dirichlet homogêneas, ou ainda de uma inclusão com propriedade física distinta do meio. Nesse caso, são calculadas explicitamente as derivadas topológicas de primeira e segunda ordens. Com os resultados obtidos em todos os casos, estuda-se a influência dos termos de ordens superiores na expansão assintótica topológica, através de experimentos numéricos. Em particular, observa-se que esses novos termos, além de permitir considerar perturbações de tamanho finito, desempenham ainda um importante papel tanto como fator de correção da expansão assintótica topológica, quanto como direção de descida em processos de otimização.
Finalmente, cabe mencionar que a metodologia desenvolvida neste trabalho apresenta um grande potencial para aplicação na otimização e em algoritmos de reconstrução.
APA, Harvard, Vancouver, ISO, and other styles
13

Witzgall, Zachary F. "Parametric sensitivity analysis of microscrews." Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4892.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains xi, 73 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 52-53).
APA, Harvard, Vancouver, ISO, and other styles
14

鄧國良 and Kwok-leong Tang. "Sensitivity analysis of bootstrap methods." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31977479.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Fang, Xinding S. M. Massachusetts Institute of Technology. "Sensitivity analysis of fracture scattering." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/59789.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 40-42).
We use a 2-D finite difference method to numerically calculate the seismic response of a single finite fracture in a homogeneous medium. In our experiments, we use a point explosive source and ignore the free surface effect, so the fracture scattering wave field contains two parts: P-to-P scattering and P-to-S scattering. We vary the fracture compliance within a range considered appropriate for field observations, 10⁻¹² m/Pa to 10⁻⁹ m/Pa, and investigate the variation of the scattering pattern of a single fracture as a function of normal and tangential fracture compliance. We show that P-to-P and P-to-S fracture scattering patterns are sensitive to the ratio of normal to tangential fracture compliance and to the incident angle, while radiation pattern amplitudes scale as the square of the compliance. We find that, for a vertical fracture system, if the source is located at the surface, most of the energy scattered by a fracture propagates downwards; specifically, the P-to-P scattering energy propagates down and forward while the P-to-S scattering energy propagates down and backward. Therefore, most of the fracture scattered waves observed on the surface are first scattered by fractures and then reflected back to the surface by reflectors below the fracture zone, so the fracture scattered waves have complex ray paths and are contaminated by the reflectivity of matrix reflectors.
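The normal and tangential compliances varied in the study typically enter through the linear-slip fracture model (a standard representation, assumed here rather than quoted from the thesis), which relates the displacement discontinuity across the fracture to the traction acting on it:

```latex
[\mathbf{u}] \;=\; \mathbf{Z}\,\boldsymbol{\tau},
\qquad
\mathbf{Z} \;=\;
\begin{pmatrix}
Z_N & 0 \\
0   & Z_T
\end{pmatrix},
```

where [u] is the jump in displacement across the fracture, τ is the traction vector, and Z_N and Z_T (in m/Pa) are the normal and tangential fracture compliances whose ratio controls the scattering patterns described above.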
by Xinding Fang.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
16

Masinde, Brian. "Birds' Flight Range: Sensitivity Analysis." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166248.

Full text
Abstract:
'Flight' is a program that uses flight mechanics to estimate the flight range of birds. This program, used by ornithologists, is only available for Windows OS. It requires manual entry of body measurements and constants (one observation at a time), and this is time-consuming. Therefore, the first task is to implement the methods in R, a programming language that runs on various platforms. The resulting package, named flying, has three advantages: first, it can estimate the flight range of multiple bird observations; second, it makes it easier to experiment with different settings (e.g. constants) in comparison to Flight; and third, it is open-source, making contribution relatively easy. Uncertainty and global sensitivity analyses are carried out on body measurements separately and with various constants. In doing so, the most influential body variables and constants are discovered. This task would have been near impossible to undertake using 'Flight'. A comparison is made amongst the results from a crude partitioning method, a generalized additive model, gradient boosting machines, and a quasi-Monte Carlo method. All of these are based on Sobol's method for variance decomposition. The results show that fat mass drives the simulations, with other inputs playing a secondary role (for example, mechanical conversion efficiency and body drag coefficient).
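Of the estimators compared, the crude partitioning method is the simplest to sketch: bin one input and take the variance of the bin-conditional means of the output over the total variance. This is a generic sketch on synthetic data, not the flying package's implementation.

```python
import numpy as np

def first_order_partition(x, y, bins=20):
    """Crude partitioning estimate of a first-order sensitivity index:
    Var(E[y | x in bin]) / Var(y), using quantile bins on x."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    return float(np.average((means - y.mean()) ** 2, weights=counts) / y.var())

rng = np.random.default_rng(2)
x1 = rng.uniform(0.0, 1.0, 5000)   # influential input (think: fat mass)
x2 = rng.uniform(0.0, 1.0, 5000)   # irrelevant input
y = x1 ** 2                        # toy model output
S1, S2 = first_order_partition(x1, y), first_order_partition(x2, y)
```

The influential input gets an index near 1 and the irrelevant one near 0, which is exactly the kind of ranking the thesis uses to single out fat mass.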
APA, Harvard, Vancouver, ISO, and other styles
17

Tang, Kwok-leong. "Sensitivity analysis of bootstrap methods." [Hong Kong] : University of Hong Kong, 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13793792.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Cedrini, Luca. "Time Sensitive Networks: analysis, testing, scheduling and simulation." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22305/.

Full text
Abstract:
The industrial automation world is an extremely complex framework, where state-of-the-art, cutting-edge technologies are continuously being replaced in order to achieve the best possible performance. The criterion guiding this change has always been productivity. This term has, however, a broad meaning, and there are many ways to improve productivity that go beyond the simplistic products/min ratio. One concept that has been increasingly emerging in the last years is the idea of interoperability: a flexible environment, where products of different and diverse vendors can be easily integrated together, would increase productivity by simplifying the design and installation of any automatic system. Connected to this concept of interoperability is the Industrial Internet of Things (IIoT), which is one of the main sources of industrial innovation at the moment: the idea of a huge network connecting every computer, sensor, or generic device so as to allow seamless data exchange, status updates, and information passing. It is in this framework that Time Sensitive Networks are placed: a new, work-in-progress set of communication standards whose goal is to provide a common infrastructure where all kinds of data important to an industrial automation environment, namely deterministic and non-deterministic data, can flow. This work aims to be an initial step towards the actual implementation of the above-mentioned technology. The focus will therefore be not only on the theoretical aspects, but also on a set of practical tests that have been carried out in order to evaluate the performance, the required hardware and software features, and the advantages and drawbacks of such an application.
APA, Harvard, Vancouver, ISO, and other styles
19

Wang, Ruhuai. "Frequency domain fatigue analysis of dynamically sensitive structures." Thesis, University College London (University of London), 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264193.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Whaley, John. "Context-sensitive pointer analysis using binary decision diagrams /." May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Muminovic, Mia, and Haris Suljic. "Performance Study and Analysis of Time Sensitive Networking." Thesis, Mälardalens högskola, Inbyggda system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44085.

Full text
Abstract:
Modern technology requires reliable, fast, and cheap networks as a backbone for data transmission. Among many available solutions, switched Ethernet combined with the Time Sensitive Networking (TSN) standard excels because it provides high bandwidth and real-time characteristics by utilizing low-cost hardware. For the industry to acknowledge this technology, extensive performance studies need to be conducted, and this thesis provides one. Concretely, the thesis examines the performance of two amendments, IEEE 802.1Qbv and IEEE 802.1Qbu, that were recently appended to the TSN standard. The academic community understands the potential of this technology, so several simulation frameworks already exist, but most of them are unstable and undertested. This thesis builds on top of existing frameworks and utilizes the framework developed in OMNeT++. Performance is analyzed through several segregated scenarios and is measured in terms of end-to-end transmission latency and link utilization. The attained results justify the industry's interest in this technology and could lead to its greater representation in the future.
APA, Harvard, Vancouver, ISO, and other styles
22

Dusastre, Vincent Jean-Marie. "Semiconducting oxide gas-sensitive resistors." Thesis, University College London (University of London), 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300516.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Kennedy, Christopher Brandon. "GPT-Free Sensitivity Analysis for Reactor Depletion and Analysis." Thesis, North Carolina State University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3710730.

Full text
Abstract:

Introduced in this dissertation is a novel approach that forms a reduced-order model (ROM), based on subspace methods, that allows for the generation of response sensitivity profiles without the need to set up or solve the generalized inhomogeneous perturbation theory (GPT) equations. The new approach, denoted hereinafter as the generalized perturbation theory free (GPT-Free) approach, computes response sensitivity profiles in a manner that is independent of the number or type of responses, allowing for an efficient computation of sensitivities when many responses are required. Moreover, the reduction error associated with the ROM is quantified by means of a Wilks’ order statistics error metric denoted by the κ-metric.

Traditional GPT has been recognized as the most computationally efficient approach for performing sensitivity analyses of models with many input parameters, e.g. when forward sensitivity analyses are computationally overwhelming. However, most neutronics codes that can solve the fundamental (homogeneous) adjoint eigenvalue problem do not have GPT (inhomogeneous) capabilities unless envisioned during code development. Additionally, codes that use a stochastic algorithm, i.e. Monte Carlo methods, may have difficult or undefined GPT equations. When GPT calculations are available through software, the aforementioned efficiency gained from the GPT approach diminishes when the model has both many output responses and many input parameters. The GPT-Free approach addresses these limitations, first by only requiring the ability to compute the fundamental adjoint from perturbation theory, and second by constructing a ROM from fundamental adjoint calculations, constraining input parameters to a subspace. This approach bypasses the requirement to perform GPT calculations while simultaneously reducing the number of simulations required.

In addition to the reduction of simulations, a major benefit of the GPT-Free approach is explicit control of the reduced order model (ROM) error. When building a subspace using the GPT-Free approach, the reduction error can be selected based on an error tolerance for generic flux response-integrals. The GPT-Free approach then solves the fundamental adjoint equation with randomly generated sets of input parameters. Using properties from linear algebra, the fundamental k-eigenvalue sensitivities, spanned by the various randomly generated models, can be related to response sensitivity profiles by a change of basis. These sensitivity profiles are the first-order derivatives of responses to input parameters. The quality of the basis is evaluated using the κ-metric, developed from Wilks’ order statistics, on the user-defined response functionals that involve the flux state-space. Because the κ-metric is formed from Wilks’ order statistics, a probability-confidence interval can be established around the reduction error based on user-defined responses such as fuel-flux, max-flux error, or other generic inner products requiring the flux. In general, the GPT-Free approach will produce a ROM with a quantifiable, user-specified reduction error.

This dissertation demonstrates the GPT-Free approach for steady state and depletion reactor calculations modeled by SCALE6, an analysis tool developed by Oak Ridge National Laboratory. Future work includes the development of GPT-Free for new Monte Carlo methods where the fundamental adjoint is available. Additionally, the approach in this dissertation examines only the first derivatives of responses, the response sensitivity profile; extension and/or generalization of the GPT-Free approach to higher order response sensitivity profiles is natural area for future research.
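The probability-confidence statement behind such Wilks-based error metrics can be illustrated with a short sketch. This is a generic first-order Wilks computation, not the dissertation's κ-metric implementation: if the maximum reduction error over n independent random samples is taken as the bound, it covers at least the γ-quantile of the true error distribution with confidence β once 1 − γⁿ ≥ β.

```python
# First-order Wilks sample-size formula (illustrative sketch, not the
# dissertation's code): smallest n with 1 - gamma**n >= beta.
import math

def wilks_sample_size(gamma, beta):
    """Samples needed so the observed maximum bounds the gamma-quantile
    of the error distribution with confidence beta."""
    return math.ceil(math.log(1.0 - beta) / math.log(gamma))

# The classic 95/95 criterion needs 59 samples:
print(wilks_sample_size(0.95, 0.95))  # → 59
```

The same formula explains why the number of adjoint solves needed for a given probability-confidence level is independent of the number of input parameters.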

APA, Harvard, Vancouver, ISO, and other styles
24

Guo, Jia. "Uncertainty analysis and sensitivity analysis for multidisciplinary systems design." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2008. http://scholarsmine.mst.edu/thesis/pdf/Guo_09007dcc8066e905.pdf.

Full text
Abstract:
Thesis (Ph. D.)--Missouri University of Science and Technology, 2008.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed May 28, 2009). Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
25

Konarski, Roman. "Sensitivity analysis for structural equation models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq22893.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Sulieman, Hana. "Parametric sensitivity analysis in nonlinear regression." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0004/NQ27858.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Yu, Jianbin. "Flexible reinforced pavement structure-sensitivity analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0015/MQ52682.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Kolen, A. W. J., A. H. G. Rinnooy Kan, C. P. M. van Hoesel, and Albert Wagelmans. "Sensitivity Analysis of List Scheduling Heuristics." Massachusetts Institute of Technology, Operations Research Center, 1990. http://hdl.handle.net/1721.1/5268.

Full text
Abstract:
When jobs have to be processed on a set of identical parallel machines so as to minimize the makespan of the schedule, list scheduling rules form a popular class of heuristics. The order in which jobs appear on the list is assumed here to be determined by the relative size of their processing times; well known special cases are the LPT rule and the SPT rule, in which the jobs are ordered according to non-increasing and non-decreasing processing time respectively. When one of the job processing times is gradually increased, the schedule produced by a list scheduling rule will be affected in a manner reflecting its sensitivity to data perturbations. We analyze this phenomenon and obtain analytical support for the intuitively plausible notion that the sensitivity of a list scheduling rule increases with the quality of the schedule produced.
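The list scheduling rule described above can be sketched in a few lines. This is a generic greedy implementation (not the paper's code): jobs are ordered by processing time and each job goes to the currently least-loaded machine, so increasing one processing time can reshuffle later assignments and change the makespan.

```python
# Minimal list scheduling on identical parallel machines (illustrative sketch).
import heapq

def list_schedule_makespan(times, m, rule="LPT"):
    """Makespan of the greedy list schedule on m identical machines."""
    order = sorted(times, reverse=(rule == "LPT"))  # LPT: longest job first
    loads = [0.0] * m
    heapq.heapify(loads)
    for t in order:
        least = heapq.heappop(loads)   # least-loaded machine gets the job
        heapq.heappush(loads, least + t)
    return max(loads)

jobs = [7, 7, 6, 6, 5, 4, 4, 2]
base = list_schedule_makespan(jobs, 3, "LPT")
# Gradually increasing one processing time (7 -> 8) perturbs the schedule:
perturbed = list_schedule_makespan([8, 7, 6, 6, 5, 4, 4, 2], 3, "LPT")
print(base, perturbed)  # → 15.0 16.0
```

Tracking how the makespan responds to such perturbations is exactly the sensitivity the paper analyzes.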
APA, Harvard, Vancouver, ISO, and other styles
29

Maginot, Jeremy. "Sensitivity analysis for multidisciplinary design optimization." Thesis, Cranfield University, 2007. http://dspace.lib.cranfield.ac.uk/handle/1826/5667.

Full text
Abstract:
When designing a complex industrial product, the designer often has to optimise simultaneously multiple conflicting criteria. Such a problem does not usually have a unique solution, but a set of non-dominated solutions known as Pareto solutions. In this context, the progress made in the development of more powerful but more computationally demanding numerical methods has led to the emergence of multi-disciplinary optimisation (MDO). However, running computationally expensive multi-objective optimisation procedures to obtain a comprehensive description of the set of Pareto solutions might not always be possible. The aim of this research is to develop a methodology to assist the designer in the multi-objective optimisation process. As a result, an approach to enhance the understanding of the optimisation problem and to gain some insight into the set of Pareto solutions is proposed. This approach includes two main components. First, global sensitivity analysis is used prior to the optimisation procedure to identify non-significant inputs, aiming to reduce the dimensionality of the problem. Second, once a candidate Pareto solution is obtained, the local sensitivity is computed to understand the trade-offs between objectives. Exact linear and quadratic approximations of the Pareto surface have been derived in the general case and are shown to be more accurate than the ones found in the literature. In addition, sufficient conditions to identify non-differentiable Pareto points have been proposed. Ultimately, this approach enables the designer to gain more knowledge about the multi-objective optimisation problem with the main concern of minimising the computational cost. A number of test cases have been considered to evaluate the approach. These include algebraic examples, for direct analytical validation, and more representative test cases to evaluate its usefulness.
In particular, an airfoil design problem has been developed and implemented to assess the approach on a typical engineering problem. The results demonstrate the potential of the methodology to achieve a reduction of computational time by concentrating the optimisation effort on the most significant variables. The results also show that the Pareto approximations provide the designer with essential information about trade-offs at reduced computational cost.
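The notion of non-dominated (Pareto) solutions the abstract relies on can be illustrated with a small generic sketch (not taken from the thesis): a solution is non-dominated if no other solution is at least as good in every objective and strictly better in at least one, assuming minimisation.

```python
# Generic Pareto filter for minimisation problems (illustrative sketch).
def pareto_front(points):
    """Return the non-dominated points from a list of objective tuples."""
    def dominates(p, q):
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical trade-off between, say, energy demand and capital cost:
designs = [(3.0, 9.0), (4.0, 7.0), (5.0, 5.0), (6.0, 6.0), (7.0, 4.0)]
print(pareto_front(designs))  # (6.0, 6.0) is dominated by (5.0, 5.0)
```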
APA, Harvard, Vancouver, ISO, and other styles
30

North, Simon John. "High sensitivity mass spectrometric glycoprotein analysis." Thesis, Imperial College London, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.404993.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Johnson, Timothy J. "Sensitivity analysis of transputer workfarm topologies." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27258.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Khan, Kamil Ahmad. "Sensitivity analysis for nonsmooth dynamic systems." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98156.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 369-377).
Nonsmoothness in dynamic process models can hinder conventional methods for simulation, sensitivity analysis, and optimization, and can be introduced, for example, by transitions in flow regime or thermodynamic phase, or through discrete changes in the operating mode of a process. While dedicated numerical methods exist for nonsmooth problems, these methods require generalized derivative information that can be difficult to furnish. This thesis presents some of the first automatable methods for computing these generalized derivatives. Firstly, Nesterov's lexicographic derivatives are shown to be elements of the plenary hull of Clarke's generalized Jacobian whenever they exist. Lexicographic derivatives thus provide useful local sensitivity information for use in numerical methods for nonsmooth problems. A vector forward mode of automatic differentiation is developed and implemented to evaluate lexicographic derivatives for finite compositions of simple lexicographically smooth functions, including the standard arithmetic operations, trigonometric functions, exp / log, piecewise differentiable functions such as the absolute-value function, and other nonsmooth functions such as the Euclidean norm. This method is accurate, automatable, and computationally inexpensive. Next, given a parametric ordinary differential equation (ODE) with a lexicographically smooth right-hand side function, parametric lexicographic derivatives of a solution trajectory are described in terms of the unique solution of a certain auxiliary ODE. A numerical method is developed and implemented to solve this auxiliary ODE, when the right-hand side function for the original ODE is a composition of absolute-value functions and analytic functions. Computationally tractable sufficient conditions are also presented for differentiability of the original ODE solution with respect to system parameters. 
Sufficient conditions are developed under which local inverse and implicit functions are lexicographically smooth. These conditions are combined with the results above to describe parametric lexicographic derivatives for certain hybrid discrete/continuous systems, including some systems whose discrete mode trajectories change when parameters are perturbed. Lastly, to eliminate a particular source of nonsmoothness, a variant of McCormick's convex relaxation scheme is developed and implemented for use in global optimization methods. This variant produces twice-continuously differentiable convex underestimators for composite functions, while retaining the advantageous computational properties of McCormick's original scheme. Gradients are readily computed for these underestimators using automatic differentiation.
by Kamil Ahmad Khan.
Ph. D.
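The forward-mode propagation of generalized derivative information through nonsmooth functions can be sketched with dual numbers. This is a hypothetical minimal illustration, not the thesis implementation: away from kinks the usual chain rule applies, and at x = 0 the directional derivative of |x| in direction d is |d|, which is the flavour of generalized derivative the thesis automates.

```python
# Forward-mode AD with a nonsmooth rule for abs (illustrative sketch).
from dataclasses import dataclass

@dataclass
class Dual:
    val: float   # function value
    dot: float   # directional derivative

    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

def nsabs(x: Dual) -> Dual:
    if x.val > 0:
        return Dual(x.val, x.dot)
    if x.val < 0:
        return Dual(-x.val, -x.dot)
    return Dual(0.0, abs(x.dot))   # directional derivative of |.| at the kink

# f(x) = |x| * x has derivative 2|x|; at the kink x = 0 the directional
# derivative in direction 1 is 0:
x = Dual(0.0, 1.0)
y = nsabs(x) * x
print(y.val, y.dot)  # → 0.0 0.0
```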
APA, Harvard, Vancouver, ISO, and other styles
34

Saxena, Vibhu Prakash. "Sensitivity analysis of oscillating hybrid systems." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61899.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 137-140).
Many models of physical systems oscillate periodically and exhibit both discrete-state and continuous-state dynamics. These systems are called oscillating hybrid systems and find applications in diverse areas of science and engineering, including robotics, power systems, systems biology, and so on. A useful tool that can provide valuable insights into the influence of parameters on the dynamic behavior of such systems is sensitivity analysis. A theory for sensitivity analysis with respect to the initial conditions and/or parameters of oscillating hybrid systems is developed and discussed. Boundary-value formulations are presented for initial conditions, period, period sensitivity and initial conditions for the sensitivities. A difference equation analysis of general homogeneous equations and parametric sensitivity equations with linear periodic piecewise continuous coefficients is presented. It is noted that the monodromy matrix for these systems is not a fundamental matrix evaluated after one period, but depends on one. A three part decomposition of the sensitivities is presented based on the analysis. These three parts classify the influence of the parameters on the period, amplitude and relative phase of the limit-cycles of hybrid systems, respectively. The theory developed is then applied to the computation of sensitivity information for some examples of oscillating hybrid systems using existing numerical techniques and methods. The relevant information given by the sensitivity trajectory and its parts can be used in algorithms for different applications such as parameter estimation, control system design, stability analysis and dynamic optimization.
by Vibhu Prakash Saxena.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
35

Siannis, Fotios. "Sensitivity analysis for correlated survival models." Thesis, University of Warwick, 2001. http://wrap.warwick.ac.uk/78861/.

Full text
Abstract:
In this thesis we introduce a model for informative censoring. We assume that the joint distribution of the failure and the censored times depends on a parameter δ, which is actually a measure of the possible dependence, and a bias function B(t,θ). Knowledge of δ means that the joint distribution is fully specified, while B(t,θ) can be any function of the failure times. Being unable to draw inferences about δ, we perform a sensitivity analysis on the parameters of interest for small values of δ, based on a first order approximation. This will give us an idea of how robust our estimates are in the presence of small dependencies, and whether the ignorability assumption can lead to misleading results. Initially we propose the model for the general parametric case. This is the simplest possible case and we explore the different choices for the standardized bias function. After choosing a suitable function for B(t,θ) we explore the potential interpretation of δ through its relation to the correlation between quantities of the failure and the censoring processes. Generalizing our parametric model we propose a proportional hazards structure, allowing the presence of covariates. At this stage we present a data set from a leukemia study in which the knowledge, under certain assumptions, of the censored and the death times of a number of patients allows us to explore the impact of informative censoring on our estimates. Following the analysis of the above data we introduce an extension to Cox's partial likelihood, which we will call the "modified Cox's partial likelihood", based on the assumption that censored times do contribute information about the parameters of interest. Finally we perform parametric bootstraps to assess the validity of our model and to explore up to what values of the parameter δ our approximation holds.
APA, Harvard, Vancouver, ISO, and other styles
36

Santos, Miguel Duque. "UK pension funds : liability sensitivity analysis." Master's thesis, Instituto Superior de Economia e Gestão, 2019. http://hdl.handle.net/10400.5/19509.

Full text
Abstract:
Master's in Mathematical Finance
In the United Kingdom, most employers offer their employees some type of occupational pension scheme. One of these types is a Defined Benefit pension plan, this is when an employer promises to pay a certain (defined) amount of pension benefit to the employee based on the final salary and years of service. So, in this type of occupational pension scheme, the employers bear all the risk, as they have to ensure the payment of the retirement benefits to the members when they fall due. The Actuaries are able to estimate the future payments and discount them to a current date. This present value of the future payments is called the liability and can be compared with the amount of assets to check there is enough money in the present to pay the promised future benefits. However, the liability is subject to variation over time because it is exposed to interest and inflation risk. Taking this into account, Mercer developed a sophisticated investment strategy called the Liability Benchmark Portfolio or LBP which is a low risk investment portfolio composed by zero coupon government bonds that will closely match the sensitivities of the liabilities to shifts in the inflation and interest rate. My task in the internship was to calculate these sensitivities of the liabilities that are required by the investment team to be able to build an LBP. Therefore, the risk will be reduced and we are closer to ensure that the members of the fund will receive their pension benefits.
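The liability sensitivity calculation described above can be sketched generically. The cash flows and rates below are hypothetical, not Mercer's model: present-value the projected benefit payments, bump the discount curve by one basis point, and revalue; the difference (often called PV01) is what a portfolio of zero coupon government bonds is built to match.

```python
# Interest-rate sensitivity of a liability cash-flow stream (illustrative sketch).
def present_value(cashflows, rate):
    """PV of (year, amount) cash flows at a flat annually compounded rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in cashflows)

liability_cashflows = [(t, 1000.0) for t in range(1, 31)]   # 30 years of benefits
base = present_value(liability_cashflows, 0.04)
bumped = present_value(liability_cashflows, 0.04 + 0.0001)  # +1bp parallel shift
pv01 = base - bumped
print(base, pv01)
```

An inflation sensitivity (IE01) is computed the same way by bumping the assumed inflation-linked cash-flow growth instead of the discount rate.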
APA, Harvard, Vancouver, ISO, and other styles
37

Sen, Sharma Pradeep Kumar. "Sensitivity analysis of ship longitudinal strength." Thesis, Virginia Tech, 1988. http://hdl.handle.net/10919/45183.

Full text
Abstract:
The present work addresses the usefulness of a simple and efficient computer program (ULTSTR) for a sensitivity analysis of ship longitudinal strength, where this program was originally developed for calculating the collapse moment. Since the program is efficient, it can be used to obtain ultimate strength variability for various values of the parameters which affect the longitudinal strength, viz., yield stress, Young's modulus, thickness, initial imperfections, breadth, depth, etc. The results obtained with this approach are in good agreement with those obtained by use of a more complex nonlinear finite element program, USAS, developed by the American Bureau of Shipping.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
38

DeBrunner, Victor Earl. "Sensitivity analysis of digital filter structures." Thesis, Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/104319.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Wycoff, Nathan Benjamin. "Gradient-Based Sensitivity Analysis with Kernels." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/104683.

Full text
Abstract:
Emulation of computer experiments via surrogate models can be difficult when the number of input parameters determining the simulation grows any greater than a few dozen. In this dissertation, we explore dimension reduction in the context of computer experiments. The active subspace method is a linear dimension reduction technique which uses the gradients of a function to determine important input directions. Unfortunately, we cannot expect to always have access to the gradients of our black-box functions. We thus begin by developing an estimator for the active subspace of a function using kernel methods to indirectly estimate the gradient. We then demonstrate how to deploy the learned input directions to improve the predictive performance of local regression models by "undoing" the active subspace. Finally, we develop notions of sensitivities which are local to certain parts of the input space, which we then use to develop a Bayesian optimization algorithm which can exploit locally important directions.
Doctor of Philosophy
Increasingly, scientists and engineers developing new understanding or products rely on computers to simulate complex phenomena. Sometimes, these computer programs are so detailed that the amount of time they take to run becomes a serious issue. Surrogate modeling is the problem of trying to predict a computer experiment's result without having to actually run it, on the basis of having observed the behavior of similar simulations. Typically, computer experiments have different settings which induce different behavior. When there are many different settings to tweak, typical surrogate modeling approaches can struggle. In this dissertation, we develop a technique for deciding which input settings, or even which combinations of input settings, we should focus our attention on when trying to predict the output of the computer experiment. We then deploy this technique both to prediction of computer experiment outputs as well as to trying to find which of the input settings yields a particular desired result.
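The active subspace idea the dissertation builds on can be shown in a few lines. This sketch uses an assumed quadratic test function with exact gradients, not the dissertation's kernel-based gradient estimator: average the outer products of sampled gradients, eigendecompose, and read off the dominant input directions.

```python
# Active subspace estimation from gradient samples (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
a = np.array([1.0, 2.0, 0.0])          # f(x) = (a . x)^2 varies only along a

def grad_f(x):
    return 2.0 * (a @ x) * a           # exact gradient of the test function

X = rng.uniform(-1.0, 1.0, size=(500, 3))
G = np.array([grad_f(x) for x in X])   # 500 gradient samples
C = G.T @ G / len(X)                   # average outer product of gradients
eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order

# The top eigenvector recovers the single active direction a (up to sign):
active_dir = eigvecs[:, -1]
print(np.abs(active_dir))
```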
APA, Harvard, Vancouver, ISO, and other styles
40

Kern, Simon. "Sensitivity Analysis in 3D Turbine CFD." Thesis, KTH, Mekanik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210821.

Full text
Abstract:
A better understanding of turbine performance and its sensitivity to variations in the inlet boundary conditions is crucial in the quest of further improving the efficiency of aero engines. Within the research efforts to reach this goal, a high-pressure turbine test rig has been designed by Rolls-Royce Deutschland in cooperation with the Deutsches Zentrum für Luft- und Raumfahrt (DLR), the German Aerospace Center. The scope of the test rig is high-precision measurement of aerodynamic efficiency including the effects of film cooling and secondary air flows as well as the improvement of numerical prediction tools, especially 3D Computational Fluid Dynamics (CFD). A sensitivity analysis of the test rig based on detailed 3D CFD computations was carried out with the aim to quantify the influence of inlet boundary condition variations occurring in the test rig on the outlet capacity of the first stage nozzle guide vane (NGV) and the turbine efficiency. The analysis considered variations of the cooling and rimseal leakage mass flow rates as well as fluctuations in the inlet distributions of total temperature and pressure. The influence of an increased rotor tip clearance was also studied. This thesis covers the creation, calibration and validation of the steady state 3D CFD model of the full turbine domain. All relevant geometrical details of the blades, walls and the rimseal cavities are included with the exception of the film cooling holes, which are replaced by a volume source term based cooling strip model to reduce the computational cost of the analysis. The high-fidelity CFD computation is run only on a sample of parameter combinations spread over the entire input parameter space, determined using the optimal latin hypercube technique. The subsequent sensitivity analysis is based on a Kriging response surface model fit to the sample data. 
The results are discussed with regard to the planned experimental campaign on the test rig, and general conclusions concerning the impacts of the studied parameters on turbine performance are deduced.
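The latin hypercube sampling mentioned above can be illustrated with a plain (non-optimised) sampler; this is a generic sketch, not the rig study's implementation. Each of the d input ranges is cut into n equal strata and every stratum is hit exactly once, giving better space coverage than plain Monte Carlo for the same number of expensive CFD runs.

```python
# Plain latin hypercube sampling in the unit hypercube (illustrative sketch).
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in [0, 1]^d with one sample per stratum in every dimension."""
    u = rng.uniform(size=(n, d))                       # jitter inside strata
    strata = np.array([rng.permutation(n) for _ in range(d)]).T
    return (strata + u) / n

rng = np.random.default_rng(1)
sample = latin_hypercube(8, 2, rng)
# Every dimension has exactly one point in each of the 8 strata:
print(np.sort((sample * 8).astype(int), axis=0))
```

An "optimal" latin hypercube additionally searches over the permutations to maximise a space-filling criterion such as the minimum pairwise distance.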
APA, Harvard, Vancouver, ISO, and other styles
41

Issac, Jason Cherian. "Sensitivity analysis of wing aeroelastic responses." Diss., This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-06062008-164301/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Kovalov, Ievgen. "Context-sensitive Points-To Analysis : Comparing precision and scalability." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-18225.

Full text
Abstract:
Points-to analysis is a static program analysis that tries to predict the dynamic behavior of programs without running them. It computes reference information by approximating, for each pointer in the program, the set of objects to which it could point at runtime. In order to justify new analysis techniques, they need to be compared to the state of the art regarding their accuracy and efficiency. One of the main parameters influencing precision in points-to analysis is context-sensitivity, which analyses each method separately for the different contexts it is called in. The price of adding this property to a points-to analysis is reduced scalability and increased memory consumption during the analysis. The goal of this thesis is to present a comparison of the precision and scalability of context-sensitive and context-insensitive analysis using three different points-to analysis techniques (Spark, Paddle, P2SSA) produced by two research groups. This comparison provides the basic trade-offs between scalability on the one hand and efficiency and accuracy on the other. Building on previous research in this field, several specific metrics were investigated and implemented covering each type of analysis, with and without context-sensitivity, for Spark, Paddle and P2SSA. These three approaches to points-to analysis demonstrate the achievements of different research groups, and a common output format makes it possible to choose the most efficient type of analysis for a particular purpose.
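The baseline computation being compared can be sketched as a toy. This is illustrative only (Spark, Paddle and P2SSA are far richer): a flow-insensitive, context-insensitive analysis propagates points-to sets along assignments until a fixpoint is reached.

```python
# Toy flow-insensitive points-to analysis (illustrative sketch).
def points_to(news, assigns):
    """news: (var, object) allocations; assigns: (dst, src) copies."""
    pts = {}
    for var, obj in news:
        pts.setdefault(var, set()).add(obj)
    changed = True
    while changed:                      # fixpoint: dst's set contains src's
        changed = False
        for dst, src in assigns:
            before = len(pts.setdefault(dst, set()))
            pts[dst] |= pts.get(src, set())
            changed |= len(pts[dst]) != before
    return pts

# a = new O1; b = new O2; c = a; c = b  →  c may point to O1 or O2
result = points_to([("a", "O1"), ("b", "O2")], [("c", "a"), ("c", "b")])
print(sorted(result["c"]))  # → ['O1', 'O2']
```

Context-sensitivity would refine this by keeping a separate points-to set per calling context, which is where the precision gain and the memory cost discussed above both come from.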
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Mengchao. "Sensitivity analysis and evolutionary optimization for building design." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/16282.

Full text
Abstract:
In order to achieve global carbon reduction targets, buildings must be designed to be energy efficient. Building performance simulation methods, together with sensitivity analysis and evolutionary optimization methods, can be used to generate design solutions and performance information that can be used in identifying energy and cost efficient design solutions. Sensitivity analysis is used to identify the design variables that have the greatest impacts on the design objectives and constraints. Multi-objective evolutionary optimization is used to find a Pareto set of design solutions that optimize the conflicting design objectives while satisfying the design constraints; building design being an inherently multi-objective process. For instance, there is commonly a desire to minimise both the building energy demand and capital cost while maintaining thermal comfort. Sensitivity analysis has previously been coupled with a model-based optimization in order to reduce the computational effort of running a robust optimization and to provide an insight into the solution sensitivities in the neighbourhood of each optimum solution. However, there has been little research conducted to explore the extent to which the solutions found from a building design optimization can be used for a global or local sensitivity analysis, or the extent to which the local sensitivities differ from the global sensitivities. It has also been common for the sensitivity analysis to be conducted using continuous variables, whereas building optimization problems are more typically formulated using a mixture of discretised-continuous variables (with physical meaning) and categorical variables (without physical meaning).
This thesis investigates three main questions: the form of global sensitivity analysis most appropriate for use with problems having mixed discretised-continuous and categorical variables; the extent to which samples taken from an optimization run can be used in a global sensitivity analysis, given that the optimization process causes these solutions to be biased; and the extent to which global and local sensitivities differ. The experiments conducted in this research are based on the mid-floor of a commercial office building having 5 zones, and which is located in Birmingham, UK. The optimization and sensitivity analysis problems are formulated with 16 design variables, including orientation, heating and cooling setpoints, window-to-wall ratios, start and stop time, and construction types. The design objectives are the minimisation of both energy demand and capital cost, with solution infeasibility being a function of occupant thermal comfort. It is concluded that a robust global sensitivity analysis can be achieved using stepwise regression with bidirectional elimination, rank transformation of the variables and BIC (Bayesian information criterion). It is concluded that, when the optimization is based on a genetic algorithm, solutions taken from the start of the optimization process can be reliably used in a global sensitivity analysis, and therefore, there is no need to generate a separate set of random samples for use in the sensitivity analysis. The extent to which the convergence of the variables during the optimization can be used as a proxy for the variable sensitivities has also been investigated. It is concluded that it is not possible to identify the relative importance of variables through the optimization, even though the most important variable exhibited fast and stable convergence.
Finally, it is concluded that differences exist in the variable rankings resulting from the global and local sensitivity methods, although the top-ranked variables from each approach tend to be the same. It is also concluded that the sensitivity of the objectives and constraints to all variables is obtainable through a local sensitivity analysis, but that a global sensitivity analysis is only likely to identify the most important variables. The repeatability of these conclusions has been investigated and confirmed by applying the methods to the example design problem with the building being located in four different climates (Birmingham, UK; San Francisco, US; and Chicago, US).
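The global sensitivity method the abstract settles on, stepwise regression with bidirectional elimination and BIC, can be sketched generically. This is not the thesis's implementation; the `bic` and `stepwise` functions and the synthetic data are illustrative assumptions, and the rank transformation the thesis also applies is omitted for brevity:

```python
import numpy as np

def bic(X, y):
    """BIC of an OLS fit: n*log(RSS/n) + k*log(n)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

def stepwise(X, y):
    """Bidirectional stepwise selection: at each round, try adding any unused
    column and dropping any selected one; keep the move that lowers BIC most."""
    selected, improved = [], True
    while improved:
        improved = False
        moves = [selected + [j] for j in range(X.shape[1]) if j not in selected]
        moves += [[j for j in selected if j != d] for d in selected]
        scored = [(bic(X[:, m], y), m) for m in moves if m]
        current = bic(X[:, selected], y) if selected else np.inf
        if scored:
            best, cols = min(scored)
            if best < current:
                selected, improved = sorted(cols), True
    return selected

# Synthetic check: the response depends on columns 0 and 2 only,
# so stepwise selection should recover exactly those variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 2 * X[:, 0] - 3 * X[:, 2] + 0.1 * rng.normal(size=100)
sel = stepwise(X, y)
```

The BIC penalty `k*log(n)` is what keeps noise-only variables out of the model: a column that reduces the residual sum of squares only slightly does not pay for the extra parameter.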
APA, Harvard, Vancouver, ISO, and other styles
44

Schmidt, Dirk. "Call path sensitive interprocedural alias analysis of C programs." [S.l. : s.n.], 1999. http://deposit.ddb.de/cgi-bin/dokserv?idn=957595891.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Ballew, Nicholas D. "Development of a sensitive δ¹⁵N analysis for photopigments /." Electronic version (PDF), 2007. http://dl.uncw.edu/etd/2007-2/ballewn/nicholasballew.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Sheehy, Noreen. "Analysis of AZT sensitive and resistant HIV-1 strains." Thesis, University of Warwick, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.308439.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Flory, Long Mrs. "A WEB PERSONALIZATION ARTIFACT FOR UTILITY-SENSITIVE REVIEW ANALYSIS." VCU Scholars Compass, 2015. http://scholarscompass.vcu.edu/etd/3739.

Full text
Abstract:
Online customer reviews are web content voluntarily posted by the users of a product (e.g. a camera) or service (e.g. a hotel) to express their opinions about it. Online reviews are important resources for businesses and consumers. This dissertation focuses on the important consumer concern of review utility, i.e., the helpfulness or usefulness of online reviews in informing consumer purchase decisions. Review utility concerns consumers because not all online reviews are useful or helpful, and the quantity of online reviews for a product or service tends to be very large. Manual assessment of review utility is not only time consuming but also leads to information overload. To address this issue, review helpfulness research (RHR) has become a very active research stream dedicated to studying utility-sensitive review analysis (USRA) techniques for automating review utility assessment. Unfortunately, prior RHR solutions are inadequate, and RHR researchers call for more suitable USRA approaches. Our current research responds to this urgent call by addressing the research problem: what is an adequate USRA approach? We address this problem by offering novel Design Science (DS) artifacts for personalized USRA (PUSRA). Our proposed solution extends not only RHR research but also web personalization research (WPR), which studies web-based solutions for personalized web provision. We have evaluated the proposed solution by applying three evaluation methods: analytical, descriptive, and experimental. The evaluations corroborate the practical efficacy of our proposed solution. This research contributes what we believe to be (1) the first DS artifacts to the knowledge body of RHR and WPR, and (2) the first PUSRA contribution to USRA practice. Moreover, we consider our evaluations of the proposed solution the first comprehensive assessment of USRA solutions. In addition, this research contributes to the advancement of decision support research and practice.
The proposed solution is a web-based decision support artifact with the capability to substantially improve accurate personalized webpage provision. Website designers can also apply our research solution to fundamentally transform their work. Such transformation can add substantial value to businesses.
APA, Harvard, Vancouver, ISO, and other styles
48

Capozzi, Marco G. F. "FINITE ELEMENT ANALYSIS AND SENSITIVITY ANALYSIS FOR THE POTENTIAL EQUATION." MSSTATE, 2004. http://sun.library.msstate.edu/ETD-db/theses/available/etd-04222004-131403/.

Full text
Abstract:
A finite element solver has been developed for performing analysis and sensitivity analysis with Poisson's equation. An application of Poisson's equation in fluid dynamics is that of potential flow, in which case Poisson's equation reduces to Laplace's equation. The stiffness matrix and the sensitivity of the stiffness matrix are evaluated by direct integration, as opposed to numerical integration. This requires less computational effort and minimizes the sources of computational error. The capability of evaluating sensitivity derivatives has been added in order to perform design sensitivity analysis of non-lifting airfoils. The discrete-direct approach to sensitivity analysis is utilized in the current work. The potential flow equations and the sensitivity equations are solved by using a preconditioned conjugate gradient method. This method greatly reduces the time required to perform the analysis, and the subsequent design optimization. The airfoil shape is updated at each design iteration by using a Bezier-Bernstein surface parameterization. The unstructured grid is adapted by considering the mesh as a system of interconnected springs. Numerical solutions from the flow solver are compared with analytical results obtained for a Joukowsky airfoil. Sensitivity derivatives are validated using carefully determined central finite difference values. The developed software is then used to perform inverse design of a NACA 0012 and a multi-element airfoil.
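As a generic illustration of the preconditioned conjugate gradient approach mentioned in the abstract (not the thesis's solver; the `pcg` function, the Jacobi preconditioner, and the 1D Poisson test case are all assumptions made for this sketch), a dense-matrix version might look like:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Jacobi-preconditioned conjugate gradient for SPD systems A x = b.
    `M_inv_diag` holds the inverse of the diagonal preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1D Poisson problem: -u'' = 1 on (0, 1), u(0) = u(1) = 0,
# discretized with second-order central finite differences.
n = 50
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.ones(n)
u = pcg(A, f, M_inv_diag=1.0 / np.diag(A))

# The analytical solution u = x(1-x)/2 is quadratic, so the
# second-order scheme reproduces it at the nodes exactly.
x = np.linspace(h, 1 - h, n)
exact = 0.5 * x * (1 - x)
assert np.max(np.abs(u - exact)) < 1e-8
```

A production solver would store `A` in a sparse format, but the iteration itself is unchanged; for potential flow, the same loop is applied to the stiffness matrix assembled from the finite element discretization.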
APA, Harvard, Vancouver, ISO, and other styles
49

Rapadamnaba, Robert. "Uncertainty analysis, sensitivity analysis, and machine learning in cardiovascular biomechanics." Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTS058.

Full text
Abstract:
Cette thèse fait suite à une étude récente, menée par quelques chercheurs de l'Université de Montpellier, dans le but de proposer à la communauté scientifique une procédure d'inversion capable d'estimer de manière non invasive la pression dans les artères cérébrales d'un patient.Son premier objectif est, d'une part, d'examiner la précision et la robustesse de la procédure d'inversion proposée par ces chercheurs, en lien avec diverses sources d'incertitude liées aux modèles utilisés, aux hypothèses formulées et aux données cliniques du patient, et d'autre part, de fixer un critère d'arrêt pour l'algorithme basé sur le filtre de Kalman d'ensemble utilisé dans leur procédure d'inversion. À cet effet, une analyse d'incertitude et plusieurs analyses de sensibilité sont effectuées. Le second objectif est d'illustrer comment l'apprentissage machine, orienté réseaux de neurones convolutifs, peut être une très bonne alternative à la longue et coûteuse procédure mise en place par ces chercheurs pour l'estimation de la pression.Une approche prenant en compte les incertitudes liées au traitement des images médicales du patient et aux hypothèses formulées sur les modèles utilisés, telles que les hypothèses liées aux conditions limites, aux paramètres physiques et physiologiques, est d'abord présentée pour quantifier les incertitudes sur les résultats de la procédure. Les incertitudes liées à la segmentation des images sont modélisées à l'aide d'une distribution gaussienne et celles liées au choix des hypothèses de modélisation sont analysées en testant plusieurs scénarios de choix d'hypothèses possibles. De cette démarche, il ressort que les incertitudes sur les résultats de la procédure sont du même ordre de grandeur que celles liées aux erreurs de segmentation. Par ailleurs, cette analyse montre que les résultats de la procédure sont très sensibles aux hypothèses faites sur les conditions aux limites du modèle du flux sanguin. 
En particulier, le choix des conditions limites symétriques de Windkessel pour le modèle s'avère être le plus approprié pour le cas du patient étudié.Ensuite, une démarche permettant de classer les paramètres estimés à l'aide de la procédure par ordre d'importance et de fixer un critère d'arrêt pour l'algorithme utilisé dans cette procédure est proposée. Les résultats de cette stratégie montrent, d'une part, que la plupart des résistances proximales sont les paramètres les plus importants du modèle pour l'estimation du débit sanguin dans les carotides internes et, d'autre part, que l'algorithme d'inversion peut être arrêté dès qu'un certain seuil de convergence raisonnable de ces paramètres les plus influents est atteint.Enfin, une nouvelle plateforme numérique basée sur l'apprentissage machine permettant d'estimer la pression artérielle spécifique au patient dans les artères cérébrales beaucoup plus rapidement qu'avec la procédure d'inversion mais avec la même précision, est présentée. L'application de cette plateforme aux données du patient utilisées dans la procédure d'inversion permet une estimation non invasive et en temps réel de la pression dans les artères cérébrales du patient cohérente avec l'estimation de la procédure d'inversion
This thesis follows on from a recent study conducted by a few researchers from the University of Montpellier, with the aim of proposing to the scientific community an inversion procedure capable of noninvasively estimating patient-specific blood pressure in cerebral arteries. Its first objective is, on the one hand, to examine the accuracy and robustness of the inversion procedure proposed by these researchers with respect to various sources of uncertainty related to the models used, the formulated assumptions and the patient-specific clinical data, and on the other hand, to set a stopping criterion for the ensemble Kalman filter based algorithm used in their inversion procedure. For this purpose, an uncertainty analysis and several sensitivity analyses are carried out. The second objective is to illustrate how machine learning, mainly focusing on convolutional neural networks, can be a very good alternative to the time-consuming and costly inversion procedure implemented by these researchers for cerebral blood pressure estimation. An approach taking into account the uncertainties related to the processing of the patient-specific medical images and to the blood flow model assumptions, such as assumptions about boundary conditions and physical and physiological parameters, is first presented to quantify uncertainties in the inversion procedure outcomes. Uncertainties related to medical image segmentation are modelled using a Gaussian distribution, and uncertainties related to the choice of modeling assumptions are analyzed by considering several possible hypothesis choice scenarios. From this approach, it emerges that the uncertainties in the procedure results are of the same order of magnitude as those related to segmentation errors. Furthermore, this analysis shows that the procedure outcomes are very sensitive to the assumptions made about the model boundary conditions.
In particular, the choice of symmetrical Windkessel boundary conditions for the model proves to be the most relevant for the case of the patient under study. Next, an approach for ranking the parameters estimated during the inversion procedure in order of importance and for setting a stopping criterion for the algorithm used in the inversion procedure is presented. The results of this strategy show, on the one hand, that most of the proximal resistances are the most important parameters of the model for blood flow estimation in the internal carotid arteries and, on the other hand, that the inversion algorithm can be stopped as soon as a reasonable convergence threshold for the most influential parameters is reached. Finally, a new numerical platform, based on machine learning and able to estimate the patient-specific blood pressure in the cerebral arteries much faster than the inversion procedure but with the same accuracy, is presented. Applying this platform to the patient-specific data used in the inversion procedure provides a noninvasive, real-time estimate of the patient's cerebral arterial pressure consistent with the inversion procedure's estimate.
APA, Harvard, Vancouver, ISO, and other styles
50

Counsil, Tyler I. "Real-time RNA-based amplification allows for sensitive forensic blood evidence analysis." Virtual Press, 2008. http://liblink.bsu.edu/uhtbin/catkey/1391475.

Full text
Abstract:
The purpose of this experiment was to determine if nucleic acid sequence based amplification (NASBA) is a suitable application for the differentiation of body fluids that might comprise a forensic evidence sample. NASBA is a sensitive RNA transcription based amplification system. NASBA could theoretically be used for bodily fluid identification based upon amplification of tissue-specific mRNA transcripts present in a given forensic sample. Amplification of glyceraldehyde 3-phosphate dehydrogenase (GAPDH) and matrix metalloproteinase 11 (MMP 11) mRNA transcripts was used to determine whether NASBA could amplify body fluid transcripts and whether it could distinguish between menstrual and non-menstrual blood, respectively. GAPDH is a housekeeping gene that is constitutively expressed, and its mRNA transcripts could therefore be used to determine whether non-menstrual blood could be amplified using the NASBA procedure. MMP 11 is a menstrual cycle-specific gene associated with endometrial breakdown. Using the mRNA transcripts from MMP 11, NASBA could be utilized for menstrual blood identification. In this study, non-menstrual and menstrual blood samples were analyzed with NASBA both in the presence and absence of chemical contamination. Contaminants ranged from commercial automotive wax, transmission fluid, brake fluid, artificial tears, hand soap, and 10% bleach to the luminol blood-detecting reagent. Non-menstrual blood was aliquoted onto a 1 cm x 1 cm cotton cloth for contamination, while menstrual blood was provided on a 1 cm x 1 cm area of sterile menstrual pad. All samples underwent Tri reagent extraction to obtain RNA samples for NASBA amplification. With respect to NASBA amplification data, non-menstrual blood (from extracted RNA and unextracted blood samples) revealed the highest levels of amplification, as shown in relative fluorescence units (RFU). Uncontaminated menstrual blood revealed the second highest amplification.
In the presence of chemical contamination, high levels of amplification were observed when samples were contaminated with brake fluid and commercial hand soap. Moderately low amplification was observed with samples contaminated with transmission fluid, 10% bleach, and artificial tears. NASBA amplification was completely inhibited in the presence of automotive wax and luminol. Cycle threshold (Ct) values for each amplification result were also obtained from each reaction. Smaller Ct values correspond to higher NASBA reaction efficiency and therefore larger amplification values. The Ct values obtained for each amplified sample correlate strongly with the amount of amplification observed in each reaction. Based upon the results of this experiment, NASBA should be considered as a novel tool for forensic evidence analysis.
Department of Biology
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography