To see the other types of publications on this topic, follow the link: Structural models.

Dissertations / Theses on the topic 'Structural models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Structural models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Lievin-Lieven, Nicholas Andrew John. "Validation of structural dynamic models." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/46413.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Adhikari, Sondipon. "Damping models for structural vibration." Thesis, University of Cambridge, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.620975.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fonseca, Jose Manuel Rios. "Uncertainty in structural dynamic models." Thesis, Swansea University, 2005. https://cronfa.swan.ac.uk/Record/cronfa42563.

Full text
Abstract:
Modelling of uncertainty increases trust in analysis tools by providing predictions with confidence levels, produces more robust designs, and reduces design cycle time/cost by reducing the amount of experimental verification and validation that is required. However, uncertainty-based methods are more complex and computationally expensive than their deterministic counterparts, the characterisation of uncertainties is a non-trivial task, and the industry feels comfortable with the traditional design methods. In this work the three most popular uncertainty propagation methods (Monte Carlo simulation, perturbation, and fuzzy) are extensively benchmarked in structural dynamics applications. The main focus of the benchmark is accuracy, simplicity, and scalability. Some general guidelines for choosing an adequate uncertainty propagation method for an application are given. Since direct measurement is often prohibitively costly or even impossible, a novel method to characterise uncertainty sources from indirect measurements is presented. This method can accurately estimate the probability distribution of uncertain parameters by maximising the likelihood of the measurements. The likelihood is estimated using efficient variations of the Monte Carlo simulation and perturbation methods, which shift the computational burden to the outside of the optimisation loop, achieving a substantial time-saving without compromising accuracy. The approach was verified experimentally in several applications with promising results. A novel probabilistic procedure for robust design is proposed. It is based on reweighting of the Monte Carlo samples to avoid the numerical inefficiencies of resampling for every candidate design. Although not globally convergent, the proposed method is able to quickly estimate with high accuracy the optimum design. The method is applied to a numerical example, and the obtained designs are verified with regular Monte Carlo. The main focus of this work was on structural dynamics, but care was taken to make the approach general enough to allow other kinds of structural and non-structural analyses.
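As a rough illustration of the simplest of the three propagation methods benchmarked here, the following is a minimal Monte Carlo sketch, assuming a hypothetical one-degree-of-freedom oscillator with made-up lognormal input uncertainties (none of the numbers come from the thesis itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Uncertain inputs: stiffness k [N/m] and mass m [kg]; the lognormal
# choice and the parameter values are purely illustrative.
k = rng.lognormal(mean=np.log(1.0e6), sigma=0.05, size=n_samples)
m = rng.lognormal(mean=np.log(10.0), sigma=0.03, size=n_samples)

# Deterministic model: natural frequency of a 1-DOF oscillator.
f = np.sqrt(k / m) / (2.0 * np.pi)

# Monte Carlo estimate of the output distribution.
print(f"mean f = {f.mean():.2f} Hz, std f = {f.std():.2f} Hz")
print(f"95% interval = [{np.percentile(f, 2.5):.2f}, {np.percentile(f, 97.5):.2f}] Hz")
```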
APA, Harvard, Vancouver, ISO, and other styles
4

Creamer, Nelson Glenn. "Identification of linear structural models." Diss., Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/53631.

Full text
Abstract:
With a great amount of research currently being aimed towards dynamic analysis and control of very large, flexible structures, the need for accurate knowledge of the properties of a structure in terms of the mass, damping, and stiffness matrices is of extreme importance. Typical problems associated with existing structural model identification methods are: (i) non-unique solutions may be obtained when utilizing only free-response measurements (unless some parameters are fixed at their nominal values), (ii) convergence may be difficult to achieve if the initial estimate of the parameters is not "close" to the truth, (iii) physically unrealistic coupling in the system matrices may occur as a consequence of the identification process, (iv) large, highly redundant parameter sets may be required to characterize the system, and (v) large measurement sets may be required. To overcome these problems, a novel identification technique is developed in this dissertation to determine the mass, damping, and stiffness matrices of an undamped, lightly damped, or significantly damped structure from a small set of measurements of both free-response data (natural frequencies, damping factors) and forced-response data (frequency response functions). The identification method is first developed for undamped structures. Through use of the spectral decomposition of the frequency response matrix and the orthogonality properties of the mode shapes, a unique identification of the mass and stiffness matrices is obtained. The method is also shown to be easily incorporated into a substructure synthesis package for identifying high-order systems. The method is then extended to include viscous damped structures. A matrix perturbation approach is developed for lightly damped structures, in which the mass and stiffness matrices are identified using the imaginary components of the measured eigenvalues and, as a post-processor, the damping matrix is obtained from the real components of the measured eigenvalues. For significantly damped structures, the mass, damping, and stiffness matrices are identified simultaneously. A simple, practical method is also developed for identification of the time-varying relaxation modulus associated with a viscoelastic structure. By assuming time-localized elastic behavior, the relaxation modulus is determined from a series of identification tests performed at various times throughout the response history. Many interesting examples are presented throughout the dissertation to illustrate the applicability and potential of the identification method. It is observed from the numerical results that the uniquely identified structure agrees with simulated measurements of both free and forced-response records.
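The undamped identification described here rests on modal orthogonality. The snippet below is only a textbook-level sketch of that identity on a made-up 3-DOF chain (mass-normalized modes satisfy Phi' M Phi = I and Phi' K Phi = Lambda, so M and K can be recovered from a complete modal set); it is not the author's combined free/forced-response method:

```python
import numpy as np
from scipy.linalg import eigh

# "True" structure: an illustrative 3-DOF chain (values made up).
M = np.diag([2.0, 1.5, 1.0])
K = np.array([[ 600.0, -300.0,    0.0],
              [-300.0,  500.0, -200.0],
              [   0.0, -200.0,  200.0]])

# Generalized eigenproblem K phi = lam M phi; eigh returns
# mass-normalized modes, i.e. Phi.T @ M @ Phi = I.
lam, Phi = eigh(K, M)

# Identification from a complete modal set via orthogonality:
#   M = Phi^{-T} Phi^{-1},  K = Phi^{-T} diag(lam) Phi^{-1}
Phi_inv = np.linalg.inv(Phi)
M_id = Phi_inv.T @ Phi_inv
K_id = Phi_inv.T @ np.diag(lam) @ Phi_inv

print(np.allclose(M_id, M), np.allclose(K_id, K))  # True True
```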
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
5

Cerqueira, Pedro Henrique Ramos. "Structural equation models applied to quantitative genetics." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-05112015-145419/.

Full text
Abstract:
Causal models have been used in different areas of knowledge in order to comprehend the causal associations between variables. Over the past decades, the number of studies using these models has grown considerably, especially those related to biological systems, where studying and learning causal relationships among traits is essential for predicting the consequences of interventions in such systems. Graph analysis (GA) and structural equation modeling (SEM) are tools used to explore such associations. While GA allows searching causal structures that express qualitatively how variables are causally connected, fitting an SEM with a known causal structure allows one to infer the magnitude of causal effects. SEM can also be viewed as a set of multiple regression models in which response variables can be explanatory variables for others. In quantitative genetics, SEM is used to study the direct and indirect genetic effects associated with individuals through information related to them beyond the observed traits, such as kinship relations. Such studies typically assume linear relationships among traits. However, in some scenarios nonlinear relationships can be observed, which makes that assumption unsuitable. To overcome this limitation, this work proposes a mixed-effects polynomial structural equation model, of second or higher degree, to model those nonlinear relationships. Two studies were developed, a simulation and an application to real data. The first study involved the simulation of 50 data sets, with a fully recursive causal structure involving three traits in which linear and nonlinear causal relations between them were allowed. The second study involved the analysis of traits related to dairy cows of the Holstein breed; the phenotypic relationships between traits were calving difficulty, gestation length, and the proportion of perinatal death. We compare the multiple-trait model with polynomial structural equation models of different degrees in order to assess the benefits of SEMs of second or higher degree. In some situations the inappropriate assumption of linearity results in poor predictions of the direct, indirect, and total genetic variances and covariances, either overestimating, underestimating, or even assigning opposite signs to covariances. Therefore, we conclude that increasing the polynomial degree increases the expressive power of the SEM.
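To see why the quadratic term matters, here is a minimal fixed-effects toy version of the idea (the thesis itself fits mixed-effects polynomial SEMs to pedigree data; the coefficients and the two-trait recursive system below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Toy recursive system with a nonlinear causal link:
#   y1 = e1;  y2 = 0.5*y1 + 0.3*y1^2 + e2
y1 = rng.normal(size=n)
y2 = 0.5 * y1 + 0.3 * y1**2 + rng.normal(scale=0.5, size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Misspecified linear SEM equation vs. second-degree polynomial equation.
X_lin = np.column_stack([np.ones(n), y1])
X_poly = np.column_stack([np.ones(n), y1, y1**2])
r_lin = y2 - X_lin @ ols(X_lin, y2)
r_poly = y2 - X_poly @ ols(X_poly, y2)

print("residual variance, linear   :", round(r_lin.var(), 3))
print("residual variance, quadratic:", round(r_poly.var(), 3))  # ~0.25, the true noise level
```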
APA, Harvard, Vancouver, ISO, and other styles
6

Grafe, Henning. "Model updating of large structural dynamics models using measured response functions." Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325047.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Valeinis, Janis. "Confidence bands for structural relationship models." Doctoral thesis, [S.l.] : [s.n.], 2007. http://webdoc.sub.gwdg.de/diss/2007/valeinis.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

De, Antonio Liedo David. "Structural models for macroeconomics and forecasting." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210142.

Full text
Abstract:
This thesis is composed of three independent papers that investigate central debates in empirical macroeconomic modeling.

Chapter 1, entitled "A Model for Real-Time Data Assessment with an Application to GDP Growth Rates", provides a model for the data revisions of macroeconomic variables that distinguishes between rational expectation updates and noise corrections. Thus, the model encompasses the two polar views regarding the publication process of statistical agencies: noise versus news. Most of the previous studies that analyze data revisions are based on the classical noise and news regression approach introduced by Mankiw, Runkle and Shapiro (1984). The problem is that the statistical tests available do not formulate both extreme hypotheses as collectively exhaustive, as recognized by Aruoba (2008). That is, it would be possible to reject or accept both of them simultaneously. In turn, the model for the DPP presented here allows for the simultaneous presence of both noise and news. While the "regression approach" followed by Faust et al. (2005), along the lines of Mankiw et al. (1984), identifies noise in the preliminary figures, it is not possible for them to quantify it, as done by our model.

The second and third chapters acknowledge the possibility that macroeconomic data are measured with errors, but the approach followed to model the mismeasurement is extremely stylized and does not capture the complexity of the revision process that we describe in the first chapter.

Chapter 2, entitled "Revisiting the Success of the RBC model", proposes the use of dynamic factor models as an alternative to the VAR-based tools for the empirical validation of dynamic stochastic general equilibrium (DSGE) theories. Along the lines of Giannone et al. (2006), we use the state-space parameterisation of the factor models proposed by Forni et al. (2007) as a competitive benchmark that is able to capture weak statistical restrictions that DSGE models impose on the data. Our empirical illustration compares the out-of-sample forecasting performance of a simple RBC model augmented with a serially correlated noise component against several specifications belonging to classes of dynamic factor and VAR models. Although the performance of the RBC model is comparable to that of the reduced-form models, a formal test of predictive accuracy reveals that the weak restrictions are more useful at forecasting than the strong behavioral assumptions imposed by the microfoundations in the model economy.

The last chapter, "What are Shocks Capturing in DSGE modeling", contributes to current debates on the use and interpretation of larger DSGE models. A recent tendency in academic work and at central banks is to develop and estimate large DSGE models for policy analysis and forecasting. These models typically have many shocks (e.g. Smets and Wouters, 2003 and Adolfson, Laseen, Linde and Villani, 2005). On the other hand, empirical studies point out that few large shocks are sufficient to capture the covariance structure of macro data (Giannone, Reichlin and Sala, 2005, Uhlig, 2004). In this chapter, we propose to reconcile both views by considering an alternative DSGE estimation approach which models explicitly the statistical agency along the lines of Sargent (1989). This enables us to distinguish whether the exogenous shocks in DSGE modeling are structural or instead serve the purpose of fitting the data in the presence of misspecification and measurement problems. When applied to the original Smets and Wouters (2007) model, we find that the explanatory power of the structural shocks decreases at high frequencies. This allows us to back out a smoother measure of the natural output gap than that resulting from the original specification.
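The noise/news distinction the first chapter builds on can be summarized by two orthogonality conditions: under pure news the revision is uncorrelated with the preliminary figure, under pure noise with the final one. A toy simulation of those two conditions (all distributions and variances are invented; this is not the chapter's model, which nests both cases):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
y_final = rng.normal(size=n)  # the "final" figure

# News: the preliminary figure is an efficient forecast given noisy info,
# so the revision is orthogonal to the preliminary figure.
info = y_final + rng.normal(size=n)
y_prelim_news = 0.5 * info    # E[y|info] when both variances equal 1
rev_news = y_final - y_prelim_news

# Noise: the preliminary figure is truth plus measurement error,
# so the revision is orthogonal to the final figure.
y_prelim_noise = y_final + rng.normal(scale=0.7, size=n)
rev_noise = y_final - y_prelim_noise

print("news : corr(rev, prelim) ~ 0 :", np.corrcoef(rev_news, y_prelim_news)[0, 1])
print("noise: corr(rev, final)  ~ 0 :", np.corrcoef(rev_noise, y_final)[0, 1])
```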
Doctorate in Economic Sciences and Management

APA, Harvard, Vancouver, ISO, and other styles
9

Konarski, Roman. "Sensitivity analysis for structural equation models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq22893.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gungor, Murat Kahraman. "Structural models for large software systems." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2006. http://proquest.umi.com/login?COPT=REJTPTU0NWQmSU5UPTAmVkVSPTI=&clientId=3739.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Westin, Lars. "Vintage models of spatial structural change." Doctoral thesis, Umeå universitet, Institutionen för nationalekonomi, 1990. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-73665.

Full text
Abstract:
In the study a class of multisector network models, suitable for simulation of the interaction between production, demand, trade, and infrastructure, is presented. A characteristic feature of the class is a vintage model of the production system. Hence, the rigidities in existing capacities and the temporary monopolies obtainable from investments in new capacity at favourable locations are emphasized. As special cases, the class contains models in the modelling traditions of "interregional computable general equilibrium", "spatial price equilibrium", "interregional input-output" and transportation networks. On the demand side, a multihousehold spatial linear expenditure system is introduced. This allows for an endogenous representation of income effects of skill-differentiated labour. The models are represented by a set of complementarity problems. This facilitates a comparison of model properties and the choice of an appropriate solution algorithm. The study is mainly devoted to single period models. Such equilibrium models are interpreted as adiabatic approximations of processes in continuous time. A separation by the time scale of the processes and an application of the slaving principle should thus govern the choice of endogenous variables in the equilibrium formulation.
APA, Harvard, Vancouver, ISO, and other styles
12

Mosqueda, Gilberto 1974. "Interactive educational models for structural dynamics." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/50365.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Thom, Howard Henry Zappe. "Structural uncertainty in cost-effectiveness models." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648265.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Bruche, Max. "Structural models of corporate bond prices." Thesis, London School of Economics and Political Science (University of London), 2005. http://etheses.lse.ac.uk/2419/.

Full text
Abstract:
In 1974, Merton wrote a seminal paper that explained how the then recently presented Black-Scholes model could be applied to the pricing of corporate debt. Many extensions of this model followed. The family of models is sometimes referred to as the family of structural models of corporate bond prices. It has found applications in bond pricing and risk management, but appears to have a more mixed empirical record than the so-called reduced-form models (e.g. Duffie and Singleton 1999), and is often considered imprecise. As a consequence, it is often avoided in pricing and hedging applications. This thesis examines three possible avenues for improving the performance of structural models: 1. Strategic interaction between debtors: Possible "Coordination failures" - races to recover value that can dismember firms - are a very important form of strategic interaction between debtors that can have a large influence on the value of debt. 2. The econometrics of structural models: The classic "calibration" methodology widely employed in the literature is an ad-hoc procedure that has severe problems from an econometric perspective. This thesis proposes a filtering-based approach instead that is demonstrably superior. 3. The non-default component of spreads: Corporate bond prices most probably do not only represent credit risk, but also other types of risk (e.g. liquidity risk). This thesis attempts to quantify and assess this non-default component.
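For orientation, a minimal sketch of the baseline Merton (1974) model this family starts from: equity is a European call on firm value, debt is firm value minus equity, and the credit spread follows from the implied debt yield (all inputs below are made-up numbers, not estimates from the thesis):

```python
import numpy as np
from scipy.stats import norm

def merton_debt(V, F, sigma, r, T):
    """Merton (1974): equity = call on firm value V with strike F (face value
    of zero-coupon debt maturing at T); returns equity, debt, credit spread."""
    d1 = (np.log(V / F) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    equity = V * norm.cdf(d1) - F * np.exp(-r * T) * norm.cdf(d2)
    debt = V - equity
    spread = -np.log(debt / F) / T - r  # debt yield minus risk-free rate
    return equity, debt, spread

E, D, s = merton_debt(V=120.0, F=100.0, sigma=0.25, r=0.03, T=5.0)
print(f"equity={E:.2f}  debt={D:.2f}  spread={1e4 * s:.1f} bp")
```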
APA, Harvard, Vancouver, ISO, and other styles
15

Fernandes, Cristiano Augusto Coelho. "Non-Gaussian structural time series models." Thesis, London School of Economics and Political Science (University of London), 1991. http://etheses.lse.ac.uk/1208/.

Full text
Abstract:
This thesis aims to develop a class of state space models for non-Gaussian time series. Our models are based on distributions of the exponential family, such as the Poisson, the negative-binomial, the binomial and the gamma. In these distributions the mean is allowed to change over time through a mechanism which mimics a random walk. By adopting a closed sampling analysis we are able to derive finite dimensional filters, similar to the Kalman filter. These are then used to construct the likelihood function and to make forecasts of future observations. In fact for all the specifications here considered we have been able to show that the predictions give rise to schemes based on an exponentially weighted moving average (EWMA). The models may be extended to include explanatory variables via the kind of link functions that appear in GLIM models. This enables nonstochastic slope and seasonal components to be included. The Poisson, negative binomial and bivariate Poisson models are illustrated by considering applications to real data. Monte Carlo experiments are also conducted in order to investigate properties of maximum likelihood estimators and power studies of a post sample predictive test developed for the Poisson model.
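The EWMA result mentioned here is easy to see in the conjugate Poisson case: discounting a gamma prior and updating it with each count makes the one-step forecast an exponentially weighted average of past observations. A sketch along the lines of the published Harvey-Fernandes construction (the discount factor and the data below are invented):

```python
import numpy as np

def poisson_gamma_filter(y, omega, a0=1.0, b0=1.0):
    """Conjugate Poisson-gamma filter with discount factor omega in (0, 1]:
    the gamma prior is 'spread out' each step (mimicking a random walk in the
    mean), then updated with the new count; a/b is the forecast mean."""
    a, b = a0, b0
    forecasts = []
    for yt in y:
        a, b = omega * a, omega * b   # discounting step
        forecasts.append(a / b)       # one-step-ahead forecast of the mean
        a, b = a + yt, b + 1.0        # conjugate Bayesian update
    return np.array(forecasts)

y = np.array([3, 5, 4, 6, 9, 7, 8, 12, 10, 11])
print(poisson_gamma_filter(y, omega=0.8).round(2))  # EWMA-like forecasts
```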
APA, Harvard, Vancouver, ISO, and other styles
16

Enache, Andreea. "Structural Econometrics for Game Theoretical Models." Paris, EHESS, 2015. http://www.theses.fr/2015EHES0126.

Full text
Abstract:
This thesis consists of four essays articulated around a common topic: inverse problems in games of incomplete information. The objective of the dissertation is to study the identification and the estimation of a functional parameter in a context of unobserved variables, a situation often encountered in the presence of asymmetric information. We recover the distribution of primitives in auction and contract-theory models using the data and the concept of Bayesian Nash Equilibrium. All chapters use a quantile approach, both in terms of identification and estimation methodology. Another common feature is that all the economic issues are studied in a fully nonparametric setting. In spite of that, for a class of problems that turn out to be well-posed inverse problems, we find a parametric speed of convergence for our estimators. Many game-theoretical models lead to ill-posed inverse problems. Nevertheless, the first two papers of this thesis (on the third-price auction model and the pure adverse selection model) treat models that in fact belong to a class of well-posed inverse problems. The third essay generalizes the results of the first two articles by considering a general form for the strategy function of the game and introduces a new class of well-posed games called "hazard-rate game models". The last chapter of the dissertation studies the first-price auction model using the quantile approach which, by contrast with the existing literature, leads to a closed-form solution for the quantiles of the latent variables.
APA, Harvard, Vancouver, ISO, and other styles
17

Gowrisankaran, Prabhakar. "Structural testbench development for DSP models." Thesis, This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-01312009-063336/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Mihoci, Andrija. "Structural adaptive models in financial econometrics." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2012. http://dx.doi.org/10.18452/16597.

Full text
Abstract:
Modern methods in statistics and econometrics successfully deal with stylized facts observed on financial markets. The presented techniques aim to understand the dynamics of financial market data more accurately than traditional approaches. Economic and financial benefits are achievable. The results are evaluated here in practical examples that mainly focus on forecasting financial data. Our applications include: (i) modelling and forecasting of liquidity supply, (ii) localizing multiplicative error models and (iii) providing evidence for the empirical pricing kernel paradox across countries.
APA, Harvard, Vancouver, ISO, and other styles
19

Haponchyk, Iryna. "Advanced models of supervised structural clustering." Doctoral thesis, Università degli studi di Trento, 2018. https://hdl.handle.net/11572/367746.

Full text
Abstract:
The strength and power of structured prediction approaches in machine learning originate from a proper recognition and exploitation of the inherent structural dependencies within the complex objects that structural models are trained to output. Among the complex tasks that have benefited from structured prediction approaches, clustering is of special interest. Structured output models based on representing clusters by latent graph structures made the task of supervised clustering tractable. While in practice these models proved effective in solving the complex NLP task of coreference resolution, in this thesis we aim at exploring their capacity to be extended to other tasks and domains, as well as the methods for performing such adaptation and for improvement in general, which, as a result, go beyond clustering and are commonly applicable in structured prediction. Studying the extensibility of the structural approaches for supervised clustering, we apply them to two different domains in two different ways. First, in the networking domain, we cluster network traffic by adapting the model to take into account the continuity of incoming data. Our experiments demonstrate that the structural clustering approach is not only effective in such a scenario, but also, from a different perspective, provides a novel and potentially useful tool for detecting anomalies. The other part of our work is dedicated to assessing the amenability of the structural clustering model to joint learning with another structural model, for ranking. Our preliminary analysis in the context of the task of answer-passage reranking in question answering reveals a potential benefit of incorporating auxiliary clustering structures. The intrinsic complexity of the clustering task and of its evaluation scenarios gave us grounds for studying the possibility and the effect of optimizing task-specific complex measures in structured prediction algorithms. It is common for structured prediction approaches to optimize surrogate loss functions, rather than the actual task-specific ones, in order to facilitate inference and preserve efficiency. In this thesis we first study when surrogate losses are sufficient and, second, make a step towards enabling direct optimization of complex structural loss functions. We propose to learn an approximation of a complex loss by a regressor from data. We formulate a general structural framework for learning with a learned loss which, applied to a particular case of a clustering problem, coreference resolution, i) enables the optimization of a coreference metric that by itself has high computational complexity, and ii) delivers an improvement over standard structural models optimizing simple surrogate objectives. We foresee this idea being helpful in many structured prediction applications, also as a means of adaptation to specific evaluation scenarios, especially when a good loss approximation is found by a regressor from an induced feature space that allows good factorization over the underlying structure.
APA, Harvard, Vancouver, ISO, and other styles
20

Haponchyk, Iryna. "Advanced models of supervised structural clustering." Doctoral thesis, University of Trento, 2018. http://eprints-phd.biblio.unitn.it/2953/4/phd-thesis.pdf.

Full text
Abstract:
The strength and power of structured prediction approaches in machine learning originate from a proper recognition and exploitation of the inherent structural dependencies within the complex objects that structural models are trained to output. Among the complex tasks that have benefited from structured prediction approaches, clustering is of special interest. Structured output models based on representing clusters by latent graph structures made the task of supervised clustering tractable. While in practice these models proved effective in solving the complex NLP task of coreference resolution, in this thesis we aim at exploring their capacity to be extended to other tasks and domains, as well as the methods for performing such adaptation and for improvement in general, which, as a result, go beyond clustering and are commonly applicable in structured prediction. Studying the extensibility of the structural approaches for supervised clustering, we apply them to two different domains in two different ways. First, in the networking domain, we cluster network traffic by adapting the model to take into account the continuity of incoming data. Our experiments demonstrate that the structural clustering approach is not only effective in such a scenario, but also, from a different perspective, provides a novel and potentially useful tool for detecting anomalies. The other part of our work is dedicated to assessing the amenability of the structural clustering model to joint learning with another structural model, for ranking. Our preliminary analysis in the context of the task of answer-passage reranking in question answering reveals a potential benefit of incorporating auxiliary clustering structures. The intrinsic complexity of the clustering task and of its evaluation scenarios gave us grounds for studying the possibility and the effect of optimizing task-specific complex measures in structured prediction algorithms. It is common for structured prediction approaches to optimize surrogate loss functions, rather than the actual task-specific ones, in order to facilitate inference and preserve efficiency. In this thesis we first study when surrogate losses are sufficient and, second, make a step towards enabling direct optimization of complex structural loss functions. We propose to learn an approximation of a complex loss by a regressor from data. We formulate a general structural framework for learning with a learned loss which, applied to a particular case of a clustering problem, coreference resolution, i) enables the optimization of a coreference metric that by itself has high computational complexity, and ii) delivers an improvement over standard structural models optimizing simple surrogate objectives. We foresee this idea being helpful in many structured prediction applications, also as a means of adaptation to specific evaluation scenarios, especially when a good loss approximation is found by a regressor from an induced feature space that allows good factorization over the underlying structure.
APA, Harvard, Vancouver, ISO, and other styles
21

Morris, Nathan J. "Multivariate and Structural Equation Models for Family Data." Case Western Reserve University School of Graduate Studies / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=case1247004562.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Gendron, Debbie. "Model stability under a policy shift : are DSGE models really structural?" Thesis, Université Laval, 2007. http://www.theses.ulaval.ca/2007/24214/24214.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Lovreta, Lidija. "Structural Credit Risk Models: Estimation and Applications." Doctoral thesis, Universitat Ramon Llull, 2010. http://hdl.handle.net/10803/9180.

Full text
Abstract:
Credit risk is associated with the potential failure of borrowers to fulfill their obligations. In that sense, the main interest of financial institutions is to accurately measure and manage credit risk on a quantitative basis. With the intention of responding to this task, this doctoral thesis, entitled "Structural Credit Risk Models: Estimation and Applications", focuses on the practical usefulness of structural credit risk models, which are characterized by an explicit link with economic fundamentals and consequently allow for a broad range of applications. In essence, the thesis explores the information on credit risk embodied in the stock market and the market for credit derivatives (CDS market) on the basis of structural credit risk models. The issue addressed in the first chapter refers to the relative informational content of the stock and CDS markets in terms of credit risk. The overall analysis is focused on answering two crucial questions: which of these markets provides more timely information regarding credit risk, and what are the factors that influence the informational content of credit risk indicators (i.e. stock market implied credit spreads and CDS spreads). The data set encompasses an international set of 94 companies (40 European, 32 US and 22 Japanese) during the period 2002-2004. The main conclusions uncover time-varying behaviour of credit risk discovery, a stronger cross-market relationship and stock market leadership at higher levels of credit risk, as well as a positive relationship between the frequency of severe credit deterioration shocks and the probability of CDS market leadership.

Second chapter concentrates on the problem of estimation of latent parameters of structural models. It proposes a new, maximum likelihood based iterative algorithm which, on the basis of the log-likelihood function for the time series of equity prices, provides pseudo maximum likelihood estimates of the default barrier and of the value, volatility, and expected return on the firm's assets. The procedure allows for credit risk estimation based only on the readily available information from stock market and is empirically tested in terms of CDS spread estimation. It is demonstrated empirically that, contrary to the standard ML approach, the proposed method ensures that the default barrier always falls within reasonable bounds. Moreover, theoretical credit spreads based on pseudo ML estimates offer the lowest credit default swap pricing errors when compared to the other options that are usually considered when determining the default barrier: standard ML estimate, endogenous value, KMV's default point, and principal value of debt.

Final, third chapter of the thesis, provides further evidence of the performance of the proposed pseudo maximum likelihood procedure and addresses the issue of the presence of non-default component in CDS spreads. Specifically, the effect of demand-supply imbalance, an important aspect of liquidity in the market where the number of buyers frequently outstrips the number of sellers, is analyzed. The data set is largely extended covering 163 non-financial companies (92 European and 71 North American) and period 2002-2008. In a nutshell, after controlling for the fundamentals reflected through theoretical, stock market implied credit spreads, demand-supply imbalance factors turn out to be important in explaining short-run CDS movements, especially during structural breaks. Results illustrate that CDS spreads reflect not only the price of credit protection, but also a premium for the anticipated cost of unwinding the position of protection sellers.
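The latent-asset estimation problem of the second chapter can be illustrated with the simpler fixed-point scheme that the thesis's pseudo-ML algorithm improves upon: invert the Merton equity equation observation by observation, re-estimate asset volatility from the implied asset series, and iterate. Everything below (the toy equity path, face value, rates) is invented for illustration, and this KMV-style iteration is not the thesis's estimator:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_equity(V, F, sigma, r, T):
    """Equity as a call on assets V with strike F (Merton model)."""
    d1 = (np.log(V / F) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return V * norm.cdf(d1) - F * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

def estimate_assets(E, F, r=0.02, T=1.0, sigma0=0.3, tol=1e-6, max_iter=100):
    """KMV-style fixed point: invert the equity equation day by day,
    re-estimate annualized asset volatility from the implied series, repeat."""
    sigma = sigma0
    for _ in range(max_iter):
        V = np.array([brentq(lambda v: bs_equity(v, F, sigma, r, T) - e,
                             1e-8, 100.0 * (e + F)) for e in E])
        new_sigma = np.log(V[1:] / V[:-1]).std() * np.sqrt(252.0)
        if abs(new_sigma - sigma) < tol:
            break
        sigma = new_sigma
    return V, sigma

E = 40.0 * np.exp(np.cumsum(np.random.default_rng(3).normal(0.0, 0.02, 252)))
V, sigma_V = estimate_assets(E, F=100.0)
print(f"implied asset volatility: {sigma_V:.3f}")
```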
APA, Harvard, Vancouver, ISO, and other styles
24

Jin, Shaobo. "Essays on Estimation Methods for Factor Models and Structural Equation Models." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-247292.

Full text
Abstract:
This thesis, which consists of four papers, is concerned with estimation methods in factor analysis and structural equation models. New estimation methods are proposed and investigated. In Paper I an approximation of the penalized maximum likelihood (ML) is introduced to fit an exploratory factor analysis model. Approximated penalized ML continuously and efficiently shrinks the factor loadings towards zero. It naturally factorizes a covariance matrix or a correlation matrix. It is also applicable to an orthogonal or an oblique structure. Paper II, a simulation study, investigates the properties of approximated penalized ML with an orthogonal factor model. Different combinations of penalty terms and tuning parameter selection methods are examined. Differences in factorizing a covariance matrix and factorizing a correlation matrix are also explored. It is shown that the approximated penalized ML frequently improves the traditional estimation-rotation procedure. In Paper III we focus on pseudo ML for multi-group data. Data from different groups are pooled and normal theory is used to fit the model. It is shown that pseudo ML produces consistent estimators of factor loadings and that it is numerically easier than multi-group ML. Normal theory is, however, not applicable for estimating standard errors, so a sandwich-type estimator of standard errors is derived. Paper IV examines properties of the recently proposed polychoric instrumental variable (PIV) estimators for ordinal data through a simulation study. PIV is compared with conventional estimation methods (unweighted least squares and diagonally weighted least squares). PIV produces accurate estimates of factor loadings and factor covariances in the correctly specified confirmatory factor analysis model and accurate estimates of loadings and coefficient matrices in the correctly specified structural equation model. If the model is misspecified, the robustness of PIV depends on model complexity, the underlying distribution, and the instrumental variables.
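As a toy version of the "approximated penalized ML" idea in Papers I-II (not the papers' actual estimator), one can add a smooth approximation of an L1 penalty to the ML discrepancy F(L, Psi) = log|Sigma| + tr(S Sigma^-1), with Sigma = LL' + Psi, and minimize numerically; the data, penalty weight, and smoothing constant below are all invented:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
p, k, n = 6, 2, 2000

# Simulated data from a 2-factor model with a sparse loading pattern.
L_true = np.array([[.8, 0], [.7, 0], [.6, 0], [0, .8], [0, .7], [0, .6]])
X = rng.normal(size=(n, k)) @ L_true.T + rng.normal(scale=0.5, size=(n, p))
S = np.cov(X, rowvar=False)

def penalized_discrepancy(theta, gamma=0.05, eps=1e-4):
    """ML discrepancy log|Sigma| + tr(S Sigma^-1), Sigma = L L' + diag(psi),
    plus a smooth |.|-approximation penalty that shrinks loadings to zero."""
    L = theta[:p * k].reshape(p, k)
    psi = np.exp(theta[p * k:])  # uniquenesses kept positive
    Sigma = L @ L.T + np.diag(psi)
    _, logdet = np.linalg.slogdet(Sigma)
    fit = logdet + np.trace(np.linalg.solve(Sigma, S))
    return fit + gamma * np.sum(np.sqrt(L**2 + eps))

theta0 = np.concatenate([rng.normal(scale=0.1, size=p * k), np.zeros(p)])
res = minimize(penalized_discrepancy, theta0, method="L-BFGS-B")
print(res.x[:p * k].reshape(p, k).round(2))  # small loadings shrunk toward zero
```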
APA, Harvard, Vancouver, ISO, and other styles
25

Azeredo, Daniela Rita Charrua Cabral de. "Structural models to estimate financial institution's default probability." Master's thesis, Instituto Superior de Economia e Gestão, 2014. http://hdl.handle.net/10400.5/7898.

Full text
Abstract:
Master's in Finance
This paper is intended to model the default probabilities for selected Iberian financial institutions through the application of the Merton (1973) model framework. Through the use of three different default barrier (db) definitions, we were able to obtain very different outputs, stressing how crucial the db definition is to the structural model output. Throughout this crisis, liquidity risk was, to some extent, offset by the ECB funding policies. The db1 and db2 definitions, differing only in the way Central Bank loans were treated, were convenient for testing non-standard applications of the model. In our study we introduce and test a procedure anchored on the Distance to Distress calculation to quantify the reduction in risk induced by ECB measures, finding that ECB actions effectively reduced the banks' default risk.
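The role of the barrier definition is easy to see in the distance-to-default formula itself. A sketch with made-up balance-sheet numbers (the thesis's actual three db definitions are not reproduced here; moving ECB loans in or out of the barrier is shown purely to illustrate the mechanism):

```python
import numpy as np

def distance_to_default(V, sigma_V, db, mu=0.0, T=1.0):
    """Merton-style distance to default: how many asset-volatility standard
    deviations separate the asset value V from the default barrier db."""
    return (np.log(V / db) + (mu - 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))

# Illustrative bank balance sheet (all numbers invented).
V, sigma_V = 110.0, 0.06
deposits, other_debt, ecb_loans = 70.0, 20.0, 10.0

db_with_ecb = deposits + other_debt + ecb_loans
db_without_ecb = deposits + other_debt
print("DD, ECB loans inside the barrier :", round(distance_to_default(V, sigma_V, db_with_ecb), 2))
print("DD, ECB loans outside the barrier:", round(distance_to_default(V, sigma_V, db_without_ecb), 2))
```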
APA, Harvard, Vancouver, ISO, and other styles
26

VIGLIETTI, ANDREA. "Low Fidelity and High Fidelity Structural Models for Hybrid Composite Aircraft Structures." Doctoral thesis, Politecnico di Torino, 2018. http://hdl.handle.net/11583/2710182.

Full text
Abstract:
This thesis extends advanced one-dimensional models derived from the use of the Carrera Unified Formulation (CUF) to a high-fidelity modeling approach, which has been used to perform analyses of multi-component aeronautical structures while guaranteeing a proper description of each component in terms of geometry and material. Static and free-vibration analyses have been performed to validate the current CUF model for the study of structures with a multi-component nature, sweep angle, and non-prismatic shape; subsequently, its capabilities have been exploited to investigate different aeronautical topics. First, the model has been used for the free-vibration analysis of damaged structures and for exploring the use of behavioral alterations for damage detection. Thanks to the capability of the model to control the stiffness arbitrarily, several scenarios have been analyzed in which the damage has been introduced, for example, in a whole component or at the local level. The layer-wise capability of the model has allowed a wide tailoring analysis of thin-walled boxes to be performed. It has been used to evaluate the free-vibration behavior according to the lamination used in the structure. Moreover, these analyses have been used to explore possible influences on the geometrical coupling effects due to sweep angle or tapered shapes, in order to mitigate or emphasize them. The model has also been extended to the study of Variable Angle Tow (VAT) composites characterized by curvilinear fibers. After validation against results from the open literature, the possible advantages of this technology in the aeronautic field have been explored through vibration analyses of prismatic thin-walled boxes. The results confirm the capability of the current model to deal with very complex aeronautical structures, providing accurate results with a considerable reduction of the computational cost compared to classical FEM models. Its performance is also tested with displacement analyses in the second part of this thesis, which presents the work done during the apprenticeship related to the research project TIVANO with the company Leonardo Finmeccanica – Aircraft Division.
APA, Harvard, Vancouver, ISO, and other styles
27

Chiesa, Matteo. "Linking advanced fracture models to structural analysis." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Engineering Science and Technology, 2001. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1593.

Full text
Abstract:
In this study a link between local material models and structural analysis is outlined. An "ad hoc" element formulation is used in order to connect complex material models to the finite element framework used for structural analysis. An improved elasto-plastic line spring finite element formulation, used in order to take cracks into account, is linked to shell elements, which are further linked to beam elements. In this way one obtains a global model of the shell structure that also accounts for local flexibilities and fractures due to defects. An important advantage of such an approach is a direct fracture mechanics assessment, e.g. via the computed J-integral or CTOD. A recent development in this approach is the notion of two-parameter fracture assessment. This means that the crack tip stress tri-axiality (constraint) is employed in determining the corresponding fracture toughness, giving a much more realistic capacity of cracked structures. The present thesis is organized into six research articles and an introductory chapter that reviews important background literature related to this work.
APA, Harvard, Vancouver, ISO, and other styles
28

Valdivieso, Ercos. "Essays on structural models in corporate finance." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/62508.

Full text
Abstract:
This thesis contains two essays in structural corporate finance. The first essay studies the effect of asset redeployability on the cross-section of firms' financial leverage and credit spreads. Particularly, I show that in the data firms' ability to sell assets — captured by a novel measure of asset redeployability — correlates positively with financial leverage, and negatively with credit spreads. At odds with traditional notions of asset redeployability, I show that these predictions remain even after controlling for proxies of creditors' recovery rates. To understand these empirical findings, I build a quantitative model where firms' asset redeployability decreases the degree of investment irreversibility and the deadweight cost of bankruptcy. According to the model, while higher overall asset redeployability predicts larger financial leverage and lower credit spreads, these relations are mainly driven by differences in the degree of investment irreversibility across firms. Also, within the model, differences in recovery rates are mainly explained by differences in deadweight costs of bankruptcy. Based on these results, I conclude that the link between firms' asset redeployability and disinvestment flexibility is key to understanding the empirical ability of asset redeployability to predict financial leverage and credit spreads. The second essay provides new evidence about the cross-sectional distribution of debt issuance: its dispersion is highly procyclical. Furthermore, I show that this dynamic feature is mainly driven by the large adjustments of the stock of debt and capital observed in good times. Previous research has highlighted the role of non-convex rigidities in inducing large adjustments in firms' decisions. To quantify the contribution of real and financial non-convex frictions in shaping the dynamics of the debt-issuance cross-sectional distribution, I build a quantitative model where firms take investment and financing decisions. According to the model, both real and financial non-convex frictions are required to reproduce the dynamics of the cross-sectional dispersion of debt issuance. Indeed, the presence of these frictions makes firms' decisions less responsive during recessions. Yet, in booms, both non-convex costs induce large adjustments in the capital and debt stocks of high-growth firms.
Sauder School of Business, Division of Finance
Graduate
APA, Harvard, Vancouver, ISO, and other styles
29

Chen, Yiwen. "Probing protein structural dynamics using simplified models." Chapel Hill, N.C.: University of North Carolina at Chapel Hill, 2007. http://dc.lib.unc.edu/u?/etd,1093.

Full text
Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2007.
Title from electronic title page (viewed Mar. 27, 2008). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Physics and Astronomy." Discipline: Physics and Astronomy; Department/School: Physics and Astronomy.
APA, Harvard, Vancouver, ISO, and other styles
30

Samal, Mahendra Kumar. "Nonlocal damage models for structural integrity analysis." 2007. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-33369.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Jung, Sunho. "Regularized structural equation models with latent variables." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66858.

Full text
Abstract:
In structural equation models with latent variables, maximum likelihood (ML) estimation is currently the most prevalent estimation method. However, the ML method fails to provide accurate solutions in a number of situations, including those involving small sample sizes, nonnormality, and model misspecification. To overcome these difficulties, regularized extensions of two-stage least squares estimation are proposed that incorporate a ridge type of regularization in the estimation of parameters. Two simulation studies and two empirical applications demonstrate that the proposed method is a promising alternative to both the maximum likelihood and non-regularized two-stage least squares estimation methods. An optimal value of the regularization parameter is found by the K-fold cross-validation technique. A nonparametric bootstrap method is used to evaluate the stability of solutions. A goodness-of-fit measure is used for assessing the overall fit.
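The core of the proposal is ridge shrinkage with the penalty chosen by K-fold cross-validation. Below is a minimal Python sketch of that core on a generic regression problem; the interface and toy choices are illustrative assumptions, not the author's two-stage least squares implementation.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge estimate: (X'X + lam*I)^(-1) X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def choose_lambda(X, y, lambdas, k=5, seed=0):
    """Pick the penalty that minimises K-fold cross-validated prediction error."""
    folds = np.random.default_rng(seed).permutation(len(y)) % k
    best, best_err = None, np.inf
    for lam in lambdas:
        err = 0.0
        for j in range(k):
            tr, te = folds != j, folds == j
            beta = ridge_fit(X[tr], y[tr], lam)
            err += np.sum((y[te] - X[te] @ beta) ** 2)
        if err < best_err:
            best, best_err = lam, err
    return best
```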
APA, Harvard, Vancouver, ISO, and other styles
32

Xiao, Yongling. "Flexible marginal structural models for survival analysis." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=107571.

Full text
Abstract:
In longitudinal studies, both treatments and covariates may vary throughout the follow-up period. Time-dependent (TD) Cox proportional hazards (PH) models can be used to model the effect of time-varying treatments on the hazard. However, two challenges arise in such modeling. First, accurate modeling of the effects of TD treatments on the hazard requires resolving the uncertainty about the etiological relevance of treatments taken in different time periods. The second challenge arises in the presence of TD confounders affected by prior treatments. Two different methodologies, weighted cumulative exposure (WCE) and marginal structural models (MSM), have recently been proposed to address these challenges separately, each assuming the absence of the other. In this thesis, I proposed the combination of these methodologies so as to address both challenges simultaneously, as both may commonly arise in combination in longitudinal studies. In the first manuscript, I proposed and validated a novel approach to implement the marginal structural Cox proportional hazards model (referred to as Cox MSM) with inverse-probability-of-treatment weighting (IPTW) directly via a weighted time-dependent Cox PH model, rather than via a pooled logistic regression approximation. The simulations show that the IPTW estimator yields consistent estimates of the causal effect of treatment, but that it may suffer from large variability due to some extremely high IPT weights. The precision of the IPTW estimator could be improved by normalizing the stabilized IPT weights. Simple weight truncation has been proposed, and is commonly used in practice, as another solution to reduce the large variability of IPTW estimators. However, truncation levels are typically chosen based on ad hoc criteria that have not been systematically evaluated. Thus, in the second manuscript, I proposed a systematic data-adaptive approach to select the optimal truncation level that minimizes the estimated expected MSE of the IPTW estimates. In simulations, the new approach performed as well, in terms of reducing the variance and improving the MSE of the estimates, as approaches that simply truncate the stabilized weights at high percentiles of their distribution, such as the 99th or 99.5th. In the third manuscript, I proposed a new, flexible model to estimate the cumulative effect of a time-varying treatment in the presence of time-dependent confounders/mediators. The model incorporated weighted cumulative exposure modeling in a marginal structural Cox model. Specifically, weighted cumulative exposure was used to summarize the treatment history, defined as the weighted sum of past treatments. The function that assigns different weights to treatments received at different times was modeled with cubic regression splines. The stabilized IPT weights for each person at each visit were calculated to account for time-varying confounding and mediation. The weighted Cox MSM, using stabilized IPT weights, was fitted to estimate the total causal cumulative effect of the treatments on the hazard. Simulations demonstrate that the proposed model can estimate the total causal cumulative effect, i.e. capture both the direct and the indirect (mediated by the TD confounder) treatment effects. Bootstrap-based 95% confidence bounds for the estimated weight function were constructed, and the impact of some extreme IPT weights on the estimates of the causal cumulative effect was explored. In the last manuscript, I applied the WCE MSM to the Swiss HIV Cohort Study (SHCS) to re-assess whether cumulative exposure to abacavir therapy may increase the risk of cardiovascular events, such as myocardial infarction or cardiovascular-related death.
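For readers unfamiliar with the weighting machinery, the following is a minimal sketch of stabilized inverse-probability-of-treatment weights with percentile truncation, for a single time point and a binary treatment. The logistic propensity model and the interface are assumptions; the thesis works with longitudinal, visit-level weights.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stabilized_iptw(A, X, truncate_pct=99):
    """Stabilized IPT weights for a 0/1 integer treatment array A given
    confounders X, truncated at a high percentile of their distribution."""
    # Denominator: P(A | X) from a fitted propensity model.
    denom_model = LogisticRegression(max_iter=1000).fit(X, A)
    p_denom = denom_model.predict_proba(X)[np.arange(len(A)), A]
    # Numerator: the marginal probability of the observed treatment.
    p_num = np.where(A == 1, A.mean(), 1 - A.mean())
    w = p_num / p_denom
    # Truncation: cap extreme weights to reduce variance (at some bias cost).
    return np.minimum(w, np.percentile(w, truncate_pct))
```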
APA, Harvard, Vancouver, ISO, and other styles
33

Purewsuren, Zazral. "Sovereign risk and structural credit risk models." Thesis, University of Sheffield, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.577690.

Full text
Abstract:
This thesis is an analysis of sovereign default using option pricing models. The first part of the thesis applies the structural credit risk models of Gapen, Gray, Lim and Xiao (GGLX) and Karmann and Maltritz (KM) to 25 countries accounting for about 75% of global GDP. The GGLX model underestimates sovereign spread and hence the probability of default. This confirms one of the main criticisms of structural credit risk models when applied to corporate default. By contrast, the estimates produced by the KM model are far too high; the estimated probability of default is almost one in some cases. The second part of the thesis estimates the default risk indicators using the GGLX model in conjunction with a number of different assumptions about the value of sovereign assets. It also uses market values of sovereign spread, which thus becomes an input to the model rather than an output. These approaches have not been reported in the literature before. In addition, Ito's lemma is used to derive the corresponding geometric Brownian motion for sovereign spread. Using the new approach, the implied probabilities of default are larger than those obtained using the standard GGLX model. The model also gives revised values for domestic currency liability and its volatility. These are larger than the values reported by national agencies, thus contributing to the explanation of why structural credit risk models underestimate real-world credit spreads and the risk of default. The outputs from the model also lead to the construction of balance sheet ratios, which contribute information about the likelihood of sovereign default. Overall, the new model results in default rankings and associated measures that are significantly more realistic than those produced by the standard GGLX model.
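The contingent-claims logic underlying GGLX-type indicators can be illustrated with the basic Merton calculation, in which default occurs if assets fall below liabilities at the horizon. This is only a sketch: the parameter values are hypothetical, and sovereign applications replace the corporate balance-sheet inputs with sovereign analogues.

```python
from math import log, sqrt
from statistics import NormalDist

def merton_default_prob(V, F, mu, sigma, T=1.0):
    """Merton-type default probability: assets V follow a GBM with drift mu
    and volatility sigma; default occurs if V_T falls below liabilities F."""
    d2 = (log(V / F) + (mu - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return NormalDist().cdf(-d2)  # P(V_T < F)

# Hypothetical inputs: merton_default_prob(100.0, 70.0, 0.05, 0.25) ~ 0.07
```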
APA, Harvard, Vancouver, ISO, and other styles
34

Haworth, H. "Structural models of credit with default contagion." Thesis, University of Oxford, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.437010.

Full text
Abstract:
Multi-asset credit derivatives trade in huge volumes, yet no models exist that are capable of properly accounting for the spread behaviour of dependent companies. In this thesis we consider new ways of incorporating a richer and more realistic dependence structure into multi-firm models. We focus on the structural framework, in which firm value is modelled as a geometric Brownian motion, with default as the first hitting time of an exponential default threshold. Specification of a dependence structure consisting of a common driving influence and firm-specific inter-company ties allows for both default causality and default asymmetry, and we incorporate default contagion in the first-passage framework for the first time. Building on the work of Zhou (2001a), we propose an analytical model for corporate bond yields in the presence of default contagion and for two-firm credit default swap baskets. We derive closed-form solutions for credit spreads, and the results clearly highlight the importance of dependence assumptions. Extending this framework numerically, we calculate CDS spreads for baskets of three firms with a wide variety of credit dependence specifications. We examine the impact of firm value correlation and credit contagion for symmetric and asymmetric baskets, and incorporate contagion that has a declining impact over time.
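The single-firm building block of this first-passage framework is the probability that a geometric Brownian motion hits a lower barrier by time T. A sketch of that standard formula follows; it uses the constant-barrier form (an exponential threshold reduces to it after a drift adjustment, noted in the comment), and the correlation and contagion structure of the thesis is beyond a short example.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def first_passage_default_prob(V0, B, mu, sigma, T):
    """P(a GBM firm value starting at V0 hits barrier B < V0 by time T).
    For an exponential threshold B*exp(g*t), replace mu by mu - g
    (an assumed simplification, not the thesis's full setup)."""
    Phi = NormalDist().cdf
    m = mu - 0.5 * sigma ** 2        # drift of the log firm value
    b = log(B / V0)                  # log-distance to the barrier (negative)
    s = sigma * sqrt(T)
    return Phi((b - m * T) / s) + exp(2 * m * b / sigma ** 2) * Phi((b + m * T) / s)
```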
APA, Harvard, Vancouver, ISO, and other styles
35

Feng, Xudong. "Structural and functional models for methane monooxygenase." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/28003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Ciraki, Dario. "Dynamic structural equation models: estimation and inference." Thesis, London School of Economics and Political Science (University of London), 2007. http://etheses.lse.ac.uk/2937/.

Full text
Abstract:
The thesis focuses on estimation of dynamic structural equation models in which some or all variables might be unobservable (latent) or measured with error. Moreover, we consider the situation where latent variables can be measured with multiple observable indicators and where lagged values of latent variables might be included in the model. This situation leads to a dynamic structural equation model (DSEM), which can be viewed as a dynamic generalisation of the structural equation model (SEM). Taking the mismeasurement problem into account aims at reducing or eliminating the errors-in-variables bias and hence at minimising the chance of obtaining incorrect coefficient estimates. Furthermore, such methods can be used to improve measurement of latent variables and to obtain more accurate forecasts. The thesis aims to make a contribution to the literature in four areas. Firstly, we propose a unifying theoretical framework for the analysis of dynamic structural equation models. Secondly, we provide analytical results for both panel and time series DSEM models, along with software implementation suggestions. Thirdly, we propose non-parametric estimation methods that can also be used for obtaining starting values in maximum likelihood estimation. Finally, we illustrate these methods on several real data examples, demonstrating the capabilities of the currently available software as well as the importance of good starting values.
APA, Harvard, Vancouver, ISO, and other styles
37

Kwan, Tan Hwee. "Robust estimation for structural time series models." Thesis, London School of Economics and Political Science (University of London), 1990. http://etheses.lse.ac.uk/2809/.

Full text
Abstract:
This thesis aims at developing robust methods of estimation in order to draw valid inference from contaminated time series. We concentrate on additive and innovation outliers in structural time series models, using a state space representation. The parameters of interest are the state, the hyperparameters and the coefficients of explanatory variables. Three main contributions evolve from the research. Firstly, a filter named the approximate Gaussian sum filter is proposed to cope with noisy disturbances in both the transition and measurement equations. Secondly, the Kalman filter is robustified by carrying over the M-estimation of scale for i.i.d. observations to time-dependent data. Thirdly, robust regression techniques are implemented to modify the generalised least squares transformation procedure to deal with explanatory variables in time series models. All the above procedures are tested against standard non-robust estimation methods for time series by means of simulations. Two real examples are also included.
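As a rough illustration of what robustifying the Kalman filter by M-estimation means in practice, here is a sketch of a single measurement update in which outlying innovations are downweighted with a Huber function. This is a generic scheme under assumed interfaces, not the thesis's exact estimator of scale.

```python
import numpy as np

def huber_weight(r, c=1.345):
    """Huber weight: 1 inside the cutoff c, downweighted (c/|r|) outside."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / a)

def robust_update(x_pred, P_pred, y, H, R):
    """One Kalman measurement update with Huberised innovations."""
    S = H @ P_pred @ H.T + R                   # innovation covariance
    v = y - H @ x_pred                         # innovation
    w = huber_weight(v / np.sqrt(np.diag(S)))  # downweight outlying components
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_upd = x_pred + K @ (w * v)
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_upd, P_upd
```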
APA, Harvard, Vancouver, ISO, and other styles
38

Zeileis, Achim, Friedrich Leisch, Christian Kleiber, and Kurt Hornik. "Monitoring structural change in dynamic econometric models." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 2002. http://epub.wu.ac.at/1296/1/document.pdf.

Full text
Abstract:
The classical approach to testing for structural change employs retrospective tests using a historical data set of a given length. Here we consider a wide array of fluctuation-type tests in a monitoring situation: given a history period for which a regression relationship is known to be stable, we test whether incoming data are consistent with the previously established relationship. Procedures based on estimates of the regression coefficients are extended in three directions: we introduce (a) procedures based on OLS residuals, (b) rescaled statistics and (c) alternative asymptotic boundaries. Compared to the existing tests, our extensions offer better power against certain alternatives, improved size in finite samples for dynamic models, and ease of computation, respectively. We apply our methods to two data sets, German M1 money demand and U.S. labor productivity.
Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
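A schematic of the monitoring idea: fit on the stable history period, then track an OLS-residual CUSUM process against a boundary. The boundary shape used here is a placeholder assumption; the paper derives proper asymptotic boundaries, implemented in R's strucchange package.

```python
import numpy as np

def monitor_cusum(X_hist, y_hist, X_new, y_new, crit=1.0):
    """Monitoring sketch: OLS fit on the history period, then a scaled
    CUSUM of incoming residuals compared against a boundary."""
    n, p = X_hist.shape
    beta = np.linalg.lstsq(X_hist, y_hist, rcond=None)[0]
    sigma = (y_hist - X_hist @ beta).std(ddof=p)
    cusum = np.cumsum(y_new - X_new @ beta) / (sigma * np.sqrt(n))
    t = 1.0 + np.arange(1, len(y_new) + 1) / n  # relative monitoring time > 1
    boundary = crit * np.sqrt(t)                # placeholder boundary shape
    hits = np.nonzero(np.abs(cusum) > boundary)[0]
    return cusum, boundary, (int(hits[0]) if hits.size else None)
```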
APA, Harvard, Vancouver, ISO, and other styles
39

Oberst, Michael Karl. "Counterfactual policy introspection using structural causal models." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/124128.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 97-102).
Inspired by a growing interest in applying reinforcement learning (RL) to healthcare, we introduce a procedure for performing qualitative introspection and 'debugging' of models and policies. In particular, we make use of counterfactual trajectories, which describe the implicit belief (of a model) of 'what would have happened' if a policy had been applied. These serve to decompose model-based estimates of reward into specific claims about specific trajectories, a useful tool for 'debugging' of models and policies, especially when side information is available for domain experts to review alongside the counterfactual claims. More specifically, we give a general procedure (using structural causal models) to generate counterfactuals based on an existing model of the environment, including common models used in model-based RL. We apply our procedure to a pair of synthetic applications to build intuition, and conclude with an application on real healthcare data, introspecting a policy for sepsis management learned in the recently published work of Komorowski et al. (2018).
by Michael Karl Oberst.
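The abduction-action-prediction recipe for counterfactuals in a structural causal model fits in a few lines. The additive-noise mechanism below is only an illustration of that recipe; the thesis itself uses Gumbel-max SCMs for categorical dynamics, which this sketch does not implement.

```python
def counterfactual(x_obs, y_obs, f, x_new):
    """Abduction-action-prediction for an additive-noise SCM Y = f(X) + U.
    1) Abduction:  infer the noise U from the observed pair.
    2) Action:     set X to the counterfactual value x_new.
    3) Prediction: push the same noise through the mechanism."""
    u = y_obs - f(x_obs)   # abduction
    return f(x_new) + u    # action + prediction

# Toy usage: what would Y have been had X been 2.0 instead of 1.0,
# holding the same noise realisation fixed?
f = lambda x: 3.0 * x
print(counterfactual(1.0, 3.5, f, 2.0))  # 6.5
```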
APA, Harvard, Vancouver, ISO, and other styles
40

Codd, Casey L. "Nonlinear Structural Equation Models: Estimation and Applications." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1301409131.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Kim, Yookyung. "Compressed Sensing Reconstruction Using Structural Dependency Models." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/238613.

Full text
Abstract:
Compressed sensing (CS) theory has demonstrated that sparse signals can be reconstructed from far fewer measurements than suggested by the Nyquist sampling theory. CS has received great attention recently as an alternative to the current paradigm of sampling followed by compression. Initial CS operated under the implicit assumption that the sparsity-domain coefficients are independently distributed. Recent results, however, show that exploiting statistical dependencies in sparse signals improves the recovery performance of CS. This dissertation proposes model-based CS reconstruction techniques. Statistical dependency models for several CS problems are proposed and incorporated into different CS algorithms. These models allow incorporation of a priori information into the CS reconstruction problems. Firstly, we propose the use of a Bayes least squares-Gaussian scale mixtures (BLS-GSM) model for CS recovery of natural images. The BLS-GSM model is able to exploit dependencies inherent in wavelet coefficients. This model is incorporated into several recent CS algorithms. The resulting methods significantly reduce reconstruction errors and/or the number of measurements required to obtain a desired reconstruction quality, when compared to state-of-the-art model-based CS methods in the literature. The model-based CS reconstruction techniques are then extended to video. In addition to spatial dependencies, video sequences exhibit significant temporal dependencies as well. In this dissertation, a model for jointly exploiting spatial and temporal dependencies in video CS is also proposed. The proposed method enforces structural self-similarity of image blocks within each frame as well as across neighboring frames. By sparsely representing collections of similar blocks, dominant image structures are retained while noise and incoherent undersampling artifacts are eliminated. A new video CS algorithm which incorporates this model is then proposed. The proposed algorithm iterates between enforcement of the self-similarity model and consistency with measurements. By enforcing measurement consistency in the residual domain, sparsity is increased and CS reconstruction performance is enhanced. The proposed approach exhibits superior subjective image quality and significantly improves peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). Finally, a model-based CS framework is proposed for super-resolution (SR) reconstruction. The SR reconstruction is formulated as a CS problem and a self-similarity model is incorporated into the reconstruction. The proposed model enforces similarity of collections of blocks through shrinkage of their transform-domain coefficients. A sharpening operation is performed in the transform domain to emphasize edge recovery. The proposed method is compared with state-of-the-art SR techniques and provides high-quality SR images, both quantitatively and subjectively.
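As context for the reconstruction algorithms discussed, a minimal iterative soft-thresholding (ISTA) solver for the basic CS problem is sketched below. Model-based variants replace the elementwise shrinkage step with a structured denoiser such as BLS-GSM; the identity sparsity basis and the parameters here are simplifying assumptions.

```python
import numpy as np

def ist_reconstruct(Phi, y, lam=0.1, step=None, n_iter=200):
    """ISTA sketch for min_x 0.5*||y - Phi x||^2 + lam*||x||_1
    (sparsity assumed in the identity basis, a simplification)."""
    if step is None:
        step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1/L, L = Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = x + step * (Phi.T @ (y - Phi @ x))    # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return x
```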
APA, Harvard, Vancouver, ISO, and other styles
42

Pfleger, Phillip Isaac. "Exploring Fit for Nonlinear Structural Equation Models." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7370.

Full text
Abstract:
Fit indices and fit measures commonly used to determine the accuracy and desirability of structural equation models are expected to be insensitive to nonlinearity in the data. This includes measures as ubiquitous as the CFI, TLI, RMSEA, SRMR, AIC, and BIC. Despite this, some software will report these measures when certain models are used. Consequently, some researchers may be led to use these fit measures without realizing the impropriety of the act. Alternative fit measures have been proposed, but these measures require further testing. As part of this thesis, a large simulation study was carried out to investigate alternative fit measures and to confirm whether the traditional measures are practically blind to nonlinearity in the data. The results of the simulation provide conclusive evidence that fit statistics and fit indices based on the chi-square distribution or the residual covariance matrix are entirely insensitive to nonlinearity. The posterior predictive p-value was also insensitive to nonlinearity. Only fit measures based on the structural residuals (i.e., HFI and R-squared) showed any sensitivity to nonlinearity. Of these, the R-squared was the only reliable measure of nonlinear model misspecification. This thesis shows that an effective strategy for determining whether a nonlinear model is preferable to a linear one involves using the R-squared to compare models that have been fit to the same data. An R-squared that is much larger for the nonlinear model than for the linear model suggests that the linear model may be less desirable than the nonlinear model. The proposed method is intended to be supplementary to substantive theory. It is argued that any dependence on fit indices or fit statistics that places these measures on a higher pedestal than substantive theory will invariably lead to blindness on the part of the researcher. In other words, unwavering adherence to goodness-of-fit measures limits the researcher's vision to what the measures themselves can detect.
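The R-squared comparison can be illustrated with a toy calculation in which factor scores stand in for the latent variables; that substitution is an assumption made for brevity, since the thesis computes R-squared from full SEM estimates.

```python
import numpy as np

def structural_r2(eta_pred, eta):
    """R-squared of a structural equation: share of variance in the
    endogenous score explained by the predicted score."""
    ss_res = np.sum((eta - eta_pred) ** 2)
    ss_tot = np.sum((eta - eta.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy data with a genuinely quadratic structural relation:
rng = np.random.default_rng(1)
xi = rng.normal(size=500)
eta = 0.4 * xi + 0.5 * xi ** 2 + rng.normal(scale=0.5, size=500)
lin = np.polyval(np.polyfit(xi, eta, 1), xi)    # linear structural model
quad = np.polyval(np.polyfit(xi, eta, 2), xi)   # nonlinear (quadratic) model
print(structural_r2(lin, eta), structural_r2(quad, eta))  # quadratic is larger
```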
APA, Harvard, Vancouver, ISO, and other styles
43

Consolini, Laura. "Structural glass between design, tests and models." Doctoral thesis, Università Politecnica delle Marche, 2011. http://hdl.handle.net/11566/241923.

Full text
Abstract:
The research activity was configured as an investigation of structural glass. Glass is a new material in the field of structural materials, because until recently it was used mainly for glazing and/or curtain walls. In recent years, however, glass has increasingly been used for directly structural parts, such as flooring, staircases, balustrades, canopies, roofing, etc. In all these cases, glass has to behave as a building material in its own right, like concrete or steel. Looking at it from this point of view, the need for and usefulness of regulations for the calculation of structural glass is evident. In this regard, we considered the standards present in the European and Italian systems. In recent years the need in Italy for comprehensive legislation on structural glass (as already present in many European countries) has become urgent, without having to resort each time to the universe of UNI standards, which are very complete but equally dispersive. Thus, to elaborate a unified standard document, a voluntary committee was set up at the CNR for the drafting of these regulations, which we joined as part of the "models" group. Our investigation focused on characterizing structural glass as widely as possible, from the points of view of design, material testing, and mathematical models. The design work focused on the research and development of a structural element that is easy to produce and sell in different configurations and solutions. The choice fell on the design of a truss made of glass and stainless steel. Key features of this element are: modularity, since the beam consists of a base module repeatable up to a total length of 6.90 m; the possibility of curved configurations, since the elements of the basic module can rotate mutually; and portability, since by rotating the elements the module can "flatten out" and be transported more easily. The beam was studied statically and dynamically in various configurations, and at the end the design was filed as an Italian patent. Regarding the tests on structural glass, we conducted tests in both the static and dynamic fields. In statics, we performed simple compression tests, first without instrumentation for measuring displacements and then with it. In this way we could analyze the failure mechanism of glass, noting that our samples of laminated glass (consisting of three layers of glass) do not undergo brittle failure; rather, a kind of plastic plateau appears in the stress-strain graph, due to the presence of the PVB interlayer. The dynamic tests took place with the use of accelerometers and an instrumented hammer, and then with a laser vibrometer. The main aim of these tests was to understand the behavior of the interlayer and its mechanical properties. Using different methods of dynamic identification, we obtained the modal parameters, such as the natural frequencies, the modal damping and the mode shapes. The tests involved three different types of samples: a monolithic glass, a laminated glass composed of two layers of glass, and a laminated glass composed of three layers of glass. As expected, the monolithic glass behaves just like a beam in free vibration. The two-layer sample behaves, in its first modes, as if the PVB achieved a perfectly rigid connection between the layers of glass, thus making the behavior similar to that of a monolithic beam. The three-layer sample shows some behavioral anomalies, because its frequencies are lower than those of the two-layer sample instead of increasing. We searched the literature for possible explanations of this phenomenon, arguing that temperature is the factor that most affects the behavior of PVB; the three-layer sample was the only one to undergo cycles of considerable temperature variation, so a change in its behavior due to temperature is possible. The last issue addressed was the treatment of laminated glass from a theoretical point of view. Using the method of asymptotic expansion, we obtained the natural frequencies of a multi-layer element composed of linear elastic materials with a strong contrast in mechanical properties, such as glass and PVB. With the use of a small parameter, [epsilon], we described the limit behavior of the multi-layer element, identifying its pulsations at low and medium frequencies. This was achieved using two different asymptotic expansions for the pulsation [omega]. In conclusion, we conducted an investigation into structural glass that was as wide as possible, touching on various themes and trying to raise many issues to make glass ever more similar to a building material in its own right.
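For the modal tests described, the monolithic specimen is the classical benchmark: its bending natural frequencies follow the Euler-Bernoulli beam formula. A sketch for free-free boundary conditions follows; the property values and the boundary conditions are assumptions for illustration, not the thesis's test setup.

```python
from math import pi, sqrt

def free_free_beam_freqs(E, I, rho, A, L, n_modes=3):
    """First bending natural frequencies (Hz) of a free-free Euler-Bernoulli
    beam: f_n = ((beta*L)_n / L)^2 * sqrt(E*I / (rho*A)) / (2*pi)."""
    beta_L = [4.7300, 7.8532, 10.9956][:n_modes]  # roots of cos(bL)cosh(bL)=1
    return [(bl / L) ** 2 * sqrt(E * I / (rho * A)) / (2 * pi) for bl in beta_L]
```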
APA, Harvard, Vancouver, ISO, and other styles
44

Bridgett, Stephen John. "Detail suppression of stress analysis models." Thesis, Queen's University Belfast, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387980.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Preacher, Kristopher J. "The Role of Model Complexity in the Evaluation of Structural Equation Models." The Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=osu1054130634.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Hang, Huajiang. "Prediction of the effects of distributed structural modification on the dynamic response of structures." Awarded by: University of New South Wales - Australian Defence Force Academy, Engineering & Information Technology, 2009. http://handle.unsw.edu.au/1959.4/44275.

Full text
Abstract:
The aim of this study is to investigate means of efficiently assessing the effects of distributed structural modification on the dynamic properties of a complex structure. A helicopter structure is normally designed to avoid resonance at the main rotor rotational frequency. However, very often military helicopters have to be modified (for instance, to carry a different weapon system or an additional fuel tank) to fulfill operational requirements. Any modification to a helicopter structure has the potential of changing its resonance frequencies and mode shapes. The dynamic properties of the modified structure can be determined by experimental testing or numerical simulation, both of which are complex, expensive and time-consuming. Assuming that the original dynamic characteristics are already established and that the modification is a relatively simple attachment such as a beam or plate, the modified dynamic properties may be determined numerically without solving the equations of motion of the full modified structure. The frequency response functions (FRFs) of the modified structure can be computed by coupling the original FRFs and a delta dynamic stiffness matrix for the modification introduced. The validity of this approach is investigated by applying it to several cases: 1) a 1D structure with structural modification but no change in the number of degrees of freedom (DOFs), exemplified by a simply supported beam with double thickness in the middle section; 2) a 1D structure with additional DOFs, exemplified by a cantilever beam to which a smaller beam is attached; 3) a 2D structure with a reduction in DOFs, exemplified by a four-edge-clamped plate with a cut-out in the centre; and 4) a 3D structure with additional DOFs and a combination of different structures, exemplified by a box frame with a plate attached to it as the structural modification. The original FRFs were obtained numerically and, except for the first case, experimentally. The delta dynamic stiffness matrix was determined numerically by modelling the part of the modified structure comprising the modifying structure and the part of the original structure at the same location. The FRFs of the modified structure were then computed. Good agreement is obtained by comparing the results to the FRFs of the modified structure determined experimentally as well as by numerical modelling of the complete modified structure.
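The FRF coupling described above has a compact matrix form: if H(w) is the original receptance matrix and dD(w) the delta dynamic stiffness of the modification at matching DOFs, the modified receptance is (H^-1 + dD)^-1 at each frequency line. A sketch follows; the DOF bookkeeping for added or removed DOFs, which is central to the thesis, is omitted here.

```python
import numpy as np

def modified_frf(H_orig, delta_D):
    """Receptance of the modified structure at one frequency line:
    H_mod(w) = (H_orig(w)^-1 + dD(w))^-1. Also works on a stack of
    matrices shaped (n_freq, n, n), since np.linalg.inv broadcasts."""
    return np.linalg.inv(np.linalg.inv(H_orig) + delta_D)

# For an added substructure with assumed mass, damping and stiffness
# matrices M, C, K: dD(w) = K - w**2 * M + 1j * w * C.
```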
APA, Harvard, Vancouver, ISO, and other styles
47

Falzon, Christopher. "Pattern solver for the static and dynamic analysis of framework models." [Hong Kong: University of Hong Kong], 1985. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12315588.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Bogle, S. M. "Linear structural models in statistics and their applications." Thesis, University of Leeds, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.353806.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Robinson, Emma Claire. "Characterising population variability in brain structure through models of whole-brain structural connectivity." Thesis, Imperial College London, 2010. http://hdl.handle.net/10044/1/5875.

Full text
Abstract:
Models of whole-brain connectivity are valuable for understanding neurological function. This thesis seeks to develop an optimal framework for extracting models of whole-brain connectivity from clinically acquired diffusion data, and proposes new approaches for studying these models. The aim is to develop techniques which can take models of brain connectivity and use them to identify biomarkers or phenotypes of disease. The models of connectivity are extracted using a standard probabilistic tractography algorithm, modified to assess the structural integrity of tracts through estimates of white matter anisotropy. Connections are traced between 77 regions of interest, automatically extracted by label propagation from multiple brain atlases followed by classifier fusion. The estimates of tissue integrity for each tract are input as indices in 77x77 "connectivity" matrices, extracted for large populations of clinical data, which are then compared in subsequent studies. To date, most whole-brain connectivity studies have characterised population differences using graph theory techniques. However, these can be limited in their ability to pinpoint the locations of differences in the underlying neural anatomy. This thesis therefore proposes new techniques. These include a spectral clustering approach for comparing population differences in the clustering properties of weighted brain networks. In addition, machine learning approaches are suggested for the first time. These are particularly advantageous as they allow classification of subjects and extraction of features which best represent the differences between groups. One limitation of the proposed approach is that errors propagate from segmentation and registration steps prior to tractography. These can accumulate in the assignment of false-positive connections, where the contribution of these factors may vary across populations, causing the appearance of population differences where there are none. The final contribution of this thesis is therefore to develop a common co-ordinate space approach. This combines probabilistic models of voxel-wise diffusion for each subject into a single probabilistic model of diffusion for the population. This allows tractography to be performed only once, ensuring that there is one model of connectivity. Cross-subject differences can then be identified by mapping individual subjects' anisotropy data to this model. The approach is used to compare populations separated by age and gender.
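The spectral clustering approach mentioned can be sketched as the standard normalised-Laplacian embedding followed by k-means; this is a generic formulation under assumed conventions, not the thesis's exact algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clusters(W, k):
    """Spectral clustering of a weighted connectivity matrix W (n x n,
    symmetric, non-negative): embed nodes with the bottom k eigenvectors
    of the normalised graph Laplacian, then run k-means in that space."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(W)) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L)      # eigenvalues returned in ascending order
    U = vecs[:, :k]
    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12  # row-normalise
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)
```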
APA, Harvard, Vancouver, ISO, and other styles
50

Mirjalili, Vahid. "Modelling the structural efficiency of cross-sections in limited torsion stiffness design." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99780.

Full text
Abstract:
Most of the current optimization techniques for the design of light-weight structures are unable to generate structural alternatives at the concept stage of design. This research tackles the challenge of developing an optimization method for the early stage of design. The main goal is to propose a procedure to optimize the material and shape of stiff shafts in torsion.
Recently introduced for bending stiffness design, shape transformers are presented in this thesis for optimizing the design of shafts in torsion. Shape transformers are geometric parameters defined to classify shapes and to model structural efficiency. The study of shape transformers is centered on concept selection in structural design. These factors are used to formulate indices of material and shape selection for minimum-mass design. An advantage of the method of shape transformers is that the contribution of the shape can be decoupled from the contribution of the size of a cross-section. This feature gives the designer insight into the effects that scaling, shape, and material have on the overall structural performance.
Similar to the index for bending, the performance index for torsion stiffness design is a function of the relative scaling of two cross-sections. The thesis examines, analytically and graphically, the impact of scaling on the torsional efficiency of alternative cross-sections. The resulting maps assist the selection of the best material and shape for cross-sections subjected to dimensional constraints. It is shown that shape transformers for torsion, unlike those for bending, are generally a function of the scaling direction.
The efficiency maps ease the visual contrast between the efficiency of open-walled cross-sections and that of closed-walled cross-sections. As expected, the maps show the relative inefficiency of the former compared to the latter. They can also set the validity range of thin- and thick-walled theory in torsion stiffness design. The analytical results are validated against numerical data obtained from ANSYS to guarantee the consistency of the models. The thesis concludes with three case studies that demonstrate the method.
APA, Harvard, Vancouver, ISO, and other styles
