Thèses sur le sujet « SEMK model »




Consultez les 50 meilleures thèses pour votre recherche sur le sujet « SEMK model ».




1

Chakkingal, Anoop. « Réglage de la sélectivité de la synthèse Fischer-Tropsch : aperçu de la modélisation microcinétique et de l'apprentissage automatique ». Electronic Thesis or Diss., Centrale Lille Institut, 2022. http://www.theses.fr/2022CLIL0015.

Résumé :
En vue de promouvoir l’économie circulaire, de nombreux procédés chimiques sont actuellement réexaminés afin de développer des variantes plus durables. Cela a mené à une forte augmentation de la production de plastiques au cours des 60 dernières années, entraînant une production totale de 367 millions de tonnes en 2020. La méthodologie a ensuite été généralisée à l’aide d’apprentissage automatique non supervisé, ce qui a permis de dépasser les trois dimensions et de réduire le besoin d’intervention humaine. L’espace des descripteurs généré à partir de données microcinétiques (catalyseurs virtuels) est exploré en utilisant la méthode systématique du regroupement (clustering) et du classement (labelling) non supervisés. L’espace de la performance du catalyseur est regroupé en clusters et le nombre minimal de clusters est identifié. Chaque catalyseur virtuel (représenté par une certaine combinaison de descripteurs) est identifié du point de vue du cluster auquel il appartient. Il est ainsi possible d’obtenir l’étendue des valeurs des descripteurs dans le cluster ayant le meilleur rendement d’alcènes légers. Il est observé que les valeurs obtenues sont conformes à celles du catalyseur virtuel optimal identifié dans l’inspection visuelle précédente. On peut donc conclure qu’une méthode combinant la microcinétique et l’apprentissage automatique a été présentée pour le développement des catalyseurs et pour l’investigation détaillée de leurs propriétés, tout en diminuant le besoin d’intervention humaine. Finalement, la méthode d’apprentissage automatique a été étendue dans l’intention de pouvoir réaliser des prédictions de plusieurs sélectivités en se concentrant sur la production des alcènes légers à différentes conditions opérationnelles. Afin d’atteindre cet objectif, 4 modèles d’apprentissage automatique alternatifs ont été employés, i.e. la méthode lasso (lasso regression), la méthode des k plus proches voisins (k nearest neighbor regression ou KNN), la méthode de la machine à vecteurs de support (support vector machine regression ou SVR) et le réseau de neurones artificiels (Artificial Neural Network ou ANN). Les capacités de ces techniques sont évaluées par rapport à la reproduction du comportement linéaire de la conversion et de la sélectivité en fonction des variables du procédé comme cela a été simulé par le modèle SEMK. Il est constaté que les modèles à base d’un réseau de neurones artificiels correspondent le plus aux résultats de référence du modèle SEMK. Une analyse supplémentaire utilisant la technique d’interprétation de la valeur SHAP a été appliquée aux modèles à base d’un réseau de neurones artificiels ayant la meilleure performance, en vue de mieux expliquer le fonctionnement des modèles. L’ensemble de l’étude a rapporté des connaissances essentielles, telles que les descripteurs de catalyseur optimaux : les enthalpies de chimisorption atomique de l’hydrogène (QH ≈ 234 kJ/mol), du carbone (QC ≈ 622 kJ/mol) et de l’oxygène (QO ≈ 575 kJ/mol), pour la conception de catalyseurs ayant une sélectivité en alcènes légers élevée en utilisant un modèle SEMK mécaniste. De plus, l’étendue des conditions opérationnelles menant à la meilleure sélectivité en alcènes légers a été déterminée en adoptant plusieurs stratégies de modélisation (le concept des SEMK et l’apprentissage automatique). Il a été constaté que les effets de la température (580-620 K) et de la pression (1-2 bar) étaient les plus importants.
Ensuite, une investigation est réalisée dans le but d’évaluer à quel point les résultats des modèles d’apprentissage automatique correspondent à ceux du modèle SEMK. Par exemple, une analyse préliminaire a pu être réalisée en utilisant un modèle d’apprentissage automatique pour l’analyse des données obtenues à l’aide d’expérimentation à haut débit. Ensuite, le modèle mécaniste a permis d’acquérir une compréhension chimique approfondie.
Striving towards a circular economy has led to the re-investigation of many existing processes, with the target of developing more sustainable variants. In our present economy, plastics form an important and omnipresent material affecting our daily lives. They are inexpensive, durable, corrosion resistant, and lightweight, leading to their use in a wide variety of applications. Within the plastic chemical recycling scheme, Fischer-Tropsch synthesis (FTS) could play a key role, as the syngas feedstock that is converted in it can be generated via the gasification of the considered plastics. This syngas is then chemo-catalytically converted into hydrocarbons such as paraffins and light olefins. Typical FTS catalysts are based on supported cobalt or iron species. Among the mechanistic kinetic models, the comprehensive variant based on the Single Event MicroKinetics (SEMK) concept has been widely applied in the field of oligomerization, autoxidative curing, etc. and has proven to be a versatile tool to simulate Fischer-Tropsch synthesis. However, developing mechanistic models for every chemical engineering challenge is not always feasible due to their complexity and the in-depth knowledge required to build such models. A detailed evaluation of the potential of using machine learning approaches to match the performance of results obtained using the Single-Event MicroKinetic model was carried out. Initially, the focus was on a single dominant output scenario (methane-selective catalyst). The current work thus shows that more widely applied techniques in data science can now be applied for systematic analysis and interpretation of kinetic data. Similar analysis using experimental data can also help experimenters in their preliminary analysis, to detect hidden trends in the data, and thus to identify important features. After gaining confidence in the investigated interpretation techniques, for the FTS reaction with a single dominant output, a similar investigation on the potential of iron-based catalysts with enhanced light olefin selectivity is then carried out.
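The abstract above compares four regressors (lasso, KNN, SVR, ANN) trained to reproduce outputs of the mechanistic SEMK model. The sketch below illustrates that surrogate-modelling workflow with scikit-learn on a purely synthetic stand-in; `semk_selectivity()` and the operating-condition ranges are hypothetical placeholders, not the actual SEMK simulator or data from the thesis.

```python
# Hedged sketch: compare surrogate regressors against a mechanistic-style reference.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import Lasso
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

def semk_selectivity(X):
    """Stand-in for a mechanistic simulator: maps process conditions
    (temperature, pressure, H2/CO ratio) to a light-olefin selectivity."""
    T, P, ratio = X[:, 0], X[:, 1], X[:, 2]
    return 0.4 * np.exp(-((T - 600.0) / 40.0) ** 2) / (1.0 + 0.5 * P) + 0.05 * ratio

# virtual operating conditions: temperature [K], pressure [bar], H2/CO ratio
X = np.column_stack([rng.uniform(520, 680, 500),
                     rng.uniform(1, 10, 500),
                     rng.uniform(0.5, 3.0, 500)])
y = semk_selectivity(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "lasso": make_pipeline(StandardScaler(), Lasso(alpha=1e-3)),
    "knn":   make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5)),
    "svr":   make_pipeline(StandardScaler(), SVR(C=10.0)),
    "ann":   make_pipeline(StandardScaler(),
                           MLPRegressor(hidden_layer_sizes=(32, 32),
                                        max_iter=5000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```

In this spirit, the best-performing surrogate (the ANN in the thesis) would then be passed to a SHAP-style interpretation step to rank the influence of each process variable.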
2

Dhurandhar, Amit. « Semi-analytical method for analyzing models and model selection measures ». [Gainesville, Fla.] : University of Florida, 2009. http://purl.fcla.edu/fcla/etd/UFE0024733.

3

Yu, Fu. « On statistical analysis of vehicle time-headways using mixed distribution models ». Thesis, University of Dundee, 2014. https://discovery.dundee.ac.uk/en/studentTheses/d101df63-b7db-45b6-8a03-365b64345e6b.

Résumé :
For decades, vehicle time-headway distribution models have been studied by many researchers and traffic engineers. A good time-headway model can be beneficial to traffic studies and management in many aspects; e.g. with a better understanding of road traffic patterns and road user behaviour, researchers or engineers can give better estimations and predictions under certain road traffic conditions and hence make better decisions on traffic management and control. The models also help us to implement high-quality microscopic traffic simulation studies to seek good solutions to traffic problems with minimal interruption of the real traffic environment and minimum costs. Compared with previously studied models, the mixed (SPM and GQM) models, especially those using the gamma or lognormal distributions to describe followers' headways, are probably the most recognized by researchers in statistical studies of headway data. These mixed models are reported with good fitting results indicated by goodness-of-fit tests, and some of them are better than others in computational costs. The gamma-SPM and gamma-GQM models are often reported to have similar fitting qualities, and they often outperform the lognormal-GQM model in terms of computational costs. A lognormal-SPM model cannot be formed analytically as no explicit Laplace transform is available for the lognormal distribution. The major downsides of using mixed models are the difficulty and greater flexibility of the fitting process, as they have more parameters than single models, and this sometimes leads to unsuccessful fitting or unreasonable fitted parameters despite their success in passing GoF tests. Furthermore, it is difficult to know the connections between model parameters and realistic traffic situations or environments, and these parameters have to be estimated using headway samples. Hence, it is almost impossible to explain any traffic phenomena with the parameters of a model. Moreover, with the gamma distribution as the only common well-known followers' headway model, it is hard to justify whether it has described the headway process appropriately. This creates a barrier to better understanding the process of how drivers follow their preceding vehicles. This study firstly proposes a framework developed using MATLAB, which helps researchers in quick implementations of any headway distributions of interest. This framework uses common methods to manage and prepare headway samples to meet the requirements of data analysis. It also provides common structures and methods for implementing existing or new models, fitting models, testing their performance and reporting results. This will simplify the development work involved in headway analysis, avoid unnecessary repetitions of work done by others and provide results in formats that are more comparable with those reported by others. Secondly, this study focuses on the implementation of existing mixed models, i.e. the gamma-SPM, gamma-GQM and lognormal-GQM, using the proposed framework. The lognormal-SPM is also tested for the first time, with the recently developed approximation method of the Laplace transform available for lognormal distributions. The parameters of these mixed models are specially discussed, as a means of restrictions to simplify the fitting process of these models. Three ways of parameter pre-determination are attempted for the gamma-SPM and gamma-GQM models.
A couple of response-time (RT) distributions are focused on in the later part of this study. Two RT models, i.e. the Ex-Gaussian (EMG) and the inverse Gaussian (IVG), are used for the first time as single models to describe headway data. Their fitting performances are highly comparable to the best-known lognormal single model. Further extending this work, these two models are tested as followers' headway distributions in both SPM and GQM mixed models. The test results have shown excellent fitting performance. These now give researchers more alternatives when using mixed models in headway analysis, and this will help to compare the behaviours of different models when they are used to describe followers' headway data. Again, similar parameter restrictions are attempted for these new mixed models, and the results show acceptable performance and also correct some unreasonable fittings caused by the over-flexibility of 4- or 5-parameter models.
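The abstract enumerates candidate followers' headway distributions (gamma, lognormal, Ex-Gaussian, inverse Gaussian) compared through goodness-of-fit tests. The sketch below illustrates that kind of comparison with SciPy rather than the MATLAB framework of the thesis; the synthetic "headways" and the chosen test (Kolmogorov-Smirnov) are illustrative assumptions only.

```python
# Illustrative sketch: fit candidate headway distributions and compare GoF.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
headways = rng.gamma(shape=2.5, scale=0.8, size=2000) + 0.3  # synthetic headways [s]

candidates = {
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
    "Ex-Gaussian (EMG)": stats.exponnorm,
    "inverse Gaussian": stats.invgauss,
}
for name, dist in candidates.items():
    params = dist.fit(headways)                      # maximum-likelihood fit
    ks = stats.kstest(headways, dist.cdf, args=params)
    print(f"{name:>18}: KS statistic = {ks.statistic:.4f}")
```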
4

Goes, Adriano Almeida 1978. « Modelo de propagação empírico para sistemas RFID passivo = Empirical propagation model for RFID passive systems ». [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261045.

Résumé :
Orientador: Paulo Cardieri
Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Resumo: Resultados de campanhas de medição realizadas visando o desenvolvimento de uma ferramenta para o projeto, implantação e análise de sistemas de RFID são mostrados nesse trabalho. Particularmente, a perda de percurso de rádio a partir de um leitor de RFID até um TAG, e de volta para o leitor, é caracterizada na banda de 915 MHz, para diferentes distâncias de separação leitor-TAG, alturas de TAG e de antena do leitor. Vários cenários de propagação foram considerados, incluindo ambientes exteriores e interiores, para os quais foi colhido um extenso número de medidas. Os dados de campo são, então, comparados a uma versão melhorada do modelo clássico de perda de percurso 2-ray, ajustada para incluir também os padrões de radiação de antena não omnidirecionais no leitor. Além disso, foi investigado, por meio da análise de medidas de campo, o efeito da mobilidade do TAG no sinal recebido no leitor. Para a coleta das medidas, foi construído um aparato composto de uma esteira de velocidade controlada, onde foram instalados TAGs de teste. Os resultados de medida mostraram que a mobilidade do TAG provoca uma diminuição do valor médio e um aumento da variância do sinal recebido no leitor. Essa atenuação extra e a variância do sinal não são fortemente afetadas pelo valor da velocidade. Por fim, esses efeitos de propagação são incorporados em um modelo matemático, que pode ser utilizado para a simulação e o planejamento de sistemas RFID.
Abstract: Results of measurement campaigns carried out aiming at the development of a tool for design, deployment, and analysis of RFID systems are shown. Particularly, the radio path loss from an RFID reader towards the test TAG and back to the reader is characterized at the 915 MHz band. The path loss is estimated based on the received signal strength measured at the reader, for different reader-TAG separation distances and different TAG and reader antenna heights. Several propagation scenarios have been considered, including outdoor and indoor environments, for which an extensive number of typical real manufacturing plants have been chosen. The field data are then compared to a proposed novel, improved version of the classical 2-ray path loss model, adjusted to include non-omnidirectional antenna radiation patterns at the reader. In addition, the effect of TAG mobility on the received signal at the reader was also investigated by means of field measurements. To collect the field measurements, an apparatus was designed and constructed, consisting of a mat of controlled speed, on which test TAGs were installed. The results showed that TAG mobility decreases the average value and increases the variance of the received signal at the reader. This extra attenuation and the increased variance of the signal are not strongly affected by the value of speed. Finally, these two effects are incorporated into a mathematical model that can be used for simulation and planning of RFID systems.
Doutorado
Telecomunicações e Telemática
Doutor em Engenharia Elétrica
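The abstract above starts from the classical 2-ray ground-reflection path-loss model and extends it with the reader's antenna pattern. A minimal sketch of the classical model is given below; the distances, heights and the perfect ground reflection (Gamma = -1, isotropic antennas) are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch of the classical 2-ray ground-reflection path-loss model.
import numpy as np

def two_ray_path_loss_db(d, h_reader, h_tag, freq_hz=915e6, gamma=-1.0):
    """Path loss (dB) between reader and TAG antennas over a flat ground plane."""
    c = 3e8
    lam = c / freq_hz
    k = 2 * np.pi / lam
    d_los = np.sqrt(d**2 + (h_reader - h_tag)**2)   # direct ray
    d_ref = np.sqrt(d**2 + (h_reader + h_tag)**2)   # ground-reflected ray
    field = np.exp(-1j * k * d_los) / d_los + gamma * np.exp(-1j * k * d_ref) / d_ref
    gain = (lam / (4 * np.pi))**2 * np.abs(field)**2  # Pr/Pt for isotropic antennas
    return -10 * np.log10(gain)

distances = np.array([1.0, 2.0, 4.0, 8.0])           # reader-TAG separation [m]
print(two_ray_path_loss_db(distances, h_reader=1.5, h_tag=1.0))
```

The improvement described in the thesis would, in this formulation, weight each ray term by the reader antenna gain in the direction of that ray instead of assuming isotropic radiation.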
5

Nwi-Mozu, Isaac. « Robustness of Semi-Parametric Survival Model : Simulation Studies and Application to Clinical Data ». Digital Commons @ East Tennessee State University, 2019. https://dc.etsu.edu/etd/3618.

Résumé :
An efficient way of analyzing survival clinical data such as cancer data is a great concern to health experts. In this study, we investigate and propose an efficient way of handling survival clinical data. Simulation studies were conducted to compare the performance of various forms of survival model techniques using the R package "survsim". Model performance was assessed over varying sample sizes, from small and mild up to large ($n > 5000$). For small and mild samples, the semi-parametric model outperforms or approximates the performance of the parametric model. However, for large samples, the parametric model outperforms the semi-parametric model. We compared the effectiveness and reliability of our proposed techniques using real clinical data of mild sample size. Finally, systematic steps on how to model and explain the proposed techniques on real survival clinical data were provided.
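The comparison described above contrasts parametric and semi-parametric survival models on simulated data (the thesis uses the R package "survsim"). The sketch below shows an analogous comparison in Python, assuming the third-party lifelines package; the simulated covariate, censoring mechanism, and effect size are illustrative assumptions.

```python
# Illustrative sketch: semi-parametric (Cox) vs parametric (Weibull AFT) survival fits.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter

rng = np.random.default_rng(2)
n = 300
x = rng.binomial(1, 0.5, n)                         # a single binary covariate
t_event = rng.weibull(1.5, n) * np.exp(-0.7 * x)    # shorter survival when x = 1
t_cens = rng.exponential(1.5, n)                    # independent right-censoring
df = pd.DataFrame({"T": np.minimum(t_event, t_cens),
                   "E": (t_event <= t_cens).astype(int),
                   "x": x})

cox = CoxPHFitter().fit(df, duration_col="T", event_col="E")
weib = WeibullAFTFitter().fit(df, duration_col="T", event_col="E")
cox.print_summary()     # semi-parametric: no baseline-hazard assumption
weib.print_summary()    # parametric: Weibull baseline
```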
6

Pouliot, George. « A Variable Resolution Nonhydrostatic Global Atmospheric Semi-implicit Semi-Lagrangian Model ». NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20000403-180910.

Résumé :

POULIOT, GEORGE. A Variable Resolution Nonhydrostatic Global Atmospheric Semi-implicit Semi-Lagrangian Model. (Under the direction of Dr. Fredrick H.M. Semazzi.) The objective of this project is to develop a variable-resolution finite difference adiabatic global nonhydrostatic semi-implicit semi-Lagrangian (SISL) model based on the fully compressible nonhydrostatic atmospheric equations. To achieve this goal, a three-dimensional variable resolution dynamical core was developed and tested. The main characteristics of the dynamical core can be summarized as follows: Spherical coordinates were used in a global domain. A hydrostatic/nonhydrostatic switch was incorporated into the dynamical equations to use the fully compressible atmospheric equations. A generalized horizontal variable resolution grid was developed and incorporated into the model. For a variable resolution grid, in contrast to a uniform resolution grid, the order of accuracy of finite difference approximations is formally lost but remains close to the order of accuracy associated with the uniform resolution grid provided the grid stretching is not too significant. The SISL numerical scheme was implemented for the fully compressible set of equations. In addition, the generalized minimum residual (GMRES) method with restart and preconditioner was used to solve the three-dimensional elliptic equation derived from the discretized system of equations. The three-dimensional momentum equation was integrated in vector-form to incorporate the metric terms in the calculations of the trajectories. Using global re-analysis data for a specific test case, the model was compared to similar SISL models previously developed. Reasonable agreement between the model and the other independently developed models was obtained. The Held-Suarez test for dynamical cores was used for a long integration and the model was successfully integrated for up to 1200 days. Idealized topography was used to test the variable resolution component of the model. Nonhydrostatic effects were simulated at grid spacings of 400 meters with idealized topography and uniform flow. Using a high-resolution topographic data set and the variable resolution grid, sets of experiments with increasing resolution were performed over specific regions of interest. Using realistic initial conditions derived from re-analysis fields, nonhydrostatic effects were significant for grid spacings on the order of 0.1 degrees with orographic forcing. If the model code was adapted for use in a message passing interface (MPI) on a parallel supercomputer today, it was estimated that a global grid spacing of 0.1 degrees would be achievable for a global model. In this case, nonhydrostatic effects would be significant for most areas. A variable resolution grid in a global model provides a unified and flexible approach to many climate and numerical weather prediction problems. The ability to configure the model from very fine to very coarse resolutions allows for the simulation of atmospheric phenomena at different scales using the same code. We have developed a dynamical core illustrating the feasibility of using a variable resolution in a global model.
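The semi-Lagrangian ingredient of the dynamical core described above amounts to tracing trajectories back to departure points and interpolating the advected fields there. The toy 1-D sketch below illustrates only that trajectory/interpolation step under strong simplifying assumptions (constant wind, periodic domain, linear interpolation); the actual model is 3-D, semi-implicit and variable-resolution.

```python
# Minimal 1-D semi-Lagrangian advection sketch (constant wind, periodic domain).
import numpy as np

nx, L, u, dt, nsteps = 200, 1.0, 0.3, 0.02, 100
x = np.linspace(0.0, L, nx, endpoint=False)
q = np.exp(-((x - 0.5) / 0.05) ** 2)           # initial tracer field

for _ in range(nsteps):
    x_dep = (x - u * dt) % L                    # departure points of the trajectories
    q = np.interp(x_dep, x, q, period=L)        # interpolate field to departure points

print(q.max())   # the scheme is stable for large time steps but slightly diffusive
```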

7

Silva, Marcelo Ferreira da. « Densidade espectral para o modelo de Anderson de duas impurezas sem correlação eletrônica ». Universidade de São Paulo, 1998. http://www.teses.usp.br/teses/disponiveis/76/76131/tde-07052014-145918/.

Résumé :
Este trabalho calcula analítica e numericamente a densidade espectral para o modelo de Anderson de duas impurezas sem correlação eletrônica (U=0). Nossos resultados servem como passo inicial para se entender o modelo com a correlação eletrônica. O modelo estudado descreve a interação entre elétrons de um metal e impurezas magnéticas localizadas, e a simplificação, U = 0, torna o Hamiltoniano quadrático permitindo assim que se divida o mesmo em dois termos: um envolvendo apenas operadores pares (canal par) e outro envolvendo apenas operadores ímpares (canal ímpar). Cada termo encontrado difere pouco do Hamiltoniano de Nível Ressonante. Nossos resultados abrangem tanto a diagonalização analítica como a numérica pelo método do Grupo de Renormalização, adaptado para o caso de duas impurezas. A simplicidade do Hamiltoniano permite que (1) se identifique características do modelo que afetam adversamente a precisão do cálculo numeríco e (2) se encontre uma maneira de circundar tais dificuldades. Os resultados aqui encontrados ajudaram o desenvolvimento do cálculo da densidade espectral do modelo correlacionado, desenvolvido paralelamente em nosso grupo de pesquisa.
This work calculates analytically and numerically the spectral density for the two impurity uncorrelated Anderson model (U = 0). Our results serve as an initial step towards understanding models with electronic correlation. The studied model describes the interaction between conduction-band electrons of a metal and localized magnetic impurities. The simplification U = 0 turns the Hamiltonian quadratic, allowing us to split it into two parts: one involving only even operators (even channel), the other involving odd operators (odd channel). Each term has a form differing a little from that of the Resonant Level Hamiltonian. Our results include analytic diagonalization as well as numerical calculations using the method of the Renormalization Group, adapted for the two impurity case. The traditional tridiagonalization method imposes particle-hole symmetry, while our treatment preserves the energy dependence of the coupling between the impurities and the conduction band and, consequently, the natural asymmetry of the model. The simplicity of the Hamiltonian allowed us to (1) identify characteristics of the model that adversely affect the accuracy of the numerical calculation and (2) find a way to circumvent such difficulties. The results found here helped the development of the calculation of the spectral density of the correlated model, developed simultaneously in our research group.
8

Ramos, Luís Roberto. « Propriedades termodinâmicas do Modelo de Falicov-Kimball de duas impurezas sem spin ». Universidade de São Paulo, 2002. http://www.teses.usp.br/teses/disponiveis/76/76131/tde-03062014-103216/.

Résumé :
Neste trabalho estudamos o modelo de Falicov-Kimball, que descreve duas impurezas sem spin, localizadas e hibridizadas com elétrons de condução de um metal hospedeiro, o que faz com que a valência flutuante seja algo intrínseco do modelo. Os estados de condução são, também, espalhados eletrostaticamente quando uma carga estiver presente nos níveis locais das impurezas. O estudo foi realizado através do cálculo de propriedades termodinâmicas do modelo, mais precisamente, da análise do calor específico e da suscetibilidade de carga em função da temperatura e para vários parâmetros diferentes do modelo. Para a obtenção do espectro de energias do Hamiltoniano que descreve o modelo, do qual as propriedades termodinâmicas são obtidas, utilizamos o Grupo de Renormalização Numérico com dois parâmetros de discretização. Em nossos estudos, mostramos alguns resultados que vão além da usual aproximação que projeta todos os momentos no nível de Fermi. Começamos nosso estudo da termodinâmica do modelo analisando regiões do espaço de parâmetros onde o Hamiltoniano torna-se mais simples (regiões onde não há hibridização ou espalhamento eletrostático) e, então, interpretações mais simples dos dados são possíveis. Verificamos, por exemplo, que quando a hibridização é diferente de zero o sistema se comporta como líquido de Fermi para temperaturas indo a zero. Para algumas escolhas de parâmetros o sistema tem o comportamento de férmions pesados. Outro ponto a se destacar é que a razão de Wilson, definida aqui como a divisão da suscetibilidade de carga pelo calor específico, tem o valor universal R = 1, quando a hibridização está presente.
In this work, we study the Falicov-Kimball model with two localized spinless impurities hybridized with conduction electrons of a host metal, so that valence fluctuation is intrinsic to the model. The conduction states are also electrostatically scattered whenever a charge is present in the local levels of the impurities. The study was carried out by computing thermodynamic properties of the model; more specifically, we analyze the temperature-dependent specific heat and charge susceptibility for many different parameters of the model. The Numerical Renormalization Group with two discretization parameters is used to obtain the spectrum of the model, from which the thermodynamics is obtained. We discuss the importance of going beyond the usual approximation that projects all momenta onto the Fermi level. We began our study of the thermodynamical properties by analyzing regions of the parameter space where the model becomes quadratic (that is, where hybridization or Coulomb scattering is absent), and thus simple interpretations of the data are possible. We verified, for example, that for non-zero hybridization the system shows Fermi liquid behavior at low temperature. The Wilson ratio, defined here with the charge susceptibility instead of the magnetic one, has the universal value R = 1 whenever the hybridization is present. For some choices of the model parameters, the model behaves like a heavy-fermion system.
9

Wendler, Tim Glenn. « Algebraic Semi-Classical Model for Reaction Dynamics ». BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/5755.

Résumé :
We use an algebraic method to model the molecular collision dynamics of a collinear triatomic system. Beginning with a forced oscillator, we develop a mathematical framework upon which inelastic and reactive collisions are modeled. The model is considered algebraic because it takes advantage of the properties of a Lie algebra in the derivation of a time-evolution operator. The time-evolution operator is shown to generate both phase-space and quantum dynamics of a forced oscillator simultaneously. The model is considered semi-classical because only the molecule's internal degrees-of-freedom are quantized. The relative translation between the colliding atom and molecule in an exchange reaction (AB+C ->A+BC) contains no bound states and any possible tunneling is neglected so the relative translation is treated classically. The purpose of this dissertation is to develop a working model for the quantum dynamics of a collinear reactive collision. After a reliable model is developed we apply statistical mechanics principles by averaging collisions with molecules in a thermal bath. The initial Boltzmann distribution is of the oscillator energies. The relative velocities of the colliding particles is considered a thermal average. Results are shown of quantum transition probabilities around the transition state that are highly dynamic due to the coupling between the translational and transverse coordinate.
10

Bulla, Jan. « Application of Hidden Markov and Hidden Semi-Markov Models to Financial Time Series ». Doctoral thesis, [S.l. : s.n.], 2006. http://swbplus.bsz-bw.de/bsz260867136inh.pdf.

11

Preacher, Kristopher J. « The Role of Model Complexity in the Evaluation of Structural Equation Models ». The Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=osu1054130634.

12

Santos, Douglas Gomes dos. « Estimação de volatilidade em séries financeiras : modelos aditivos semi-paramétricos e GARCH ». reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/14892.

Résumé :
A estimação e previsão da volatilidade de ativos são de suma importância para os mercados financeiros. Temas como risco e incerteza na teoria econômica moderna incentivaram a procura por métodos capazes de modelar uma variância condicional que evolui ao longo do tempo. O objetivo principal desta dissertação é comparar alguns métodos de regressão global e local quanto à extração da volatilidade dos índices Ibovespa e Standard and Poor's 500. Para isto, são realizadas estimações e previsões com os modelos GARCH paramétricos e com os modelos aditivos semi-paramétricos. Os primeiros, tradicionalmente utilizados na estimação de segundos momentos condicionais, têm sua capacidade sugerida em diversos artigos. Os segundos proveem alta flexibilidade e descrições visualmente informativas das relações entre as variáveis, tais como assimetrias e não linearidades. Sendo assim, testar o desempenho dos últimos frente às estruturas paramétricas consagradas apresenta-se como uma investigação apropriada. A realização das comparações ocorre em períodos selecionados de alta volatilidade no mercado financeiro internacional (crises), sendo a performance dos modelos medida dentro e fora da amostra. Os resultados encontrados sugerem a capacidade dos modelos semi-paramétricos em estimar e prever a volatilidade dos retornos dos índices nos momentos analisados.
Volatility estimation and forecasting are very important matters for the financial markets. Themes like risk and uncertainty in modern economic theory have encouraged the search for methods that allow for the modeling of time-varying variances. The main objective of this dissertation is to compare global and local regressions in terms of their capacity to extract the volatility of the Ibovespa and Standard and Poor's 500 indexes. To achieve this aim, parametric GARCH and semiparametric additive models are estimated and used for forecasting. The first ones, traditionally applied in the estimation of conditional second moments, have their capacity suggested in many papers. The second ones provide high flexibility and visually informative descriptions of the relationships between the variables, like asymmetries and nonlinearities. Therefore, testing the latter's performance against the acknowledged parametric structures is an appropriate investigation. Comparisons are made in selected periods of high volatility in the international financial market (crises), measuring the models' performance in-sample and out-of-sample. The results suggest the capacity of semiparametric models to estimate and forecast the volatility of the indexes' returns in the analyzed periods.
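The parametric benchmark referred to above is the GARCH family. The sketch below shows a GARCH(1,1) fit and a short variance forecast, assuming the third-party Python "arch" package and placeholder return data; the thesis itself compares this benchmark with semi-parametric additive volatility models, which are not reproduced here.

```python
# Hedged sketch: GARCH(1,1) volatility estimation and forecasting with the arch package.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(3)
returns = rng.standard_t(df=6, size=1500)      # placeholder daily returns (in %)

am = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")
print(res.summary())

sigma_in_sample = res.conditional_volatility               # fitted volatility path
var_forecast = res.forecast(horizon=5).variance.iloc[-1]   # 5-step-ahead variance
print(var_forecast)
```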
13

Bond, S. A. « Dynamic models of semi-variance ». Thesis, University of Cambridge, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.596758.

Résumé :
The semi-variance is a measure of downside risk originally suggested by Markowitz (1959). More correctly termed a second order lower partial moment, it captures the volatility of a series below a target rate of return. Under a certain set of conditions, use of the variance provides the same result as using semi-variance. However, when there is asymmetry present in the distribution of returns and in the preferences of individuals, semi-variance is preferred. Despite the potential of such risk measures, little previous work has examined the most suitable form for a dynamic (conditional) model of a lower partial moment. A number of approaches to this problem suggest themselves, such as a regime model, distribution based model or even an OLS model. The development and evaluation of such models forms the central focus of this dissertation. After an introduction in Chapter 1, Chapter 2 introduces the concept of semi-variance in detail and provides a comparison of the sample properties of the estimators for semi-variance and variance. Furthermore, Chapter 2 develops an expression for the size of the relative (in)efficiency of the sample semi-variance under the assumptions of symmetry and asymmetry of returns. The subsequent three chapters consider the form of a conditional semi-variance model. Chapter 3 develops a family of regime models of downside risk based on the SETAR ARCH model (Tong 1990). The models developed are found to outperform GARCH models and are able to explicitly identify the semi-variance. The use of asymmetry conditional density functions in GARCH models is the focus of Chapter 4. More specifically, Chapter 4 develops a GARCH model based on the double gamma distribution. This distribution has the added advantage that the conditional semi-variance can be identified from the parameters of the density function. The model is applied to a set of foreign currencies with mixed results.
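Since the abstract defines the semi-variance as a second-order lower partial moment below a target return, a direct numerical illustration is straightforward; the returns below are simulated placeholders, and the comparison with half the variance only holds approximately for symmetric returns around the target.

```python
# Sample semi-variance (second-order lower partial moment) below a target return.
import numpy as np

def semi_variance(returns, target=0.0):
    """Mean squared shortfall below `target`; only returns under the target contribute."""
    shortfall = np.minimum(returns - target, 0.0)
    return np.mean(shortfall ** 2)

rng = np.random.default_rng(4)
r = rng.normal(0.0005, 0.01, 2500)              # placeholder daily returns
print(semi_variance(r), np.var(r) / 2)          # for symmetric returns, SV ~ variance / 2
```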
14

Bush, Christopher A. « Semi-parametric Bayesian linear models / ». The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487856076417948.

15

Brefeld, Ulf. « Semi-supervised structured prediction models ». Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2008. http://dx.doi.org/10.18452/15748.

Résumé :
Das Lernen aus strukturierten Eingabe- und Ausgabebeispielen ist die Grundlage für die automatisierte Verarbeitung natürlich auftretender Problemstellungen und eine Herausforderung für das Maschinelle Lernen. Die Einordnung von Objekten in eine Klassentaxonomie, die Eigennamenerkennung und das Parsen natürlicher Sprache sind mögliche Anwendungen. Klassische Verfahren scheitern an der komplexen Natur der Daten, da sie die multiplen Abhängigkeiten und Strukturen nicht erfassen können. Zudem ist die Erhebung von klassifizierten Beispielen in strukturierten Anwendungsgebieten aufwändig und ressourcenintensiv, während unklassifizierte Beispiele günstig und frei verfügbar sind. Diese Arbeit thematisiert halbüberwachte, diskriminative Vorhersagemodelle für strukturierte Daten. Ausgehend von klassischen halbüberwachten Verfahren werden die zugrundeliegenden analytischen Techniken und Algorithmen auf das Lernen mit strukturierten Variablen übertragen. Die untersuchten Verfahren basieren auf unterschiedlichen Prinzipien und Annahmen, wie zum Beispiel der Konsensmaximierung mehrerer Hypothesen im Lernen aus mehreren Sichten, oder der räumlichen Struktur der Daten im transduktiven Lernen. Desweiteren wird in einer Fallstudie zur Email-Batcherkennung die räumliche Struktur der Daten ausgenutzt und eine Lösung präsentiert, die der sequenziellen Natur der Daten gerecht wird. Aus den theoretischen Überlegungen werden halbüberwachte, strukturierte Vorhersagemodelle und effiziente Optmierungsstrategien abgeleitet. Die empirische Evaluierung umfasst Klassifikationsprobleme, Eigennamenerkennung und das Parsen natürlicher Sprache. Es zeigt sich, dass die halbüberwachten Methoden in vielen Anwendungen zu signifikant kleineren Fehlerraten führen als vollständig überwachte Baselineverfahren.
Learning mappings between arbitrary structured input and output variables is a fundamental problem in machine learning. It covers many natural learning tasks and challenges the standard model of learning a mapping from independently drawn instances to a small set of labels. Potential applications include classification with a class taxonomy, named entity recognition, and natural language parsing. In these structured domains, labeled training instances are generally expensive to obtain while unlabeled inputs are readily available and inexpensive. This thesis deals with semi-supervised learning of discriminative models for structured output variables. The analytical techniques and algorithms of classical semi-supervised learning are lifted to the structured setting. Several approaches based on different assumptions of the data are presented. Co-learning, for instance, maximizes the agreement among multiple hypotheses while transductive approaches rely on an implicit cluster assumption. Furthermore, in the framework of this dissertation, a case study on email batch detection in message streams is presented. The involved tasks exhibit an inherent cluster structure and the presented solution exploits the streaming nature of the data. The different approaches are developed into semi-supervised structured prediction models and efficient optimization strategies thereof are presented. The novel algorithms generalize state-of-the-art approaches in structural learning such as structural support vector machines. Empirical results show that the semi-supervised algorithms lead to significantly lower error rates than their fully supervised counterparts in many application areas, including multi-class classification, named entity recognition, and natural language parsing.
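The core idea above, exploiting cheap unlabeled examples alongside scarce labeled ones, can be illustrated with a flat-classification toy using scikit-learn's self-training wrapper; note this is only a hedged stand-in, since the thesis deals with structured outputs (sequences, parse trees) and methods such as co-learning and transductive structural SVMs, which are not reproduced here.

```python
# Illustrative sketch: self-training on mostly unlabeled data vs labeled-only training.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# mask 90% of the training labels; -1 marks "unlabeled" for scikit-learn
rng = np.random.default_rng(0)
y_partial = y_train.copy()
y_partial[rng.random(len(y_partial)) < 0.9] = -1

base = SVC(probability=True, gamma="scale")
self_training = SelfTrainingClassifier(base).fit(X_train, y_partial)
labeled_only = SVC(gamma="scale").fit(X_train[y_partial != -1], y_train[y_partial != -1])

print("self-training :", self_training.score(X_test, y_test))
print("labeled-only  :", labeled_only.score(X_test, y_test))
```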
16

Vasconcelos, Julio Cezar Souza. « Modelo linear parcial generalizado simétrico ». Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-26072017-105153/.

Résumé :
Neste trabalho foi proposto o modelo linear parcial generalizado simétrico, com base nos modelos lineares parciais generalizados e nos modelos lineares simétricos, em que a variável resposta segue uma distribuição que pertence à família de distribuições simétricas, considerando um preditor linear que possui uma parte paramétrica e uma não paramétrica. Algumas distribuições que pertencem a essa classe são as distribuições: Normal, t-Student, Exponencial potência, Slash e Hiperbólica, dentre outras. Uma breve revisão dos conceitos utilizados ao longo do trabalho foram apresentados, a saber: análise residual, influência local, parâmetro de suavização, spline, spline cúbico, spline cúbico natural e algoritmo backfitting, dentre outros. Além disso, é apresentada uma breve teoria dos modelos GAMLSS (modelos aditivos generalizados para posição, escala e forma). Os modelos foram ajustados utilizando o pacote gamlss disponível no software livre R. A seleção de modelos foi baseada no critério de Akaike (AIC). Finalmente, uma aplicação é apresentada com base em um conjunto de dados reais da área financeira do Chile.
In this work we propose the symmetric generalized partial linear model, based on the generalized partial linear models and on symmetric linear models, in which the response variable follows a distribution belonging to the symmetric distribution family and the linear predictor has a parametric and a non-parametric component. Some distributions that belong to this class are the Normal, Student-t, Power Exponential, Slash and Hyperbolic, among others. A brief review of the concepts used throughout the work is presented, namely: residual analysis, local influence, smoothing parameter, spline, cubic spline, natural cubic spline and the backfitting algorithm, among others. In addition, a brief theory of GAMLSS models (generalized additive models for location, scale and shape) is presented. The models were fitted using the gamlss package available in the free R software. Model selection was based on the Akaike information criterion (AIC). Finally, an application is presented based on a set of real data from Chile's financial area.
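The abstract mentions the backfitting algorithm used to estimate a predictor with a parametric and a non-parametric part. The sketch below is a minimal Gaussian special case of that idea (ordinary least squares alternated with a cubic smoothing spline); the data, effect sizes and smoothing level are illustrative assumptions, and the actual thesis works with the full GAMLSS machinery in R.

```python
# Minimal backfitting sketch for a partially linear model: y = X @ beta + f(t) + eps.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(5)
n = 400
t = np.sort(rng.uniform(0, 1, n))
X = rng.normal(size=(n, 2))
beta_true = np.array([1.5, -2.0])
y = X @ beta_true + np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=n)

beta = np.zeros(2)
f_hat = np.zeros(n)
for _ in range(25):                                          # backfitting iterations
    beta, *_ = np.linalg.lstsq(X, y - f_hat, rcond=None)     # parametric step (OLS)
    spline = UnivariateSpline(t, y - X @ beta, k=3, s=n * 0.3**2)  # nonparametric step
    f_hat = spline(t)
    f_hat -= f_hat.mean()                                    # identifiability constraint

print("estimated beta:", beta)   # should be close to [1.5, -2.0]
```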
17

Le, Roux Daniel Y. « A semi-Lagrangian finite element barotropic ocean model ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ44492.pdf.

18

White, P. D. « Semi-synthetic model studies related to cytochrome c ». Thesis, University of Exeter, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.378247.

19

Malsiner-Walli, Gertraud, Paul Hofmarcher et Bettina Grün. « Semi-parametric Regression under Model Uncertainty : Economic Applications ». Wiley, 2019. http://dx.doi.org/10.1111/obes.12294.

Résumé :
Economic theory does not always specify the functional relationship between dependent and explanatory variables, or even isolate a particular set of covariates. This means that model uncertainty is pervasive in empirical economics. In this paper, we indicate how Bayesian semi-parametric regression methods in combination with stochastic search variable selection can be used to address two model uncertainties simultaneously: (i) the uncertainty with respect to the variables which should be included in the model and (ii) the uncertainty with respect to the functional form of their effects. The presented approach enables the simultaneous identification of robust linear and nonlinear effects. The additional insights gained are illustrated on applications in empirical economics, namely willingness to pay for housing, and cross-country growth regression.
20

NUNES, THIAGO RIBEIRO. « A MODEL FOR EXPLORATION OF SEMI-STRUCTURED DATASETS ». PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2017. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=32904@1.

Résumé :
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
PROGRAMA DE EXCELENCIA ACADEMICA
Tarefas de exploração de informação são reconhecidas por possuir características tais como alta complexidade, falta de conhecimento do usuário sobre o domínio da tarefa e incertezas sobre as estratégias de solução. O estado-da-arte em exploração de dados inclui uma variedade de modelos e ferramentas baseadas em diferentes paradigmas de interação, como por exemplo, busca por palavras-chave, busca facetada e orientação-a-conjuntos. Não obstante os muitos avanços das últimas décadas, a falta de uma abordagem formal do processo de exploração, juntamente com a falta de uma adoção mais pragmática do princípio de separação-de-responsabilidades no design dessas ferramentas são a causa de muitas limitações. Dentre as limitações, essa tese aborda a falta de expressividade, caracterizada por restrições na gama de estratégias de solução possíveis, e dificuldades de análise e comparação entre as ferramentas propostas. A partir desta observação, o presente trabalho propõe um modelo formal de ações e processos de exploração, uma nova abordagem para o projeto de ferramentas de exploração e uma ferramenta que generaliza o estado-da-arte em exploração de informação. As avaliações do modelo, realizadas por meio de estudos de caso, análises e comparações o estado-da-arte, corroboram a utilidade da abordagem.
Information exploration processes are usually recognized by their inherent complexity, lack of knowledge and uncertainty, concerning both the domain and the solution strategies. Even though there has been much work on the development of computational systems supporting exploration tasks, such as faceted search and set-oriented interfaces, the lack of a formal understanding of the exploration process and the absence of a proper separation of concerns approach in the design phase is the cause of many expressivity issues and serious limitations. This work proposes a novel design approach of exploration tools based on a formal framework for representing exploration actions and processes. Moreover, we present a new exploration system that generalizes the majority of the state-of-the art exploration tools. The evaluation of the proposed framework is guided by case studies and comparisons with state-of-the-art tools. The results show the relevance of our approach both for the design of new exploration tools with higher expressiveness, and formal assessments and comparisons between different tools.
21

Sirbone, Fabio Renato Camargo. « Modelagem semi-empírica de compressores herméticos alternativos ». Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/18/18147/tde-18072007-111535/.

Résumé :
Neste trabalho aplica-se um método semi-empírico que utiliza uma técnica de otimização não linear para determinação das eficiências volumétrica e combinada do compressor hermético alternativo. Relações para estimar aproximadamente o fluxo de massa e a potência elétrica do compressor também são propostas. Todas estas características do compressor são calculadas através das relações físicas do modelo, empregadas nos cálculos de otimização. O método é implementado no software EES (Engineering Equation Solver) e baseia-se nos trabalhos de Jahing (1999) e Jahing et al. (2000). No presente método, cinco medições experimentais do fluxo de massa e potência elétrica são suficientes para determinar os parâmetros de ajuste do modelo. Este procedimento permite a geração de mapas de compressores satisfatórios sem a necessidade de um maior número de dados experimentais como no caso da norma ARI 540. Estes resultados obtidos com o modelo podem ser usados para o projeto de novos compressores.
In the present work, a semi-empirical method is applied that uses a non-linear optimization technique to determine the volumetric and combined efficiencies of a hermetic reciprocating compressor. Relations to approximately estimate the mass flow and the electric power of the compressor are also proposed. All these compressor characteristics are calculated through physical model relations used in the optimization calculations. The method is implemented in the EES (Engineering Equation Solver) software and is based on the works of Jahing (1999) and Jahing et al. (2000). In the method, four experimental measurements of the mass flow and electric power are enough to determine the fitting parameters of the model. This procedure allows the generation of satisfactory compressor maps without the need for a larger amount of experimental data, as required when applying the ARI 540 standard. The results obtained with the model can be used for the design of new compressors.
22

Torrent, Hudson da Silva. « Estimação não-paramétrica e semi-paramétrica de fronteiras de produção ». reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2010. http://hdl.handle.net/10183/25786.

Résumé :
Existe uma grande e crescente literatura sobre especificação e estimação de fronteiras de produção e, portanto, de eficiência de unidades produtivas. Nesta tese, o foco esta sobre modelos de fronteiras determinísticas, os quais são baseados na hipótese de que os dados observados pertencem ao conjunto tecnológico. Dentre os modelos estatísticos e estimadores para fronteiras determinísticas existentes, uma abordagem promissora e a adotada por Martins-Filho e Yao (2007). Esses autores propõem um procedimento de estimação composto por três estágios. Esse estimador e de fácil implementação, visto que envolve procedimentos não-paramétricos bem conhecidos. Além disso, o estimador possui características desejáveis vis-à-vis estimadores para fronteiras determinísticas tradicionais como DEA e FDH. Nesta tese, três artigos, que melhoram o modelo proposto por Martins-Filho e Yao (2007), sao propostos. No primeiro artigo, o procedimento de estimação desses autores e melhorado a partir de uma variação do estimador exponencial local, proposto por Ziegelmann (2002). Demonstra-se que estimador proposto a consistente e assintoticamente normal. Além disso, devido ao estimador exponencial local, estimativas potencialmente negativas para a função de variância condicional, que poderiam prejudicar a aplicabilidade do estimador proposto por Martins-Filho e Yao, são evitadas. No segundo artigo, e proposto um método original para estimação de fronteiras de produção em apenas dois estágios. E mostrado que se pode eliminar o segundo estágio proposto por Martins-Filho e Yao, assim como, eliminar o segundo estagio proposto no primeiro artigo desta tese. Em ambos os casos, a estimação do mesmo modelo de fronteira de produção requer três estágios, sendo versões diferentes para o segundo estagio. As propriedades assintóticas do estimador proposto são analisadas, mostrando-se consistência e normalidade assintótica sob hipóteses razoáveis. No terceiro artigo, a proposta uma variação semi-paramétrica do modelo estudado no segundo artigo. Reescreve-se aquele modelo de modo que se possa estimar a fronteira de produção e a eficiência de unidades produtivas no contexto de múltiplos insumos, sem incorrer no curse of dimensionality. A abordagem adotada coloca o modelo na estrutura de modelos aditivos, a partir de hipóteses sobre como os insumos se combinam no processo produtivo. Em particular, considera-se aqui os casos de insumos aditivos e insumos multiplicativos, os quais são amplamente considerados em teoria econômica e aplicações. Estudos de Monte Carlo são apresentados em todos os artigos, afim de elucidar as propriedades dos estimadores propostos em amostras finitas. Além disso, estudos com dados reais são apresentados em todos os artigos, nos quais são estimador rankings de eficiência para uma amostra de departamentos policiais dos EUA, a partir de dados sobre criminalidade daquele país.
There exists a large and growing literature on the specification and estimation of production frontiers and therefore efficiency of production units. In this thesis we focus on deterministic production frontier models, which are based on the assumption that all observed data lie in the technological set. Among the existing statistical models and estimators for deterministic frontiers, a promising approach is that of Martins-Filho and Yao (2007). They propose an estimation procedure that consists of three stages. Their estimator is fairly easy to implement as it involves standard nonparametric procedures. In addition, it has a number of desirable characteristics vis-a-vis traditional deterministic frontier estimators such as DEA and FDH. In this thesis we propose three papers that improve the model proposed in Martins-Filho and Yao (2007). In the first paper we improve their estimation procedure by adopting a variant of the local exponential smoothing proposed in Ziegelmann (2002). Our estimator is shown to be consistent and asymptotically normal. In addition, due to local exponential smoothing, potential negativity of conditional variance functions that may hinder the use of Martins-Filho and Yao's estimator is avoided. In the second paper we propose a novel method for estimating production frontiers in only two stages. There we show that we can eliminate the second stage of Martins-Filho and Yao as well as of our first paper, where estimation of the same frontier model requires three stages under different versions for the second stage. We study asymptotic properties, showing consistency and asymptotic normality of our proposed estimator under standard assumptions. In the third paper we propose a semiparametric variation of the frontier model studied in the second paper. We rewrite that model allowing for estimating the production frontier and efficiency of production units in a multiple input context without suffering the curse of dimensionality. Our approach places that model within the framework of additive models based on assumptions regarding the way inputs combine in production. In particular, we consider the cases of additive and multiplicative inputs, which are widely considered in economic theory and applications. Monte Carlo studies are performed in all papers to shed light on the finite sample properties of the proposed estimators. Furthermore, a real data study is carried out in all papers, from which we rank efficiency within a sample of USA Law Enforcement agencies using USA crime data.
23

Umsrithong, Anake. « Deterministic and Stochastic Semi-Empirical Transient Tire Models ». Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/26270.

Résumé :
The tire is one of the most important components of the vehicle. It has many functions, such as supporting the load of the vehicle, transmitting the forces which drive, brake and guide the vehicle, and acting as the secondary suspension to absorb the effect of road irregularities before transmitting the forces to the vehicle suspension. A tire is a complex reinforced rubber composite air container. The structure of the tire is very complex. It consists of several layers of synthetic polymer, many flexible filaments of high modulus cord, and glass fiber, which are bonded to a matrix of low modulus polymeric material. As the tire is the only component of the vehicle which makes contact with the road surface, almost all forces and moments acting on the vehicle must be transferred by the tire. To predict the dynamics of the vehicle, we need to know these forces and moments generated at the tire contact patch. Therefore, tire models that accurately describe this dynamic behavior are needed for vehicle dynamic simulation. Many researchers developed tire models for vehicle dynamic simulations; however, most of the development in tire modeling has been limited to deterministic steady-state on-road tire models. The research conducted in this study is concerned with the development of semi-empirical transient tire models for on-road and off-road vehicle simulations. The semi-empirical transient tire model is developed based on existed tire models, analytical tire structure mechanics analysis, and experimental data collected by various researchers. The tire models were developed for vehicle traction, handling and ride analysis. The theoretical mechanics analysis of the tire model focused on the determination of tire and terrain deformation. Then, the results are used together with empirical data to calculate the force response and the moment response. Moreover, the influence of parametric uncertainties in tire parameters on the tire-terrain interaction is investigated. The parametric uncertainties are quantified and propagated through the tire models using a polynomial chaos theory with a collocation approach. To illustrate the capabilities of the tire models developed, both deterministic and stochastic tire models are simulated for various scenarios and maneuvers. Numerically simulated results are analyzed from the perspective of vehicle dynamics. Such an analysis can be used in tire and vehicle development and design.
Ph. D.
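The abstract above mentions propagating parametric uncertainty through the tire models with polynomial chaos and a collocation approach. The sketch below illustrates that non-intrusive idea on a toy lateral-force law with one uncertain stiffness parameter; the model, coefficient of variation and polynomial order are illustrative assumptions, not the semi-empirical tire model of the dissertation.

```python
# Hedged sketch: non-intrusive polynomial chaos (Gauss-Hermite collocation) for a
# toy tire force model with one Gaussian uncertain parameter.
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

def tire_lateral_force(stiffness, slip_angle=0.05):
    """Toy saturating lateral-force law as a function of cornering stiffness."""
    return 4000.0 * np.tanh(stiffness * slip_angle / 4000.0)

# uncertain cornering stiffness: C = C0 * (1 + cov * xi), xi ~ N(0, 1)
C0, cov = 50000.0, 0.15
order, n_quad = 4, 10
xi, w = He.hermegauss(n_quad)            # Gauss-Hermite nodes/weights, weight e^(-x^2/2)
w = w / np.sqrt(2.0 * np.pi)             # normalize to the standard normal density
samples = tire_lateral_force(C0 * (1.0 + cov * xi))

# spectral projection: c_k = E[g(xi) * He_k(xi)] / k!
coeffs = np.array([np.sum(w * samples * He.hermeval(xi, [0.0] * k + [1.0])) / factorial(k)
                   for k in range(order + 1)])
mean = coeffs[0]
variance = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, order + 1))
print(f"PCE mean = {mean:.1f} N, std = {np.sqrt(variance):.1f} N")
```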
24

Santos, Franciane Mendonça dos. « Modelagem concentrada e semi-distribuída para simulação de vazão, produção de sedimentos e de contaminantes em bacias hidrográficas do interior de São Paulo ». Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/18/18139/tde-26112018-145857/.

Résumé :
A escassez de dados hidrológicos no Brasil é um problema recorrente em muitas regiões, principalmente em se tratando de dados hidrométricos, produção de sedimentos e qualidade da água. A pesquisa por modelos de bacias hidrográficas tem aumentado nas últimas décadas, porém, a estimativa de dados hidrossedimentológicos a partir de modelos mais sofisticados demanda de grande número de variáveis, que devem ser ajustadas para cada sistema natural, o que dificulta a sua aplicação. O objetivo principal desta tese foi avaliar diferentes ferramentas de modelagem utilizadas para a estimativa da vazão, produção de sedimentos e qualidade da água e, em particular, comparar os resultados obtidos de um modelo hidrológico físico semi-distribuído, o Soil Water Assessment Tool (SWAT) com os resultados obtidos a partir de modelos hidrológicos concentrados, com base na metodologia do número da curva de escoamento do Soil Conservation Service (SCS-CN) e no modelo Generalized Watershed Loading Function (GWLF). Buscou-se avaliar e apresentar em quais condições o uso de cada modelo deve ser recomendado, ou seja, quando o esforço necessário para executar o modelo semi-distribuído leva a melhores resultados efetivos. Em relação à simulação da vazão, os resultados dos dois modelos foram altamente influenciados pelos dados de precipitação, indicando que existem, possivelmente, falhas ou erros de medição que poderiam ter influenciado negativamente os resultados. Portanto, foi proposto aplicar o modelo semi-distribuído com dados de precipitação interpolados (DPI) de alta resolução para verificar a eficiência de seus resultados em comparação com os resultados obtidos com a utilização dos dados de precipitação observados (DPO). Para simulação da produção de sedimentos, e das concentrações de nitrogênio e fósforo, o SWAT realiza uma simulação hidrológica mais detalhada, portanto, fornece resultados ligeiramente melhores para parâmetros de qualidade da água. O uso do modelo semi-distribuído também foi ampliado para simular uma bacia hidrográfica sob a influência do reservatório, a fim de verificar a potencialidade do modelo para esse propósito. Os modelos também foram aplicados para identificar quais os impactos potenciais das mudanças no uso do solo previstas e em andamento. Os cenários estudados foram: I – cenário atual, II – cenário tendencial, com o aumento da mancha urbana e substituição do solo exposto e de parte da mata nativa por uso agrícola; III – cenário desejável, complementa o crescimento urbano tendencial com aumento de áreas de reflorestamento. As metodologias foram aplicadas em duas bacias hidrográficas localizadas no Sudeste do Brasil. A primeira é a bacia do rio Jacaré-Guaçu, incluída na Unidade de Gerenciamento de Recursos Hídricos 13 (UGRHI-13), a montante da confluência do rio das Cruzes, com uma área de 1934 km2. O segundo caso de estudo, é a bacia do rio Atibaia, inserida na UGRHI-5, tem uma área de 2817,88 km2 e abrange municípios dos estados de São Paulo e Minas Gerais. Como principal conclusão, o desempenho do modelo semi-distribuído para estimar a produção de sedimentos, e as concentrações de nitrogênio e fósforo foi ligeiramente melhor do que as simulações do modelo concentrado SCS-CN e GWLF, mas essa vantagem pode não compensar o esforço adicional de calibrá-lo e validá-lo.
The lack of hydrological data is a recurrent problem in many regions of Brazil, especially regarding hydrometric records, sediment yield and water quality. Research on watershed models has increased in recent decades; however, estimating hydro-sedimentological variables with the more sophisticated models demands a large number of parameters, which must be adjusted for each natural system, making their application difficult. At times it is necessary to respond quickly without great precision in the results, and in these situations simpler models with few parameters can be the solution. The objective of this research is to evaluate different modelling tools used to estimate streamflow, sediment yield and nutrient loads, and in particular to compare the results obtained from a physically based semi-distributed hydrological model (SWAT) with those from lumped hydrological models based on the Soil Conservation Service curve-number method (SCS-CN) and the Generalized Watershed Loading Function (GWLF). Both approaches use the curve number (CN) concept, determined from land use, soil hydrologic group and antecedent soil moisture conditions, and were run with a daily time step. We are particularly interested in understanding under which conditions the use of each model should be recommended, namely when the additional effort required to run the semi-distributed model leads to effectively better results. The input variables and parameters of the lumped model are assumed constant throughout the watershed, whereas the SWAT model performs the hydrological analysis at the level of small units, designated hydrological response units (HRUs), and integrates the results at the sub-basin level. Regarding the flow simulation, the results of both models were highly influenced by the rainfall data, indicating that gaps or measurement errors may have negatively affected the results. It was therefore proposed to run the semi-distributed model with high-resolution gridded daily precipitation and to compare its efficiency against runs driven by the observed rain-gauge data. For the simulation of sediment, nitrogen and phosphorus, SWAT performs a more detailed hydrological simulation and thus provides slightly better results. The use of SWAT was also extended to simulate a watershed under the influence of a reservoir, in order to verify the model's potential for that purpose. The models were also used to identify the potential impacts of ongoing land use changes. The scenarios were: I - current scenario; II - trend scenario, with the growth of urban land and the replacement of exposed soil and part of the native forest by agricultural use; III - desirable scenario, which complements the trend urban growth with the replacement of exposed soil and part of the agricultural use by reforestation. The methodologies were applied to two watersheds located in Southeast Brazil. The first is the Jacaré-Guaçu river basin, included in Water Resources Management Unit 13 (UGRHI-13), upstream of the Cruzes river confluence, with an area of 1934 km2. The second is the Atibaia river basin, part of Water Resources Management Unit 5 (UGRHI-5), with an area of 2817.88 km2, covering municipalities in the states of São Paulo and Minas Gerais.
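Both the lumped (SCS-CN/GWLF) and semi-distributed (SWAT) approaches summarized above build on the SCS curve-number relation for event runoff. As a point of reference only, and not code from the thesis, a minimal Python sketch of the standard relation (function name and example values are illustrative):

def scs_cn_runoff(p_mm: float, cn: float) -> float:
    """Direct runoff (mm) from event rainfall p_mm using the SCS curve-number method."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = 0.2 * s               # initial abstraction (standard 20% assumption)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: 50 mm of rain on a surface with CN = 75 gives roughly 9 mm of runoff
print(round(scs_cn_runoff(50.0, 75.0), 1))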
25

Das, Sourav. « Models of semi-systematic visual search ». Connect to this title online, 2007. http://etd.lib.clemson.edu/documents/1181250703/.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
26

Trommershäuser, Julia. « A semi-microscopic model of synaptic transmission and plasticity ». [S.l.] : [s.n.], 2000. http://deposit.ddb.de/cgi-bin/dokserv?idn=963474626.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
27

Bhattacharjee, Binita 1976. « Kinetic model reduction using integer and semi-infinite programming ». Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/30062.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, February 2004.
MIT Science Library copy in pages.
Also issued in pages.
Includes bibliographical references (leaves 147-155).
In this work an optimization-based approach to kinetic model reduction was studied with a view to generating reduced-model libraries for reacting-flow simulations. A linear integer formulation of the reaction elimination problem was developed in order to allow the model reduction problem to be solved cheaply and robustly to guaranteed global optimality. When compared with three other conventional reaction-elimination methods, only the integer-programming approach consistently identified the smallest reduced model satisfying user-specified accuracy criteria. The proposed reaction-elimination formulation was solved to generate model libraries for both homogeneous combustion systems and 2-D laminar flames. Good agreement was observed between the reaction trajectories predicted by the full mechanism and the reduced model library. For kinetic mechanisms having many more reactions than species, the computational speedup associated with reaction elimination was found to scale linearly with the size of the derived reduced model. Speedup factors of 4-90 were obtained for a variety of different mechanisms and reaction conditions. The integer-programming-based reduction approach was tested successfully on large-scale mechanisms comprising up to [approximately] 2500 reactions. The problem of identifying optimal (maximum) ranges of validity for point-reduced kinetic models was also investigated. A number of different formulations for the range problem were proposed, all of which were shown to be variants of a standard semi-infinite program (SIP). Conventional algorithms for nonlinear semi-infinite programs are essentially all lower-bounding methods which cannot guarantee the feasibility of an incumbent at finite termination. Thus, they cannot be used to identify rigorous ranges of validity for reduced kinetic models. In the second part of this thesis, inclusion functions were used to develop an inner approximation method which generates a convergent series of feasible upper bounds on the minimum value of a smooth, nonlinear semi-infinite program. The inclusion-constrained reformulation approach was applied successfully to a number of test problems from the SIP literature. The new upper-bounding approach was then combined with existing lower-bounding methods in a branch-and-bound framework which allows smooth nonlinear semi-infinite programs to be solved finitely to [epsilon]-optimality. The branch-and-bound algorithm was also tested on a number of small literature examples. In the final chapter of the thesis, extensions of the existing algorithm and code to solve practical engineering problems, including the range identification problem, were considered.
by Binita Bhattacharjee.
Ph.D.
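The range-of-validity problems described in the abstract are all variants of a standard semi-infinite program. For orientation, a generic statement of such a program (the notation here is chosen for illustration, not taken from the thesis) is

\min_{x \in X} f(x) \quad \text{s.t.} \quad g(x,\omega) \le 0 \;\; \forall\, \omega \in \Omega,

where the decision variable x ranges over a finite-dimensional set X while the index set \Omega is infinite, so finitely many variables must satisfy infinitely many constraints. As the abstract notes, lower-bounding methods alone cannot certify that a candidate x is feasible for all \omega, which is why the thesis develops an inclusion-function-based upper-bounding scheme.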
28

Mockelman, Jeffrey A. (Jeffrey Alan). « Semi-analytical model of ionization oscillations in Hall thrusters ». Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98810.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 107-108).
This thesis presents efforts to better understand the breathing-mode oscillation within Hall thrusters. These oscillations have been present and accepted within Hall thrusters for decades, but interest in them has recently revived, partly due to a possible connection between wall erosion and the oscillations. The first part of this thesis details a steady model of the ionization region in a Hall thruster that finds existence criteria for the steady solution under the hypothesis that the steady limits match the smooth sonic-passage limits. Operation outside these limits corresponds to unsteady behavior, which can result in either periodic oscillatory behavior or plume extinguishment. To distinguish between periodic behavior and thruster extinguishment, an unsteady model of the ionization region is developed, but this model falls short of its goal. The transient model is nevertheless useful for observing the periodic nature of an oscillating Hall thruster. Next, an anode depletion model for Hall thrusters is formulated. This model explores one of the causes of thruster extinguishment, when the plasma cannot reach the anode. Finally, a new method for performing boron nitride erosion measurements is discussed and preliminary results are presented. The method embeds lithium ions into boron nitride; the depth of the lithium can be measured before and after erosion or deposition to give a net erosion or accumulation measurement.
by Jeffrey A. Mockelman.
S.M.
29

Hong, Jiazheng. « A Semi-Analytical Load Distribution Model of Spline Joints ». The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1426110670.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
30

Clertant, Matthieu. « Semi-parametric bayesian model, applications in dose finding studies ». Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066230/document.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
Les Phases I sont un domaine des essais cliniques dans lequel les statisticiens ont encore beaucoup à apporter. Depuis trente ans, ce secteur bénéficie d'un intérêt croissant et de nombreuses méthodes ont été proposées pour gérer l'allocation séquentielle des doses aux patients intégrés à l'étude. Durant cette Phase, il s'agit d'évaluer la toxicité, et s'adressant à des patients gravement atteints, il s'agit de maximiser les effets curatifs du traitement dont les retours toxiques sont une conséquence. Parmi une gamme de doses, on cherche à déterminer celle dont la probabilité de toxicité est la plus proche d'un seuil souhaité et fixé par les praticiens cliniques. Cette dose est appelée la MTD (maximum tolerated dose). La situation canonique dans laquelle sont introduites la plupart des méthodes consiste en une gamme de doses finie et ordonnée par probabilité de toxicité croissante. Dans cette thèse, on introduit une modélisation très générale du problème, la SPM (semi-parametric methods), qui recouvre une large classe de méthodes. Cela permet d'aborder des questions transversales aux Phases I. Quels sont les différents comportements asymptotiques souhaitables? La MTD peut-elle être localisée? Comment et dans quelles circonstances? Différentes paramétrisations de la SPM sont proposées et testées par simulations. Les performances obtenues sont comparables, voir supérieures à celles des méthodes les plus éprouvées. Les résultats théoriques sont étendus au cas spécifique de l'ordre partiel. La modélisation de la SPM repose sur un traitement hiérarchique inférentiel de modèles satisfaisant des contraintes linéaires de paramètres inconnus. Les aspects théoriques de cette structure sont décrits dans le cas de lois à supports discrets. Dans cette circonstance, de vastes ensembles de lois peuvent aisément être considérés, cela permettant d'éviter les cas de mauvaises spécifications
Phase I clinical trials are an area in which statisticians have much to contribute. For over 30 years, this field has benefited from increasing interest on the part of statisticians and clinicians alike, and several methods have been proposed to manage the sequential inclusion of patients in a study. The main purpose is to evaluate the occurrence of dose-limiting toxicities for a selected group of patients with, typically, life-threatening disease. The goal is to maximize the potential for therapeutic success in a situation where toxic side effects are inevitable and increase with increasing dose. From a range of given doses, we aim to determine the dose with a rate of toxicity as close as possible to some threshold chosen by the investigators. This dose is called the MTD (maximum tolerated dose). The standard situation is one in which we have a finite range of doses ordered with respect to the probability of toxicity at each dose. In this thesis we introduce a very general approach to modeling the problem, SPM (semi-parametric methods), which covers a large class of methods. The viewpoint of SPM allows us to see things in, arguably, more relevant terms and to provide answers to questions concerning asymptotic behavior. What kind of behavior should we be aiming for? For instance, can we consistently estimate the MTD? How, and under which conditions? Different parametrizations of SPM are considered and studied theoretically and via simulations. The obtained performances are comparable to, and often better than, those of currently established methods. We extend the findings to the case of partial ordering, in which more than one drug is under study and we do not necessarily know how all drug pairs are ordered. The SPM model structure leans on a hierarchical set-up whereby certain parameters are linearly constrained. The theoretical aspects of this structure are outlined for the case of distributions with discrete support. In this setting the great majority of laws can be easily considered, and this enables us to avoid overly restrictive specifications that can result in poor behavior.
31

Nobrega, Karliane Fernandes. « A interpretação semântica dos auxiliares modais poder, precisar e dever : uma abordagem da semântica cognitiva ». Universidade Federal do Rio Grande do Norte, 2007. http://repositorio.ufrn.br:8080/jspui/handle/123456789/16359.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
Apresentamos, neste trabalho, com base na semântica cognitiva, uma análise do significado, em contexto, dos auxiliares modais poder, precisar e dever. Analisamos 120 textos produzidos por candidatos ao vestibular e por alunos do ensino fundamental, como resposta da questão número três da prova discursiva de Língua Portuguesa do vestibular 2005 da UFRN, que pede aos candidatos para explicitar a diferença de sentido entre três frases, observando o uso desses três verbos. Consideramos que um item lexical não é incorporado a uma representação linguística semântica fixa, limitada e única, mas antes, é ligado a uma representação linguística semântica flexível e aberta que provê acesso a muitas concepções e sistemas conceituais dependente de cada contexto determinado. Com base em seu significado, um item lexical evoca um grupo de domínios cognitivos, que por sua vez, apresentam um determinado conteúdo conceitual. Isto implica em afirmar que a rede de significados lexicais vai variar conforme o conhecimento de mundo de cada um (LANGACKER, 2000). A relevância deste trabalho é proporcionar uma contribuição para a descrição semântica do português.
We present in this work, based on cognitive semantics, an analysis of the meaning in context of the modal auxiliaries poder, precisar and dever ('can/may', 'need' and 'must'). We analysed 120 texts produced by applicants to the university entrance examination and by primary school students as answers to question number three of the Portuguese Language discursive test of the 2005 entrance examination at UFRN, which asked candidates to make explicit the difference in meaning between three sentences, observing the use of those three verbs. We consider that a lexical item is not tied to a fixed, limited and unique semantic representation but is instead linked to an open and flexible semantic representation that provides access to many conceptions and conceptual systems, depending on each particular context. Based on its meaning, a lexical item evokes a group of cognitive domains, each presenting a determined conceptual content. This makes it possible to affirm that the network of lexical meanings varies according to the world knowledge each speaker has (LANGACKER, 2000). The relevance of this work is to provide a contribution to the semantic description of Portuguese.
32

Huss, Anders. « Hybrid Model Approach to Appliance Load Disaggregation : Expressive appliance modelling by combining convolutional neural networks and hidden semi Markov models ». Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-179200.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
The increasing energy consumption is one of the greatest environmental challenges of our time. Residential buildings account for a considerable part of the total electricity consumption and are, furthermore, a sector shown to have large savings potential. Non-Intrusive Load Monitoring (NILM), i.e. the deduction of the electricity consumption of individual home appliances from the total electricity consumption of a household, is a compelling approach to deliver appliance-specific consumption feedback to consumers. This enables informed choices and can promote sustainable and cost-saving actions. To achieve this, accurate and reliable appliance load disaggregation algorithms must be developed. This Master's thesis proposes a novel approach to the disaggregation problem, inspired by state-of-the-art algorithms in the field of speech recognition. Previous approaches, in the 1 Hz sampling-frequency range, have primarily focused on different types of hidden Markov models (HMMs) and occasionally on the use of artificial neural networks (ANNs). HMMs are a natural representation of electric appliances; however, with a purely generative approach to disaggregation, essentially all appliances have to be modelled simultaneously. Due to the large number of possible appliances and the variation between households, this is a major challenge: it imposes strong restrictions on the complexity, and thus the expressiveness, of each appliance model in order to keep inference algorithms feasible. In this thesis, disaggregation is instead treated as a factorisation problem in which the signal of a single appliance has to be extracted from its background. A hybrid model is proposed, in which a convolutional neural network (CNN) extracts features that correlate with the state of a single appliance, and these features are used as observations for a hidden semi-Markov model (HSMM) of the appliance. Since this allows a single appliance to be modelled in isolation, it becomes computationally feasible to use a more expressive Markov model. As proof of concept, the hybrid model is evaluated on 238 days of 1 Hz power data, collected from six households, to predict the power usage of the households' washing machines. The hybrid model is shown to perform considerably better than a CNN alone, and it is further demonstrated that a significant increase in performance is achieved by including transitional features in the HSMM.
Den ökande energikonsumtionen är en stor utmaning för en hållbar utveckling. Bostäder står för en stor del av vår totala elförbrukning och är en sektor där det påvisats stor potential för besparingar. Non Intrusive Load Monitoring (NILM), dvs. härledning av hushållsapparaters individuella elförbrukning utifrån ett hushålls totala elförbrukning, är en tilltalande metod för att fortlöpande ge detaljerad information om elförbrukningen till hushåll. Detta utgör ett underlag för medvetna beslut och kan bidraga med incitament för hushåll att minska sin miljöpåverakan och sina elkostnader. För att åstadkomma detta måste precisa och tillförlitliga algoritmer för el-disaggregering utvecklas. Denna masteruppsats föreslår ett nytt angreppssätt till el-disaggregeringsproblemet, inspirerat av ledande metoder inom taligenkänning. Tidigare angreppsätt inom NILM (i frekvensområdet 1 Hz) har huvudsakligen fokuserat på olika typer av Markovmodeller (HMM) och enstaka förekomster av artificiella neurala nätverk. En HMM är en naturlig representation av en elapparat, men med uteslutande generativ modellering måste alla apparater modelleras samtidigt. Det stora antalet möjliga apparater och den stora variationen i sammansättningen av dessa mellan olika hushåll utgör en stor utmaning för sådana metoder. Det medför en stark begränsning av komplexiteten och detaljnivån i modellen av respektive apparat, för att de algoritmer som används vid prediktion ska vara beräkningsmässigt möjliga. I denna uppsats behandlas el-disaggregering som ett faktoriseringsproblem, där respektive apparat ska separeras från bakgrunden av andra apparater. För att göra detta föreslås en hybridmodell där ett neuralt nätverk extraherar information som korrelerar med sannolikheten för att den avsedda apparaten är i olika tillstånd. Denna information används som obervationssekvens för en semi-Markovmodell (HSMM). Då detta utförs för en enskild apparat blir det beräkningsmässigt möjligt att använda en mer detaljerad modell av apparaten. Den föreslagna Hybridmodellen utvärderas för uppgiften att avgöra när tvättmaskinen används för totalt 238 dagar av elförbrukningsmätningar från sex olika hushåll. Hybridmodellen presterar betydligt bättre än enbart ett neuralt nätverk, vidare påvisas att prestandan förbättras ytterligare genom att introducera tillstånds-övergång-observationer i HSMM:en.
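As an illustration of the CNN front end of such a hybrid model (not the architecture used in the thesis; the layer sizes, window length and number of appliance states are arbitrary choices for this sketch), a minimal PyTorch module mapping a window of aggregate power readings to per-state scores could look like:

import torch
import torch.nn as nn

class ApplianceFeatureCNN(nn.Module):
    """Maps a window of aggregate 1 Hz power readings to per-state scores."""
    def __init__(self, n_states: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_states),
        )

    def forward(self, x):              # x: (batch, 1, window_length)
        return self.net(x)

scores = ApplianceFeatureCNN()(torch.randn(8, 1, 512))
print(scores.shape)                    # torch.Size([8, 3])

In the hybrid scheme described above, scores of this kind would then serve as the observation sequence of the appliance's hidden semi-Markov model.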
33

Odstrčil, Aleš. « Semi-aktivní tlumicí jednotka vozidla ». Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-417417.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
This diploma thesis deals with the issue of semi-active damping and its use in trucks. The first part reviews existing damping systems, especially for trucks, and then compares different ways of controlling the damping. Based on this analysis, two versions of a conversion of the series-production damper to a semi-active one were designed. At the end of the work, the two versions are compared.
34

Kosuri, Durga Renuka. « Direct Biocontrol of a Simulated Anthropomorphic Computer Finger Model Using SEMG ». University of Akron / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=akron1147468452.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
35

Kyncl, Jaroslav. « Semi-lexical heads in Czech modal structures ». Thesis, University of Wolverhampton, 2008. http://hdl.handle.net/2436/33738.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
This thesis argues for a semi-lexical interpretation of Czech modal verbs. It demonstrates that Czech modals participate in syntactic structures that contain a finite verb followed by multiple infinitives (verb clusters), such as Jan musel chtít začít studovat lingvistiku ‘John had to want to begin studying linguistics.’ The term Complex Verbal Domain (CVD) is devised for the verbal part of these structures. The analysis seeks to offer a unified account of modal verbs in Czech in respect of their subcategorization frame in the Lexicon and semantic properties (‘modal meaning’). It also attempts to clarify the confusion regarding modal verbs and modality in traditional Czech grammars by shifting the attention from pragmatics to an approach based on recent development of generative syntax (Chomsky 1998, 2000, 2001). Following the examination of syntactic behaviour of Czech modals in the CVD structure, the thesis proceeds to modify Emonds’ (1985, 2000) theory of semilexicality. This approach assumes that Czech modals are neither fully functional (due to properties such as rich morphological paradigm, ability to undergo Negation, Reflexivization and PF movement), nor fully lexical (they are unable to take clausal complements and distinguish between aspectual pairs). The semi-lexical analysis also shows that there is evidence for the existence of two types of Czech modals, True modal verbs (TMVs) and Optional modal verbs (OMVs). Whilst the former cannot nominalize or denote events, but are able to convey epistemic meaning, the latter undergo nominalization and are capable of event denotation, but do not attain epistemic reading. The semi-lexical properties of both TMVs and OMVs are syntactically reflected in their specific subcategorization frame X, +MODAL, +mod, +__ [V, INF]. The cognitive syntactic feature +MODAL cospecifies the syntactic derivation of Czech modal verbs in the ‘light’ vº, which takes an infinitival VP as a complement. Therefore, I argue that the CVD is syntactically vP. If the original CVD structure involves multiple infinitives (Jan vPmusí VPchtít(INF) začít(INF) číst(INF) tu knihu ‘John has to want to begin reading that book’), the VP complement has characteristics of a flat structure, adapted from Emonds (1999a, 1999b, 2001). On the other hand, +mod is a semantic feature that specifies the lexical behaviour of Czech modals and conveys the ‘modal meaning’, which is formalized in terms of possible worlds semantics as quantification over the modal base. The semi-lexical analysis also investigates the root v. epistemic dichotomy. The thesis argues that this dichotomy does not affect the unified theory of modality in Czech in terms of its derivational and semantic status, but is a result of covert processes at the level of Logical Form (LF), which realize different levels of modal quantification.
36

Chen, Chunxia. « Semi-parametric estimation in Tobit regression models ». Kansas State University, 2013. http://hdl.handle.net/2097/15300.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
Master of Science
Department of Statistics
Weixing Song
In the classical Tobit regression model, the regression error term is often assumed to have a zero-mean normal distribution with unknown variance, and the regression function is assumed to be linear. If the normality assumption is violated, the commonly used maximum likelihood estimate becomes inconsistent. Moreover, the likelihood function becomes very complicated if the regression function is nonlinear, even when the error density is normal, which makes the maximum likelihood estimation procedure hard to implement. In the fully nonparametric setup, when both the regression function and the distribution of the error term [epsilon] are unknown, some nonparametric estimators for the regression function have been proposed. Although the assumption of knowing the error distribution is strict, it is widely adopted in the Tobit regression literature and is also supported by many empirical studies in econometric research; in fact, a majority of the relevant research assumes that [epsilon] has a normal distribution with mean 0 and unknown standard deviation. In this report, we develop a semi-parametric estimation procedure for the regression function by assuming that the error term follows a distribution from a class of zero-mean symmetric location-scale families. A minimum distance estimation procedure for estimating the parameters of the regression function, when it has a specified parametric form, is also constructed. Compared with the existing semiparametric and nonparametric methods in the literature, our method is more efficient in that more information, in particular knowledge of the distribution of [epsilon], is used; moreover, the computation is relatively inexpensive. Given that many applications do assume that [epsilon] has a normal or another known distribution, the current work provides some more practical tools for statistical inference in the Tobit regression model.
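For readers unfamiliar with the censoring mechanism, a small simulation of the classical Tobit setup (parameter values are arbitrary; this is not code from the report) shows why a naive fit on the censored response is biased:

import numpy as np

rng = np.random.default_rng(0)
n, beta0, beta1 = 2000, 1.0, 2.0
x = rng.normal(size=n)
eps = rng.normal(scale=1.5, size=n)      # zero-mean symmetric error
y_star = beta0 + beta1 * x + eps         # latent response
y = np.maximum(y_star, 0.0)              # left-censoring at zero (Tobit)

# Naive least squares on the censored sample underestimates the true slope:
slope_naive = np.polyfit(x, y, 1)[0]
print(f"true slope {beta1}, naive OLS slope {slope_naive:.2f}")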
37

Delgado, Carlos Alberto Cardozo. « Semi-parametric generalized log-gamma regression models ». Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-15032018-185352/.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
The central objective of this work is to develop statistical tools for semi-parametric regression models with generalized log-gamma errors in the presence of censored and uncensored observations. The estimates of the parameters are obtained through the multivariate version of the Newton-Raphson algorithm and an adequate combination of the Fisher scoring and backfitting algorithms. The properties of the penalized maximum likelihood estimators are studied through analytical tools and simulations. Some diagnostic techniques, such as quantile and deviance-type residuals as well as local influence measures, are derived. The methodologies are implemented in the statistical computing environment R, and the package sglg is developed. Finally, we give some applications of the models to real data.
O objetivo central do trabalho é proporcionar ferramentas estatísticas para modelos de regressão semiparamétricos quando os erros seguem distribuição log-gamma generalizada na presença de observações censuradas ou não censuradas. A estimação paramétrica e não paramétrica são realizadas através dos procedimentos Newton-Raphson, escore de Fisher e Backfitting (Gauss-Seidel). As propriedades assintóticas dos estimadores de máxima verossimilhança penalizada são estudadas em forma analítica, bem como através de simulações. Alguns procedimentos de diagnóstico são desenvolvidos, tais como resíduos tipo componente do desvio e resíduo quantílico, bem como medidas de influência local sob alguns esquemas usuais de perturbação. Todos os procedimentos do presente trabalho são implementados no ambiente computacional R, o pacote sglg é desenvolvido, assim como algumas aplicações a dados reais são apresentadas.
38

Huamán, René Negrón. « On integrable deformations of semi-symmetric space sigma-models ». Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-06112018-011344/.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
In this thesis we review some aspects of Yang-Baxter deformations of semi-symmetric space sigma models. We start by giving a short review of the sigma-model description of superstrings and then offer a self-contained introduction to the Yang-Baxter deformation technique. We then show how to obtain an integrable deformation of the hybrid sigma model. We also show that the gravity dual of beta-deformed ABJM theory can be obtained as a Yang-Baxter deformation; this is done by selecting a convenient combination of Cartan generators in order to construct an Abelian r-matrix satisfying the classical Yang-Baxter equation.
Nesta tese revisamos alguns aspectos das deformações de Yang-Baxter de modelos sigma em espaços semi-simétricos. Damos uma breve revisão do modelo sigma de supercordas e, em seguida, oferecemos uma introdução ao método de deformação de Yang-Baxter. Em seguida, mostramos como obter uma deformação integrável do modelo sigma híbrido. Além disso, mostramos que o dual gravitacional da teoria ABJM beta-deformada pode ser obtida como uma deformação de Yang-Baxter. Isso é feito selecionando-se uma combinação conveniente de geradores de Cartan para construir uma matriz r Abeliana satisfazendo a equação clássica de Yang-Baxter.
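For reference, the classical Yang-Baxter equation satisfied by a constant r-matrix r \in \mathfrak{g} \otimes \mathfrak{g} reads

[r_{12}, r_{13}] + [r_{12}, r_{23}] + [r_{13}, r_{23}] = 0,

and for an Abelian r-matrix built from commuting Cartan generators, as in the construction summarized above, each bracket vanishes separately, so the equation is satisfied trivially.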
39

Gupta, Vivek. « Probability of SLA Violation for Semi-Markov Availability ». Ohio University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1235610777.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
40

Sabbir, Md Mainul Hasan. « Accuracy of semi-infinite diffusion theory to estimate tissue hemodynamics in layered slab models ». Miami University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=miami1627383040154061.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
41

Kat, Cor-Jacques. « Suspension forces on a tri-axle air suspended semi-trailer ». Diss., Pretoria : [s.n.], 2009. http://upetd.up.ac.za/thesis/available/etd-06242009-153546/.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
42

Johannessen, Irene. « Thrust allocation in semi-submersible rig using model predictive control ». Thesis, Norwegian University of Science and Technology, Department of Engineering Cybernetics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8843.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :

A thrust allocation system is used to determine how the desired forces, computed by a high-level control system, should be distributed among the thrusters. The main goal of thrust allocation is to produce the desired force, but other objectives can also be included. Such secondary goals can be to minimize fuel consumption, keep wear and tear of the thrusters to a minimum, and avoid overloading the power system. The thrust allocation should also take forbidden sectors and actuator rate constraints into account. It is essential to safe operation that the allocation system provides a solution, and provides it in time. In this thesis MPC (model predictive control) is suggested as a method to solve the control allocation problem for CyberRig I (a scaled model of a semi-submersible drilling unit). Three MPC algorithms are simulated in MATLAB, and the most complete is chosen for on-line implementation. The algorithm is based on an extended thrust formulation and allows for rotatable thrusters. The cost function penalizes changes in thrust magnitude and in azimuth angle. Forbidden-sector constraints and rate constraints, on both thrust magnitude and angle, are implemented. Simulations show that the MPC algorithm performs well in comparison with an existing quasi-static method; its main benefit over the quasi-static method is the ability to handle constraints, at the cost of increased computational effort.
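To make the extended thrust formulation concrete, the sketch below (a generic illustration, not the thesis's CyberRig I implementation; the thruster layout and demanded forces are made up) decomposes each rotatable thruster into surge and sway force components and solves the unconstrained allocation by least squares. The MPC formulation described above adds the forbidden-sector and rate constraints that this simple version omits.

import numpy as np

# Thruster positions (x, y) in metres, made-up layout for illustration
pos = np.array([[20.0, 10.0], [20.0, -10.0], [-20.0, 10.0], [-20.0, -10.0]])

# Extended-thrust configuration matrix: columns are (Fx_i, Fy_i) per thruster,
# rows are total surge force, sway force and yaw moment (Mz = x*Fy - y*Fx).
cols = []
for x, y in pos:
    cols.append([1.0, 0.0, -y])    # Fx contributes to surge and yaw
    cols.append([0.0, 1.0,  x])    # Fy contributes to sway and yaw
B = np.array(cols).T               # shape (3, 2 * number of thrusters)

tau = np.array([1.0e5, 2.0e4, 5.0e5])        # desired surge, sway, yaw
u, *_ = np.linalg.lstsq(B, tau, rcond=None)  # minimum-norm force components
fx, fy = u[0::2], u[1::2]
print(np.hypot(fx, fy))                      # thrust magnitude per thruster
print(np.degrees(np.arctan2(fy, fx)))        # azimuth angle per thruster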

43

Allison, Anne-Marie E. « Analytical investigation of a semi-empirical flow-induced vibration model ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0014/NQ31170.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
44

Hagi-Bishow, Mohamed. « Assessment of LEACHM-C model for semi-arid saline irrigation ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0007/MQ44178.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
45

Ford, D. C. « A semi-empirical model of the spectra of dusty galaxies ». Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.599112.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
In this thesis, a semi-empirical model for the infrared emission of dust around star-forming sites in galaxies is developed and then applied to fitting a variety of observations. A simple model of radiative transport in dust clouds is combined with a state-of-the-art model of the microscopic optical properties of interstellar dust grains. In combination with the STARBURST99 stellar spectral synthesis package, this framework is able to produce synthetic spectra for galaxies which extend from the Lyman limit through to the far-infrared. Models of radiative transport in dusty media and of the optical properties of dust grains both have the potential to be computationally time-consuming, which has restricted previous semi-empirical models to detailed consideration of only one of the two. In this thesis, a minimal set of simplifications is adopted in the treatment of radiative transport, such that the use of a state-of-the-art model of dust grain energetics remains computationally tractable. Following an initial exploration of the predictions of the model, it is applied to fitting the spectra of M82, Arp220 and NGC 6381. M82 and Arp220 are chosen for study because they are nearby starburst galaxies and test the ability of our model to fit extreme systems. In both cases, we need to remove some of the smallest grains from our model to fit their mid-infrared spectra, but achieve an excellent fit after doing so. NGC 6381 is chosen because it is the only NGC or UGC galaxy within the xFLS field. From our model fit, we infer a flat star formation history over the past (150 ± 50) Myr with a star-formation rate of (4.69 ± 0.37) M☉ yr^-1, and that these stars are surrounded by a column density of (6 ± 1) x 10^25 H m^-2 of material and an old stellar population of mass (95 ± 5) x 10^9 M☉.
46

Almeida, Ricardo Carvalho de. « Simulation of atmospheric frontogenesis with a semi-Lagrangian numerical model ». Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/28536.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
47

Hamzat, Kadri Obafemi. « A semi-mechanistic model based on oil expression from groundnuts ». Thesis, Cranfield University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333986.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
48

Zhang, Longyu. « A QoE Model to Evaluate Semi-Transparent Augmented-Reality System ». Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/38833.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
With the development of three-dimensional (3D) technologies, the demand for high-quality 3D content, 3D visualization, and flexible and natural interaction is increasing. As a result, semi-transparent Augmented-Reality (AR) systems are emerging and evolving rapidly. Since there are currently no well-recognized models to evaluate the performance of these systems, we propose a Quality-of-Experience (QoE) taxonomy for semi-transparent AR systems containing three levels of influential QoE parameters, obtained by analyzing existing QoE models in related areas and integrating the feedback received from our user study. We designed a user study to collect training and testing data for our QoE model, and built a Fuzzy-Inference-System (FIS) model to estimate the QoE evaluation and validate the proposed taxonomy. A case study was also conducted to further explore the relationships between QoE parameters and technical QoS parameters associated with functional components of the Microsoft HoloLens AR system. In this work, we describe the experiments in detail, thoroughly explain the results obtained, and present our conclusions and future work.
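As a toy illustration of how a fuzzy inference system maps a technical QoS parameter to a QoE score (a single-input Sugeno-style example with made-up membership functions and consequents, not the model trained in the thesis):

import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def qoe_estimate(latency_ms: float) -> float:
    """Toy rule base: low latency -> high QoE, on a 1-5 MOS-like scale."""
    weights = np.array([tri(latency_ms, 0.0, 30.0, 80.0),      # 'low' latency
                        tri(latency_ms, 40.0, 100.0, 160.0),   # 'medium'
                        tri(latency_ms, 120.0, 180.0, 240.0)]) # 'high'
    outputs = np.array([5.0, 3.0, 1.0])                        # rule consequents
    return float(np.dot(weights, outputs) / max(weights.sum(), 1e-9))

print(qoe_estimate(25.0), qoe_estimate(140.0))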
49

Bryson, Louise Kay. « An erosion and sediment delivery model for semi-arid catchments ». Thesis, Rhodes University, 2016. http://hdl.handle.net/10962/2892.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
Sedimentation has become a significant environmental threat in South Africa, as it intensifies water management problems in the water-scarce semi-arid regions of the country. As South Africa already allocates 98% of available water, the loss of storage capacity in reservoirs and degraded water quality have meant that a reliable water supply is compromised. The overall aim of this thesis was to develop a catchment-scale model that represents the sediment dynamics of semi-arid regions of South Africa as a simple and practically applicable tool for water resource managers. Development of a conceptual framework for the model relied on an understanding of both the sediment dynamics of South African catchments and applicable modelling techniques. Scale was an issue in both cases, as most of our understanding of the physical processes of runoff generation and sediment transport has been derived from plot-scale studies. By identifying defining properties of semi-arid catchments it was possible to consider how temporal and spatial properties at higher levels emerge from properties at lower levels. These properties were effectively represented by using the Pitman rainfall-runoff model disaggregated to a daily timescale, the Modified Universal Soil Loss Equation (MUSLE) model incorporating probability function theory, and a representation of sediment storages across a semi-distributed catchment. The model was tested on two small study catchments and one large study catchment in the Karoo, South Africa, with limited observed data. Limitations of the model were found to be the large parameter set and the dominance of structural constraints as catchment size increases. The next steps in model development will require a reduction of the parameter set and the inclusion of an in-stream component for sub-catchments at larger spatial scales. The model is applicable in areas such as South Africa, where water resource managers need a simple model at the catchment scale in order to make decisions. This type of model provides a simple representation of the stochastic nature of erosion and sediment delivery over large spatial and temporal scales.
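The sediment-yield component of models like this is commonly based on the MUSLE relation (the Williams form used, for example, in SWAT). A minimal Python sketch, with illustrative input values rather than results from the thesis:

def musle_sediment_yield(q_surf_mm, q_peak_m3s, area_ha,
                         k_usle, c_usle, p_usle, ls_usle, cfrg=1.0):
    """Event sediment yield (metric tons) from the MUSLE relation (Williams form)."""
    return (11.8 * (q_surf_mm * q_peak_m3s * area_ha) ** 0.56
            * k_usle * c_usle * p_usle * ls_usle * cfrg)

# Illustrative values only: 9.3 mm runoff, 0.8 m3/s peak flow, 120 ha unit
print(round(musle_sediment_yield(9.3, 0.8, 120.0, 0.2, 0.15, 1.0, 1.2), 1))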
50

Prenzel, Oliver. « Process model for the development of semi-autonomous service robots ». Aachen Shaker, 2009. http://d-nb.info/996919937/04.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.

Vers la bibliographie