
Dissertations on the topic "Random observations"


Explore the top 27 dissertations on the topic "Random observations" for your research.

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online abstract whenever these are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile a correctly formatted bibliography.

1

Brophy, Edmond M. „Prophet Inequalities for Multivariate Random Variables with Cost for Observations“. Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1538720/.

Abstract:
In prophet problems, two players with different levels of information make decisions to optimize their return from an underlying optimal stopping problem. The player with more information is called the "prophet" while the player with less information is known as the "gambler." In this thesis, as in the majority of the literature on such problems, we assume that the prophet is omniscient and that the gambler does not know future outcomes when making his decisions. Certainly, the prophet will get a better return than the gambler. But how much better? The goal of a prophet problem is to find the least upper bound on the difference (or ratio) between the prophet's return, M, and the gambler's return, V. In this thesis, we present new prophet problems where we seek the least upper bound on M-V when there is a fixed cost per observation. Most prophet problems in the literature compare M and V when the prophet and gambler buy (or sell) one asset. The new prophet problems presented in Chapters 3 and 4 treat a scenario where the prophet and gambler optimize their return from selling two assets, when there is a fixed cost per observation. Sharp bounds for the problems on small time horizons are given; for the n-day problem, rough bounds and a description of the distributions for the random variables that maximize M-V are presented.
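The M-versus-V comparison described in the abstract is easy to explore numerically. The following is a minimal Monte Carlo sketch (not code from the thesis): it estimates the prophet-gambler gap for i.i.d. Uniform(0,1) observations with an assumed fixed cost per observation, using an illustrative fixed-threshold rule for the gambler rather than the optimal strategy.

```python
import numpy as np

def prophet_vs_gambler(n=5, cost=0.05, trials=100_000, seed=0):
    """Estimate the prophet's and gambler's expected returns for i.i.d.
    Uniform(0,1) observations with a fixed cost per observation.

    Prophet: sees all values, so his return is max_k (X_k - cost*k).
    Gambler: illustrative fixed-threshold rule -- stop at the first
    X_k >= 0.5, taking the last value if no observation qualifies.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(trials, n))
    costs = cost * np.arange(1, n + 1)        # cost of seeing k observations

    M = (X - costs).max(axis=1)               # prophet's pathwise optimum

    hit = X >= 0.5
    stop = np.where(hit.any(axis=1), hit.argmax(axis=1), n - 1)
    V = X[np.arange(trials), stop] - costs[stop]
    return M.mean(), V.mean()

M, V = prophet_vs_gambler()
print(M, V, M - V)   # the prophet's mean return exceeds the gambler's
```

Since the prophet's return dominates any stopping rule pathwise, the estimated gap is nonnegative by construction; the thesis is concerned with how large this gap can be in the worst case.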
2

Mohammed, Hussein Syed. „Random feature subspace ensemble based approaches for the analysis of data with missing features /“. Full text available online, 2006. http://www.lib.rowan.edu/find/theses.

3

Rochet, Jean. „Isolated eigenvalues of non Hermitian random matrices“. Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB030/document.

Abstract:
This thesis is about spiked models of non-Hermitian random matrices. More specifically, we consider matrices of the type A+P, where the rank of P stays bounded as the dimension goes to infinity and where the matrix A is a non-Hermitian random matrix. We first prove that if P has some eigenvalues outside the bulk, then A+P has some eigenvalues (called outliers) away from the bulk. Then, we study the fluctuations of the outliers of A around their limits and prove that they are distributed as the eigenvalues of some finite-dimensional random matrices. Such facts had already been noticed for Hermitian models. More surprising facts are that outliers can here have very different rates of convergence to their limits (depending on the Jordan canonical form of P) and that correlations can appear between outliers at a macroscopic distance from each other. The first non-Hermitian model studied comes from the Single Ring Theorem due to Guionnet, Krishnapur and Zeitouni. We then investigate spiked models for nearly Hermitian random matrices, where A is Hermitian but P is not. Lastly, we study the outliers of Gaussian elliptic random matrices. This thesis also investigates the convergence in distribution of random variables of the type Tr(f(A)M), where A is a matrix from the Single Ring Theorem, f is analytic on a neighborhood of the bulk, and the Frobenius norm of M is of order √N. As corollaries, we obtain central limit theorems for linear spectral statistics of A (for analytic test functions) and for finite-rank projections of f(A) (such as matrix entries).
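The outlier phenomenon described above can be seen numerically. As a quick sketch (under standard circular-law assumptions with assumed parameters, not code from the thesis), add a rank-one perturbation whose eigenvalue theta lies outside the unit disk to a Ginibre matrix and watch a single eigenvalue of A+P detach from the bulk near theta:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500

# Ginibre matrix: i.i.d. complex Gaussian entries, normalized so that the
# spectrum fills the unit disk (circular law) as N grows.
A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

# Rank-one perturbation whose eigenvalue theta lies outside the unit disk.
theta = 2.0
P = np.zeros((N, N), dtype=complex)
P[0, 0] = theta

eig = np.linalg.eigvals(A + P)
outlier = eig[np.argmax(np.abs(eig))]
print(outlier)   # a single eigenvalue detaches from the bulk near theta = 2
```

All other eigenvalues stay close to the unit disk; the fluctuation of the outlier around theta is exactly the kind of quantity whose distribution and rate of convergence the thesis studies.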
4

Alemdar, Meltem. „A Monte Carlo study the impact of missing data in cross-classification random effects models /“. Atlanta, Ga. : Georgia State University, 2008. http://digitalarchive.gsu.edu/eps_diss/34/.

Abstract:
Thesis (Ph. D.)--Georgia State University, 2008.
Title from title page (Digital Archive@GSU, viewed July 20, 2010). Carolyn F. Furlow, committee chair; Philo A. Hutcheson, Phillip E. Gagne, Sheryl A. Gowen, committee members. Includes bibliographical references (p. 96-100).
5

Iufereva, Olga. „Algorithmes de filtrage avec les observations distribuées par Poisson“. Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. https://theses.hal.science/tel-04720020.

Abstract:
Filtering theory concerns optimal state estimation in stochastic dynamical systems, particularly when faced with partial and noisy data. This field, closely intertwined with control theory, focuses on designing estimators that perform real-time computation while maintaining an acceptable level of accuracy as measured by the mean square error. The necessity for such estimates becomes increasingly critical with the proliferation of network-controlled systems, such as autonomous vehicles and complex industrial processes, where the observation processes are subject to randomness in transmission, giving rise to varying information patterns under which the estimation must be carried out.

This thesis addresses the important task of state estimation in continuous-time stochastic dynamical systems when the observation process is available only at discrete time instants governed by a random process. By adapting classical estimation methods, we derive equations for the optimal state estimator, explore their properties and practicality, and propose and evaluate sub-optimal alternatives, showcasing parallels to existing techniques within the classical estimation domain when applied to Poisson-distributed observation processes.

The study covers three classes of mathematical models for the continuous-time dynamical system and the discrete observation process. First, we consider Ito-stochastic differential equations with Lipschitz drift terms and a constant diffusion coefficient, whereas the lower-dimensional discrete observation process comprises a nonlinear mapping of the state and additive Gaussian noise. We propose easy-to-implement continuous-discrete suboptimal state estimators for this system class. Assuming that a Poisson counter governs the discrete times at which observations are available, we compute the estimation error covariance process. The analysis provides conditions for boundedness of the error covariance process, as well as its dependence on the mean sampling rate. Secondly, we consider dynamical systems described by continuous-time Markov chains with finite state space, where the observation process is obtained by discretizing a conventional stochastic process driven by a Wiener process. For this case, the $L_1$-convergence of the derived optimal estimator to the classical (purely continuous) optimal estimator (Wonham filter) is shown as the intensity of the Poisson process increases. Lastly, we study continuous-discrete particle filters for Ornstein-Uhlenbeck processes with discrete observations described by linear functions of the state and additive Gaussian noise. Particle filters have gained much interest for state estimation in large-scale models with noisy measurements, where computing the optimal gain is either computationally expensive or not entirely feasible due to the complexity of the dynamics. In this thesis, we propose continuous-discrete McKean–Vlasov type diffusion processes, which serve as the mean-field model for describing the particle dynamics. We study several kinds of mean-field processes depending on how the noise terms are included in mimicking the state process and the observation model. The resulting particles are coupled through empirical covariances, which are updated at discrete times with the arrival of new observations. With appropriate analysis of the first and second moments, we show that under certain conditions on the system parameters, the performance of the particle filters approaches that of the optimal filter as the number of particles grows.
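For the linear-Gaussian special case, the continuous-discrete scheme described above can be sketched in a few lines (a simplified illustration with assumed parameter values, not the estimators derived in the thesis): the filter mean and covariance are propagated continuously, and a standard Kalman update is applied whenever a Poisson-timed observation arrives.

```python
import numpy as np

def cd_kalman_poisson(a=1.0, sigma=0.5, r=0.1, lam=5.0, T=10.0, dt=1e-3, seed=2):
    """Continuous-discrete Kalman filter for an Ornstein-Uhlenbeck state
    dX = -a X dt + sigma dW, observed as Y = X + sqrt(r)*noise at the jump
    times of a Poisson counter with rate lam.  Returns the time-averaged
    squared estimation error and the time-averaged error covariance."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, m, p = 1.0, 0.0, 1.0   # true state, filter mean, filter covariance
    se, pe = 0.0, 0.0
    for _ in range(n):
        # Euler-Maruyama step for the true state.
        x += -a * x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        # Prediction step between observations (Lyapunov covariance flow).
        m += -a * m * dt
        p += (-2 * a * p + sigma**2) * dt
        # With probability lam*dt an observation arrives: Kalman update.
        if rng.random() < lam * dt:
            y = x + np.sqrt(r) * rng.standard_normal()
            k = p / (p + r)
            m += k * (y - m)
            p *= (1 - k)
        se += (x - m) ** 2 * dt
        pe += p * dt
    return se / T, pe / T

mse, avg_p = cd_kalman_poisson()
print(mse, avg_p)
```

Between arrivals the covariance relaxes toward the prior stationary variance sigma²/(2a); each Poisson-timed update pulls it back down, which is the mechanism behind the boundedness conditions and the dependence on the mean sampling rate mentioned in the abstract.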
6

Johansson, Åsa M. „Methodology for Handling Missing Data in Nonlinear Mixed Effects Modelling“. Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-224098.

Abstract:
To obtain a better understanding of the pharmacokinetic and/or pharmacodynamic characteristics of an investigated treatment, clinical data are often analysed with nonlinear mixed effects modelling. The developed models can be used to design future clinical trials or to guide individualised drug treatment. Missing data is a frequently encountered problem in analyses of clinical data, and so as not to jeopardise the predictive ability of the developed model, it is of great importance that the method chosen to handle the missing data is adequate for its purpose. The overall aim of this thesis was to develop methods for handling missing data in the context of nonlinear mixed effects models and to compare strategies for handling missing data, in order to provide guidance on efficient handling and on the consequences of inappropriate handling of missing data. In accordance with missing data theory, all missing data can be divided into three categories: missing completely at random (MCAR), missing at random (MAR) and missing not at random (MNAR). When data are MCAR, the underlying missing data mechanism does not depend on any observed or unobserved data; when data are MAR, the underlying missing data mechanism depends on observed data but not on unobserved data; when data are MNAR, the underlying missing data mechanism depends on the unobserved data itself. Strategies and methods for handling missing observation data and missing covariate data were evaluated. These evaluations showed that the most frequently used estimation algorithm in nonlinear mixed effects modelling (first-order conditional estimation) resulted in biased parameter estimates regardless of the missing data mechanism. However, expectation maximization (EM) algorithms (e.g. importance sampling) resulted in unbiased and precise parameter estimates as long as data were MCAR or MAR. When observation data are MNAR, a proper method for handling the missing data has to be applied to obtain unbiased and precise parameter estimates, regardless of the estimation algorithm. The evaluation of different methods for handling missing covariate data showed that a correctly implemented multiple imputation method and full maximum likelihood modelling methods resulted in unbiased and precise parameter estimates when covariate data were MCAR or MAR. When covariate data were MNAR, the only method resulting in unbiased and precise parameter estimates was a full maximum likelihood modelling method in which an extra parameter was estimated, correcting for the unknown missing data mechanism's dependence on the missing data. This thesis presents new insight into the dynamics of missing data in nonlinear mixed effects modelling. Strategies for handling different types of missing data have been developed and compared in order to provide guidance on efficient handling and on the consequences of inappropriate handling of missing data.
7

Bourmani, Sabrina. „Binary decision for observations with unknown distribution : an optimal and invariance-based framework“. Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0173.

Abstract:
In this thesis, we took interest in decision problems where the signals are assumed to be stochastic with unknown distributions. In the standard literature, such an assumption does not allow one to seek solutions that guarantee a certain optimality, aside from the RDT framework developed a few years ago in our laboratory. Hence, we took interest in the philosophy behind the RDT framework and followed the same guidelines concerning the unknown distribution of the signal. Beyond our optimality purposes, we also adopt an invariance-based perspective on how to solve this type of decision problem: when there are uncertainties about the signal of interest, we can try to derive solutions that are invariant to them. These are the two key notions we consider throughout our investigations. In this manuscript, we first apply the RDT framework to a distributed decision problem, to test its suitability to decision scenarios where the signal of interest is random with unknown distribution and where the observations are collected by a network of sensors instead of a single sensor. Then, we generalise the theoretical material of the RDT framework to the case where the noise is not necessarily Gaussian, while still considering the signal of interest random with unknown distribution. Finally, we adopt an asymptotic outlook to circumvent the limitations of the RDT and the developed GRDT approach. Although the considered decision scenarios concern unconditional models in the simple case of deterministic signals, this allows us to think ahead to eventual upcoming generalisations in the asymptotic scope.
8

Martinez, Garcia Alba Maria. „Study of the Resistive Switching Mechanism in Novel Ultra-thin Organic-inorganic Dielectric-based RRAM through Electrical Observations“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299358.

Abstract:
The promising role resistive random-access memory (RRAM) plays in the imminent reality of wearable electronics calls for a new, updated physical model of their operating mechanism. Their high applicability as the next-generation flexible non-volatile memory (NVM) devices has promoted the recent emergence of a novel ultra-thin (< 5 nm) organic/inorganic hybrid dielectric RRAM. A deep understanding of their resistive switching (RS) behavior is required to unlock their suitability in future electronics applications. However, the extremely reduced thicknesses bring about new challenges in terms of material characterization and sample processing, while the RS observations through electrical characterization techniques lack uniformity in the key switching parameters, thus hindering the identification of any clear trends. This work studies the RS mechanism in ultra-thin Al/Hf-hybrid/Ni RRAM devices through uniformity-improved electrical observations. First, the focus is to implement a ramped-pulse train method during the reset process to reduce the dispersion of the voltage and resistance fluctuations at different starting voltage amplitudes and pulse widths. After finding the optimal electrical programming conditions for reduced parameter dispersion, a temperature test was performed to study the contributions of the metal ions and oxygen vacancies (V2+) in the switching layer. Finally, a physical model describing the operating mechanism in flexible RRAM is proposed after close observation and study of the processed devices. The model is based on the coexistence of a hetero-metallic portion, composed of Al and Hf3Al2, and a V2+ portion, connected to form the hybrid conducting filament (CF) and turn the device on. The CF forming process emphasizes the strong presence of these vacancies partaking in RS, as the temperature-dependence results suggest that the majority of their concentration is generated during this step. Also, the different electrical potential, temperature, and concentration gradients influencing the V2+ migration during RS may explain some of the failure mechanisms in the rupture and re-forming of the filament. Additionally, the possible presence of a thin Al-oxide layer at the Al/Hf-hybrid interface may account for leaky on-states. A detailed physical model of the RS mechanism in next-generation flexible RRAMs is key to unlocking a range of emerging technologies fitted to today's needs.
The recent introduction of ultra-thin (< 5 nm) organic-inorganic hybrid dielectric RRAM as a next-generation non-volatile memory device calls for a deep understanding of resistive switching (RS) in the hybrid layer. The extremely reduced thickness, however, hinders processability for material characterization techniques. In addition, the poor uniformity of key switching parameters in RRAM still prevents any trends from being clearly defined through electrical characterization. This work uses electrical manipulation via a ramped-pulse series (RPS) method to improve the voltage and resistance fluctuations in the reset process of ultra-thin Al/Hf-hybrid/Ni devices under different voltage-amplitude, pulse-width and temperature conditions. From the RPS-optimized results, a new and detailed physical model describing the operating mechanism is proposed. The coexistence in the conducting filament (CF) of a hybrid metallic part, composed of Al and Hf3Al2, and an oxygen-vacancy part is confirmed. Our model emphasizes the vacancy contribution to RS, with the majority generated during the CF forming process and participating to different degrees in the filament rupture of RPS and non-RPS-processed devices via Joule heating, drift and Fick forces. Furthermore, switching failure events are explained based on the presence of an Al2O3 layer at the Al/Hf-hybrid interface.
9

Wabiri, Njeri. „Variable modeling of spatially distributed random interval observation“. Doctoral thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/4365.

10

Arrowood, Jon A. „Using observation uncertainty for robust speech recognition“. Diss., Available online, Georgia Institute of Technology, 2004:, 2003. http://etd.gatech.edu/theses/available/etd-04082004-180005/unrestricted/arrowood%5Fjon%5Fa%5F200312%5Fphd.pdf.

11

Taillardat, Maxime. „Méthodes Non-Paramétriques de Post-Traitement des Prévisions d'Ensemble“. Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLV072.

Abstract:
In numerical weather prediction, ensemble forecast systems have become an essential tool to quantify forecast uncertainty and to provide probabilistic forecasts. Unfortunately, these models are not perfect, and a simultaneous correction of their bias and their dispersion is needed. This thesis presents new statistical post-processing methods for ensemble forecasting. These are based on random forest algorithms, which are non-parametric. Contrary to state-of-the-art procedures, random forests can take into account non-linear features of atmospheric states. They easily allow the addition of covariables (such as other weather variables, or seasonal or geographic predictors) by a self-selection of the most useful predictors for the regression. Moreover, we make no assumptions on the distribution of the variable of interest. This new approach outperforms the existing methods for variables such as surface temperature and wind speed. For variables well known to be tricky to calibrate, such as six-hour accumulated rainfall, hybrid versions of our techniques have been created. We show that these versions (and our original methods) are better than existing ones. In particular, they provide added value for extreme precipitation. The last part of this thesis deals with the verification of ensemble forecasts for extreme events. We show several properties of the Continuous Ranked Probability Score (CRPS) for extreme values. We also define a new index combining the CRPS and extreme value theory, whose consistency is investigated on both simulations and real cases. The contributions of this work are intended to be inserted into the forecasting and verification chain at Météo-France.
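The CRPS mentioned above is the standard score for verifying ensemble forecasts. A minimal sketch of its empirical estimator for a finite ensemble (illustrative code with made-up forecasts, not from the thesis):

```python
import numpy as np

def crps_ensemble(members, y):
    """Empirical CRPS of an ensemble forecast against observation y,
    using the standard estimator  E|X - y| - 0.5 * E|X - X'|,
    with both expectations taken over the ensemble members."""
    m = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(m - y))
    term2 = 0.5 * np.mean(np.abs(m[:, None] - m[None, :]))
    return term1 - term2

# A sharp, well-centred ensemble scores better (lower) than a biased one.
rng = np.random.default_rng(4)
obs = 0.0
good = rng.normal(0.0, 1.0, 50)
biased = rng.normal(2.0, 1.0, 50)
print(crps_ensemble(good, obs), crps_ensemble(biased, obs))
```

The CRPS is nonnegative and rewards both calibration and sharpness, which is why its behaviour on extreme values, studied in the last part of the thesis, matters for verifying heavy-precipitation forecasts.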
12

Taillardat, Maxime. „Méthodes Non-Paramétriques de Post-Traitement des Prévisions d'Ensemble“. Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLV072/document.

Der volle Inhalt der Quelle
Annotation:
In numerical weather prediction, ensemble forecast systems have become an essential tool to quantify forecast uncertainty and to provide probabilistic forecasts. Unfortunately, these models are not perfect, and a simultaneous correction of their bias and their dispersion is needed. This thesis presents new statistical post-processing methods for ensemble forecasting. These are based on random forest algorithms, which are non-parametric. Contrary to state-of-the-art procedures, random forests can take into account non-linear features of atmospheric states. They easily allow the addition of covariates (such as other weather variables, or seasonal and geographic predictors) by self-selecting the most useful predictors for the regression. Moreover, we make no assumptions on the distribution of the variable of interest. This new approach outperforms the existing methods for variables such as surface temperature and wind speed. For variables well known to be tricky to calibrate, such as six-hour accumulated rainfall, hybrid versions of our techniques have been created. We show that these versions (and our original methods) are better than existing ones; in particular, they provide added value for extreme precipitation. The last part of this thesis deals with the verification of ensemble forecasts for extreme events. We have shown several properties of the Continuous Ranked Probability Score (CRPS) for extreme values. We have also defined a new index combining the CRPS and extreme value theory, whose consistency is investigated on both simulations and real cases. The contributions of this work are intended to be inserted into the forecasting and verification chain at Météo-France.
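The CRPS mentioned in this abstract has a simple empirical form for a finite ensemble, CRPS(F, y) = E|X − y| − ½·E|X − X′|. A minimal sketch (the ensembles and the observation below are invented for illustration):

```python
import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS: E|X - y| - 0.5 * E|X - X'| over ensemble members."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

rng = np.random.default_rng(0)
obs = 2.0
good = rng.normal(2.0, 1.0, 50)  # ensemble centred on the observation
bad = rng.normal(5.0, 1.0, 50)   # biased ensemble
assert crps_ensemble(good, obs) < crps_ensemble(bad, obs)
```

Lower CRPS is better; a degenerate ensemble whose members all equal the observation scores exactly zero.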
APA, Harvard, Vancouver, ISO and other citation styles
13

Allahyani, Seham. „Contributions to filtering under randomly delayed observations and additive-multiplicative noise“. Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/16297.

Full text of the source
Annotation:
This thesis deals with the estimation of unobserved variables or states from a time series of noisy observations. Approximate minimum variance filters are investigated for a class of discrete time systems with both additive and multiplicative noise, where the measurement might be delayed randomly by one or more sample times. Delays of up to N sample times are modelled using N Bernoulli random variables taking values 0 or 1. We seek to minimize variance over a class of filters which are linear in the current measurement (although potentially nonlinear in past measurements) and present a closed-form solution. An interpretation of the multiplicative noise in both transition and measurement equations in terms of filtering under additive noise and stochastic perturbations in the parameters of the state space system is also provided. The filtering algorithm extends to the case where the system has continuous time state dynamics and discrete time measurements. The Euler scheme is used to transform the process into a discrete time state space system in which the state dynamics have a smaller sampling time than the measurements. The number of sample times by which the observation is delayed is considered to be uncertain, and a fraction of the measurement sample time. The same problem is considered for nonlinear state space models of discrete time systems, where the measurement might be delayed randomly by one sample time. The linearisation error is modelled as an additional source of noise which is multiplicative in nature. The algorithms developed are demonstrated throughout with simulated examples.
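The randomly delayed measurement model described here (a one-step delay driven by a Bernoulli variable) can be simulated directly. A minimal sketch with an assumed scalar random-walk state and a hypothetical delay probability:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p_delay = 10_000, 0.3                  # hypothetical delay probability
x = np.cumsum(rng.normal(size=n))         # scalar random-walk state
v = 0.1 * rng.normal(size=n)              # additive measurement noise
gamma = rng.binomial(1, p_delay, size=n)  # 1 -> observation delayed one step
gamma[0] = 0                              # the first observation cannot be delayed
idx = np.arange(n) - gamma                # index of the state actually measured
z = x[idx] + v                            # randomly delayed measurements

# Roughly a fraction p_delay of the measurements report the previous state
assert abs(gamma.mean() - p_delay) < 0.02
```

A filter built for this model must account for the fact that each z[k] may carry information about x[k] or x[k-1], weighted by the delay probability.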
APA, Harvard, Vancouver, ISO and other citation styles
14

Liv, Per. „Efficient strategies for collecting posture data using observation and direct measurement“. Doctoral thesis, Umeå universitet, Yrkes- och miljömedicin, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-59132.

Full text of the source
Annotation:
Relationships between occupational physical exposures and risks of contracting musculoskeletal disorders are still not well understood; exposure-response relationships are scarce in the musculoskeletal epidemiology literature, and many epidemiological studies, including intervention studies, fail to reach conclusive results. Insufficient exposure assessment has been pointed out as a possible explanation for this deficiency. One important aspect of assessing exposure is the selected measurement strategy; this includes issues related to the number of data required to give sufficient information, and to the allocation of measurement efforts, both over time and between subjects, in order to achieve precise and accurate exposure estimates. These issues have been discussed mainly in the occupational hygiene literature on chemical exposures, while the corresponding literature on biomechanical exposure is sparse. The overall aim of the present thesis was to increase knowledge on the relationship between data collection design and the resulting precision and accuracy of biomechanical exposure assessments, represented in this thesis by upper arm postures during work, data which have been shown to be relevant to disorder risk. Four papers are included in the thesis. In papers I and II, non-parametric bootstrapping was used to investigate the statistical efficiency of different strategies for distributing upper arm elevation measurements between and within working days into different numbers of measurement periods of differing durations. Paper I compared the different measurement strategies with respect to the eventual precision of the estimated mean exposure level. The results showed that it was more efficient to use a higher number of shorter measurement periods spread across a working day than to use a smaller number of longer uninterrupted measurement periods, in particular if the total sample covered only a small part of the working day.
Paper II evaluated sampling strategies for the purpose of determining posture variance components with respect to the accuracy and precision of the eventual variance component estimators. The paper showed that variance component estimators may be both biased and imprecise when based on sampling from small parts of working days, and that errors were larger with continuous sampling periods. The results suggest that larger posture samples than are conventionally used in ergonomics research and practice may be needed to achieve trustworthy estimates of variance components. Papers III and IV focused on method development. Paper III examined procedures for estimating statistical power when testing for a group difference in postures assessed by observation. Power determination was based either on a traditional analytical power analysis or on parametric bootstrapping, both of which accounted for methodological variance introduced by the observers to the exposure data. The study showed that repeated observations of the same video recordings may be an efficient way of increasing the power in an observation-based study, and that observations can be distributed between several observers without loss in power, provided that all observers contribute data to both of the compared groups, and that the statistical analysis model acknowledges observer variability. Paper IV discussed calibration of an inferior exposure assessment method against a superior “golden standard” method, with a particular emphasis on calibration of observed posture data against postures determined by inclinometry. The paper developed equations for bias correction of results obtained using the inferior instrument through calibration, as well as for determining the additional uncertainty of the eventual exposure value introduced through calibration. 
In conclusion, the results of the present thesis emphasize the importance of carefully selecting a measurement strategy on the basis of statistically well informed decisions. It is common in the literature that postural exposure is assessed from one continuous measurement collected over only a small part of a working day. In paper I, this was shown to be highly inefficient compared to spreading out the corresponding sample time across the entire working day, and the inefficiency was also obvious when assessing variance components, as shown in paper II. The thesis also shows how a well thought-out strategy for observation-based exposure assessment can reduce the effects of measurement error, both for random methodological variance (paper III) and systematic observation errors (bias) (paper IV).
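The paper I finding, that spreading a fixed total sample time across the day beats one contiguous block, can be illustrated with a toy resampling simulation; the exposure signal, period lengths, and counts below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical full-shift exposure signal with a slow within-day drift
# (one sample per minute over an 8-hour working day)
day = np.cumsum(rng.normal(size=480)) * 0.5 + 30.0

def sample_mean(signal, n_periods, period_len, rng):
    """Estimate the day mean from n_periods periods of period_len samples."""
    starts = rng.integers(0, len(signal) - period_len, size=n_periods)
    return np.mean([signal[s:s + period_len].mean() for s in starts])

# Same total sample time (120 min) split two ways, repeated many times
many_short = [sample_mean(day, 8, 15, rng) for _ in range(2000)]
few_long = [sample_mean(day, 1, 120, rng) for _ in range(2000)]

# Spreading the sample across the day gives a more precise estimator
assert np.std(many_short) < np.std(few_long)
```

The effect comes from within-day autocorrelation: one contiguous block inherits whatever level the signal happens to be at, while spread-out periods average over the day.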
APA, Harvard, Vancouver, ISO and other citation styles
15

Theopold, Adrian Pit [Verfasser], Mathias [Akademischer Betreuer] Vetter und Viktor [Gutachter] Todorov. „Estimation of the Jump Activity Index in the Presence of Random Observation Times / Adrian Pit Theopold ; Gutachter: Viktor Todorov ; Betreuer: Mathias Vetter“. Kiel : Universitätsbibliothek Kiel, 2019. http://d-nb.info/1225741386/34.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
16

Makarova, Natalya. „A simulation study of the robustness of prediction intervals for an independent observation obtained from a random sample from an assumed location-scale family of distributions“. Kansas State University, 2012. http://hdl.handle.net/2097/14749.

Full text of the source
Annotation:
Master of Science
Department of Statistics
Paul I. Nelson
Suppose that, based on data consisting of independent repetitions of an experiment, a researcher wants to predict the outcome of the next independent replication of the experiment. The researcher models the data as realizations of independent, identically distributed random variables {X_i, i = 1, 2, ..., n} having density f(·), and the next outcome as the value of an independent random variable Y, also having density f(·). We assume that the density f(·) lies in one of three location-scale families: standard normal (symmetric); Cauchy (symmetric, heavy-tailed); extreme value (asymmetric). The researcher does not know the values of the location and scale parameters. For f(·) = f_0(·) lying in one of these families, an exact prediction interval for Y can be constructed using equivariant estimators of the location and scale parameters to form a pivotal quantity based on {X_i, i = 1, 2, ..., n} and Y. This report investigates, via a simulation study, the performance of these prediction intervals in terms of coverage rate and length when the assumption that f(·) = f_0(·) is correct and when it is not. The simulation results indicate that prediction intervals based on the assumption of normality perform quite well with normal and extreme value data, and reasonably well with Cauchy data when the sample sizes are large. The heavy-tailed Cauchy assumption only leads to prediction intervals that perform well with Cauchy data and is not robust when the data are normal or extreme value. Similarly, the asymmetric extreme value model leads to prediction intervals that only perform well with extreme value data. Overall, this study indicates robustness with respect to a mismatch between the assumed and actual distributions in some cases and a lack of robustness in others.
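For the normal family, the pivotal-quantity construction described above reduces to the familiar t-based prediction interval X̄ ± t_{n−1, 1−α/2} · S · √(1 + 1/n). A minimal sketch checking its coverage by simulation (sample size and distribution parameters are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 20, 4000
tcrit = 2.093                          # t quantile, df = 19, two-sided 95%

hits = 0
for _ in range(reps):
    x = rng.normal(10.0, 2.0, size=n)  # observed sample
    y = rng.normal(10.0, 2.0)          # the future observation to predict
    half = tcrit * x.std(ddof=1) * np.sqrt(1 + 1 / n)
    hits += x.mean() - half <= y <= x.mean() + half

coverage = hits / reps
assert abs(coverage - 0.95) < 0.02     # close to the nominal 95% level
```

Repeating this with Cauchy or extreme value data in place of the normal draws reproduces the kind of robustness comparison the report performs.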
APA, Harvard, Vancouver, ISO and other citation styles
17

Dworak, Jennifer Lynn. „Modeling defective part level due to static and dynamic defects based upon site observation and excitation balance“. Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/323.

Full text of the source
Annotation:
Manufacture testing of digital integrated circuits is essential for high quality. However, exhaustive testing is impractical, and only a small subset of all possible test patterns (or test pattern pairs) may be applied. Thus, it is crucial to choose a subset that detects a high percentage of the defective parts and produces a low defective part level. Historically, test pattern generation has often been seen as a deterministic endeavor. Test sets are generated to deterministically ensure that a large percentage of the targeted faults are detected. However, many real defects do not behave like these faults, and a test set that detects them all may still miss many defects. Unfortunately, modeling all possible defects as faults is impractical. Thus, it is important to fortuitously detect unmodeled defects using high quality test sets. To maximize fortuitous detection, we do not assume a high correlation between faults and actual defects. Instead, we look at the common requirements for all defect detection. We deterministically maximize the observations of the least-observed sites while randomly exciting the defects that may be present. The resulting decrease in defective part level is estimated using the MPGD model. This dissertation describes the MPGD defective part level model and shows how it can be used to predict defective part levels resulting from static defect detection. Unlike many other predictors, its predictions are a function of site observations, not fault coverage, and thus it is generally more accurate at high fault coverages. Furthermore, its components model the physical realities of site observation and defect excitation, and thus it can be used to give insight into better test generation strategies. Next, we investigate the effect of additional constraints on the fortuitous detection of defects; specifically, we focus on detecting dynamic defects instead of static ones.
We show that the quality of the randomness of excitation becomes increasingly important as defect complexity increases. We introduce a new metric, called excitation balance, to estimate the quality of the excitation, and we show how excitation balance relates to the constant τ in the MPGD model.
APA, Harvard, Vancouver, ISO and other citation styles
18

Mullett, Margaret. „Conducting a randomised experiment in eight English prisons : a participant observation study of testing the Sycamore Tree Programme“. Thesis, University of Cambridge, 2016. https://www.repository.cam.ac.uk/handle/1810/275047.

Full text of the source
Annotation:
This dissertation is a participant observer’s account of implementing a multisite, randomised controlled trial within Her Majesty’s Prison Service. It adds to a scarce literature detailing the steps involved in implementing experiments in custodial settings by providing a candid account of the route from planning to successful implementation. The randomised controlled trial was designed to evaluate the effectiveness of the Sycamore Tree Programme. This programme’s goal is to teach prisoners the wider harm of crime, and it includes a face-to-face meeting between a victim of crime and the participating offenders. It derives its rehabilitative potential from restorative justice and seeks to foster hope that change is possible for offenders, thus aiding them to desist from crime. Its development and theoretical basis are described for the first time. In an in-depth narrative, the dissertation details how at every stage strategies were developed to manage participant procurement, random assignment, maintaining treatment integrity, and preparing for final outcome measurements. The randomised controlled trial was designed to produce an individual experiment in each of eight prisons. These will be combined in a meta-analysis as well as analysed as a pooled sample. Overall, the implementation process took close to two years and involved a charitable body, Her Majesty’s Prison Service, the National Offender Management Service, and two police forces. This work has demonstrated how the unstable nature of English prison populations and the risk-averse climate must be addressed when conducting experiments in that environment. It has also illustrated the gap between the rhetoric of evidence-based policy and the facilitation of research designed to seek that evidence. Nevertheless, developing trusting relationships and combining rapidly learnt skills with inherent abilities ensured that the evaluation methodology was supported and protected through the various challenges it met.
Finally, the dissertation suggests conditions for closer collaboration between government executive bodies and researchers that might increase the number of experiments undertaken in prisons. It also aims to encourage researchers that prison experiments, although not easy, are feasible, defendable, and, above all, worthwhile.
APA, Harvard, Vancouver, ISO and other citation styles
19

Kennedy, Brian Michael Kennedy. „Leveraging Multimodal Tumor mRNA Expression Data from Colon Cancer: Prospective Observational Studies for Hypothesis Generating and Predictive Modeling“. The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1498742562364379.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
20

„Random observations on random observations: Sparse signal acquisition and processing“. Thesis, 2010. http://hdl.handle.net/1911/62225.

Full text of the source
Annotation:
In recent years, signal processing has come under mounting pressure to accommodate the increasingly high-dimensional raw data generated by modern sensing systems. Despite extraordinary advances in computational power, processing the signals produced in application areas such as imaging, video, remote surveillance, spectroscopy, and genomic data analysis continues to pose a tremendous challenge. Fortunately, in many cases these high-dimensional signals contain relatively little information compared to their ambient dimensionality. For example, signals can often be well-approximated as a sparse linear combination of elements from a known basis or dictionary. Traditionally, sparse models have been exploited only after acquisition, typically for tasks such as compression. Recently, however, the applications of sparsity have greatly expanded with the emergence of compressive sensing, a new approach to data acquisition that directly exploits sparsity in order to acquire analog signals more efficiently via a small set of more general, often randomized, linear measurements. If properly chosen, the number of measurements can be much smaller than the number of Nyquist-rate samples. A common theme in this research is the use of randomness in signal acquisition, inspiring the design of hardware systems that directly implement random measurement protocols. This thesis builds on the field of compressive sensing and illustrates how sparsity can be exploited to design efficient signal processing algorithms at all stages of the information processing pipeline, with a particular focus on the manner in which randomness can be exploited to design new kinds of acquisition systems for sparse signals. 
Our key contributions include: (i) exploration and analysis of the appropriate properties for a sparse signal acquisition system; (ii) insight into the useful properties of random measurement schemes; (iii) analysis of an important family of algorithms for recovering sparse signals from random measurements; (iv) exploration of the impact of noise, both structured and unstructured, in the context of random measurements; and (v) algorithms that process random measurements to directly extract higher-level information or solve inference problems without resorting to full-scale signal recovery, reducing both the cost of signal acquisition and the complexity of the post-acquisition processing.
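The sparse-recovery setting described here can be sketched with random Gaussian measurements and Orthogonal Matching Pursuit, one standard greedy recovery algorithm (used for illustration, not necessarily the family analyzed in the thesis); the dimensions and signal are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 256, 128, 5              # ambient dimension, measurements, sparsity

# k-sparse signal with nonzero entries bounded away from zero
x = np.zeros(n)
support_true = rng.choice(n, size=k, replace=False)
x[support_true] = (1.0 + rng.random(k)) * rng.choice([-1.0, 1.0], size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian measurement matrix
y = A @ x                                  # m << n linear measurements

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily grow the support, refit by least squares."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
assert np.linalg.norm(x_hat - x) < 1e-6 * np.linalg.norm(x)
```

With m well above the sparsity level and a Gaussian measurement matrix, the signal is recovered from far fewer measurements than its ambient dimension.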
APA, Harvard, Vancouver, ISO and other citation styles
21

Paparizos, Leonidas G. „Some Observations on the Random Response of Hysteretic Systems“. Thesis, 1987. https://thesis.library.caltech.edu/11467/2/Paparizos_LG_1987.pdf.

Full text of the source
Annotation:

In this thesis, the nature of the hysteretic response behavior of structures subjected to strong seismic excitation is examined. The earthquake ground motion is modeled as a stochastic process, and the dependence of the response on system and excitation parameters is examined. Consideration is given to the drift of structural systems and its dependence on the low frequency content of the earthquake spectrum. It is shown that commonly used stochastic excitation models are not able to accurately represent the low frequency content of the excitation. For this reason, a stochastic model obtained by filtering a modulated white noise signal through a second order linear filter is used in this thesis.

A new approach is followed in the analysis of the elasto-plastic system. The problem is formulated in terms of the drift, defined as the sum of yield increments associated with inelastic response. The solution scheme is based on the properties of discrete Markov process models of the yield increment process, while the yield increment statistics are expressed in terms of the probability density of the velocity and elastic component of the displacement response. Using this approach, approximate exponential and Rayleigh distributions for the yield increment and yield duration, respectively, are established. It is suggested that, for durations of stationary seismic excitation of practical significance, the drift can be considered as Brownian motion. Based on this observation, the approximate Gaussian distribution and the linearly divergent mean square value of the process, as well as an expression for the probability distribution of the peak drift response, are obtained. These properties are validated by means of a Monte Carlo simulation study of the random response of an elasto-plastic system.

Based on this analysis, the first order probability density and first passage probabilities for the drift are calculated from the probability density of the velocity and elastic component of the response, approximately obtained by generalized equivalent linearization. It is shown that the drift response statistics are strongly dependent on the normalized characteristic frequency and strength of the excitation, while a weaker dependence on the bandwidth of excitation is noted.
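The Brownian-motion view of the drift, a signed accumulation of approximately exponentially distributed yield increments whose mean square grows linearly, can be illustrated with a stylized Monte Carlo sketch (the increment distribution and sizes are assumptions for illustration, not the thesis model):

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_events = 5000, 400

# Stylized model: drift = signed sum of i.i.d. exponential yield increments
incr = rng.exponential(1.0, size=(n_paths, n_events))
signs = rng.choice([-1.0, 1.0], size=(n_paths, n_events))
drift = np.cumsum(signs * incr, axis=1)

# Mean-square drift grows roughly linearly in the number of yield events,
# consistent with treating the drift as (approximate) Brownian motion
ms = (drift ** 2).mean(axis=0)
ratio = ms[-1] / ms[n_events // 2 - 1]
assert abs(ratio - 2.0) < 0.2
```

Doubling the number of yield events roughly doubles the mean-square drift, which is the linear divergence the abstract refers to.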

APA, Harvard, Vancouver, ISO and other citation styles
22

„Analysis of structural equation models of polytomous variables with missing observations“. Chinese University of Hong Kong, 1991. http://library.cuhk.edu.hk/record=b5886876.

Full text of the source
Annotation:
by Man-lai Tang.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1991.
Includes bibliographical references.
PART I: ANALYSIS OF DATA WITH POLYTOMOUS VARIABLES (p.1)
Chapter 1: Introduction (p.1)
Chapter 2: Estimation of the Model with Incomplete Data (p.5)
  §2.1 The Model (p.5)
  §2.2 Two-stage Estimation Method (p.7)
Chapter 3: Generalization to Several Populations (p.16)
  §3.1 The Model (p.16)
  §3.2 Two-stage Estimation Method (p.18)
Chapter 4: Computation of the Estimates (p.23)
  §4.1 Maximum Likelihood Estimates in Stage I (p.23)
  §4.2 Generalized Least Squares Estimates in Stage II (p.27)
  §4.3 Approximation for the Weight Matrix W (p.28)
Chapter 5: Some Illustrative Examples (p.31)
  §5.1 Single Population (p.31)
  §5.2 Multisample (p.37)
PART II: ANALYSIS OF CONTINUOUS AND POLYTOMOUS VARIABLES (p.42)
Chapter 6: Introduction (p.42)
Chapter 7: Several Populations Structural Equation Models with Continuous and Polytomous Variables (p.44)
  §7.1 The Model (p.44)
  §7.2 Analysis of the Model (p.45)
Chapter 8: Analysis of Structural Equation Models of Polytomous and Continuous Variables with Incomplete Data by Multisample Technique (p.54)
  §8.1 Motivation (p.54)
  §8.2 The Model (p.55)
  §8.3 The Method (p.56)
Chapter 9: Computation of the Estimates (p.60)
  §9.1 Optimization Procedure (p.60)
  §9.2 Derivatives (p.61)
Chapter 10: Some Illustrative Examples (p.65)
  §10.1 Multisample Example (p.65)
  §10.2 Incomplete Data Example (p.67)
  §10.3 The LISREL Program (p.69)
Chapter 11: Conclusion (p.71)
Tables (p.73)
Appendix (p.85)
References (p.89)
APA, Harvard, Vancouver, ISO and other citation styles
23

Klok, Zacharie-Francis. „Analyse du comportement hétérogène des usagers dans un réseau“. Thèse, 2014. http://hdl.handle.net/1866/11905.

Full text of the source
Annotation:
Using transportation roads enables workers to reach their work facilities. Security and traffic-jam issues are all the more important given that the number of vehicles is always increasing, and this study focuses on merchandise transporters. Transportation of dangerous goods is strictly regulated: it is, for example, forbidden to carry them through a tunnel or across a bridge. Some transporters may drive a vehicle that has defects, and/or they may take forbidden roads so as to reach their destination faster. Transportation of goods is regulated by law, and there exists a control system, composed of fixed structures and mobile patrols, whose purpose is to detect fraud and to make sure controlled vehicles are in order. The strategic deployment of these control resources can build on knowledge of transporter behaviour, which is studied here through the analysis of their route choices. A route choice problem can be modelled using discrete choice theory, itself founded on random utility theory, and treating it is complex: routes sharing links induce correlation, and on the real road network of Quebec the set of candidate routes is potentially infinite once loops are considered; moreover, the study of human decision processes is not trivial. With the chosen route choice model, we derive an expression for the probability that a transporter picks a given route.
The study began with a description of the collected data, an essential step that gives a potential analyst a clear view of what is available for studying transporter behaviour. The questionnaire used by the controllers collects data on the transporters, their vehicles, and the control location; the data recorded during one control constitute what we call an observation. Network-related attributes are used to model the road network of Quebec, and a selection of attributes specifies the utility function and hence the function giving the probability that a user takes a given route. The field observations, however, do not currently provide enough information: even with a well-specified model, parameter estimation by maximum likelihood fails. We therefore proceed with simulated observations, estimating model parameters first with complete observations and then, to imitate real conditions, with partial observations. The latter constitutes a major challenge, which we overcome by combining the results of (Bierlaire and Frejinger, 2008) with those of (Fosgerau, Frejinger and Karlström, 2013). Although the observations used are simulated, we obtain results useful to road control managers: we show that the parameters of a discrete route choice model can be estimated at the 0.05 significance level on the real road network of Quebec, even when the observations are partial. These results lead to recommendations on changes to the data collection questionnaire, which would help managers strengthen their knowledge of merchandise transporters and optimize resource deployments.
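The route choice probability in a random-utility (multinomial logit) model, the family used in this line of work, can be sketched as follows; the routes, attributes, and taste parameters are invented for illustration:

```python
import numpy as np

# Hypothetical attributes for three candidate routes: travel time (minutes)
# and number of control stations along the route
routes = {
    "highway": {"time": 50.0, "controls": 2},
    "tunnel_detour": {"time": 65.0, "controls": 0},
    "rural": {"time": 80.0, "controls": 1},
}
beta_time, beta_controls = -0.05, -0.4   # assumed taste parameters

def choice_probabilities(routes, beta_time, beta_controls):
    """Multinomial logit: P(r) = exp(V_r) / sum_s exp(V_s)."""
    names = list(routes)
    v = np.array([beta_time * routes[r]["time"]
                  + beta_controls * routes[r]["controls"] for r in names])
    v -= v.max()                         # for numerical stability
    p = np.exp(v) / np.exp(v).sum()
    return dict(zip(names, p))

probs = choice_probabilities(routes, beta_time, beta_controls)
assert abs(sum(probs.values()) - 1.0) < 1e-12
assert probs["tunnel_detour"] > probs["rural"]  # faster, fewer controls
```

Maximum likelihood estimation of the betas maximizes the product of such probabilities over the observed (possibly partial) route choices; correlation between overlapping routes is what motivates the corrections of Bierlaire and Frejinger (2008) and Fosgerau, Frejinger and Karlström (2013).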
APA, Harvard, Vancouver, ISO and other citation styles
24

Lei, Xiaoxuan. „The Variation of a Teacher's Classroom Observation Ratings across Multiple Classrooms“. 2017. http://scholarworks.gsu.edu/eps_diss/156.

Der volle Inhalt der Quelle
Annotation:
Classroom observations have been increasingly used for teacher evaluations, and thus it is important to examine the measurement quality and the use of observation ratings. When a teacher is observed in multiple classrooms, his or her observation ratings may vary across classrooms. In that case, using ratings from one classroom per teacher may not be adequate to represent a teacher’s quality of instruction. However, the fact that classrooms are nested within teachers is usually not considered when classroom observation data are analyzed. Drawing on the Measures of Effective Teaching dataset, this dissertation examined the variation of a teacher’s classroom observation ratings across his or her multiple classrooms. In order to account for teacher-level, school-level, and rater-level variation, a cross-classified random effects model was used for the analysis. Two research questions were addressed: (1) What is the variation of a teacher’s classroom observation ratings across multiple classrooms? (2) To what extent is the classroom-level variation within teachers explained by observable classroom characteristics? The results suggested that math classrooms accounted for 4.9% to 14.7% of the variance in the classroom observation ratings and English Language Arts classrooms accounted for 6.7% to 15.5% of the variance in the ratings. The results also showed that the classroom characteristics (i.e., class size, percent of minority students, percent of male students, percent of English language learners, percent of students eligible for free or reduced lunch, and percent of students with disabilities) had limited contributions to explaining the classroom-level variation in the ratings. The results of this dissertation indicate that teachers’ multiple classrooms should be taken into consideration when classroom observation ratings are used to evaluate teachers in high-stakes settings. 
In addition, other classroom-level factors that could contribute to explaining the classroom-level variation in classroom observation ratings should be investigated in future research.
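The variance shares quoted above are variance-partition coefficients from the cross-classified model. A minimal sketch of how such a share is computed, using made-up variance components (not the MET estimates from the dissertation):

```python
# Hypothetical variance components for a cross-classified random effects model;
# the numbers are illustrative, not results from the dissertation.
components = {
    "teacher": 0.30,
    "classroom": 0.06,   # classrooms nested within teachers
    "school": 0.04,
    "rater": 0.10,
    "residual": 0.50,
}

total = sum(components.values())
# Share of total variance attributed to the classroom level, in the same
# spirit as the 4.9%-15.5% range reported above.
classroom_share = components["classroom"] / total
```

A nonzero classroom share is exactly what motivates observing each teacher in more than one classroom before drawing high-stakes conclusions.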
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Lin, Mi-Hua, und 林米華. „The Observation of Strain Induced Drain Current Instability in Advanced CMOS Devices Using Random Telegraph Noise Analysis“. Thesis, 2009. http://ndltd.ncl.edu.tw/handle/41446892163805206369.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Chiao Tung University
Department of Electronics Engineering
ROC academic year 98 (2009)
Recent studies of reliability issues show that strained devices exhibit a higher impact ionization rate, i.e., device degradation is proportional to the current enhancement. For n-MOSFET devices, CESL (contact etching stop layer) uniaxial strain performs much better in terms of reliability, performance, and process simplicity, while a SiC source/drain structure provides a high driving current. For p-MOSFET devices, a uniaxial structure with SiGe on source and drain with an EDB (embedded diffusion barrier) seems promising in terms of performance and reliability. In this thesis, hot-carrier-stress-induced oxide traps and their correlation with enhanced degradation in strained CMOS devices are reported. First, ID-RTN (drain current random telegraph noise) is employed to study stress-induced slow traps in uniaxial strained n-MOSFETs and p-MOSFETs. Carrier trapping and detrapping in the gate dielectric can be observed: the drain current fluctuates to a low level when a carrier is trapped and to a high level when it is detrapped. By statistically extracting the capture and emission times, the trap properties can be determined. Secondly, different process-induced strain effects are observed for n-MOSFETs and p-MOSFETs. By extracting the normalized drain current amplitude from the drain current spectra, the experimental results show that vertical compressive strain generates extra oxide defects and induces more scattering after hot-carrier stress in the CESL device. This vertical strain in CESL also contributes a non-negligible amount of extra device degradation. In contrast, the SiGe S/D p-MOSFET behaves differently: the compressive strain along the channel shows no impact on its reliability. The process-induced strain of different strain techniques can thus be investigated by the ID-RTN measurement. 
Furthermore, the application of the method to strained SiC S/D devices has also been demonstrated. The results show that the uniaxial strain in such devices has less impact on device reliability; in terms of ID-RTN characteristics, the strained SiC device behaves similarly to the SiGe S/D device.
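The dwell-time extraction at the heart of an RTN analysis can be sketched as follows. The threshold, the synthetic trace, and the mapping of the low current level to the "trapped" state are illustrative assumptions, not the thesis's measurement setup.

```python
def dwell_times(signal, threshold):
    """Split a two-level RTN trace into runs below (trapped) and above
    (detrapped) a threshold; return mean dwell times in samples."""
    low_dwells, high_dwells = [], []
    state = signal[0] > threshold
    count = 0
    for x in signal:
        s = x > threshold
        if s == state:
            count += 1
        else:
            (high_dwells if state else low_dwells).append(count)
            state, count = s, 1
    (high_dwells if state else low_dwells).append(count)  # close the last run
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return mean(low_dwells), mean(high_dwells)

# Synthetic two-level trace: 3 samples high, 2 samples low, repeated.
trace = [1.0, 1.0, 1.0, 0.2, 0.2] * 4
mean_low, mean_high = dwell_times(trace, 0.5)
```

On a real measured trace, these mean dwell times are the statistically extracted capture and emission times from which the trap properties are inferred.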
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Chiou, Kuo-Ching, und 邱國欽. „Expected Time of Type-II Right Censoring for Pareto Distribution and Imputing Censored Observation under Random Right Censoring for Weibull Distribution“. Thesis, 2000. http://ndltd.ncl.edu.tw/handle/33180008867688117607.

Der volle Inhalt der Quelle
Annotation:
Doctoral dissertation
National Chiao Tung University
Department of Industrial Engineering and Management
ROC academic year 88 (1999)
In reliability analysis, considering energy, money and material resources, engineers always try to find an economical way to complete an experiment in the shortest time. Censoring models are frequently used in reliability analysis to reduce experiment time; the three types are type-I, type-II and random censoring. One objective of this study is to derive a ratio formula based on the expected type-II right censoring time and the expected complete sampling time. The ratio formula quantifies how much experiment time can be saved by replacing a complete sampling plan with a type-II right censoring plan when the failure time follows a Pareto distribution. Engineers may use the proposed ratio formula to decide the censoring number, the experimental sample size, and the other relevant parameters so as to save total experiment time. In a random right censoring plan, a failure time becomes a censored observation if it exceeds its associated censoring time. Many papers suggest using the censoring time to impute the censored observation; however, the censoring time usually underestimates the true failure time. The second objective of this study is therefore to propose two methods to impute the censored observations in a random right censoring plan for a Weibull distribution. Using a Monte Carlo simulation, the performance of the proposed imputation methods is assessed based on the relative mean square error (RMSE) of the percentile estimates. Simulation results indicate that the two proposed imputation methods are superior to using the censoring time when the shape parameter of the Weibull distribution exceeds 1, except for the lower percentiles. 
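The thesis derives the ratio of expected experiment times in closed form; as an illustration only, the same quantity can be approximated by Monte Carlo. The function name and all parameter values below are ours, not the thesis's.

```python
import random

def reet_pareto(n, r, a, b, trials=20000, seed=1):
    """Monte Carlo estimate of E(X_{r:n}) / E(X_{n:n}) for Pareto(a, b):
    the expected type-II censoring time relative to complete sampling.
    Requires a > 1 so that the expectations exist."""
    rng = random.Random(seed)
    sum_r = sum_n = 0.0
    for _ in range(trials):
        # Pareto(a, b) via inverse CDF: X = b * U**(-1/a), U uniform on (0, 1)
        sample = sorted(b * rng.random() ** (-1.0 / a) for _ in range(n))
        sum_r += sample[r - 1]   # r-th order statistic: censoring time
        sum_n += sample[-1]      # largest order statistic: full experiment time
    return sum_r / sum_n

# Hypothetical plan: stop after 15 of 20 failures, Pareto shape 3, scale 1.
ratio = reet_pareto(n=20, r=15, a=3.0, b=1.0)
```

A ratio well below 1 is what makes the censoring plan attractive: the further r is below n, the larger the expected time saving.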
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Chang, Chia-Ming, und 張家銘. „The Observation of Gate Current Instability in High-k Gate Dielectric MOSFET by a New Gate Current Random Telegraph Noise Approach“. Thesis, 2008. http://ndltd.ncl.edu.tw/handle/28175742806058344891.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Chiao Tung University
Department of Electronics Engineering
ROC academic year 96 (2008)
In order to meet the requirements of low-power circuit applications, high-k gate dielectrics are being implemented in Si CMOS technologies with aggressive oxide thickness scaling. At the same EOT, practical high-k gate dielectrics can provide significant reductions (>10^3) in gate leakage. Reliability characterization will be one of the primary goals of future development work, since the large number of traps in the high-k bulk layer gives rise to carrier trapping and detrapping phenomena, causing instability of the threshold voltage, drain current, and other device parameters. In this thesis, a new method, gate current random telegraph noise, is utilized to analyze carrier trapping/detrapping in high-k gate dielectrics. We observe the gate current at a fixed gate bias, and the gate direct tunneling current shows two or three levels. The cause is carriers being trapped at trap sites while tunneling through the gate dielectric and detrapping by thermal emission: the gate current is suppressed when traps capture carriers and recovers as the traps empty. By statistically extracting the capture and emission times, we can understand the trap properties. In addition, the influence of the traps can be understood by observing the variation of the current fluctuation. We then apply this method to study three types of traps: process-induced traps, stress-induced traps at distinct stress voltages, and post-soft-breakdown characteristics. Through the observation of gate current instability, the degradation of gate dielectrics can be recognized. The experimental results show how the capture/emission mechanism is affected by the degree of degradation. Furthermore, the appearance of gate current random telegraph noise is effectively investigated by measuring at different temperatures, so that the reliability of the devices can be well understood.
APA, Harvard, Vancouver, ISO und andere Zitierweisen