To view other types of publications on this topic, follow the link: Corrector estimates.

Dissertations on the topic "Corrector estimates"

Format the source in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the top 36 dissertations for your research on the topic "Corrector estimates".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever such details are available in the metadata.

Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

VO, ANH KHOA. "Corrector homogenization estimates for PDE Systems with coupled fluxes posed in media with periodic microstructures." Doctoral thesis, Gran Sasso Science Institute, 2018. http://hdl.handle.net/20.500.12571/9693.

Full text of the source
Abstract:
The purpose of this thesis is the derivation of corrector estimates justifying the upscaling of systems of partial differential equations (PDEs) with coupled fluxes posed in media with microstructures (like porous media). Such models play an important role in the understanding of, for example, drug-delivery mechanisms, where the chemical species diffusing inside the domain may also obey other transport mechanisms and certain non-dissipative nonlinear processes within the pore space and at the boundaries of the perforated media (e.g. interaction, chemical reaction, aggregation, deposition). In this thesis, our corrector estimates provide a quantitative analysis in terms of convergence rates in suitable norms: as the small homogenization parameter tends to zero, the differences between the micro- and macro-concentrations and between the corresponding micro- and macro-concentration gradients are controlled in terms of the small parameter. As preparation, we are first concerned with the weak solvability of the microscopic models as well as with the fundamental asymptotic homogenization procedures that are behind the derivation of the corresponding upscaled models. We report results on three connected mathematical problems:

1. Asymptotic analysis of microscopic semi-linear elliptic equations/systems. We explore the asymptotic analysis of a prototype model including the interplay between stationary diffusion and both surface and volume chemical reactions in porous media. Our interest lies in deriving homogenization limits (upscaling) for such systems and, in particular, in rigorously justifying the obtained averaged descriptions. We prove the well-posedness of the microscopic problem, ensuring also the positivity and boundedness of the involved concentrations. Then we use the structure of the two-scale expansions to derive corrector estimates quantifying the convergence rate of the asymptotic approximations to the macroscopic limit concentrations and their gradients. High-order corrector estimates are also obtained. The semi-linear auxiliary problems are tackled by a fixed-point homogenization argument. Our techniques also include Moser-like iterations, a variational formulation, two-scale asymptotic expansions, and suitable energy estimates.

2. Corrector estimates for a Smoluchowski-Soret-Dufour model. We consider a thermodiffusion system, a coupled system of PDEs and ODEs that accounts for the heat-driven diffusion dynamics of hot colloids in periodic heterogeneous media. This model describes the joint evolution of temperature and colloidal concentrations in a saturated porous tissue where Smoluchowski aggregation interactions and a linear deposition process take place. By a fixed-point argument, we prove local existence and uniqueness results for the upscaled system. To obtain the corrector estimates, we exploit the concept of macroscopic reconstructions as well as suitable integral estimates to control boundary interactions.

3. Corrector estimates for a non-stationary Stokes-Nernst-Planck-Poisson system. We investigate a non-stationary Stokes-Nernst-Planck-Poisson system posed in a perforated domain, as originally proposed by Knabner and his co-authors (see e.g. [98] and [99]). Starting from the setting in [99], we complete the results by proving corrector estimates for the homogenization procedure. The main difficulties are connected to the choice of boundary conditions for the Poisson part of the system as well as to the scaling of the Stokes part.
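To make the notion concrete, a corrector estimate quantifies the convergence rate of the two-scale approximation. As a hedged illustration only (the precise norms, correctors, and rates in the thesis depend on each model and its boundary conditions), the classical bound from linear periodic homogenization reads:

```latex
\left\| u_\varepsilon - u_0 - \varepsilon\, u_1\!\left(x, \tfrac{x}{\varepsilon}\right) \right\|_{H^1(\Omega)} \le C\, \varepsilon^{1/2},
```

where u_ε solves the microscopic problem, u_0 is the homogenized limit, u_1 is the first-order corrector built from cell problems, and C is independent of the homogenization parameter ε.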
APA, Harvard, Vancouver, ISO, and other styles
2

Högström, Martin. "Wind Climate Estimates - Validation of Modelled Wind Climate and Normal Year Correction." Thesis, Uppsala University, Air and Water Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-8023.

Full text of the source
Abstract:

Long time average wind conditions at potential wind turbine sites are of great importance when deciding if an investment will be economically safe. Wind climate estimates such as these are traditionally done with in situ measurements for a number of months. During recent years, a wind climate database has been developed at the Department of Earth Sciences, Meteorology at Uppsala University. The database is based on model runs with the higher order closure mesoscale MIUU-model in combination with long term statistics of the geostrophic wind, and is now used as a complement to in situ measurements, hence speeding up the process of turbine siting. With this background, a study has been made investigating how well actual power productions during the years 2004-2006 from 21 Swedish wind turbines correlate with theoretically derived power productions for the corresponding sites.

When comparing theoretically derived power productions based on long term statistics with measurements from a shorter time period, correction is necessary to be able to make relevant comparisons. This normal year correction is a main focus, and a number of different wind energy indices used for this purpose are evaluated: two publicly available ones (the Swedish and Danish Wind Indices) and one derived theoretically from physical relationships and NCEP/NCAR reanalysis data (the Geostrophic Wind Index). Initial testing suggests that in some cases the three indices give very different correction results, so further investigation is necessary. An evaluation of the Geostrophic Wind Index is made with the use of in situ measurements.

When correcting measurement periods limited in time to a long term average, a larger statistical dispersion is expected for shorter measurement periods, decreasing with longer periods. In order to investigate this assumption, a 7-year wind speed measurement dataset was corrected with the Geostrophic Wind Index, simulating a number of hypothetical measurement periods of various lengths. When normal year correcting a measurement period of specific length, the statistical dispersion decreases significantly during the first 10 months. A reduction to about half the initial statistical dispersion can be seen after just 5 months of measurements.

Results show that the theoretical normal year corrected power productions are in general around 15-20% lower than expected. A probable explanation for the larger part of this bias is serious problems with the reported time-not-in-operation for wind turbines in official power production statistics. This makes it impossible to compare actual power production with theoretically derived values without more detailed information. The theoretically derived Geostrophic Wind Index correlates well with measurements; however, the theoretically expected cubed relationship of wind speed seems to account for the total energy of the wind, an amount of energy that cannot be absorbed by the wind turbines when wind speed conditions are much higher than normal.
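As a rough illustration of the index-based normal year correction evaluated in the study: a period's observed production is rescaled by how windy that period was relative to a long-term normal, as expressed by a wind energy index (100 = normal year). This is a minimal sketch with hypothetical numbers, not the thesis's implementation:

```python
# Minimal sketch of index-based normal-year correction (hypothetical numbers).
# The index expresses the period's wind energy relative to a normal year (100).

def normal_year_correct(production_kwh: float, index_percent: float) -> float:
    """Scale production observed in a period to its normal-wind-year equivalent."""
    return production_kwh * 100.0 / index_percent

# Example: 1.2 GWh produced while the wind index was 110 (10% windier than
# normal) corresponds to roughly 1.09 GWh in a normal year.
print(normal_year_correct(1_200_000, 110.0))
```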



APA, Harvard, Vancouver, ISO, and other styles
3

Brenner, Andreas [Verfasser], Eberhard [Gutachter] Bänsch, and Charalambos [Gutachter] Makridakis. "A-posteriori error estimates for pressure-correction schemes / Andreas Brenner ; Gutachter: Eberhard Bänsch, Charalambos Makridakis." Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2016. http://d-nb.info/1114499692/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Remund, Todd Gordon. "A Naive, Robust and Stable State Estimate." BYU ScholarsArchive, 2008. https://scholarsarchive.byu.edu/etd/1424.

Full text of the source
Abstract:
A naive approach to filtering for feedback control of dynamic systems that is robust and stable is proposed. Simulations are run on the presented filters to investigate the robustness properties of each filter, and the filters are compared in each simulation using the usual mean squared error. The filters included are the classic Kalman filter, the Krein space Kalman filter, two adjustments to the Krein filter with input modeling and a second uncertainty parameter, a newly developed filter called the Naive filter, the bias corrected Naive filter, the exponentially weighted moving average (EWMA) Naive filter, and the bias corrected EWMA Naive filter.
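For context, the baseline of such a comparison, a Kalman filter evaluated by mean squared error, can be sketched in a few lines. The scalar random-walk dynamics and noise levels below are hypothetical, not taken from the thesis:

```python
import numpy as np

def kalman_1d(z, q, r):
    """Scalar Kalman filter for a random-walk state observed in noise."""
    x, p, out = 0.0, 1.0, []
    for zi in z:
        p += q                 # predict: variance grows by the process noise
        k = p / (p + r)        # Kalman gain
        x += k * (zi - x)      # correct with the new measurement
        p *= 1.0 - k
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0.0, 0.03, 200))   # slowly drifting true state
meas = truth + rng.normal(0.0, 0.7, 200)        # noisy measurements
est = kalman_1d(meas, q=9e-4, r=0.49)           # q, r matched to the simulation
print("MSE raw:     ", np.mean((meas - truth) ** 2))
print("MSE filtered:", np.mean((est - truth) ** 2))
```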
APA, Harvard, Vancouver, ISO, and other styles
5

Pelletier, Stéphane. "High-resolution video synthesis from mixed-resolution video based on the estimate-and-correct method." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=79253.

Full text of the source
Abstract:
A technique to increase the frame rate of digital video cameras at high resolution is presented. The method relies on special video hardware capable of simultaneously generating low-speed, high-resolution frames and high-speed, low-resolution frames. The algorithm follows an estimate-and-correct approach, in which a high-resolution estimate is first produced by translating the pixels of the high-resolution frames produced by the camera according to the motion dynamics observed in the low-resolution ones. The estimate is then compared against the current low-resolution frame and corrected locally as necessary for consistency with the latter. This is done by replacing the wrong pixels of the estimate with pixels from a bilinear interpolation of the current low-resolution frame. Because of their longer exposure time, high-resolution frames are more prone to motion blur than low-resolution frames, so a motion blur reduction step is also applied. Simulations demonstrate the ability of our technique to synthesize high-quality, high-resolution frames at modest computational expense.
APA, Harvard, Vancouver, ISO, and other styles
6

Horne, Lyman D., and Ricky G. Dye. "AN INEXPENSIVE DATA ACQUISITION SYSTEM FOR MEASURING TELEMETRY SIGNALS ON TEST RANGES TO ESTIMATE CHANNEL CHARACTERISTICS." International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/608407.

Full text of the source
Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada
In an effort to determine a more accurate characterization of the multipath fading effects on telemetry signals, the BYU telemetering group is implementing an inexpensive data acquisition system to measure these effects. It is designed to measure important signals in a diversity combining system. The received RF envelope, AGC signal, and the weighting signal for each beam, as well as the IRIG B time stamp will be sampled and stored. This system is based on an 80x86 platform for simplicity, compactness, and ease of use. The design is robust and portable to accommodate measurements in a variety of locations including aircraft, ground, and mobile environments.
APA, Harvard, Vancouver, ISO, and other styles
7

Sogunro, Babatunde Oluwasegun. "Nonresponse in industrial censuses in developing countries : some proposals for the correction of biased estimators." Thesis, University of Hull, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.278593.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Lincoln, James. "The Generalized Empirical Likelihood estimator with grouped data : bias correction in Linked Employer-Employee models." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/the-generalized-empirical-likelihood-estimator-with-grouped-data--bias-correction-in-linked-employeremployee-models(5b100c37-0093-472b-95ad-aee519d1efd4).html.

Full text of the source
Abstract:
This thesis is comprised of two parts.

Part I: It is common for empirical economic applications that use micro-data to exhibit a natural ordering into groups. Angrist (1991) uses dummy variables based on such grouping to form instruments for consistent estimation. Khatoon et al. (2014) and Andrews et al. (2016) extend the GEL class of estimators to the case where moment conditions are specified on a group-by-group basis and refer to the resulting estimator as group-GEL. A natural consequence of basing instruments or moment conditions on groups is that the degree of over-identification can increase significantly. Following Bekker (1994), it is recognized that inference based on conventional standard errors is incorrect in the presence of many instruments. Furthermore, when using many moment conditions, two-stage GMM is biased. Although the bias of Generalized Empirical Likelihood (GEL) is robust to the number of instruments, Newey and Windmeijer (2009) show that the conventional standard errors are too small. They propose an alternative variance estimator for GEL that is consistent under conventional and many-weak moment asymptotics. In this part of the thesis I demonstrate that for a particular specification of moment conditions, the group-GEL estimator is more efficient than GEL. I also extend the Newey and Windmeijer (2009) many-moment asymptotic framework to group-GEL. Simulation results demonstrate that group-GEL is robust to many moment conditions, and t-statistic rejection frequencies using the alternative variance estimator are much improved compared to those using conventional standard errors.

Part II: Following the seminal paper of Abowd et al. (1999), Linked Employer-Employee datasets are commonly used in studies decomposing sources of wage variation into unobservable worker and firm effects. If it is assumed that the correlation between these worker and firm effects can be interpreted as a measure of sorting in labour markets, then an efficient matching process between workers and firms would result in a positive correlation. However, empirical evidence has failed to support this assertion. As a possible answer to this apparent paradox, Andrews et al. (2008) show that the estimation of the correlation is biased as a function of the amount of movement of workers between firms, so-called Limited Mobility Bias (LMB); furthermore, they provide formulae to correct this bias. However, due to computational restrictions, applying these corrections is infeasible given the size of datasets typically used. In this part of the thesis I introduce an estimation technique to make the bias-correction estimators of Andrews et al. (2008) feasible. Monte Carlo experiments using the bias-corrected estimators demonstrate that LMB can be eliminated from datasets of comparable size to real data. Finally, I apply the bias-correction techniques to a linear model based on the Danish IDA, and find that correcting the correlation between the worker and firm effects for LMB provides insufficient evidence to resolve the above paradox.
APA, Harvard, Vancouver, ISO, and other styles
9

Kitchen, John. "The effect of quadrature hybrid errors on a phase difference frequency estimator and methods for correction /." Title page, contents and summary only, 1991. http://web4.library.adelaide.edu.au/theses/09AS/09ask62.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Abyad, Emad. "Modeled Estimates of Solar Direct Normal Irradiance and Diffuse Horizontal Irradiance in Different Terrestrial Locations." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36499.

Full text of the source
Abstract:
The transformation of solar energy into electricity is starting to impact the overall worldwide energy production mix. Photovoltaic-generated electricity can play a significant role in minimizing the use of non-renewable energy sources. Sunlight consists of three main components: global horizontal irradiance (GHI), direct normal irradiance (DNI) and diffuse horizontal irradiance (DHI). Typically, these components are measured using specialized instruments in order to study solar radiation at any location. However, these measurements are not always available, especially in the case of the DNI and DHI components of sunlight. Consequently, many models have been developed to estimate these components from available GHI data. These models have their own merits. For this thesis, solar radiation data collected at four locations have been analyzed. The data come from Al-Hanakiyah (Saudi Arabia), Boulder (U.S.), Ma’an (Jordan), and Ottawa (Canada). The BRL, Reindl*, DISC, and Perez models have been used to estimate DNI and DHI data from the experimentally measured GHI data. The findings show that the Reindl* and Perez models offered similar accuracy in computing DNI and DHI values when compared with detailed experimental data for Al-Hanakiyah and Ma’an. For Boulder, the Perez and BRL models have similar abilities in estimating DHI values, and the DISC and Perez models are better estimators of DNI. The Reindl* model performs better when modeling DHI and DNI for the Ottawa data. The BRL and DISC models show similar error metrics, except in the case of the Ma’an location, where the BRL model shows high error values in terms of MAE, RMSE, and standard deviation (σ). The Boulder and Ottawa datasets were incomplete, which affected the outcomes with regard to the model performance metrics; moreover, the metrics show very high, unreasonable values in terms of RMSE and σ. It is advised that a global model be developed by collecting data from many locations, as a way to help minimize the error between the actual and modeled values, since the current models have their own limitations. Availability of multi-year data, parameters such as albedo and aerosols, and data at one-minute to hourly time steps could help minimize the error between measured and modeled data. In addition to having accurate data, analysis of spectral data is important to evaluate its impact on solar technologies.
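Decomposition models of this kind operate under the standard closure relation linking the three components, GHI = DHI + DNI·cos(θz), where θz is the solar zenith angle. A minimal consistency check, with hypothetical numbers:

```python
import numpy as np

zenith = np.radians(48.0)   # solar zenith angle (hypothetical)
ghi = 620.0                 # measured GHI, W/m^2 (hypothetical)
dni, dhi = 700.0, 151.6     # candidate model estimates to be validated
# GHI = DHI + DNI * cos(zenith) must hold for a physically consistent split.
print(np.isclose(dhi + dni * np.cos(zenith), ghi, atol=1.0))
```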
APA, Harvard, Vancouver, ISO, and other styles
11

Ringard, Justine. "Estimation des précipitations sur le plateau des Guyanes par l'apport de la télédétection satellite." Thesis, Guyane, 2017. http://www.theses.fr/2017YANE0010/document.

Full text of the source
Abstract:
The Guiana Shield is a region that is 90% covered by primary rainforest and holds about 20% of the world’s freshwater reserves. This natural territory, with its vast hydrographic network, shows annual rainfall intensities up to 4000 mm/year, making this plateau one of the most watered regions in the world. In addition, tropical rainfall is characterized by significant spatial and temporal variability. Beyond climate-related aspects, the impact of rainfall in this region of the world is significant in terms of energy supply (hydroelectric dams). It is therefore important to develop tools to estimate precipitation in this area quantitatively and qualitatively at high spatial and temporal resolution. However, this vast geographical area is characterized by a poorly developed and heterogeneous network of rain gauges, which results in a lack of knowledge of the precise spatio-temporal distribution of precipitation and its dynamics. The work carried out in this thesis aims to improve the knowledge of precipitation on the Guiana Shield by using Satellite Precipitation Product (SPP) data, which offer better spatial and temporal resolution in this area than in situ measurements, at the cost of lower precision. This thesis is divided into three parts. The first part compares the performance of four satellite estimation products over the study area and attempts to answer the question: what is the quality of these products in the Northern Amazon and French Guiana in the spatial and temporal dimensions? The second part proposes a new SPP bias correction technique that proceeds in three steps: i) using rain gauge measurements to decompose the studied area into hydro-climatic areas; ii) parameterizing a bias correction method called quantile mapping on each of these areas; iii) applying the correction method to the satellite data for each hydro-climatic area. We then try to answer the following question: does the parameterization of the quantile mapping method on different hydro-climatic areas make it possible to correct the satellite precipitation data over the study area? After showing the interest of taking the different rainfall regimes into account when implementing the QM correction method on SPP data, the third part analyzes the impact of the temporal resolution of the precipitation data used on the quality of the correction and on the spatial extent of potentially correctable SPP data (SPP data on which the correction method can be applied effectively). In summary, the objective of this section is to evaluate the ability of our method to correct the bias of the TRMM-TMPA 3B42V7 data on a large spatial scale, in order to make the exploitation of this product relevant for different hydrological applications. This work made it possible to correct the daily satellite series at high spatial and temporal resolution over the Guiana Shield using a new approach based on the definition of hydro-climatic areas. The positive results obtained with this new approach, in terms of reduction of the bias and the RMSE, make it possible to generalize this new method to sparsely gauged areas.
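A minimal sketch of the quantile mapping step described above, fitted on a single hydro-climatic area with synthetic data; the gamma-distributed rainfall and the bias model are assumptions for illustration only:

```python
import numpy as np

def quantile_map(sat_train, gauge_train, values):
    """Map satellite values onto the gauge distribution by matching quantiles."""
    qs = np.linspace(0.0, 1.0, 101)
    sat_q = np.quantile(sat_train, qs)       # satellite empirical quantiles
    gauge_q = np.quantile(gauge_train, qs)   # gauge empirical quantiles
    return np.interp(values, sat_q, gauge_q)

rng = np.random.default_rng(1)
gauge = rng.gamma(2.0, 5.0, 5000)                             # gauge rain, mm/day
sat = np.clip(0.7 * gauge + rng.normal(0, 2, 5000), 0, None)  # biased satellite data
corrected = quantile_map(sat, gauge, sat)
print("mean bias before:", (sat - gauge).mean(), "after:", (corrected - gauge).mean())
```

In the thesis the mapping is parameterized separately for each hydro-climatic area; the sketch collapses this to one area.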
APA, Harvard, Vancouver, ISO, and other styles
12

Tran, Van-Tinh. "Selection Bias Correction in Supervised Learning with Importance Weight." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1118/document.

Full text of the source
Abstract:
In the theory of supervised learning, the identical-distribution assumption, i.e. that the training and test samples are drawn from the same probability distribution, plays a crucial role. Unfortunately, this essential assumption is often violated in the presence of selection bias. Under such conditions, standard supervised learning frameworks may suffer a significant bias. In this thesis, we address the problem of selection bias in supervised learning using the importance weighting method. We first introduce the supervised learning frameworks and discuss the importance of the identical-distribution assumption. We then study the importance weighting framework for generative and discriminative learning under a general selection scheme and investigate the potential of Bayesian networks to encode the researcher's a priori assumptions about the relationships between the variables, including the selection variable, and to infer the independence and conditional independence relationships that allow selection bias to be corrected. We pay special attention to covariate shift, i.e. a special class of selection bias where the conditional distribution P(y|x) of the training and test data is the same. We propose two methods to improve importance weighting for covariate shift. We first show that the unweighted model is locally less biased than the weighted one on low-importance instances, and then propose a method combining the weighted and the unweighted models in order to improve the predictive performance in the target domain. Finally, we investigate the relationship between covariate shift and the missing data problem for data sets with small sample sizes and study a method that uses missing data imputation techniques to correct the covariate shift in simple but realistic scenarios.
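A minimal sketch of the importance weighting idea under covariate shift: training losses are reweighted by w(x) = p_test(x) / p_train(x). Here both densities are known Gaussians, an idealization (in practice they must be estimated), and the model is a deliberately simple linear fit:

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
x_tr = rng.normal(0.0, 1.0, 500)                 # training covariates
y_tr = np.sin(x_tr) + rng.normal(0.0, 0.1, 500)  # noisy responses
# Importance weights w(x) = p_test(x) / p_train(x); test domain shifted to mean 1.
w = gauss_pdf(x_tr, 1.0, 1.0) / gauss_pdf(x_tr, 0.0, 1.0)

X = np.column_stack([np.ones_like(x_tr), x_tr])  # design matrix for y ~ a + b*x
beta_u = np.linalg.solve(X.T @ X, X.T @ y_tr)                       # unweighted fit
beta_w = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y_tr))  # weighted fit
print("unweighted:", beta_u, "weighted toward test region:", beta_w)
```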
APA, Harvard, Vancouver, ISO, and other styles
13

Klas, Juliana. "Advanced applications for state estimators in smart grids : identification, detection and correction of simultaneous measurement, parameter and topology cyber-attacks." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/185233.

Full text of the source
Abstract:
Growing demand and concern over climate change are key drivers for renewable sources of electricity and grid modernization. Grid modernization, or the so-called smart grid, not only enables renewable sources but also opens the door to new applications with far-reaching impacts such as preventing or restoring outages (self-healing capabilities), and enabling consumers to have greater control over their electricity consumption and to actively participate in the electricity market. According to the Electric Power Research Institute (EPRI), one of the biggest challenges facing smart grid deployment is related to the cyber security of the systems. The current cyber-security landscape is characterized by rapidly evolving threats and vulnerabilities that pose challenges for the reliability, security, and resilience of the electricity sector. Power system state estimators (PSSE) are critical tools for grid reliability; under a system-observable scenario, they allow power flow optimization and detection of incorrect data. In this work cyber-attacks are modeled as malicious data injections on system measurements, parameters and topology. The contributions of this work are twofold. First, a model for detecting and identifying cyber-attacks as false data injections is presented. The presented model considers the minimization of the composed measurement error while applying the Lagrangian relaxation. This contribution enables the detection of false data injection attacks even if they belong to the subspace spanned by the columns of the Jacobian matrix, and in network areas with low measurement redundancy. Second, state-of-the-art solutions consider correction of parameters or topology when measurements are free of error. However, how may one correct measurements if parameters or topology might simultaneously be in error? To solve this problem, a relaxed model is presented and solved iteratively in a continuous manner. Once identified and detected, cyber-attacks in parameters, topology and measurements are corrected. The proposed solution is based on a Taylor series relaxed, composed normalized error (CNE) hybrid approach with Lagrange multipliers. Validation is made on the IEEE-14 and IEEE-57 bus systems. Comparative results highlight the proposed methodology’s contribution to the current state-of-the-art research on this subject. Providing mitigation, response and system recovery capabilities to the state estimator with reduced computational burden, the proposed model and methodology have strong potential to be integrated into SCADA state estimators for real-world applications.
The specific contributions of the work are: computation of standard deviations for pseudo-measurements (equal to zero) and low-magnitude measurements, based on correlated measurements and covariance properties; a model based on Lagrangian relaxation and the composed measurement error for the identification and detection of cyber-attacks; an iterative hybrid relaxation strategy for correcting cyber-attacks on network parameters in a continuous manner with reduced computational effort; and a methodology based on a holistic resilience cycle for state estimators under simultaneous cyber-attacks on parameters, topology and measurements.
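For orientation, the classical baseline that such work builds on is weighted least squares state estimation with the largest normalized residual test; the sketch below uses a hypothetical 4-measurement, 2-state DC model. Note that false data injected within the column space of H evades exactly this test, which is what motivates the thesis's composed-error approach:

```python
import numpy as np

rng = np.random.default_rng(6)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0],
              [0.5, 0.5]])                 # hypothetical measurement Jacobian
R = np.diag([0.01] * 4)                    # measurement error covariance
x_true = np.array([0.10, -0.05])
z = H @ x_true + rng.normal(0.0, 0.1, 4)
z[2] += 0.8                                # inject false data into measurement 2

Rinv = np.linalg.inv(R)
G = H.T @ Rinv @ H                         # gain matrix
x_hat = np.linalg.solve(G, H.T @ Rinv @ z) # weighted least squares estimate
r = z - H @ x_hat                          # measurement residuals
S = R - H @ np.linalg.solve(G, H.T)        # residual covariance
r_norm = np.abs(r) / np.sqrt(np.diag(S))   # largest normalized residual test
print("suspect measurement index:", int(r_norm.argmax()))
```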
APA, Harvard, Vancouver, ISO, and other styles
14

da Glória Abage de Lima, Maria. "Essays on heteroskedasticity." Universidade Federal de Pernambuco, 2008. https://repositorio.ufpe.br/handle/123456789/7140.

Full text of the source
Abstract:
This doctoral thesis deals with inference in the linear regression model under heteroskedasticity of unknown form. In the first chapter, we develop interval estimators that are robust to the presence of heteroskedasticity. These estimators are based on consistent covariance matrix estimators proposed in the literature, as well as on bootstrap schemes. The numerical evidence favors the HC4 interval estimator. Chapter 2 develops a bias-corrected sequence of covariance matrix estimators under heteroskedasticity of unknown form, starting from the estimator proposed by Qian and Wang (2001). We show that the Qian-Wang estimator can be generalized into a broader class of consistent covariance matrix estimators and that our results can easily be extended to this class of estimators. Finally, in Chapter 3 we use numerical integration methods to compute the exact null distributions of different quasi-t test statistics, under the assumption that the errors are normally distributed. The results favor the HC4-based test.
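Since the numerical evidence favors HC4, here is a minimal sketch of an HC4-type heteroskedasticity-consistent covariance estimator (Cribari-Neto, 2004) on synthetic data; the data-generating process is an assumption for illustration, and this is not the thesis's code:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
x = rng.uniform(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 + x, n)  # error variance grows with x

XtX_inv = np.linalg.inv(X.T @ X)
h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)      # leverages h_ii
beta = XtX_inv @ X.T @ y
e = y - X @ beta                                 # OLS residuals
delta = np.minimum(4.0, h / h.mean())            # HC4 discount exponents
omega = e**2 / (1.0 - h) ** delta
cov_hc4 = XtX_inv @ X.T @ np.diag(omega) @ X @ XtX_inv
print("HC4 robust standard errors:", np.sqrt(np.diag(cov_hc4)))
```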
APA, Harvard, Vancouver, ISO, and other styles
15

Hall, Benjamin. "NONPARAMETRIC ESTIMATION OF DERIVATIVES WITH APPLICATIONS." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_diss/114.

Full text of the source
Abstract:
We review several nonparametric regression techniques and discuss their various strengths and weaknesses with an emphasis on derivative estimation and confidence band creation. We develop a generalized C(p) criterion for tuning parameter selection when interest lies in estimating one or more derivatives and the estimator is both linear in the observed responses and self-consistent. We propose a method for constructing simultaneous confidence bands for the mean response and one or more derivatives, where simultaneous now refers both to values of the covariate and to all derivatives under consideration. In addition we generalize the simultaneous confidence bands to account for heteroscedastic noise. Finally, we consider the characterization of nanoparticles and propose a method for identifying a proper subset of the covariate space that is most useful for characterization purposes.
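As one concrete instance of the kind of estimator discussed, a derivative can be read off a locally weighted quadratic fit, which is linear in the observed responses; the Gaussian kernel and fixed bandwidth below are simplifying assumptions, not the generalized C(p)-selected choice of the dissertation:

```python
import numpy as np

def local_deriv(x0, x, y, h=0.3):
    """First derivative at x0 from a kernel-weighted local quadratic fit."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)       # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0, (x - x0) ** 2])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[1]                               # slope coefficient at x0

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0.0, 2.0 * np.pi, 300))
y = np.sin(x) + rng.normal(0.0, 0.1, 300)
print(local_deriv(np.pi / 2, x, y))              # true derivative cos(pi/2) = 0
```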
APA, Harvard, Vancouver, ISO, and other styles
16

Barbié, Laureline. "Raffinement de maillage multi-grille local en vue de la simulation 3D du combustible nucléaire des Réacteurs à Eau sous Pression." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4742.

Full text of the source
Abstract:
The aim of this study is to improve the performance, in terms of memory space and computational time, of the current modelling of the Pellet-Cladding mechanical Interaction (PCI), a complex phenomenon which may occur during high power rises in pressurised water reactors. Among the mesh refinement methods - methods dedicated to efficiently treating local singularities - a local multi-grid approach was selected because it enables the use of a black-box solver while dealing with few degrees of freedom at each level. The Local Defect Correction (LDC) method, well suited to a finite element discretisation, was first analysed and verified in linear elasticity, on configurations resulting from the PCI, since its use in solid mechanics is not widespread. Various strategies concerning the practical implementation of the multilevel algorithm were also compared. Coupling the LDC method with the Zienkiewicz-Zhu a posteriori error estimator, in order to automatically detect the zones to be refined, was then tested. The performance obtained on two-dimensional and three-dimensional cases is very satisfactory, the proposed algorithm being more efficient than h-adaptive refinement methods. Lastly, the LDC algorithm was extended to nonlinear mechanics. Space/time refinement as well as the transmission of initial conditions during the remeshing step were addressed, among other questions. The first results obtained are encouraging and show the interest of using the LDC method for PCI modelling.
APA, Harvard, Vancouver, ISO, and other styles
17

Allodji, Setcheou Rodrigue. "Prise en compte des erreurs de mesure dans l’analyse du risque associe a l’exposition aux rayonnements ionisants dans une cohorte professionnelle : application à la cohorte française des mineurs d'uranium." Thesis, Paris 11, 2011. http://www.theses.fr/2011PA11T092/document.

Full text of the source
Abstract:
In epidemiological studies, measurement errors in exposure can substantially bias the estimation of the risk associated with exposure. A broad variety of methods for measurement error correction has been developed, but they have rarely been applied in practice, probably because their ability to correct measurement error effects and their implementation are poorly understood. Another important reason is that many of the proposed correction methods require knowledge of the measurement error characteristics (size, nature, structure and distribution). The aim of this thesis is to take measurement error into account in the analysis of the risk of lung cancer death associated with radon exposure, based on the French cohort of uranium miners. The main stages were (1) to assess the characteristics (size, nature, structure and distribution) of measurement error in the French uranium miners cohort, (2) to investigate the impact of measurement error in radon exposure on the estimated excess relative risk (ERR) of lung cancer death associated with radon exposure, and (3) to compare the performance of methods for correcting these measurement error effects. The French cohort of uranium miners includes more than 5000 miners chronically exposed to radon, with a follow-up duration of 30 years. Measurement errors have been characterized taking into account the evolution of uranium extraction methods and of radiation protection measures over time. A simulation study based on the French cohort of uranium miners has been carried out to investigate the effects of these measurement errors on the estimated ERR and to assess the performance of different methods for correcting these effects. Measurement error associated with radon exposure decreased over time, from more than 45% in the early 1970s to about 10% in the late 1980s. Its nature also changed over time, from mostly Berkson to classical type from 1983. Simulation results showed that measurement error leads to an attenuation of the ERR towards the null, with substantial bias on ERR estimates in the order of 60%. All three error-correction methods allowed a noticeable but partial reduction of the attenuation bias. An advantage was observed for the simulation-extrapolation method (SIMEX) in our context, but the performance of the three correction methods highly depended on the accurate determination of the characteristics of measurement error. This work illustrates the importance of measurement error correction in order to obtain reliable estimates of the exposure-risk relationship between radon and lung cancer. Corrected risk estimates should prove of great interest in the elaboration of protection policies against radon in radioprotection and in public health.
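A minimal sketch of the SIMEX idea that performed best here: add extra measurement error at increasing multiples λ, track how the estimated slope attenuates, and extrapolate back to λ = −1 (no error). The data, error size, and quadratic extrapolant are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma_u = 2000, 0.8
x = rng.normal(0.0, 1.0, n)                # true exposure (unobserved)
w = x + rng.normal(0.0, sigma_u, n)        # mismeasured exposure
y = 1.5 * x + rng.normal(0.0, 1.0, n)      # outcome; true slope is 1.5

lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lams:                           # simulation step: inflate the error
    sims = [np.polyfit(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y, 1)[0]
            for _ in range(50)]
    slopes.append(np.mean(sims))
coef = np.polyfit(lams, slopes, 2)         # extrapolation step
print("naive slope:", slopes[0], "SIMEX slope:", np.polyval(coef, -1.0))
```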
APA, Harvard, Vancouver, ISO, and other styles
18

de Luque Söllheim, Ángel Luis. "Two satellite-based rainfall algorithms, calibration methods and post-processing corrections applied to Mediterranean flood cases." Doctoral thesis, Universitat de les Illes Balears, 2008. http://hdl.handle.net/10803/9434.

Full text of the source
Abstract:
This thesis explores the precision of two rainfall estimation methods called Auto-Estimator and CRR (Convective Rainfall Rate), which are obtained using infrared and visible images from Meteosat. Both algorithms, together with a set of correction factors, are applied and verified in two severe flood cases that took place in Mediterranean regions. The first case occurred in Albania from 21 to 23 September 2002 and the second, known as the Montserrat case, occurred in Catalonia on the night of 9-10 June 2000. In addition, new methods are explored to calibrate both satellite algorithms directly against rain rates from rain gauges; such adjustments are usually done using rain rates from meteorological radars. Changes to some correction factors that seem to improve the estimation results are also proposed, and an efficient correction factor is defined that employs electrical discharges to detect the most convective and rainy areas in cloud systems.
APA, Harvard, Vancouver, ISO, and other styles
19

Ramos Ramos, Víctor Manuel. "Transmission robuste et fiable du multimédia sur Internet." Phd thesis, Université de Nice Sophia-Antipolis, 2004. http://tel.archives-ouvertes.fr/tel-00311798.

Full text of the source
Abstract:
In this thesis, we propose models to evaluate the performance of real-time multimedia applications, as well as a model for AIMD-type protocols. The first topic studied is a forward error correction (FEC) mechanism. First, we use an M/M/1/K queue to model the network and consider that voice quality varies linearly with the FEC redundancy rate. The redundancy of the i-th packet is carried in packet i+f. Our analysis shows that, even in the case f->inf, this mechanism does not improve audio quality. Second, we model our system with an M/G/1/K queue and consider two aspects that can help improve audio quality: (a) multiplexing the audio with an exogenous flow, and (b) non-linear utility functions. Under these constraints, we show that it is possible to improve audio quality with the FEC method studied. The second topic concerns playout delay control mechanisms. We propose a set of moving-average algorithms to control the loss rate in an audio session. The performance of our algorithms was evaluated and compared using real traces. The third topic concerns AIMD-type protocols. We propose an analytical model that takes delay variability into account. Our model uses stochastic difference equations and provides closed-form expressions for the throughput and the window size. We show, by analysis and simulation, that increased delay variability improves the performance of an AIMD protocol.
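A minimal sketch of the AIMD dynamics the last model describes: additive increase by α per round, multiplicative decrease by β on loss. Losses are drawn at a hypothetical fixed rate here, whereas the thesis models delay variability with stochastic difference equations:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, beta, p_loss = 1.0, 0.5, 0.02   # AIMD parameters and per-round loss rate
w, trace = 10.0, []
for _ in range(10_000):                # one iteration = one round
    w = w * beta if rng.random() < p_loss else w + alpha
    trace.append(w)
# The mean window follows the familiar O(1/sqrt(p_loss)) AIMD scaling.
print("mean window:", np.mean(trace))
```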
APA, Harvard, Vancouver, ISO, and other styles
20

Barros, Fabiana Uchôa. "Refinamentos assintóticos em modelos lineares generalizados heteroscedáticos." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-12052017-103436/.

Full text of the source
Abstract:
In this thesis, we develop asymptotic refinements in heteroskedastic generalized linear models (Smyth, 1989). Initially, we obtain the second-order covariance matrix of the maximum likelihood estimators corrected by the first-order bias. Based on this matrix, we suggest modifications to the Wald statistic. We then derive the coefficients of the Bartlett-type correction factor for the gradient test statistic. Next, we obtain the asymptotic skewness coefficient of the distribution of the maximum likelihood estimators of the model parameters. Finally, we present the asymptotic kurtosis coefficient of that distribution. Monte Carlo simulation studies are conducted to evaluate the results obtained.
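As a brief illustration (not the thesis's second-order formulas), the sketch below shows the kind of Wald-type statistic the suggested modifications act on, with a generic first-order bias correction applied to the estimator beforehand; the bias vector and covariance matrix are placeholder inputs.

import numpy as np

def wald_statistic(theta_hat, theta0, cov, bias=None):
    """Wald statistic W = (th - th0)' V^{-1} (th - th0).

    If a first-order bias estimate is supplied, the estimator is
    bias-corrected before testing, mirroring the idea (not the exact
    second-order formulas) studied in the thesis."""
    theta = np.asarray(theta_hat, float)
    if bias is not None:
        theta = theta - np.asarray(bias, float)   # corrected MLE
    d = theta - np.asarray(theta0, float)
    return float(d @ np.linalg.solve(np.asarray(cov, float), d))

# Toy example with a 2-parameter model.
w = wald_statistic([1.05, -0.48], [1.0, -0.5],
                   [[0.01, 0.002], [0.002, 0.02]], bias=[0.01, 0.005])
print(w)   # compare with chi-square(2) critical values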
Стилі APA, Harvard, Vancouver, ISO та ін.
21

Moutinho, Victor Manuel Ferreira. "Essays on the determinants of energy related CO2 emissions." Doctoral thesis, Universidade de Aveiro, 2015. http://hdl.handle.net/10773/15156.

Повний текст джерела
Анотація:
Doctoral programme in Energy Systems and Climate Change
Overall, amongst the most frequently cited drivers of greenhouse gas (GHG) emissions growth are economic growth and growing energy demand. To assess the determinants of GHG emissions, this thesis proposes and develops a new analysis which links emissions intensity to its main driving factors. In the first essay, we used the 'complete decomposition' technique to examine CO2 emissions intensity and its components for 36 economic sectors in Portugal over the 1996-2009 period. Industry (in particular five industrial sectors) contributes largely to the variation of CO2 emissions intensity. We concluded, among other findings, that emissions intensity reacts more significantly to shocks in the weight of fossil fuels in total energy consumption than to shocks in other variables. In the second essay, we conducted an analysis for 16 industrial sectors (Group A) and for the group of the five most polluting manufacturing sectors (Group B), based on a convergence examination for emissions intensity and its main drivers, as well as on an econometric analysis. We concluded that there is sigma convergence for all the effects except fossil fuel intensity, while gamma convergence was verified for all the effects except CO2 emissions by fossil fuel and fossil fuel intensity in Group B. From the econometric approach we concluded that the variables considered are significant in explaining CO2 emissions and CO2 emissions intensity. In the third essay, the tourism industry in Portugal over the 1996-2009 period was examined, specifically two groups of subsectors that affect CO2 emissions intensity. The generalized variance decomposition and the impulse response functions pointed, for the sectors that affect tourism most directly, to a bidirectional causality between emissions intensity and energy intensity. The effect of emissions intensity on energy intensity is positive, and the effect of energy intensity on emissions intensity is negative. The share of fossil fuels used reacts positively to the economic structure and to carbon intensity, i.e., the greater the economic importance of the sector, the more it uses fossil fuels, and when its carbon intensity rises, future fossil fuel use may rise. On the other hand, positive shocks on energy intensity tend to reduce the share of fossil fuels used. In the fourth essay, we conducted an analysis to identify the effects that contribute to the intensity of GHG emissions in agriculture, as well as their development. To that end, we used the 'complete decomposition' technique over the 1995-2008 period for a set of European countries. It is shown that the use of nitrogen per cultivated area is an important driver of emissions, and in those countries where labour productivity increases (the inverse of average labour productivity in agriculture decreases), emissions intensity tends to decrease. These results imply that one way to reduce emissions in agriculture would be to provide better training of agricultural workers to increase their productivity, which would lead to less need for energy and nitrogen use. The purpose of the last essay is to examine the long- and short-run causality of the share of renewable sources in the environmental relation between CO2 per kWh of electricity generation and real GDP for 20 European countries over the 2001-2010 period.
It is important to analyze how the share of renewable energy used for electricity production affects the relationship between economic growth and emissions from this sector. The study of these relationships matters from the point of view of environmental and energy policy, as it gives information on the costs, in terms of economic growth, of applying restrictive emission levels, and on the effects of policies concerning the use of renewable energy in the electricity sector (see for instance European Commission Directive 2001/77/EC, [4]). For that purpose, this study applies cointegration analysis to cross-country panel data on CO2 emissions from electricity generation (CO2 kWh), economic growth (GDP) and the share of renewable energy for 20 European countries. We estimated the long-run equilibrium to validate the EKC with a new specification. Additionally, we implemented the Innovative Accounting Approach (IAA), which includes forecast error variance decomposition and impulse response functions (IRFs) applied to those variables. This allows us, for example, to know (i) how CO2 kWh responds to an impulse in GDP and (ii) how CO2 kWh responds to an impulse in the share of renewable sources. This new approach suggests that the share of renewable sources in electricity production is an important determinant in explaining cross-country differences in the income-CO2 per kWh relationship in Europe, and the empirical evidence supports the environmental Kuznets curve relationship. The contributions of this thesis to the study of energy-related CO2 emissions at the sectoral level are threefold. First, it provides a new econometric decomposition approach for analysing CO2 emissions that can serve as a starting point for future research. Second, it presents a hybrid energy-economy mathematical and econometric model which relates CO2 emissions in Portugal to economic theory. Third, it contributes to explaining the change of CO2 emissions in important economic sectors in Europe, and in Portugal in particular, taking normative considerations into account openly and explicitly, with policy implications at the energy-environment level within the European commitment.
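For readers unfamiliar with the 'complete decomposition' technique, here is a minimal two-factor sketch in the spirit of Sun (1998), where the interaction term is shared equally between factors; the essays apply the technique to many sectors and factors, so this shows only the basic mechanics.

def complete_decomposition_2f(x0, x1, y0, y1):
    """Decompose the change in V = x * y between periods 0 and 1 into
    an x-effect and a y-effect, allocating the interaction term
    dx*dy equally (Sun-style 'complete decomposition')."""
    dx, dy = x1 - x0, y1 - y0
    x_effect = dx * y0 + 0.5 * dx * dy
    y_effect = dy * x0 + 0.5 * dx * dy
    return x_effect, y_effect

# Toy numbers: emissions intensity x falls while activity y grows.
x_eff, y_eff = complete_decomposition_2f(0.80, 0.70, 100.0, 120.0)
print(x_eff, y_eff)                     # -11.0 and 15.0
print(0.70 * 120.0 - 0.80 * 100.0)      # 4.0: the effects sum to the change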
Стилі APA, Harvard, Vancouver, ISO та ін.
22

Barbié, Lauréline. "Raffinement de maillage multi-grille local en vue de la simulation 3D du combustible nucléaire des Réacteurs à Eau sous Pression." Phd thesis, Aix-Marseille Université, 2013. http://tel.archives-ouvertes.fr/tel-00926550.

Повний текст джерела
Анотація:
The aim of this study is to improve the performance, in terms of memory usage and computation time, of current simulations of pellet-cladding mechanical interaction (PCI), a complex phenomenon that can occur during strong power ramps in pressurized water reactors. Among mesh refinement methods, which allow local singularities to be simulated efficiently, a local multigrid approach was chosen because it allows the solver to be used as a black box while keeping a small number of degrees of freedom per level. The Local Defect Correction (LDC) method, adapted to a finite element discretization, was first analyzed and verified in linear elasticity on configurations derived from PCI, since its use in solid mechanics is not widespread. Different strategies for the practical implementation of the multilevel algorithm were also compared. The combination of the LDC method with the Zienkiewicz-Zhu a posteriori error estimator, which automates the detection of the zones to be refined, was then tested. The performance obtained on two- and three-dimensional cases is very satisfactory, the proposed algorithm proving more efficient than h-adaptive refinement methods. Finally, the algorithm was extended to nonlinear mechanical problems. Questions of space-time refinement as well as the transfer of initial conditions during remeshing were addressed, among others. The first results obtained are encouraging and demonstrate the value of the LDC method for PCI computations.
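A one-dimensional toy sketch of the Local Defect Correction idea described above, assuming a Poisson model problem and finite differences rather than the thesis's 3D finite element setting: a coarse solve, a local fine solve with boundary values taken from the coarse grid, and a defect-corrected coarse right-hand side, iterated a few times.

import numpy as np

def solve_dirichlet(x, f, ua, ub):
    """Solve -u'' = f on the uniform grid x with u(x[0]) = ua,
    u(x[-1]) = ub, by second-order finite differences."""
    n = len(x) - 1
    h = x[1] - x[0]
    A = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    rhs = f(x[1:-1]).copy()
    rhs[0] += ua / h**2
    rhs[-1] += ub / h**2
    u = np.zeros(n + 1)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return u

f = lambda x: 1000.0 * np.exp(-((x - 0.5) / 0.02) ** 2)  # local feature

# Coarse grid and a local fine patch aligned with coarse nodes.
xH = np.linspace(0.0, 1.0, 11)            # H = 0.1
ia, ib = 4, 6                             # patch [0.4, 0.6]
xh = np.linspace(xH[ia], xH[ib], 41)      # h = 0.005

uH = solve_dirichlet(xH, f, 0.0, 0.0)
for _ in range(3):                        # a few LDC iterations
    # Local fine solve with boundary values from the coarse solution.
    uh = solve_dirichlet(xh, f, uH[ia], uH[ib])
    # Composite solution on the coarse grid: fine values inside patch.
    w = uH.copy()
    for i in range(ia, ib + 1):
        w[i] = uh[np.argmin(np.abs(xh - xH[i]))]
    # Defect correction: at coarse nodes strictly inside the patch,
    # replace the RHS by the coarse operator applied to the composite.
    H = xH[1] - xH[0]
    rhs_mod = f(xH[1:-1]).copy()
    for i in range(ia + 1, ib):
        rhs_mod[i - 1] = (-w[i - 1] + 2.0 * w[i] - w[i + 1]) / H**2
    n = len(xH) - 1
    A = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / H**2
    uH[1:-1] = np.linalg.solve(A, rhs_mod)

print(uH)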
Стилі APA, Harvard, Vancouver, ISO та ін.
23

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Повний текст джерела
Анотація:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors mounted on the vehicles are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach, studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect the target tracking performance, using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly, since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
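As a hedged sketch of the filtering-versus-smoothing comparison at the heart of the thesis, here is a Kalman filter with a Rauch-Tung-Striebel (RTS) backward pass on a toy constant-velocity target; the models and noise levels are illustrative assumptions, not the thesis's multi-sensor tracker.

import numpy as np

def kalman_filter(y, F, H, Q, R, x0, P0):
    """Causal (forward) Kalman filter; returns filtered means/covs
    and the one-step predictions needed by the smoother."""
    xf, Pf, xp, Pp = [], [], [], []
    x, P = x0, P0
    for yk in y:
        x_pred = F @ x                       # predict
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R             # update
        K = P_pred @ H.T @ np.linalg.inv(S)
        x = x_pred + K @ (yk - H @ x_pred)
        P = (np.eye(len(x0)) - K @ H) @ P_pred
        xp.append(x_pred); Pp.append(P_pred); xf.append(x); Pf.append(P)
    return xf, Pf, xp, Pp

def rts_smoother(xf, Pf, xp, Pp, F):
    """Non-causal Rauch-Tung-Striebel smoother: a backward pass that
    refines the filtered estimates using future measurements."""
    n = len(xf)
    xs, Ps = [None] * n, [None] * n
    xs[-1], Ps[-1] = xf[-1], Pf[-1]
    for k in range(n - 2, -1, -1):
        G = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = xf[k] + G @ (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + G @ (Ps[k + 1] - Pp[k + 1]) @ G.T
    return xs, Ps

# Toy constant-velocity target observed in position only.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2); R = np.array([[0.25]])
rng = np.random.default_rng(0)
truth = np.array([[t * dt, 1.0] for t in range(50)])
y = [truth[k, :1] + rng.normal(0, 0.5, 1) for k in range(50)]
xf, Pf, xp, Pp = kalman_filter(y, F, H, Q, R, np.zeros(2), np.eye(2))
xs, _ = rts_smoother(xf, Pf, xp, Pp, F)
print(xf[10], xs[10])   # the smoothed state also uses future data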
Стилі APA, Harvard, Vancouver, ISO та ін.
24

"High order corrected estimator of time-average variance constant." 2015. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1291717.

Повний текст джерела
Анотація:
Chan, Kin Wai.
Thesis M.Phil. Chinese University of Hong Kong 2015.
Includes bibliographical references (leaves 46-49).
Abstracts also in Chinese.
Title from PDF title page (viewed on 08 November 2016).
Стилі APA, Harvard, Vancouver, ISO та ін.
25

Kuo, Jiun You, and 郭俊佑. "Applying direct solar irradiance to estimate atmospheric correction parameters of remote sensing data." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/88014413819065004134.

Повний текст джерела
Анотація:
Master's thesis
National Central University
Institute of Space Science
81
Atmospheric radiation correction plays an important role in the preprocessing of remote sensing images. After a long period of research and development, there are some well-developed packages, such as LOWTRAN 7 and 5S, which are good at calculating atmospheric transmittance. In general, these packages require input parameters, such as visibility, asymmetry factor, phase function, and ozone content, which users cannot acquire quantitatively. For this reason, the precision of the computed atmospheric transmittance is often poor. The purpose of this research is to estimate visibility quantitatively. Aerosol extinction, Rayleigh extinction, and gas absorption cause the main variation of atmospheric transmittance, but only aerosol extinction is related to the variation of visibility. To estimate visibility, the effects that are independent of visibility must be eliminated from the calculation of atmospheric transmittance. By using wide-band models to do so, the visibility can be estimated from the aerosol extinction. The estimated visibility can then be compared with the observed visibility and with theoretical results to diagnose the characteristics of individual models. The precision with which visibility describes the details of the aerosol is limited. Nevertheless, as an input parameter for the packages mentioned above, the visibility estimated in this study is reliable and acceptable.
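A minimal sketch of the idea in the abstract, assuming the standard Koschmieder relation with a 2% contrast threshold: subtract the visibility-independent contributions from the broadband extinction and convert the remaining aerosol extinction to visibility. The thesis's wide-band models are more elaborate.

import numpy as np

def visibility_from_extinction(total_ext, rayleigh_ext, gas_abs,
                               contrast_threshold=0.02):
    """Estimate visibility (km) from the aerosol part of the broadband
    extinction via the Koschmieder relation
    V = ln(1/eps) / sigma_aerosol (eps = 2% contrast threshold).

    All extinction inputs are in km^-1; separating out Rayleigh
    scattering and gas absorption mirrors the idea in the abstract."""
    aerosol_ext = np.maximum(total_ext - rayleigh_ext - gas_abs, 1e-6)
    return np.log(1.0 / contrast_threshold) / aerosol_ext

# Example: total extinction 0.35 /km, Rayleigh 0.012 /km, gases 0.04 /km.
print(f"V = {visibility_from_extinction(0.35, 0.012, 0.04):.1f} km")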
Стилі APA, Harvard, Vancouver, ISO та ін.
26

Lin, Chang-yu, and 林常宇. "Maximum Norm A Posteriori Error Estimate for the Quantum-Corrected Energy Transport Model Part I: Theory." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/08658155064016193121.

Повний текст джерела
Анотація:
Master's thesis
National University of Kaohsiung
Master's Program, Department of Applied Mathematics
97
The quantum-corrected energy transport (QCET) model, consisting of seven self-adjoint nonlinear PDEs, describes the steady state of electron and hole flows, their energy transport, and classical and quantum potentials within a nano-scale semiconductor device. We develop a second-order maximum norm a posteriori error estimate proposed by Kopteva [11] for the QCET model, which after scaling involves the scaled Debye length, intrinsic carrier density, Planck constant, and thermal conductivity as the singular perturbation parameters. This estimate can be used as an error indicator for the refinement process in an adaptive algorithm.
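To illustrate how such an estimate is used as a refinement indicator, here is a generic solve-estimate-mark-refine loop on a 1D mesh. The indicator below (scaled second differences of the discrete solution) is only a stand-in; the Kopteva-type formulas developed in this line of theses are different.

import numpy as np

def adapt(solve, x, tol=1e-3, max_iter=20, frac=0.3):
    """Generic solve -> estimate -> mark -> refine loop on a 1D mesh.

    `solve(x)` returns nodal values of the discrete solution on mesh x.
    The per-interval error indicator is a stand-in (second differences
    of the solution), not the Kopteva estimator itself."""
    for _ in range(max_iter):
        u = solve(x)
        h = np.diff(x)
        d2 = np.abs(np.diff(u, 2))          # |u_{i-1} - 2 u_i + u_{i+1}|
        eta = np.zeros(len(h))
        eta[1:] = np.maximum(eta[1:], d2)
        eta[:-1] = np.maximum(eta[:-1], d2)
        if eta.max() < tol:
            break
        # Mark the worst `frac` of the intervals and bisect them.
        cut = np.quantile(eta, 1.0 - frac)
        mids = 0.5 * (x[:-1] + x[1:])[eta >= cut]
        x = np.sort(np.concatenate([x, mids]))
    return x

# Toy usage: a boundary-layer-like profile stands in for a PDE solve.
eps = 1e-2
mock_solve = lambda x: np.exp(-x / eps)
mesh = adapt(mock_solve, np.linspace(0.0, 1.0, 11))
print(len(mesh), mesh[:5])              # refined near the layer at x = 0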
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Jheng, Jian-Jhih, and 鄭健志. "Maximum Norm A Posteriori Error Estimate for the Quantum-Corrected Energy Transport Model Part II: Simulation." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/92290674878894808501.

Повний текст джерела
Анотація:
Master's thesis
National University of Kaohsiung
Master's Program, Department of Applied Mathematics
98
The quantum-corrected energy transport (QCET) model, consisting of seven self-adjoint nonlinear PDEs, describes the steady state of electron and hole flows, their energy transport, and classical and quantum potentials within a nano-scale semiconductor device. We develop a second-order maximum norm a posteriori error estimate proposed by Kopteva [11] for the QCET model, which after scaling involves the scaled Debye length, intrinsic carrier density, Planck constant, and thermal conductivity as the singular perturbation parameters. This estimate can be used as an error indicator for the refinement process in an adaptive algorithm. We present explicit formulas for computing the error indicators, which are indispensable for adaptive computations in the simulation of advanced nano-devices. Our numerical experiments on the a posteriori error estimation for the 1D diode model problem have shown good results of the proposed error indicators for all seven PDEs of the QCET model.
Стилі APA, Harvard, Vancouver, ISO та ін.
28

"Error Detection and Error Correction for PMU Data as Applied to Power System State Estimators." Master's thesis, 2013. http://hdl.handle.net/2286/R.I.20951.

Повний текст джерела
Анотація:
In modern electric power systems, energy management systems (EMSs) are responsible for monitoring and controlling the generation system and transmission networks. State estimation (SE) is a critical 'must run successful' component within the EMS software. This is dictated by the high reliability requirements and the need to represent the closest real-time model for market operations and other critical analysis functions in the EMS. Traditionally, SE is run with data obtained only from supervisory control and data acquisition (SCADA) devices and systems. However, growing emphasis on improving the performance of SE drives the inclusion of phasor measurement units (PMUs) into the SE input data. PMU measurements are claimed to be more accurate than conventional measurements, and PMUs time-stamp measurements accurately. These widely distributed devices measure the voltage phasors directly; that is, phase information for measured voltages and currents is available. PMUs provide data time stamps to synchronize measurements. Considering the relatively small number of PMUs installed in contemporary power systems in North America, performing SE with only phasor measurements is not feasible. Thus a hybrid SE, including both SCADA and PMU measurements, is the reality for contemporary power system SE. The hybrid approach is the focus of a number of research papers. There are many practical challenges in incorporating PMUs into SE input data. The higher reporting rate of PMUs as compared with SCADA measurements is one of the salient problems. This disparity of reporting rates raises the question of whether buffering the phasor measurements helps to give better estimates of the states. The research presented in this thesis addresses the design of data buffers for PMU data as used in SE applications in electric power systems. The system-theoretic analysis is illustrated using an operating electric power system in the southwest part of the USA. Various instances of state estimation data have been used for analysis purposes. The details of the research, results obtained and conclusions drawn are presented in this document.
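A toy sketch of the buffering question studied, under assumed rates and noise levels: averaging a buffer of 30 frames-per-second PMU phasors down to one value per SCADA scan and comparing it with simply forwarding the latest frame.

import numpy as np

rng = np.random.default_rng(1)
pmu_rate, scada_period = 30, 2.0          # 30 fps PMU, 2 s SCADA scans
n = int(pmu_rate * scada_period)          # PMU frames per SCADA scan

true_phasor = 1.02 * np.exp(1j * np.deg2rad(-12.0))
frames = true_phasor + (rng.normal(0, 5e-3, n)
                        + 1j * rng.normal(0, 5e-3, n))

# Option A: forward the latest frame; option B: average the buffer.
latest = frames[-1]
buffered = frames.mean()
for name, z in [("latest", latest), ("buffered", buffered)]:
    print(f"{name:9s} |V| = {abs(z):.5f}  "
          f"angle = {np.degrees(np.angle(z)):.4f} deg")
# Averaging cuts the noise standard deviation by roughly sqrt(n).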
Dissertation/Thesis
M.S. Electrical Engineering 2013
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Kitchen, John. "The effect of quadrature hybrid errors on a phase difference frequency estimator and methods for correction." Thesis, 1992. http://hdl.handle.net/2440/122344.

Повний текст джерела
Анотація:
This thesis is concerned with the estimation of the frequency of a single sinusoid of fixed but unknown frequency which has been corrupted by errors in a quadrature hybrid, coherent receiver system. The quadrature hybrid receiver is employed to produce in-phase and quadrature baseband signal components which can be sampled and undergo analogue-to-digital conversion. A simple, discrete frequency estimator is derived from the rate of change of signal phase between successive sampling instants after analogue-to-digital conversion. The statistical effects of the errors and imbalances inherent in the quadrature hybrid upon the discrete frequency estimator are studied. This study has been carried out in four stages, forming the content of chapters 2, 3, 4 and 5. In chapter 2, an analytic study of the estimation of the frequency of a single sinusoid which has passed through a quadrature hybrid system is carried out. This study is subdivided so that each of the quadrature hybrid errors is examined individually. A summary of the results derived for each error case is provided, in tabular form, in appendix XII. In chapter 3, the quadrature hybrid and input signal are modelled in a computer simulation. This chapter is subdivided, as with chapter 2, so that each error case is simulated on an individual basis. In each case the error or imbalance is varied and the frequency of the input sinusoid is varied, so that most of the possible error conditions and input frequencies are studied. Simulation results are presented in graphical form and compared with a similar graphical presentation of the theoretical results from chapter 2. In chapter 4, a real quadrature hybrid receiver system is examined and the inherent system errors are measured. These measurements serve to support both the simulations and the theory. In chapter 5, techniques are derived to reduce the degradation of frequency estimation caused by the quadrature hybrid system errors, and both simulation results and a real example are given. It is also demonstrated that the theoretical lower limit for frequency estimation in the presence of normally distributed noise (the Cramér-Rao lower variance bound) can be achieved for this system.
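A minimal sketch of the phase-difference estimator described above, together with an illustrative (assumed) gain and phase imbalance on the quadrature channel of the kind the thesis analyses.

import numpy as np

def phase_difference_freq(z, fs):
    """Estimate the frequency of a complex (I + jQ) sinusoid from the
    average phase change between successive samples."""
    dphi = np.angle(z[1:] * np.conj(z[:-1]))   # per-sample phase steps
    return fs * np.mean(dphi) / (2.0 * np.pi)

fs, f0, n = 1000.0, 123.4, 512
t = np.arange(n) / fs
i_ch = np.cos(2 * np.pi * f0 * t)
# Quadrature channel with a small gain imbalance and phase error,
# mimicking hybrid imperfections (values are illustrative only).
q_ch = 1.05 * np.sin(2 * np.pi * f0 * t + np.deg2rad(2.0))
print(phase_difference_freq(i_ch + 1j * q_ch, fs))  # compare with 123.4 Hz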
Thesis (MAppSc) -- University of Adelaide, Dept. of Applied Mathematics, 1991
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Tsai, Hsin-Tse, and 蔡炘澤. "Maximum Norm A Posteriori Error Estimate for the Quantum-Corrected Energy Transport Model Part III: 1D and 2D Simulations." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/63577846834705598340.

Повний текст джерела
Анотація:
Master's thesis
National Hsinchu University of Education
Master's Program, Department of Applied Mathematics
99
The quantum-corrected energy transport (QCET) model, consisting of seven self-adjoint nonlinear PDEs, describes the steady state of electron and hole flows, their energy transport, and classical and quantum potentials within a nano-scale semiconductor device. We develop a second-order maximum norm a posteriori error estimate proposed by Kopteva [14] for the QCET model, which after scaling involves the scaled Debye length, intrinsic carrier density, Planck constant, and thermal conductivity as the singular perturbation parameters. This estimate can be used as an error indicator for the refinement process in an adaptive algorithm. We present explicit formulas for computing the error indicators, which are indispensable for adaptive computations in the simulation of advanced nano-devices. Our numerical experiments on the a posteriori error estimation for the 1D QCET model problem have shown good results of the proposed error indicators for all seven PDEs of the QCET model. Building on the 1D numerical results, we use all seven PDEs to construct adaptive finite element meshes for the 2D QCET model. For the 2D QCET problem, it is shown that the total number of nodes of the final adaptive mesh produced by the new estimation method is reduced to 11% of that of the old method under the same stopping criteria of the adaptive algorithm.
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Lou, Wen-Lin, and 樓文琳. "Using Finite Elemente Method to Correct Topography Effect of Sub-bottom Thermal gradient and Estimate the Base of Gas Hydrate Stability Zone." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/59552086323672805870.

Повний текст джерела
Анотація:
Master's thesis
National Taiwan University
Institute of Oceanography
97
Because of rough topography, heat flow tends to converge preferentially towards valleys or low areas and to diverge from ridges or peaks on the sea floor. Hence, thermal gradients measured at the seafloor cannot directly give a correct estimate of the true sub-bottom thermal gradient. To calculate the temperature variation more efficiently, we use the ANSYS finite element software to correct such effects and to compute the Base of Gas Hydrate Stability zone (BGHS). In a preliminary test we found that, offshore of southwestern Taiwan, topography effects can shift the estimated sub-bottom temperatures by amounts corresponding to more than one hundred meters in depth, which shows that correcting for topography is necessary. To achieve this, we remove the topographic effects on the temperature gradients measured during cruise No. 786 of the R/V Ocean Researcher I. When applying ANSYS to calculate the temperature gradients, we adopted the data collected from one site as the boundary condition and corrected the errors resulting from the topography effect in this area. The calculated seafloor thermal gradient at a second site, situated on a similar sediment-filled depression, is close to the measured value; at a third site, however, the result differs significantly from the measured value. We believe the difference arises because the third site is located on a ridge, causing heat to be refracted away towards depressions filled with thick sediments. This example emphasizes that not only the topographic effects but also the sedimentation is important to the temperature gradient calculation, and therefore to the estimate of the BGHS. As an application, we use ANSYS to correct the topographic effects at sites KP-4, KP-5-1, and KP-5-2, which are drilling sites for gas hydrate investigation proposed by the Central Geological Survey. After the correction, the BGHSt (corrected BGHS) for the three sites is 311, 298, and 350 mbsf, respectively. These results provide useful reference information for borehole drilling. The model proposed in this study is limited by the lack of sedimentary data and excludes the effect of basement relief. In reality, the sub-bottom thermal gradient variation is affected by the topography, the thickness of the sediments, the basement relief, and the structure of the rock; the results would improve with more information about the structures, the sediment thicknesses, and the thermal conductivities of each layer.
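As an editorial illustration of the final step, the sketch below intersects a linear sediment geotherm with a methane hydrate phase boundary to locate the BGHS; the Dickens & Quinby-Hunt (1994) fit and the hydrostatic pressure assumption are choices made here, not necessarily those of the thesis.

import numpy as np

def bghs_depth(water_depth_m, seafloor_temp_c, grad_c_per_km):
    """Find the Base of Gas Hydrate Stability (meters below sea floor)
    as the depth where the local geotherm crosses the methane hydrate
    phase boundary. The boundary uses the Dickens & Quinby-Hunt (1994)
    fit, 1/T3 = 3.79e-3 - 2.83e-4 * log10(P) (T in K, P in MPa),
    an assumed choice of stability curve."""
    z = np.arange(0.0, 1500.0, 1.0)                    # mbsf
    t_geo = seafloor_temp_c + grad_c_per_km * z / 1e3  # sediment geotherm
    p = 1.03e-2 * (water_depth_m + z)                  # ~hydrostatic, MPa
    t3 = 1.0 / (3.79e-3 - 2.83e-4 * np.log10(p)) - 273.15
    below = np.where(t_geo > t3)[0]                    # geotherm too warm
    return float(z[below[0]]) if below.size else None

# Example: 1200 m water depth, 4 C bottom water, 45 C/km gradient.
print(bghs_depth(1200.0, 4.0, 45.0), "mbsf")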
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Alfonse, Lauren Elizabeth. "Effects of template mass, complexity, and analysis method on the ability to correctly determine the number of contributors to DNA mixtures." Thesis, 2015. https://hdl.handle.net/2144/16179.

Повний текст джерела
Анотація:
In traditional forensic DNA casework, the inclusion or exclusion of individuals who may have contributed to an item of evidence may be dependent upon the assumption on the number of individuals from which the evidence arose. Typically, the determination of the minimum number of contributors (NOC) to a mixture is achieved by counting the number of alleles observed above a given analytical threshold (AT); this technique is known as maximum allele count (MAC). However, advances in polymerase chain reaction (PCR) chemistries and improvements in analytical sensitivities have led to an increase in the detection of complex, low template DNA (LtDNA) mixtures for which MAC is an inadequate means of determining the actual NOC. Despite the addition of highly polymorphic loci to multiplexed PCR kits and the advent of interpretation software that deconvolves DNA mixtures, a gap remains in the DNA analysis pipeline, where an effective method of determining the NOC needs to be established. The emergence of NOCIt, a computational tool which provides the probability distribution on the NOC, may serve as a promising alternative to traditional, threshold-based methods. Utilizing user-provided calibration data consisting of single source samples of known genotype, NOCIt calculates the a posteriori probability (APP) that an evidentiary sample arose from 0 to 5 contributors. The software models baseline noise, reverse and forward stutter proportions, stutter and allele dropout rates, and allele heights. This information is then utilized to determine whether the evidentiary profile originated from one or many contributors. In short, NOCIt provides information not only on the likely NOC, but whether more than one value may be deemed probable. In the latter case, it may be necessary to modify downstream interpretation steps such that multiple values for the NOC are considered or the conclusion that most favors the defense is adopted. Phase I of this study focused on establishing the minimum number of single source samples needed to calibrate NOCIt. Once determined, the performance of NOCIt was evaluated and compared to that of two other methods: the maximum likelihood estimator (MLE) -- accessed via the forensim R package, and MAC. Fifty (50) single source samples proved to be sufficient to calibrate NOCIt, and results indicate NOCIt was the most accurate method of the three. Phase II of this study explored the effects of template mass and sample complexity on the accuracy of NOCIt. Data showed that the accuracy decreased as the NOC increased: for 1- and 5-contributor samples, the accuracy was 100% and 20%, respectively. The minimum template mass from any one contributor required to consistently estimate the true NOC was 0.07 ng -- the equivalent of approximately 10 cells' worth of DNA. Phase III further explored NOCIt and was designed to assess its robustness. Because the efficacy of determining the NOC may be affected by the PCR kit utilized, the results obtained from NOCIt analysis of 1-, 2-, 3-, 4-, and 5-contributor mixtures amplified with AmpFlstr® Identifiler® Plus and PowerPlex® 16 HS were compared. A positive correlation was observed for all NOCIt outputs between kits. Additionally, NOCIt was found to result in increased accuracies when analyzed with 1-, 3-, and 4-contributor samples amplified with Identifiler® Plus and with 5-contributor samples amplified with PowerPlex® 16 HS.
The accuracy rates obtained for 2-contributor samples were equivalent between kits; therefore, the effect of amplification kit type on the ability to determine the NOC was not substantive. Cumulatively, the data indicate that NOCIt is an improvement to traditional methods of determining the NOC and results in high accuracy rates with samples containing sufficient quantities of DNA. Further, the results of investigations into the effect of template mass on the ability to determine the NOC may serve as a caution that forensic DNA samples containing low-target quantities may need to be interpreted using multiple or different assumptions on the number of contributors, as the assumption on the number of contributors is known to affect the conclusion in certain casework scenarios. As a significant degree of inaccuracy was observed for all methods of determining the NOC at severe low template amounts, the data presented also challenge the notion that any DNA sample can be utilized for comparison purposes. This suggests that the ability to detect extremely complex, LtDNA mixtures may not be commensurate with the ability to accurately interpret such mixtures, despite critical advances in software-based analysis. In addition to the availability of advanced comparison algorithms, limitations on the interpretability of complex, LtDNA mixtures may also be dependent on the amount of biological material present on an evidentiary substrate.
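For context, the maximum allele count (MAC) rule that the thesis compares against is simple enough to state in a few lines; the sketch below is a generic illustration, and the example profile is invented.

import math

def min_contributors_mac(profile):
    """Minimum number of contributors by maximum allele count (MAC):
    the largest number of distinct alleles seen at any locus, divided
    by two and rounded up (each contributor carries at most two)."""
    max_alleles = max(len(set(alleles)) for alleles in profile.values())
    return math.ceil(max_alleles / 2)

# Toy mixture profile: alleles detected above the analytical threshold.
profile = {
    "D8S1179": [10, 12, 13, 14, 15],   # 5 alleles -> at least 3 donors
    "TH01":    [6, 7, 9.3],
    "FGA":     [20, 22, 24, 25],
}
print(min_contributors_mac(profile))   # 3; the true NOC may well be higher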
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Baptista, Sara Reis Gomes. "Evaluation of the efficiency of innovation: evidence for European Union regions (NUT-II)." Master's thesis, 2018. http://hdl.handle.net/10773/24520.

Повний текст джерела
Анотація:
At a time when innovation is seen as one of the main drivers of regional economic growth, this study assesses the innovation efficiency of 104 regions (NUT-II) of the European Union from 2006 to 2012. The study builds a ranking of the most efficient regions based on innovation indicators and seeks to understand which factors lie behind these ranking results. The global financial crisis of 2008 also shook all prospects of sustained growth for Europe, so the impact of the crisis on innovation and on the efficiency of the regions is taken into account. To this end, the DEA methodology was used in a first phase to determine the efficiency levels and the scoring of the regions, and in a second phase the PCSE and GMM methodologies were used to analyse the factors that influence the innovation efficiency measured by the proposed indicator. The results show large disparities between regions, notably due to the crisis, with the most efficient regions belonging to Romania, Belgium and Bulgaria. The results also point to human resources as the most significant factor for the positive evolution of innovation efficiency.
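A compact sketch of the envelopment form of an input-oriented CCR DEA model, solved with scipy's linear programming routine; the data are toy numbers standing in for the study's innovation inputs and outputs.

import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.

    X: (m, n) inputs, Y: (s, n) outputs for n decision-making units.
    Solves: min theta s.t. X@lam <= theta*x0, Y@lam >= y0, lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # minimize theta
    A_in = np.c_[-X[:, j0], X]                   # X lam - theta x0 <= 0
    A_out = np.c_[np.zeros(s), -Y]               # -Y lam <= -y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Toy data: 2 inputs, 1 output (e.g. R&D spend, researchers -> patents).
X = np.array([[2.0, 4.0, 8.0, 5.0], [4.0, 2.0, 1.0, 5.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
for j in range(4):
    print(f"region {j}: efficiency = {dea_ccr_input(X, Y, j):.3f}")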
Master's in Economics
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Liu, Jia-Wei, and 劉家瑋. "The Retention Time Alignment for Nontargeted LC/MS Analysis Using Kernel Density Estimation with a Novel Bandwidth Estimator and Phase Correction of Metabolomic 1D 1H-NMR Spectra." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/81822997660854800593.

Повний текст джерела
Анотація:
Master's thesis
National Taiwan University
Graduate Institute of Networking and Multimedia
103
This dissertation presents two algorithms developed to solve computational problems in the detection of small molecules in metabolomics analysis. In the first part of this dissertation, we present LAKE, a tool that aligns the retention times of detected peaks for chromatographic methods coupled to spectrophotometers, such as high performance liquid chromatography, in metabolomics work. Existing tools for retention time correction still cannot properly align the retention times of detected peaks from multiple batches, and some detected peaks are left misaligned. LAKE resolves peak shifts from high data similarity to low data similarity. In each round, detected peaks are clustered in the mass-to-charge (m/z) dimension and then in the retention time (RT) dimension. For each m/z-RT cluster, the bandwidth used in RT density estimation with kernel density estimation (KDE) is chosen by a bandwidth selector. At the end of each round of retention time shift resolution, the m/z and RT of the detected peaks are updated with the average m/z and average RT of the m/z-RT group before the next round of detected peak alignment. LAKE can be applied to aligning retention times from samples of mixed exogenous compounds, from biofluid samples spiked with multiple exogenous compounds, and from complex metabolomics samples containing endogenous compounds, across multiple batches. In the second part of this dissertation, we present PHASION, a tool for automatic phase correction of multiple 1D proton nuclear magnetic resonance (1H-NMR) spectra for metabolomics work. Phase error arises unavoidably when the FID signal is recorded and, after the Fourier transform, appears mixed into the spectrum. Phase correction finds the zeroth-order and first-order phase angles that turn a misphased spectrum into a phase-corrected spectrum before any further data processing. Current 1D 1H-NMR phase correction methods usually require manual parameter and filter tuning by experienced users to obtain desirable results from complex metabolomics spectra, and are thus prone to correction variation and biased quantification. We present a novel alternative method, PHASION, for automatically estimating the phase angles of 1D 1H-NMR metabolomics data. PHASION finds optimal phase angles by evaluating a proposed objective score on relatively stable segments of the spectrum and on the baseline of the spectrum phased with angles (PH0, PH1), approaching the optimal phase angles with a Nelder-Mead simplex optimizer.
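A hedged sketch of automatic phasing with a Nelder-Mead search, as described above; the objective used here (penalizing the imaginary part and negative baseline dips) is a simple stand-in for PHASION's score on stable segments.

import numpy as np
from scipy.optimize import minimize

def apply_phase(spec, ph0, ph1):
    """Apply zeroth-order (ph0) and first-order (ph1) phase angles,
    in radians, varying linearly across the spectrum."""
    k = np.linspace(0.0, 1.0, len(spec))
    return spec * np.exp(1j * (ph0 + ph1 * k))

def autophase(spec):
    """Nelder-Mead search for (ph0, ph1). The objective drives the
    spectrum toward a real, non-negative absorption line shape; it is
    a stand-in for PHASION's score on stable baseline segments."""
    def score(p):
        z = apply_phase(spec, p[0], p[1])
        return np.sum(z.imag ** 2) + np.sum(np.minimum(z.real, 0.0) ** 2)
    return minimize(score, x0=[0.0, 0.0], method="Nelder-Mead").x

# Toy spectrum: two Lorentzian peaks, then an artificial phase error.
x = np.linspace(0.0, 1.0, 2048)
lorentz = lambda c, w: w ** 2 / ((x - c) ** 2 + w ** 2)
ideal = lorentz(0.3, 0.004) + 0.6 * lorentz(0.7, 0.004)
misphased = apply_phase(ideal.astype(complex), 0.8, -1.2)
ph0, ph1 = autophase(misphased)
print(ph0, ph1)   # expected near (-0.8, +1.2), undoing the injected error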
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Dongmo, Jiongo Valéry. "Inférence robuste à la présence des valeurs aberrantes dans les enquêtes." Thèse, 2015. http://hdl.handle.net/1866/13720.

Повний текст джерела
Анотація:
This thesis focuses on the treatment of representative outliers in two important aspects of surveys: small area estimation and imputation for item non-response. Concerning small area estimation, robust estimators in unit-level models have been studied. Sinha & Rao (2009) proposed estimation procedures designed for small area means, based on robustified maximum likelihood estimates of the linear mixed model parameters and robust empirical best linear unbiased predictors of the random effect of the underlying model. Their robust methods for estimating area means are of the plug-in type, and in view of the results of Chambers (1986), the resulting robust estimators may be biased in some situations. Bias-corrected estimators have been proposed by Chambers et al. (2014). In addition, these robust small area estimators were associated with the estimation of the mean squared error (MSE). Sinha & Rao (2009) proposed a parametric bootstrap procedure based on the robust estimates of the parameters of the underlying linear mixed model to estimate the MSE. Analytical procedures for the estimation of the MSE have been proposed in Chambers et al. (2014). However, their theoretical validity has not been formally established and their empirical performance is not fully satisfactory. Here, we investigate two new approaches for a robust version of the empirical best linear unbiased predictor: the first relies on the work of Chambers (1986), while the second uses the concept of conditional bias as an influence measure to assess the impact of units in the population. These two classes of robust small area estimators also include a correction term for the bias. However, they are both fully bias-corrected, in the sense that the correction term takes into account the potential impact of the other domains on the small area of interest, unlike that of Chambers et al. (2014), which focuses only on the domain of interest. Under certain conditions, non-negligible bias is expected for the Sinha-Rao method, while the proposed methods exhibit significant bias reduction, controlled by appropriate choices of the influence function and tuning constants. Monte Carlo simulations are conducted, and comparisons are made between the new robust estimators, the Sinha-Rao estimator, and the bias-corrected estimator. Empirical results suggest that the Sinha-Rao method and the bias-adjusted estimator of Chambers et al. (2014) may exhibit a large bias, while the new procedures often offer better performance in terms of bias and mean squared error. In addition, we propose a new bootstrap procedure for MSE estimation of robust small area predictors. Unlike existing approaches, we formally prove the asymptotic validity of the proposed bootstrap method. Moreover, the proposed method is semi-parametric, i.e., it does not rely on specific distributional assumptions about the errors and random effects of the unit-level model underlying the small-area estimation; thus it is particularly attractive and more widely applicable. We assess the finite sample performance of our bootstrap estimator through Monte Carlo simulations. The results show that our procedure performs well and outperforms existing ones. Application of the proposed method is illustrated by analyzing well-known outlier-contaminated small county crop area data from north-central Iowa farms and Landsat satellite images. Concerning imputation in the presence of item non-response, some single imputation methods have been studied.
Deterministic regression imputation, which includes ratio imputation and mean imputation, is often used in surveys. These imputation methods may lead to biased imputed estimators if the imputation model or the non-response model is not properly specified. Recently, doubly robust imputed estimators have been developed. However, in the presence of outliers, doubly robust imputed estimators can be very unstable. Using the concept of conditional bias as a measure of influence (Beaumont, Haziza and Ruiz-Gazen, 2013), we propose an outlier-robust version of the doubly robust imputed estimator; this estimator is thus referred to as a triple-robust imputed estimator. The results of simulation studies show that the proposed estimator performs well for an appropriate choice of the tuning constant.
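As a brief illustration of the estimator being robustified, here is the standard doubly robust estimator of a mean under item non-response, with a single outlier showing why robustification is needed; the models and data are toy assumptions.

import numpy as np

def doubly_robust_mean(y, r, m_hat, p_hat):
    """Doubly robust estimator of a mean under item non-response:
    mu = mean( m_hat + r * (y - m_hat) / p_hat ).

    y: outcomes (only trusted where r == 1), r: response indicator,
    m_hat: imputation-model predictions, p_hat: response probabilities.
    Unbiased if either model is correct; a single large residual from
    an outlier can destabilize it, which motivates the conditional-
    bias-based robustification proposed in the thesis."""
    y = np.where(r == 1, y, 0.0)
    return np.mean(m_hat + r * (y - m_hat) / p_hat)

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0, 1, n)
y = 2.0 + 3.0 * x + rng.normal(0, 0.5, n)
y[0] = 80.0                                  # one representative outlier
p = 1.0 / (1.0 + np.exp(-(0.3 + 1.5 * x)))   # response propensity
r = (rng.uniform(0, 1, n) < p).astype(int)
m_hat = 2.0 + 3.0 * x                        # imputation-model predictions
print(doubly_robust_mean(y, r, m_hat, p))    # compare with true mean 3.5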
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Mahembe, Edmore. "Development aid and its impact on poverty reduction in developing countries : a dynamic panel data approach." Thesis, 2019. http://hdl.handle.net/10500/26490.

Повний текст джерела
Анотація:
Foreign aid has been used on the one hand by donors as an important international relations policy tool and on the other hand by developing countries as a source of funds for development. Since its inception in the 1940s, foreign aid has been one of the most researched topics in development economics. This study adds to this growing aid effectiveness literature, with a particular focus on the under-researched relationship between foreign aid and extreme poverty. The main empirical assessment is based on a sample of 120 developing countries from 1981 to 2013. The study had two main objectives, namely: (i) to estimate the impact of foreign aid on poverty reduction and (ii) to examine the direction of causality between foreign aid and poverty in developing countries. From these two broad objectives follow the specific objectives, which include to: (i) examine the overall impact of foreign aid (total official development assistance) on extreme poverty, (ii) investigate the impact of different proxies of foreign aid on the three proxies of extreme poverty, (iii) assess whether political freedom (democracy) or economic freedom enhances the effectiveness of foreign aid, (iv) compare the impact of foreign aid on extreme poverty across developing country income groups, and (v) examine the direction of causality between extreme poverty and foreign aid. To achieve these objectives, the study employed two main dynamic panel data econometric estimation methods, namely the system generalised method of moments (SGMM) technique and the panel vector error correction model (VECM) Granger causality framework. While the SGMM was used to assess the impact of foreign aid on extreme poverty, the panel VECM Granger causality framework was used to examine the direction of causality between foreign aid and poverty. The SGMM was used because of its ability to deal with endogeneity by controlling for simultaneity and unobserved heterogeneity, whereas the panel VECM was preferred because the variables were stationary and cointegrated.
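To illustrate the error-correction logic behind the VECM Granger analysis, here is a single-series Engle-Granger-style sketch on simulated data; the thesis's estimator works on the full panel, so this shows only the two-step idea, and the variable names are placeholders.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
aid = np.cumsum(rng.normal(0, 1, n))                 # I(1) "foreign aid"
pov = 0.8 * aid + np.cumsum(rng.normal(0, 0.3, n))   # cointegrated "poverty"

# Step 1: long-run (cointegrating) regression; keep the residuals.
longrun = sm.OLS(pov, sm.add_constant(aid)).fit()
ect = longrun.resid                                  # error-correction term

# Step 2: short-run ECM -- d(pov) on lagged d(aid) and the lagged ECT.
d_pov, d_aid = np.diff(pov), np.diff(aid)
X = sm.add_constant(np.column_stack([d_aid[:-1], ect[1:-1]]))
ecm = sm.OLS(d_pov[1:], X).fit()
print(ecm.params)    # a significant ECT coefficient => long-run causality
print(ecm.pvalues)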
Economics
D. Phil. (Economics)
Стилі APA, Harvard, Vancouver, ISO та ін.