Scientific literature on the topic "Corrector estimates"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the topical lists of journal articles, books, theses, conference reports, and other scholarly sources on the topic "Corrector estimates".

Next to each source in the list of references there is an "Add to bibliography" button. Click on this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Corrector estimates"

1

Muntean, A., and T. L. van Noorden. "Corrector estimates for the homogenization of a locally periodic medium with areas of low and high diffusivity." European Journal of Applied Mathematics 24, no. 5 (April 2, 2013): 657–77. http://dx.doi.org/10.1017/s0956792513000090.

Full text
Abstract:
We prove an upper bound for the convergence rate of the homogenization limit ε → 0 for a linear transmission problem for an advection–diffusion(–reaction) system posed in areas with low and high diffusivity, where ε is a suitable scale parameter. In this way we rigorously justify the formal homogenization asymptotics obtained in [37] (van Noorden, T. and Muntean, A. (2011) Homogenization of a locally-periodic medium with areas of low and high diffusivity. Eur. J. Appl. Math. 22, 493–516). We do this by providing a corrector estimate. The main ingredients for the proof of the correctors include integral estimates for rapidly oscillating functions with prescribed average, properties of the macroscopic reconstruction operators, energy bounds, and extra two-scale regularity estimates. The whole procedure essentially relies on a good understanding of the analysis of the limit two-scale problem.
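As a side note for readers new to the area, the flavor of such homogenization results can be seen in the simplest one-dimensional divergence-form setting, where the effective coefficient is the harmonic mean of the oscillating one and the first-order corrector is explicit. The sketch below is a minimal Python illustration under these textbook assumptions, not the paper's two-phase locally periodic model:

```python
import math

def effective_coefficient(a, n=100_000):
    """1-D periodic homogenization: the effective coefficient a* is the
    harmonic mean of a over one periodicity cell (midpoint rule on [0, 1])."""
    mean_inverse = sum(1.0 / a((k + 0.5) / n) for k in range(n)) / n
    return 1.0 / mean_inverse

def corrector_slope(a, y, astar):
    """Derivative chi'(y) of the first-order corrector chi: chi'(y) = a*/a(y) - 1."""
    return astar / a(y) - 1.0

# Example oscillating coefficient on the unit cell.
a = lambda y: 2.0 + math.sin(2.0 * math.pi * y)
astar = effective_coefficient(a)
# For a(y) = 2 + sin(2*pi*y), the exact value is sqrt(3).
print(astar)  # ≈ 1.7320508
```

For this coefficient the corrector slope a*/a(y) − 1 has zero cell average, which is precisely the structure that corrector estimates of the kind proved above quantify as ε → 0.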
2

Reichelt, Sina. "Corrector estimates for a class of imperfect transmission problems." Asymptotic Analysis 105, no. 1-2 (October 6, 2017): 3–26. http://dx.doi.org/10.3233/asy-171432.

Full text
3

Bella, Peter, and Felix Otto. "Corrector Estimates for Elliptic Systems with Random Periodic Coefficients." Multiscale Modeling & Simulation 14, no. 4 (January 2016): 1434–62. http://dx.doi.org/10.1137/15m1037147.

Full text
4

Muntean, Adrian, and Sina Reichelt. "Corrector Estimates for a Thermodiffusion Model with Weak Thermal Coupling." Multiscale Modeling & Simulation 16, no. 2 (January 2018): 807–32. http://dx.doi.org/10.1137/16m109538x.

Full text
5

Goldman, R. E., A. Bajo, and N. Simaan. "Algorithms for autonomous exploration and estimation in compliant environments." Robotica 31, no. 1 (March 28, 2012): 71–87. http://dx.doi.org/10.1017/s0263574712000100.

Full text
Abstract:
This paper investigates algorithms for enabling surgical slave robots to autonomously explore shape and stiffness of surgical fields. The paper addresses methods for estimating shape and impedance parameters of tissue and methods for autonomously exploring perceived impedance during tool interaction inside a tissue cleft. A hybrid force-motion controller and a cycloidal motion path are proposed to address shape exploration. An adaptive exploration algorithm for segmentation of surface features and a predictor-corrector algorithm for exploration of deep features are introduced based on discrete impedance estimates. These estimates are derived from localized excitation of tissue coupled with simultaneous force measurements. Shape estimation is validated in ex-vivo bovine tissue and attains surface estimation errors of less than 2.5 mm with force sensing resolutions achievable with current technologies in minimally invasive surgical robots. The effect of scan patterns on the accuracy of the shape estimate is demonstrated by comparing the shape estimate of a Cartesian raster scan with an overlapping cycloid scan pattern. It is shown that the latter pattern filters the shape estimation bias due to frictional drag forces. Surface impedance exploration is validated to successfully segment compliant environments on flexible inorganic models. Simulations and experiments show that the adaptive search algorithm reduces overall time requirements relative to the complexity of the underlying structures. Finally, autonomous exploration of deep features is demonstrated in an inorganic model and ex-vivo bovine tissue. It is shown that estimates of least constraint based on singular value decomposition of locally estimated tissue stiffness can generate motion to accurately follow a tissue cleft with a predictor-corrector algorithm employing alternating steps of position and admittance control.
We believe that these results demonstrate the potential of these algorithms for enabling “smart” surgical devices capable of autonomous execution of intraoperative surgical plans.
6

Rosenfeld, Joel A., and Warren E. Dixon. "Convergence rate estimates for the kernelized predictor corrector method for fractional order initial value problems." Fractional Calculus and Applied Analysis 24, no. 6 (November 22, 2021): 1879–98. http://dx.doi.org/10.1515/fca-2021-0081.

Full text
Abstract:
This manuscript presents a kernelized predictor corrector (KPC) method for fractional order initial value problems, which replaces linear interpolation with interpolation by a radial basis function (RBF) in a predictor-corrector scheme. Specifically, the class of Wendland RBFs is employed as the basis function for interpolation, and a convergence rate estimate is proved based on the smoothness of the particular kernel selected. Use of the Wendland RBFs over the Mittag-Leffler kernel functions employed in a previous iteration of the kernelized method removes the problems encountered near the origin in [11]. This manuscript performs several numerical experiments, each with an exact known solution, and compares the results to another frequently used fractional Adams-Bashforth-Moulton method. Ultimately, it is demonstrated that the KPC method is more accurate but requires more computation time than the algorithm in [4].
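For orientation, the predictor-corrector idea that such schemes refine can be sketched in a few lines. Below is a classical one-step variant (explicit Euler predictor, trapezoidal corrector, i.e. Heun's method), applied to y' = −y; this is a generic illustration, not the paper's kernelized fractional-order scheme:

```python
import math

def predictor_corrector(f, y0, t0, t1, n):
    """One-step predictor-corrector (Heun's method):
    predict with explicit Euler, then correct with the trapezoidal rule."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y_pred = y + h * f(t, y)                        # predictor (Euler)
        y = y + 0.5 * h * (f(t, y) + f(t + h, y_pred))  # corrector (trapezoid)
        t += h
    return y

# y' = -y, y(0) = 1  =>  exact solution y(1) = exp(-1)
approx = predictor_corrector(lambda t, y: -y, 1.0, 0.0, 1.0, 100)
print(abs(approx - math.exp(-1)))  # second-order accurate: error well below 1e-4
```

The corrector step is what lifts the first-order Euler prediction to second-order accuracy; kernelized variants replace the underlying polynomial interpolation with RBF interpolation to obtain rates tied to the kernel's smoothness.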
7

Capdeboscq, Yves, Timo Sprekeler, and Endre Süli. "Finite element approximation of elliptic homogenization problems in nondivergence-form." ESAIM: Mathematical Modelling and Numerical Analysis 54, no. 4 (June 16, 2020): 1221–57. http://dx.doi.org/10.1051/m2an/2019093.

Full text
Abstract:
We use uniform W^{2,p} estimates to obtain corrector results for periodic homogenization problems of the form A(x/ε):D²u_ε = f subject to a homogeneous Dirichlet boundary condition. We propose and rigorously analyze a numerical scheme based on finite element approximations for such nondivergence-form homogenization problems. The second part of the paper focuses on the approximation of the corrector and numerical homogenization for the case of nonuniformly oscillating coefficients. Numerical experiments demonstrate the performance of the scheme.
8

Oghonyon, Jimevwo Godwin, Matthew Etinosa Egharevba, and Ogbu Famous Imaga. "The Extended Block Predictor-Block Corrector Method for Computing Fuzzy Differential Equations." WSEAS TRANSACTIONS ON MATHEMATICS 22 (September 19, 2022): 1–12. http://dx.doi.org/10.37394/23206.2023.22.1.

Full text
Abstract:
Over the years, scholars have developed predictor-corrector methods to provide estimates for ordinary differential equations (ODEs). These have often been reduced to a predict-correct loop with no convergence criterion for each iteration and no suitably varying step size for error control. This study considers computing fuzzy differential equations employing the extended block predictor-block corrector method (EBP-BCM). The method of interpolation and collocation combined with a multinomial power series as the basis-function approximation is used. The principal local truncation errors of the block predictor-block corrector method are utilized to derive convergence criteria ensuring speedy convergence of each iteration and control of the error(s). The findings reveal the ability of this technique to speed up the rate of convergence by varying the step size and to ensure error control. Examples are solved to showcase the efficiency and accuracy of the technique.
9

Gloria, Antoine, and Felix Otto. "Quantitative estimates on the periodic approximation of the corrector in stochastic homogenization." ESAIM: Proceedings and Surveys 48 (January 2015): 80–97. http://dx.doi.org/10.1051/proc/201448003.

Full text
10

Fatima, Tasnim, Adrian Muntean, and Mariya Ptashnyk. "Unfolding-based corrector estimates for a reaction–diffusion system predicting concrete corrosion." Applicable Analysis 91, no. 6 (June 2012): 1129–54. http://dx.doi.org/10.1080/00036811.2011.625016.

Full text

Theses on the topic "Corrector estimates"

1

Vo, Anh Khoa. "Corrector homogenization estimates for PDE systems with coupled fluxes posed in media with periodic microstructures." Doctoral thesis, Gran Sasso Science Institute, 2018. http://hdl.handle.net/20.500.12571/9693.

Full text
Abstract:
The purpose of this thesis is the derivation of corrector estimates justifying the upscaling of systems of partial differential equations (PDEs) with coupled fluxes posed in media with microstructures (like porous media). Such models play an important role in the understanding of, for example, drug-delivery mechanisms, where the involved chemical species diffusing inside the domain are assumed to obey perhaps other transport mechanisms and certain non-dissipative nonlinear processes within the pore space and at the boundaries of the perforated media (e.g. interaction, chemical reaction, aggregation, deposition). In this thesis, our corrector estimates provide a quantitative analysis in terms of convergence rates in suitable norms, i.e. as the small homogenization parameter tends to zero, the differences between the micro- and macro-concentrations and between the corresponding micro- and macro-concentration gradients are controlled in terms of the small parameter. As preparation, we are first concerned with the weak solvability of the microscopic models as well as with the fundamental asymptotic homogenization procedures that are behind the derivation of the corresponding upscaled models. We report results on three connected mathematical problems: 1. Asymptotic analysis of microscopic semi-linear elliptic equations/systems. We explore the asymptotic analysis of a prototype model including the interplay between stationary diffusion and both surface and volume chemical reactions in porous media. Our interest lies in deriving homogenization limits (upscaling) for alike systems, and particularly, in justifying rigorously the obtained averaged descriptions. We prove the well-posedness of the microscopic problem ensuring also the positivity and boundedness of the involved concentrations. 
Then we use the structure of the two-scale expansions to derive corrector estimates delimitating quantitatively the convergence rate of the asymptotic approximates to the macroscopic limit concentrations and their gradients. High-order corrector estimates are also obtained. The semi-linear auxiliary problems are tackled by a fixed-point homogenization argument. Our techniques include also Moser-like iteration techniques, a variational formulation, two-scale asymptotic expansions as well as suitable energy estimates. 2. Corrector estimates for a Smoluchowski-Soret-Dufour model. We consider a thermodiffusion system, which is a coupled system of PDEs and ODEs that account for the heat-driven diffusion dynamics of hot colloids in periodic heterogeneous media. This model describes the joint evolution of temperature and colloidal concentrations in a saturated porous tissue where the Smoluchowski interactions for aggregation process and a linear deposition process take place. By a fixed-point argument, we prove the local existence and uniqueness results for the upscaled system. To obtain the corrector estimates, we exploit the concept of macroscopic reconstructions as well as suitable integral estimates to control boundary interactions. 3. Corrector estimates for a non-stationary Stokes-Nernst-Planck-Poisson system. We investigate a non-stationary Stokes-Nernst-Planck-Poisson system posed in a perforated domain as originally proposed by Knabner and his co-authors (see e.g. [98] and [99]). Starting off with the setting from [99], we complete the results by proving corrector estimates for the homogenization procedure. Main difficulties are connected to the choice of boundary conditions for the Poisson part of the system as well as with the scaling of the Stokes part of the system.
2

Högström, Martin. "Wind Climate Estimates - Validation of Modelled Wind Climate and Normal Year Correction." Thesis, Uppsala University, Air and Water Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-8023.

Full text
Abstract:

Long time average wind conditions at potential wind turbine sites are of great importance when deciding if an investment will be economically safe. Wind climate estimates such as these are traditionally done with in situ measurements for a number of months. During recent years, a wind climate database has been developed at the Department of Earth Sciences, Meteorology at Uppsala University. The database is based on model runs with the higher order closure mesoscale MIUU-model in combination with long term statistics of the geostrophic wind, and is now used as a complement to in situ measurements, hence speeding up the process of turbine siting. With this background, a study has been made investigating how well actual power productions during the years 2004-2006 from 21 Swedish wind turbines correlate with theoretically derived power productions for the corresponding sites.

When comparing theoretically derived power productions based on long-term statistics with measurements from a shorter time period, a correction is necessary to make relevant comparisons. This normal-year correction is a main focus, and a number of different wind energy indices used for this purpose are evaluated: two publicly available ones (the Swedish and Danish Wind Indices) and one derived theoretically from physical relationships and NCEP/NCAR reanalysis data (the Geostrophic Wind Index). Initial testing suggests in some cases very different results when correcting with the three indices, and further investigation is necessary. An evaluation of the Geostrophic Wind Index is made with the use of in situ measurements.

When correcting measurement periods limited in time to a long-term average, a larger statistical dispersion is expected with shorter measurement periods, decreasing with longer periods. In order to investigate this assumption, a wind speed measurement dataset of 7 years was corrected with the Geostrophic Wind Index, simulating a number of hypothetical measurement periods of various lengths. When normal-year correcting a measurement period of specific length, the statistical dispersion decreases significantly during the first 10 months. A reduction to about half the initial statistical dispersion can be seen after just 5 months of measurements.

Results show that the theoretical normal-year corrected power productions are in general around 15-20% lower than expected. A probable explanation for the larger part of this bias is serious problems with the reported time-not-in-operation for wind turbines in official power production statistics, which makes it impossible to compare actual power production with theoretically derived values without more detailed information. The theoretically derived Geostrophic Wind Index correlates well with measurements; however, the theoretically expected cubic relationship with wind speed seems to account for the total energy of the wind, an amount that cannot be fully absorbed by the turbines when wind speed conditions are far above normal.



3

Brenner, Andreas. "A-posteriori error estimates for pressure-correction schemes." Doctoral thesis (reviewers: Eberhard Bänsch, Charalambos Makridakis), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2016. http://d-nb.info/1114499692/34.

Full text
4

Remund, Todd Gordon. "A Naive, Robust and Stable State Estimate." BYU ScholarsArchive, 2008. https://scholarsarchive.byu.edu/etd/1424.

Full text
Abstract:
A naive approach to filtering for feedback control of dynamic systems that is robust and stable is proposed. Simulations are run on the filters presented to investigate the robustness properties of each filter. Each simulation with the comparison of the filters is carried out using the usual mean squared error. The filters to be included are the classic Kalman filter, Krein space Kalman, two adjustments to the Krein filter with input modeling and a second uncertainty parameter, a newly developed filter called the Naive filter, bias corrected Naive, exponentially weighted moving average (EWMA) Naive, and bias corrected EWMA Naive filter.
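For context, the classic Kalman filter that serves as the baseline in this comparison can be sketched briefly. The following is a minimal one-dimensional constant-state version, a generic illustration rather than the thesis's Naive or Krein-space variants:

```python
def kalman_1d(measurements, x0=0.0, p0=1.0, r=0.1, q=0.0):
    """Scalar Kalman filter for a constant state observed with noise variance r.
    x0, p0: initial state estimate and its variance; q: process noise variance."""
    x, p = x0, p0
    for z in measurements:
        p = p + q                 # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # correct the estimate with the innovation
        p = (1.0 - k) * p         # posterior variance shrinks
    return x, p

# Noisy readings scattered around a true value of 1.0
zs = [1.2, 0.8, 1.1, 0.9] * 5
est, var = kalman_1d(zs)
print(est, var)  # estimate near 1.0, posterior variance well below p0
```

Robust variants such as those studied in the thesis modify the gain computation or the noise model so that a few outlying measurements do not dominate the estimate.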
5

Pelletier, Stéphane. "High-resolution video synthesis from mixed-resolution video based on the estimate-and-correct method." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=79253.

Full text
Abstract:
A technique to increase the frame rate of digital video cameras at high resolution is presented. The method relies on special video hardware capable of simultaneously generating low-speed, high-resolution frames and high-speed, low-resolution frames. The algorithm follows an estimate-and-correct approach, in which a high-resolution estimate is first produced by translating the pixels of the high-resolution frames produced by the camera with respect to the motion dynamics observed in the low-resolution ones. The estimate is then compared against the current low-resolution frame and corrected locally as necessary for consistency with the latter. This is done by replacing the wrong pixels of the estimate with pixels from a bilinear interpolation of the current low-resolution frame. Because of their longer exposure time, high-resolution frames are more prone to motion blur than low-resolution frames, so a motion blur reduction step is also applied. Simulations demonstrate the ability of our technique to synthesize high-quality, high-resolution frames at modest computational expense.
6

Horne, Lyman D., and Ricky G. Dye. "AN INEXPENSIVE DATA ACQUISITION SYSTEM FOR MEASURING TELEMETRY SIGNALS ON TEST RANGES TO ESTIMATE CHANNEL CHARACTERISTICS." International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/608407.

Full text
Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada
In an effort to determine a more accurate characterization of the multipath fading effects on telemetry signals, the BYU telemetering group is implementing an inexpensive data acquisition system to measure these effects. It is designed to measure important signals in a diversity combining system. The received RF envelope, AGC signal, and the weighting signal for each beam, as well as the IRIG B time stamp will be sampled and stored. This system is based on an 80x86 platform for simplicity, compactness, and ease of use. The design is robust and portable to accommodate measurements in a variety of locations including aircraft, ground, and mobile environments.
7

Sogunro, Babatunde Oluwasegun. "Nonresponse in industrial censuses in developing countries: some proposals for the correction of biased estimators." Thesis, University of Hull, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.278593.

Full text
8

Lincoln, James. "The Generalized Empirical Likelihood estimator with grouped data: bias correction in Linked Employer-Employee models." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/the-generalized-empirical-likelihood-estimator-with-grouped-data--bias-correction-in-linked-employeremployee-models(5b100c37-0093-472b-95ad-aee519d1efd4).html.

Full text
Abstract:
This thesis comprises two parts. Part I: It is common for the micro-data used in empirical economic applications to exhibit a natural ordering into groups. Angrist (1991) uses dummy variables based on such grouping to form instruments for consistent estimation. Khatoon et al. (2014) and Andrews et al. (2016) extend the GEL class of estimators to the case where moment conditions are specified on a group-by-group basis and refer to the resulting estimator as group-GEL. A natural consequence of basing instruments or moment conditions on groups is that the degree of over-identification can increase significantly. Following Bekker (1994), it is recognized that inference based on conventional standard errors is incorrect in the presence of many instruments. Furthermore, when using many moment conditions, two-stage GMM is biased. Although the bias of Generalized Empirical Likelihood (GEL) is robust to the number of instruments, Newey and Windmeijer (2009) show that the conventional standard errors are too small. They propose an alternative variance estimator for GEL that is consistent under conventional and many-weak moment asymptotics. In this part of the thesis I demonstrate that for a particular specification of moment conditions, the group-GEL estimator is more efficient than GEL. I also extend the Newey and Windmeijer (2009) many-moment asymptotic framework to group-GEL. Simulation results demonstrate that group-GEL is robust to many moment conditions, and t-statistic rejection frequencies using the alternative variance estimator are much improved compared to using conventional standard errors. Part II: Following the seminal paper of Abowd et al. (1999), Linked Employer-Employee datasets are commonly used in studies decomposing sources of wage variation into unobservable worker and firm effects.
If it is assumed that the correlation between these worker and firm effects can be interpreted as a measure of sorting in labour markets, then an efficient matching process between workers and firms would result in a positive correlation. However, empirical evidence has failed to support this assertion. As a possible answer to this apparent paradox, Andrews et al. (2008) show the estimation of the correlation is biased as a function of the amount of movement of workers between firms, so-called Limited Mobility Bias (LMB); furthermore they provide formulae to correct this bias. However, due to computational restrictions, application of these corrections is infeasible, given the size of datasets typically used. In this part of the thesis I introduce an estimation technique to make the bias-correction estimators of Andrews et al. (2008) feasible. Monte Carlo experiments using the bias-corrected estimators demonstrate that LMB can be eliminated from datasets of comparable size to real data. Finally, I apply the bias-correction techniques to a linear model based on the Danish IDA, and find that correcting the correlation between the worker and firm effects due to LMB provides insufficient evidence to resolve the above paradox.
9

Kitchen, John. "The effect of quadrature hybrid errors on a phase difference frequency estimator and methods for correction." Title page, contents and summary only, 1991. http://web4.library.adelaide.edu.au/theses/09AS/09ask62.pdf.

Full text
10

Abyad, Emad. "Modeled Estimates of Solar Direct Normal Irradiance and Diffuse Horizontal Irradiance in Different Terrestrial Locations." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36499.

Full text
Abstract:
The transformation of solar energy into electricity is starting to impact the overall worldwide energy production mix. Photovoltaic-generated electricity can play a significant role in minimizing the use of non-renewable energy sources. Sunlight consists of three main components: global horizontal irradiance (GHI), direct normal irradiance (DNI) and diffuse horizontal irradiance (DHI). Typically, these components are measured using specialized instruments in order to study solar radiation at any location. However, these measurements are not always available, especially in the case of the DNI and DHI components of sunlight. Consequently, many models have been developed to estimate these components from available GHI data. These models have their own merits. For this thesis, solar radiation data collected at four locations have been analyzed. The data come from Al-Hanakiyah (Saudi Arabia), Boulder (U.S.), Ma’an (Jordan), and Ottawa (Canada). The BRL, Reindl*, DISC, and Perez models have been used to estimate DNI and DHI data from the experimentally measured GHI data. The findings show that the Reindl* and Perez models offered similar accuracy in computing DNI and DHI values when compared with detailed experimental data for Al-Hanakiyah and Ma’an. For Boulder, the Perez and BRL models have similar estimation abilities for DHI values, and the DISC and Perez models are better estimators of DNI. The Reindl* model performs better when modeling DHI and DNI for Ottawa data. The BRL and DISC models show similar error metrics, except in the case of the Ma’an location, where the BRL model shows high error metric values in terms of MAE, RMSE, and standard deviation (σ). The Boulder and Ottawa datasets were not complete, which affected the outcomes with regards to the model performance metrics. Moreover, the metrics show very high, unreasonable values in terms of RMSE and σ.
It is advised that a global model be developed by collecting data from many locations as a way to help minimize the error between the actual and modeled values since the current models have their own limitations. Availability of multi-year data, parameters such as albedo and aerosols, and one minute to hourly time steps data could help minimize the error between measured and modeled data. In addition to having accurate data, analysis of spectral data is important to evaluate their impact on solar technologies.

Books on the topic "Corrector estimates"

1

Great Britain. Lord Chancellor's Department. The Lord Chancellor's Department's supply estimates: Main estimates 2001-02, correction slips and winter supplementary estimate 2001-02. London: Stationery Office, 2001.

Find full text
2

Heckman, James J. Bias corrected estimates of GED returns. Cambridge, Mass.: National Bureau of Economic Research, 2006.

Find full text
3

Henry, S. G. B. UK manufacturing investment in the 1980s: Some estimates using error correction models. London: National Institute of Economic and Social Research, 1988.

Find full text
4

United States. Congress. House. Committee on Education and Labor. Access demonstration programs correction: Report (to accompany H.R. 678) (including cost estimate of the Congressional Budget Office). [Washington, D.C.?]: U.S. G.P.O., 1989.

Find full text
5

Anderson, Richard G. Analysis of panel vector error correction models using maximum likelihood, the bootstrap, and canonical-correlation estimators. [St. Louis, Mo.]: Federal Reserve Bank of St. Louis, 2006.

Find full text
6

Isaac, Rani. How population growth estimates affect housing market projections: Will economic growth hold up under the weight of the housing correction? Sacramento, CA: California State Library, California Research Bureau, 2007.

Find full text
7

United States. Congress. House. Committee on Commerce. Medicaid health insuring organization correction: Report (to accompany H.R. 3056) (including cost estimate of the Congressional Budget Office). [Washington, D.C.?]: U.S. G.P.O., 1996.

8

United States. Congress. House. Committee on the Judiciary. Carjacking Correction Act of 1996: Report (to accompany H.R. 3676) (including cost estimate of the Congressional Budget Office). [Washington, D.C.?]: U.S. G.P.O., 1996.

9

Colorado River Indian Reservation Boundary Correction Act: Report (to accompany H.R. 2941) (including cost estimate of the Congressional Budget Office). [Washington, D.C.]: U.S. G.P.O., 2004.

10

United States. Congress. House. Committee on the Judiciary. Correction of committee names in United States Code: Report (to accompany H.R. 4777) (including cost estimate of the Congressional Budget Office). [Washington, D.C.?]: U.S. G.P.O., 1994.


Book chapters on the topic "Corrector estimates"

1

Gosse, Laurent. "Lyapunov Functional for Linear Error Estimates." In Computing Qualitatively Correct Approximations of Balance Laws, 41–61. Milano: Springer Milan, 2013. http://dx.doi.org/10.1007/978-88-470-2892-0_3.

2

Laurel, Jacob, and Sasa Misailovic. "Continualization of Probabilistic Programs With Correction." In Programming Languages and Systems, 366–93. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44914-8_14.

Abstract:
Probabilistic Programming offers a concise way to represent stochastic models and perform automated statistical inference. However, many real-world models have discrete or hybrid discrete-continuous distributions, for which existing tools may suffer non-trivial limitations. Inference and parameter estimation can be exceedingly slow for these models because many inference algorithms compute results faster (or exclusively) when the distributions being inferred are continuous. To address this discrepancy, this paper presents Leios. Leios is the first approach for systematically approximating arbitrary probabilistic programs that have discrete or hybrid discrete-continuous random variables. The approximate programs have all their variables fully continualized. We show that once we have the fully continuous approximate program, we can perform inference and parameter estimation faster by exploiting the existing support that many languages offer for continuous distributions. Furthermore, we show that the estimates obtained when performing inference and parameter estimation on the continuous approximation are still comparably close to both the true parameter values and the estimates obtained when performing inference on the original model.
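The core move described in this abstract, replacing a discrete distribution with a fully continuous surrogate, can be illustrated independently of the Leios tool by the classic moment-matched normal approximation to a binomial; this is only a sketch of the idea, not the paper's actual transformation:

```python
import math

def binomial_pmf(n, p, k):
    """Exact probability of k successes in n Bernoulli(p) trials."""
    return math.comb(n, k) * p**k * (1.0 - p)**(n - k)

def continualized_binomial(n, p):
    """Moment-matched continuous (normal) stand-in for Binomial(n, p):
    same mean n*p and variance n*p*(1-p), but a fully continuous density."""
    mu, sigma = n * p, math.sqrt(n * p * (1.0 - p))
    return lambda x: math.exp(-(x - mu)**2 / (2.0 * sigma**2)) / (sigma * math.sqrt(2.0 * math.pi))

# The continuous density tracks the exact pmf closely near the mode.
f = continualized_binomial(100, 0.5)
```

Inference routines that require continuous densities (gradient-based samplers, for instance) can then operate on `f` where the discrete pmf would have blocked them.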
3

Swierczynski, Piotr, and Barbara Wohlmuth. "Maximum Norm Estimates for Energy-Corrected Finite Element Method." In Lecture Notes in Computational Science and Engineering, 973–81. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-96415-7_92.

4

Clarotti, C. A., M. De Marinis, and A. Manella. "Field Data as a Correction of Mil-Handbook Estimates." In Reliability Data Collection and Use in Risk and Availability Assessment, 177–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 1986. http://dx.doi.org/10.1007/978-3-642-82773-0_19.

5

Zhang, Guoyi, and Rong Liu. "Bias-corrected Estimators of Scalar Skew Normal." In New Developments in Statistical Modeling, Inference and Application, 203–14. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-42571-9_11.

6

Sobotka, Daniel, Roxane Licandro, Michael Ebner, Ernst Schwartz, Tom Vercauteren, Sebastien Ourselin, Gregor Kasprian, Daniela Prayer, and Georg Langs. "Reproducibility of Functional Connectivity Estimates in Motion Corrected Fetal fMRI." In Smart Ultrasound Imaging and Perinatal, Preterm and Paediatric Image Analysis, 123–32. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-32875-7_14.

7

Jeske, Daniel R. "Jackknife Bias Correction of a Clock Offset Estimator." In Advances in Mathematical and Statistical Modeling, 245–54. Boston: Birkhäuser Boston, 2008. http://dx.doi.org/10.1007/978-0-8176-4626-4_18.

8

Tencer, Tomáš, Peter Milo, Jan Havelka, Anna Mária Rekemová, David Šálka, and Michal Vágner. "Magnetic survey in the process of large-scale construction sites." In Advances in On- and Offshore Archaeological Prospection, 761–70. Kiel: Universitätsverlag Kiel | Kiel University Publishing, 2023. http://dx.doi.org/10.38072/978-3-928794-83-1/p77.

Abstract:
Archaeogeophysical prospection is necessary when planning large constructions. Motorized magnetometry systems offer an effective method to identify archaeology prior to construction. A sufficiently large area must be surveyed to estimate the archaeological potential correctly.
9

Pázman, A., and L. Pronzato. "Approximate Densities of Two Bias-Corrected Nonlinear LS Estimators." In Contributions to Statistics, 145–52. Heidelberg: Physica-Verlag HD, 1998. http://dx.doi.org/10.1007/978-3-642-58988-1_16.

10

Ambarki, K., J. Petr, A. Wåhlin, R. Wirestam, L. Zarrinkoob, J. Malm, and A. Eklund. "Partial Volume Correction of Cerebral Perfusion Estimates Obtained by Arterial Spin Labeling." In IFMBE Proceedings, 17–19. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-12967-9_5.


Conference proceedings on the topic "Corrector estimates"

1

Gadsden, Andrew, and Saeid Habibi. "Target Tracking Using the Smooth Variable Structure Filter." In ASME 2009 Dynamic Systems and Control Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/dscc2009-2632.

Abstract:
This article discusses the application of the smooth variable structure filter (SVSF) on a target tracking problem. The SVSF is a relatively new predictor-corrector method used for state and parameter estimation. It is a sliding mode estimator, where gain switching is used to ensure that the estimates converge to true state values. An internal model of the system, either linear or nonlinear, is used to predict an a priori state estimate. A corrective term is then applied to calculate the a posteriori state estimate, and the estimation process is repeated iteratively. The results of applying this filter on a target tracking problem demonstrate its stability and robustness. Both of these attributes make using the SVSF advantageous over the well-known Kalman and extended Kalman filters. The performances of these algorithms are quantified in terms of robustness, resilience to poor initial conditions and measurement outliers, tracking accuracy and computational complexity.
2

Lloyd, George M. "A Kalman Filter Framework for High-Dimensional Sensor Fusion Using Stochastic Non-Linear Networks." In ASME 2014 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/imece2014-37834.

Abstract:
The textbook Kalman Filter (LKF) seeks to estimate the state of a linear system based on having two things in hand: a.) a reasonable state-space model of the underlying process and its noise components; b.) imperfect (noisy) measurements obtained from the process via one or more sensors. The LKF approach results in a predictor-corrector algorithm which can be applied recursively to correct predictions from the state model so as to yield posterior estimates of the current process state, as new sensor data are made available. The LKF can be shown to be optimal in a Gaussian setting and is eminently useful in practical settings when the models and measurements are stochastic and non-stationary. Numerous extensions of the Kalman filter have been proposed for the non-linear problem, such as extended Kalman Filters (EKF) and ‘ensemble’ filters (EnKF). Proofs of optimality are difficult to obtain, but for many problems where the ‘physics’ is of modest complexity, EKFs yield algorithms which function well in a practical sense; the EnKF also shows promise but is limited by the requirement for sampling the random processes. In multi-physics systems, for example, several complications arise, even beyond non-Gaussianity. On the one hand, multi-physics effects may include multi-scale responses and path dependency, which may be poorly sampled by a sensor suite (tending to favor low gains). On the other hand, as more multi-physics effects are incorporated into a model, the model itself becomes a less and less perfect model of reality (tending to favor high gains). For reasons such as these, suitable estimates of the joint system response are difficult to obtain, as are corresponding joint estimates of the sensor ensemble.
This paper will address these issues in a two-fold way — first by a generalized process model representation based on regularized stochastic non-linear networks (Snn), and second by transformation of the process itself by an adaptive low-dimensional subspace in which the update step on the residual can be performed in a space commensurate with the available information content of the process and measured response.
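The predictor-corrector recursion the abstract describes is compact enough to sketch in scalar form. This is the textbook linear Kalman filter only, not the stochastic non-linear network extension the paper develops, and the noise variances q and r are illustrative:

```python
import random

def kalman_1d(zs, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter: random-walk state model, direct noisy measurement."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        # Predict: the random-walk model leaves x unchanged and grows the variance.
        p = p + q
        # Correct: blend the prediction with the measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

random.seed(0)
truth = 5.0
zs = [truth + random.gauss(0.0, 0.5) for _ in range(200)]  # noisy sensor readings
est = kalman_1d(zs)
```

The gain k shrinks as the state variance p converges, which is exactly the recursive prediction-correction behavior described above.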
3

Gadsden, S. Andrew, and Saeid R. Habibi. "Fault Detection of an Electrohydrostatic Actuator With the SVSF Time-Varying Boundary Layer Concept." In ASME/BATH 2013 Symposium on Fluid Power and Motion Control. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/fpmc2013-4498.

Abstract:
The electrohydrostatic actuator (EHA) is an efficient type of actuator commonly used in aerospace applications. It makes use of a closed hydraulic circuit, a number of control valves, an electric motor, and a fluid pump (usually a type of gear pump). The smooth variable structure filter (SVSF) is a relatively new estimation strategy based on sliding mode concepts formulated in a predictor-corrector fashion. The SVSF offers a number of advantages over other traditional estimation methods, including robust and stable estimates, and an additional performance metric. A fixed smoothing boundary layer was implemented in an effort to ensure stable estimates, and is defined based on the amount of uncertainties and noise present in the estimation process. Recent advances in SVSF theory include a time-varying smoothing boundary layer. This method, known as the SVSF-VBL, offers an optimal formulation of the SVSF as well as a method for detecting changes or faults in a system. This paper implements the SVSF-VBL in an effort to detect faults in an EHA. The results are compared with traditional Kalman filter-based methods.
4

Zhang, Fu, Ehsan Keikha, Behrooz Shahsavari, and Roberto Horowitz. "Adaptive Mismatch Compensation for Rate Integrating Vibratory Gyroscopes With Improved Convergence Rate." In ASME 2014 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/dscc2014-6053.

Abstract:
This paper presents an online adaptive algorithm to compensate damping and stiffness frequency mismatches in rate integrating Coriolis Vibratory Gyroscopes (CVGs). The proposed adaptive compensator consists of a least square estimator that estimates the damping and frequency mismatches, and an online compensator that corrects the mismatches. In order to improve the adaptive compensator’s convergence rate, we introduce a calibration phase where we identify relations between the unknown parameters (i.e. mismatches, rotation rate and rotation angle). Calibration results show that the unknown parameters lie on a hyperplane. When the gyro is in operation, we project parameters estimated from the least square estimator onto the hyperplane. The projection will reduce the degrees of freedom in parameter estimates, thus guaranteeing persistence of excitation and improving convergence rate. Simulation results show that utilization of the projection method will drastically improve convergence rate of the least square estimator and improve gyro performance.
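The projection step described in the abstract, forcing least-squares parameter estimates onto a hyperplane identified during calibration, can be sketched generically; the hyperplane x0 + x1 = 3 and the regression data below are illustrative stand-ins, not the gyroscope model:

```python
import numpy as np

def project_to_hyperplane(theta, a, b):
    """Orthogonal projection of theta onto the hyperplane {x : a @ x = b}."""
    a = np.asarray(a, dtype=float)
    return theta - (a @ theta - b) / (a @ a) * a

rng = np.random.default_rng(1)
theta_true = np.array([1.0, 2.0])          # lies on the hyperplane x0 + x1 = 3
X = rng.normal(size=(50, 2))
y = X @ theta_true + 0.1 * rng.normal(size=50)

# Unconstrained least-squares estimate of the parameters ...
theta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
# ... projected onto the relation identified during calibration.
theta_proj = project_to_hyperplane(theta_ls, a=[1.0, 1.0], b=3.0)
```

Because the true parameters satisfy the constraint, the projected estimate is never farther from them than the raw least-squares estimate (by the Pythagorean decomposition of the error), which is the sense in which reducing degrees of freedom helps convergence.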
5

Li, Yiteng, Marwa Alsinan, Xupeng He, Evgeny Ugolkov, Hyung Kwak, and Hussein Hoteit. "NMR T2 Response in Rough Pore Systems: Modeling and Analysis." In SPE Annual Technical Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/210169-ms.

Abstract:
Estimating pore size distribution from NMR T2 responses typically assumes a smooth solid-pore interface. However, surface roughness accelerates NMR T2 relaxation and thus leads to an underestimation of the pore size distribution. Until now, only a few studies investigated the surface roughness effect. This work systematically studies the influence of surface roughness on NMR T2 responses and introduces a correction factor to bring incorrect T2 values back to the correct values. This study includes three main sections: creating 3D pore structures with roughness, simulating NMR T2 relaxation using the random walk method, and quantifying the roughness effect. Constrained Latin hypercube sampling is used to create representative examples in a space-filling manner, constrained by the fast diffusion limit. Then random walk simulations are implemented, and NMR T2 responses in smooth and rough pores are calculated. To accurately estimate pore radius, a "value-to-value" model is developed to map the nonlinear relationship between a 3D roughness parameter and the proposed correction factor. The accuracy of the proposed model is validated by comparing the corrected NMR T2 responses to the reference results obtained from smooth pore systems. Numerical results show that the proposed model can correctly evaluate pore sizes from decreased NMR T2 responses caused by the surface roughness effect. Previous works incorporated this effect into surface relaxivity as they attempted to retain the pore radius and meanwhile reproduce the faster relaxation rate. However, this may break down the assumption of fast diffusion limit. Instead, this study mitigates this limitation by separating the roughness effect from surface relaxivity. The proposed correction factor offers an alternative approach to calculating the correct pore radius by accounting for the influence of surface roughness at the pore scale.
6

Gastpar, Michael C., Patrick R. Gill, and Frederic E. Theunissen. "Anthropic correction of information estimates." In 2009 IEEE Information Theory Workshop on Networking and Information Theory (ITW). IEEE, 2009. http://dx.doi.org/10.1109/itwnit.2009.5158561.

7

Tehranchi, Babak, and Dennis G. Howe. "Direct Access Compact Disc vs. CD-ROM Reliability Estimates." In Symposium on Optical Memory. Washington, D.C.: Optica Publishing Group, 1996. http://dx.doi.org/10.1364/isom.1996.otub.20.

Abstract:
The Cross Interleaved Reed-Solomon Code (CIRC) used in CD-ROM for basic error correction/detection consists of two concatenated, distance-5 Reed-Solomon codes. These two subcodes, which are called the C1 and C2 codes, have lengths of 32 bytes and 28 bytes, respectively. C1 and C2 can each correct any combination of t errors and e erasures in a given codeword, where t and e are jointly constrained by the inequality 2t + e ≤ 4. Specific values of t and e may be selected by a decoder when each C1 and C2 codeword is processed. The algorithm used to adaptively select the t and e values that apply when the C1 and C2 codes are jointly decoded is called the CIRC decoding strategy.
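The error/erasure trade-off for a distance-5 code can be enumerated directly; this sketch merely lists the (t, e) pairs allowed by the classical condition 2t + e ≤ d − 1, and is not an actual CIRC decoder:

```python
def feasible_error_erasure_pairs(d=5):
    """All (t errors, e erasures) combinations a minimum-distance-d code
    can decode, from the classical condition 2t + e <= d - 1."""
    return [(t, e)
            for t in range(d)
            for e in range(d)
            if 2 * t + e <= d - 1]

# For the distance-5 C1/C2 subcodes of CIRC:
pairs = feasible_error_erasure_pairs(5)
```

A decoding strategy is then a rule for picking one of these pairs per codeword, e.g. spending the distance budget on erasures flagged by the previous stage.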
8

Lacroix, Valéry, Kunio Hasegawa, Yinsheng Li, and Yoshihito Yamaguchi. "Empirical Correction Factor to Estimate the Plastic Collapse Bending Moment of Pipes With Circumferential Surface Flaw." In ASME 2022 Pressure Vessels & Piping Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/pvp2022-84958.

Abstract:
The structural integrity assessment of pipes with a circumferential surface flaw under the plastic collapse regime consists of a net-section collapse analysis. In recent years, various researchers showed that this analysis, which has been developed based on classic beam theory, has certain inaccuracies. As such, an assessment purely based on net-section collapse and beam theory can lead to both conservative and unconservative results. To address those inaccuracies, in this paper the authors propose a correction factor which aims to mitigate the difference between the experimental results and the ASME B&PV Code Section XI equations. This correction factor is calculated using an empirical formula developed on the basis of a large experimental database of pipe collapse bending tests containing a variety of diameters, thicknesses, flaw depths and flaw lengths. In this work, the authors took a systematic approach to identify the most influential factors on such a correction factor and showed that by applying this correction factor to the current solution of ASME B&PV Code Section XI, the resulting solution becomes more accurate. This corrected approach is also in line with ASME B&PV Code Section XI Appendix C practice for axial flaws in pipes, where a semi-empirical correction factor has been considered as well.
9

Mackay, Ed. "Correction for Bias in Return Values of Wave Heights Caused by Hindcast Uncertainty." In ASME 2011 30th International Conference on Ocean, Offshore and Arctic Engineering. ASMEDC, 2011. http://dx.doi.org/10.1115/omae2011-49409.

Abstract:
Hindcasts of wave conditions can be subject to large uncertainties, especially in storms. Even if estimates of extremes are unbiased on average, the variance of the errors can lead to a bias in estimates of extremes derived from hindcast data. The convolution of the error distribution and wave height distribution causes a stretching of the measured distribution. This can lead to substantial positive biases in estimates of return values. An iterative deconvolution procedure is proposed to estimate the size of the bias, based on the measured distribution and knowledge of the error distribution. The effectiveness of the procedure is illustrated in several case studies using Monte Carlo simulation.
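The stretching effect described above is easy to reproduce by Monte Carlo; the Weibull wave-height population and the 0.75 m Gaussian hindcast error below are illustrative choices, not the paper's case-study values:

```python
import random

def quantile(xs, q):
    """Nearest-rank empirical quantile."""
    xs = sorted(xs)
    return xs[min(len(xs) - 1, int(q * len(xs)))]

random.seed(42)
n = 200_000
true_hs = [random.weibullvariate(2.0, 1.5) for _ in range(n)]   # "true" wave heights (m)
noisy_hs = [h + random.gauss(0.0, 0.75) for h in true_hs]       # unbiased hindcast errors

# The errors average to zero, yet convolution stretches the upper tail,
# so the estimated 1-in-1000 value is biased high.
true_extreme = quantile(true_hs, 0.999)
noisy_extreme = quantile(noisy_hs, 0.999)
```

Deconvolution, as proposed in the paper, would attempt to recover the true-distribution quantile from the noisy sample and a model of the error distribution.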
10

Kobelev, P. G., L. A. Trefilova, A. V. Belov, E. A. Eroshenko, and V. G. Yanke. "Reference stations method usage for excluding snow effect by 2018-2019 data." In Physics of Auroral Phenomena. FRC KSC RAS, 2020. http://dx.doi.org/10.37614/2588-0039.2020.43.012.

Abstract:
In this article, the influence of the surrounding snow cover on the count rate of neutron monitors of the worldwide network was estimated using the method of reference stations. The applied technique also makes it possible to estimate the thickness of the snow cover at the observation point, which was done for more than two dozen stations. A comparison of the results of data correction for snow is carried out for the case of automatic correction, based on the developed algorithm, and for manual correction, with an error estimate.

Organization reports on the topic "Corrector estimates"

1

Heckman, James, and Paul LaFontaine. Bias Corrected Estimates of GED Returns. Cambridge, MA: National Bureau of Economic Research, February 2006. http://dx.doi.org/10.3386/w12018.

2

Ziegler, Nancy, Nicholas Webb, Adrian Chappell, and Sandra LeGrand. Scale invariance of albedo-based wind friction velocity. Engineer Research and Development Center (U.S.), May 2021. http://dx.doi.org/10.21079/11681/40499.

Abstract:
Obtaining reliable estimates of aerodynamic roughness is necessary to interpret and accurately predict aeolian sediment transport dynamics. However, inherent uncertainties in field measurements and models of surface aerodynamic properties continue to undermine aeolian research, monitoring, and dust modeling. A new relation between aerodynamic shelter and land surface shadow has been established at the wind tunnel scale, enabling the potential for estimates of wind erosion and dust emission to be obtained across scales from albedo data. Here, we compare estimates of wind friction velocity (u*) derived from traditional methods (wind speed profiles) with those derived from the albedo model at two separate scales using bare soil patch (via net radiometers) and landscape (via MODIS 500 m) datasets. Results show that profile-derived estimates of u* are highly variable in anisotropic surface roughness due to changes in wind direction and fetch. Wind speed profiles poorly estimate soil surface (bed) wind friction velocities necessary for aeolian sediment transport research and modeling. Albedo-based estimates of u* at both scales have small variability because the estimate is integrated over a defined, fixed area and resolves the partition of wind momentum between roughness elements and the soil surface. We demonstrate that the wind tunnel-based calibration of albedo for predicting wind friction velocities at the soil surface (us*) is applicable across scales. The albedo-based approach enables consistent and reliable drag partition correction across scales for model and field estimates of us* necessary for wind erosion and dust emission modeling.
3

Kaffenberger, Michelle, and Lant Pritchett. Women's Education May Be Even Better Than We Thought: Estimating the Gains from Education When Schooling Ain't Learning. Research on Improving Systems of Education (RISE), September 2020. http://dx.doi.org/10.35489/bsg-rise-wp_2020/049.

Abstract:
Women’s schooling has long been regarded as one of the best investments in development. Using two different cross-nationally comparable data sets which both contain measures of schooling, assessments of literacy, and life outcomes for more than 50 countries, we show the association of women’s education (defined as schooling and the acquisition of literacy) with four life outcomes (fertility, child mortality, empowerment, and financial practices) is much larger than the standard estimates of the gains from schooling alone. First, estimates of the association of outcomes with schooling alone cannot distinguish between the association of outcomes with schooling that actually produces increased learning and schooling that does not. Second, typical estimates do not address attenuation bias from measurement error. Using the new data on literacy to partially address these deficiencies, we find that the associations of women’s basic education (completing primary schooling and attaining literacy) with child mortality, fertility, women’s empowerment and the associations of men’s and women’s basic education with positive financial practices are three to five times larger than standard estimates. For instance, our country aggregated OLS estimate of the association of women’s empowerment with primary schooling versus no schooling is 0.15 of a standard deviation of the index, but the estimated association for women with primary schooling and literacy, using IV to correct for attenuation bias, is 0.68, 4.6 times bigger. Our findings raise two conceptual points. First, if the causal pathway through which schooling affects life outcomes is, even partially, through learning then estimates of the impact of schooling will underestimate the impact of education. Second, decisions about how to invest to improve life outcomes necessarily depend on estimates of the relative impacts and relative costs of schooling (e.g., grade completion) versus learning (e.g., literacy) on life outcomes. 
Our results do share the limitation of all previous observational results that the associations cannot be given causal interpretation and much more work will be needed to be able to make reliable claims about causal pathways.
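The attenuation-bias mechanism invoked in this abstract can be sketched with simulated data. The reliability-ratio correction below is the classical errors-in-variables fix standing in for the paper's IV strategy, and every number (including the slope 0.68, which merely echoes the abstract's figure) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta = 0.68                                  # illustrative "true" association
x = rng.normal(size=n)                       # latent variable (e.g. actual learning)
y = beta * x + rng.normal(size=n)            # life outcome
x_obs = x + rng.normal(scale=1.0, size=n)    # proxy measured with error (e.g. schooling)

# Naive OLS slope on the noisy proxy is attenuated toward zero:
# plim = beta * var(x) / (var(x) + var(error)) = beta / 2 here.
beta_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs)

# Classical correction: divide by the reliability ratio
# (known by construction here; estimated from validation data or an IV in practice).
reliability = np.var(x) / np.var(x_obs)
beta_corrected = beta_naive / reliability
```

With a reliability of one half, the naive slope lands near 0.34 while the corrected slope recovers the full association, which is the sense in which uncorrected schooling coefficients understate the gains from education.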
4

Kott, Phillip S. Calibration-Weighting a Stratified Simple Random Sample with SUDAAN. RTI Press, March 2022. http://dx.doi.org/10.3768/rtipress.2022.mr.0048.2204.

Abstract:
This report shows how to apply the calibration-weighting procedures in SAS-callable SUDAAN (Version 11) to a stratified simple random sample drawn from a complete list frame for an establishment survey. The results are calibrated weights produced via raking, raking to a size variable, and pseudo-optimal calibration that potentially reduce and appropriately measure the standard errors of estimated totals. The report then shows how to use these procedures to remove selection bias caused by unit nonresponse under a plausible response model. Although unit nonresponse is usually assumed to be a function of variables with known population or full-sample estimated totals, calibration weighting can often be used when nonresponse is assumed to be a function of a variable known only for unit respondents (i.e., not missing at random). When producing calibrated weights for an establishment survey, one advantage the SUDAAN procedures have over most of their competitors is that their linearization-based variance estimators can capture the impact of finite-population correction.
5

Kott, Phillip S. The Role of Weights in Regression Modeling and Imputation. RTI Press, April 2022. http://dx.doi.org/10.3768/rtipress.2022.mr.0047.2203.

Abstract:
When fitting observations from a complex survey, the standard regression model assumes that the expected value of the difference between the dependent variable and its model-based prediction is zero, regardless of the values of the explanatory variables. A rarely failing extended regression model assumes only that the model error is uncorrelated with the model’s explanatory variables. When the standard model holds, it is possible to create alternative analysis weights that retain the consistency of the model-parameter estimates while increasing their efficiency by scaling the inverse-probability weights by an appropriately chosen function of the explanatory variables. When a regression model is used to impute for missing item values in a complex survey and when item missingness is a function of the explanatory variables of the regression model and not the item value itself, near unbiasedness of an estimated item mean requires that either the standard regression model for the item in the population holds or the analysis weights incorporate a correctly specified and consistently estimated probability of item response. By estimating the parameters of the probability of item response with a calibration equation, one can sometimes account for item missingness that is (partially) a function of the item value itself.
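The role of a response-probability weight can be sketched as follows; the response model here is known by construction (in practice it would be specified and estimated, e.g. via a calibration equation), and all quantities are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
x = rng.uniform(0.0, 1.0, size=n)                 # explanatory variable
y = 2.0 * x + rng.normal(scale=0.1, size=n)       # item value; population mean is 1.0

# Item response depends on x: high-x units are more likely to respond.
p_resp = 0.2 + 0.7 * x
responded = rng.uniform(size=n) < p_resp

naive_mean = y[responded].mean()                  # biased upward by selective response
ipw_mean = np.average(y[responded], weights=1.0 / p_resp[responded])
```

Weighting each respondent by the inverse of its response probability removes the selection bias in the estimated mean, which is the near-unbiasedness condition the abstract describes.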
6

Abadie, Alberto, and Guido Imbens. Simple and Bias-Corrected Matching Estimators for Average Treatment Effects. Cambridge, MA: National Bureau of Economic Research, October 2002. http://dx.doi.org/10.3386/t0283.

7

Anderson, Richard G., Hailong Qian, and Robert H. Rasche. Analysis of Panel Vector Error Correction Models Using Maximum Likelihood, the Bootstrap, and Canonical Correlation Estimators. Federal Reserve Bank of St. Louis, 2006. http://dx.doi.org/10.20955/wp.2006.050.

8

Tsidylo, Ivan M., Serhiy O. Semerikov, Tetiana I. Gargula, Hanna V. Solonetska, Yaroslav P. Zamora, and Andrey V. Pikilnyak. Simulation of intellectual system for evaluation of multilevel test tasks on the basis of fuzzy logic. CEUR Workshop Proceedings, June 2021. http://dx.doi.org/10.31812/123456789/4370.

Abstract:
The article describes the stages of modeling an intelligent system for evaluating multilevel test tasks based on fuzzy logic in the MATLAB application package, namely the Fuzzy Logic Toolbox. An analysis of existing approaches to fuzzy assessment of test methods, with their advantages and disadvantages, is given. The considered methods for assessing students are represented in the general case by two approaches: using fuzzy sets and corresponding membership functions; and the fuzzy estimation method together with the generalized fuzzy estimation method. In the present work, the Sugeno production model is used as the closest to natural language. This closeness allows for closer interaction with a subject-area expert and the building of well-understood, easily interpreted inference systems. The structure of the fuzzy system and the functions and mechanisms of model building are described. The system is presented in the form of a block diagram of fuzzy logical nodes and consists of four input variables, corresponding to the levels of knowledge assimilation, and one output variable. The response surface of the fuzzy system reflects the dependence of the final grade on the level of difficulty of the task and the degree of correctness of the task. The structure and functions of the fuzzy system are indicated.
The intelligent system for assessing multilevel test tasks based on fuzzy logic modeled in this way makes it possible to take into account the fuzzy characteristics of the test: the level of difficulty of the task, which can be assessed as “easy”, “average”, “above average”, or “difficult”; the degree of correctness of the task, which can be assessed as “correct”, “partially correct”, “rather correct”, or “incorrect”; the time allotted for the execution of a test task or test, which can be assessed as “short”, “medium”, “long”, or “very long”; the percentage of correctly completed tasks, which can be assessed as “small”, “medium”, “large”, or “very large”; and the final mark for the test, which can be assessed as “poor”, “satisfactory”, “good”, or “excellent”, all of which enter into the assessment. This approach ensures the maximum consideration of answers to questions of all levels of complexity by formulating a base of inference rules and selecting weighting coefficients when deriving the final estimate. The robustness of the system is achieved by using Gaussian membership functions. Testing the controller on the test sample confirms the functional suitability of the developed model.
9

Knight, R. D., B. A. Kjarsgaard, E. G. Potter, and A. Plourde. Uranium, thorium, and potassium analyses using pXRF spectrometry. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/328973.

Abstract:
The application of portable XRF spectrometry (pXRF) for determining concentrations of uranium (U), thorium (Th) and potassium (K) was evaluated using a combination of 12 Certified Reference Materials, 17 Standard Reference Materials, and 25 rock samples collected from areas of known U occurrences or mineralization. Samples were analysed by pXRF in Soil, Mining Cu/Zn and Mining Ta/Hf modes. The resulting pXRF data were compared to published recommended values obtained by total or near-total digestion methods with ICP-MS and ICP-OES analysis. Results for pXRF show a linear relationship for thorium, potassium, and uranium (<5000 ppm U) compared to the recommended concentrations. However, above 5000 ppm U, pXRF results show an exponential relationship, with under-reporting of pXRF concentrations compared to recommended values. The accuracy of the data can be improved by post-analysis correction using linear regression equations for potassium, thorium, and samples with <5000 ppm uranium; an exponential correction curve is required at >5000 ppm U. In addition, pXRF analyses of samples with high concentrations of uranium (e.g. >1 wt.% U) significantly over-estimated potassium contents compared to the published values, indicating an interference between the two elements that is not calibrated by the manufacturer's software.
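The post-analysis correction described above amounts to fitting recommended values against pXRF readings and applying the fitted line to new data; the paired values below are hypothetical stand-ins, not the report's measurements:

```python
import numpy as np

# Hypothetical calibration pairs: pXRF readings vs. recommended (reference) values.
reference = np.array([10.0, 50.0, 120.0, 300.0, 750.0, 1500.0])             # ppm Th
pxrf = 0.85 * reference + 4.0 + np.array([0.5, -1.0, 2.0, -3.0, 5.0, -4.0])  # biased readings

# Fit reference = a * pxrf + b on the calibration pairs ...
a, b = np.polyfit(pxrf, reference, deg=1)

def correct(reading):
    """Post-analysis linear correction of a pXRF reading."""
    return a * reading + b

# ... then apply the fitted line to readings needing correction.
corrected = correct(pxrf)
```

Above the linear range (here, >5000 ppm U in the report), the same workflow would swap the linear fit for an exponential curve.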
10

Cheng, Wen, Yongping Zhang, and Edward Clay. Comprehensive Performance Assessment of Passive Crowdsourcing for Counting Pedestrians and Bikes. Mineta Transportation Institute, February 2022. http://dx.doi.org/10.31979/mti.2022.2025.

Abstract:
Individuals who walk and cycle experience a variety of health and economic benefits while simultaneously benefiting their local environments and communities. It is essential to correctly obtain pedestrian and bicyclist counts for better design and planning of active transportation-related facilities. In recent years, crowdsourcing has seen a rise in popularity due to the multiple advantages relative to traditional methods. Nevertheless, crowdsourced data have been applied in fewer studies, and their reliability and performance relative to other conventional methods are rarely documented. To this end, this research examines the consistency between crowdsourced and traditionally collected count data. Additionally, the research aims to develop the adjustment factor between the crowdsourced and permanent counter data and to estimate the annual average daily traffic (AADT) data based on hourly volume and other predictor variables such as time, day, weather, land use, and facility type. With some caveats, the results demonstrate that the Street Light crowdsourcing count data for pedestrians and bicyclists appear to be a promising alternative to the permanent counters.