Academic literature on the topic 'Latent Models, Small Area Estimation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Latent Models, Small Area Estimation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Latent Models, Small Area Estimation"

1. Porter, Aaron T., Christopher K. Wikle, and Scott H. Holan. "Small Area Estimation via Multivariate Fay-Herriot Models with Latent Spatial Dependence." Australian & New Zealand Journal of Statistics 57, no. 1 (February 22, 2015): 15–29. http://dx.doi.org/10.1111/anzs.12101.

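For orientation, the area-level Fay-Herriot model that this entry (and several entries below) builds on can be sketched as follows; this is the standard textbook formulation in generic notation, not necessarily the paper's own:

```latex
% Standard area-level Fay--Herriot model (generic notation).
% Sampling model: direct survey estimate for area i, known variance psi_i.
\[ \hat{\theta}_i = \theta_i + e_i, \qquad e_i \sim N(0, \psi_i) \]
% Linking model: auxiliary covariates x_i and an area-level random effect v_i.
\[ \theta_i = x_i^{\top}\beta + v_i, \qquad v_i \sim N(0, \sigma_v^2) \]
```

Porter, Wikle, and Holan extend this setup to several response variables at once, replacing the independent v_i with a latent spatially dependent process so that neighbouring areas borrow strength from one another.
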
2. Choi, Jungsoon, and Andrew B. Lawson. "Bayesian spatially dependent variable selection for small area health modeling." Statistical Methods in Medical Research 27, no. 1 (June 16, 2016): 234–49. http://dx.doi.org/10.1177/0962280215627184.

Abstract:
Statistical methods for spatial health data that identify the covariates significantly associated with health outcomes are of critical importance. Most studies have developed variable selection approaches in which the covariates included appear within the spatial domain and their effects are fixed across space. However, the impact of covariates on health outcomes may change across space, and ignoring this behavior in spatial epidemiology may lead to misinterpretation of the relationships. Thus, the development of a statistical framework for spatial variable selection is important to allow for the estimation of the space-varying patterns of covariate effects as well as the early detection of disease over space. In this paper, we develop flexible spatial variable selection approaches to find the spatially varying subsets of covariates with significant effects. A Bayesian hierarchical latent model framework is applied to account for spatially varying covariate effects. We present a simulation example to examine the performance of the proposed models against competing models. We apply our models to a county-level low birth weight incidence dataset in Georgia.
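
As a concrete illustration of the class of model the abstract describes, a spatially varying variable-selection prior is often built roughly as below; this is a generic construction from the disease-mapping literature, not the authors' exact specification:

```latex
% Counts y_i with expected counts E_i in area i; p candidate covariates.
\[ y_i \sim \mathrm{Poisson}\!\left(E_i \, e^{\eta_i}\right), \qquad
   \eta_i = \beta_0 + \sum_{j=1}^{p} \gamma_{ij}\,\beta_{ij}\,x_{ij} + u_i \]
% gamma_ij in {0,1} are latent inclusion indicators and u_i a spatial random
% effect; spatially structured priors on gamma_ij and beta_ij let both the
% selected subset and the effect sizes vary over space.
```
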
3. Thomas, Neal. "Assessing Model Sensitivity of the Imputation Methods Used in the National Assessment of Educational Progress." Journal of Educational and Behavioral Statistics 25, no. 4 (December 2000): 351–71. http://dx.doi.org/10.3102/10769986025004351.

Abstract:
The National Assessment of Educational Progress (NAEP) uses latent trait item response models to summarize performance of students on assessments of educational proficiency in different subject areas such as mathematics and reading. Because of limited examination time and concerns about student motivation, NAEP employs sparse matrix sampling designs that assign a small number of examination items to each sampled student to measure broad curriculums. As a consequence, each sampled student's latent trait is not accurately measured, and NAEP uses multiple imputation missing data statistical methods to account for the uncertainty about the latent traits. The sensitivity of these model-based estimation and reporting procedures to statistical and psychometric assumptions is assessed. Estimation of the mean of the latent trait in different subpopulations was very robust to the modeling assumptions. Many of the other currently reported summaries, however, may depend on the modeling assumptions underlying the estimation procedures; these assumptions, motivated primarily by analytic tractability, are unlikely to obtain, raising concerns about current reporting practices. The results indicate that more conservative criteria should be considered when forming intervals about estimates and when assessing significance. A possible expansion of the imputation model is suggested that may improve its performance.

4. Marcq, S., and J. Weiss. "Influence of leads widths distribution on turbulent heat transfer between the ocean and the atmosphere." Cryosphere Discussions 5, no. 5 (October 18, 2011): 2765–97. http://dx.doi.org/10.5194/tcd-5-2765-2011.

Abstract:
Leads are linear-like structures of open water within the sea ice cover that develop as the result of fracturing due to divergence or shear. Through leads, air and water come into contact and directly exchange latent and sensible heat through convective processes driven by the large temperature and moisture differences between them. In the central Arctic, leads only cover 1 to 2% of the ocean during winter, but account for more than 80% of the heat fluxes. Furthermore, narrow leads (several meters) are more than twice as efficient at transmitting turbulent heat as larger ones (several hundreds of meters). We show that lead widths are power law distributed, P(X) ~ X^(−a) with a > 1, down to very small spatial scales (20 m or below). This implies that the open water fraction is by far dominated by very small leads. Using two classical formulations, which provide first order turbulence closure for the fetch-dependence of heat fluxes, we find that the mean heat fluxes (sensible and latent) over open water are up to 55% larger when considering the lead width distribution obtained from a SPOT satellite image of the ice cover, compared to the situation where the open water fraction constitutes one unique large lead and the rest of the area is covered by ice, as it is usually considered in climate models at the grid scale. This difference may be even larger if we assume that the power law scaling of lead widths extends down to smaller (~1 m) scales. Such estimations may be a first step towards a subgrid scale parameterization of the spatial distribution of open water for heat flux calculations in ocean/sea ice coupled models.

5. Marcq, S., and J. Weiss. "Influence of sea ice lead-width distribution on turbulent heat transfer between the ocean and the atmosphere." Cryosphere 6, no. 1 (February 2, 2012): 143–56. http://dx.doi.org/10.5194/tc-6-143-2012.

Abstract:
Leads are linear-like structures of open water within the sea ice cover that develop as the result of fracturing due to divergence or shear. Through leads, air and water come into contact and directly exchange latent and sensible heat through convective processes driven by the large temperature and moisture differences between them. In the central Arctic, leads only cover 1 to 2% of the ocean during winter, but account for more than 70% of the upward heat fluxes. Furthermore, narrow leads (several meters) are more than twice as efficient at transmitting turbulent heat as larger ones (several hundreds of meters). We show that lead widths are power law distributed, P(X) ~ X^(−a) with a > 1, down to very small spatial scales (20 m or below). This implies that the open water fraction is by far dominated by very small leads. Using two classical formulations, which provide first order turbulence closure for the fetch-dependence of heat fluxes, we find that the mean heat fluxes (sensible and latent) over open water are up to 55% larger when considering the lead-width distribution obtained from a SPOT satellite image of the ice cover, compared to the situation where the open water fraction constitutes one unique large lead and the rest of the area is covered by ice, as it is usually considered in climate models at the grid scale. This difference may be even larger if we assume that the power law scaling of lead widths extends down to smaller (~1 m) scales. Such estimations may be a first step towards a subgrid scale parameterization of the spatial distribution of open water for heat flux calculations in ocean/sea ice coupled models.
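
Both versions of this study rest on the same scaling argument, and a tiny simulation makes the claim that very small leads dominate the open water fraction concrete. The exponent, cutoffs, and sample size below are arbitrary illustrative choices, not values taken from the papers:

```python
import numpy as np

# Sample lead widths from a truncated power law P(X) ~ X^(-a), a > 1,
# via inverse-CDF sampling (all parameter values are assumptions).
rng = np.random.default_rng(42)
a, x_min, x_max = 2.0, 1.0, 1000.0          # exponent and width cutoffs (m)

u = rng.uniform(size=100_000)
c = 1.0 - (x_min / x_max) ** (a - 1.0)      # truncation constant
widths = x_min * (1.0 - c * u) ** (-1.0 / (a - 1.0))

# Fraction of total open-water width contributed by leads narrower than 20 m.
share_small = widths[widths < 20.0].sum() / widths.sum()
print(f"share of open water in leads < 20 m wide: {share_small:.2f}")
```
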
6. Nassar, Ayman, Alfonso Torres-Rua, Lawrence Hipps, William Kustas, Mac McKee, David Stevens, Héctor Nieto, Daniel Keller, Ian Gowing, and Calvin Coopmans. "Using Remote Sensing to Estimate Scales of Spatial Heterogeneity to Analyze Evapotranspiration Modeling in a Natural Ecosystem." Remote Sensing 14, no. 2 (January 13, 2022): 372. http://dx.doi.org/10.3390/rs14020372.

Abstract:
Understanding the spatial variability in highly heterogeneous natural environments such as savannas and river corridors is an important issue in characterizing and modeling energy fluxes, particularly for evapotranspiration (ET) estimates. Currently, remote-sensing-based surface energy balance (SEB) models are applied widely and routinely in agricultural settings to obtain ET information on an operational basis for use in water resources management. However, the application of these models in natural environments is challenging due to spatial heterogeneity in vegetation cover and complexity in the number of vegetation species existing within a biome. In this research effort, small unmanned aerial systems (sUAS) data were used to study the influence of land surface spatial heterogeneity on the modeling of ET using the Two-Source Energy Balance (TSEB) model. The study area is the San Rafael River corridor in Utah, which is a part of the Upper Colorado River Basin that is characterized by arid conditions and variations in soil moisture status and the type and height of vegetation. First, a spatial variability analysis was performed using a discrete wavelet transform (DWT) to identify a representative spatial resolution/model grid size for adequately solving energy balance components to derive ET. The results indicated a maximum wavelet energy between 6.4 m and 12.8 m for the river corridor area, while the non-river corridor area, which is characterized by different surface types and random vegetation, does not show a peak value. Next, to evaluate the effect of spatial resolution on latent heat flux (LE) estimation using the TSEB model, spatial scales of 6 m and 15 m instead of 6.4 m and 12.8 m, respectively, were used to simplify the derivation of model inputs. The results indicated small differences in the LE values between 6 m and 15 m resolutions, with a slight decrease in detail at 15 m due to losses in spatial variability. Lastly, the instantaneous (hourly) LE was extrapolated/upscaled to daily ET values using the incoming solar radiation (Rs) method. The results indicated that willow and cottonwood have the highest ET rates, followed by grass/shrubs and treated tamarisk. Although most of the treated tamarisk vegetation is in dead/dry condition, the green vegetation growing underneath resulted in a non-negligible ET value.

7. Babel, W., S. Huneke, and T. Foken. "A framework to utilize turbulent flux measurements for mesoscale models and remote sensing applications." Hydrology and Earth System Sciences Discussions 8, no. 3 (May 25, 2011): 5165–225. http://dx.doi.org/10.5194/hessd-8-5165-2011.

Abstract:
Meteorologically measured fluxes of energy and matter between the surface and the atmosphere originate from a source area of certain extent, located in the upwind sector of the device. The spatial representativeness of such measurements is strongly influenced by the heterogeneity of the landscape. The footprint concept is capable of linking observed data with spatial heterogeneity. This study aims at upscaling eddy covariance derived fluxes to a grid size of 1 km edge length, which is typical for mesoscale models or low resolution remote sensing data. Here an upscaling strategy is presented, utilizing footprint modelling and SVAT modelling as well as observations from a target land-use area. The general idea of this scheme is to model fluxes from adjacent land-use types and combine them with the measured flux data to yield a grid representative flux according to the land-use distribution within the grid cell. The performance of the upscaling routine is evaluated with real datasets, which are considered to be land-use specific fluxes in a grid cell. The measurements above rye and maize fields stem from the LITFASS experiment 2003 in Lindenberg, Germany, and the respective modelled time series were derived by the SVAT model SEWAB. Contributions from each land-use type to the observations are estimated using a forward Lagrangian stochastic model. A representation error is defined as the error in flux estimates made when accepting the measurements unchanged as grid representative flux and ignoring flux contributions from other land-use types within the respective grid cell. Results show that this representation error can be reduced by up to 56% when applying the spatial integration. This shows the potential for further application of this strategy, although the absolute differences between flux observations from rye and maize were so small that the spatial integration would be rejected in a real situation. Corresponding thresholds for this decision have been estimated as a minimum mean absolute deviation in modelled time series of the different land-use types, with 35 W m⁻² for the sensible heat flux and 50 W m⁻² for the latent heat flux. Finally, a quality flagging scheme to classify the data with respect to representativeness for a given grid cell is proposed, based on an overall flux error estimate. This enables the data user to infer the uncertainty of mesoscale models and remote sensing products with respect to ground observations. Major uncertainty sources remaining are the lack of an adequate method for energy balance closure correction as well as model structure and parameter estimation, when applying the model for surfaces without flux measurements.

8. Klees, R., E. A. Zapreeva, H. C. Winsemius, and H. H. G. Savenije. "The bias in GRACE estimates of continental water storage variations." Hydrology and Earth System Sciences Discussions 3, no. 6 (November 21, 2006): 3557–94. http://dx.doi.org/10.5194/hessd-3-3557-2006.

Abstract:
The estimation of terrestrial water storage variations at river basin scale is among the best documented applications of the GRACE (Gravity Recovery and Climate Experiment) satellite gravity mission. In particular, it is expected that GRACE closes the water balance at river basin scale and allows the verification, improvement and modeling of the related hydrological processes by combining GRACE amplitude estimates with hydrological models' output and in-situ data. When computing monthly mean storage variations from GRACE gravity field models, spatial filtering is mandatory to reduce GRACE errors, but at the same time yields biased amplitude estimates. The objective of this paper is three-fold. Firstly, we want to compute and analyze amplitude and time behaviour of the bias in GRACE estimates of monthly mean water storage variations for several target areas in Southern Africa. In particular, we want to know the relation between bias and the choice of the filter correlation length, the size of the target area, and the amplitude of mass variations inside and outside the target area. Secondly, we want to know to what extent the bias can be corrected for using a priori information about mass variations. Thirdly, we want to quantify errors in the estimated bias due to uncertainties in the a priori information about mass variations that are used to compute the bias. The target areas are located in Southern Africa around the Zambezi river basin. The latest release of monthly GRACE gravity field models has been used for the period from January 2003 until March 2006. An accurate and properly calibrated regional hydrological model has been developed for this area and its surroundings and provides the necessary a priori information about mass variations inside and outside the target areas. The main conclusion of the study is that spatial smoothing significantly biases GRACE estimates of the amplitude of annual and monthly mean water storage variations. For most of the practical applications, the bias will be positive, which implies that GRACE underestimates the amplitudes. The bias is mainly determined by the filter correlation length; in the case of 1000 km smoothing, which is shown to be an appropriate choice for the target areas, the annual bias attains values up to 50% of the annual storage; the monthly bias is even larger, with a maximum value of 75% of the monthly storage. A priori information about mass variations can provide reasonably accurate estimates of the bias, which significantly improves the quality of GRACE water storage amplitudes. For the target areas in Southern Africa, we show that after bias correction, GRACE annual amplitudes differ between 0 and 30 mm from the output of a regional hydrological model, which is between 0% and 25% of the storage. Annual phase shifts are small, not exceeding 0.25 months, i.e. 7.5 deg. Our analysis suggests that bias correction of GRACE water storage amplitudes is indispensable if GRACE is used to calibrate hydrological models.

9. Klees, R., E. A. Zapreeva, H. C. Winsemius, and H. H. G. Savenije. "The bias in GRACE estimates of continental water storage variations." Hydrology and Earth System Sciences 11, no. 4 (May 3, 2007): 1227–41. http://dx.doi.org/10.5194/hess-11-1227-2007.

Abstract:
The estimation of terrestrial water storage variations at river basin scale is among the best documented applications of the GRACE (Gravity Recovery and Climate Experiment) satellite gravity mission. In particular, it is expected that GRACE closes the water balance at river basin scale and allows the verification, improvement and modeling of the related hydrological processes by combining GRACE amplitude estimates with hydrological models' output and in-situ data. When computing monthly mean storage variations from GRACE gravity field models, spatial filtering is mandatory to reduce GRACE errors, but at the same time yields biased amplitude estimates. The objective of this paper is three-fold. Firstly, we want to compute and analyze amplitude and time behaviour of the bias in GRACE estimates of monthly mean water storage variations for several target areas in Southern Africa. In particular, we want to know the relation between bias and the choice of the filter correlation length, the size of the target area, and the amplitude of mass variations inside and outside the target area. Secondly, we want to know to what extent the bias can be corrected for using a priori information about mass variations. Thirdly, we want to quantify errors in the estimated bias due to uncertainties in the a priori information about mass variations that are used to compute the bias. The target areas are located in Southern Africa around the Zambezi river basin. The latest release of monthly GRACE gravity field models has been used for the period from January 2003 until March 2006. An accurate and properly calibrated regional hydrological model has been developed for this area and its surroundings and provides the necessary a priori information about mass variations inside and outside the target areas. The main conclusion of the study is that spatial smoothing significantly biases GRACE estimates of the amplitude of annual and monthly mean water storage variations and that bias correction using existing hydrological models significantly improves the quality of GRACE estimates. For most of the practical applications, the bias will be positive, which implies that GRACE underestimates the amplitudes. The bias is mainly determined by the filter correlation length; in the case of 1000 km smoothing, which is shown to be an appropriate choice for the target areas, the annual bias attains values up to 50% of the annual storage; the monthly bias is even larger, with a maximum value of 75% of the monthly storage. A priori information about mass variations can provide reasonably accurate estimates of the bias, which significantly improves the quality of GRACE water storage amplitudes. For the target areas in Southern Africa, we show that after bias correction, GRACE annual amplitudes differ between 0 and 30 mm from the output of a regional hydrological model, which is between 0% and 25% of the storage. Annual phase shifts are small, not exceeding 0.25 months, i.e. 7.5 deg. It is shown that after bias correction, the fit between GRACE and a hydrological model is overoptimistic if the same hydrological model is used to estimate the bias and to compare with GRACE. If another hydrological model is used to compute the bias, the fit is poorer, although the improvement is still very significant compared with uncorrected GRACE estimates of water storage variations. Therefore, the proposed approach for bias correction works for the target areas subject to this study. It may also be an option for other target areas provided that some reasonable a priori information about water storage variations is available.

10. Kubokawa, Tatsuya. "Linear Mixed Models and Small Area Estimation." Japanese Journal of Applied Statistics 35, no. 3 (2006): 139–61. http://dx.doi.org/10.5023/jappstat.35.139.


Dissertations / Theses on the topic "Latent Models, Small Area Estimation"

1. Bertarelli, Gaia. "Latent Markov Models for Aggregate Data: Application to Disease Mapping and Small Area Estimation." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/96252.

Abstract:
Latent Markov Models (LMMs) are a particular class of statistical models in which a latent process is assumed. LMMs allow for the analysis of longitudinal data when the response variables measure common characteristics of interest, which are not directly observable. In LMMs the characteristics of interest, and their evolution in time, are represented by a latent process that follows a first order discrete Markov chain, and units are allowed to change latent state over time. In studying LMMs, it is important to distinguish between two components: the measurement model, i.e. the conditional distribution of the response variables given the latent process, and the latent model, i.e. the distribution of the latent process. This thesis focuses on LMMs for aggregated data. It considers two fields of application: disease mapping and small area estimation. The goal of disease mapping is the study of the geographical pattern and variation of a disease measured through counts and incidence rates. From a methodological point of view, this work extends LMMs to include a spatial pattern in the latent model. This extension allows the probability of being in a latent state and the probability of moving from one latent state to another over time to be influenced by the neighbouring areas. The model is fitted within a Bayesian framework using Gibbs and random-walk Metropolis-Hastings algorithms with augmented data, which allows for more efficient sampling of model parameters. Simulation studies are also conducted to investigate the performance of the proposed model on data generated under different settings. The model has also been applied to a data set of county-specific lung cancer death counts in the state of Ohio, USA, during the years 1968-1988. Small area estimation (SAE) methods are used in inference for finite populations to obtain estimates of parameters of interest when domain sample sizes are too small to provide adequate precision for direct domain estimators. The second work develops a new area-level SAE method using LMMs. In particular, since area-level SAE models consider a sampling and a linking model, a LMM is used as the linking model. In a hierarchical Bayesian framework the sampling model is introduced as the highest level of hierarchy. In this context, data are considered aggregated because direct estimates are usually means and frequencies. Under the assumption of normality of the response variable, the model is estimated using Gibbs sampling in a data augmentation context. The application field in this second work is particularly relevant: it uses yearly unemployment rates at Local Labour Market Areas level for the period 2004-2014 from the Labour Force Survey conducted by the Italian National Statistical Institute (ISTAT).
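
The measurement/latent decomposition the abstract refers to can be written compactly (generic notation, not the thesis's own):

```latex
% Latent model: a first-order Markov chain over k latent states.
\[ P(U_t = v \mid U_{t-1} = u) = \pi_{uv}, \qquad U_t \in \{1, \dots, k\} \]
% Measurement model: responses conditionally independent given the chain,
% with state-specific parameters theta_u.
\[ Y_t \mid U_t = u \;\sim\; f(\,\cdot \mid \theta_u) \]
```

The spatial extension developed in the thesis lets the initial and transition probabilities of the chain depend on the latent states of neighbouring areas.
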
2. Moura, Fernando Antonio da Silva. "Small Area Estimation Using Multilevel Models." Thesis, University of Southampton, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241157.

3. Oleson, Jacob J. "Bayesian Spatial Models for Small Area Estimation." PhD diss., University of Missouri, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3052203.

4. Zhang, Qiong. "Small Area Quantile Estimation under Unit-Level Models." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/62871.

Abstract:
Sample surveys are widely used as a cost-effective way to collect information on variables of interest in target populations. In applications, we are generally interested in parameters such as population means, totals, and quantiles. Similar parameters for subpopulations or areas, formed by geographic areas and socio-demographic groups, are also of interest. However, the sample size might be small or even zero in some subpopulations due to probability sampling and budget limitations. There has been intensive research on how to produce reliable estimates for characteristics of interest for subpopulations for which the sample size is small or even zero. We call this line of research Small Area Estimation (SAE). In this thesis, we study the performance of a number of small area quantile estimators based on a popular unit-level model and its variations. When a finite population can be regarded as a sample from some model, we may use the whole sample from the finite population to determine the model structure with good precision. The information can then be used to produce more reliable estimates for small areas. However, if the model assumption is wrong, the resulting estimates can be misleading and their mean squared errors can be underestimated. Therefore, it is critical to check the robustness of estimators under various model mis-specification scenarios. In this thesis, we first conduct simulation studies to investigate the performance of three small area quantile estimators in the literature. They are found not to be very robust in some likely situations. Based on these observations, we propose an approach to obtain more robust small area quantile estimators. Simulation results show that the proposed new methods have superior performance either when the error distribution in the model is non-normal or when the data set contains many outliers.
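
For context, the 'popular unit-level model' the abstract starts from is usually the nested error regression model (often attributed to Battese, Harter, and Fuller); in generic notation:

```latex
% Unit j in area i, covariates x_ij, area effect v_i, unit-level error e_ij.
\[ y_{ij} = x_{ij}^{\top}\beta + v_i + e_{ij}, \qquad
   v_i \sim N(0, \sigma_v^2), \quad e_{ij} \sim N(0, \sigma_e^2) \]
```

Small area quantiles are then read off the implied area-specific distribution of y_ij, which is precisely where the normality assumption bites and where the robustness issues studied in the thesis arise.
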
5. Ganesh, Nadarajasundaram. "Small Area Estimation and Prediction Problems: Spatial Models, Bayesian Multiple Comparisons and Robust MSE Estimation." PhD diss., University of Maryland, College Park, 2007. http://hdl.handle.net/1903/7241.

6. Stukel, Diana M. "Small Area Estimation under One and Two-Fold Nested Error Regression Models." Dissertation, Carleton University, Ottawa, 1991.

7. Wanjoya, Antony Kibira. "A Flexible Characterization of Models for Small Area Estimation: Theoretical Developments and Applications." Doctoral thesis, Università degli Studi di Padova, 2011. http://hdl.handle.net/11577/3421673.

Abstract:
The demand for reliable small area estimates derived from survey data has increased greatly in recent years due to, among other things, their growing use in formulating policies and programs, allocation of government funds, regional planning, small area business decisions and other applications. Traditional area-specific (direct) estimates may not provide acceptable precision for small areas because sample sizes are seldom large enough in many small areas of interest. This makes it necessary to borrow information across related areas through indirect estimation based on models, using auxiliary information such as recent census data and current administrative data. Methods based on models are now widely accepted. The principal focus of this thesis is the development of a flexible modeling strategy in small area estimation, with demonstrations and evaluations using the 1989 United States Census Bureau median income dataset. This dissertation is divided into two main parts: the first part deals with the development of the proposed model and its comparison to the standard area-level Fay-Herriot model through the empirical Bayes (EB) approach. Results from these two models are compared in terms of average relative bias, average squared relative bias, average absolute bias, and average squared deviation, as well as the empirical mean square error. The proposed model exhibits remarkably better performance than the standard Fay-Herriot model. The second part represents our attempt to construct a hierarchical Bayes (HB) approach to estimate parameters for the proposed model, with implementation carried out by Markov chain Monte Carlo (MCMC) techniques. MCMC was implemented via the Gibbs sampling algorithm using the R software package. We used several graphical tools to assess convergence and determine the length of the burn-in period. Results from the two models are compared in terms of average relative bias, average squared relative bias and average absolute bias. Our empirical results highlight the superiority of the proposed model over the Fay-Herriot model. However, the advantage of the proposed model comes at a price, since its implementation is mildly more difficult than that of the Fay-Herriot model.

8. Warnholz, Sebastian. "Small Area Estimation Using Robust Extensions to Area Level Models: Theory, Implementation and Simulation Studies." Doctoral thesis, Freie Universität Berlin, 2016. http://d-nb.info/1112553045/34.

9. Yu, Mingyu. "Nested-Error Regression Models and Small Area Estimation Combining Cross-Sectional and Time Series Data." Dissertation, Carleton University, Ottawa, 1993.

10. Liu, Shiao. "Bayesian Analysis of Crime Survey Data with Nonresponse." Thesis, Worcester Polytechnic Institute, 2018. https://digitalcommons.wpi.edu/etd-theses/1175.

Abstract:
Bayesian hierarchical models are effective tools for small area estimation by pooling small datasets together. The pooling procedures allow individual areas to "borrow strength" from each other, improving the estimation. This work extends Nandram and Choi (2002), NC, to perform inference on finite population proportions when the missing-data pattern for nonresponse in binary survey data is non-identifiable. We review the small-area selection model (SSM) in NC, which is able to incorporate this non-identifiability. Moreover, the proposed SSM, together with the individual-area selection model (ISM) and the small-area pattern-mixture model (SPM), is evaluated on real crime data from Stasny (1991). Furthermore, the methodology is compared to the ISM and SPM using simulated small area datasets. Computational issues related to the MCMC are also discussed.

Books on the topic "Latent Models, Small Area Estimation"

1. Sugasawa, Shonosuke, and Tatsuya Kubokawa. Mixed-Effects Models and Small Area Estimation. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-9486-9.

2. Pratesi, Monica, ed. Analysis of Poverty Data by Small Area Estimation. Chichester, West Sussex: John Wiley & Sons, 2016.

3. Morales, Domingo, María Dolores Esteban, Agustín Pérez, and Tomáš Hobza. A Course on Small Area Estimation and Mixed Models. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63757-6.

4. Cernat, Alexandru, and Joseph W. Sakshaug, eds. Measurement Error in Longitudinal Data. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198859987.001.0001.

Abstract:
Understanding change is essential in most scientific fields. This is highlighted by the importance of issues such as shifts in public health and changes in public opinion regarding politicians and policies. Nevertheless, our measurements of the world around us are often imperfect. For example, measurements of attitudes might be biased by social desirability, while estimates of health may be marred by low sensitivity and specificity. In this book we tackle the important issue of how to understand and estimate change in the context of data that are imperfect and exhibit measurement error. The book brings together the latest advances in the area of estimating change in the presence of measurement error from a number of different fields, such as survey methodology, sociology, psychology, statistics, and health. Furthermore, it covers the entire process, from the best ways of collecting longitudinal data, to statistical models to estimate change under uncertainty, to examples of researchers applying these methods in the real world. The book introduces the reader to essential issues of longitudinal data collection such as memory effects, panel conditioning (or mere measurement effects), the use of administrative data, and the collection of multi-mode longitudinal data. It also introduces the reader to some of the most important models used in this area, including quasi-simplex models, latent growth models, latent Markov chains, and equivalence/DIF testing. Further, it discusses the use of vignettes in the context of longitudinal data and estimation methods for multilevel models of change in the presence of measurement error.

Book chapters on the topic "Latent Models, Small Area Estimation"

1. Longford, Nicholas T. "Small-area estimation." In Models for Uncertainty in Educational Testing, 199–230. New York, NY: Springer New York, 1995. http://dx.doi.org/10.1007/978-1-4613-8463-2_8.

2. Morales, Domingo, María Dolores Esteban, Agustín Pérez, and Tomáš Hobza. "Small Area Estimation." In A Course on Small Area Estimation and Mixed Models, 1–11. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63757-6_1.

3. Haughton, Dominique, and Jonathan Haughton. "Multilevel Models and Small-Area Estimation." In Living Standards Analytics, 273–87. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4614-0385-2_13.

4. Esteban, María Dolores, Domingo Morales, and Agustín Pérez. "Area-level Spatio-temporal Small Area Estimation Models." In Analysis of Poverty Data by Small Area Estimation, 205–26. Chichester, UK: John Wiley & Sons, Ltd, 2016. http://dx.doi.org/10.1002/9781118814963.ch11.

5. Sugasawa, Shonosuke, and Tatsuya Kubokawa. "Extensions of Basic Small Area Models." In Mixed-Effects Models and Small Area Estimation, 99–121. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-9486-9_8.

6. Sugasawa, Shonosuke, and Tatsuya Kubokawa. "Advanced Theory of Basic Small Area Models." In Mixed-Effects Models and Small Area Estimation, 67–81. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-9486-9_6.

7. Morales, Domingo, María Dolores Esteban, Agustín Pérez, and Tomáš Hobza. "Area-Level Poisson Mixed Models." In A Course on Small Area Estimation and Mixed Models, 547–72. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63757-6_20.

8. Bocci, Chiara, and Alessandra Petrucci. "Spatial Information and Geoadditive Small Area Models." In Analysis of Poverty Data by Small Area Estimation, 245–60. Chichester, UK: John Wiley & Sons, Ltd, 2016. http://dx.doi.org/10.1002/9781118814963.ch13.

9. Sugasawa, Shonosuke, and Tatsuya Kubokawa. "General Mixed-Effects Models and BLUP." In Mixed-Effects Models and Small Area Estimation, 5–22. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-9486-9_2.

10. Swamy, P. A. V. B., J. S. Mehta, G. S. Tavlas, and S. G. Hall. "Small Area Estimation with Correctly Specified Linking Models." In Recent Advances in Estimating Nonlinear Models, 193–228. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-8060-0_10.


Conference papers on the topic "Latent Models, Small Area Estimation"

1. He, Jia, Changying Du, Changde Du, Fuzhen Zhuang, Qing He, and Guoping Long. "Nonlinear Maximum Margin Multi-View Learning with Adaptive Kernel." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/254.

Abstract:
Existing multi-view learning methods based on kernel functions either require the user to select and tune a single predefined kernel or have to compute and store many Gram matrices to perform multiple kernel learning. Apart from the huge consumption of manpower, computation and memory resources, most of these models seek point estimation of their parameters, and are prone to overfitting on small training data. This paper presents an adaptive kernel nonlinear max-margin multi-view learning model under the Bayesian framework. Specifically, we regularize the posterior of an efficient multi-view latent variable model by explicitly mapping the latent representations extracted from multiple data views to a random Fourier feature space where max-margin classification constraints are imposed. Assuming these random features are drawn from Dirichlet process Gaussian mixtures, we can adaptively learn shift-invariant kernels from data according to Bochner's theorem. For inference, we employ the data augmentation idea for hinge loss, and design an efficient gradient-based MCMC sampler in the augmented space. Having no need to compute the Gram matrix, our algorithm scales linearly with the size of the training set. Extensive experiments on real-world datasets demonstrate that our method has superior performance.
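
The random Fourier feature construction at the heart of this paper is easy to sketch. The snippet below shows the standard fixed-kernel version via Bochner's theorem; the paper goes further and learns the spectral distribution with a Dirichlet process Gaussian mixture, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_map(X, W, b):
    """Random Fourier features: Z @ Z.T approximates an RBF Gram matrix."""
    n_features = W.shape[1]
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

d, n_features, lengthscale = 3, 256, 1.0
W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))  # spectral samples
b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)             # random phases

X = rng.normal(size=(5, d))
Z = rff_map(X, W, b)
print(np.round(Z @ Z.T, 2))  # approximate kernel matrix, no Gram matrix stored
```

Because W and b are sampled once and shared across data points, the feature map scales linearly in the number of observations, which is the property the abstract emphasizes.
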
2. Sofronov, Georgy. "A hybrid algorithm for spatial small area estimation under models with complex contiguity." In 2013 IEEE Symposium on Differential Evolution (SDE). IEEE, 2013. http://dx.doi.org/10.1109/sde.2013.6601438.

3. Bass, D., and K. Haddara. "Roll Damping For Small Fishing Vessels." In SNAME 22nd American Towing Tank Conference. SNAME, 1989. http://dx.doi.org/10.5957/attc-1989-050.

Abstract:
The successes of naval architects in predicting ship motions in waves have been mainly confined to motions other than roll. For roll motions it has long been recognized that roll damping for wave encounter frequencies near the natural roll frequency is not only an extremely significant parameter but also very difficult to predict accurately. Because of the need to consider viscous flow (or even 'slightly viscous flow') to correctly model roll damping phenomena, there is still some way to go before an adequate numerical model of the hydrodynamics of roll is forthcoming. For this reason empirical and semi-empirical methods have played and will continue to play an important role in the prediction of roll motions. For small fishing vessels with deep skegs, hard chines and nonstandard hull shapes, the prediction of roll damping is particularly difficult due to lack of available data bases, semi-empirical formulae, and of course adequate theoretical models. It was for this reason that the present study was undertaken. In all, six small fishing vessels will be studied, each of which lies in the 'less-than 25 meter' class. They are all of similar dimensions but have varying hull forms ranging from the angular (e.g. model '363') to the rounded hull form of '366' (see figure 1). Apart from providing a data base for the estimation of damping for different hull forms, the study will be used in the analysis of mathematical and/or numerical methods for the prediction of roll damping that the authors hope to develop (or hope will be developed) in the future. The present paper describes preliminary investigations of roll damping characteristics for just 3 of the boats. The methodology employed was that of the classical roll decay test with an innovative feature, namely the use of a newly developed method of analysis which enabled the authors to obtain the non-dimensional damping coefficient from the complete roll decay curve taken over just one full cycle. This method of analysis is based on an energy approach and is explained in [1]. Using this approach, the roll damping moment dependence on the initial roll angle is easy to obtain. The emphasis in this paper is a 'frequency domain' analysis of the results with equivalent linear damping as the primary target. The advantage of the simple decay test is that it allows for analysis in both the frequency and time domains. A study of the results in the time domain will be presented in a later paper. The simplicity of the roll decay experiment also means that many experiments can be performed and regression analysis carried out on the results. Over one thousand such tests were performed for the three models in this study. The body plans for the three models are shown in figures 1 (a), (b), (c), and their particulars are given in Table 1. In the experiments the models were attached to a dynamometer with just 2 degrees of freedom; the model was free to roll and heave, but restrained in all other modes.

4. Morales-Valdez, Jesús, and Luis Alvarez-Icaza. "Building Stiffness Estimation by Wave Traveling Times." In ASME 2014 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/dscc2014-6314.

Abstract:
A novel technique to estimate stiffness in buildings is presented. In contrast with most of the available work in the literature, which resorts to diverse forms of modal analysis, this local technique is based on the propagation of a Ricker pulse through the structure and on measuring the wave arrival times at each story of the building, represented as a single layer in a multiple stratum model. These arrival times are later used to recover the building stiffness at each story. Wave propagation is based on the Thomson-Haskell method, which allows the approach to be generalized to multi-story buildings without significant changes to the original formulation. The number of calculated parameters is small in comparison with methods based on modal analysis. This technique provides a quick and easy methodology to assess building integrity and is an interesting alternative for verifying results obtained by other identification methods. Simulation results for a building with heterogeneous characteristics across the stories confirm the feasibility of the proposal.

5. Givi, Behrang Sadeghi. "A Model for Analyzing Flow Transients in a Single Closed Loop." In ASME 2008 Fluids Engineering Division Summer Meeting collocated with the Heat Transfer, Energy Sustainability, and 3rd Energy Nanotechnology Conferences. ASMEDC, 2008. http://dx.doi.org/10.1115/fedsm2008-55091.

Abstract:
A mathematical model is developed upon which the velocity of flow during a centrifugal pump failure transient is determined analytically without the use of pump characteristic curves. The influence of the two most important parameters, kinetic energy in the piping system and kinetic energy of the pump, is considered in the form of a ratio, called hereafter an effective energy ratio. The results show that the effect of a mechanical friction loss on the flow rate is very small in the early stage of the pump failure transient, and the time of two-thirds decay of the flow is not affected very much by the friction loss. However, this effect is larger in the later stage of the flow decay. Therefore, the time when the flow rate becomes zero depends very much on the estimation of this loss. Good agreement is noted when the results of the analytical method are compared with those obtained by the use of experimental characteristic curves. The model is also used to analyze the transient during a fast startup. Further comparison of the analytical model with the collected flow transient experimental data also shows very good agreement.

6. Lee, Ungki, and Ikjin Lee. "Sampling-Based Reliability Analysis Using Deep Feedforward Neural Network (DFNN)." In ASME 2020 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/detc2020-22275.

Abstract:
Reliability analysis that evaluates a probabilistic constraint is an important part of reliability-based design optimization (RBDO). Inverse reliability analysis evaluates the percentile value of the performance function that satisfies the reliability. To compute the percentile value, analytical methods, surrogate model based methods, and sampling-based methods are commonly used. When the dimension or nonlinearity of the performance function is high, sampling-based methods such as Monte Carlo simulation, Latin hypercube sampling, and importance sampling can be directly used for reliability analysis since no analytical formulation or surrogate model is required in these methods. The sampling-based methods have high accuracy but require a large number of samples, which can be very time-consuming. Therefore, this paper proposes methods that can improve the accuracy of reliability analysis when the number of samples is not enough and the sampling-based methods are considered to be better candidates. This study starts with the idea of training the relationship between the realization of the performance function at a small sample size and the corresponding true percentile value of the performance function. Deep feedforward neural network (DFNN), which is one of the promising artificial neural network models that approximates high dimensional models using deep layered structures, is trained using the realization of various performance functions at a small sample size and the corresponding true percentile values as input and target training data, respectively. In this study, various polynomial functions and random variables are used to create training data sets consisting of various realizations and corresponding true percentile values. A method that approximates the realization of the performance function through kernel density estimation and trains the DFNN with the discrete points representing the shape of the kernel distribution to reduce the dimension of the training input data is also presented. Along with the proposed reliability analysis methods, a strategy that reuses samples of the previous design point to enhance the efficiency of the percentile value estimation is explained. The results show that the reliability analysis using the DFNN is more accurate than the method using only samples. In addition, compared to the method that trains the DFNN using the realization of the performance function, the method that trains the DFNN with the discrete points representing the shape of the kernel distribution improves the accuracy of reliability analysis and reduces the training time. It is also verified that the proposed sample reuse strategy reduces the burden of function evaluation at the new design point by reusing the samples of the previous design point when the design point changes while performing RBDO.
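
The dimension-reduction idea described in the abstract, summarizing a small-sample realization of the performance function by kernel density values on a fixed grid, can be sketched as follows. The performance function, sample size, and grid below are illustrative assumptions, not the paper's choices:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def g(x):                        # hypothetical performance function
    return x[:, 0] ** 2 + 2.0 * x[:, 1]

x = rng.normal(size=(200, 2))    # small Monte Carlo sample of random inputs
y = g(x)                         # realization of the performance function

grid = np.linspace(y.min(), y.max(), 32)   # fixed evaluation grid
features = gaussian_kde(y)(grid)           # 32 density values as DFNN input

# A trained DFNN (not shown) would map `features` to the true percentile;
# the crude small-sample estimate below is what it aims to improve upon.
print(features.shape, np.percentile(y, 99))
```
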
7. Xu, Steven X., and Kim Wallin. "Exploratory Analysis to Estimate Axial Fracture Toughness for Zr-2.5Nb Pressure Tubes Using Test Data From Small Curved Compact Specimens." In ASME 2017 Pressure Vessels and Piping Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/pvp2017-66116.

Abstract:
Zr-2.5Nb pressure tubes are in-core, primary coolant containment of CANDU nuclear reactors. Technical requirements for in-service evaluation of pressure tubes are provided in the Canadian Standards Association (CSA) N285.8. These requirements include the evaluation of service conditions for protection against fracture of operating pressure tubes and demonstration of leak-before-break. Axial fracture toughness for pressure tubes is a key input in the evaluation of fracture protection and leak-before-break. The 2015 Edition of CSA N285.8 provides a pressure tube axial fracture toughness prediction model that is applicable to pressure tube late life conditions. The fracture toughness prediction model in CSA N285.8-15 is based on rising pressure burst tests performed on pressure tube sections with axial cracks under simulated pressure tube late life conditions. Due to the associated high cost of testing and high consumption of pressure tube material, it is not practical to perform a large number of fracture toughness burst tests. On the other hand, more fracture toughness data is required to improve the existing pressure tube axial fracture toughness prediction model. There is strong motivation to estimate pressure tube axial fracture toughness using test data from small specimens. The estimated pressure tube fracture toughness using test data from small specimens can fill the gaps in the burst test toughness data, as well as provide information on material variability and data scatter. Against this background, an exploratory analysis of estimating pressure tube axial fracture toughness using test data from small curved compact specimens has been performed and is described in this paper. The estimated values of pressure tube axial fracture toughness using the test data from small curved compact specimens are compared with the measured toughness from burst tests of pressure tube sections with axial cracks to check the feasibility of this approach.

8. Durocher, Antoine, Jiayi Wang, Gilles Bourque, and Jeffrey M. Bergthorson. "Impact of Boundary Condition and Kinetic Parameter Uncertainties on NOx Predictions in Methane-Air Stagnation Flame Experiments." In ASME Turbo Expo 2021: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/gt2021-59404.

Abstract:
A comprehensive understanding of uncertainty sources in experimental measurements is required to develop robust thermochemical models for use in industrial applications. Due to the complexity of the combustion process in gas turbine engines, simpler flames are generally used to study fundamental combustion properties and measure concentrations of important species to validate and improve modelling. Stable, laminar flames have increasingly been used to study nitrogen oxide (NOx) formation in lean-to-rich compositions in low-to-high pressures to assess model predictions and improve accuracy to help develop future low-emissions systems. They allow for non-intrusive diagnostics to measure sub-ppm concentrations of pollutant molecules, as well as important precursors, and provide well-defined boundary conditions to directly compare experiments with simulations. The uncertainties of experimentally measured boundary conditions and the inherent kinetic uncertainties in the nitrogen chemistry are propagated through one-dimensional stagnation flame simulations to quantify the relative importance of the two sources and estimate their impact on predictions. Measurements in lean, stoichiometric, and rich methane-air flames are used to investigate the production pathways active in those conditions. Various spectral expansions are used to develop surrogate models with different levels of accuracy to perform the uncertainty analysis for 15 important reactions in the nitrogen chemistry and the 6 boundary conditions (ϕ, T_in, u_in, du/dz_in, T_surf, P) simultaneously. After estimating the individual parametric contributions, the uncertainty of the boundary conditions is shown to have a relatively small impact on the prediction of NOx compared to kinetic uncertainties in these laboratory experiments. These results show that properly calibrated laminar flame experiments can not only provide validation targets for modelling, but also accurate indirect measurements that can later be used to infer individual kinetic rates to improve thermochemical models.

9. Kraus, Adam R., Rui Hu, Darius D. Lisowski, and Matthew Bucknor. "Simulation of Buoyancy-Driven Flow for Various Power Levels at the NSTF." In 2016 24th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/icone24-60763.

Abstract:
The Reactor Cavity Cooling System (RCCS) is an important passive safety system that is being incorporated in a number of high temperature reactor design concepts. The Natural convection Shutdown heat removal Test Facility (NSTF), located at Argonne National Laboratory, is an experiment with the objective of investigating the flow and thermal behavior of a particular air-cooled RCCS design. It consists of 12 ducts surrounded by a cavity with a heated wall, through which air flows via natural convection before exiting through two chimneys. The NSTF is a ½-scale facility, and is well instrumented in order to provide data for code validation, including Computational Fluid Dynamics (CFD)-grade data in a number of locations. Instrumentation includes fiber-optic Distributed Temperature Sensors (DTS) throughout one of the riser ducts and in the upper plenum. In conjunction with the experimental tests, CFD simulations were performed to support the design and optimization of these natural convection systems. The CFD simulations were performed using the “as-tested” geometry of the NSTF. All CFD simulations were steady-state. Both a full natural convection model and a smaller forced primary flow model were tested. The influence of boundary conditions, notably at the cavity walls, was tested. Initial simulations assumed adiabatic walls but these were later adapted to simulate heat losses, aided by thermal images taken of the exterior NSTF surfaces during testing. Simulations were run for tests at two different power levels. A number of turbulence models were compared to test their influence. Simulation results were compared with experimental data. Convergence was generally good for both models. It was found that the natural convection model was indeed beneficial for correctly estimating local temperatures in a number of areas, particularly near the top of the riser ducts and from DTS measurements along the flow path. Flow in the heated cavity was complex. In general, the experimental trends were predicted well by CFD, although magnitudes could be improved in some areas. The turbulence models tested had a relatively small effect on the shape of the temperature profile in the ducts and on heated surface temperatures. Results from the simulations have been of direct use in improving test procedures and choosing locations for more accurate instrumentation. In future work, full natural convection simulations of more tests will be performed. After this has been completed, best practices can be established for accurately simulating these general types of natural convection systems across a wide range of operating conditions.

10. Maisondieu, Christophe, Øyvind Breivik, Jens-Christian Roth, Arthur A. Allen, Bertrand Forest, and Marc Pavec. "Methods for Improvement of Drift Forecast Models." In ASME 2010 29th International Conference on Ocean, Offshore and Arctic Engineering. ASMEDC, 2010. http://dx.doi.org/10.1115/omae2010-20219.

Abstract:
Over the past decades, various operational drift forecast models were developed for trajectory prediction of objects lost at sea for search and rescue operations. Most of these models are now based on a stochastic, Monte Carlo definition of the object’s initial position and its time-evolving search area through computation of an ensemble of equally probable trajectories (Breivik [1]). Uncertainties in environmental forcing, mainly surface currents and wind, as well as the uncertainties inherent in the simplified computation of leeway speed and direction relative to the wind are also accounted for through this ensemble-based approach. Accuracy of the drift forecast obviously depends to a large extent on the quality of the environmental forecast data provided by numerical weather prediction models and ocean models, but it also depends on the level of uncertainty associated with the estimation of the drift properties (leeway) of the objects themselves. The present work mostly focuses on this second aspect of the problem. Drift properties of objects can be described by means of their downwind and crosswind leeway coefficients, according to the definition of leeway as stated by Allen [2, 3]. Assessment of the leeway coefficients is based on a direct method, which requires measurements acquired during field tests. Such field experiments basically entail deploying one or more objects at sea and simultaneously recording the environmental parameters (namely wind speed and motion of the object relative to the ambient water masses, i.e., its leeway) as well as the object’s position while adrift for periods ranging from several hours to several days. Using this method, a large database providing leeway coefficients for more than sixty object classes ranging from medical waste to a person-in-water to small fishing vessels was compiled over the years by the United States Coast Guard (Allen [2]). More recently additional trials were conducted, which allowed evaluation of new objects, including 20-ft shipping containers. We present in this paper the methods and analysis procedures for field determination of leeway coefficients of typical search-and-rescue objects. As an example we present the case study of a 20-ft container and discuss results obtained from a drift forecast model assessing sensitivity of such a model to the quality of environmental data as well as uncertainty levels of some reference parameters.