Academic literature on the topic 'Optimum Sampling Design'

Consult the lists of relevant articles, books, theses, book chapters, and conference papers on the topic 'Optimum Sampling Design' below. Where available in the metadata, each entry links to the full text of the publication and its abstract.

Journal articles on the topic "Optimum Sampling Design"

1. Omule, S. A. Y. "Optimum Design in Multivariate Stratified Sampling." Biometrical Journal 27, no. 8 (1985): 907–12. http://dx.doi.org/10.1002/bimj.4710270813.

2. Francis, R. I. C. Chris. "Optimum design for catch sampling of eels." Marine and Freshwater Research 50, no. 4 (1999): 343. http://dx.doi.org/10.1071/mf98147.

Abstract:
Data from the first two seasons of a catch-sampling programme for New Zealand freshwater eels (Anguilla australis and A. diefenbachii) are described. These are used in two simulation experiments to provide information to optimize future sampling. Results are presented in the form of equations that predict the precision of estimates (of species composition, mean size, and mean age at two different sizes) as a function of various survey design variables. Precision typically depends more on the number of landings (catches) that are sampled than the number of eels sampled per landing. Also, the precision obtainable from a given design varies substantially from fishery to fishery. For example, if 50 eels were measured and 10 otoliths collected from each of five landings, estimated standard errors varied by a factor of 9 for species composition, 19 for mean weight, and 5 for mean age at the minimum legal size, depending on which fishery was sampled. Results for mean age estimates are more restricted and less certain (than those for species composition and mean size) because age data were fewer. Three further optimization issues are discussed: sampling costs, the importance of ‘minor’ species, and the pool of fishers from which samples are collected.
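The abstract's central finding, that precision depends more on the number of landings than on the number of eels sampled per landing, mirrors the textbook variance approximation for a two-stage (cluster) sample mean. A minimal sketch with invented variance components (not values from the paper):

```python
def two_stage_se(n_landings, eels_per_landing,
                 var_between=4.0, var_within=1.0):
    """Standard error of a two-stage sample mean: the between-landing
    term shrinks only with the number of landings, while the
    within-landing term shrinks with the total number of eels."""
    variance = (var_between / n_landings
                + var_within / (n_landings * eels_per_landing))
    return variance ** 0.5

# Doubling the number of landings helps far more than doubling
# the number of eels measured per landing:
base = two_stage_se(5, 50)
more_landings = two_stage_se(10, 50)
more_eels = two_stage_se(5, 100)
```

When the between-landing component dominates, adding landings is the effective lever, which is consistent with the pattern the paper reports.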

3. Ansari, Athar Hussain, Rahul Varshney, Najmussehar, and Mohammad Jameel Ahsan. "An optimum multivariate-multiobjective stratified sampling design." METRON 69, no. 3 (December 2011): 227–50. http://dx.doi.org/10.1007/bf03263559.

4. Sultan, Torky I. "Optimum design of sampling plans in electronic industry." Microelectronics Reliability 34, no. 8 (August 1994): 1369–73. http://dx.doi.org/10.1016/0026-2714(94)90152-x.

5. Asai, Takahiro. "Optimum design of multistage synchronous sampling rate converter." Electronics and Communications in Japan (Part III: Fundamental Electronic Science) 86, no. 4 (December 13, 2002): 26–37. http://dx.doi.org/10.1002/ecjc.10015.

6. Ritter, Axel, and Carlos M. Regalado. "Roving revisited, towards an optimum throughfall sampling design." Hydrological Processes 28, no. 1 (October 23, 2012): 123–33. http://dx.doi.org/10.1002/hyp.9561.

7. Bakhshi, Ziaul. "Stochastic Optimization in Multivariate Stratified Double Sampling Design." International Journal of Engineering Technologies and Management Research 5, no. 1 (February 8, 2020): 115–22. http://dx.doi.org/10.29121/ijetmr.v5.i1.2018.54.

Abstract:
This paper deals with the optimum allocation of sample sizes in stratified double sampling when the costs in the objective function are treated as random. When the cost function is random, the objective function is converted into an equivalent deterministic form by applying a modified E-model. A numerical example illustrates the computational procedure, and the problem is solved using the LINGO software.
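For orientation, the deterministic core that such stochastic-cost formulations generalize is Neyman's optimum allocation for stratified sampling. A minimal sketch with illustrative stratum sizes and standard deviations (not taken from the paper):

```python
def neyman_allocation(N, S, n):
    """Split a total sample of size n across strata in proportion to
    N_h * S_h (stratum size times stratum standard deviation), which
    minimizes the variance of the stratified mean for fixed n."""
    weights = [Nh * Sh for Nh, Sh in zip(N, S)]
    total = sum(weights)
    return [round(n * w / total) for w in weights]

# Large, highly variable strata receive most of the sample:
alloc = neyman_allocation(N=[500, 300, 200], S=[10.0, 5.0, 2.0], n=100)
# → [72, 22, 6]
```

Double sampling and random costs, as in the paper, replace the known quantities here with estimates and a stochastic budget constraint.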

8. Varshney, Rahul, Srikant Gupta, and Irfan Ali. "An Optimum Multivariate-Multiobjective Stratified Sampling Design: Fuzzy Programming Approach." Pakistan Journal of Statistics and Operation Research 13, no. 4 (December 1, 2017): 829. http://dx.doi.org/10.18187/pjsor.v13i4.1834.

9. Hao, Peng, Bo Wang, and Gang Li. "Surrogate-Based Optimum Design for Stiffened Shells with Adaptive Sampling." AIAA Journal 50, no. 11 (November 2012): 2389–407. http://dx.doi.org/10.2514/1.j051522.

10. El-Shall, Hassan, and Brij M. Moudgil. "Design of Optimum Sampling Plans for Dry Powders and Slurries." KONA Powder and Particle Journal 31 (2014): 82–91. http://dx.doi.org/10.14356/kona.2014014.

Dissertations / Theses on the topic "Optimum Sampling Design"

1. Ringer, William P. "Design, construction and analysis of a 14-bit direct digital antenna utilizing optical sampling and optimum SNS encoding." Monterey, California: Naval Postgraduate School, 1997. http://hdl.handle.net/10945/8215.

Abstract:
Approved for public release; distribution is unlimited.
Direct digital direction finding (DF) antennas will allow an incoming signal to be digitally encoded at the antenna with high dynamic range (14 bits ≈ 86 dB) without the down conversion that is typically necessary. As a shipboard DF device, it also allows for the encoding of wide-band, high-power signals (e.g., ±43 volts) that can often appear on shipboard antennas due to the presence of in-band transmitters located close by. This design utilizes three pulsed-laser-driven Mach-Zehnder optical interferometers to sample the RF signal. Each channel requires only 6-bit accuracy (64 comparators) to produce an Optimum Symmetrical Number System (OSNS) residue representation of the input signal. These residues are then sent to a locally programmed Field Programmable Gate Array (FPGA) for decoding into a 14-bit digital representation of the input RF voltage. Modern FPGA devices are rapidly becoming the state of the art in programmable logic. The inclusion of on-chip flip-flops allows for a fast and efficient pipelined approach to OSNS decoding. This thesis documents the first 14-bit digital antenna which utilizes an FPGA algorithm as a method of OSNS decoding. The design uses FPGA processors for both OSNS decoding and parity processing.

2. De Schaetzen, Werner. "Optimal calibration and sampling design for hydraulic network models." Thesis, University of Exeter, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322278.

3. Afrifa-Yamoah, Ebenezer. "Imputation, modelling and optimal sampling design for digital camera data in recreational fisheries monitoring." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2021. https://ro.ecu.edu.au/theses/2387.

Abstract:
Digital camera monitoring has evolved as an active application-oriented scheme to help address questions in areas such as fisheries, ecology, computer vision, artificial intelligence, and criminology. In recreational fisheries research, digital camera monitoring has become a viable option for probability-based survey methods, and is also used for corroborative and validation purposes. In comparison to onsite surveys (e.g. boat ramp surveys), digital cameras provide a cost-effective method of monitoring boating activity and fishing effort, including night-time fishing activities. However, there are challenges in the use of digital camera monitoring that need to be resolved. Notably, missing data problems and the cost of data interpretation are among the most pertinent. This study provides relevant statistical support to address these challenges of digital camera monitoring of boating effort, to improve its utility to enhance recreational fisheries management in Western Australia and elsewhere, with capacity to extend to other areas of application. Digital cameras can provide continuous recordings of boating and other recreational fishing activities; however, interruptions of camera operations can lead to significant gaps within the data. To fill these gaps, some climatic and other temporal classification variables were considered as predictors of boating effort (defined as number of powerboat launches and retrievals). A generalized linear mixed effect model built on fully-conditional specification multiple imputation framework was considered to fill in the gaps in the camera dataset. Specifically, the zero-inflated Poisson model was found to satisfactorily impute plausible values for missing observations for varied durations of outages in the digital camera monitoring data of recreational boating effort. 
Additional modelling options were explored to guide both short- and long-term forecasting of boating activity and to support management decisions in monitoring recreational fisheries. Autoregressive conditional Poisson (ACP) and integer-valued autoregressive (INAR) models were identified as useful time series models for predicting short-term behaviour of such data. In Western Australia, digital camera monitoring data that coincide with 12-month state-wide boat-based surveys (now conducted on a triennial basis) have been read but the periods between the surveys have not been read. A Bayesian regression framework was applied to describe the temporal distribution of recreational boating effort using climatic and temporally classified variables to help construct data for such missing periods. This can potentially provide a useful cost-saving alternative of obtaining continuous time series data on boating effort. Finally, data from digital camera monitoring are often manually interpreted and the associated cost can be substantial, especially if multiple sites are involved. Empirical support for low-level monitoring schemes for digital camera has been provided. It was found that manual interpretation of camera footage for 40% of the days within a year can be deemed as an adequate level of sampling effort to obtain unbiased, precise and accurate estimates to meet broad management objectives. A well-balanced low-level monitoring scheme will ultimately reduce the cost of manual interpretation and produce unbiased estimates of recreational fishing indexes from digital camera surveys.
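As a rough illustration of the zero-inflated Poisson model used in the imputation step, the following computes its probability mass function; the mixing probability and rate below are invented, not estimated from the thesis data:

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: with probability pi the count is a
    structural zero (e.g. no launches recorded); otherwise the count
    follows a Poisson(lam) distribution."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1 - pi) * poisson
    return (1 - pi) * poisson

# Inflation makes zeros far more likely than a plain Poisson(2) would
# predict (~0.39 versus exp(-2) ~ 0.14):
p_zero = zip_pmf(0, lam=2.0, pi=0.3)
```

This extra mass at zero is what lets the model fit count data where many intervals record no boating activity at all.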

4. Cole, James Jacob. "Assessing Nonlinear Relationships through Rich Stimulus Sampling in Repeated-Measures Designs." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1587.

Abstract:
Explaining a phenomenon often requires identification of an underlying relationship between two variables. However, it is common practice in psychological research to sample only a few values of an independent variable. Young, Cole, and Sutherland (2012) showed that this practice can impair model selection in between-subject designs. The current study expands that line of research to within-subjects designs. In two Monte Carlo simulations, model discrimination under systematic sampling of 2, 3, or 4 levels of the IV was compared with that under random uniform sampling and sampling from a Halton sequence. The number of subjects, number of observations per subject, effect size, and between-subject parameter variance in the simulated experiments were also manipulated. Random sampling out-performed the other methods in model discrimination with only small, function-specific costs to parameter estimation. Halton sampling also produced good results but was less consistent. The systematic sampling methods were generally rank-ordered by the number of levels they sampled.
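The Halton sequence compared in this abstract can be generated in a few lines; this generic one-dimensional sketch (base 2, i.e. a van der Corput sequence) is not the study's code:

```python
def halton(i, base=2):
    """i-th element (1-indexed) of the Halton sequence in the given
    base: reverses the base-`base` digits of i around the radix point,
    filling [0, 1) quasi-uniformly rather than at random."""
    x, f = 0.0, 1.0
    while i > 0:
        f /= base
        x += f * (i % base)
        i //= base
    return x

# Successive points spread out instead of clustering:
points = [halton(i) for i in range(1, 5)]  # → [0.5, 0.25, 0.75, 0.125]
```

Unlike systematic sampling of a few fixed levels, such a sequence keeps filling in new values of the independent variable, which is what aids model discrimination.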

5. Ryan, Elizabeth G. "Contributions to Bayesian experimental design." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/79628/1/Elizabeth_Ryan_Thesis.pdf.

Abstract:
This thesis progresses Bayesian experimental design by developing novel methodologies and extensions to existing algorithms. Through these advancements, this thesis provides solutions to several important and complex experimental design problems, many of which have applications in biology and medicine. This thesis consists of a series of published and submitted papers. In the first paper, we provide a comprehensive literature review on Bayesian design. In the second paper, we discuss methods which may be used to solve design problems in which one is interested in finding a large number of (near) optimal design points. The third paper presents methods for finding fully Bayesian experimental designs for nonlinear mixed effects models, and the fourth paper investigates methods to rapidly approximate the posterior distribution for use in Bayesian utility functions.

6. Basudhar, Anirban. "Computational Optimal Design and Uncertainty Quantification of Complex Systems Using Explicit Decision Boundaries." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/201491.

Abstract:
This dissertation presents a sampling-based method that can be used for uncertainty quantification and deterministic or probabilistic optimization. The objective is to simultaneously address several difficulties faced by classical techniques based on response values and their gradients. In particular, this research addresses issues with discontinuous and binary (pass or fail) responses, and multiple failure modes. All methods in this research are developed with the aim of addressing problems that have limited data due to high cost of computation or experiment, e.g. vehicle crashworthiness, fluid-structure interaction, etc. The core idea of this research is to construct an explicit boundary separating allowable and unallowable behaviors, based on classification information of responses instead of their actual values. As a result, the proposed method is naturally suited to handle discontinuities and binary states. A machine learning technique referred to as support vector machines (SVMs) is used to construct the explicit boundaries. SVM boundaries can be highly nonlinear, which allows one to use a single SVM for representing multiple failure modes. One of the major concerns in the design and uncertainty quantification communities is to reduce computational costs. To address this issue, several adaptive sampling methods have been developed as part of this dissertation. Specific sampling methods have been developed for reliability assessment, deterministic optimization, and reliability-based design optimization. Adaptive sampling allows the construction of accurate SVMs with limited samples. However, like any approximation method, construction of SVM is subject to errors. A new method to quantify the prediction error of SVMs, based on probabilistic support vector machines (PSVMs), is also developed. It is used to provide a relatively conservative probability of failure to mitigate some of the adverse effects of an inaccurate SVM. In the context of reliability assessment, the proposed method is presented for uncertainties represented by random variables as well as spatially varying random fields. In order to validate the developed methods, analytical problems with known solutions are used. In addition, the approach is applied to some application problems, such as structural impact and tolerance optimization, to demonstrate its strengths in the context of discontinuous responses and multiple failure modes.
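The core idea here, learning an explicit pass/fail boundary from class labels rather than from response values, can be illustrated with a perceptron, a far simpler linear classifier than the SVMs the dissertation actually uses; the data and labels below are invented:

```python
def train_perceptron(samples, labels, epochs=50):
    """Learn a linear boundary w.x + b = 0 separating +1 ('safe')
    from -1 ('fail') samples via perceptron updates."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified
                w[0] += y * x1
                w[1] += y * x2
                b += y
    return w, b

# Binary pass/fail data: only the class label is used, so a
# discontinuous or binary response poses no difficulty.
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0), (3.0, 2.0), (2.0, 3.0)]
y = [-1, -1, -1, 1, 1, 1]
w, b = train_perceptron(X, y)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
```

An SVM plays the same role but with a maximum-margin, possibly kernelized boundary, which is what lets a single classifier represent several failure modes.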

7. Belouni, Mohamad. "Plans d'expérience optimaux en régression appliquée à la pharmacocinétique" [Optimal experimental designs in regression applied to pharmacokinetics]. Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM056/document.

Abstract:
The problem of interest is to estimate the concentration curve and the area under the curve (AUC) by estimating the parameters of a linear regression model with an autocorrelated error process. We construct a simple linear unbiased estimator of the concentration curve and the AUC. We show that this estimator, constructed from a sampling design generated by an appropriate density, is asymptotically optimal in the sense that it has exactly the same asymptotic performance as the best linear unbiased estimator (BLUE). Moreover, we prove that the optimal design is robust with respect to a misspecification of the autocovariance function according to a minimax criterion. When repeated observations are available, this estimator is consistent and has an asymptotic normal distribution. All of these results are extended to Hölder error processes with index between 0 and 2. Finally, for small sample sizes, a simulated annealing algorithm is applied to a pharmacokinetic model with correlated errors.

8. Benamara, Tariq. "Full-field multi-fidelity surrogate models for optimal design of turbomachines." Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2368.

Abstract:
Optimizing turbomachinery components stands as a real challenge despite recent advances in theoretical, experimental and High-Performance Computing (HPC) domains. This thesis introduces and validates optimization techniques assisted by full-field Multi-Fidelity Surrogate Models (MFSMs) based on Proper Orthogonal Decomposition (POD). The combination of POD and Multi-Fidelity Modeling (MFM) techniques makes it possible to capture the evolution of dominant flow features with geometry modifications. Two POD-based multi-fidelity optimization methods are proposed. The first one consists in an enrichment strategy dedicated to Gappy-POD (GPOD) models. It is more suitable for instantaneous low-fidelity computations, which makes it hardly tractable for aerodynamic design of turbomachines. This method is demonstrated on the flight domain study of a 2D airfoil from the literature. The second methodology is based on a multi-fidelity extension to Non-Intrusive POD (NIPOD) models. This extension starts with a re-interpretation of the Constrained POD (CPOD) concept and makes it possible to enrich the reduced space definition with abundant, albeit inaccurate, low-fidelity information. In the second part of the thesis, a benchmark test case is introduced to test full-field multi-fidelity optimization methodologies on an example presenting features representative of turbomachinery problems. The predictability of the proposed Multi-Fidelity NIPOD (MFNIPOD) surrogate models is compared to classical surrogates from the literature on both analytical and industrial-scale applications. Finally, we employ the proposed tool for the shape optimization of a 1.5-stage booster and compare the obtained results with standard state-of-the-art approaches.

9. Yngman, Gunnar. "Individualization of fixed-dose combination regimens: Methodology and application to pediatric tuberculosis." Thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-242059.

Abstract:
Introduction: No Fixed-Dose Combination (FDC) formulations currently exist for pediatric tuberculosis (TB) treatment. Earlier work implemented, in the software NONMEM, a rational method for optimizing the design and individualization of pediatric anti-TB FDC formulations based on patient body weight, but issues with parameter estimation, dosage strata heterogeneity and representative pharmacokinetics remained. Aim: To further develop the rational model-based methodology aiding the selection of appropriate FDC formulation designs and dosage regimens in pediatric TB treatment. Materials and Methods: Optimization of the method with respect to the estimation of body weight breakpoints was sought, as was improved homogeneity of dosage groups with respect to treatment efficiency. Recently published pediatric pharmacokinetic parameters were implemented and the model was translated to MATLAB, where its performance was evaluated by stochastic estimation and graphical visualization. Results: A logistic function was found better suited as an approximation of breakpoints. None of the estimation methods implemented in NONMEM was more suitable than the originally used FO method. Homogenization of dosage group treatment efficiency could not be achieved. The MATLAB translation was successful but required stochastic estimation and highlighted high densities of local minima. Representative pharmacokinetics were successfully implemented. Conclusions: NONMEM was found suboptimal for the task due to problems with discontinuities and heterogeneity, but a stepwise method with representative pharmacokinetics was successfully implemented. MATLAB showed more promise in the search for a method that also addresses the heterogeneity issue.

10. "Optimal Sampling Designs for Functional Data Analysis." Doctoral diss., 2020. http://hdl.handle.net/2286/R.I.57156.

Abstract:
Functional regression models are widely considered in practice. To precisely understand an underlying functional mechanism, a good sampling schedule for collecting informative functional data is necessary, especially when data collection is limited. However, scarce research has been conducted on the optimal sampling schedule design for the functional regression model so far. To address this design issue, efficient approaches are proposed for generating the best sampling plan in the functional regression setting. First, three optimal experimental designs are considered under a function-on-function linear model: the schedule that maximizes the relative efficiency for recovering the predictor function, the schedule that maximizes the relative efficiency for predicting the response function, and the schedule that maximizes the mixture of the relative efficiencies of both the predictor and response functions. The obtained sampling plan allows a precise recovery of the predictor function and a precise prediction of the response function. The proposed approach can also be reduced to identify the optimal sampling plan for the problem with a scalar-on-function linear regression model. In addition, the optimality criterion on predicting a scalar response using a functional predictor is derived when the quadratic relationship between these two variables is present, and proofs of important properties of the derived optimality criterion are also provided. To find such designs, an algorithm that is comparably fast, and can generate nearly optimal designs is proposed. As the optimality criterion includes quantities that must be estimated from prior knowledge (e.g., a pilot study), the effectiveness of the suggested optimal design highly depends on the quality of the estimates.
However, in many situations, the estimates are unreliable; thus, a bootstrap aggregating (bagging) approach is employed for enhancing the quality of estimates and for finding sampling schedules stable to the misspecification of estimates. Through case studies, it is demonstrated that the proposed designs outperform other designs in terms of accurately predicting the response and recovering the predictor. It is also proposed that bagging-enhanced design generates a more robust sampling design under the misspecification of estimated quantities.

Books on the topic "Optimum Sampling Design"

1. Ringer, William P. Design, construction and analysis of a 14-bit direct digital antenna utilizing optical sampling and optimum SNS encoding. Monterey, Calif.: Naval Postgraduate School, 1997.

2. Design, Construction and Analysis of a 14-Bit Direct Digital Antenna Utilizing Optical Sampling and Optimum SNS Encoding. Storming Media, 1997.

3. Lai, Han-Lin. Evaluation and validation of age determination for sablefish, pollock, Pacific cod and yellowfin sole: Optimum sampling design using age-length key; and implications of aging variability in pollock. 1985.

Book chapters on the topic "Optimum Sampling Design"

1. Särndal, Carl-Erik, Bengt Swensson, and Jan Wretman. "Searching for Optimal Sampling Designs." In Model Assisted Survey Sampling, 447–84. New York, NY: Springer New York, 1992. http://dx.doi.org/10.1007/978-1-4612-4378-6_12.

2. Chen, Zehua, Zhidong Bai, and Bimal K. Sinha. "Unbalanced Ranked Set Sampling and Optimal Designs." In Lecture Notes in Statistics, 73–101. New York, NY: Springer New York, 2004. http://dx.doi.org/10.1007/978-0-387-21664-5_4.

3. Çela, Arben, Mongi Ben Gaid, Xu-Guang Li, and Silviu-Iulian Niculescu. "Stability of DCESs Under the Hyper-Sampling Mode." In Optimal Design of Distributed Control and Embedded Systems, 185–205. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-02729-6_10.

4. Çela, Arben, Mongi Ben Gaid, Xu-Guang Li, and Silviu-Iulian Niculescu. "Optimization of the Hyper-Sampling Sequence for DCESs." In Optimal Design of Distributed Control and Embedded Systems, 207–21. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-02729-6_11.

5. Çela, Arben, Mongi Ben Gaid, Xu-Guang Li, and Silviu-Iulian Niculescu. "Optimal Relation Between Quantization Precision and Sampling Rates." In Optimal Design of Distributed Control and Embedded Systems, 109–33. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-02729-6_7.

6. Qian, Jiahe, and Alina A. von Davier. "Optimal Sampling Design for IRT Linking with Bimodal Data." In Quantitative Psychology Research, 165–79. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-07503-7_10.

7. Marchant, B. P., and R. M. Lark. "Sampling in Precision Agriculture, Optimal Designs from Uncertain Models." In Geostatistical Applications for Precision Agriculture, 65–87. Dordrecht: Springer Netherlands, 2010. http://dx.doi.org/10.1007/978-90-481-9133-8_3.

8. Nikolov, Aleksandar, Mohit Singh, and Uthaipon Tao Tantipongpipat. "Proportional Volume Sampling and Approximation Algorithms for A-Optimal Design." In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, 1369–86. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2019. http://dx.doi.org/10.1137/1.9781611975482.84.

9. Kumar, M. "Optimal Design of Reliability Acceptance Sampling Plans for Multi-stage Production Process." In Springer Series in Reliability Engineering, 123–42. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-93623-5_7.

10. Yan, Hainan, Yiting Zhang, Sheng Liu, Ka Ming Cheung, and Guohua Ji. "Optimization of Daylight and Thermal Performance of Building Façade: A Case Study of Office Buildings in Nanjing." In Proceedings of the 2021 DigitalFUTURES, 168–78. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-5983-6_16.

Abstract:
In China's hot-summer and cold-winter areas, the façade design of buildings needs to respond to a variety of performance objectives. This study focuses on the optimization of daylight and solar radiation for the building façades of office buildings in Nanjing and proposes a simple and efficient method. The method mainly includes random sampling of design models, simplified operation of daylight performance criteria and selection of the optimal solution. The results show that the building façade can improve indoor lighting uniformity and reduce the indoor illumination level compared with an unshaded reference building. Besides, the amount of solar radiation received by office buildings in summer and winter becomes more balanced with the building façade. The façade optimization method proposed in this study can guide the design of office buildings in Nanjing.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Optimum Sampling Design"

1

Huang, Yayun, Weidong Wang, and Dongmei Wu. "Development and optimum design of a mobile manipulator for CBRN sampling." In 2014 11th World Congress on Intelligent Control and Automation (WCICA). IEEE, 2014. http://dx.doi.org/10.1109/wcica.2014.7052768.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

van Beek, Anton, Umar Farooq Ghumman, Joydeep Munshi, Siyu Tao, TeYu Chien, Ganesh Balasubramanian, Matthew Plumlee, Daniel Apley, and Wei Chen. "Scalable Objective-Driven Batch Sampling in Simulation-Based Design for Models With Heteroscedastic Noise." In ASME 2020 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/detc2020-22629.

Full text
Abstract:
Objective-driven adaptive sampling is a widely used tool for the optimization of deterministic black-box functions. However, the optimization of stochastic simulation models, as found in the engineering, biological, and social sciences, is still an elusive task. In this work, we propose a scalable adaptive batch sampling scheme for the optimization of stochastic simulation models with input-dependent noise. The developed algorithm has two primary advantages: (i) by recommending sampling batches, the designer can benefit from parallel computing capabilities, and (ii) by replicating previously observed sampling locations, the method can be scaled to higher-dimensional and noisier functions. Replication improves numerical tractability, as the computational cost of Bayesian optimization methods is known to grow cubically with the number of unique sampling locations. Deciding when to replicate and when to explore depends on which alternative most improves the posterior prediction accuracy at and around the spatial locations expected to contain the global optimum. The algorithm explores a new sampling location to reduce the interpolation uncertainty and replicates to improve the accuracy of the mean prediction at a single sampling location. Through the application of the proposed sampling scheme to two numerical test functions and one real engineering problem, we show that we can reliably and efficiently find the global optimum of stochastic simulation models with input-dependent noise.
APA, Harvard, Vancouver, ISO, and other styles
3

Phoomboplab, T., and D. Ceglarek. "Process Yield Improvement Through Optimum Design of Fixture Layouts in 3D Multi-Station Assembly Systems." In ASME 2007 International Manufacturing Science and Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/msec2007-31192.

Full text
Abstract:
This paper presents a new approach to improve process yield by determining an optimum set of fixture layouts for a given multi-station assembly system that satisfies: (i) locating stability of parts and subassemblies in each fixture layout; and (ii) robustness of the fixture system against environmental noise, in order to minimize product dimensional variability. Three major challenges of multi-station assembly processes are addressed: (i) the high-dimensional design space; (ii) the large and complex design space of each locator; and (iii) the nonlinear relations between locator positions, also called Key Control Characteristics, and Key Product Characteristics. The proposed methodology conducts a two-step optimization based on the integration of a Genetic Algorithm (GA) and Hammersley Sequence Sampling. First, the Genetic Algorithm reduces the design space by determining the areas of optimal fixture locations within the initial design spaces. Then, Hammersley Sequence Sampling uniformly samples candidate sets of fixture layouts from the areas the GA has identified. Process yield and a part instability index serve as design objectives in evaluating candidate sets of fixture layouts. An industrial case study illustrates and validates the proposed methodology.
APA, Harvard, Vancouver, ISO, and other styles
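As a point of reference for the Hammersley Sequence Sampling step described in the abstract above, a minimal sketch of a Hammersley point generator (an illustration of the general technique, not the authors' implementation) might look like this:

```python
def radical_inverse(i, base):
    # van der Corput radical inverse: mirror the base-b digits of i about the point
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

def hammersley(n, d):
    # n points in the d-dimensional unit hypercube; the first coordinate is i/n,
    # the remaining coordinates are radical inverses in successive prime bases
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    return [[i / n] + [radical_inverse(i, primes[j]) for j in range(d - 1)]
            for i in range(n)]
```

Candidate fixture layouts would then be obtained by rescaling these unit-cube points onto the reduced design regions.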
4

Tsuda, Naozumi, and David B. Bogy. "HDD Slider Air Bearing Design Optimization Using a Surrogate Model." In World Tribology Congress III. ASMEDC, 2005. http://dx.doi.org/10.1115/wtc2005-64163.

Full text
Abstract:
This report addresses a new optimization method in which the DIRECT algorithm is used in conjunction with a surrogate model. The DIRECT algorithm by itself can find the global optimum with a high convergence rate, but the convergence rate can be much improved by coupling DIRECT with a surrogate model. The surrogate model known as the Kriging model is used in this research. It is determined using sampling points generated by the DIRECT algorithm, and it expresses a hyper-surface approximation of the cost function over the entire search space. Finding the optimum point on this hyper-surface is very fast because it is not necessary to solve the time-consuming air bearing equations. By using this optimum candidate as one of the DIRECT sampling points, we can eliminate many cost function evaluations. To illustrate the power of this approach, we first present some simple optimization examples using known difficult functions. Then we determine the optimum design of a slider with a 5 nm flying height (FH), starting from a design with a 7 nm FH.
APA, Harvard, Vancouver, ISO, and other styles
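The Kriging surrogate at the core of this approach can be sketched in a few lines. The following is a simplified illustration only: it uses a fixed correlation parameter `theta` and a zero-mean simple Kriging form, whereas a real surrogate would fit `theta` and a trend by maximum likelihood:

```python
import numpy as np

def kriging_predict(X, y, x_new, theta=10.0):
    # Gaussian correlation between 1-D training points (simple Kriging, zero trend);
    # theta is an assumed fixed correlation parameter rather than an MLE estimate
    R = np.exp(-theta * (X[:, None] - X[None, :]) ** 2)
    r = np.exp(-theta * (X - x_new) ** 2)
    return float(r @ np.linalg.solve(R, y))
```

A key property exploited by such schemes is that the Kriging predictor interpolates: evaluated sampling points are reproduced exactly, so searching the surrogate costs nothing at locations already solved with the air bearing equations.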
5

Saremi, Alireza, Nasr Al-Hinai, G. Gary Wang, and Tarek ElMekkawy. "Multi Agent Normal Sampling Technique (MANST) for Global Optimization." In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/detc2007-35506.

Full text
Abstract:
The current work discusses a novel global optimization method called the Multi-Agent Normal Sampling Technique (MANST). MANST is based on systematic sampling of points around agents; each agent in MANST represents a candidate solution of the problem. All agents compete with each other for a larger share of available resources. The performance of all agents is periodically evaluated and a specific number of agents who show no promising achievements are deleted; new agents are generated in the proximity of those promising agents. This process continues until the agents converge to the global optimum. MANST is a standalone global optimization technique. It is benchmarked with six well-known test cases and the results are then compared with those obtained from Matlab™ 7.1 GA Toolbox. The test results showed that MANST outperformed Matlab™ 7.1 GA Toolbox for the benchmark problems in terms of accuracy, number of function evaluations, and CPU time.
APA, Harvard, Vancouver, ISO, and other styles
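The core loop of MANST as described above can be paraphrased in a short sketch. This is a hypothetical 1-D rendering of the idea — normal sampling around each agent, keeping each agent's best point, and ranking agents for resource allocation; the deletion and regeneration of unpromising agents is omitted:

```python
import random

def manst_step(agents, f, sigma, samples_per_agent):
    # one iteration: each agent draws normal samples around its current centre,
    # keeps the best point found (including the centre itself), and the agents
    # are then ranked by objective value so resources can favour the leaders
    updated = []
    for centre in agents:
        cand = [random.gauss(centre, sigma) for _ in range(samples_per_agent)]
        updated.append(min(cand + [centre], key=f))
    return sorted(updated, key=f)
```

Because each agent retains its centre as a candidate, an agent's objective value never worsens from one iteration to the next.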
6

Wang, Liqun, Songqing Shan, and G. Gary Wang. "A New Global Optimization Method for Simultaneous Computation on Expensive Black-Box Functions." In ASME 2003 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/detc2003/dac-48763.

Full text
Abstract:
The presence of black-box functions in engineering design, which are usually computation-intensive, demands efficient global optimization methods. This work proposes a new global optimization method for black-box functions. The global optimization method is based on a novel mode-pursuing sampling (MPS) method which systematically generates more sample points in the neighborhood of the function mode while statistically covers the entire search space. Quadratic regression is performed to detect the region containing the global optimum. The sampling and detection process iterates until the global optimum is obtained. Through intensive testing, this method is found to be effective, efficient, robust, and applicable to both continuous and discontinuous functions. It supports simultaneous computation and applies to both unconstrained and constrained optimization problems. Because it does not call any existing global optimization tool, it can be used as a standalone global optimization method for inexpensive problems as well. Limitation of the method is also identified and discussed.
APA, Harvard, Vancouver, ISO, and other styles
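The mode-pursuing idea — sample the whole space, but more densely where the approximation predicts low function values — can be illustrated with a toy acceptance-sampling sketch. A nearest-neighbour surrogate stands in for the paper's linear spline, and the acceptance rule is a simplification of the actual MPS sampling density:

```python
import random

def mode_pursuing_sample(points, fvals, n_new, bounds):
    # draw candidates uniformly, then accept each with probability proportional
    # to (fmax - fhat), so regions with low predicted values are sampled more often
    fmax, fmin = max(fvals), min(fvals)
    def fhat(x):  # nearest-neighbour surrogate (stand-in for the linear spline)
        return min(zip(points, fvals), key=lambda pf: abs(pf[0] - x))[1]
    new = []
    while len(new) < n_new:
        x = random.uniform(*bounds)
        if fmax == fmin or random.random() < (fmax - fhat(x)) / (fmax - fmin):
            new.append(x)
    return new
```

In the full method these new samples are evaluated, the surrogate is refit, and a quadratic regression over the accumulating points detects the region containing the global optimum.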
7

Hamza, Karim, and Mohammed Shalaby. "A Framework for Parallel Sampling of Design Space With Application to Vehicle Crashworthiness Optimization." In ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/detc2012-71112.

Full text
Abstract:
This paper presents a framework for simulation-based design optimization of computationally-expensive problems, where economizing the generation of sample designs is highly desirable. Various meta-modeling schemes are used in practice in order to approximate the input-output relationships in the designed system and suggest candidate locations in the design space where high quality designs are likely to be found. One such popular approach is known as Efficient Global Optimization (EGO), where an initial set of design samples is used to construct a Kriging model, which approximates the system output and provides a prediction of the uncertainty in the approximations. Variations of EGO suggest new sample designs according to various infill criteria that seek to maximize the chance of finding high quality designs. The new samples are then used to update the Kriging model and the process is iterated. This paper attempts to address one of the limitations of EGO, which is the generation of the infill samples often becoming a difficult optimization problem in its own right for a larger number of design variables. This is done by adapting a previously developed approach for locating the optimum of a Kriging model to a modified EGO infill sampling criterion. The new implementation also allows the generation of multiple new samples at a time in order to take advantage of parallel computing. After testing on analytical functions, the algorithm is applied to vehicle crashworthiness design of a full vehicle model of a Geo Metro subject to frontal crash conditions.
APA, Harvard, Vancouver, ISO, and other styles
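The infill criterion underlying EGO is commonly the expected improvement of the Kriging prediction. The sketch below gives the textbook closed form for minimization — note this is the standard criterion, not the modified infill criterion the paper itself develops:

```python
import math

def expected_improvement(mu, sigma, f_best):
    # EI = (f_best - mu) * Phi(z) + sigma * phi(z), with z = (f_best - mu) / sigma,
    # for a Gaussian predictive distribution N(mu, sigma^2) at a candidate point
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_best - mu) * Phi + sigma * phi
```

The criterion trades off exploitation (low predicted mean `mu`) against exploration (high predictive uncertainty `sigma`), which is why maximizing it over many design variables becomes a hard optimization problem in its own right.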
8

Saremi, Alireza, Amir H. Birjandi, G. Gary Wang, Tarek ElMekkawy, and Eric Bibeau. "Enhanced Multi-Agent Normal Sampling Technique for Global Optimization." In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/detc2008-49991.

Full text
Abstract:
This paper describes an enhanced version of a new global optimization method, the Multi-Agent Normal Sampling Technique (MANST), described in reference [1]. Each agent in MANST includes a number of points that sample around the mean point with a certain standard deviation. At each step the point with the minimum value in the agent is chosen as the center point for the next step of normal sampling. The chosen points of all agents are then compared with each other, and agents receive a certain share of the resources for the next step according to their lowest mean function value at the current step. The performance of all agents is periodically evaluated, and a specific number of agents who show no promising achievements are deleted; new agents are generated in the proximity of the promising agents. This process continues until the agents converge to the global optimum. MANST is a standalone global optimization technique and does not require equations or knowledge about the objective function. The unique feature of this method in comparison with other global optimization methods is its dynamic normal-distribution search. This work presents our recent research in enhancing MANST to handle variable boundaries and constraints. Moreover, a lean group sampling approach is implemented to prevent different agents from sampling in the same region. As a result, the overall capability and efficiency of MANST have been improved in the newer version. The enhanced MANST is highly competitive with other stochastic methods such as the Genetic Algorithm (GA). In most of the test cases, the performance of MANST is significantly higher than that of the Matlab™ GA Toolbox.
APA, Harvard, Vancouver, ISO, and other styles
9

Sturlesi, Doron, and D. C. O'Shea. "Exploring the design space: a systematic approach to lens design." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/oam.1989.mz4.

Full text
Abstract:
This paper deals with the search for optimum solutions in optical design. The design space of optical systems is typically a complicated multidimensional parameter space that often has a large number of local minima. The limited success of the conventional design method in finding the global minimum is due to the fact that the search in most optimization routines depends crucially on the initial configuration. The proposed search algorithm provides a set of promising initial configurations that can serve as input to a second stage based on conventional optimization. First, the complete space is searched on a coarse interval for a collection of promising regions. This sampling procedure is accelerated using parallel processing mechanisms. Specific heuristic optics rules are then applied to screen out unfavorable configurations. Finally, after further optimization, a set of different configurations is constructed. The sampling algorithm was tested on several cases and found to be an attractive new approach to optical design. It gives complete information on the performance limits, sensitivities, and trade-offs of a given design space. Furthermore, there is no need for expert guesses at an initial configuration, because of the systematic nature of the algorithm.
APA, Harvard, Vancouver, ISO, and other styles
10

Lee, Ikjin, Kyung K. Choi, and David Gorsich. "Equivalent Standard Deviation to Convert High-Reliability Model to Low-Reliability Model for Efficiency of Sampling-Based RBDO." In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-47537.

Full text
Abstract:
This study presents a methodology to convert an RBDO problem requiring very high reliability into an RBDO problem requiring relatively low reliability by increasing input standard deviations, for efficient computation in sampling-based RBDO. First, for linear performance functions with independent normal random inputs, an exact probability of failure is derived in terms of the ratio of the input standard deviation, denoted by δ. The probability of failure estimation is then generalized to arbitrary random inputs and performance functions. For this generalization, two coefficients need to be determined by equating the probability of failure and its sensitivity with respect to the standard deviation at the current design point. The sensitivity of the probability of failure with respect to the standard deviation is obtained using the first-order score function for the standard deviation. To apply the proposed method to an RBDO problem, the concept of an equivalent standard deviation, an increased standard deviation corresponding to the low-reliability model, is also introduced. Numerical results indicate that the proposed method can accurately estimate the probability of failure as a function of the input standard deviation when compared with Monte Carlo simulation results. As anticipated, sampling-based RBDO using the surrogate models and the equivalent standard deviation finds the optimum design very efficiently while yielding a relatively accurate optimum design close to the one obtained using the original standard deviation.
APA, Harvard, Vancouver, ISO, and other styles
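For the linear-performance, independent-normal case the abstract mentions, the dependence of the probability of failure on the standard deviation ratio δ has a simple closed form. A sketch is given below, with β denoting the reliability index at the original standard deviation (our notation for illustration, not necessarily the paper's):

```python
import math

def prob_failure(beta, delta):
    # scaling every input standard deviation by delta divides the reliability
    # index by delta, so Pf(delta) = Phi(-beta / delta) for a linear limit state
    # with independent normal inputs; Phi is the standard normal CDF
    return 0.5 * (1.0 + math.erf(-beta / (delta * math.sqrt(2.0))))
```

For example, a design with β = 3 (Pf ≈ 1.35e-3) maps to Pf ≈ 2.3e-2 under δ = 1.5, so far fewer surrogate-based samples are needed to resolve the failure region of the inflated model.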

Reports on the topic "Optimum Sampling Design"

1

George and Grant. PR-015-14609-R01 Study of Sample Probe Minimum Insertion Depth Requirements. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), May 2015. http://dx.doi.org/10.55274/r0010844.

Full text
Abstract:
Probes for natural gas sample collection and analysis must extend far enough into the pipeline to avoid contaminants at the pipe wall, but must not be so long that there is a risk of flow-induced resonant vibration and failure. PRCI has sponsored a project to determine the minimum probe depth for obtaining a representative single-phase gas sample in flows with small amounts of contaminants. To this end, Phase 1 of the project involved a review of existing literature and industry standards to identify key probe design parameters. Several current standards for sampling clean, dry natural gas were reviewed, and their requirements for sample probe dimensions and mounting arrangements were compared. Some of these standard requirements suggested probe designs and sampling approaches that could be used to collect gas-only samples from two-phase flows. A literature review identified many useful studies of two-phase flows and phase behavior. While few of these studies evaluated probe designs, the majority examined the behavior of gas and liquid in two-phase flows, methods of predicting flow regimes, and methods of predicting flow conditions that define the minimum probe depth for gas-only samples in gas-liquid flows. Useful recommendations were provided for selecting general probe features where liquids must be rejected from the gas sample. A basic design procedure was also provided to select the minimum sample probe insertion length and optimum installation position for known flow conditions. Plans to test the recommendations and the design procedure in Phase 2 of the project were also discussed. This report has a related webinar.
APA, Harvard, Vancouver, ISO, and other styles
2

Zanoni, Wladimir, and Ailin He. Citizenship and the Economic Assimilation of Canadian Immigrants. Inter-American Development Bank, March 2021. http://dx.doi.org/10.18235/0003117.

Full text
Abstract:
In this paper, we examine whether acquiring citizenship improves the economic assimilation of Canadian migrants. We took advantage of a natural experiment made possible through changes in the Canadian Citizenship Act of 2014, which extended the physical presence requirement for citizenship from three to four years. Using quasi-experimental methods, we found that delaying citizenship eligibility by one year adversely affected Canadian residents' wages. Access to better jobs explains a citizenship premium of 11 percent in higher wages among naturalized migrants. Our estimates are robust to model specifications, differing sampling windows to form the treatment and comparison groups, and whether the estimator is a non-parametric rather than a parametric one. We discuss how our findings are relevant to the optimal design of naturalization policies regarding efficiency and equity.
APA, Harvard, Vancouver, ISO, and other styles
3

Russo, David, Daniel M. Tartakovsky, and Shlomo P. Neuman. Development of Predictive Tools for Contaminant Transport through Variably-Saturated Heterogeneous Composite Porous Formations. United States Department of Agriculture, December 2012. http://dx.doi.org/10.32747/2012.7592658.bard.

Full text
Abstract:
The vadose (unsaturated) zone forms a major hydrologic link between the ground surface and underlying aquifers. To understand properly its role in protecting groundwater from near-surface sources of contamination, one must be able to analyze quantitatively water flow and contaminant transport in variably saturated subsurface environments that are highly heterogeneous, often consisting of multiple geologic units and/or high- and/or low-permeability inclusions. The specific objectives of this research were: (i) to develop efficient and accurate tools for probabilistic delineation of dominant geologic features comprising the vadose zone; (ii) to develop a complementary set of data analysis tools for discerning the fractal properties of hydraulic and transport parameters of a highly heterogeneous vadose zone; (iii) to develop and test the associated computational methods for probabilistic analysis of flow and transport in highly heterogeneous subsurface environments; and (iv) to apply the computational framework to design an "optimal" observation network for monitoring and forecasting the fate and migration of contaminant plumes originating from agricultural activities. During the course of the project, we modified the third objective to include an additional computational method, based on the notion that a heterogeneous formation can be considered a mixture of populations of differing spatial structures. Regarding uncertainty analysis, going beyond approaches based on the mean and variance of system states, we succeeded in developing probability density function (PDF) solutions that enable one to evaluate probabilities of rare events, as required for probabilistic risk assessment.
In addition, we developed reduced-complexity models for the probabilistic forecasting of infiltration rates in heterogeneous soils during surface runoff and/or flooding events. Regarding flow and transport in variably saturated, spatially heterogeneous formations associated with fine- and coarse-textured embedded soils (FTES- and CTES-formations, respectively), we succeeded in developing first-order and numerical frameworks for flow and transport in three-dimensional (3-D), variably saturated, bimodal, heterogeneous formations with single and dual porosity, respectively. Regarding the sampling problem, defined as how many sampling points are needed and where to locate them in the horizontal x₂x₃ plane of the field, we succeeded, based on our computational framework, in developing and demonstrating a methodology that may considerably improve our ability to describe quantitatively the response of complicated 3-D flow systems. The results of the project are of theoretical and practical importance; they provide a rigorous framework for modeling water flow and solute transport in a realistic, highly heterogeneous, composite flow system with uncertain properties under-specified by data. Specifically, they: (i) enhanced fundamental understanding of the basic mechanisms of field-scale flow and transport in near-surface geological formations under realistic flow scenarios, (ii) provided a means to assess the ability of existing flow and transport models to handle realistic flow conditions, and (iii) provided a means to assess quantitatively the threats posed to groundwater by contamination from agricultural sources.
APA, Harvard, Vancouver, ISO, and other styles