Academic literature on the topic 'Second phase sampling'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Second phase sampling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Second phase sampling"

1

Delmelle, Eric M., and Pierre Goovaerts. "Second-phase sampling designs for non-stationary spatial variables." Geoderma 153, no. 1-2 (October 2009): 205–16. http://dx.doi.org/10.1016/j.geoderma.2009.08.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Li, H. G., H. T. Schreuder, D. D. Van Hooser, and G. E. Brink. "Estimating Strata Means in Double Sampling with Corrections Based on Second-Phase Sampling." Biometrics 48, no. 1 (March 1992): 189. http://dx.doi.org/10.2307/2532749.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kamba, Adamu Isah, Amos Adedayo Adewara, and Audu Ahmed. "Modified product estimator under two-phase sampling." Global Journal of Pure and Applied Sciences 25, no. 2 (September 6, 2019): 201–8. http://dx.doi.org/10.4314/gjpas.v25i2.10.

Full text
Abstract:
In this paper, a modification of the product estimator under two-phase sampling was suggested. The modified product estimator was obtained through transformation in two cases using sample means of auxiliary variables. Case one was when the second sample was drawn from the first sample, while case two was when the second sample was drawn from the population. The bias and mean square error (MSE) of the modified product estimator were obtained. The theoretical and numerical validity of the modified product estimator under the two cases was determined to show its superiority to some existing product estimators. Numerical results show that the modified product estimator under the two cases was more efficient than the considered existing estimators. Keywords: Product estimator, Two-Phase Sampling, Bias, Mean Square Error
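For readers unfamiliar with the baseline that papers like this one modify, here is a minimal sketch of the classical product estimator under two-phase sampling, for the "case one" design in which the second-phase sample is drawn from the first-phase sample. The population model, sample sizes, and variable names are illustrative assumptions, not the authors' modified estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated population: study variable y negatively correlated with the
# auxiliary variable x (the product estimator is efficient in this case).
N = 10_000
x = rng.gamma(shape=4.0, scale=2.0, size=N)
y = 50.0 - 3.0 * x + rng.normal(0.0, 2.0, size=N)

# Phase 1: large sample, measuring only the cheap auxiliary variable x.
n1 = 1_000
s1 = rng.choice(N, size=n1, replace=False)
xbar1 = x[s1].mean()          # estimates the population mean of x

# Phase 2: subsample of the first-phase sample, measuring both x and y.
n2 = 100
s2 = rng.choice(s1, size=n2, replace=False)
xbar2 = x[s2].mean()
ybar2 = y[s2].mean()

# Two-phase product estimator of the population mean of y.
ybar_product = ybar2 * xbar2 / xbar1

print(f"true mean        : {y.mean():.3f}")
print(f"sample mean      : {ybar2:.3f}")
print(f"product estimate : {ybar_product:.3f}")
```

The product estimator gains over the plain second-phase sample mean when the study and auxiliary variables are negatively correlated; under positive correlation, the analogous ratio estimator is preferred instead.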
APA, Harvard, Vancouver, ISO, and other styles
4

Singh, Rajesh, and Prayas Sharma. "Efficient Estimators Using Auxiliary Variable under Second Order Approximation in Simple Random Sampling and Two-Phase Sampling." Advances in Statistics 2014 (September 3, 2014): 1–9. http://dx.doi.org/10.1155/2014/974604.

Full text
Abstract:
This paper suggests some estimators for population mean of the study variable in simple random sampling and two-phase sampling using information on an auxiliary variable under second order approximation. Bahl and Tuteja (1991) and Singh et al. (2008) proposed some efficient estimators and studied the properties of the estimators to the first order of approximation. In this paper, we have tried to find out the second order biases and mean square errors of these estimators using information on auxiliary variable based on simple random sampling and two-phase sampling. Finally, an empirical study is carried out to judge the merits of the estimators over others under first and second order of approximation.
APA, Harvard, Vancouver, ISO, and other styles
5

Ji, Chao, James D. Englehardt, and Cynthia Juyne Beegle-Krause. "Design of Real-Time Sampling Strategies for Submerged Oil Based on Probabilistic Model Predictions." Journal of Marine Science and Engineering 8, no. 12 (December 3, 2020): 984. http://dx.doi.org/10.3390/jmse8120984.

Full text
Abstract:
Locating and tracking submerged oil in the mid depths of the ocean is challenging during an oil spill response, due to the deep, widespread and long-lasting distributions of submerged oil. Due to the limited area that a ship or AUV can visit, efficient sampling methods are needed to reveal the real distributions of submerged oil. In this paper, several sampling plans are developed for collecting submerged oil samples using different sampling methods combined with forecasts by a submerged oil model, SOSim (Subsurface Oil Simulator). SOSim is a Bayesian probabilistic model that uses real time field oil concentration data as input to locate and forecast the movement of submerged oil. Sampling plans comprise two phases: the first phase for initial field data collection prior to SOSim assessments, and the second phase based on the SOSim assessments. Several environmental sampling techniques including the systematic random, modified station plans as well as zig-zag patterns are evaluated for the first phase. The data collected using the first phase sampling plan are then input to SOSim to produce submerged oil distributions in time. The second phase sampling methods (systematic random combined with the kriging-based sampling method and naive zig-zag sampling method) are applied to design the sampling plans within the submerged oil area predicted by SOSim. The sampled data obtained using the second phase sampling methods are input to SOSim to update the model’s assessments. The performance of the sampling methods is evaluated by comparing SOSim predictions using the sampled data from the proposed sampling methods with simulated submerged oil distributions during the Deepwater Horizon spill by the OSCAR (oil spill contingency and response) oil spill model. The proposed sampling methods, coupled with the use of the SOSim model, are shown to provide an efficient approach to guide oil spill response efforts.
APA, Harvard, Vancouver, ISO, and other styles
6

Sang, Hailin, Kenneth K. Lopiano, Denise A. Abreu, Andrea C. Lamas, Pam Arroway, and Linda J. Young. "Adjusting for Misclassification: A Three-Phase Sampling Approach." Journal of Official Statistics 33, no. 1 (March 1, 2017): 207–22. http://dx.doi.org/10.1515/jos-2017-0011.

Full text
Abstract:
The United States Department of Agriculture’s National Agricultural Statistics Service (NASS) conducts the June Agricultural Survey (JAS) annually. Substantial misclassification occurs during the prescreening process and from field-estimating farm status for nonresponse and inaccessible records, resulting in a biased estimate of the number of US farms from the JAS. Here, the Annual Land Utilization Survey (ALUS) is proposed as a follow-on survey to the JAS to adjust the estimates of the number of US farms and other important variables. A three-phase survey design-based estimator is developed for the JAS-ALUS with nonresponse adjustment for the second phase (ALUS). A design-unbiased estimator of the variance is provided in explicit form.
APA, Harvard, Vancouver, ISO, and other styles
7

Mandallaz, Daniel. "A three-phase sampling extension of the generalized regression estimator with partially exhaustive information." Canadian Journal of Forest Research 44, no. 4 (April 2014): 383–88. http://dx.doi.org/10.1139/cjfr-2013-0449.

Full text
Abstract:
We consider three-phase sampling schemes in which one component of the auxiliary information is known in the very large sample of the so-called null phase and the second component is available only in the large sample of the first phase, whereas the second phase provides the terrestrial inventory data. We extend to three-phase sampling the generalized regression estimator that applies when the null phase is exhaustive, for global and local estimation, and derive its asymptotic design-based variance. The new three-phase regression estimator is particularly useful for reducing substantially the computing time required to treat exhaustively very large data sets generated by modern remote sensing technology such as LiDAR.
APA, Harvard, Vancouver, ISO, and other styles
8

Fischer, Christoph, and Joachim Saborowski. "Variance estimation for mean growth from successive double sampling for stratification." Canadian Journal of Forest Research 50, no. 12 (December 2020): 1405–11. http://dx.doi.org/10.1139/cjfr-2020-0058.

Full text
Abstract:
Double sampling for stratification (2SS) is a sampling design that is widely used for forest inventories. We present the mathematical derivation of two appropriate variance estimators for mean growth from repeated 2SS with updated stratification on each measurement occasion. Both estimators account for substratification based on the transition of sampling units among the strata due to the updated allocation. For the first estimator, sizes of the substrata were estimated from the second-phase sample (sample plots), whereas the respective sizes in the second variance estimator relied on the larger first-phase sample. The estimators were empirically compared with a modified version of Cochran’s well-known 2SS variance estimator that ignores substratification. This was done by performing bootstrap resampling on data from two German forest districts. The major findings were as follows: (i) accounting for substratification, as implemented in both new estimators, has substantial impact in terms of significantly smaller variance estimates and bias compared with the estimator without substratification, and (ii) the second estimator with substrata sizes being estimated from the first-phase sample shows a smaller bias than the first estimator.
APA, Harvard, Vancouver, ISO, and other styles
9

Šmelková, Ľ. "Inventory of plant material in forest nurseries by combining an ocular estimate and sampling measurements." Journal of Forest Science 48, no. 4 (May 17, 2019): 156–65. http://dx.doi.org/10.17221/11869-jfs.

Full text
Abstract:
Two procedures for plant material inventories in forest nurseries, used until now, are evaluated: ocular estimate and sampling. A new two-phase sampling procedure has been proposed on the basis of a suitable combination of estimation and counting and/or measurement of seedlings and plants. The optimum size of the sampling unit (length of bed segment) has been defined, along with the necessary number of bed segments on which the ocular estimation should be performed in the first phase (n1) and on which a more exact assessment of the number of individuals and/or their other qualitative and quantitative traits should be done in the second phase (n2), to achieve the required precision of results of ±2 to 10% with a reliability of 95%. A theoretical justification of the proposal as well as a detailed procedure of its accomplishment is presented. The conditions have been specified under which the proposed method is economically twice as beneficial as the classic sampling method.
APA, Harvard, Vancouver, ISO, and other styles
10

Mandallaz, Daniel, Jochen Breschan, and Andreas Hill. "New regression estimators in forest inventories with two-phase sampling and partially exhaustive information: a design-based Monte Carlo approach with applications to small-area estimation." Canadian Journal of Forest Research 43, no. 11 (November 2013): 1023–31. http://dx.doi.org/10.1139/cjfr-2013-0181.

Full text
Abstract:
We consider two-phase sampling schemes where one component of the auxiliary information is known in every point (“wall-to-wall”) and a second component is available only in the large sample of the first phase, whereas the second phase yields a subsample with the terrestrial inventory. This setup is of growing interest in forest inventory thanks to the recent advances in remote sensing, in particular, the availability of LiDAR data. We propose a new two-phase regression estimator for global and local estimation and derive its asymptotic design-based variance. The new estimator performs better than the classical regression estimator. Furthermore, it can be generalized to cluster sampling and two-stage tree sampling within plots. Simulations and a case study with LiDAR data illustrate the theory.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Second phase sampling"

1

Delmelle, Eric. "Optimization of second-phase spatial sampling using auxiliary information." 2005. http://proquest.umi.com/pqdweb?did=982789611&sid=12&Fmt=2&clientId=39334&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (Ph.D.)--State University of New York at Buffalo, 2005.
Title from PDF title page (viewed on Mar. 14, 2006). Available through UMI ProQuest Digital Dissertations. Thesis adviser: Rogerson, Peter A. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Second phase sampling"

1

Chalabi, Azadeh. A Cross-Case Analysis of NHRAPs of Fifty-Three Countries. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198822844.003.0006.

Full text
Abstract:
Part III, ‘Empirical Perspectives’, contains only one chapter, Chapter 5, which presents the results of a cross-case analysis of national human rights action plans of fifty-three countries. Adopting a purposive sampling technique, these countries are selected on the basis of four main criteria, namely human rights record, geographical diversity, political regimes, and cultural diversity. This comprehensive cross-case study follows two objectives. The first objective of this chapter is to unearth significant problems in the ‘pre-phase’ and the four phases of planning, namely ‘preparatory phase’, ‘development phase’, ‘implementing phase’, and ‘assessment phase’. These problems are significantly detrimental to the effective implementation of human rights and their identification will substantially help generate response strategies. These are best addressed by attempting to mitigate their root causes as opposed to only correcting the immediately obvious symptoms. This brings us to the chapter’s second objective, which is to explore the underlying causes of these problems.
APA, Harvard, Vancouver, ISO, and other styles
2

Pezo-Lanfranco, Luis Nicanor. Bioarqueologia e Antropologia Forense: Métodos de escavação, recuperação e curadoria de ossos humanos. Brazil Publishing, 2021. http://dx.doi.org/10.31012/978-65-5861-376-3.

Full text
Abstract:
This book presents a synthesis of the methods and techniques necessary for the correct excavation, recovery and conservation of human remains, as well as notions of sampling and analysis of bones, useful for an adequate study of funeral contexts in conventional (bio)archaeological research or forensic anthropology. As this book was written primarily for archeology students and archeologists with little training in bone handling, the language is easy to follow. The book is divided into two sections that roughly correspond to the two phases into which the method of analysis of human bones can be divided. In the first section, we describe Phase I, or field work, which includes recovery methods, from the prospection and identification of burial sites, excavation and recording, and field-sampling techniques, to the packaging and transport of bones to the laboratory. In the second part of the book, Phase II, or laboratory work, we describe the treatment that should be given to bones from their arrival at the laboratory of analysis to their final storage. In this section, we show the methods of cleaning and preparation of bones for further analysis, some basic notions of restoration and conservation, and relevant information about sampling techniques and their scientific principles to obtain information from the examined individual. Along the text we emphasize the informative potential of each analysis from the bioarchaeological and anthropological-forensic viewpoint.
APA, Harvard, Vancouver, ISO, and other styles
3

Kwame Harrison, Anthony. Research Design. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199371785.003.0002.

Full text
Abstract:
Chapter 2 demystifies practices of ethnographic research by discussing the balance between structure and serendipity surrounding its design. The author pursues this in two ways: first, by discussing the dynamic mode of structured improvisation through which ethnographers perform their research and, second, by introducing a framework for ethnographic decision-making—based on the concept of social science sampling—which highlights many of the major considerations affecting the research choices ethnographers make. Through this discussion, the author illustrates the complementary strategic and improvisational imperatives that in-the-field ethnographers embody. The second part of the chapter is organized around several key phases of the research process including (a) the choice of a research topic; (b) decisions regarding research settings; (c) aspects of data collection—including expanding on the first chapter’s discussions of positionality, fieldnote writing, and interviewing; and (d) techniques and sensibilities through which researchers analyze their data.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Second phase sampling"

1

Delmelle, Eric M. "Model-Based Criteria Heuristics for Second-Phase Spatial Sampling." In Spatio-Temporal Design, 54–71. Chichester, UK: John Wiley & Sons, Ltd, 2012. http://dx.doi.org/10.1002/9781118441862.ch3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Dickson, Maria Michela, and Diego Giuliani. "Procedures for the Estimation of Forest Inventory Quantities." In Springer Tracts in Civil Engineering, 103–18. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98678-0_5.

Full text
Abstract:
This chapter aims at illustrating the statistical procedures adopted to estimate the unknown values of the parameters of interest of the forest inventory. In particular, it firstly describes how the data collected during the second phase of the sampling plan have been used to estimate the areal extents of the different land use and cover categories. Secondly, it illustrates the procedures to properly estimate the total and density values of the quantities measured during the third phase of the survey campaign. These procedures were developed for INFC2005 and, as explained in this chapter, are still valid for INFC2015.
APA, Harvard, Vancouver, ISO, and other styles
3

Hankin, David G., Michael S. Mohr, and Ken B. Newman. "Multi-phase sampling." In Sampling Theory, 200–218. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198815792.003.0010.

Full text
Abstract:
Attention is restricted to two-phase or double sampling. A large first-phase sample is used to generate a very good estimate of the mean or total of an auxiliary variable, x, which is relatively cheap to measure. Then, a second-phase sample is selected, usually from the first-phase sample, and both auxiliary and target variables are measured in selected second-phase population units. Two-phase ratio or regression estimators can be used effectively in this context. Errors of estimation reflect first-phase uncertainty in the mean or total of the auxiliary variable, and second-phase errors reflect the nature of the relation and correlation between auxiliary and target variables. Accuracy of the two-phase estimator of a proportion depends on sensitivity and specificity. Sensitivity is the probability that a unit possessing a trait (y = 1) will be correctly classified as such whenever the auxiliary variable, x, has value 1, whereas specificity is the probability that a unit not possessing a trait (y = 0) will be correctly classified as such whenever the auxiliary variable, x, has value 0. Optimal allocation results for estimation of means, totals, and proportions allow the most cost-effective allocation of total sampling effort to the first- and second-phases. In double sampling with stratification, a large first-phase sample estimates stratum weights, a second-phase sample estimates stratum means, and a stratified estimator gives an estimate of the overall population mean or total.
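As a concrete illustration of the two-phase regression estimator described in this chapter, here is a generic sketch on simulated data; the population model, sample sizes, and variable names are assumptions for illustration, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated population: the target variable y is well predicted by a
# cheap auxiliary variable x.
N = 20_000
x = rng.normal(10.0, 3.0, size=N)
y = 2.0 + 1.5 * x + rng.normal(0.0, 1.0, size=N)

# Phase 1: large, cheap sample measuring x only; gives a good estimate
# of the mean of the auxiliary variable.
s1 = rng.choice(N, size=2_000, replace=False)
xbar1 = x[s1].mean()

# Phase 2: small subsample of the first phase, measuring both x and y.
s2 = rng.choice(s1, size=150, replace=False)
xbar2, ybar2 = x[s2].mean(), y[s2].mean()

# Slope fitted on the second-phase sample.
b = np.cov(x[s2], y[s2])[0, 1] / x[s2].var(ddof=1)

# Two-phase regression estimator: correct the second-phase mean of y
# using the better first-phase estimate of the mean of x.
ybar_reg = ybar2 + b * (xbar1 - xbar2)
print(f"regression estimate: {ybar_reg:.3f}  (true mean {y.mean():.3f})")
```

The error of this estimator reflects exactly the two sources named in the abstract: first-phase uncertainty in the auxiliary mean, and second-phase uncertainty governed by how strongly y and x are correlated.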
APA, Harvard, Vancouver, ISO, and other styles
4

Roszkowska, A., K. Łuczykowski, N. Warmuzińska, and B. Bojko. "SPME and Related Techniques in Biomedical Research." In Evolution of Solid Phase Microextraction Technology, 357–418. The Royal Society of Chemistry, 2023. http://dx.doi.org/10.1039/bk9781839167300-00357.

Full text
Abstract:
This chapter describes a wide range of applications of the SPME technique in biomedical research, from investigations focused on monitoring the levels of drugs used in the treatment of different diseases, through targeted analysis of endogenous compounds (metabolites), to untargeted metabolomics studies. The reader will find information about diverse SPME sampling strategies adopted to address demanding tasks, e.g., single-cell analysis or on-site sampling in the operating room, a discussion of the unique features of SPME, and the areas of science where the technology can be successfully deployed. In the first part of this chapter, various SPME protocols in the analysis of drugs used in cardiovascular and central nervous system diseases, immunosuppressants, anticancer drugs, and medications used in pain therapy are summarized. In addition, aspects related to the application of SPME sampling in drug binding studies are described. In the second part of this chapter, an overview of the SPME technique in the determination of non-volatile and volatile compounds within targeted and untargeted metabolomic approaches, along with their applications in microbial, cellular, tissue, and biofluid analysis within different areas of medical science, is presented. The authors then discuss issues related to the stability of target compounds, based on several investigations utilizing SPME technology in comparison to the traditional techniques described in the literature. Finally, present and future perspectives on SPME technology in the area of bioanalysis and medical diagnostics are provided.
APA, Harvard, Vancouver, ISO, and other styles
5

Dhamodharavadhani S. and Rathipriya R. "Enhanced Logistic Regression (ELR) Model for Big Data." In Handbook of Research on Big Data Clustering and Machine Learning, 152–76. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0106-1.ch008.

Full text
Abstract:
The regression model is an important tool for modeling and analyzing data. In this chapter, the proposed model comprises three phases. The first phase concentrates on sampling techniques to obtain the best sample for building the regression model. The second phase predicts the residuals of the logistic regression (LR) model using a time series analysis method, autoregression. The third phase develops the enhanced logistic regression (ELR) model by combining the LR model and the residual prediction (RP) model. An empirical study is carried out to evaluate the performance of the ELR model using a large diabetes dataset. The results show that the ELR model has a higher level of accuracy than the traditional logistic regression model.
APA, Harvard, Vancouver, ISO, and other styles
6

Gedikli, Erman, and Yeter Demir Uslu. "Strategic Pathway Determination for a State Hospital in Terms of an Integrated Facility Management System." In Advances in Healthcare Information Systems and Administration, 404–38. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-8103-5.ch024.

Full text
Abstract:
The purpose of this study is to develop strategies for effective, efficient, and patient-safe facility management in a public hospital. An exploratory sequential mixed-methods research design was employed: qualitative methods were used in the first stage of the investigation, followed by quantitative approaches in the second stage. The universe of the research consists of all the staff. The criterion sampling technique was applied during the qualitative phase, in which 39 managers were interviewed individually as part of the data collection process. To assess the priority of the need groups, the AHP questionnaire was administered to seven managers in ultimate decision-making positions during the second phase of data collection. The findings indicate that decision-makers should prioritize taking action to address the needs identified under the topics of emergency preparedness and business continuity, human factors, and communication, respectively; environmental management and sustainability was determined to be the lowest-priority group.
APA, Harvard, Vancouver, ISO, and other styles
7

Dunn, Graham. "Statistics and the design of experiments and surveys." In New Oxford Textbook of Psychiatry, 137–43. Oxford University Press, 2012. http://dx.doi.org/10.1093/med/9780199696758.003.0017.

Full text
Abstract:
Research into mental illness uses a much wider variety of statistical methods than those familiar to a typical medical statistician. In many ways there is more similarity to the statistical toolbox of the sociologist or educationalist. It would be a pointless exercise to try to describe this variety here but, instead, we shall cover a few areas that are especially characteristic of psychiatry. The first and perhaps the most obvious is the problem of measurement. Measurement reliability and its estimation are discussed in the next section. Misclassification errors are a concern of the third section, a major part of which is concerned with the estimation of prevalence through the use of fallible screening questionnaires. This is followed by a discussion of both measurement error and misclassification error in the context of modelling patterns of risk. Another major concern is the presence of missing data. Although this is common to all areas of medical research, it is of particular interest to the psychiatric epidemiologist because there is a long tradition (since the early 1970s) of introducing missing data by design. Here we are thinking of two-phase or double sampling (often confusingly called two-stage sampling by psychiatrists and other clinical research workers). In this design a first-phase sample are all given a screen questionnaire. They are then stratified on the basis of the results of the screen (usually, but not necessarily, using two strata—likely cases and likely non-cases) and subsampled for a second-phase diagnostic interview. This is the major topic of the third section. If we are interested in modelling patterns of risk, however, we are not usually merely interested in describing patterns of association. Typically we want to know if genetic or environmental exposures have a causal effect on the development of illness. 
Similarly, a clinician is concerned with answers to the question ‘What is the causal effect of treatment on outcome?’ How do we define a causal effect? How do we measure or estimate it? How do we design studies in order that we can get a valid estimate of a causal effect of treatment? Here we are concerned with the design and analysis of randomized controlled trials (RCTs). This is the focus of the fourth section of the present chapter. Finally, at the end of this chapter pointers are given to where the interested reader might find other relevant and useful material on psychiatric statistics.
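The two-phase screening design described in this chapter can be illustrated with a small simulation. The sketch below is generic (the prevalence, sensitivity, specificity, and sample sizes are illustrative assumptions): everyone in the first phase receives a fallible screen, strata are formed from the screen result, and a second-phase "diagnostic interview" subsample in each stratum yields a weighted (double-sampling) prevalence estimate:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated cohort: true caseness plus an imperfect screening questionnaire.
n1 = 5_000                                # phase 1: everyone is screened
true_case = rng.random(n1) < 0.10         # 10% true prevalence
sens, spec = 0.85, 0.90
screen_pos = np.where(true_case,
                      rng.random(n1) < sens,        # true positives
                      rng.random(n1) < 1 - spec)    # false positives

# Stratify on the screen result; phase 2: diagnostic interview
# (here, simply reading off true caseness) in a subsample of each stratum.
pos_idx = np.flatnonzero(screen_pos)
neg_idx = np.flatnonzero(~screen_pos)
sub_pos = rng.choice(pos_idx, size=200, replace=False)
sub_neg = rng.choice(neg_idx, size=200, replace=False)

# Double-sampling (stratified) prevalence estimate: weight each stratum's
# interview-based rate by that stratum's share of the phase-1 sample.
w_pos = len(pos_idx) / n1
w_neg = len(neg_idx) / n1
prev_hat = w_pos * true_case[sub_pos].mean() + w_neg * true_case[sub_neg].mean()
print(f"estimated prevalence: {prev_hat:.3f}")
```

Interviewing only 400 people rather than all 5,000 recovers the prevalence because the screen, however fallible, carries most of the stratification information; the naive rate among screen-positives alone would be badly biased.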
APA, Harvard, Vancouver, ISO, and other styles
8

Milic, Ljiljana. "Lth-Band Digital Filters." In Multirate Filtering for Digital Signal Processing, 206–41. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-178-0.ch007.

Full text
Abstract:
Digital Lth-band FIR and IIR filters are special classes of digital filters, which are of particular interest in both single-rate and multirate signal processing. The common characteristic of Lth-band lowpass filters is that the 6 dB (or 3 dB) cutoff angular frequency is located at π/L, and the transition band is approximately symmetric around this frequency. In the time domain, the impulse response of an Lth-band digital filter has zero-valued samples at multiples of L samples counted away from the central sample in both the right and left directions. Actually, an Lth-band filter has zero crossings at the regular distance of L samples, thus satisfying the so-called zero-intersymbol-interference property. Sometimes the Lth-band filters are called Nyquist filters. The important benefit of applying Lth-band FIR and IIR filters is efficient implementation, particularly in the case L = 2, when every second coefficient in the transfer function is zero valued. Due to the zero-intersymbol-interference property, the Lth-band filters are very important for digital communication transmission systems. Another application is the construction of Hilbert transformers, which are used to generate analytical signals. The Lth-band filters are also used as prototypes in constructing critically sampled multichannel filter banks. They are very popular in sampling rate alteration systems as well, where they are used as decimation and interpolation filters in single-stage and multistage systems. This chapter starts with linear-phase Lth-band FIR filters. We introduce the main definitions and present by means of examples the efficient polyphase implementation of Lth-band FIR filters. We discuss the properties of the separable (factorizable) linear-phase FIR filter transfer function, and construct the minimum-phase and maximum-phase FIR transfer functions. In the sequel, we present the design and efficient implementation of halfband FIR filters (L = 2). 
The class of IIR Lth-band and halfband filters is presented next. Particular attention is devoted to the design and implementation of IIR halfband filters. The chapter concludes with several MATLAB exercises for self-study.
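The zero-crossing (Nyquist) property of Lth-band filters described in this abstract is easy to verify numerically. The following sketch designs a crude windowed-sinc halfband filter (L = 2); it is a generic illustration, not one of the chapter's MATLAB designs, and shows that every second coefficient away from the center tap is zero:

```python
import numpy as np

L = 2                       # halfband case
M = 10                      # taps per side; filter length is 2*M + 1
n = np.arange(-M, M + 1)

# Ideal lowpass with cutoff pi/L, shaped by a Hamming window.
h = (1.0 / L) * np.sinc(n / L) * np.hamming(len(n))

# Lth-band (Nyquist) property: h[center +/- k*L] == 0 for k = 1, 2, ...
# because sinc(k) vanishes at nonzero integers.
center = M
for k in range(1, M // L + 1):
    assert abs(h[center + k * L]) < 1e-12
    assert abs(h[center - k * L]) < 1e-12

print("center tap:", h[center])
```

In a polyphase decimator or interpolator, those guaranteed zeros mean roughly half of the multiplications can simply be skipped, which is the efficiency benefit the abstract refers to for L = 2.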
APA, Harvard, Vancouver, ISO, and other styles
9

Miksza, Peter, Julia T. Shaw, Lauren Kapalka Richerme, Phillip M. Hash, Donald A. Hodges, and Elizabeth Cassidy Parker. "Common Elements of Qualitative Research Reports." In Music Education Research, 129—C8P172. Oxford University PressNew York, 2023. http://dx.doi.org/10.1093/oso/9780197639757.003.0008.

Full text
Abstract:
This chapter provides an overview of key methodological decisions to be made when undertaking a qualitative study, as well as ways that researchers commonly describe the methodological approaches they have taken in their published research reports. Participant selection in qualitative research is accomplished through purposeful sampling strategies, with individuals selected for their perspective on or experience with the central phenomenon. Among the most commonly used data generation techniques are interviewing, observing, and collecting documents and other artifacts. Qualitative data analysis features both inductive and deductive reasoning, with the balance between these varying according to the design used. The majority of approaches entail coding data; organizing codes into broader categories, themes, or second-order constructs; visually representing themes that emerge from analysis; and developing thematic representations. The process of interpretation, involving the extraction of larger meanings from codes and themes, is not a discrete step but underlies every phase of the research process. Researchers may position their role as a complete participant, participant as observer, nonparticipant observer, or complete observer. Rather than purporting to conduct their studies objectively, qualitative researchers recognize and manage their subjectivity through a variety of means. The chapter concludes with a discussion of possible rhetorical structures for presenting and discussing findings of a qualitative study.
APA, Harvard, Vancouver, ISO, and other styles
10

Palma-Ruiz, Jesús Manuel, Herik Germán Valles-Baca, Carmen Romelia Flores-Morales, and Luis Raúl Sánchez-Acosta. "Experiences, Perceptions, and Expectations of the Business Community in Mexico Amidst the COVID-19 Crisis." In Handbook of Research on Emerging Business Models and the New World Economic Order, 39–59. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-7689-2.ch003.

Full text
Abstract:
The objective of this chapter is to provide a contextualized perspective about the effects of the COVID-19 health crisis for companies with economic activity and fixed installations in Mexico, mainly during the second to third phases of the contingency. For this purpose, data from the INEGI ECOVID-IE 2020 survey is analyzed, which used a sampling frame of 1,873,564 Mexican companies compared by size. Relevant information is provided about the reality of the Mexican business community to report the main sanitary measures implemented, the operational actions used, the sources and types of support received, the best support policies identified, and the income expectations for the following months. Faced with a negative scenario, targeted support strategies from governmental, chambers, and business organizations must be aligned to regain the confidence of the business community to support their continuity.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Second phase sampling"

1

Okumura, Keisuke, and Xavier Défago. "Quick Multi-Robot Motion Planning by Combining Sampling and Search." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/29.

Full text
Abstract:
We propose a novel algorithm to solve multi-robot motion planning (MRMP) rapidly, called Simultaneous Sampling-and-Search Planning (SSSP). Conventional MRMP studies mostly take the form of two-phase planning that constructs roadmaps and then finds inter-robot collision-free paths on those roadmaps. In contrast, SSSP performs roadmap construction and collision-free pathfinding simultaneously. This is realized by uniting techniques of single-robot sampling-based motion planning with search techniques of multi-agent pathfinding on discretized spaces. Doing so builds a small search space, leading to quick MRMP. SSSP is guaranteed to eventually find a solution if one exists. Our empirical evaluations in various scenarios demonstrate that SSSP significantly outperforms standard approaches to MRMP, i.e., it solves more problem instances much faster. We also applied SSSP to planning for 32 ground robots in a dense situation.
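The conventional two-phase baseline that SSSP contrasts (first sample a roadmap, then search it for a collision-free path) can be sketched for a single robot. This is an illustrative stand-in, not the authors' SSSP code; it uses a deterministic grid in place of random sampling so the outcome is reproducible, and a crude midpoint edge check.

```python
import math
from collections import deque

def build_roadmap(samples, radius, collision_free):
    # Phase 1 (sampling): keep collision-free samples and connect nearby
    # pairs whose midpoint is also collision-free (a crude edge check).
    nodes = [p for p in samples if collision_free(p)]
    edges = {p: [] for p in nodes}
    for i, p in enumerate(nodes):
        for q in nodes[i + 1:]:
            mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
            if math.dist(p, q) <= radius and collision_free(mid):
                edges[p].append(q)
                edges[q].append(p)
    return edges

def find_path(edges, start, goal):
    # Phase 2 (search): breadth-first search over the finished roadmap.
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        p = frontier.popleft()
        if p == goal:
            path = []
            while p is not None:
                path.append(p)
                p = parent[p]
            return path[::-1]
        for q in edges[p]:
            if q not in parent:
                parent[q] = p
                frontier.append(q)
    return None

# Toy workspace: unit square with one circular obstacle at the centre.
def collision_free(p):
    return math.dist(p, (0.5, 0.5)) > 0.15

# A deterministic grid stands in for random sampling, for reproducibility.
samples = [(round(i * 0.1, 1), round(j * 0.1, 1))
           for i in range(11) for j in range(11)]
edges = build_roadmap(samples, radius=0.15, collision_free=collision_free)
path = find_path(edges, (0.0, 0.0), (1.0, 1.0))
```

SSSP's point is that these two phases need not be sequential: interleaving sampling with the search avoids building roadmap regions the search never visits.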
APA, Harvard, Vancouver, ISO, and other styles
2

Voss, I., S. Schroder, and R. W. De Doncker. "Predictive Digital Current Control Using Advanced Average Current Sampling Algorithm for Multi-Phase 2-Quadrant DC/DC Converters." In APEC 07 - Twenty-Second Annual IEEE Applied Power Electronics Conference and Exposition. IEEE, 2007. http://dx.doi.org/10.1109/apex.2007.357489.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

McCurry, Cynthia K., and Robert R. Romanosky. "Sampling and Analysis of Alkali in High-Temperature, High-Pressure Gasification Streams." In ASME 1985 International Gas Turbine Conference and Exhibit. American Society of Mechanical Engineers, 1985. http://dx.doi.org/10.1115/85-gt-202.

Full text
Abstract:
This paper describes the experiences leading to successful sampling of hot, contaminated, coal-derived gas streams for alkali constituents using advanced spectrometers. This activity was integrated with a multi-phase combustion test program, which addressed the use of minimally treated, coal-derived fuel gas in gas turbines. Alkali contaminants in coal-derived fuels are a source of concern, as they may induce corrosion of and deposition on turbine components. Real-time measurement of alkali concentrations in gasifier output fuel gas streams is important in evaluating these effects on turbine performance. An automated, dual-channel, flame atomic emission spectrometer was used to obtain on-line measurements of total sodium and potassium mass loadings (vapors and particles) in two process streams at the General Electric fixed-bed coal gasifier and turbine combustor simulator facility in Schenectady, New York. Alkali measurements were taken on (1) slipstreams of high-temperature, high-pressure, minimally cleaned, low-Btu fuel gas containing entrained particles from the gasifier and (2) a slipstream of the exhaust gas from the combustor/turbine simulator. Alkali detection limits for the analyzer were found to be on the order of one part per billion. Providing a representative sample to the alkali analyzer at the limited flows required by the instrument was a major challenge of this activity. Several approaches and sampling hardware configurations were utilized with varying degrees of success during this testing campaign. The resulting information formed the basis for a second-generation sampling system, which has recently been successfully utilized to measure alkali concentrations in slipstreams from the described fixed-bed coal gasifier and turbine combustor simulator.
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Yingxu, Guoming G. Zhu, and Ranjan Mukherjee. "Experimental Study of NMP Sample and Hold Input Using an Inverted Pendulum." In ASME 2018 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/dscc2018-8994.

Full text
Abstract:
Early research showed that a zero-order hold is able to convert a continuous-time non-minimum-phase (NMP) system to a discrete-time minimum-phase (MP) system with a sufficiently large sampling period. However, the resulting sampling period is often too large to adequately cover the original NMP system dynamics and hence is not suitable for control applications seeking to take advantage of a discrete-time MP system. This problem was solved using different sample-and-hold inputs (SHIs) to reduce the sampling period significantly for an MP discrete-time system. Three SHIs were studied analytically: square pulse, forward triangle, and backward triangle. To validate the findings experimentally, a dual-loop linear quadratic regulator (LQR) control configuration was designed for the Quanser single inverted pendulum (SIP) system, where the SIP system is stabilized using the Quanser continuous-time LQR (the first loop) and an additional discrete-time LQR (the second loop) with the proposed SHIs to reduce the cart oscillation. The experimental results show more than 75% reduction of the steady-state cart displacement variance over the single-loop Quanser controller, demonstrating the effectiveness of the proposed SHIs.
APA, Harvard, Vancouver, ISO, and other styles
5

Tzuang, C. K. C., D. Miller, T. H. Wang, T. Itoh, D. P. Neikirk, P. Williams, and M. Downer. "Picosecond Response of an Optically Controlled Millimeter Wave Phase Shifter." In Picosecond Electronics and Optoelectronics. Washington, D.C.: Optica Publishing Group, 1987. http://dx.doi.org/10.1364/peo.1987.fb5.

Full text
Abstract:
The introduction of electro-optic sampling techniques now allows the study of very high speed pulse propagation on planar transmission lines. In the frequency domain, recent work on microstrip and coplanar waveguides (CPW) on semiconductor substrates has shown the existence of slow-wave phenomena [1][2]. Since these phenomena are frequency dependent, strong dispersive effects on picosecond pulses would be expected. For a lossless substrate, a spectral domain technique has been used to calculate pulse dispersion for a coplanar waveguide (CPW) and coplanar strips [3], and for a particular geometry and lossy layer both mode matching and finite element techniques have been used to predict pulse behavior in microstrip and CPW [4]. In a typical structure exhibiting strong slow-wave effects there are three distinct layers in the substrate: first, a thin, lossless spacer layer immediately below the CPW; second, a somewhat thicker lossy (i.e. doped) layer; and third, a very thick lossless layer. The slow-wave phenomenon (and corresponding variation in effective dielectric constant) occurs as a result of the different interactions of the electric and magnetic fields of the propagating wave with the lossy layer below the CPW. The complex effective dielectric constant of the guide is a function of the conductivity of the lossy layer, the separation of the CPW from this layer, the transmission line dimensions, and the frequency. For frequency domain applications, we have recently proposed a new method to control the slow-wave factor in these structures by replacing the doped lossy layer with a (CW) optically generated electron-hole plasma layer in the semiconductor substrate [5]. The device could then serve as a phase shifter, controlled by varying the optical excitation level, and thus the conductivity of the lossy layer. In this paper we discuss the effects of the lossy layer on picosecond pulse dispersion.
APA, Harvard, Vancouver, ISO, and other styles
6

Lee, Yong Hoon, R. E. Corman, Randy H. Ewoldt, and James T. Allison. "A Multiobjective Adaptive Surrogate Modeling-Based Optimization (MO-ASMO) Framework Using Efficient Sampling Strategies." In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/detc2017-67541.

Full text
Abstract:
A novel multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) framework is proposed to utilize a minimal number of training samples efficiently for sequential model updates. All the sample points are enforced to be feasible and to provide coverage of sparsely explored design regions, using a new optimization subproblem. The MO-ASMO method only evaluates high-fidelity functions at feasible sample points. During an exploitation sampling phase, samples are selected to enhance solution accuracy rather than global exploration. Sampling tasks are especially challenging for multiobjective optimization; for an n-dimensional design space, a strategy is required for generating model update sample points near an (n − 1)-dimensional hypersurface corresponding to the Pareto set in the design space. This is addressed here using a force-directed layout algorithm, adapted from graph visualization strategies, to distribute feasible sample points evenly near the estimated Pareto set. Model validation samples are chosen uniformly on the Pareto set hypersurface, and surrogate model estimates at these points are compared to high-fidelity model responses. All high-fidelity model evaluations are stored for later use to train an updated surrogate model. The MO-ASMO algorithm, along with the set of new sampling strategies, is tested using two mathematical problems and one realistic engineering problem. The second mathematical test problem is specifically designed to test the limits of this algorithm to cope with very narrow, non-convex feasible domains. It involves oscillatory objective functions, giving rise to a discontinuous set of Pareto-optimal solutions. The third test problem demonstrates that the MO-ASMO algorithm can handle a practical engineering problem with more than 10 design variables and black-box simulations.
The efficiency of the MO-ASMO algorithm is demonstrated by comparing the results of the two mathematical problems to the results of the NSGA-II algorithm in terms of the number of high-fidelity function evaluations; MO-ASMO is shown to reduce total function evaluations by several orders of magnitude when converging to the same Pareto sets.
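The adaptive surrogate update loop at the heart of such frameworks can be illustrated in a deliberately simplified form: single-objective, one-dimensional, with an exact quadratic fit standing in for the surrogate and a hypothetical "expensive" function standing in for the high-fidelity model. None of this is the paper's MO-ASMO implementation; it only shows the sample-fit-exploit-resample cycle.

```python
def det3(m):
    # Determinant of a 3x3 matrix, for Cramer's rule below.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_quadratic(pts):
    # Exact quadratic a*x^2 + b*x + c through three points (Cramer's rule).
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    m = [[x * x, x, 1.0] for x in xs]
    d = det3(m)
    a = det3([[ys[i], m[i][1], m[i][2]] for i in range(3)]) / d
    b = det3([[m[i][0], ys[i], m[i][2]] for i in range(3)]) / d
    c = det3([[m[i][0], m[i][1], ys[i]] for i in range(3)]) / d
    return a, b, c

def expensive(x):
    # Hypothetical high-fidelity model; each evaluation is assumed costly.
    return (x - 1.7) ** 2 + 0.5

grid = [i * 0.01 for i in range(301)]                   # candidate designs in [0, 3]
samples = [(x, expensive(x)) for x in (0.0, 1.0, 3.0)]  # initial design of experiments
for _ in range(3):
    a, b, c = fit_quadratic(samples[-3:])               # refit surrogate on newest samples
    seen = {x for x, _ in samples}
    x_new = min((x for x in grid if x not in seen),
                key=lambda x: a * x * x + b * x + c)    # exploit the surrogate minimum
    samples.append((x_new, expensive(x_new)))           # one new high-fidelity evaluation

best_x, best_y = min(samples, key=lambda s: s[1])
```

The loop spends only one expensive evaluation per update, which is the economy the abstract describes; the multiobjective version replaces the scalar argmin with sampling near the estimated Pareto set.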
APA, Harvard, Vancouver, ISO, and other styles
7

Byrdwell, William, and Hari Kiran Kotapati. "Adventures in multiple dimensions of chromatography and mass spectrometry for lipidomic analysis." In 2022 AOCS Annual Meeting & Expo. American Oil Chemists' Society (AOCS), 2022. http://dx.doi.org/10.21748/athx8798.

Full text
Abstract:
Two-dimensional liquid chromatography (2D-LC) is commercially available and has become increasingly common in laboratories across the world. Most 2D-LC systems that are coupled to mass spectrometry use one mass spectrometer attached to the outlet of the second dimension, and the first dimension is reconstructed by “stitching together” the signal from all of the modulation periods. This requires short, fast separations in the second dimension, and fast-scanning mass spectrometers, otherwise “under sampling” can occur. Quantification is problematic using the “blobs” in 2D-LC chromatograms. We have bypassed or eliminated many of the shortcomings or limitations in conventional systems by using multiple mass spectrometers distributed across two or three dimensions of chromatography. We have published results showing the use of split-flow 2D-LC with four mass spectrometers in LC1MS2 × LC1MS2 = LC2MS4 experiments that combined non-aqueous reversed-phase (NARP) HPLC with silver ion chromatography UHPLC for analysis of cis/trans isomers and regioisomers in seed oils, with classic quantification of fat-soluble vitamins (FSVs) and triacylglycerols (TAGs) using direct detection in the first dimension and isomer separation in the second dimension. We have further reported split-flow three-dimensional (3D) LC with four mass spectrometers in LC1MS2 × (LC1MS1 + LC1MS1) = LC3MS4 analysis of infant formula that combined classic quantification of FSVs in the first dimension and TAG quantification by lipidomic analysis in the second dimension. We innovated multi-cycle (a.k.a., “constructive wraparound”) chromatography in the second second dimension for improved separation compared to the conventional approach used in the first second dimension. These and other combinations of LCxMSy are described.
APA, Harvard, Vancouver, ISO, and other styles
8

Mithani, Aijaz Hussain, Eadie Azhar Rosland, M. Aiman Jamaludin, W. Rokiah W Ismail, Maxwell Tommie Lajawi, and Irzie Hani A Salam. "Reservoir Souring Simulation and Modelling Study for Field with Long History of Water Injection: History Matching, Prediction." In SPE Asia Pacific Oil & Gas Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/210778-ms.

Full text
Abstract:
Abstract The field under study is a mature brownfield with no H2S in the fluid stream (PVT) at the time of development. However, concentrations of more than 1000 ppm were recorded recently, causing wells to be shut in (a few already shut). Hence, to bring these wells on stream, an assessment of souring potential in the field is required. This paper presents the results of our experience in H2S mapping: reservoir-well-facilities modelling, history matching, and prediction of H2S. We will highlight the workflow adopted to find the root causes of souring via a sampling and modelling approach, since H2S is measured throughout the field across all the reservoirs, including those undergoing waterflood. Moreover, various options that were studied through simulation will be discussed for mitigation and management of H2S within this field to safeguard the production, and thus recovery, of the field. A systematic phased approach is adopted to mitigate and manage the unwanted sour gas (H2S). In the first phase, we performed the analysis of the historical development of H2S throughout the field and developed the concept for possible souring causes. In the second phase, we designed and conducted a comprehensive sampling and laboratory analysis program end-to-end to fill the existing knowledge gap. In the third phase, we performed 3D dynamic reservoir souring modelling where we history matched the H2S and assessed the future potential via forecasting. Finally, we developed multiple mitigation scenarios ranging from nitrate injection and a sulphate reducing unit to limiting the nutrient supply for microbe growth via water mixing. It was evident that a) increased injection water contributed to souring wells, b) there is a link between souring wells and nutrient availability, c) negative fractionation of a sulphur isotope increased as H2S concentration increased, and d) mesophilic SRBs were detected in some souring wells. This evidence suggested that BSR is the predominant cause of souring.
It was also seen, based on water chemistry, that injection water was rich in sulfate while formation water was rich in volatile fatty acids. Results indicate that nitrate injection (up to 200 ppm) alone may not be an attractive option to mitigate the H2S within this field. However, the combination of an SRU and nitrate injection at 150 ppm could be a technically feasible option to mitigate such a high concentration of H2S within allowable facilities limits.
APA, Harvard, Vancouver, ISO, and other styles
9

Mithani, Aijaz Hussain, Eadie Azhar Rosland, M. Aiman Jamaludin, W. Rokiah W Ismail, Maxwell Tommie Lajawai, Irzie Hani A Salam, and Seyed Mousa Mousavi Mirkalaei. "Reservoir Souring in Mature Offshore Field Malaysia: Root Cause, Mitigation, and Management of H2S." In International Petroleum Technology Conference. IPTC, 2023. http://dx.doi.org/10.2523/iptc-22874-ms.

Full text
Abstract:
Abstract The field under study is a mature brownfield with no H2S in the fluid stream (PVT) at the time of development. However, concentrations of more than 1000 ppm were recorded recently, causing wells to be shut in (a few already shut). Hence, to bring these wells on stream, an assessment of souring potential in the field is required. This paper presents the results of our experience in H2S mapping: reservoir-well-facilities modelling, history matching, and prediction of H2S. We will highlight the workflow adopted to find the root causes of souring via a sampling and modelling approach, since H2S is measured throughout the field across all the reservoirs, including those undergoing waterflood. Moreover, various options that were studied through simulation will be discussed for mitigation and management of H2S within this field to safeguard the production, and thus recovery, of the field. A systematic phased approach is adopted to mitigate and manage the unwanted sour gas (H2S). In the first phase, we performed the analysis of the historical development of H2S throughout the field and developed the concept for possible souring causes. In the second phase, we designed and conducted a comprehensive sampling and laboratory analysis program end-to-end to fill the existing knowledge gap. In the third phase, we performed 3D dynamic reservoir souring modelling where we history matched the H2S and assessed the future potential via forecasting. Finally, we developed multiple mitigation scenarios ranging from nitrate injection and a sulphate reducing unit to limiting the nutrient supply for microbe growth via water mixing. It was evident that a) increased injection water contributed to souring wells, b) there is a link between souring wells and nutrient availability, c) negative fractionation of a sulphur isotope increased as H2S concentration increased, and d) mesophilic SRBs were detected in some souring wells. This evidence suggested that BSR is the predominant cause of souring.
It was also seen, based on water chemistry, that injection water was rich in sulfate while formation water was rich in volatile fatty acids. Results indicate that nitrate injection (up to 200 ppm) alone may not be an attractive option to mitigate the H2S within this field. However, the combination of an SRU and nitrate injection at 150 ppm could be a technically feasible option to mitigate such a high concentration of H2S within allowable facilities limits.
APA, Harvard, Vancouver, ISO, and other styles
10

Mithani, Aijaz Hussain, Eadie Azhar Rosland, M. Aiman Jamaludin, W. Rokiah W Ismail, Maxwell Tommie Lajawi, and Irzie Hani A Salam. "Reservoir Souring in Mature Offshore Field Malaysia: Root Cause, Mitigation, and Management of H2S." In Offshore Technology Conference. OTC, 2022. http://dx.doi.org/10.4043/32141-ms.

Full text
Abstract:
Abstract The field under study is a mature brownfield with no H2S in the fluid stream (PVT) at the time of development. However, concentrations of more than 1000 ppm were recorded recently, causing wells to be shut in (a few already shut). Hence, the shut-in wells have to be brought back on stream, and an assessment of the souring potential in the field has to be completed. This paper will share our experience in H2S mapping: reservoir-well-facilities modelling, history matching, and prediction of H2S. We will highlight the workflow adopted to find the root causes of souring via a sampling and modelling approach, since H2S is measured throughout the field across all the reservoirs, including those undergoing waterflood. Moreover, various options that were studied through simulation will be discussed for mitigation and management of H2S within this field to safeguard the production, and thus recovery, of the field. A systematic phased approach is adopted to mitigate and manage the unwanted sour gas (H2S). In the first phase, we performed an analysis of the historical development of H2S throughout the field and developed the concept for possible souring causes. In the second phase, we designed and conducted a comprehensive sampling and laboratory analysis program end-to-end to fill the existing knowledge gap. In the third phase, we performed 3D dynamic reservoir souring modelling where we history matched the H2S and assessed the future potential via forecasting. Finally, we developed multiple mitigation scenarios ranging from nitrate injection and a sulphate reducing unit to limiting the nutrient supply for microbe growth via water mixing. It was evident that a) increased injection water contributed to souring wells, b) there is a link between souring wells and nutrient availability, c) negative fractionation of a sulphur isotope increased as H2S concentration increased, and d) mesophilic SRBs were detected in some souring wells. This evidence suggested that BSR is the predominant cause of souring.
It was also seen, based on water chemistry, that injection water was rich in sulphate while formation water was rich in volatile fatty acids. Results indicate that nitrate injection (up to 200 ppm) alone may not be an attractive option to mitigate the H2S within this field. However, the combination of an SRU and nitrate injection at 150 ppm could be a technically feasible option to mitigate such a high concentration of H2S within allowable facilities limits.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Second phase sampling"

1

Shiihi, Solomon, U. G. Okafor, Zita Ekeocha, Stephen Robert Byrn, and Kari L. Clase. Improving the Outcome of GMP Inspections by Improving Proficiency of Inspectors through Consistent GMP Trainings. Purdue University, November 2021. http://dx.doi.org/10.5703/1288284317433.

Full text
Abstract:
Approximately 90% of the pharmaceutical inspectors in a pharmacy practice regulatory agency in West Africa have not updated their training on Good Manufacturing Practice (GMP) inspection in at least eight years; in the last two years, the inspectors relied on skills learned on the job. During this time, the agency introduced about 17% of its inspectors to hands-on GMP trainings. GMP is the part of quality assurance that ensures the production or manufacture of medicinal products is consistent, in order to control the quality standards appropriate for their intended use as required by the product specification. Inspection reports on the Agency's GMP inspection format between 2013 and 2019, across the six geopolitical zones in the country, were reviewed retrospectively for gap analysis. Sampling was done in two phases. During the first phase, reports were selected at random using a stratified sampling method. In the second phase, inspectors from the Regulatory Agency in different regions were contacted by phone and asked to send in four reports each by email. For those who forwarded four reports, two were selected; for those who forwarded one or two, all were considered. The Agency's inspection format/checklist was also compared with the World Health Organization (WHO) GMP checklist and the GMP practice observed. The purpose of this study was to evaluate the reporting skills and the ability of inspectors to interpret findings vis-à-vis their proficiency in inspection activities, and hence the efficiency of the system. Secondly, the study seeks to establish shortfalls or adequacies of the Agency's checklist with the aim of reviewing and improving it in line with best global practices. It was observed that different inspectors have different styles and methods of writing reports from the same checklist/inspection format, leading to non-conformances. Interpretations of findings were found to be subjective.
However, it was also observed that inspection reports from the few inspectors with hands-on training in the last two years were more coherent. This indicates that pharmaceutical inspectors need to be trained regularly to increase their knowledge and skills and to keep them at the same pace. It was also observed that there is a slight deviation in the placement of sub-indicators under the GMP components in the Agency's GMP inspection format, as compared with the WHO checklist.
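The two-phase selection described above can be sketched as follows; the zone labels, report counts, and inspector submissions are hypothetical placeholders, not the agency's actual records.

```python
import random

random.seed(7)  # fixed seed so the illustrative draw is reproducible

# Phase 1: stratified random selection of archived inspection reports,
# with geopolitical zones as strata (placeholder zones and counts).
archive = {zone: [f"{zone}-R{i:02d}" for i in range(1, 13)]
           for zone in ("NC", "NE", "NW", "SE", "SS", "SW")}
phase1 = {zone: random.sample(reports, 4) for zone, reports in archive.items()}

# Phase 2: each contacted inspector emails up to four reports;
# keep two of four, otherwise keep everything that was sent.
def select_from_inspector(reports):
    return random.sample(reports, 2) if len(reports) == 4 else list(reports)

submissions = {"inspector_a": ["a1", "a2", "a3", "a4"],
               "inspector_b": ["b1"],
               "inspector_c": ["c1", "c2"]}
phase2 = {name: select_from_inspector(sent) for name, sent in submissions.items()}
```

Stratifying phase 1 by zone guarantees every region contributes reports; the phase 2 rule caps any one inspector's influence on the sample.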
APA, Harvard, Vancouver, ISO, and other styles
2

Delwiche, Michael, Boaz Zion, Robert BonDurant, Judith Rishpon, Ephraim Maltz, and Miriam Rosenberg. Biosensors for On-Line Measurement of Reproductive Hormones and Milk Proteins to Improve Dairy Herd Management. United States Department of Agriculture, February 2001. http://dx.doi.org/10.32747/2001.7573998.bard.

Full text
Abstract:
The original objectives of this research project were to: (1) develop immunoassays, photometric sensors, and electrochemical sensors for real-time measurement of progesterone and estradiol in milk, (2) develop biosensors for measurement of caseins in milk, and (3) integrate and adapt these sensor technologies to create an automated electronic sensing system for operation in dairy parlors during milking. The overall direction of research was not changed, although the work was expanded to include other milk components such as urea and lactose. A second-generation biosensor for on-line measurement of bovine progesterone was designed and tested. Anti-progesterone antibody was coated on small disks of nitrocellulose membrane, which were inserted in the reaction chamber prior to testing, and a real-time assay was developed. The biosensor was designed using micropumps and valves under computer control, and assayed fluid volumes on the order of 1 ml. An automated sampler was designed to draw a test volume of milk from the long milk tube using a 4-way pinch valve. The system could execute a measurement cycle in about 10 min. Progesterone could be measured at concentrations low enough to distinguish luteal-phase from follicular-phase cows. The potential of the sensor to detect actual ovulatory events was compared with standard methods of estrus detection, including human observation and an activity monitor. The biosensor correctly identified all ovulatory events during its test period, but the variability at low progesterone concentrations triggered some false positives. Direct on-line measurement and intelligent interpretation of reproductive hormone profiles offer the potential for substantial improvement in reproductive management. A simple potentiometric method for measurement of milk protein was developed and tested. The method was based on the fact that proteins bind iodine.
When proteins are added to a solution of the redox couple iodine/iodide (I2/I-), the concentration of free iodine is changed and, as a consequence, the potential between two electrodes immersed in the solution is changed. The method worked well with analytical casein solutions and accurately measured concentrations of analytical caseins added to fresh milk. When tested with actual milk samples, the correlation between the sensor readings and the reference lab results (of both total proteins and casein content) was inferior to that of analytical casein. A number of different technologies were explored for the analysis of milk urea, and a manometric technique was selected for the final design. In the new sensor, urea in the sample was hydrolyzed to ammonium and carbonate by the enzyme urease, and subsequent shaking of the sample with citric acid in a sealed cell allowed urea to be estimated as a change in partial pressure of carbon dioxide. The pressure change in the cell was measured with a miniature piezoresistive pressure sensor, and effects of background dissolved gases and vapor pressures were corrected for by repeating the measurement of pressure developed in the sample without the addition of urease. Results were accurate in the physiological range of milk, the assay was faster than the typical milking period, and no toxic reagents were required. A sampling device was designed and built to passively draw milk from the long milk tube in the parlor. An electrochemical sensor for lactose was developed starting with a three-cascaded-enzyme sensor, evolving into two enzymes and CO2[Fe(CN)6] as a mediator, and then into a microflow injection system using poly-osmium modified screen-printed electrodes. The sensor was designed to serve multiple milking positions, using a manifold valve, a sampling valve, and two pumps. Disposable screen-printed electrodes with enzymatic membranes were used.
The sensor was optimized for electrode coating components, flow rate, pH, and sample size, and the results correlated well (r2 = 0.967) with known lactose concentrations.
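The manometric urea estimate described above reduces to the ideal gas law applied to the blank-corrected pressure rise. The sketch below uses illustrative numbers, not values from the study.

```python
R = 8.314  # ideal gas constant, J/(mol*K)

def urea_molarity(p_sample_pa, p_blank_pa, headspace_m3, sample_vol_l, temp_k):
    """Estimate urea concentration (mol/L) from the CO2 pressure rise.

    Assumes complete urease hydrolysis (one mole of CO2 per mole of urea),
    full release of CO2 on acidification, and ideal-gas behaviour; the blank
    run (no urease) corrects for background dissolved gases and vapour
    pressure, as described in the abstract.
    """
    delta_p = p_sample_pa - p_blank_pa             # pressure due to urea-derived CO2
    n_co2 = delta_p * headspace_m3 / (R * temp_k)  # ideal gas law: n = PV / RT
    return n_co2 / sample_vol_l

# Illustrative numbers only: a 1 kPa rise over the blank, 5 mL of headspace,
# a 1 mL milk sample, at 25 degrees C.
conc = urea_molarity(102_325.0, 101_325.0, 5e-6, 1e-3, 298.15)  # roughly 2 mmol/L
```

Subtracting the urease-free blank is what removes the background terms, so only the urea-derived CO2 contributes to `delta_p`.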
APA, Harvard, Vancouver, ISO, and other styles
