Academic literature on the topic 'Three Phase sampling'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Three Phase sampling.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Three Phase sampling"

1

Sang, Hailin, Kenneth K. Lopiano, Denise A. Abreu, Andrea C. Lamas, Pam Arroway, and Linda J. Young. "Adjusting for Misclassification: A Three-Phase Sampling Approach." Journal of Official Statistics 33, no. 1 (March 1, 2017): 207–22. http://dx.doi.org/10.1515/jos-2017-0011.

Abstract:
The United States Department of Agriculture’s National Agricultural Statistics Service (NASS) conducts the June Agricultural Survey (JAS) annually. Substantial misclassification occurs during the prescreening process and from field-estimating farm status for nonresponse and inaccessible records, resulting in a biased estimate of the number of US farms from the JAS. Here, the Annual Land Utilization Survey (ALUS) is proposed as a follow-on survey to the JAS to adjust the estimates of the number of US farms and other important variables. A three-phase survey design-based estimator is developed for the JAS-ALUS with nonresponse adjustment for the second phase (ALUS). A design-unbiased estimator of the variance is provided in explicit form.
2

Magnussen, Steen. "Stepwise estimators for three-phase sampling of categorical variables." Journal of Applied Statistics 30, no. 5 (June 2003): 461–75. http://dx.doi.org/10.1080/0266476032000053628.

3

Saliby, Eduardo, and Ray J. Paul. "Implementing Descriptive Sampling in Three-Phase Discrete Event Simulation Models." Journal of the Operational Research Society 44, no. 2 (February 1993): 147. http://dx.doi.org/10.2307/2584363.

4

Carugati, Ignacio, Sebastian Maestri, Patricio G. Donato, Daniel Carrica, and Mario Benedetti. "Variable Sampling Period Filter PLL for Distorted Three-Phase Systems." IEEE Transactions on Power Electronics 27, no. 1 (January 2012): 321–30. http://dx.doi.org/10.1109/tpel.2011.2149542.

5

Saliby, Eduardo, and Ray J. Paul. "Implementing Descriptive Sampling in Three-Phase Discrete Event Simulation Models." Journal of the Operational Research Society 44, no. 2 (February 1993): 147–60. http://dx.doi.org/10.1057/jors.1993.27.

6

Lepage, Kyle Q., Mark A. Kramer, and Uri T. Eden. "Some Sampling Properties of Common Phase Estimators." Neural Computation 25, no. 4 (April 2013): 901–21. http://dx.doi.org/10.1162/neco_a_00422.

Abstract:
The instantaneous phase of neural rhythms is important to many neuroscience-related studies. In this letter, we show that the statistical sampling properties of three instantaneous phase estimators commonly employed to analyze neuroscience data share common features, allowing an analytical investigation into their behavior. These three phase estimators—the Hilbert, complex Morlet, and discrete Fourier transform—are each shown to maximize the likelihood of the data, assuming the observation of different neural signals. This connection, explored with the use of a geometric argument, is used to describe the bias and variance properties of each of the phase estimators, their temporal dependence, and the effect of model misspecification. This analysis suggests how prior knowledge about a rhythmic signal can be used to improve the accuracy of phase estimates.
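The Hilbert-transform phase estimator discussed in the Lepage et al. abstract can be sketched in a few lines. This is an illustration only, not the authors' code: the FFT-based analytic-signal construction and the 10 Hz test rhythm are assumptions chosen for the demo.

```python
import numpy as np

def analytic_signal(x):
    # Analytic signal via the FFT: zero the negative frequencies and
    # double the positive ones (a numpy-only Hilbert construction).
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

fs = 1000.0                         # sampling rate (Hz), invented for the demo
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 10 * t)      # a clean synthetic 10 Hz "rhythm"

phase = np.unwrap(np.angle(analytic_signal(x)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency estimate
```

Away from the record edges the unwrapped phase grows linearly, and its discrete derivative recovers the rhythm's frequency; bias and variance of such estimates under noise are what the paper analyzes.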
7

Fattorini, Lorenzo, Marzia Marcheselli, and Caterina Pisani. "A three-phase sampling strategy for large-scale multiresource forest inventories." Journal of Agricultural, Biological, and Environmental Statistics 11, no. 3 (September 2006): 296–316. http://dx.doi.org/10.1198/108571106x130548.

8

Becker, Benjamin, Christian Kochleus, Denise Spira, Christel Möhlenkamp, Julia Bachtin, Stefan Meinecke, and Etiënne L. M. Vermeirssen. "Passive sampler phases for pesticides: evaluation of AttractSPE™ SDB-RPS and HLB versus Empore™ SDB-RPS." Environmental Science and Pollution Research 28, no. 9 (January 12, 2021): 11697–707. http://dx.doi.org/10.1007/s11356-020-12109-9.

Abstract:
In this study, three different passive sampling receiving phases were evaluated, with a main focus on the comparability of the established styrene-divinylbenzene reversed phase sulfonated (SDB-RPS) sampling phase from Empore™ (E-RPS) and the novel AttractSPE™ (A-RPS). Furthermore, AttractSPE™ hydrophilic-lipophilic balance (HLB) disks were tested. To support sampling phase selection for ongoing monitoring needs, it is important to have information on the characteristics of alternative phases. Three sets of passive samplers (days 1–7, days 8–14, and days 1–14) were exposed to a continuously exchanged mixture of creek and rainwater in a stream channel system under controlled conditions. The system was spiked with nine pesticides in two peak scenarios, with log KOW values ranging from approximately −1 to 5. Three analytes were continuously spiked at a low concentration. All three sampling phases turned out to be suitable for the chosen analytes, and, in general, uptake rates were similar for all three materials, particularly for the SDB-RPS phases. Exceptions concerned bentazon, where E-RPS sampled less than 20% compared with the other phases, and nicosulfuron, where HLB sampled noticeably more than both SDB-RPS phases. All three phases will work for environmental monitoring. They are very similar, but the differences indicate that calibration data from the literature cannot simply be transferred from one SDB phase to another, though for most compounds this may work fine.
9

Mandallaz, Daniel. "A three-phase sampling extension of the generalized regression estimator with partially exhaustive information." Canadian Journal of Forest Research 44, no. 4 (April 2014): 383–88. http://dx.doi.org/10.1139/cjfr-2013-0449.

Abstract:
We consider three-phase sampling schemes in which one component of the auxiliary information is known in the very large sample of the so-called null phase and the second component is available only in the large sample of the first phase, whereas the second phase provides the terrestrial inventory data. We extend to three-phase sampling the generalized regression estimator that applies when the null phase is exhaustive, for global and local estimation, and derive its asymptotic design-based variance. The new three-phase regression estimator is particularly useful for substantially reducing the computing time required to treat exhaustively the very large data sets generated by modern remote sensing technology such as LiDAR.
10

Mekhilef, Saad, Ahmad Maliki Omar, and Nasrudin Abd Rahim. "Modeling of Three-Phase Uniform Symmetrical Sampling Digital PWM for Power Converter." IEEE Transactions on Industrial Electronics 54, no. 1 (February 2007): 427–32. http://dx.doi.org/10.1109/tie.2006.885151.


Dissertations / Theses on the topic "Three Phase sampling"

1

Osman, Ahmad. "Automated evaluation of three dimensional ultrasonic datasets." PhD thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00995119.

Abstract:
Non-destructive testing has become necessary to ensure the quality of materials and components, either in service or at the production stage. This requires the use of a rapid, robust, and reliable testing technique. As a main testing technique, ultrasound technology has the unique ability to assess discontinuity location, size, and shape. Such information plays a vital role in the acceptance criteria, which are based on the safety and quality requirements of manufactured components. Consequently, the ultrasound technique is used extensively, especially in the inspection of large-scale composites manufactured in the aerospace industry. Significant technical advances have contributed to optimizing ultrasound acquisition techniques such as the sampling phased array technique. However, acquisition systems need to be complemented with an automated data analysis procedure to avoid the time-consuming manual interpretation of all the produced data. Such a complement would accelerate the inspection process and improve its reliability. The objective of this thesis is to propose an analysis chain dedicated to automatically processing the 3D ultrasound volumes obtained using the sampling phased array technique. First, a detailed study of the speckle noise affecting the ultrasound data was conducted, as speckle reduces the quality of ultrasound data. Afterward, an analysis chain was developed, composed of a segmentation procedure followed by a classification procedure. The proposed segmentation methodology is adapted for 3D ultrasound data and aims to detect all potential defects inside the input volume. While the detection of defects is vital, one main difficulty is the high number of false alarms detected by the segmentation procedure. Correctly identifying false alarms is necessary to reduce the rejection ratio of safe parts, and this has to be done without risking missing true defects. Therefore, a powerful classifier is needed that can efficiently distinguish true defects from false alarms. This is achieved using a specific classification approach based on data fusion theory. The chain was tested on several ultrasonic volumetric measurements of carbon fiber reinforced polymer components. Experimental results revealed high accuracy and reliability in detecting, characterizing, and classifying defects.

Books on the topic "Three Phase sampling"

1

Hankin, David, Michael S. Mohr, and Kenneth B. Newman. Sampling Theory. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198815792.001.0001.

Abstract:
We present a rigorous but understandable introduction to the field of sampling theory for ecologists and natural resource scientists. Sampling theory concerns itself with development of procedures for random selection of a subset of units, a sample, from a larger finite population, and with how to best use sample data to make scientifically and statistically sound inferences about the population as a whole. The inferences fall into two broad categories: (a) estimation of simple descriptive population parameters, such as means, totals, or proportions, for variables of interest, and (b) estimation of uncertainty associated with estimated parameter values. Although the targets of estimation are few and simple, estimates of means, totals, or proportions see important and often controversial uses in management of natural resources and in fundamental ecological research, but few ecologists or natural resource scientists have formal training in sampling theory. We emphasize the classical design-based approach to sampling in which variable values associated with units are regarded as fixed and uncertainty of estimation arises via various randomization strategies that may be used to select samples. In addition to covering standard topics such as simple random, systematic, cluster, unequal probability (stressing the generality of Horvitz–Thompson estimation), multi-stage, and multi-phase sampling, we also consider adaptive sampling, spatially balanced sampling, and sampling through time, three areas of special importance for ecologists and natural resource scientists. The text is directed to undergraduate seniors, graduate students, and practicing professionals. Problems emphasize application of the theory and R programming in ecological and natural resource settings.
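The Horvitz–Thompson estimation stressed in this book's abstract is easy to demonstrate. The sketch below is not from the book: the population, the inclusion probabilities, and the Poisson sampling design are invented for illustration. The π-weighted total, Σ_{i∈s} y_i/π_i, is design-unbiased for the population total under any design with known positive inclusion probabilities.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical finite population with unequal inclusion probabilities
N = 1000
y = rng.gamma(2.0, 10.0, size=N)              # variable of interest
pi = np.clip(0.3 * y / y.mean(), 0.02, 1.0)   # inclusion prob. roughly prop. to size

# Monte Carlo check of design-unbiasedness under Poisson sampling
estimates = []
for _ in range(500):
    in_sample = rng.random(N) < pi            # each unit included independently
    estimates.append(np.sum(y[in_sample] / pi[in_sample]))  # Horvitz–Thompson total

rel_bias = np.mean(estimates) / y.sum() - 1.0
```

Averaged over repeated samples, the estimator's relative bias is negligible, which is exactly the design-based sense of unbiasedness the book emphasizes: the y-values are fixed, and all randomness comes from which units are selected.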
2

Chalabi, Azadeh. A Cross-Case Analysis of NHRAPs of Fifty-Three Countries. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198822844.003.0006.

Abstract:
Part III, ‘Empirical Perspectives’, contains only one chapter, Chapter 5, which presents the results of a cross-case analysis of the national human rights action plans of fifty-three countries. These countries are selected using a purposive sampling technique, on the basis of four main criteria: human rights record, geographical diversity, political regime, and cultural diversity. This comprehensive cross-case study pursues two objectives. The first is to unearth significant problems in the ‘pre-phase’ and the four phases of planning, namely the ‘preparatory phase’, ‘development phase’, ‘implementing phase’, and ‘assessment phase’. These problems are significantly detrimental to the effective implementation of human rights, and their identification will substantially help generate response strategies. They are best addressed by attempting to mitigate their root causes rather than only correcting the immediately obvious symptoms. This brings us to the chapter’s second objective: to explore the underlying causes of these problems.

Book chapters on the topic "Three Phase sampling"

1

Chinazzo, André, Christian De Schryver, Katharina Zweig, and Norbert Wehn. "Increasing the Sampling Efficiency for the Link Assessment Problem." In Lecture Notes in Computer Science, 39–56. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-21534-6_3.

Abstract:
Complex graphs are at the heart of today’s big data challenges, such as recommendation systems, customer behavior modeling, and incident detection systems. One recurring task in these fields is the extraction of network motifs, which are subgraphs that recur and are statistically significant. To assess the statistical significance of their occurrence, the observed values in the real network need to be compared with their expected values in a random graph model. In this chapter, we focus on the so-called Link Assessment (LA) problem, in particular for bipartite networks. Lacking closed-form solutions, we require stochastic Monte Carlo approaches, which raise the challenge of finding appropriate metrics for quantifying the quality of results (QoR) together with suitable heuristics that stop the computation process if no further increase in quality is expected. We provide investigation results for three quality metrics and show that observing the right metrics reveals so-called phase transitions that can be used as a reliable basis for such heuristics. Finally, we propose a heuristic that has been evaluated with real-world datasets, providing a speedup of 15.4× over previous approaches.
2

Marcus, Pamela M. "Behind the Scenes." In Assessment of Cancer Screening, 15–22. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-94577-0_2.

Abstract:
Cancer screening aims to interfere with disease progression by detecting cancer at a point in its natural history when it is either curable or, if not curable, when treatment will extend life beyond what it would have been in the absence of cancer screening. A simple, four-phase model of cancer progression is presented in Chap. 2 and is used to demonstrate how the natural history of cancer is perturbed in the presence of cancer screening. Lead time, length-weighted sampling, and overdiagnosis, three phenomena that are frequently discussed in the context of cancer screening, are defined. Examples are presented to assist in comprehension of the impact of these phenomena.
3

Dawar, Kshitij, Sanjay Srinivasan, and Mort D. Webster. "Application of Reinforcement Learning for Well Location Optimization." In Springer Proceedings in Earth and Environmental Sciences, 81–110. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-19845-8_7.

Abstract:
The extensive deployment of sensors in oilfield operation and management has led to the collection of vast amounts of data, which in turn has enabled the use of machine learning models to improve decision-making. One of the prime applications of data-based decision-making is the identification of optimum well locations for hydrocarbon recovery. This task is made difficult by the relative lack of high-fidelity data regarding the subsurface with which to develop precise models in support of decision-making. Each well placement decision affects not only eventual recovery but also the decisions affecting future wells. Hence, there exists a tradeoff between recovery maximization and information gain. Existing methodologies for the placement of wells during the early phases of reservoir development fail to take a long-term view of maximizing reservoir profitability, focusing instead on short-term gains. While improvements in drilling technologies have dramatically lowered the costs of producing hydrocarbons from prospects and resulted in very efficient drilling operations, these advancements have led to sub-optimal and haphazard placement of wells. This can lead to a considerable number of unprofitable wells being drilled, which, during periods of low oil and gas prices, can be detrimental to a company’s solvency. The goal of this research is to present a methodology that builds machine learning models, integrating geostatistics and reservoir flow dynamics, to determine optimum future well locations for maximizing reservoir recovery. A deep reinforcement learning (DRL) framework is proposed to address the issue of long-horizon decision-making. The DRL reservoir agent employs intelligent sampling and utilizes a reward framework based on geostatistical and flow simulations. The implemented approach provides opportunities to insert expert information while basing well placement decisions on data collected from seismic surveys and prior well tests. The effects of prior information on well placement decisions are explored, and the developed DRL-derived policies are compared with single-stage optimization methods for reservoir development. Under a similar reward framework, sequential well placement strategies developed using DRL are shown to perform better than simultaneous drilling of several wells.
4

Dhamodharavadhani S. and Rathipriya R. "Enhanced Logistic Regression (ELR) Model for Big Data." In Handbook of Research on Big Data Clustering and Machine Learning, 152–76. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0106-1.ch008.

Abstract:
The regression model is an important tool for modeling and analyzing data. In this chapter, the proposed model comprises three phases. The first phase concentrates on sampling techniques to obtain the best sample for building the regression model. The second phase predicts the residual of the logistic regression (LR) model using a time series analysis method, the autoregressive model. The third phase develops the enhanced logistic regression (ELR) model by combining the LR model and the residual prediction (RP) model. An empirical study of the performance of the ELR model is carried out using a large diabetic dataset. The results show that the ELR model has a higher level of accuracy than the traditional logistic regression model.
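The three-phase structure described in this abstract (sampling, logistic regression, autoregressive residual correction) can be sketched as follows. This is a minimal numpy illustration of the structure only: the synthetic data stand in for the chapter's diabetic dataset, all parameters are invented, and no claim is made about reproducing the authors' ELR results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase 1 (sampling): draw a synthetic sample; the chapter's actual
# sampling techniques and dataset are not reproduced here.
n = 500
X = rng.normal(size=(n, 2))
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

# Phase 2a: fit a plain logistic regression by gradient descent.
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n
p = 1.0 / (1.0 + np.exp(-X @ w))

# Phase 2b: model the residual series with a lag-1 autoregression.
r = y - p
phi = np.dot(r[:-1], r[1:]) / np.dot(r[:-1], r[:-1])

# Phase 3: enhanced prediction = LR prediction + AR residual forecast.
p_elr = np.clip(p[1:] + phi * r[:-1], 0.0, 1.0)
acc_lr = np.mean((p[1:] > 0.5) == (y[1:] > 0.5))
acc_elr = np.mean((p_elr > 0.5) == (y[1:] > 0.5))
```

On this synthetic data the residuals carry no serial structure, so the correction is tiny; the gain reported in the chapter comes from residual autocorrelation present in the real data.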
5

Sun, H., R. Zhang, K. Zhou, and J. Ye. "A design of a three phase AC sampling module for an electricity information acquisition device." In Biomedical Engineering and Environmental Engineering, 203–8. CRC Press, 2015. http://dx.doi.org/10.1201/b18419-44.

6

Marks II, Robert J. "Multidimensional Signal Analysis." In Handbook of Fourier Analysis & Its Applications. Oxford University Press, 2009. http://dx.doi.org/10.1093/oso/9780195335927.003.0013.

Abstract:
N-dimensional signals are characterized as values in an N-dimensional space. Each point in the space is assigned a value, possibly complex. Each dimension in the space can be discrete, continuous, or on a time scale. A black-and-white movie can be modelled as a three-dimensional signal. A color picture can be modelled as three signals in two dimensions, one each, for example, for red, green, and blue. This chapter explores Fourier characterization of different types of multidimensional signals and corresponding applications. Some signal characterizations are straightforward extensions of their one-dimensional counterparts. Others, even in two dimensions, have properties not found in one-dimensional signals. We are fortunate to be able to visualize structures in two, three, and sometimes four dimensions. This assists in the intuitive generalization of properties to higher dimensions. Fourier characterization of multidimensional signals allows straightforward modelling of the reconstruction of images from their tomographic projections. Doing so is the foundation of certain medical and industrial imaging, including CAT (computed axial tomography) scans. Multidimensional Fourier series are based on models found in nature in periodically replicated crystal Bravais lattices [987, 1188]. As in one dimension, the Fourier series components can be found from sampling the Fourier transform of a single period of the periodic signal. The multidimensional cosine transform, a relative of the Fourier transform, is used in image compression such as JPEG images. Multidimensional signals can be filtered. The McClellan transform is a powerful method for the design of multidimensional filters, including generalization of the large catalog of zero-phase one-dimensional FIR filters into higher dimensions. As in one dimension, the multidimensional sampling theorem is the Fourier dual of the Fourier series. Unlike in one dimension, sampling can be performed at the Nyquist density with a resulting dependency among sample values. This property can be used to reduce the sampling density of certain images below that of Nyquist, or to restore lost samples from those remaining. Multidimensional signal and image analysis is also the topic of Chapter 9 on time-frequency representations, and Chapter 11, where POCS is applied to signals in higher dimensions.
7

Özkent, Yasemin. "Social Reactions to the Pandemic." In COVID-19 Pandemic Impact on New Economy Development and Societal Change, 279–95. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-3374-4.ch014.

Abstract:
Coronavirus disease (COVID-19) became a global health and economic crisis and has had many impacts on daily life. This study investigates the effect of the pandemic on movie viewing preferences in Turkey, using Google Trends data to analyze interest in epidemic movies quantitatively. Google Trends data is a valuable source of information for examining the psychological, sociological, and health effects of quarantine. In this way, it can be determined which media the public, seeking relief from the epidemic's anxieties, turned to. In this study, a search was made on IMDb with the keyword "contagion," and the listed pandemic movies with an IMDb rating above 6.0 were examined as the sample. Interest in epidemic films during the three months before and after the epidemic's start was compared. This study suggests an increase in watching pandemic movies in Turkey in response to the initial phase of the COVID-19 pandemic.
8

Colomban, P. "Glass, Ceramics and Enamelled Objects." In Conservation Science: Heritage Materials, 200–247. 2nd ed. The Royal Society of Chemistry, 2021. http://dx.doi.org/10.1039/bk9781788010931-00200.

Abstract:
Much like weapons, vessels made from glass and ceramics have long been regarded as objects of very high technology. Mastery of ceramic technology is even at the foundation of metallurgy. In producing glass, pottery, and enamelled metals, three critical and energy-intensive steps are needed: obtaining fine powder, firing, and building appropriate kilns. Control of colour also requires advanced physical and chemical knowledge. Indeed, if ceramic production is somewhat the art of forming a heterogeneous matter (only some components melt), glass or enamel production requires the object to pass through a homogeneous liquid state to obtain the desired microstructure and properties. This chapter presents the different destructive, non-destructive, and non-invasive analytical methods that can be carried out in a laboratory on shards or samples with fixed 'big' instruments, or on site (museums, reserves, etc.) with mobile set-ups. After a brief overview of the history of pottery, the implications of the processes involved (grinding, shaping, sintering, enamelling, decoration) for micro- and nano-structures (formation and decomposition temperature, kinetic and phase rules, sintering) are discussed. Emphasis is given to information that can be obtained by mobile, non-invasive XRF and Raman measurements. Examples illustrating how these studies help to document technology exchanges and exchange routes are also given.
9

"Advancing an Ecosystem Approach in the Gulf of Maine." In Advancing an Ecosystem Approach in the Gulf of Maine, edited by Peter A. Jumars. American Fisheries Society, 2012. http://dx.doi.org/10.47886/9781934874301.ch23.

Abstract:
Because of partial recirculation and steep bottom slopes, the Gulf of Maine (GoM) contains steep environmental gradients in both space and time. I focus, in particular, on optical properties associated with both resources and risks. The GoM estuary-shelf systems differ from those whose fine sediments are trapped behind barrier bars; in the GoM, nepheloid layers prevail over a wide range of depths, and onshore-offshore turbidity gradients at a given water depth are also steep. Turbidity reduces predation risk. Three crustacean species that are major fish forages respond to the strong environmental gradients in resources and risks by migrating seasonally both horizontally and vertically. Northern shrimp (also known as pink shrimp) Pandalus borealis, sevenspine bay shrimp Crangon septemspinosa, and the most common mysid shrimp in the GoM, Neomysis americana, share both stalked eyes that appear capable of detecting polarized light and statocysts. This pair of features likely confers sun-compass navigational ability, facilitating use of multiple habitats. All three species converge on a shallow-water bloom at depths <100 m of the western GoM shelf in December–March, well before the basin-wide, climatological spring bloom in April. In addition to reaching abundant food resources, I propose that they are also using optical protection, quantified as the integral of the beam attenuation coefficient from the surface to the depth that they occupy during daylight. Spring immigration into, and fall emigration from, estuaries appear to be common in GoM sevenspine bay shrimp and N. americana, out of phase with their populations south of New England and with turbidity differences a likely cause. Migration studies that include measurements of turbidity are needed, however, to test the strength of the effect of optical protection on habitat use by all three species. Simultaneous sampling of estuaries and the adjacent shelf, together with trace-element tracer studies, would be very useful to resolve the timing and extent of mass migrations, which are likely sensitive to turbidity change resulting from climate change. These migrations, by using so many different habitats, present special challenges to ecosystem-based management.
10

Marks II, Robert J. "Signal and Image Synthesis: Alternating Projections Onto Convex Sets." In Handbook of Fourier Analysis & Its Applications. Oxford University Press, 2009. http://dx.doi.org/10.1093/oso/9780195335927.003.0016.

Abstract:
Alternating projections onto convex sets (POCS) [319, 918, 1324, 1333] is a powerful tool for signal and image restoration and synthesis. The desirable properties of a reconstructed signal may be defined by a convex set of constraint parameters. Iteratively projecting onto these convex constraint sets can result in a signal which contains all desired properties. Convex signal sets are frequently encountered in practice and include the sets of bandlimited signals, duration-limited signals, causal signals, signals that are the same (e.g., zero) on some given interval, bounded signals, signals of a given area, and complex signals with a specified phase. POCS was initially introduced by Bregman [156] and Gubin et al. [558] and was later popularized by Youla & Webb [1550] and Sezan & Stark [1253]. POCS has been applied to such topics as acoustics [300, 1381], beamforming [426], bioinformatics [484], cellular radio control [1148], communications systems [29, 769, 1433], deconvolution and extrapolation [718, 907, 1216], diffraction [421], geophysics [4], image compression [1091, 1473], image processing [311, 321, 470, 471, 672, 736, 834, 1065, 1069, 1093, 1473, 1535, 1547, 1596], holography [880, 1381], interpolation [358, 559, 1266], neural networks [1254, 1543, 909, 913, 1039], pattern recognition [1444, 1588], optimization [598, 1359, 1435], radiotherapy [298, 814, 1385], remote sensing [1223], robotics [740], sampling theory [399, 1334, 1542], signal recovery [320, 737, 1104, 1428, 1594], speech processing [1450], superresolution [399, 633, 654, 834, 1393, 1521], television [736, 786], time-frequency analysis [1037, 1043], tomography [1103, 713, 1212, 1213, 1275, 916, 1322, 1060, 1040], video processing [560, 786, 1092], and watermarking [19, 1470]. Although signal processing applications of POCS use sets of signals, POCS is best visualized by viewing the operations on sets of points. In this chapter, POCS is introduced geometrically in two and three dimensions. Such visualization of POCS is invaluable in application of the theory.

Conference papers on the topic "Three Phase sampling"

1

Tsang, P. W. M., and T. C. Poon. "Non-iterative Generation of Phase-only Holograms based on Dynamic Down-Sampling." In Digital Holography and Three-Dimensional Imaging. Washington, D.C.: OSA, 2017. http://dx.doi.org/10.1364/dh.2017.m4b.5.

2

Kurten Ihlenfeld, W. G., K. Dauke, A. Suchy, and P. Rather. "Three-phase primary ac power sampling standard with improved frequency resolution." In 2008 Conference on Precision Electromagnetic Measurements (CPEM 2008). IEEE, 2008. http://dx.doi.org/10.1109/cpem.2008.4574865.

3

Li, Yiming, Jun Rong, and Fang Tong. "The technology study of nature sampling method in three-phase inverter." In 2011 International Conference on Multimedia Technology (ICMT). IEEE, 2011. http://dx.doi.org/10.1109/icmt.2011.6002275.

4

Töpfer, Sebastian, Marta Gilaberte Basset, Jorge Fuenzalida, Fabian Steinlechner, Juan P. Torres, and Markus Gräfe. "Quantum Holography With Undetected Photons." In Digital Holography and Three-Dimensional Imaging. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/dh.2022.tu4a.1.

Abstract:
We developed and experimentally characterized an imaging setup for holography with separated wavelengths in detection and sampling, by combining quantum imaging with undetected photons and digital phase shifting.
5

Liu, Bo, Ren Ren, Zheyu Zhang, Fred Wang, and Daniel Costinett. "A sampling scheme for three-phase high switching frequency and speed converter." In 2018 IEEE Applied Power Electronics Conference and Exposition (APEC). IEEE, 2018. http://dx.doi.org/10.1109/apec.2018.8341532.

6

Durna, Emre. "Effect of sampling frequency and execution time on hysteresis current controlled three-phase three-wire HAPF converters." In 2016 International Symposium on Power Electronics, Electrical Drives, Automation and Motion (SPEEDAM). IEEE, 2016. http://dx.doi.org/10.1109/speedam.2016.7525968.

7

Anusha K. V. and A. Vijayakumari. "The effect of sampling rates on the performance of a three phase PWM inverter and choice of appropriate sampling rates." In 2016 Biennial International Conference on Power and Energy Systems: Towards Sustainable Energy (PESTSE). IEEE, 2016. http://dx.doi.org/10.1109/pestse.2016.7516494.

8

Zhang, Yue, Peisheng Gu, and Songlin Zhuang. "New method for reconstructing phase factor." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/oam.1989.md6.

Abstract:
In a Fourier transform system, an object f(x) is interrelated to its Fourier spectrum F(u) by the Fourier transform. Our goal is to recover the phase factor of the object from the measured intensity data. By means of the discrete Fourier transform, assuming the sampling number is N, it is shown that the intensity at one point in the Fourier plane directly connects with both the phase term and the amplitude term of N points in the object plane. Because of the large number N, the relationship is in general somewhat complex. To simplify it, we may allow fewer sampling points into the system. A particularly successful approach to solving this problem is to let only two neighboring points, n and n + 1, pass the system. The corresponding formula then tells us that, if intensity measurements are taken at three points in the Fourier plane, the phase difference of the object between the two sampling points is determined. Letting n change from 1 to N - 1, we can then reconstruct the phase distribution of the object function.
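The two-point scheme in this abstract can be checked with a short numerical sketch. The object values a and b, the position n, the sampling number N, and the choice of the three Fourier-plane points u = 0, N/4, N/2 are all illustrative assumptions, not taken from the paper:

```python
import numpy as np

N = 8                        # assumed sampling number
a = 1.2 * np.exp(1j * 0.4)   # hypothetical object values at points n and n+1
b = 0.7 * np.exp(1j * 1.5)
n = 3

def intensity(u):
    """Simulated Fourier-plane intensity when only points n, n+1 pass the system."""
    F = a * np.exp(-2j * np.pi * u * n / N) + b * np.exp(-2j * np.pi * u * (n + 1) / N)
    return abs(F) ** 2

I0, I1, I2 = intensity(0), intensity(N // 4), intensity(N // 2)

# With two nonzero points, |F(u)|^2 = |a|^2 + |b|^2 + 2|a||b| cos(dphi - 2*pi*u/N),
# so measurements at u = 0, N/4, N/2 isolate the constant, cosine, and sine terms.
power = (I0 + I2) / 2            # |a|^2 + |b|^2
cos_term = (I0 - I2) / 2         # 2|a||b| cos(dphi)
sin_term = I1 - power            # 2|a||b| sin(dphi)
dphi = np.arctan2(sin_term, cos_term)

print(np.isclose(dphi, 1.5 - 0.4))  # True: phase difference recovered
```

Sweeping n from 1 to N - 1 and accumulating the recovered differences would rebuild the full phase distribution, as the abstract describes.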
9

Sato, Kotaro, Kazuki Nakamura, Sakyo Takeuchi, and Tomoki Yokoyama. "A Study of 1MHz Multi-Sampling SVPWM Method for Low Carrier Three Phase Modular Multilevel Converter." In 2022 International Power Electronics Conference (IPEC-Himeji 2022- ECCE Asia). IEEE, 2022. http://dx.doi.org/10.23919/ipec-himeji2022-ecce53331.2022.9807038.

10

Vojvodic, Nikola, and Milan Bebic. "Analysis of the influence of non-simultaneous sampling on the measurement of three-phase instantaneous power." In 2021 21st International Symposium on Power Electronics (Ee). IEEE, 2021. http://dx.doi.org/10.1109/ee53374.2021.9628247.


Reports on the topic "Three Phase sampling"

1

Kelner, Britton, and Sparks. L51986 Natural Gas Sample Collection and Handling Phase II Simulated Field Conditions. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), January 2003. http://dx.doi.org/10.55274/r0011157.

Abstract:
Phase II was originally planned as a series of field tests to confirm the results of the sampling methods performance tests conducted during Phase I. However, the API Chapter 14.1 Gas Sampling Research Working Group chose to have the tests conducted at a newly developed wet gas test facility located at the Colorado Engineering Experiment Station (CEESI) in Nunn, Colorado. Three general tests were conducted. Test Plan I was intended to investigate the effects of sample point location on on-line gas chromatograph (GC) analyses and on spot sampling methods. Test Plan II was intended to investigate the effects of sample point location on on-line GC analyses and to compare several spot sampling methods when sampling from the same point. Test Plan III was intended to investigate the effects of coupling configurations and cylinder temperature on two specific methods: Helium Pop and Purging (Fill/Empty).
2

Beauregard, Yannick. PR261-193604-R01 Optimizing Stress Corrosion Cracking Management - Field and Economic Study. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), October 2021. http://dx.doi.org/10.55274/r0012179.

Abstract:
This work aims to improve pipeline segment prioritization for stress corrosion cracking (SCC) excavations. Specifically, it is aimed at optimizing the technical accuracy and the cost of the Association for Materials Protection and Performance (AMPP, formerly NACE) Stress Corrosion Cracking Direct Assessment (SP0204-2015) process by:
- evaluating the SCC susceptibility criteria of the soil property parameters proposed in the first phase of the project (pH, resistivity, sulfide concentration, soil carbon dioxide (CO2) concentration, carbonate concentration, soil oxygen (O2) concentration, sulphate-reducing bacteria (SRB) concentration, oxidation-reduction potential (ORP), soil moisture content, soil effect on steel hydrogen permeation, and electrochemical properties)
- investigating the technical and economic feasibility of using commercially available field instruments for the measurement of these soil parameters to overcome limitations of laboratory testing (e.g., sample preservation and external costs)
Soil sampling and testing were conducted at twenty-two dig sites in three geographic regions in the USA and Canada. On-site soil sampling and testing activities were conducted by field service providers using commercially available portable instruments. Soil samples were sent to laboratories for chemical analysis and for electrochemical characterization. The data analysis consisted of:
(i) comparison of soil properties obtained at sites with and without SCC against the proposed SCC susceptibility criteria
(ii) comparison of soil property data obtained in the field to those obtained through laboratory analysis
(iii) comparison of soil property data obtained using different field and lab measurement techniques
(iv) comparison of costs associated with performing in-field measurements to those of laboratory analysis.
3

George and Grant. PR-015-14609-R01 Study of Sample Probe Minimum Insertion Depth Requirements. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), May 2015. http://dx.doi.org/10.55274/r0010844.

Abstract:
Probes for natural gas sample collection and analysis must extend far enough into the pipeline to avoid contaminants at the pipe wall, but must not be so long that there is a risk of flow-induced resonant vibration and failure. PRCI has sponsored a project to determine the minimum probe depth for obtaining a representative single-phase gas sample in flows with small amounts of contaminants. To this end, Phase 1 of the project involved a review of existing literature and industry standards to identify key probe design parameters. Several current standards for sampling clean, dry natural gas were reviewed, and their requirements for sample probe dimensions and mounting arrangements were compared. Some of these standard requirements suggested probe designs and sampling approaches that could be used to collect gas-only samples from two-phase flows. A literature review identified many useful studies of two-phase flows and phase behavior. While few of these studies evaluated probe designs, the majority examined the behavior of gas and liquid in two-phase flows, methods of predicting flow regimes, and methods of predicting flow conditions that define the minimum probe depth for gas-only samples in gas-liquid flows. Useful recommendations were provided for selecting general probe features where liquids must be rejected from the gas sample. A basic design procedure was also provided to select the minimum sample probe insertion length and optimum installation position for known flow conditions. Plans to test the recommendations and the design procedure in Phase 2 of the project were also discussed. This report has a related webinar.
4

Shiihi, Solomon, U. G. Okafor, Zita Ekeocha, Stephen Robert Byrn, and Kari L. Clase. Improving the Outcome of GMP Inspections by Improving Proficiency of Inspectors through Consistent GMP Trainings. Purdue University, November 2021. http://dx.doi.org/10.5703/1288284317433.

Abstract:
Approximately 90% of the pharmaceutical inspectors in a pharmacy practice regulatory agency in West Africa have not updated their training on Good Manufacturing Practice (GMP) inspection in at least eight years; over the last two years the inspectors have instead relied on learning-on-the-job skills. During this time, the agency introduced about 17% of its inspectors to hands-on GMP trainings. GMP is the part of quality assurance that ensures the production or manufacture of medicinal products is consistent, in order to control the quality standards appropriate for their intended use as required by the specification of the product. Inspection reports on the Agency’s GMP inspection format between 2013 and 2019 across the six geopolitical zones in the country were reviewed retrospectively for gap analysis. Sampling was done in two phases. During the first phase, reports were sampled by random selection, using a stratified sampling method. In the second phase, inspectors from the Regulatory Agency in different regions were contacted by phone and asked to send in four reports each by email. For those who forwarded four reports, two were selected; for those who forwarded only one or two, all were considered. Also, the Agency’s inspection format/checklist was compared with the World Health Organization (WHO) GMP checklist and the GMP practice observed. The purpose of this study was to evaluate the reporting skills and the ability of inspectors to interpret findings vis-à-vis their proficiency in inspection activities, and hence the efficiency of the system. Secondly, the study seeks to establish shortfalls or adequacies of the Agency’s checklist with the aim of reviewing and improving it in line with best global practices. It was observed that different inspectors have different styles and methods of writing reports from the same checklist/inspection format, leading to non-conformances. Interpretations of findings were found to be subjective.
However, it was also observed that inspection reports from the few inspectors with hands-on training in the last two years were more coherent. This indicates that pharmaceutical inspectors need to be trained regularly to increase their knowledge and skills and keep them at the same pace. It was also observed that there is a slight deviation in the placement of sub-indicators under the GMP components in the Agency’s GMP inspection format, as compared to the WHO checklist.
5

Delwiche, Michael, Boaz Zion, Robert BonDurant, Judith Rishpon, Ephraim Maltz, and Miriam Rosenberg. Biosensors for On-Line Measurement of Reproductive Hormones and Milk Proteins to Improve Dairy Herd Management. United States Department of Agriculture, February 2001. http://dx.doi.org/10.32747/2001.7573998.bard.

Abstract:
The original objectives of this research project were to: (1) develop immunoassays, photometric sensors, and electrochemical sensors for real-time measurement of progesterone and estradiol in milk, (2) develop biosensors for measurement of caseins in milk, and (3) integrate and adapt these sensor technologies to create an automated electronic sensing system for operation in dairy parlors during milking. The overall direction of research was not changed, although the work was expanded to include other milk components such as urea and lactose. A second-generation biosensor for on-line measurement of bovine progesterone was designed and tested. Anti-progesterone antibody was coated on small disks of nitrocellulose membrane, which were inserted in the reaction chamber prior to testing, and a real-time assay was developed. The biosensor was designed using micropumps and valves under computer control, and assayed fluid volumes on the order of 1 ml. An automated sampler was designed to draw a test volume of milk from the long milk tube using a 4-way pinch valve. The system could execute a measurement cycle in about 10 min. Progesterone could be measured at concentrations low enough to distinguish luteal-phase from follicular-phase cows. The potential of the sensor to detect actual ovulatory events was compared with standard methods of estrus detection, including human observation and an activity monitor. The biosensor correctly identified all ovulatory events during its test period, but the variability at low progesterone concentrations triggered some false positives. Direct on-line measurement and intelligent interpretation of reproductive hormone profiles offers the potential for substantial improvement in reproductive management. A simple potentiometric method for measurement of milk protein was developed and tested. The method was based on the fact that proteins bind iodine.
When proteins are added to a solution of the redox couple iodine/iodide (I-I2), the concentration of free iodine is changed and, as a consequence, the potential between two electrodes immersed in the solution is changed. The method worked well with analytical casein solutions and accurately measured concentrations of analytical caseins added to fresh milk. When tested with actual milk samples, the correlation between the sensor readings and the reference lab results (of both total proteins and casein content) was inferior to that of analytical casein. A number of different technologies were explored for the analysis of milk urea, and a manometric technique was selected for the final design. In the new sensor, urea in the sample was hydrolyzed to ammonium and carbonate by the enzyme urease, and subsequent shaking of the sample with citric acid in a sealed cell allowed urea to be estimated as a change in partial pressure of carbon dioxide. The pressure change in the cell was measured with a miniature piezoresistive pressure sensor, and effects of background dissolved gases and vapor pressures were corrected for by repeating the measurement of pressure developed in the sample without the addition of urease. Results were accurate in the physiological range of milk, the assay was faster than the typical milking period, and no toxic reagents were required. A sampling device was designed and built to passively draw milk from the long milk tube in the parlor. An electrochemical sensor for lactose was developed starting with a three-cascaded-enzyme sensor, evolving into two enzymes and CO2[Fe(CN)6] as a mediator, and then into a microflow injection system using poly-osmium modified screen-printed electrodes. The sensor was designed to serve multiple milking positions, using a manifold valve, a sampling valve, and two pumps. Disposable screen-printed electrodes with enzymatic membranes were used.
The sensor was optimized for electrode coating components, flow rate, pH, and sample size, and the results correlated well (r2 = 0.967) with known lactose concentrations.
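The manometric urea assay described in this abstract admits a rough ideal-gas estimate of the expected pressure signal. All concentrations and volumes below are assumed for illustration only and are not taken from the report:

```python
# Back-of-envelope sketch of the manometric urea assay, assuming ideal-gas
# behavior, complete hydrolysis, and one CO2 released per urea molecule.
R = 8.314          # gas constant, J/(mol*K)
T = 298.0          # cell temperature, K
urea_conc = 5e-3   # mol/L milk urea, an assumed physiological-range value
v_sample = 2e-3    # L of milk in the sealed cell (assumed)
v_head = 5e-6      # m^3 of headspace above the sample (assumed)

# Urease hydrolyzes urea to ammonium and carbonate; acidification with
# citric acid then releases the carbonate as CO2 into the headspace.
n_co2 = urea_conc * v_sample        # mol of CO2 evolved
dp = n_co2 * R * T / v_head         # Pa, the pressure rise read by the sensor
print(round(dp, 1))                 # on the order of a few kPa
```

A pressure rise of this magnitude is comfortably within the range of miniature piezoresistive sensors, which is consistent with the design choice described above; the blank run without urease supplies the baseline correction.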
6

Anderson, Gerald L., and Kalman Peleg. Precision Cropping by Remotely Sensed Prototype Plots and Calibration in the Complex Domain. United States Department of Agriculture, December 2002. http://dx.doi.org/10.32747/2002.7585193.bard.

Abstract:
This research report describes a methodology whereby multi-spectral and hyperspectral imagery from remote sensing is used for deriving predicted field maps of selected plant growth attributes which are required for precision cropping. A major task in precision cropping is to establish areas of the field that differ from the rest of the field and share a common characteristic. Yield distribution maps can be prepared by yield monitors, which are available for some harvester types. Other field attributes of interest in precision cropping, e.g. soil properties, leaf Nitrate, biomass etc., are obtained by manual sampling of the field in a grid pattern. Maps of various field attributes are then prepared from these samples by the "Inverse Distance" interpolation method or by Kriging. An improved interpolation method was developed which is based on minimizing the overall curvature of the resulting map. Such maps are the ground truth reference, used for training the algorithm that generates the predicted field maps from remote sensing imagery. Both the reference and the predicted maps are stratified into "Prototype Plots", e.g. 15x15 blocks of 2m pixels whereby the block size is 30x30m. This averaging reduces the datasets to manageable size and significantly improves the typically poor repeatability of remote sensing imaging systems. In the first two years of the project we used the Normalized Difference Vegetation Index (NDVI) for generating predicted yield maps of sugar beets and corn. The NDVI was computed from image cubes of three spectral bands, generated by an optically filtered three-camera video imaging system. A two-dimensional FFT-based regression model Y=f(X) was used, wherein Y was the reference map and X=NDVI was the predictor. The FFT regression method applies the "Wavelet Based", "Pixel Block" and "Image Rotation" transforms to the reference and remote images, prior to the Fast Fourier Transform (FFT) regression with the "Phase Lock" option.
A complex-domain based map Yfft is derived by least squares minimization between the amplitude matrices of X and Y, via the 2D FFT. For one-time predictions, the phase matrix of Y is combined with the amplitude matrix of Yfft, whereby an improved predicted map Yplock is formed. Usually, the residuals of Yplock versus Y are about half of the values of Yfft versus Y. For long-term predictions, the phase matrix of a "field mask" is combined with the amplitude matrices of the reference image Y and the predicted image Yfft. The field mask is a binary image of a pre-selected region of interest in X and Y. The resultant maps Ypref and Ypred are modified versions of Y and Yfft respectively. The residuals of Ypred versus Ypref are even lower than the residuals of Yplock versus Y. The maps Ypref and Ypred represent a close consensus of two independent imaging methods which "view" the same target. In the last two years of the project our remote sensing capability was expanded by the addition of a CASI II airborne hyperspectral imaging system and an ASD hyperspectral radiometer. Unfortunately, the cross-noise and poor repeatability problem we had in multi-spectral imaging was exacerbated in hyperspectral imaging. We have been able to overcome this problem by over-flying each field twice in rapid succession and developing the Repeatability Index (RI). The RI quantifies the repeatability of each spectral band in the hyperspectral image cube. Thereby, it is possible to select the bands of higher repeatability for inclusion in the prediction model while bands of low repeatability are excluded. Further segregation of high and low repeatability bands takes place in the prediction model algorithm, which is based on a combination of a "Genetic Algorithm" and "Partial Least Squares" (PLS-GA).
In summary, a modus operandi was developed for deriving important plant growth attribute maps (yield, leaf nitrate, biomass and sugar percent in beets) from remote sensing imagery, with sufficient accuracy for precision cropping applications. This achievement is remarkable, given the inherently high cross-noise between the reference and remote imagery as well as the highly non-repeatable nature of remote sensing systems. The above methodologies may be readily adopted by commercial companies, which specialize in providing remotely sensed data to farmers.
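The "Phase Lock" step described in this abstract (a predicted amplitude spectrum combined with the reference map's phase spectrum before inverting the 2-D FFT) can be sketched with synthetic arrays. The maps below are random stand-ins, not BARD data, and the simple additive-noise "prediction" is only a placeholder for the amplitude-fitted map Yfft:

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.random((16, 16))                  # stand-in reference map (ground truth)
Yfft = Y + 0.3 * rng.random((16, 16))     # stand-in amplitude-fitted prediction

# Phase lock: keep the prediction's amplitude spectrum but impose the
# reference map's phase spectrum, then invert the 2-D FFT.
FY, FP = np.fft.fft2(Y), np.fft.fft2(Yfft)
Yplock = np.real(np.fft.ifft2(np.abs(FP) * np.exp(1j * np.angle(FY))))

# By the reverse triangle inequality (applied bin-by-bin) and Parseval's
# theorem, the phase-locked map can never be farther from the reference
# than the raw prediction is.
err_raw = np.linalg.norm(Yfft - Y)
err_plock = np.linalg.norm(Yplock - Y)
print(err_plock <= err_raw)  # True
```

This also shows why the abstract reports roughly halved residuals for Yplock: borrowing the reference phase removes all phase error, leaving only the amplitude mismatch.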
7

Bowlin, Elizabeth, and Puneet Agarwal. PR-201-153718-R03 Integrity Assessment of DTI Pipelines Using High Resolution NDE in Select Areas. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), May 2018. http://dx.doi.org/10.55274/r0011486.

Abstract:
Hydrostatic test and In-Line Inspection (ILI) are the prescribed integrity assessment methods cited in various Codes and Regulations and have been proven to enhance pipeline safety. But a significant number of pipelines across the world remain difficult to inspect and impractical to modify for inspection by the prescribed methods due to physical configurations or operating conditions. This research performs a state-of-the-art (SOTA) analysis of NDE technology readiness, considering physical and operational barriers and technology deployment from inside, outside or over pipelines, and the possible role of inspection sampling in drawing pipeline integrity conclusions and justifying intervals for conversion for piggability or hydrotest. The goal of the research is to propose alternatives to ILI for safe prioritization and scheduling for conversion or replacement, not to replace hydrostatic test or ILI as currently prescribed in Codes and Regulations. The scope of the research is limited to technologies and integrity management concerning the metal loss threat. This report represents the third and final update of prior reports from the two preceding years, presenting a compendium of technologies describing technology readiness for state-of-the-art non-destructive evaluation (NDE) technologies intended for low resolution pipeline condition screening and high resolution NDE for deployment at sample locations, with capabilities applicable to difficult-to-inspect pipeline configurations. Integrated cleaning and inspection pigs, smart balls, and externally deployed ultrasonic, radiographic and magnetometry technologies are the pipe wall screening technologies evaluated in the reports. A structured process is proposed for assessing pipeline integrity based on low resolution screening of the full length of a pipeline segment followed by high resolution NDE samples at locations where screening indicates possible wall loss.
The process employs extreme value analysis for prediction of maximum metal loss severity across the screened segment. For instances where no metal loss indications are reported by screening or from high resolution samples, an alternative "compliance approach" is also addressed. Case studies are presented where PRCI members have deployed some of the technologies referenced in the NDE SOTA phase of the research and implemented the proposed extreme value or compliance approaches. Validation of fitness-for-service conclusions based on inspection sampling, by comparison with full length high resolution ILI or hydrostatic test, is included in some of the case studies. The conclusions of the case studies demonstrate that integrity conclusions obtained from the PRCI structured process are conservative and consistent with ILI or hydrostatic test conclusions. Based on the experience from the case studies and the SOTA, a metal loss screening efficiency factor (MLSE) is proposed, enabling pipeline operators to understand the general relationship between screening level (sample stratification) and direct examination (inspection sampling) required to provide equivalent understanding of pipe wall condition, limited to metal loss. As noted by ASME/API, ILI has limitations that need to be considered in its deployment and in full discovery of metal loss conditions. Under some conditions (noted by API 1163), ILI predictions can be accepted without any direct examinations or verifications, i.e., full length screening (high resolution) and no verification samples. At the other end of the spectrum, random sampling can in theory be deployed as a screening approach, but depending on the condition of the pipeline, the high resolution sample area could need to be very large to obtain a significant integrity conclusion. This report proposes a comparative scale of effectiveness for SOTA pipe wall screening technologies that offers the operator an expectation of high resolution NDE sample size.
There is a related webinar.
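The extreme value analysis mentioned in this abstract can be sketched as a Gumbel fit to block maxima. The pit depths, block count, and method-of-moments fit below are illustrative assumptions, not the PRCI procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical deepest-pit depths (% wall loss) from 12 high resolution NDE digs.
sample_maxima = np.round(rng.gumbel(loc=20.0, scale=4.0, size=12), 1)

# Method-of-moments Gumbel fit, the classic extreme-value model for block maxima.
gamma = 0.5772156649  # Euler-Mascheroni constant
beta = sample_maxima.std(ddof=1) * np.sqrt(6) / np.pi
mu = sample_maxima.mean() - gamma * beta

# Predicted worst pit over the whole segment, treated here as ~200 dig-sized
# blocks (assumed): the Gumbel quantile at probability 1 - 1/m.
m = 200
worst_case = mu - beta * np.log(-np.log(1 - 1 / m))
print(worst_case > sample_maxima.mean())  # True: extrapolates beyond typical digs
```

The fitted quantile extrapolates beyond the sampled locations, which is exactly how a segment-wide maximum can be bounded from a limited number of direct examinations.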