Dissertations / Theses on the topic 'Full sampling'

Consult the top 50 dissertations / theses for your research on the topic 'Full sampling.'

1

Castorena, Juan. "Full-Waveform LIDAR Recovery at Sub-Nyquist Rates." International Foundation for Telemetering, 2013. http://hdl.handle.net/10150/579674.

Full text
Abstract:
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV
Third-generation LIDAR full-waveform (FW) based systems collect 1D FW signals of the echoes generated by wide-bandwidth laser pulses reflected from the intercepted objects, and use them to construct depth profiles along each pulse path. By emitting a series of pulses towards a scene using a predefined scanning pattern, a 3D image containing spatial-depth information can be constructed. Unfortunately, acquisition of a large number of wide-bandwidth pulses is necessary to achieve high depth and spatial resolution of the scene. This implies the collection of massive amounts of data, which creates problems for the storage, processing and transmission of the FW signal set. In this research, we explore the recovery of individual continuous-time FW signals at sub-Nyquist rates. The key step is to exploit the sparsity of FW signals, which allows one to sub-sample and recover FW signals at rates much lower than the Nyquist rate implied by Shannon's sampling theorem. Here, we describe the theoretical framework supporting recovery and present the reader with examples using real LIDAR data.
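The recovery step described above rests on standard sparse-recovery machinery. As a minimal illustration (not the thesis's actual algorithm or data), the sketch below sub-samples a synthetic sparse signal with a random measurement matrix and reconstructs it with orthogonal matching pursuit; all sizes and names are illustrative.

    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A @ x."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))   # atom most correlated with residual
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 3                                  # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
    A = rng.normal(size=(m, n)) / np.sqrt(m)              # random sub-Nyquist measurement operator
    x_hat = omp(A, A @ x_true, k)
    print(np.max(np.abs(x_hat - x_true)))                 # ~0 when recovery succeeds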
2

Rahman, S. M. Rayhan. "Performance of local planners with respect to sampling strategies in sampling-based motion planning." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=96891.

Full text
Abstract:
Automatically planning the motion of rigid bodies moving in 3D by translation and rotation in the presence of obstacles has long been a research challenge for mathematicians, algorithm designers and roboticists. The field made dramatic progress with the introduction of the probabilistic, sampling-based "roadmap" approach. However, motion planning in the presence of narrow passages has remained a challenge. This thesis presents a framework for experimenting with combinations of sampling strategies and local planners, and for comparing their performance on user-defined input problems. Our framework also allows parallel implementations on a variable number of processing cores. We present experimental results. In particular, our framework has allowed us to find combinations of sampling strategy and local planner that can solve difficult benchmark motion planning problems.
La planification automatique du mouvement de corps rigides en mouvement 3D par translation et rotation en présence d'obstacles a longtemps été un défi pour la recherche pour les mathématiciens, les concepteurs de l'algorithme et roboticiens. Le champ a fait d'importants progrès avec l'introduction de la méthode de "feuille de route" probabiliste basée sur l'échantillonnage. Mais la planification du mouvement en présence de passages étroits est restée un défi. Cette thèse présente un cadre d'expérimentation avec des combinaisons de stratégies d'échantillonnage et les planificateurs locaux, et de comparaison de leurs performances sur des problèmes définis par l'utilisateur. Notre programme peut également être exécuté en parallèle sur un nombre variable de processeurs. Nous présentons des résultats expérimentaux. En particulier, notre cadre nous a permis de trouver des combinaisons de choix d'une stratégie d'échantillonnage avec choix de planificateur local qui peut résoudre des problèmes difficiles de référence.
3

Wong, Raymond Y. P. "Critical analysis of the existing food sampling programmes." Thesis, University of Strathclyde, 2000. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=21140.

Full text
Abstract:
Existing food sampling programmes used by local authorities, where they exist, operate in a 'hit or miss' fashion, and the use of small sample sizes is common. Although the U.K. food co-ordination network is well developed, the complexity of the three-way system creates many complications and duplications, and compliance with European legislation places extra burdens on the U.K. governments. A national survey was undertaken in 1998 to investigate the purpose and effectiveness of local authority food sampling. Although only half of the returns indicated a belief that local food programmes contributed significantly to the prevention of foodborne illness, over three-quarters agreed that the programmes could be improved upon. It was clearly shown that U.K. local authorities were eager to advance their sampling regimes but were handicapped by resource constraints. The local authorities stated that improvement could be achieved if sampling activities were increased. Because sampling involves errors due to uncertainty and variation, a statistically validated sampling model was developed to determine suitable sample sizes under various sample proportions that would also satisfy a good normal approximation, so as to reduce the margin of error to a minimum. The model showed, however, that current sampling regimes fall far short of this minimum requirement. In the main, if sampling is to play a part in food safety activities, then central government support towards sampling and analysis costs is vital. Routine sampling could be undertaken collectively on a regional basis, with the high cost split among local authorities. Alternatively, a requirement could be placed upon food premises to undertake their own sampling, with officers then carrying out local audits. Finally, further investigation should extend to the determination of limits for many contaminants and to cost-benefit analysis along the chain of causality.
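The sample-size reasoning in this abstract follows the usual normal approximation to a binomial proportion. A minimal sketch of that textbook calculation is given below; it is not necessarily the thesis's exact model, and the 10% proportion and 5% margin are made-up inputs.

    from math import ceil

    def sample_size_for_proportion(p, margin, z=1.96):
        """Smallest n so that a proportion near p is estimated within +/- margin
        at ~95% confidence, using the normal approximation to the binomial."""
        n = ceil(z ** 2 * p * (1 - p) / margin ** 2)
        # rule-of-thumb check that the normal approximation is reasonable
        approximation_ok = n * p >= 5 and n * (1 - p) >= 5
        return n, approximation_ok

    print(sample_size_for_proportion(p=0.10, margin=0.05))   # -> (139, True)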
4

Amoako-Tuffour, Yaw. "Design of an automated ingestible gastrointestinal sampling device." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=123257.

Full text
Abstract:
An ingestible, electromechanical capsule was designed to collect physical samples from the lumen of the human gastrointestinal tract with the aims of being able to better localize the source of gastrointestinal ailments, explore the microbiome, and monitor metabolic processes. A complete prototype was developed encompassing hardware, custom electronics, firmware and a novel sampling mechanism leveraging the cylindrical shape of the device. The prototype was assessed for its ability to collect samples and maintain their integrity; withstand the environmental conditions and forces associated with normal clinical use; and for its ability to transit safely through the gastrointestinal (GI) tract. The device was able to collect heterogeneous samples from an ex-vivo porcine intestine and maintain average sample cross-contamination of 7.58% over a 12 hour period at 37°C. The device was demonstrated to be an effective and non-invasive means to study the physiology of the GI tract and serve as a platform for further development in personalized medicine, drug delivery and GI intervention.
Une capsule électromécanique ingérable a été conçue pour recueillir des échantillons physiques du tractus gastro-intestinal humain dans le but de mieux localiser la source des malaises gastro-intestinaux, d'explorer le microbiome et de surveiller les processus métaboliques. Un prototype complet a été développé incluant matériel, électronique sur mesure, logiciel et un mécanisme d'échantillonnage novateur tirant parti de la forme cylindrique de l'appareil. Des tests ont été effectués afin d'évaluer la capacité du prototype à prélever des échantillons et maintenir leur intégrité, supporter les conditions environnementales et les forces associées à l'utilisation clinique normale, et transiter en toute sécurité à travers le tractus gastro-intestinal. La contamination croisée a été plafonnée à 7.58% sur une période de 12 heures à 37 ° C. Et l'appareil était capable de prélever des échantillons hétérogènes. Il a été démontré que ce dispositif est un moyen efficace et non-invasif pour étudier la physiologie du tractus gastro-intestinal et servir de plate-forme pour le développement futur de la médecine personnalisée, l'administration de médicaments et d'intervention GI.
5

Morin, Antoine. "Estimation and prediction of black fly abundance and productivity." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=75447.

Full text
Abstract:
Sampling and analytical techniques to estimate abundance and productivity of stream invertebrates are examined for their precision and accuracy, and then utilized to develop empirical models of sampling variability, abundance, and growth rates of overwintering larvae of black flies (Diptera: Simuliidae). Sampling variability of density estimates of stream benthos increases with mean density, and decreases with sampler size. Artificial substrates do not consistently reduce sampling variability, and introduce variable bias in estimates of simuliid density. Growth rates of overwintering simuliids are mainly a function of their body size, but available data show that growth rates also increase with water temperature. Biomass of overwintering simuliids in lake outlets in Southern Quebec is positively related to chlorophyll concentration and current velocity, and negatively related to distance from the lake, water depth, and periphyton biomass. Computer simulations show that published methods fail to produce reliable confidence intervals for estimates of secondary production for highly aggregated populations, and a reliable method, based on the Bootstrap procedure and the Allen curve, is presented.
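The closing sentence refers to a bootstrap-based confidence interval for secondary production estimates. The sketch below shows only the generic percentile-bootstrap step on simulated, highly aggregated counts; the Allen-curve production calculation itself is not reproduced and all numbers are illustrative.

    import numpy as np

    def percentile_bootstrap_ci(sample, statistic, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap confidence interval for an arbitrary statistic."""
        rng = np.random.default_rng(seed)
        boot_stats = [statistic(rng.choice(sample, size=len(sample), replace=True))
                      for _ in range(n_boot)]
        return tuple(np.quantile(boot_stats, [alpha / 2, 1 - alpha / 2]))

    # highly aggregated (overdispersed) counts, e.g. larval densities per sampler
    densities = np.random.default_rng(1).negative_binomial(n=2, p=0.1, size=30)
    print(percentile_bootstrap_ci(densities, np.mean))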
6

Payette, Francois. "Applications of a sampling strategy for the ERBE scanner data." Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61784.

Full text
7

Ertefaie, Ashkan. "Causal inference via propensity score regression and length-biased sampling." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104784.

Full text
Abstract:
Confounder adjustment is key to estimating exposure effects in observational studies. Two well-known causal adjustment techniques are the propensity score and inverse probability of treatment weighting. We compare the asymptotic properties of these two estimators and show that the former method results in a more efficient estimator. Since ignoring important confounders results in a biased estimator, it seems beneficial to adjust for all the covariates. This, however, may inflate the variance of the estimated parameters and can induce bias as well. We present a penalization technique based on the joint likelihood of the treatment and response variables to select the key covariates that need to be included in the treatment assignment model. Besides the bias induced by non-randomization, we discuss another source of bias induced by having a non-representative sample of the target population. In particular, we study the effect of length-biased sampling on the estimation of the treatment effect. We introduce weighted and doubly robust estimating equations to adjust for the biased sampling and the non-randomization in the generalized accelerated failure time model setting. Large sample properties of the estimators are established. We conduct extensive simulation studies to examine the small sample properties of the estimators. In each chapter, we apply our proposed technique to real data sets and compare the results with those obtained by other methods.
L'ajustement du facteur de confusion est la clé dans l'estimation de l'effet de traitement dans les études observationelles. Deux techniques bien connus d'ajustement causal sont le score de propension et la probabilité de traitement inverse pondéré. Nous avons comparé les propriétés asymptotiques de ces deux estimateurs et avons démontré que la première méthode est un estimateur plus efficace. Étant donné que d'ignorer des facteurs de confusion importants ne fait que biaiser l'estimateur, il semble bénéfique de tenir compte de tous les co-variables. Cependant, ceci peut entrainer une inflation de la variance des paramètres estimés et provoquer des biais également. Par conséquent, nous présentons une pénalisation technique basée conjointement sur la probabilité du traitement et sur les variables de la réponse pour sélectionner la clé co-variables qui doit être inclus dans le modèle du traitement attribué. Outre le biais introduit par la non-randomisation, nous discutons d'une autre source de biais introduit par un échantillon non représentatif de la population cible. Plus précisément, nous étudions l'effet de la longueur du biais de l'échantillon dans l'estimation de la résultante du traitement. Nous avons introduit une pondération et une solide équation d'estimation double pour ajuster l'échantillonnage biaisé et la non-randomisation dans la généralisation du modèle à temps accéléré échec réglage. Puis, les propriétés des estimateurs du vaste échantillon sont établies. Nous menons une étude étendue pour examiner la simulation des propriétés des estimateurs du petit échantillon. Dans chaque chapitre, nous appliquons notre propre technique sur de véritables ensembles de données et comparons les résultats avec ceux obtenus par d'autres méthodes.
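For readers unfamiliar with the techniques compared above, here is a self-contained toy example of propensity-score estimation and inverse-probability-of-treatment weighting on simulated data; it is illustrative only and is not the penalized joint-likelihood or doubly robust estimator developed in the thesis.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    x = rng.normal(size=(n, 2))                                  # confounders
    p_treat = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))       # true assignment model
    a = rng.binomial(1, p_treat)                                 # treatment indicator
    y = 1.0 * a + x[:, 0] + rng.normal(size=n)                   # outcome, true effect = 1.0

    naive = y[a == 1].mean() - y[a == 0].mean()                  # confounded contrast

    ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]   # estimated propensity score
    w = a / ps + (1 - a) / (1 - ps)                              # IPTW weights
    iptw = (np.average(y[a == 1], weights=w[a == 1])
            - np.average(y[a == 0], weights=w[a == 0]))

    print(round(naive, 2), round(iptw, 2))                       # naive is biased, IPTW ~ 1.0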
8

Sacher, William. "The effect of sampling noise in ensemble-based Kalman filters." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66769.

Full text
Abstract:
Ensemble-based Kalman filters have drawn a lot of attention in the atmospheric and ocean scientific community because of their potential to be used as a data assimilation tool for numerical prediction in a strongly nonlinear context at an affordable cost. However, many studies have noted practical problems in their implementation. Indeed, being Monte-Carlo methods, the useful parameters are estimated from a sample of limited size of independent realizations of the process. As a consequence, the unavoidable sampling noise impacts the quality of the analysis. An idealized perfect model context is considered in which the analytical expression for the analysis accuracy and reliability as a function of the ensemble size is established, from a second-order moment perspective. It is proved that one can analytically explain the general tendency for ensemble-based Kalman filters to underestimate, on average, the analysis variance and therefore the likeliness for these filters to diverge. Performance of alternative methods, designed to reduce or eliminate sampling error effects, such as the double ensemble Kalman filter or covariance inflation are also analytically explored. For methods using perturbed observations, it is shown that the covariance inflation is the easiest and least expensive method to obtain the most accurate and reliable analysis. These analytical results agreed well with means over a large number of experiments using a perfect, low-resolution, and quasi-geostrophic barotropic model, in a series of observation system simulation experiments of single analysis cycles as well as in a simulated forecast system. In one-analysis cycle experiments with rank histograms, non-perturbed-observation methods show a lack of reliability regardless of the number of members. For small ensemble sizes, sampling error effects are dominant but have a smaller impact than in the perturbed observation method, making non
La possibilité pour les filtres de Kalman d'ensemble d'être mis en œuvre à un coût non prohibitif comme outil d'assimilation de données dans les modèles de prévision numérique du temps, et donc dans un contexte hautement non-linéaire, a suscité l'attention de la communauté scientifique au cours des dernières années. De nombreuses études ont cependant montré les limites pratiques de leur implémentation. En effet, en tant que méthode de Monte-Carlo, ils requièrent l'utilisation d'un échantillon limité de réalisations indépendantes du processus étudié. Les inévitables erreurs d'échantillonnage engendrées conduisent à une détérioration de la qualité de l'analyse. L'expression théorique donnant la précision et la fiabilité de l'analyse en fonction de la taille de l'ensemble est établie dans un contexte idéalisé impliquant un modèle parfait, en se bornant aux moments de second ordre des distributions statistiques des erreurs. La tendance générale des filtres de Kalman d'ensemble à sous-estimer en moyenne la variance de l'analyse, et donc leur propension à diverger, est ici prouvée théoriquement. Les comportements de méthodes alternatives construites pour réduire ou éliminer les effets de l'erreur d'échantillonage font également l'objet d'une étude théorique. Le filtre de Kalman d'ensemble double et l'inflation des covariances sont étudiés. Dans le cas des méthodes utilisant des observations perturbées, la méthode d'inflation des covariances apparaît comme la plus facile et la moins coûteuse à mettre en œuvre. Les résultats théoriques obtenus sont en accord avec les moyennes effectuées sur un grand nombre de réalisations d'expériences utilisant un modèle barotrope parfait, de faible résolution et quasi-géostrophique. Ces expériences-jumelles ont d'abord été effectuées sur un seul cycle d'analyse, puis dans un système de prévision numéri
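Covariance inflation, identified above as the cheapest remedy for sampling noise, simply rescales the ensemble perturbations about the ensemble mean. A minimal generic sketch follows; the inflation factor and ensemble size are illustrative and not taken from the thesis.

    import numpy as np

    def inflate(ensemble, lam=1.05):
        """Multiplicative covariance inflation: scale perturbations about the ensemble
        mean by lam, so the sample covariance grows by lam**2 and counteracts the
        systematic underestimation of analysis variance by small ensembles."""
        mean = ensemble.mean(axis=0, keepdims=True)
        return mean + lam * (ensemble - mean)

    members = np.random.default_rng(0).normal(size=(20, 3))   # 20 members, 3 state variables
    before = np.cov(members, rowvar=False).trace()
    after = np.cov(inflate(members), rowvar=False).trace()
    print(before, after, after / before)                       # ratio = 1.05**2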
9

Foster, Kristina. "Using Distinct Sectors in Media Sampling and Full Media Analysis to Detect Presence of Documents from a Corpus." Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/17365.

Full text
Abstract:
Approved for public release; distribution is unlimited
Forensics examiners frequently search for known content by comparing each file from a target media to a known file hash database. We propose using sector hashing to rapidly identify content of interest. Using this method, we hash 512 B or 4 KiB disk sectors of the target media and compare those to a hash database of known file blocks, fixed-sized file fragments of the same size. Sector-level analysis is fast because it can be parallelized, and we can sample a sufficient number of sectors to determine with high probability whether a known file exists on the target. Sector hashing is also file system agnostic and allows us to identify evidence that a file once existed even if it is not fully recoverable. In this thesis we analyze the occurrence of distinct file blocks (blocks that only occur as a copy of the original file) in three multi-million file corpora and show that most files, including documents, legitimate and malicious software, consist of distinct blocks. We also determine the relative performance of several conventional SQL and NoSQL databases with a set of one billion file block hashes.
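A bare-bones sketch of the block/sector hashing idea follows; the 4 KiB block size matches the abstract, while the use of SHA-256 and the toy byte strings standing in for a known file and a target image are our own assumptions.

    import hashlib, random

    BLOCK = 4096  # 4 KiB file blocks / disk sectors

    def block_hashes(data, block_size=BLOCK):
        """Hashes of consecutive fixed-size blocks of a byte string (a file or raw media image)."""
        return [hashlib.sha256(data[i:i + block_size]).hexdigest()
                for i in range(0, len(data) - block_size + 1, block_size)]

    # toy stand-ins: a 16 KiB "known document" and a target image that embeds a copy of it
    known_file = bytes(random.Random(0).randrange(256) for _ in range(4 * BLOCK))
    target_media = bytes(2 * BLOCK) + known_file + bytes(2 * BLOCK)

    known_db = set(block_hashes(known_file))                     # known block-hash database
    matches = sum(h in known_db for h in block_hashes(target_media))
    print('target sectors matching known content:', matches)     # 4, i.e. all known blocks found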
10

Turner, Barry John. "Spatial sampling and vertical variability effects on microwave radiometer rainfall estimates." Thesis, McGill University, 1991. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=59910.

Full text
Abstract:
Three-dimensional radar data for three Florida storms are used with a radiative transfer model to simulate observations at 19 GHz by a nadir-pointing, satellite-borne microwave radiometer. Estimates were made of spatial sampling errors due to both horizontal and vertical variability of the precipitation. Calibrated radar data were taken as realistic representations of rainfall fields.
The optimal conversion between microwave brightness temperature and rainfall rate was highly sensitive to the spatial resolution of observations. Retrievals were made from the simulated microwave measurements using rainfall retrieval functions optimized for each resolution and for each storm case.
There is potential for microwave radiometer measurements from the planned TRMM satellite to provide better 'snapshot' estimates than area-threshold VIS/IR methods. Variability of the vertical profile of precipitation did not seriously reduce accuracy. However, it is crucial that calibration of retrieval methods be done with ground truth of the same spatial resolution.
11

Makulsawatudom, Arun. "Construction productivity measurement and improvement in Thailand by improved work-sampling." Thesis, University of Strathclyde, 2003. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=21527.

Full text
Abstract:
The Thai construction industry provides the data for this research, which consists of three main sections. As there has been a lack of research on the construction industry in Thailand, structured questionnaires were first distributed to project managers, foremen and craftsmen, to observe general construction productivity problems and to find out whether a work-sampling study could be tailored to detect them. The results indicated that lack of materials, incomplete drawings and lack of tools and equipment have the greatest effect on construction productivity in Thailand, and that a work-sampling study can be tailored to detect these problems. Having confirmed that work-sampling can be used to improve construction productivity, this thesis secondly improves and clearly specifies all the individual steps required to carry out a work-sampling study. This research also reports the application steps of FDS. The work-sampling technique was applied to four construction cases, and FDS was also carried out on two of the four sites. The results confirmed that a work-sampling study can highlight productivity problems and indicate how to overcome or alleviate them, and suggested that late starts/early finishes and crew imbalance are likely to be universal construction productivity problems in Thailand. In addition, these two techniques complement each other and should be implemented together. The final part of this study applied the Markov process to predict the results of work-sampling at any particular period of time in the future. This approach not only predicts the results but also supports the principle of work-sampling, which, if it is to be successful, requires full support from management.
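The Markov-process idea in the final part can be sketched in a few lines: treat the observed work-sampling category proportions as a state distribution and propagate it with a period-to-period transition matrix. The matrix and proportions below are invented for illustration only.

    import numpy as np

    # Hypothetical transition probabilities between work-sampling categories
    # (direct work, supporting work, idle) from one observation period to the next.
    P = np.array([[0.70, 0.20, 0.10],
                  [0.25, 0.60, 0.15],
                  [0.30, 0.20, 0.50]])
    p = np.array([0.45, 0.30, 0.25])      # proportions observed this period

    for period in range(1, 5):            # predict the next four periods
        p = p @ P
        print(period, np.round(p, 3))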
12

Castorena, Juan. "A Comparison of Compressive Sensing Approaches for LIDAR Return Pulse Capture, Transmission, and Storage." International Foundation for Telemetering, 2014. http://hdl.handle.net/10150/577483.

Full text
Abstract:
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA
Massive amounts of data are typically acquired in third generation full-waveform (FW) LIDAR systems to generate image-like depthmaps of a scene of acceptable quality. The sampling systems acquiring this data, however, seldom take into account the low information rate generally present in the FW signals and, consequently, they sample very inefficiently. Our main goal here is to compare two efficient sampling models and processes for the individual time-resolved FW signals collected by a LIDAR system. Specifically, we compare two approaches of sub-Nyquist sampling of the continuous-time LIDAR FW return pulses: (i) modeling FW signals as short-duration pulses with multiple bandlimited echoes, and (ii) modeling them as signals with finite rates of innovation (FRI).
13

Kaharabata, Samuel K. "Moisture transfer behind windbreaks : laboratory simulations and conditional sampling in the field." Thesis, McGill University, 1991. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=60535.

Full text
Abstract:
The spatial distribution of local evaporation from ground-based sources behind solid and porous windbreaks was studied in laboratory models for steady state and intermittent flows. Field observations of wind and turbulence characteristics (turbulence intensity, power spectra and integral length scale L) over surfaces whose zero displacement (d) and roughness length (z₀) had also been determined, were used to scale the laboratory simulations. Scaling parameters were z/z₀, σ/U, L/z₀ and Uz₀/K, where z, U, σ and K are height, wind speed, standard deviation of velocity fluctuations and turbulent diffusivity, respectively. The 50% porosity barrier was found to be the most effective single-barrier set-up for the reduction of moisture loss.
Conditional sampling of fluctuations w' and q' of the wind and moisture, respectively, with sonic anemometer and fast-response Krypton hygrometer behind solid and porous windbreaks in the field, revealed frequency of occurrence, duration and intensity of those turbulent structures primarily responsible for moisture transfer.
14

Bergeron, Pierre-Jérôme. "Covariates and length-biased sampling : is there more than meets the eye ?" Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=102958.

Full text
Abstract:
It is well known that when subjects with a disease are identified through a cross-sectional survey and then followed forward in time until either failure or censoring, the resulting estimates of the survival function from onset are biased. This bias, which is caused by the sampling of prevalent rather than incident cases, is termed length bias if the onset time of the disease forms a stationary Poisson process. While authors have proposed different approaches to the analysis of length-biased survival data, there remain a number of issues that have not been fully addressed. The most important of these is perhaps how to include covariates in the length-biased lifetime data analysis of the natural history of diseases, initiated by cross-sectional sampling of a population. One aspect of that problem, which appears to have been neglected in the literature, concerns the effect of length bias on the sampling distribution of the covariates. If the covariates have an effect on the survival time, then their marginal distribution in a length-biased sample is also subject to a bias and is informative about the parameters of interest. As is conventional in most regression analyses, one conditions on the observed covariate values. By conditioning on the observed covariates in the situation described above, however, one effectively ignores the information contained in the distribution of the covariates in the sample. We present the appropriate likelihood approach that takes this information into account, and we establish the consistency and asymptotic normality of the resulting estimators. It is shown that by ignoring the information contained in the sampling distribution of the covariates, one can still obtain, asymptotically, the same point estimates as with the joint likelihood. However, these conditional estimates are less efficient. Our results are illustrated using data on survival with dementia, collected as part of the Canadian Study of Health and Aging.
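The sampling mechanism discussed above has a compact standard formulation; the display below (generic notation, not copied from the thesis) states the length-biased density and shows why the covariate distribution in the sample is itself informative.

    % length-biased density of a duration T with density f and mean \mu
    \[ f_{LB}(t) = \frac{t\, f(t)}{\mu}, \qquad \mu = \int_0^\infty t\, f(t)\, dt \]
    % with a covariate Z \sim g affecting survival, the joint sampling density becomes
    \[ f_{LB}(t, z) = \frac{t\, f(t \mid z)\, g(z)}{E[T]}, \qquad E[T] = \int E[T \mid z]\, g(z)\, dz \]
    % so the marginal law of Z in a prevalent sample is tilted toward covariate values
    % associated with longer survival, and therefore carries information about the
    % regression parameters.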
15

Morrone, Dario. "Induced bias on measuring influence by length-biased sampling of failure times." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=21931.

Full text
Abstract:
Influence diagnostic measures for various statistical models have been developed. Nonetheless, a proper influence measure for handling right-censored prevalent cohort data has yet to be suggested. We present an influence measure which properly accounts for the length bias and right censoring encountered in prevalent cohort data. This measure makes use of a likelihood that correctly accounts for length bias and for possible information in the marginal covariate distribution. An approximation of this influence diagnostic measure is also developed. We illustrate the relevance of correctly incorporating length bias and covariate information by analyzing differences in influence when one appropriately acknowledges the nature of prevalent cohort data and when one does not. Results are illustrated with data on survival with dementia among the elderly in Canada provided by the Canadian Study of Health and Aging.
Des mesures d'influence pour divers modèles statistiques ont déjà été développées. Néanmoins, une mesure d'influence qui traite des données censurées de la droite parvenant de cohorte prévalentes n'a pourtant pas été traitée. Nous présentons une mesure d'influence qui tient compte du biais en longueur et de la censure de la droite, présents dans des données parvenant d'une cohorte prévalente. Cette mesure fait usage d'une vraisemblance correctement ajustée pour le biais en longueur ainsi que l'information potentiellement contenu dans la distribution marginale des covariés. Une approximation de cette mesure est développée. Nous illustrons la pertinence de correctement incorporer le biais en longueur et l'information contenu dans les covariés en analysant les différences d'influence quand la nature des données provenant d'une cohorte prévalente est reconnue et quand elle est ignorée. Les résultats sont illustrés avec l'aide de données sur la survie avec la démence parmi les personnes âgées au Canada fourni par le Canadian Study on Health and Aging.
16

Matta, Marwan. "Spatiotemporal interpolation for sampling grid conversion with application to scalable video coding." Thesis, McGill University, 1994. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22663.

Full text
Abstract:
Scan conversion for 2:1 field rate conversion from a SIF transmission format to a CCIR 601 video display format, and scan conversion for scalability, are presented. We also address scan conversion from interlaced to progressive formats, also known as deinterlacing. For this purpose we examine adaptive weighted and switched techniques based on spatial and motion-compensated deinterlacing. We introduce a motion-adaptive frame rate conversion based on motion-compensated and temporal interpolation: when motion is relatively low in a pixel neighborhood, temporal interpolation is carried out, otherwise motion-compensated interpolation is performed. Indeed, when motion is relatively high, temporal frame rate conversion causes repetition of contours and jerkiness. We develop and use a maximally flat digital filter to generate the samples used in temporal interpolation, in order to improve the quality of this interpolation. We compare motion-adaptive frame rate conversion with MPEG-2 conversion, using both visual and PSNR criteria. We also propose a new scheme that achieves multiscale spatial conversion: pattern motion-compensated interpolation. For this we develop a least-squares solution using Cauchy steepest descent, and we prove the existence and uniqueness of this solution. This method gives better visual quality images than cubic, MPEG-2 and pattern interpolation. Finally, we analyze the impact that different interpolation algorithms have on MPEG-2 scalable video coding, and we present a new down-conversion that replaces the MPEG-2 one and removes the aliasing it introduces.
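The switching rule described above (temporal interpolation where motion is low, motion-compensated interpolation where it is high) can be caricatured in a few lines. The sketch below shows only the adaptation logic driven by a per-pixel motion-magnitude map; real motion compensation would interpolate along the estimated motion vectors, and the threshold is arbitrary.

    import numpy as np

    def motion_adaptive_interpolation(prev_frame, next_frame, motion_magnitude, thresh=2.0):
        """Blend two neighbouring frames into an intermediate one: average co-located
        pixels where motion is small; elsewhere fall back to the previous frame as a
        crude stand-in for a motion-compensated prediction."""
        temporal = 0.5 * (prev_frame + next_frame)
        return np.where(motion_magnitude < thresh, temporal, prev_frame)

    prev_frame = np.zeros((4, 4))
    next_frame = np.ones((4, 4))
    motion = np.zeros((4, 4))
    motion[:, 2:] = 5.0                      # right half of the frame is "moving"
    print(motion_adaptive_interpolation(prev_frame, next_frame, motion))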
17

McCray, Robert B. "UTILIZATION OF A SMALL UNMANNED AIRCRAFT SYSTEM FOR DIRECT SAMPLING OF NITROGEN OXIDES PRODUCED BY FULL-SCALE SURFACE MINE BLASTING." UKnowledge, 2016. http://uknowledge.uky.edu/mng_etds/31.

Full text
Abstract:
Emerging health concern for gaseous nitrogen oxides (NOx) emitted during surface mine blasting has prompted mining authorities in the United States to pursue new regulations. NOx is comprised of various binary compounds of nitrogen and oxygen. Nitric oxide (NO) and nitrogen dioxide (NO2) are the most prominent. Modern explosive formulations are not designed to produce NOx during properly-sustained detonations, and researchers have identified several causes through laboratory experiments; however, direct sampling of NOx following full-scale surface mine blasting has not been accomplished. The purpose of this thesis was to demonstrate a safe, innovative method of directly quantifying NOx concentrations in a full-scale surface mining environment. A small unmanned aircraft system was used with a continuous gas monitor to sample concentrated fumes. Three flights were completed – two in the Powder River Basin. Results from a moderate NOx emission showed peak NO and NO2 concentrations of 257 ppm and 67.2 ppm, respectively. The estimated NO2 presence following a severe NOx emission was 137.3 ppm. Dispersion of the gases occurred over short distances, and novel geometric models were developed to describe emission characteristics. Overall, the direct sampling method was successful, and the data collected are new to the body of scientific knowledge.
18

Rabbath, Camille Alain. "Sensitivity of the discrete- to continuous-time pole transformation at fast sampling rates." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23267.

Full text
Abstract:
This thesis examines the propagation of small relative errors in poles in discrete-time domains z, ε = (z − 1)/T, z′ = z − 1, w′ = (2/T)·(z − 1)/(z + 1) and (z − 1)/(Tz), where T is the sampling period, to the continuous-time domain. By prescribing pole locations in the discrete-time domains or usable sampling periods in a continuous-time context, sensitivity specifications in time constant of a real pole, natural frequency, damping ratio and uncertain relative region of a complex pole can be achieved. It is shown in this thesis that the alternative discrete-time operators, ε, z′, w′ and (z − 1)/(Tz), provide superior performance in the propagation of errors coming from coefficient quantization of first order control laws than the z operator at fast sampling rates, and possess sensitivity properties converging to those of an equivalent continuous-time system as the sampling interval approaches zero. A two-stage least squares identification process of a high order plant is studied with emphasis placed on sensitivity effects as well as on the effect of the accuracy of the digital-to-analog and analog-to-digital converters. The Euler form identification process is shown to yield the most accurate continuous-time pole estimates among the operator forms examined in this work, and the accuracy of the converters is shown to bring an upper limit on the sampling rate at which the data are captured for identification, so that relatively accurate pole estimates are obtained.
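The reason the alternative operators behave better at fast sampling rates can be summarized by the standard limiting argument below (our paraphrase, not a quotation from the thesis): z-plane poles crowd toward 1 as T shrinks, whereas the delta-domain pole converges to its continuous-time counterpart.

    % a continuous-time pole s maps to z = e^{sT}; as T -> 0 every such pole approaches 1,
    % so a fixed relative error in z (e.g. from coefficient quantization) yields an ever
    % larger relative error in the recovered s; in the delta domain, by contrast,
    \[ \varepsilon = \frac{z - 1}{T} = \frac{e^{sT} - 1}{T} \longrightarrow s \quad (T \to 0), \]
    % so small relative errors in \varepsilon propagate to comparably small relative errors in s.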
19

Azerêdo, Daniel Mendes. "Pesquisas sob amostragem informativa utilizando o FBST." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45132/tde-19082013-161233/.

Full text
Abstract:
Pfeffermann, Krieger e Rinott (1998) apresentaram uma metodologia para modelar processos de amostragem que pode ser utilizada para avaliar se este processo de amostragem é informativo. Neste cenário, as probabilidades de seleção da amostra são aproximadas por uma função polinomial dependendo das variáveis resposta e concomitantes. Nesta abordagem, nossa principal proposta é investigar a aplicação do teste de significância FBST (Full Bayesian Significance Test), apresentado por Pereira e Stern (1999), como uma ferramenta para testar a ignorabilidade amostral, isto é, para avaliar uma relação de significância entre as probabilidades de seleção da amostra e a variável resposta. A performance desta modelagem estatística é testada com alguns experimentos computacionais.
Pfeffermann, Krieger and Rinott (1998) introduced a framework for modeling sampling processes that can be used to assess if a sampling process is informative. In this setting, sample selection probabilities are approximated by a polynomial function depending on outcome and auxiliary variables. Within this framework, our main purpose is to investigate the application of the Full Bayesian Significance Test (FBST), introduced by Pereira and Stern (1999), as a tool for testing sampling ignorability, that is, to detect a significant relation between the sample selection probabilities and the outcome variable. The performance of this statistical modelling framework is tested with some simulation experiments.
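For reference, the FBST evidence value used above is defined as follows (the standard definition from Pereira and Stern, 1999; relating it to the ignorability hypothesis is our paraphrase).

    % posterior density p(\theta \mid x), sharp null H_0 : \theta \in \Theta_0
    \[ p^{*} = \sup_{\theta \in \Theta_0} p(\theta \mid x), \qquad
       T^{*} = \{\theta : p(\theta \mid x) > p^{*}\}, \]
    \[ \overline{ev}(H_0) = \Pr(\theta \in T^{*} \mid x), \qquad ev(H_0) = 1 - \overline{ev}(H_0), \]
    % where T^* is the tangential set; small ev(H_0) is evidence against H_0.  Here H_0
    % states that the polynomial coefficients linking the selection probabilities to the
    % outcome variable are zero, i.e. that the sampling process is ignorable.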
20

Vafaee, Manouchehr S. "Evaluation and implementation of an automated blood sampling system for positron emission tomographic studies." Thesis, McGill University, 1993. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=57009.

Full text
Abstract:
Quantification of physiological functions with positron emission tomography requires knowledge of the arterial radioactivity concentration. Automated blood sampling systems increase the accuracy of this measurement, particularly for short-lived tracers such as oxygen-15, by reducing the sampling interval to a fraction of a second. They, however, require correction for tracer delay between the arterial puncture site and the external radiation detector (external delay), and for the tracer bolus distortion in the sampling catheter (external dispersion).
We have evaluated and implemented the "Scanditronix" automated blood sampling system and measured its external delay and dispersion. PET studies of cerebral blood flow and oxygen metabolism using simultaneous manual and automated blood sampling were analyzed and compared. We show that the results obtained with automated blood sampling are more reliable than those based on manual sampling. We also present suggestions to further improve the reliability of quantitative PET studies based on automated blood sampling.
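A commonly used model for the two corrections mentioned above treats the measured curve as a delayed, dispersed copy of the true arterial input function; this generic formulation is shown below and is not necessarily the exact one implemented for the Scanditronix system.

    % external delay \Delta t and external dispersion with time constant \tau
    \[ g(t) = f(t - \Delta t) * \frac{1}{\tau} e^{-t/\tau}, \]
    % where f is the true arterial input curve, g the measured curve and * denotes
    % convolution; after \Delta t and \tau are estimated, f is recovered by shifting and
    % deconvolving g before it enters the kinetic model.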
21

Shu, Weihuan. "Optimal sampling rate assignment with dynamic route selection for real-time wireless sensor networks." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=32351.

Full text
Abstract:
The allocation of computation and communication resources in a manner that optimizes aggregate system performance is a crucial aspect of system management. Wireless sensor networks pose new challenges due to their resource constraints and real-time requirements. Existing work has dealt with the real-time sampling rate assignment problem in the single-processor case and in the network case with a static routing environment. For wireless sensor networks, in order to achieve better overall network performance, routing should be considered together with the rate assignments of individual flows. In this thesis, we address the problem of optimizing sampling rates with dynamic route selection for wireless sensor networks. We model the problem as a constrained optimization problem and solve it under the Network Utility Maximization framework. Based on the primal-dual method and the dual decomposition technique, we design a distributed algorithm that achieves the optimal global network utility considering both dynamic route decisions and rate assignments. Extensive simulations have been conducted to demonstrate the efficiency and efficacy of our proposed solutions.
L'attribution de ressources de calcul et de communication d'une manière qui optimise les performances du système global est un aspect crucial de la gestion du système. Les réseaux de capteurs sans fil posent de nouveaux défis en raison de la pénurie de ressources et des exigences temps réel. Les travaux existants ont traité le problème de l'attribution des taux d'échantillonnage en temps réel, dans le cas d'un seul processeur et dans le cas d'un réseau avec routage statique. Pour les réseaux de capteurs sans fil, afin de parvenir à une meilleure performance globale du réseau, le routage devrait être examiné en même temps que la distribution des taux des flux individuels. Dans cet article, nous abordons le problème de l'optimisation des taux d'échantillonnage avec sélection dynamique de route pour les réseaux de capteurs sans fil. Nous modélisons le problème comme un problème d'optimisation et le résolvons dans le cadre de la maximisation de l'utilité du réseau. Sur la base de la méthode primale-duale et de la technique de décomposition duale, nous concevons un algorithme distribué qui atteint l'utilité globale optimale du réseau en tenant compte à la fois de la décision de route dynamique et de la distribution des taux. Des simulations ont été réalisées pour démontrer l'efficience et l'efficacité de nos solutions proposées.
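The constrained optimization mentioned above is an instance of Network Utility Maximization; the generic statement below (our notation) indicates where the dual decomposition and the distributed primal-dual updates come from.

    % joint sampling-rate assignment and route selection as a NUM problem
    \[ \max_{x \ge 0} \; \sum_i U_i(x_i) \quad \text{subject to} \quad
       \sum_{i :\, l \in r_i} x_i \le c_l \;\; \text{for every link } l, \]
    % x_i: sampling rate of flow i, r_i: its route, c_l: capacity of link l, U_i: concave utility.
    % Relaxing the capacity constraints with prices \lambda_l \ge 0 decouples the problem:
    \[ \max_{x_i \ge 0} \; U_i(x_i) - x_i \sum_{l \in r_i} \lambda_l , \]
    % each source solves this locally, routes can be re-selected as cheapest paths under the
    % link prices, and the prices are updated by (sub)gradient ascent on the dual; this is
    % what makes a distributed primal-dual algorithm possible.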
22

Gibson, Keith W. "Time-concentrated sampling : a simple strategy for information gain at a novel, depleted patch." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=78368.

Full text
Abstract:
Little theoretical or empirical research has examined how an animal that has found and exploited a new patch should determine whether and when it will renew. A rapid series of visits to the patch should provide information concerning the probability of a quick renewal. If a renewal is not encountered, however, a subsequent decrease in the rate of visits should allow monitoring of the patch at minimal cost. After a long period without renewal, a patch should not be visited at all. By analogy with area-concentrated search, I propose the term 'time-concentrated sampling' (TCS) for this pattern of visits and suggest that it should be widespread for species foraging on patchy prey in environments where the probability of renewal and latency to renewal of patches are variable between patches. In this study, I tested whether eastern chipmunks (Tamias striatus) presented with a small number of peanuts followed by a small patch of sunflower seeds exhibit TCS following their depletion of these and, if so, whether their patterns of visits are influenced by potential indicators of patch value. (Abstract shortened by UMI.)
23

Simon, Philippe 1964. "Long-term integrated sampling to characterize airborne volatile organic compounds in indoor and outdoor environments." Thesis, McGill University, 1997. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=34455.

Full text
Abstract:
Sampling methods used for the assessment of exposure to volatile organic chemicals (VOCs) in the workplace or for environmental studies are now limited to an upper integrative sampling time of 24 hours or less. Generally, these methods lack versatility and are difficult to use. A passive sampler that can extend sampling periods was developed as part of this research. This novel sampler relies on capillary tubes to restrict and control ambient air entry into an evacuated sample container.
A mathematical model was derived by modifications to the Hagen-Poiseuille and ideal gas laws. This model defines the relationship between container volume and capillary geometry (length/internal diameter) required to provide selected sampling times. Based on theoretical considerations, simulations were performed to study the effects of dimensional parameters. From these results, capillaries having 0.05 and 0.10 mm internal diameters were selected according to their ability to reduce sampling flow rates and to increase sampling times. Different capillary lengths were tested on various sampler prototypes. It was found that a constant sampling flow rate was delivered when a maximum discharge rate was established under the influence of a pressure gradient between a vacuum and ambient pressure. Experimental flow rates from 0.018 to 2.6 ml/min were obtained and compared with model predictions. From this comparison, empirical relationships between capillary geometry and maximum discharge rate given by the pressure gradient were defined. Essentially, based on these empirical relationships, capillary sampling flow controller specifications can be calculated to offer extended integrated sampling periods. On this basis, sampler prototypes were configured for stationary sampling and personal sampling.
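As a rough order-of-magnitude check on the capillary flow-controller idea, the plain incompressible Hagen-Poiseuille law already lands in the reported flow-rate range; the thesis's actual model modifies this with the ideal gas law, and the 10 cm capillary length assumed below is ours.

    from math import pi

    def hagen_poiseuille_flow(diameter_m, length_m, dp_pa, mu=1.8e-5):
        """Volumetric flow rate (m^3/s) of air through a capillary, incompressible
        Hagen-Poiseuille approximation with dynamic viscosity mu (Pa*s)."""
        r = diameter_m / 2
        return pi * r ** 4 * dp_pa / (8 * mu * length_m)

    # 0.05 mm internal diameter, 10 cm long, atmosphere discharging into a vacuum
    q = hagen_poiseuille_flow(diameter_m=0.05e-3, length_m=0.10, dp_pa=101_325)
    print(q * 1e6 * 60, 'mL/min')     # ~0.5 mL/min, within the reported 0.018-2.6 range
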
Studies, based on theory, have indicated that factors such as temperature, humidity and longitudinal molecular diffusion are not likely to influence the passive sampling process. Subsequent experiments confirmed that temperature changes should not significantly affect flow rates delivered by controllers, and that molecular diffusion does not have any impact on the representativeness of long-term samples. Recovery tests provided acceptable results demonstrating that selected capillaries do not contribute to adsorption that could seriously affect the validity of this sampling approach.
Field demonstration studies were performed with both stationary and personal sampler prototypes in the indoor and outdoor environments. The performance of the sampler compared favorably, and in some instances, exceeded that of accepted methodology. These novel samplers were more reliable, had greater versatility and principally, allowed sampling periods extending from hours to a month. These inherent qualities will assist industrial hygienists and environmentalists in the study of emission sources, pollutant concentrations, dispersion, migration and control measures. This novel sampler is presently the only device available for the effective study of episodic events of VOC emission.
Selected capillary geometries acting as a restriction to the entry of ambient air into evacuated sample container can provide a simple, versatile and reliable alternative for the collection of VOCs. This approach can contribute to a better understanding of VOC effects on human health and the environment.
24

Rowan, David J. "The distribution, texture and trace element concentrations of lake sediments /." Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=70361.

Full text
Abstract:
Hypotheses regarding the distribution, texture and trace element concentrations of lake sediments were tested by empirical analyses of multi-lake data sets (52 to 83 lakes). Sediment distribution was best characterized by the deposition boundary depth (DBD), the abrupt transition from coarse- to fine-grained sediments. The DBD can now be predicted from either empirical models or empirical-theoretical simplifications of wave of sediment threshold theory, both in terms of exposure (or fetch) and bottom slope. The texture (organic content, water content and bulk density) of profundal sediments was related to the inorganic sedimentation rate and exposure, but not to the lake trophic status or the net organic matter sedimentation rate. The relationships between sediment texture and intra- and inter-site variability, together with the models that predict the DBD and sediment texture, were used to develop an algorithm that should greatly reduce sampling effort in lake sediment surveys. Finally, sediment trace element concentrations were predicted from sediment texture, site depth and simple geologic classifications. The models developed here, provide a framework in which to sample lake sediments and interpret their properties.
25

Beedell, David C. (David Charles). "The effect of sampling error on the interpretation of a least squares regression relating phosphorus and chlorophyll." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22720.

Full text
Abstract:
Least squares linear regression is a common tool in ecological research. One of the central assumptions of least squares linear regression is that the independent variable is measured without error. But this variable is measured with error whenever it is a sample mean. The significance of such contraventions is not regularly assessed in ecological studies. A simulation program was made to provide such an assessment. The program requires a hypothetical data set, and using estimates of S² it scatters the hypothetical data to simulate the effect of sampling error. A regression line is drawn through the scattered data, and SSE and r² are measured. This is repeated numerous times (e.g. 1000) to generate probability distributions for r² and SSE. From these distributions it is possible to assess the likelihood of the hypothetical data resulting in a given SSE or r². The method was applied to survey data used in a published TP-CHLa regression (Pace 1984). Beginning with a hypothetical, linear data set (r² = 1), simulated scatter due to sampling exceeded the SSE from the regression through the survey data about 30% of the time. Thus chances are 3 out of 10 that the level of uncertainty found in the surveyed TP-CHLa relationship would be observed if the true relationship were perfectly linear. If this is so, more precise and more comprehensive models will only be possible when better estimates of the means are available. This simulation approach should apply to all least squares regression studies that use sampled means, and should be especially relevant to studies that use log-transformed values.
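The simulation logic described above is easy to restate in code: start from an exactly linear hypothetical data set, perturb the x-values to mimic sampling error in the means, refit, and collect the distribution of r². Everything below (slope, sample size, error variance) is invented for illustration and is not Pace's data.

    import numpy as np

    rng = np.random.default_rng(0)
    tp_true = np.linspace(0.5, 2.0, 20)              # hypothetical log10 TP means
    chl_true = -0.4 + 1.0 * tp_true                  # perfectly linear hypothetical CHLa

    def r_squared(x, y):
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        return 1 - resid.var() / y.var()

    se_of_mean = 0.15                                # assumed sampling error of each TP mean
    r2 = [r_squared(tp_true + rng.normal(0, se_of_mean, tp_true.size), chl_true)
          for _ in range(1000)]
    print(np.quantile(r2, [0.05, 0.5, 0.95]))        # spread of r^2 due to sampling error alone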
26

Cerigo, Helen. "HPV knowledge and self-sampling for the detection of HPV-DNA among Inuit women in Nunavik, Quebec." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=96825.

Full text
Abstract:
The prevalence of human papillomavirus (HPV), a necessary cause of cervical cancer, has been found to be high in Inuit populations. This study examined 1) the level of knowledge about HPV infection and its relation to cervical cancer and 2) the comparability of self-collected cervicovaginal samples to provider-collected cervical samples for the detection of HPV, as well as the preference between sampling methods, among Inuit women in Nunavik, Quebec. Questionnaires were used to measure HPV knowledge and sampling method preference. To assess comparability of the sampling techniques, samples were tested for 36 HPV types with PCR. Previous awareness of HPV was reported by 31% of women. The level of knowledge about HPV was low, but similar to that of other non-Indigenous populations. The agreement in detection of high-risk HPV between paired observations was found to be high. Self-sampling is comparable to provider sampling and is a promising intervention to increase coverage of cervical cancer screening.
La prévalence du virus du papillome humain(VPH) est élevée dans la population Inuit du Québec. Nous avons donc 1) documenter le niveau de connaissance concernant le VPH et son lien avec le cancer du col utérin et 2) évaluer le rendement de l'auto prélèvement pour le VPH en comparaison avec le prélèvement fait par l'intervenant de santé et 3) déterminer la préférence des femmes Inuit du Nunavik entre les deux méthodes. Un questionnaire fut utilisé pour évaluer le niveau de connaissance et la préférence entre les modes de prélèvements. La comparabilité entre les modes de prélèvements s'est effectuée sur les résultats du test PCR détectant 36 différents types de VPH. Plus de 31% des femmes Inuit avaient entendues parler du VPH. Le niveau de connaissance général sur le VPH est faible mais semblable à celui rapporté pour des populations non Autochtone. La comparabilité en matière de détection des VPH est élevée entre les deux méthodes. L'auto prélèvement est potentiellement une méthode de prélèvement propice à augmenter le taux de dépistage du cancer du col utérin.
27

Rossner, Alan. "The development and evaluation of a novel personal air sampling canister for the collection of gases and vapors /." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=84428.

Full text
Abstract:
A continuing challenge in occupational hygiene is that of estimating exposure to the multitude of airborne chemicals found in the workplace and surrounding community. Occupational exposure limits (OELs) have been established to prescribe the acceptable time-weighted average exposure for many different chemicals. Comparing the OELs to the measured workplace concentration allows occupational hygienists to assess the health risks and the need for control measures. Hence, methods to sample contaminants in the workplace more effectively are necessary to ensure that accurate exposure characterizations are completed. Evacuated canisters have been used for many years to collect ambient air samples for gases and vapors. Recently, increased interest has arisen in using evacuated canisters for personal breathing zone sampling as an alternative to sorbent samplers. A capillary flow control device was designed at McGill University in the mid-1990s to provide a very low flow rate, allowing a passive sample to be collected over an extended period of time. This research focused on the development and evaluation of a methodology that uses a small canister coupled with the capillary flow controllers to collect long-term time-weighted air samples for gases and vapors.
A series of flow rate experiments were done to test the capillary flow capabilities with a 300 mL canister for sampling times ranging from a few minutes to over 40 hours. Flow rates ranging from 0.05 to 1.0 mL/min were experimentally tested and empirical formulae were developed to predict flow rates for given capillary geometries. The low flow rates allow for the collection of a long term air sample in a small personal canister.
Studies to examine the collection of air contaminants were conducted in laboratory and field tests. Air samples for six volatile organic compounds were collected from a small exposure chamber using the capillary-canisters, charcoal tubes and diffusive badges at varied concentrations. The results from the three sampling devices were compared to each other and to concentration values obtained by on-line gas chromatography. The results indicate that the capillary-canister compares quite favorably to the sorbent methods and to the on-line GC values for the six compounds evaluated.
Personal air monitoring was conducted in a large exposure chamber to assess the effectiveness of the capillary-canister method for evaluating breathing zone samples. In addition, field testing was performed at a manufacturing facility to assess the long-term monitoring capabilities of the capillary-canister. Precision and accuracy were found to parallel those of sorbent sampling methods.
The capillary-canister device displayed many positive attributes for occupational and community air sampling. Extended sampling times, greater capabilities to sample a broad range of chemicals simultaneously, ease of use, ease of analysis and the low relative cost of the flow controller should allow for improvements in exposure assessment.
28

De, la Chenelière Véronik. "The risks and benefits of an invasive technique, biopsy sampling, for an endangered population, the St. Lawrence beluga (Delphinapterus leucas) /." Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=21536.

Full text
Abstract:
Research can conflict with conservation when invasive techniques are used on protected animal species. We developed a decision framework including the research question, the choice of technique, and the recommended course of action following the evaluation of the risks and benefits. This evaluation includes biological risks and benefits and considerations linked to the perception of resource users. We applied this framework a posteriori to a case study, the use of biopsy sampling on St. Lawrence belugas. We monitored the biological risks and benefits over four field seasons using behavioural and physiological indices and reports on the work in progress. We evaluated the risks as "low" and the benefits as "medium". For benefits to outweigh risks, procedures to minimise risks, publication of the work, and formulation of recommendations for conservation are essential. Researchers should be prepared to discuss with stakeholders the potential conflicts between their projects and conservation.
APA, Harvard, Vancouver, ISO, and other styles
29

Grimm, Carsten. "Well-Being in its Natural Habitat: Orientations to Happiness and the Experience of Everyday Activities." Thesis, University of Canterbury. Psychology, 2013. http://hdl.handle.net/10092/8040.

Full text
Abstract:
Peterson, Park, and Seligman (2005) have proposed that individuals seek to increase their well-being through three behavioural orientations; via pleasure, meaning, and engagement. The current study investigated how orientations to happiness influenced the pursuit and experience of daily activities using an experience sampling methodology (ESM). Daily activities were experienced as a blend of both hedonic and eudaimonic characteristics. Dominant orientation to happiness did not predict engaging in different daily activities. Trait orientations to happiness had some influence on the momentary experience of behaviour. Those scoring highest on all three orientations to happiness also rated their daily activities highest on momentary pleasure, meaning, engagement, and happiness. The results suggest that increasing all three orientations is a pathway to the full life and a balanced well-being portfolio.
APA, Harvard, Vancouver, ISO, and other styles
30

Leong, Aaron. "Population prevalence of diabetes: validation of a case definition from health administrative data using a population-based survey and home blood glucose sampling." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=116891.

Full text
Abstract:
Background: Previous validation studies of diabetes case definitions from health administrative data are scarce and have not systematically assessed undiagnosed diabetes. We aimed to determine the accuracy of a diabetes case definition (two physician claims or one hospitalization for diabetes, within a two-year period; Canadian National Diabetes Surveillance System, NDSS) using a systematic review of the published literature and by comparing cases identified from the Quebec health services administrative databases to those obtained from self-report, respectively. We also estimated the prevalence of physician-diagnosed diabetes by self-report in a health survey, and undiagnosed diabetes by measuring glucose levels on mailed-in capillary blood samples.Methods: For the systematic review, we searched Medline (from 1950) and Embase (from 1980) databases for relevant validation studies published through August 2012 (keywords: "diabetes mellitus", "administrative databases" and "validation studies"). Reviewers abstracted study data and assessed quality using standardized forms. As there was heterogeneity, a random-effects bivariate regression model was used to pool sensitivity and specificity estimates. To determine the accuracy of the NDSS case definition from the Quebec health services administrative databases, we obtained administrative data on a stratified random sample of 6,247 Quebec individuals (2009) whom we surveyed by telephone to query diabetes status and asked them to mail-in fasting capillary blood samples to a central laboratory for glucose testing. The NDSS case definition was compared with self-reported diabetes alone and with self-reported diabetes and/or elevated glucose level (≥7 mmol/l) for sensitivity, specificity, positive and negative predictive values, and statistical agreement. Population-level prevalence was estimated using the NDSS definition, corrected based on sensitivity and specificity estimates and sampling weights. In addition, we added the Quebec validation study to the meta-analysis and reported the final test properties of the NDSS case definition.Results: The search strategy identified 1423 abstracts among which 11 studies were deemed relevant and reviewed; 6 of these reported sensitivity and specificity allowing pooling in a meta-analysis. Compared to surveys or medical records, sensitivity was 82.3% (95%CI 75.8, 87.4) and specificity was 97.9% (95%CI 96.5, 98.8). NDSS-based diabetes cases obtained from the Quebec health administrative databases compared to self-report revealed a sensitivity of the NDSS case definition of 84.3% (95%CI 79.3, 88.5) and specificity of 97.9% (95%CI 97.4, 98.4). Compared to self-report combined with glucose testing, sensitivity was 58.2% (95%CI 52.2, 64.6) and specificity was 98.7% (95%CI 98.0, 99.3). Adjusted for sampling weights, physician-diagnosed diabetes prevalence in Quebec was 7.2% (95%CI 6.3, 8.0) and total diagnosed and undiagnosed diabetes prevalence was 13.4% (95%CI 11.7, 15.0).Conclusion: Including the Quebec study in an updated meta-analysis of 7 studies, the pooled sensitivity of the NDSS case definition was 82.6% (95%CI 77.1, 87.0) and specificity was 97.9% (95%CI 96.8, 98.6) for physician-diagnosed diabetes. The NDSS case definition is sufficiently accurate for surveillance purposes, in particular monitoring trends over time. 
The definition, however, misses approximately one fifth of physician-diagnosed cases and approximately 40% of all cases when diagnosed and undiagnosed diabetes are considered together; it wrongly identifies diabetes in around 2% of the general population. Individuals with undiagnosed diabetes are likely to experience a delay in diabetes treatment, which implies a higher risk of diabetes-related complications. Diabetes prevalence estimated from health services administrative databases should be adjusted for the sensitivity and specificity of the case definition to better quantify yearly prevalence changes and account for undiagnosed diabetes.
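The prevalence adjustment described above can be illustrated with the standard Rogan-Gladen correction. The sketch below uses the sensitivity and specificity reported against self-report and a hypothetical apparent (administrative) prevalence of 8%; it is a generic worked example, not necessarily the exact estimator used in the thesis.

    def rogan_gladen(apparent_prev, sensitivity, specificity):
        """Correct an apparent prevalence for imperfect case-definition accuracy."""
        return (apparent_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)

    # Hypothetical apparent prevalence of 8%, with the sensitivity/specificity
    # reported against self-report in the abstract above.
    corrected = rogan_gladen(0.08, 0.843, 0.979)
    print(f"corrected prevalence = {corrected:.3f}")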
Contexte: Les études qui valident les cas de diabète identifiés à partir des données administratives sont limitées et n'évaluent pas systématiquement le diabète non diagnostiqué. Nous avons mené une revue systématique pour déterminer la validité d'un algorithme utilisé pour identifier les cas de diabète (deux facturations de médecins ou une hospitalisation pour le diabète, pendant une période de deux ans; Système National de Surveillance du Diabète du Canada, SNSD). Nous avons aussi validé cet algorithme dans les bases de données du Québec en comparaison avec des données obtenues à partir d'une enquête publique. En plus, nous avons estimé la prévalence du diabète diagnostiqué à partir des données d'une enquête publique ainsi que la prévalence du diabète non diagnostiqué à partir du taux de la glycémie mesurée en utilisant des échantillons sanguins envoyés par la poste.Méthodes: Pour la revue systématique, nous avons effectué une recherche dans les bases de données de Medline (de 1950) et Embase (de 1980) pour les études de validation publiées jusqu'en Août 2012 (mots-clés: «diabetes mellitus», «administrative databases» et «validation studies»). Un modèle de régression bi-variée aux effets aléatoires a été utilisé pour regrouper des estimations de sensibilité et de spécificité. Pour la validation de l'algorithme dans les bases de données du Québec, nous avons obtenu des données administratives relatives à un échantillon aléatoire stratifié de 6247 résidents du Québec (2009). Ces individus ont été aussi interrogés au téléphone et ont été invités à envoyer à un laboratoire central des échantillons de sang prélevés à jeun. Les cas de diabète du SNSD ont été comparés avec ceux du diabète auto-déclaré par le patient et avec l'auto-déclaration en conjonction avec une glycémie élevée (≥ 7 mmol/l), respectivement. La sensibilité, spécificité, les valeurs prédictives positives et négatives, et la concordance statistique ont été calculées. La prévalence du diabète dans la population a été estimée en utilisant la définition ajustée en fonction des estimations de la sensibilité et de la spécificité.Résultats: Dans la revue systématique, la stratégie de recherche a identifié 1,423 résumés desquels 11 études de validation ont été choisies pour la revue. Six études ont été regroupées dans une méta-analyse. En comparaison aux données des enquêtes et/ou des dossiers médicaux, la sensibilité était de 82,3% (IC 95%, 75,8, 87,4%) et la spécificité était de 97,9% (IC 95%: 96,5, 98,8%). Pour la validation de la définition dans les banques de données du Québec, la comparaison avec les données obtenues à partir de l'enquête a montré une sensibilité de 84,3% (IC 95%: 79,3, 88,5) et une spécificité de 97,9% (IC 95%: 97,4, 98,4). La comparaison avec les données de l'enquête combinées aux taux de glycémie, a montré une sensibilité beaucoup plus faible de 58,2% (IC 95% 52,2, 64,6) et une spécificité de 98,7% (IC 95%: 98,0, 99,3). Après ajustement pour le poids d'échantillonnage, la prévalence du diabète diagnostiqué était de 7,2% (IC 95% 6,3, 8,0) et la prévalence du diabète diagnostiqué et non-diagnostiqué était de 13,4% (IC 95% 11.7, 15.0).Conclusion: Incluant l'étude du Québec dans une méta-analyse actualisée des 7 études, la sensibilité de la définition de cas de diabète diagnostiqué était de 82,6% (IC 95%, 77.1, 87.0) et la spécificité était de 97,9% (IC 95%: 96,8, 98,6). La définition semble être suffisamment précise pour une surveillance en santé publique, en particulier pour les analyses de tendance. 
Les personnes non-diagnostiqués sont susceptibles de subir un retard dans le traitement et d'encourir un risque plus élevé pour les complications liées au diabète. La prévalence du diabète estimée à partir des banques de données administrative doit être corrigée pour la sensibilité et la spécificité de la définition afin de mieux quantifier les changements annuels de prévalence et de tenir compte des cas de diabète non diagnostiqués.
APA, Harvard, Vancouver, ISO, and other styles
31

Qin, Yulin. "Non-Parametric and Parametric Estimators of the Survival Function under Dependent Censorship." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1368086293.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

DI, SALVO ANDREA. "CMOS distributed signal processing systems for radiation sensors." Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2957742.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Benamara, Tariq. "Full-field multi-fidelity surrogate models for optimal design of turbomachines." Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2368.

Full text
Abstract:
L’optimisation des différents composants d’une turbomachine reste encore un sujet épineux, malgré les récentes avancées théoriques, expérimentales ou informatiques. Cette thèse propose et investigue des techniques d’optimisation assistées par méta-modèles vectoriels multi-fidélité basés sur la Décomposition aux Valeurs Propres (POD). Le couplage de la POD à des techniques de modélisation multifidélité permet de suivre l’évolution des structures dominantes de l’écoulement en réponse à des déformations géométriques. Deux méthodes d’optimisation multi-fidélité basées sur la POD sont ici proposées. La première consiste en une stratégie d’enrichissement adaptée aux modèles multi-fidelité par Gappy-POD (GPOD). Celle-ci vise surtout des problèmes associés à des simulations basse-fidélité à coût de restitution négligeable, ce qui la rend difficilement utilisable pour la conception aérodynamique de turbomachines. Elle est néanmoins validée sur une étude du domaine de vol d’une aile 2D issue de la littérature. La seconde méthodologie est basée sur une extension multi-fidèle des modèles par POD Non-Intrusive (NIPOD). Cette extension naît de la ré-interprétation du concept de POD Contrainte (CPOD) et permet l’enrichissement de l’espace réduit par ajout important d’information basse-fidélité approximative. En seconde partie de cette thèse, un cas de validation est introduit pour valider les méthodologies d’optimisation vectorielle multi-fidélité. Cet exemple présente des caractéristiques représentatives des problèmes d’optimisation de turbomachines. La capacité de généralisation des méta-modèles par NIPOD multifidélité proposés est comparée, aussi bien sur cas analytique qu’industriel, à des techniques de méta-modélisation issues de la littérature. Enfin, nous utilisons la méthode développée au cours de cette thèse pour l’optimisation d’un étage et demi d’un compresseur basse-pression et comparons les résultats obtenus à des approches à l’état de l’art
Optimizing turbomachinery components stands as a real challenge despite recent advances in theoretical, experimental and High-Performance Computing (HPC) domains. This thesis introduces and validates optimization techniques assisted by full-field Multi-Fidelity Surrogate Models (MFSMs) based on Proper Orthogonal Decomposition (POD). The combination of POD and Multi-Fidelity Modeling (MFM) techniques allows the evolution of dominant flow features with geometry modifications to be captured. Two POD-based multi-fidelity optimization methods are proposed. The first one consists in an enrichment strategy dedicated to Gappy-POD (GPOD) models. It is more suitable for instantaneous (negligible-cost) low-fidelity computations, which makes it hardly tractable for the aerodynamic design of turbomachines. This method is demonstrated on the flight domain study of a 2D airfoil from the literature. The second methodology is based on a multi-fidelity extension to Non-Intrusive POD (NIPOD) models. This extension starts with a re-interpretation of the Constrained POD (CPOD) concept and allows the reduced space definition to be enriched with abundant, albeit inaccurate, low-fidelity information. In the second part of the thesis, a benchmark test case is introduced to test full-field multi-fidelity optimization methodologies on an example presenting features representative of turbomachinery problems. The predictability of the proposed Multi-Fidelity NIPOD (MFNIPOD) surrogate models is compared to classical surrogates from the literature on both analytical and industrial-scale applications. Finally, we apply the proposed tool to the shape optimization of a 1.5-stage booster and compare the obtained results with standard state-of-the-art approaches.
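As a minimal illustration of the Proper Orthogonal Decomposition underlying such surrogate models, the sketch below extracts a POD basis from a hypothetical snapshot matrix via the SVD. It shows the generic technique only, not the multi-fidelity NIPOD construction of the thesis, and the snapshot data are random placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical snapshot matrix: each column is one full-field solution
    # (e.g. a flattened pressure field), each row one mesh node.
    snapshots = rng.standard_normal((500, 30))

    mean_field = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)

    energy = np.cumsum(s**2) / np.sum(s**2)
    n_modes = int(np.searchsorted(energy, 0.99) + 1)   # keep 99% of the energy
    basis = U[:, :n_modes]                             # POD modes

    # A new field is approximated by projecting it on the reduced basis;
    # the quality depends on how well the snapshots span the new field.
    new_field = rng.standard_normal((500, 1))
    coeffs = basis.T @ (new_field - mean_field)
    reconstruction = mean_field + basis @ coeffs
    print(np.linalg.norm(new_field - reconstruction))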
APA, Harvard, Vancouver, ISO, and other styles
34

Warren, Georgina. "Developing land management units using Geospatial technologies: An agricultural application." Curtin University of Technology, Department of Spatial Sciences, 2007. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=17509.

Full text
Abstract:
This research develops a methodology for determining farm scale land management units (LMUs) using soil sampling data, high resolution digital multi-spectral imagery (DMSI) and a digital elevation model (DEM). The LMUs are zones within a paddock suitable for precision agriculture which are managed according to their productive capabilities. Soil sampling and analysis are crucial in depicting landscape characteristics, but costly. Data based on DMSI and DEM is available cheaply and at high resolution. The design and implementation of a two-stage methodology using a spatially weighted multivariate classification for delineating LMUs is described. Utilising data on physical and chemical soil properties collected at 250 sampling locations within a 1780 ha farm in Western Australia, the methodology initially classifies sampling points into LMUs based on a spatially weighted similarity matrix. The second stage delineates higher resolution LMU boundaries using DMSI and topographic variables derived from a DEM on a 10 m grid across the study area. The method groups sample points and pixels with respect to their characteristics and their spatial relationships, thus forming contiguous, homogenous LMUs that can be adopted in precision agricultural applications. The methodology combines readily available and relatively cheap high resolution data sets with soil properties sampled at low resolution. This minimises cost while still forming LMUs at high resolution. The allocation of pixels to LMUs based on their DMSI and topographic variables has been verified. Yield differences between the LMUs have also been analysed. The results indicate the potential of the approach for precision agriculture and the importance of continued research in this area.
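A minimal sketch of the general idea of a spatially weighted classification of sampling points is given below. The spatial weight, the synthetic data and the use of average-linkage hierarchical clustering are assumptions made for illustration and do not reproduce the thesis's exact two-stage method.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(1)
    n = 250
    coords = rng.uniform(0, 4000, size=(n, 2))     # sampling locations (m), hypothetical
    soil = rng.standard_normal((n, 6))             # standardised soil properties, hypothetical

    # Combine attribute and spatial dissimilarity with an assumed weight so that
    # nearby, similar points tend to fall in the same management unit.
    w_space = 0.3
    d_attr = pdist(soil)
    d_space = pdist(coords)
    d = (1 - w_space) * d_attr / d_attr.max() + w_space * d_space / d_space.max()

    labels = fcluster(linkage(d, method="average"), t=4, criterion="maxclust")
    print(np.bincount(labels))                     # points per candidate LMU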
APA, Harvard, Vancouver, ISO, and other styles
35

Vallelonga, Paul Travis. "Measurement of Lead Isotopes in Snow and Ice from Law Dome and other sites in Antarctica to characterize the Lead and seek evidence of its origin." Curtin University of Technology, School of Applied Science, 2002. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=14018.

Full text
Abstract:
Human activities such as mining and smelting of lead (Pb) ores and combustion of alkyllead additives in gasoline have resulted in extensive global Pb pollution. Since the late 1960's studies of polar ice and snow have been undertaken to evaluate the extent of anthropogenic Pb emissions in recent times as well as to investigate changes in anthropogenic Pb emissions in the more distant past. The polar ice sheets have been used to investigate Pb pollution as they offer a long-term record of human activity located far from pollution sources and sample aerosol emissions on a hemispheric scale. Lead isotopes have been previously used to identify sources of Pb in polar snow and ice, while new evaluations of Pb isotopic compositions in aerosols and Pb ore bodies allow more thorough evaluations of anthropogenic Pb emissions. Lead isotopic compositions and Pb and Barium (Ba) concentrations have been measured in snow and ice core samples from Law Dome, East Antarctica, to produce a detailed pollution history between 1530 AD and 1989 AD. Such a record has been produced to evaluate changes in anthropogenic Pb emission levels and sources over the past 500 years, to determine when industrial (anthropogenic) activities first began to influence Antarctica and also to investigate natural Pb fluxes to Antarctica. Additional samples were also collected from Law Dome snow and ice cores to respectively investigate seasonal variations in Pb and Ba deposition, and the influence of the 1815 AD volcanic eruption of Tambora, Indonesia. All samples were measured by thermal ionisation mass spectrometry, for which techniques were developed to reliably analyse Pb isotopic compositions in Antarctic samples containing sub-picogram per gram concentrations of Pb.
Particular attention was given to the quantity of Pb added to the samples during the decontamination and sample storage stages of the sample preparation process. These stages, including the use of a stainless steel chisel for the decontamination, contributed ~5.2 pg to the total sample analysed, amounting to a concentration increase of ~13 fg g⁻¹. In comparison, the mass spectrometer ion source typically contributed 89 ± 19 fg to the blank; however, its influence depended upon the amount of Pb available for analysis and so had the greatest impact when small volumes of samples with a very low concentration were analysed. As a consequence of these careful investigations of the Pb blank contributions to the samples, the corrections made to the measured Pb isotopic ratios and concentrations are smaller than in previously reported evaluations of Pb in Antarctica by thermal ionisation mass spectrometry. The data indicate that East Antarctica was relatively pristine until ~1884 AD, after which the first influence of anthropogenic Pb in Law Dome is observed. "Natural", pre-industrial, background concentrations of Pb and Ba were ~0.4 pg/g and ~1.3 pg/g, respectively, with Pb isotopic compositions within the range ²⁰⁶Pb/²⁰⁷Pb = 1.20–1.25 and ²⁰⁸Pb/²⁰⁷Pb = 2.46–2.50 and an average rock and soil dust Pb contribution of 8-12%. A major pollution event was observed at Law Dome between 1884 and 1908 AD, elevating the Pb concentration fourfold and changing ²⁰⁶Pb/²⁰⁷Pb ratios in the ice to ~1.12. Based on Pb isotopic systematics and Pb emissions statistics, this was attributed to Pb mined at Broken Hill and smelted at Broken Hill and Port Pirie, Australia.
Anthropogenic Pb inputs to Law Dome were most significant from ~1900 to 1910 and from ~1960 to 1980. During the 20th century, Ba concentrations were consistently higher than "natural" levels. This was attributed to increased dust production, suggesting the influence of climate change and/or changes in land coverage with vegetation. Law Dome ice dated from 1814 AD to 1819 AD was analysed for Pb isotopes and Pb, Ba and Bismuth (Bi) concentrations to investigate the influence of the 1815 AD volcanic eruption of Tambora, Indonesia. The presence of volcanic debris in the core samples was observed from late-1816 AD to 1818 AD as an increase in sulphate concentrations and electrical conductivity of the ice. Barium concentrations were approximately three times higher than background levels from mid-1816 to mid-1818, consistent with increased atmospheric loading of rock and soil dust, while enhanced Pb/Ba and Bi/Ba ratios, associated with deposition of volcanic debris, were observed at mid-1814 and from early-1817 to mid-1818. From the results, it appeared likely that Pb emitted from Tambora was removed from the atmosphere within the 1.6-year period required to transport aerosols to Antarctica. Increased Pb and Bi concentrations observed in Law Dome ice at ~1818 AD were attributed to either increased heavy metal emissions from Mount Erebus, or increased fluxes of heavy metals to the Antarctic ice sheet resulting from climate and meteorological modifications following the Tambora eruption.
A non-continuous series of Law Dome snow core samples dating from 1980 to 1985 AD was analysed to investigate seasonal variations in the deposition of Pb and Ba. It was found that Pb and Ba at Law Dome do exhibit seasonal variations in deposition, with higher concentrations of Pb and Ba usually observed during summer and lower concentrations usually observed during the autumn and spring seasons. At Law Dome, broad patterns of seasonal Pb and Ba deposition are evident; however, these appear to be punctuated by short-term deposition events or may even be composed of a continuum of short-term deposition events. This variability suggests that complex meteorological systems are responsible for the transport of Pb and Ba to Law Dome, and probably Antarctica in general.
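For orientation, apportioning lead between natural and anthropogenic sources from a single isotope ratio can be sketched as a two-end-member mixing calculation. The end-member and sample ratios below are illustrative (the Broken Hill ore value in particular is assumed), and this simple form ignores concentration weighting, so it is not the thesis's calculation.

    def anthropogenic_fraction(r_sample, r_natural, r_anthropogenic):
        """Two-end-member mixing on one isotope ratio (concentration weighting ignored)."""
        return (r_natural - r_sample) / (r_natural - r_anthropogenic)

    # Illustrative values: natural dust end-member ~1.22 (mid-range quoted above),
    # Broken Hill ore ~1.04 (assumed), polluted ice sample 1.12.
    f = anthropogenic_fraction(1.12, 1.22, 1.04)
    print(f"approx. anthropogenic share of Pb: {f:.0%}")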
APA, Harvard, Vancouver, ISO, and other styles
36

Gioia, Dario <1988&gt. "Fully Flexible Binding of Taxane-Site Ligands to Tubulin via Enhanced Sampling MD Simulations." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amsdottorato.unibo.it/8624/1/Gioia_Dario_Tesi.pdf.

Full text
Abstract:
Microtubules (MTs) are cytoskeleton components involved in a variety of cellular functions such as transport, motility, and mitosis. Being polymers made up of α/β-tubulin heterodimers, in order to accomplish these functions they go through large variations in their spatial arrangement, switching between polymerization and depolymerization phases. Because of their role in cellular division, interfering with MT dynamic behavior has proven suitable for anticancer therapy, as tubulin-binding agents induce mitotic arrest and cell death by apoptosis. However, how microtubule-stabilizing agents like taxane-site ligands act to promote microtubule assembly and stabilization is still a matter of debate. In the case of tubulin, traditional docking techniques lack the capability of treating the protein flexibility that is central to certain binding processes. For this reason, the aim of this project is to put in place a protocol for dynamic docking of taxane-site ligands to β-tubulin by means of enhanced sampling MD simulation techniques. Firstly, the behavior of the binding pocket was investigated with classical MD simulations. The most flexible part of the taxane site was observed to be the so-called "M-loop", the loop involved in the lateral associations of tubulin heterodimers that taxane-site ligands are thought to stabilize. Secondly, the protocol for dynamic docking was put in place using the MD-Binding technique developed by BiKi Technologies; it proved effective in reproducing the binding modes of epothilone A and discodermolide as seen in their X-ray crystal structures. Finally, the protocol was tested against paclitaxel, a drug for which no X-ray crystal structure is currently available. These results show the potential of such an approach and strengthen the belief that dynamic docking will in the future replace traditional static docking in the drug discovery and development process.
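As a generic illustration of how a dynamic-docking bias can be formulated (this is not the MD-Binding algorithm used in the thesis, only the broad idea of biased binding simulations), a simple scheme applies a harmonic attraction between the ligand and binding-site centres of mass that vanishes once the ligand is close. All constants below are arbitrary placeholders.

    import numpy as np

    def bias_force(ligand_com, site_com, k=0.5, cutoff=0.4):
        """Harmonic pulling force on the ligand centre of mass (arbitrary units),
        switched off within `cutoff` nm of the binding-site centre."""
        d = site_com - ligand_com
        dist = np.linalg.norm(d)
        if dist < cutoff:
            return np.zeros(3)
        return k * (dist - cutoff) * d / dist

    print(bias_force(np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])))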
APA, Harvard, Vancouver, ISO, and other styles
37

Frühwirth-Schnatter, Sylvia. "Fully Bayesian Analysis of Switching Gaussian State Space Models." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2000. http://epub.wu.ac.at/812/1/document.pdf.

Full text
Abstract:
In the present paper we study switching state space models from a Bayesian point of view. For estimation, the model is reformulated as a hierarchical model. We discuss various MCMC methods for Bayesian estimation, among them unconstrained Gibbs sampling, constrained sampling and permutation sampling. We address in detail the problem of unidentifiability, and discuss potential information available from an unidentified model. Furthermore the paper discusses issues in model selection such as selecting the number of states or testing for the presence of Markov switching heterogeneity. The model likelihoods of all possible hypotheses are estimated by using the method of bridge sampling. We conclude the paper with applications to simulated data as well as to modelling the U.S./U.K. real exchange rate. (author's abstract)
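A sketch of the label-permutation step used in permutation sampling (applied after each MCMC sweep) is given below; the two-state parameter values are hypothetical and the surrounding Gibbs sweep is omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    def permute_labels(state_path, means, variances, trans_mat, rng):
        """Apply a random permutation of the state labels to all label-dependent quantities."""
        k = len(means)
        perm = rng.permutation(k)        # new label i takes the parameters of old label perm[i]
        inv = np.argsort(perm)           # old label s becomes new label inv[s]
        return inv[state_path], means[perm], variances[perm], trans_mat[np.ix_(perm, perm)]

    # Hypothetical draw from one Gibbs sweep of a 2-state switching model.
    path = rng.integers(0, 2, size=10)
    means = np.array([-1.0, 2.0]); variances = np.array([0.5, 1.5])
    trans = np.array([[0.9, 0.1], [0.2, 0.8]])
    print(permute_labels(path, means, variances, trans, rng))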
Series: Forschungsberichte / Institut für Statistik
APA, Harvard, Vancouver, ISO, and other styles
38

Tongroon, Manida. "Combustion characteristics and in-cylinder process of CAI combustion with alcohol fuels." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/4501.

Full text
Abstract:
Controlled auto-ignition (CAI) combustion in the gasoline engine has been extensively studied in the last several years due to its potential for simultaneous improvement in fuel consumption and exhaust emissions. At the same time, there has been increasing interest in the use of alternative fuels in order to reduce reliance on conventional fossil fuels. Therefore, this study was carried out to investigate the effect of alcohol fuels on the combustion characteristics and in-cylinder processes of CAI combustion in a single-cylinder gasoline engine. In the first part, combustion characteristics were investigated by heat release analysis. The combustion process was then studied through flame structure and excited molecules by chemiluminescence imaging. Furthermore, in-cylinder gas composition was analysed by GC-MS to identify the auto-ignition reactions involved in CAI combustion. In addition, the influence of spark-assisted ignition and injection timings was also studied. Alcohol fuels, in particular methanol, resulted in more advanced auto-ignition and faster combustion than gasoline. In addition, their use could lead to substantially lower HC, NOX and CO exhaust emissions. Spark-assisted ignition promoted gasoline combustion by advancing ignition timing and initiating a flame kernel at the centre of the combustion chamber, but it had a marginal effect on alcohol fuels. Auto-ignition always took place at the perimeter of the chamber and occurred earlier with alcohol fuels. Fuel reforming reactions during the NVO (negative valve overlap) period were observed and had a significant effect on alcohol combustion.
APA, Harvard, Vancouver, ISO, and other styles
39

Chang, Meng-I. "A Comparison of Two MCMC Algorithms for Estimating the 2PL IRT Models." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/dissertations/1446.

Full text
Abstract:
Fully Bayesian estimation via Markov chain Monte Carlo (MCMC) techniques has become popular for estimating item response theory (IRT) models. Current developments in MCMC include two major algorithms: Gibbs sampling and the No-U-Turn sampler (NUTS). While the former has been used to fit various IRT models, the latter is relatively new, calling for research comparing it with other algorithms. The purpose of the present study is to evaluate the performance of these two MCMC algorithms in estimating two two-parameter logistic (2PL) IRT models, namely, the 2PL unidimensional model and the 2PL multi-unidimensional model, under various test situations. By investigating the accuracy and bias in estimating the model parameters given different test lengths, sample sizes, prior specifications, and/or correlations for these models, the key motivation is to provide researchers and practitioners with general guidelines for estimating a unidimensional IRT (UIRT) model and a multi-unidimensional IRT model. The results from the present study suggest that NUTS is as effective as Gibbs sampling at parameter estimation under most conditions for the 2PL IRT models. Findings also shed light on the use of the two MCMC algorithms with more complex IRT models.
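For reference, the 2PL item response function that both samplers target is P(correct | θ) = 1 / (1 + exp(-a(θ - b))). The sketch below evaluates it and a Bernoulli log-likelihood on simulated data; the person and item parameter values are hypothetical and the MCMC machinery itself is not shown.

    import numpy as np

    def p_correct(theta, a, b):
        """2PL item response function."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def log_likelihood(responses, theta, a, b):
        """Bernoulli log-likelihood of a person x item response matrix."""
        p = p_correct(theta[:, None], a[None, :], b[None, :])
        return np.sum(responses * np.log(p) + (1 - responses) * np.log1p(-p))

    rng = np.random.default_rng(0)
    theta = rng.standard_normal(500)                              # person abilities
    a, b = rng.uniform(0.5, 2.0, 20), rng.standard_normal(20)     # item discriminations and difficulties
    responses = rng.random((500, 20)) < p_correct(theta[:, None], a[None, :], b[None, :])
    print(log_likelihood(responses.astype(float), theta, a, b))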
APA, Harvard, Vancouver, ISO, and other styles
40

Wu, Xinying. "Reliability Assessment of a Continuous-state Fuel Cell Stack System with Multiple Degrading Components." Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1556794664723115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Afrifa-Yamoah, Ebenezer. "Imputation, modelling and optimal sampling design for digital camera data in recreational fisheries monitoring." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2021. https://ro.ecu.edu.au/theses/2387.

Full text
Abstract:
Digital camera monitoring has evolved as an active application-oriented scheme to help address questions in areas such as fisheries, ecology, computer vision, artificial intelligence, and criminology. In recreational fisheries research, digital camera monitoring has become a viable option for probability-based survey methods, and is also used for corroborative and validation purposes. In comparison to onsite surveys (e.g. boat ramp surveys), digital cameras provide a cost-effective method of monitoring boating activity and fishing effort, including night-time fishing activities. However, there are challenges in the use of digital camera monitoring that need to be resolved. Notably, missing data problems and the cost of data interpretation are among the most pertinent. This study provides relevant statistical support to address these challenges of digital camera monitoring of boating effort, to improve its utility to enhance recreational fisheries management in Western Australia and elsewhere, with capacity to extend to other areas of application. Digital cameras can provide continuous recordings of boating and other recreational fishing activities; however, interruptions of camera operations can lead to significant gaps within the data. To fill these gaps, some climatic and other temporal classification variables were considered as predictors of boating effort (defined as number of powerboat launches and retrievals). A generalized linear mixed effect model built on fully-conditional specification multiple imputation framework was considered to fill in the gaps in the camera dataset. Specifically, the zero-inflated Poisson model was found to satisfactorily impute plausible values for missing observations for varied durations of outages in the digital camera monitoring data of recreational boating effort. Additional modelling options were explored to guide both short- and long-term forecasting of boating activity and to support management decisions in monitoring recreational fisheries. Autoregressive conditional Poisson (ACP) and integer-valued autoregressive (INAR) models were identified as useful time series models for predicting short-term behaviour of such data. In Western Australia, digital camera monitoring data that coincide with 12-month state-wide boat-based surveys (now conducted on a triennial basis) have been read but the periods between the surveys have not been read. A Bayesian regression framework was applied to describe the temporal distribution of recreational boating effort using climatic and temporally classified variables to help construct data for such missing periods. This can potentially provide a useful cost-saving alternative of obtaining continuous time series data on boating effort. Finally, data from digital camera monitoring are often manually interpreted and the associated cost can be substantial, especially if multiple sites are involved. Empirical support for low-level monitoring schemes for digital camera has been provided. It was found that manual interpretation of camera footage for 40% of the days within a year can be deemed as an adequate level of sampling effort to obtain unbiased, precise and accurate estimates to meet broad management objectives. A well-balanced low-level monitoring scheme will ultimately reduce the cost of manual interpretation and produce unbiased estimates of recreational fishing indexes from digital camera surveys.
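A minimal sketch of the kind of draw a zero-inflated Poisson imputation model produces for a missing day is shown below. The fitted mean, zero-inflation probability and number of imputations are hypothetical, and a full fully-conditional-specification procedure would refit these values at each iteration rather than fix them.

    import numpy as np

    rng = np.random.default_rng(0)

    def draw_zip(mean_counts, zero_prob, rng, size=None):
        """Draw from a zero-inflated Poisson: structural zero with prob. zero_prob,
        otherwise Poisson(mean_counts)."""
        structural_zero = rng.random(size) < zero_prob
        return np.where(structural_zero, 0, rng.poisson(mean_counts, size))

    # Hypothetical fitted values for a missing day: expected 35 launches,
    # 10% chance of a structural zero (e.g. ramp closed or severe weather).
    imputations = draw_zip(35.0, 0.10, rng, size=5)   # 5 multiple imputations
    print(imputations)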
APA, Harvard, Vancouver, ISO, and other styles
42

Zuazo, Pablo [Verfasser], Klaus [Akademischer Betreuer] Butterbach-Bahl, and Heinz [Akademischer Betreuer] Rennenberg. "Development of a fully automated soil incubation and gas sampling system for quantifying trace gas emission pulses from soils at high temporal resolution." Freiburg : Universität, 2016. http://d-nb.info/1129080730/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Andersson, Oskar. "Avskiljning av inert material från avfallsbränsle : En fältstudie av förbättrad RDF-produktion på bränsleberedningen i Västerås." Thesis, Mälardalens högskola, Akademin för ekonomi, samhälle och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-35780.

Full text
Abstract:
Samtidigt som världens energiproduktion till stor del baseras på förbränning av fossila bränslen behandlas enorma mängder avfall genom deponering. Ökad energiåtervinning av avfall kan bidra till att minska världens utsläpp av växthusgaser. Då avfall bör ses som en resurs är det dock viktigt med en effektiv energiåtervinning. Förbränning i fluidbäddspanna möjliggör god förbränning och hög verkningsgrad men kräver ett finfördelat avfall med lågt innehåll av inert (icke brännbart) material, så kallat RDF. Därför behöver avfallet beredas innan förbränning. En effektiv och välfungerande beredning av avfallsbränsle möjliggör resurseffektiv avfallshantering av utsorterade fraktioner samt effektiv förbränning genom hög bränslekvalitet. Mälarenergis panna 6 på kraftvärmeverket i Västerås är en avfallseldad CFB-panna med bränsleeffekt på omkring 170 MW, vilket motsvarar omkring 50 ton avfall per timme. På den tillhörande bränsleberedningen produceras avfallsbränsle, RDF, i tre beredningslinjer genom att avfallet krossas och olika typer av inert material avskiljs och bildar rejekt från anläggningen.  Magnetisk metall avskiljs med magnetavskiljare, icke-magnetisk metall avskiljs med virvelströmsavskiljare och en tungfraktion bestående av bland annat sten och glas avskiljs med vindsikt. Kvaliteten på avskiljningen är dock bristfällig vilket leder till högt innehåll av inert material i bränslet och högt innehåll av brännbart material i de avskilda fraktionerna. Dessa två problem orsakar kostnader och miljöpåverkan som skulle kunna minskas. Syftet med detta examensarbete var att undersöka vilka faktorer som påverkar avskiljningen av inert material från avfallsbränsle för förbränning i fluidbäddspanna samt ge förslag på åtgärder som kan leda till förbättrad avskiljning. Detta har undersökts genom en fältstudie på den aktuella bränsleberedningen. För att insamla kunskap om bränsleberedningsprocessen och problembilden genomfördes en kartläggning av avskiljningen. Utifrån detta identifierades faktorer som kan påverka avskiljningen. För att ytterligare undersöka vad som påverkar avskiljningsprocessen genomfördes ett antal provtagningar av avskiljningen. En anpassad metod för provtagning av kvaliteten på avskiljningen genom plockanalys togs fram. Sammanlagt genomfördes nio provtagningar under olika förutsättningar. En ny typ av vindsikt testades också för att undersöka hur en investering skulle kunna förbättra avskiljningen. Vindsikten testades utifrån två alternativ av placering. Utifrån resultatet av kartläggningen identifierades fem faktorer som tros påverka avskiljningen. Dessa faktorer är det inkommande avfallet och dess egenskaper, materialflödets storlek genom produktionslinjen, ojämnt materialflöde genom magnetavskiljaren, tillbakakastande turbulens i vindsikten och fastnande material på spjället i vindsikten. Resultatet från de genomförda provtagningarna av kvaliteten på avskiljningen bekräftar att det inkommande avfallet samt materialflödets storlek genom produktionslinjen tros ha stor påverkan på samtliga avskiljare. Då den nya typen av vindsikt testades för att placeras i beredningslinjen visades ingen utmärkande förbättring jämfört med de befintliga vindsiktarna. Då den testades som andra steget i en två-stegs vindsiktning visade däremot resultatet potential att uppnå förbättrad avskiljning. Resultatet visade att två-stegs vindsiktningen har potential att minska mängden tungfraktionsrejekt med cirka 30 – 50 %. 
Det inerta innehållet i utgående lättfraktion var dock 6 – 8 % vilket motsvarar en höjning av det inerta innehållet i den totala mängden RDF på cirka 0,5 procentenheter. Dock medför en två-stegs vindsiktning att mer material kan siktas ut i vindsiktarna i beredningslinjerna vilket därmed skulle kunna ge en minskning av den totala mängden inert material i RDF. Som slutsats dras att investeringen i ny vindsikt för att skapa en två-stegs vindsiktning skulle kunna ge förbättrad avskiljning. Den nya vindsikten kan med fördel efterföljas av ytterligare avskiljning eftersom mängden inert material i RDF är relativt koncentrerat där. Dock bör en vidare utredning om kostnader och besparingspotential genomföras innan investeringen kan föreslås som åtgärd. Två typer av enklare konstruktioner föreslås för att åtgärda tre av de faktorer som identifierats. En konstruktion för att jämna ut materialflödet innan magnetavskiljaren samt en konstruktion för att förändra luftflödet i vindsikten. Att minska materialflödet genom linjerna föreslås som en viktig åtgärd för att förbättra avskiljningen. Detta kan åstadkommas genom att fördela RDF-produktionen så jämnt som möjligt på produktionslinjerna samt att sprida ut produktionen jämnt över tid. Detta kräver en mer aktiv planering av produktionen samt minimering av stopptider. En viktig slutsats som har dragits är att det inkommande avfallet varierar kraftigt och har stor inverkan på avskiljningsprocessen. En åtgärd som föreslås för att ge förbättrad avskiljning är att en regelbunden kontroll och variation av processen bör införas. Detta föreslås ske genom uttag och kontroll av RDF och rejekt från beredningslinjerna tillsammans med en bedömning av det inkommande avfallet. Informationen bör sedan ligga till grund för ett beslut om hur processen ska styras för att säkerställa en stabil kvalitet på avskiljningen.
Energy recovery of waste has great potential for decreasing global greenhouse gas emissions. Combustion in fluidized bed boilers gives high resource efficiency but demands a comminuted fuel with low content of inert (non-combustible) material, a so-called refuse derived fuel (RDF). A well-functioning separation process as part of the RDF production allows efficient combustion as well as efficient treatment of the separated materials. The purpose of this degree project is to investigate which factors influence the separation of inert material from waste for combustion in a fluidized bed boiler and how the separation can be improved. This is investigated through a field study of a fuel-preparation plant in Sweden. The separation process has been examined visually and by experiments based on sampling and manual sorting of waste fractions. The results show five factors that are assumed to influence the sorting. Three of them are suggested to be addressed by simple constructions. One factor that has a great impact is the incoming waste, which varies to a large extent. A measure suggested to improve the separation is a recurrent check of the RDF quality and the reject quality. Combined with information about the incoming waste, this should form the basis for recurrent adjustments of the plant to achieve a more stable separation quality. Another suggested measure is to decrease the size of the material flow through the production line, since the flow size is assumed to have an important impact on the separation. The decrease can be achieved by distributing production more evenly over time and across the production lines, which requires more active production planning and minimization of production stops. As part of the work, a new wind sifter has also been tested. It shows good potential for improving the separation if installed to create a two-step wind sifting. However, since a new wind sifter implies a high investment, a study of the costs and saving potential is required before it can be suggested as a measure.
APA, Harvard, Vancouver, ISO, and other styles
44

BARUA, SUKHENDU LAL. "APPLICATION OF CONDITIONAL SIMULATION MODEL TO RUN-OF-MINE COAL SAMPLING FREQUENCY DETERMINATION AND COAL QUALITY CONTROL AT THE POWER PLANT (BLENDING, GOAL PROGRAMMING, MICROCOMPUTER)." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/187940.

Full text
Abstract:
Run-of-mine (ROM) coal sampling is one of the most important factors in determining the disposition of ROM coal for an overall emission control strategy. Determination of the amount of sample, or still better, the frequency of ROM coal sampling is thus essential to the analysis of overall emission control strategies. A simulation model of a portion of the Upper Freeport coal seam in western Pennsylvania was developed employing conditional simulation. On the simulated deposit, different mining methods were simulated to generate ROM coal data. ROM coal data was statistically analyzed to determine the sampling frequency. Two schemes were suggested: (1) the use of geostatistical techniques if there is spatial correlation in ROM coal quality, and (2) the use of classical statistics if the spatial correlation in ROM coal quality is not present. Conditions under which spatial correlation in ROM coal quality can be expected are also examined. To link the ROM coal and coals from other sources to coal stockpiles and subsequently to solve coal blending problems, where varying qualities of stockpiled coals are normally used, an interactive computer program was developed. Simple file-handling, for stockpiling problems, and multi-objective goal programming technique, for blending problems, provided their solutions. The computer program was made suitable for use on both minicomputer and microcomputer. Menu-driven and interactive capabilities give this program a high level of flexibility that is needed to analyze and solve stockpiling and blending problems at the power plant.
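Under the classical-statistics branch of the scheme (no spatial correlation in ROM coal quality), the required sampling frequency follows from a standard sample-size formula for estimating a mean within a target precision. The sulphur variability and precision target below are hypothetical, used only to show the arithmetic.

    import math

    def samples_needed(std_dev, margin, z=1.96):
        """Number of independent ROM samples so the mean is within +/- margin at ~95% confidence."""
        return math.ceil((z * std_dev / margin) ** 2)

    # Hypothetical ROM coal quality: sulphur standard deviation 0.30 %, target precision +/- 0.05 %.
    n = samples_needed(0.30, 0.05)
    print(f"{n} samples per control period")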
APA, Harvard, Vancouver, ISO, and other styles
45

Ricosset, Thomas. "Signature électronique basée sur les réseaux euclidiens et échantillonnage selon une loi normale discrète." Thesis, Toulouse, INPT, 2018. http://www.theses.fr/2018INPT0106/document.

Full text
Abstract:
La cryptographie à base de réseaux euclidiens a généré un vif intérêt durant les deux dernières décennies grâce à des propriétés intéressantes, incluant une conjecture de résistance à l’ordinateur quantique, de fortes garanties de sécurité provenant d’hypothèses de difficulté sur le pire cas et la construction de schémas de chiffrement pleinement homomorphes. Cela dit, bien qu’elle soit cruciale à bon nombre de schémas à base de réseaux euclidiens, la génération de bruit gaussien reste peu étudiée et continue de limiter l’efficacité de cette cryptographie nouvelle. Cette thèse s’attelle dans un premier temps à améliorer l’efficacité des générateurs de bruit gaussien pour les signatures hache-puis-signe à base de réseaux euclidiens. Nous proposons un nouvel algorithme non-centré, avec un compromis temps-mémoire flexible, aussi rapide que sa variante centrée pour des tables pré-calculées de tailles acceptables en pratique. Nous employons également la divergence de Rényi afin de réduire la précision nécessaire à la double précision standard. Notre second propos tient à construire Falcon, un nouveau schéma de signature hache-puis-signe, basé sur la méthode théorique de Gentry, Peikert et Vaikuntanathan pour les signatures à base de réseaux euclidiens. Nous instancions cette méthode sur les réseaux NTRU avec un nouvel algorithme de génération de trappes.
Lattice-based cryptography has generated considerable interest in the last two decades due to attractive features, including conjectured security against quantum attacks, strong security guarantees from worst-case hardness assumptions and constructions of fully homomorphic encryption schemes. On the other hand, even though it is a crucial part of many lattice-based schemes, Gaussian sampling is still lagging and continues to limit the effectiveness of this new cryptography. The first goal of this thesis is to improve the efficiency of Gaussian sampling for lattice-based hash-and-sign signature schemes. We propose a non-centered algorithm, with a flexible time-memory tradeoff, as fast as its centered variant for practicable sizes of precomputed tables. We also use the Rényi divergence to bound the precision requirement to the standard double precision. Our second objective is to construct Falcon, a new hash-and-sign signature scheme, based on the theoretical framework of Gentry, Peikert and Vaikuntanathan for lattice-based signatures. We instantiate that framework over NTRU lattices with a new trapdoor sampler.
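A minimal rejection sampler for the discrete Gaussian over the integers with an arbitrary centre is sketched below to make the primitive concrete. It is neither the optimised table-based algorithm of the thesis nor constant-time, and the tail-cut parameter is an assumption chosen for illustration.

    import math, random

    def discrete_gaussian(center, sigma, tail=12):
        """Rejection-sample z from the discrete Gaussian D_{Z, sigma, center} on a
        bounded integer range. Straightforward and NOT constant-time: illustration only."""
        lo, hi = math.floor(center - tail * sigma), math.ceil(center + tail * sigma)
        while True:
            z = random.randint(lo, hi)                                  # uniform proposal
            rho = math.exp(-((z - center) ** 2) / (2 * sigma ** 2))     # unnormalised Gaussian weight
            if random.random() < rho:
                return z

    print([discrete_gaussian(0.37, 1.5) for _ in range(10)])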
APA, Harvard, Vancouver, ISO, and other styles
46

Křížek, Miroslav. "Měřicí modul s A/D převodníkem se současným vzorkováním." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217840.

Full text
Abstract:
This work presents the design of a module for the acquisition of analog acoustic signals from sensors. A precision 24-bit A/D converter, the ADS1287 from Texas Instruments, is used to digitize these signals. A 32-bit AT91SAM7S64 microprocessor from Atmel, which has a built-in USB interface, controls the A/D converter and sends the digitized data to a PC. Using development boards for the microprocessor and the A/D converter, firmware for the microprocessor was written in the IAR Embedded Workbench IDE 5.0 environment and a simple PC application was written in Borland C++ Builder; both programs are in C++. The sampling rate is 26 kHz. On this basis, the USB-ADC module with the microprocessor and the A/D converter was realized.
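A quick arithmetic check (raw payload only, ignoring USB protocol overhead) shows that a 24-bit stream at 26 kHz fits easily within the 12 Mbit/s USB full-speed line rate used by this class of microcontroller; the figures below are assumed from the abstract, not taken from the thesis itself.

    bits_per_sample = 24
    sample_rate_hz = 26_000
    payload_bps = bits_per_sample * sample_rate_hz      # 624 kbit/s of raw data
    usb_full_speed_bps = 12_000_000                      # USB full-speed signalling rate
    print(payload_bps, f"{payload_bps / usb_full_speed_bps:.1%} of the full-speed line rate")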
APA, Harvard, Vancouver, ISO, and other styles
47

Säll, Erik. "Design of a Low Power, High Performance Track-and-Hold Circuit in a 0.18µm CMOS Technology." Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1353.

Full text
Abstract:

This master thesis describes the design of a track-and-hold (T&H) circuit with 10-bit resolution, 80 MS/s and 30 MHz bandwidth. It is designed in a 0.18 µm CMOS process with a supply voltage of 1.8 V. The circuit is supposed to work together with a 10-bit pipelined analog to digital converter.

A switched capacitor topology is used for the T&H circuit and the amplifier is a folded cascode OTA with regulated cascode. The switches used are of transmission gate type.

The thesis presents the design decisions, design phase and the theory needed to understand the design decisions and the considerations in the design phase.

The results are based on circuit-level SPICE simulations in Cadence with foundry-provided BSIM3 transistor models. They show that the circuit has 10-bit resolution and 7.6 mW power consumption for the worst-case frequency of 30 MHz. The requirements on the dynamic performance are all fulfilled, most of them with large margins.
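A rough companion calculation, not taken from the thesis and with an assumed 1 V full-scale at 300 K, indicates the sampling-capacitance scale at which kT/C noise stays below the 10-bit quantisation noise; real designs add considerable margin on top of this minimum.

    import math

    k_boltzmann = 1.38e-23
    T = 300.0                       # K
    full_scale = 1.0                # V, assumed full-scale range
    lsb = full_scale / 2**10
    q_noise_rms = lsb / math.sqrt(12)              # quantisation noise of an ideal 10-bit converter
    c_min = k_boltzmann * T / q_noise_rms**2       # capacitance where kT/C noise equals it
    print(f"LSB = {lsb*1e3:.2f} mV, C >= {c_min*1e12:.2f} pF (before design margin)")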

APA, Harvard, Vancouver, ISO, and other styles
48

Pardon, Gaspard. "From Macro to Nano : Electrokinetic Transport and Surface Control." Doctoral thesis, KTH, Mikro- och nanosystemteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-144994.

Full text
Abstract:
Today, the growing and aging population, and the rise of new global threats on human health puts an increasing demand on the healthcare system and calls for preventive actions. To make existing medical treatments more efficient and widely accessible and to prevent the emergence of new threats such as drug-resistant bacteria, improved diagnostic technologies are needed. Potential solutions to address these medical challenges could come from the development of novel lab-on-chip (LoC) for point-of-care (PoC) diagnostics. At the same time, the increasing demand for sustainable energy calls for the development of novel approaches for energy conversion and storage systems (ECS), to which micro- and nanotechnologies could also contribute. This thesis has for objective to contribute to these developments and presents the results of interdisciplinary research at the crossing of three disciplines of physics and engineering: electrokinetic transport in fluids, manufacturing of micro- and nanofluidic systems, and surface control and modification. By combining knowledge from each of these disciplines, novel solutions and functionalities were developed at the macro-, micro- and nanoscale, towards applications in PoC diagnostics and ECS systems. At the macroscale, electrokinetic transport was applied to the development of a novel PoC sampler for the efficient capture of exhaled breath aerosol onto a microfluidic platform. At the microscale, several methods for polymer micromanufacturing and surface modification were developed. Using direct photolithography in off-stoichiometry thiol-ene (OSTE) polymers, a novel manufacturing method for mold-free rapid prototyping of microfluidic devices was developed. An investigation of the photolithography of OSTE polymers revealed that a novel photopatterning mechanism arises from the off-stoichiometric polymer formulation. Using photografting on OSTE surfaces, a novel surface modification method was developed for the photopatterning of the surface energy. Finally, a novel method was developed for single-step microstructuring and micropatterning of surface energy, using a molecular self-alignment process resulting in spontaneous mimicking, in the replica, of the surface energy of the mold. At the nanoscale, several solutions for the study of electrokinetic transport toward selective biofiltration and energy conversion were developed. A novel, comprehensive model was developed for electrostatic gating of the electrokinetic transport in nanofluidics. A novel method for the manufacturing of electrostatically-gated nanofluidic membranes was developed, using atomic layer deposition (ALD) in deep anodic alumina oxide (AAO) nanopores. Finally, a preliminary investigation of the nanopatterning of OSTE polymers was performed for the manufacturing of polymer nanofluidic devices.

APA, Harvard, Vancouver, ISO, and other styles
49

Hsieh, Hung-Chih, and 謝鴻志. "The full-field heterodyne interferometry with novel sampling scheme." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/00396438979331007005.

Full text
Abstract:
Doctoral dissertation
National Chiao Tung University
Department of Photonics
Academic year 99 (2010–2011)
To apply heterodyne interferometry to full-field measurement with a digital camera, the relations between the camera sampling frequency and the heterodyne frequency are investigated based on the procedures used to derive the associated phases. We find that full-field heterodyne interferometry can be operated whether or not the sampling conditions satisfy the Nyquist sampling theorem. Measurements of a large step height and of a refractive index distribution are first performed under conventional Nyquist sampling conditions. Optimal conditions for a commonly used CCD camera are then proposed to reduce the cost, and the full-field phase retardation distribution of a wave plate is measured to demonstrate their validity. To measure the height distribution, an alternative full-field interferometric profilometry is proposed that combines two-wavelength interferometry and heterodyne interferometry: a collimated heterodyne light is introduced into a modified Twyman-Green interferometer, and its phase and the surface profile are obtained. For the full-field refractive index distribution, circularly polarized heterodyne light is incident obliquely on the sample; the reflected light passes through an analyzer, the associated phases are derived from the interference signals, and substituting these data into equations derived from Fresnel's equations yields the full-field refractive index distribution of the sample. The procedures for deriving the associated phases from a series of recorded frames are carried out, and two optimal sampling conditions for a commonly used CCD camera are proposed. The full-field phase retardation of a wave plate is measured using common-path heterodyne interferometry to demonstrate validity. The methods offer easy operation, high resolution and rapid measurement.
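The phase-derivation step described above can be illustrated, for the case where the camera frame rate satisfies the Nyquist condition, by a least-squares fit of a sinusoid at the known beat frequency to the frame samples of one pixel. The frame rate, beat frequency and noise level below are hypothetical and the sketch is not the thesis's exact algorithm.

    import numpy as np

    def beat_phase(frames, frame_rate_hz, beat_hz):
        """Least-squares phase (rad) of a cosine at beat_hz from per-pixel frame samples."""
        t = np.arange(len(frames)) / frame_rate_hz
        design = np.column_stack([np.ones_like(t),
                                  np.cos(2*np.pi*beat_hz*t),
                                  np.sin(2*np.pi*beat_hz*t)])
        _, c, s = np.linalg.lstsq(design, frames, rcond=None)[0]
        return np.arctan2(-s, c)

    # Hypothetical numbers: 30 frames at 120 frame/s, 20 Hz beat, true phase 1.0 rad.
    rate, beat, true_phase = 120.0, 20.0, 1.0
    t = np.arange(30) / rate
    frames = 0.5 + 0.3*np.cos(2*np.pi*beat*t + true_phase) \
             + 0.01*np.random.default_rng(0).standard_normal(30)
    print(beat_phase(frames, rate, beat))   # close to 1.0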
APA, Harvard, Vancouver, ISO, and other styles
50

"Glauber Dynamics for Sampling an Edge Colouring of Full Homogeneous Trees." 2016. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1292280.

Full text
Abstract:
我們研究用 Glauber Dynamics 隨機抽出圖頂點著色的問題。這是個有趣的問題因為 Jerrum 證明如果能隨機抽出頂點著色,則可以找出圖形著色數目的近似值。我們希望有 多項式時間內的取樣器。Glauber Dynamics 用馬可夫鏈抽出圖形著色並被大量研究。它 的狀態空間是頂點著色的集合。每一步我們隨機抽出一種顏色 c 和頂點 u,並嘗試把 u 的 顏色改成 c。如果有衝突則停留在現在的顏色。假設圖的最大度是 d,顏色數目是 q。這 問題其中一個目標是証明當 q 不少於 d + 2 時,混合時間是多項式時間。現在最好的成果 是 Vigoda 提出的對於所有圖,混合時間是多項式時間如果 q 不少於 11d/6。對於某一種 類的圖,我們可以用更少的顏色。其中一個例子是有大周長和大的最大度的圖。
在這篇論文,我們集中在用 Glauber Dynamics 隨機抽出正則樹邊著色的問題。這相當於 用 Glauber Dynamics 隨機抽出正則樹的線圖的頂點著色。我們研究這特殊例子因為線圖 的周長小因此不能直接應用之前的結果。目前最好的結果是 vigoda 的多項式混合時間如 果 q 不少於 11d/3。我們的主要成果是多項式混合時間如果 q 不少於 2d。
We study the problem of sampling a graph colouring using Glauber dynamics. This is an interesting problem because Jerrum showed that we can approximate the number of proper colourings if we can sample a colouring nearly uniformly, so we want a sampler with polynomial running time. Glauber dynamics is a natural Markov chain for sampling a graph colouring and has been studied extensively. Its state space is the set of proper colourings. In each step, we sample a colour c and a node u uniformly at random, and update u to colour c if the resulting colouring is still proper; otherwise we stay at the current colouring. For a graph with maximum degree d, let q be the number of colours. One important goal for this problem is proving polynomial mixing time whenever q ≥ d + 2. For general graphs, the best known result is polynomial mixing time for q ≥ 11d/6, due to Vigoda. For some classes of graphs, fewer colours suffice; one example is graphs with large girth and large maximum degree, for which many results are known.
In this thesis, we focus on the mixing time of Glauber dynamics for sampling an edge colouring of a full d-homogeneous tree, which is equivalent to sampling a proper vertex colouring of the line graph of the tree. We consider this special case because the line graph has small girth, so previous results and techniques do not apply directly. The best previous result is polynomial mixing time for q ≥ 11d/3, by Vigoda. Our main result is polynomial mixing time for q ≥ 2d. Our proof is based on the multicommodity flow argument of Sinclair.
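One update of the Glauber dynamics studied here is easy to state in code. The sketch below runs it with q = 2d colours on the line graph of the star K_{1,3} (the smallest tree with maximum degree 3), purely as an illustration of the chain itself, not of the mixing-time proof; the graph and step count are chosen only for the example.

    import random

    def glauber_step(adj, colouring, q, rng):
        """One Glauber update: pick a vertex and a colour uniformly; recolour if still proper."""
        v = rng.randrange(len(adj))
        c = rng.randrange(q)
        if all(colouring[u] != c for u in adj[v]):
            colouring[v] = c

    # Line graph of the star K_{1,3} (a tree with d = 3): its three edges pairwise
    # share the root, so the line graph is a triangle. Use q = 2d = 6 colours.
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
    colouring = [0, 1, 2]                      # an initial proper colouring
    rng = random.Random(0)
    for _ in range(10_000):
        glauber_step(adj, colouring, 6, rng)
    print(colouring)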
Poon, Chun Yeung.
Thesis M.Phil. Chinese University of Hong Kong 2016.
Includes bibliographical references (leaves ).
Abstracts also in Chinese.
Title from PDF title page (viewed on …).
Detailed summary in vernacular field only.
APA, Harvard, Vancouver, ISO, and other styles
