Dissertations / Theses on the topic 'Sampling of Single Character'

To see other types of publications on this topic, follow the link: Sampling of Single Character.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Sampling of Single Character.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Xu, Nuo. "A Monte Carlo Study of Single Imputation in Survey Sampling." Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-30541.

Full text
Abstract:
Missing values in sample surveys can lead to biased estimation if not treated. Imputation has been proposed as a popular way to deal with missing values. In this paper, based on Särndal's (1994, 2005) research, a Monte Carlo simulation is conducted to study how the estimators work in different situations and how different imputation methods work for different response distributions.
APA, Harvard, Vancouver, ISO, and other styles
2

Rogers, Sylvia Caren 1957. "Efficient sampling for dynamic single-photon emission computed tomographic imaging." Thesis, The University of Arizona, 1990. http://hdl.handle.net/10150/278605.

Full text
Abstract:
Our goal is to develop a single-photon emission computed tomography (SPECT) system for dynamic cardiac imaging so that heart disease may be more accurately evaluated. We have developed multiple, stationary, modular scintillation cameras that allow for dynamic imaging because of large detector area, large collection efficiency, high count-rate capability, and no motion of detector, collimator, or aperture. We make use of coded-aperture pinhole arrays because they increase photon-collection efficiency. The coded apertures allow for overlapping projections or multiplexing of an object onto the detector face. We have designed a novel collimation system that allows for an increased number of pinhole projections without substantial multiplexing. This new method is called "subslicing". We verified the subslice concept both in computer simulation and with our 16-module ring imaging system. Comparison of results with and without subslicing shows that the new approach substantially reduces artifacts in the image reconstruction. (Abstract shortened with permission of author.)
APA, Harvard, Vancouver, ISO, and other styles
3

Meresmana, Jelena. "Towards a new instrument for single aerosol particle sampling and characterization." Thesis, University of Bristol, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.508083.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Parasoglou, Prodromos, Andrew J. Sederman, John Rasburn, Hugh Powell, and Michael L. Johns. "Optimal k-space sampling for single point imaging of transient systems." Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-192138.

Full text
Abstract:
A modification of the Single Point Imaging (SPI) technique is presented. The novel approach aims at increasing the sensitivity of the method, and hence the resulting signal-to-noise ratio (SNR), for a given total time interval. With prior knowledge of the shape of the object under study, a selective sparse k-space sampling can then be used to follow dynamic phenomena of transient systems, in this case the absorption of moisture by a cereal-based wafer material. Further improvement in the image quality is achieved when the un-sampled k-space points are replaced by those of the initial dry or the final wet sample, acquired at the beginning and the end of the acquisition respectively, when there are no acquisition time limitations.
APA, Harvard, Vancouver, ISO, and other styles
5

Parasoglou, Prodromos, Andrew J. Sederman, John Rasburn, Hugh Powell, and Michael L. Johns. "Optimal k-space sampling for single point imaging of transient systems." Diffusion fundamentals 10 (2009) 13, S. 1-3, 2009. https://ul.qucosa.de/id/qucosa%3A14104.

Full text
Abstract:
A modification of the Single Point Imaging (SPI) technique is presented. The novel approach aims at increasing the sensitivity of the method, and hence the resulting signal-to-noise ratio (SNR), for a given total time interval. With prior knowledge of the shape of the object under study, a selective sparse k-space sampling can then be used to follow dynamic phenomena of transient systems, in this case the absorption of moisture by a cereal-based wafer material. Further improvement in the image quality is achieved when the un-sampled k-space points are replaced by those of the initial dry or the final wet sample, acquired at the beginning and the end of the acquisition respectively, when there are no acquisition time limitations.
APA, Harvard, Vancouver, ISO, and other styles
6

Newcombe, Guy Charles Fernley. "Low energy electronic excitations in CoSi₂ and YNi₃." Thesis, University of Cambridge, 1990. https://www.repository.cam.ac.uk/handle/1810/283666.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kolano, Michael [Verfasser]. "Design and characterization of a single-laser polarization-controlled optical sampling system / Michael Kolano." München : Verlag Dr. Hut, 2019. http://d-nb.info/1198542748/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Russell, Carrie L. "Comparison of culturable/viable airborne mold and total mold spore sampling results in single-family dwellings." Virtual Press, 2002. http://liblink.bsu.edu/uhtbin/catkey/1233192.

Full text
Abstract:
This study was conducted to determine and compare indoor mold concentrations of total mold, five target taxa, and unidentified mold taxa using culturable/viable mold sampling (on DG-18 and MEA) and total mold spore sampling concurrently. Samples were taken within two locations of 22 single-family dwellings. Paired comparisons of culturable/viable mold concentrations revealed that DG-18 samples had significantly higher total colony counts than MEA samples and near significantly higher counts of Aspergillus. Total mold spore concentrations were an average of 16-21 times greater than culturable/viable mold concentrations. The use of both sampling techniques concurrently allowed apparent viability ratios to be calculated. Significant differences in apparent viability were observed on the two media for total mold and Cladosporium, and near significance for Aspergillus; higher ratios were observed using DG-18. These studies indicate that DG-18 may be a superior medium for culturable/viable mold sampling and that significant apparent viability differences exist among mold taxa quantified.
Department of Natural Resources and Environmental Management
APA, Harvard, Vancouver, ISO, and other styles
9

Schroeder, Matthew William. "Association of Campylobacter spp. Levels between Chicken Grow-Out Environmental Samples and Processed Carcasses." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/32169.

Full text
Abstract:
Campylobacter spp. have been isolated from live poultry, production environment, processing facility, and raw poultry products. The detection of Campylobacter using both quantitative and qualitative techniques would provide a more accurate assessment of pre- or post harvest contamination. Environmental sampling in a poultry grow-out house, combined with carcass rinse sampling from the same flock may provide a relative assessment of Campylobacter contamination and transmission. Air samples, fecal/litter samples, and feed pan/drink line samples were collected from four commercial chicken grow-out houses. Birds from the sampled house were the first flock slaughtered the following day, and were sampled by post-chill carcass rinses. Quantitative (direct plating) and qualitative (direct plating after enrichment step) detection methods were used to determine Campylobacter contamination in each environmental sample and carcass rinse. Campylobacter, from post-enrichment samples, was detected from 27% (32/120) of house environmental samples and 37.5% (45/120) of carcass rinse samples. All sample types from each house included at least one positive sample except the house 2 air samples. Samples from house 1 and associated carcass rinses accounted for the highest total of Campylobacter positives (29/60). The fewest number of Campylobacter positives, based on both house environmental (4/30) and carcass rinse samples (8/30) were detected from flock B. Environmental sampling techniques provide a non-invasive and efficient way to test for foodborne pathogens. Correlating qualitative or quantitative Campylobacter levels from house and plant samples may enable the scheduled processing of flocks with lower pathogen incidence or concentrations, as a way to reduce post-slaughter pathogen transmission.
Master of Science in Life Sciences
APA, Harvard, Vancouver, ISO, and other styles
10

Vieira-Ribeiro, Simon A. (Simon Albert). "Single-IF DECT receiver architecture using a quadrature sub-sampling band-pass sigma-delta modulator." Dissertation, Carleton University, Systems and Computer Engineering. Ottawa, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
11

Good, Norman Markus. "Methods for estimating the component biomass of a single tree and a stand of trees using variable probability sampling techniques." Thesis, Queensland University of Technology, 2001. https://eprints.qut.edu.au/37097/1/37097_Good_2001.pdf.

Full text
Abstract:
This thesis developed multistage sampling methods for estimating the aggregate biomass of selected tree components, such as leaves, branches, trunk and total, in woodlands in central and western Queensland. To estimate the component biomass of a single tree randomised branch sampling (RBS) and importance sampling (IS) were trialed. RBS and IS were found to reduce the amount of time and effort to sample tree components in comparison with other standard destructive sampling methods such as ratio sampling, especially when sampling small components such as leaves and small twigs. However, RBS did not estimate leaf and small twig biomass to an acceptable degree of precision using current methods for creating path selection probabilities. In addition to providing an unbiased estimate of tree component biomass, individual estimates were used for developing allometric regression equations. Equations based on large components such as total biomass produced narrower confidence intervals than equations developed using ratio sampling. However, RBS does not estimate small component biomass such as leaves and small wood components with an acceptable degree of precision, and should be mainly used in conjunction with IS for estimating larger component biomass. A whole tree was completely enumerated to set up a sampling space with which RBS could be evaluated under a number of scenarios. To achieve a desired precision, RBS sample size and branch diameter exponents were varied, and the RBS method was simulated using both analytical and re-sampling methods. It was found that there is a significant amount of natural variation present when relating the biomass of small components to branch diameter, for example. This finding validates earlier decisions to question the efficacy of RBS for estimating small component biomass in eucalypt species. 
In addition, significant improvements can be made to increase the precision of RBS by increasing the number of samples taken, but more importantly by varying the exponent used for constructing selection probabilities. To further evaluate RBS on trees with growth forms differing from that enumerated, virtual trees were generated. These virtual trees were created using L-systems algebra. Decision rules for creating trees were based on easily measurable characteristics that influence a tree's growth and form. These characteristics included child-to-child and children-to-parent branch diameter relationships, branch length and branch taper. They were modelled using probability distributions of best fit. By varying the size of a tree and/or the variation in the model describing tree characteristics, it was possible to simulate the natural variation between trees of similar size and form. By creating visualisations of these trees, it is possible to determine by visual means whether RBS could be effectively applied to particular trees or tree species. Simulation also aided in identifying which characteristics most influenced the precision of RBS, namely branch length and branch taper. After evaluation of RBS/IS for estimating the component biomass of a single tree, methods for estimating the component biomass of a stand of trees (or plot) were developed and evaluated. A sampling scheme was developed which incorporated both model-based and design-based biomass estimation methods. This scheme clearly illustrated the strong and weak points associated with both approaches for estimating plot biomass. Using ratio sampling was more efficient than using RBS/IS in the field, especially for larger tree components. Probability proportional to size sampling (PPS), with size being the trunk diameter at breast height, generated estimates of component plot biomass that were comparable to those generated using model-based approaches.
The research did, however, indicate that PPS is more precise than the use of regression prediction (allometric) equations for estimating larger components such as trunk or total biomass, and that the precision increases in areas of greater biomass. Using more reliable auxiliary information for identifying suitable strata would reduce the amount of within-plot variation, thereby increasing precision. PPS had the added advantage of being unbiased and unhindered by the numerous assumptions applicable to the population of interest, as is the case with a model-based approach. The application of allometric equations in predicting the component biomass of tree species other than that for which the allometric was developed is problematic. Differences in wood density need to be taken into account, as well as differences in growth form and within-species variability, as outlined in the virtual tree simulations. However, the development and application of allometric prediction equations in local species-specific contexts is more desirable than PPS.
APA, Harvard, Vancouver, ISO, and other styles
12

Bryan, Paul David. "Accelerating microarchitectural simulation via statistical sampling principles." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/47715.

Full text
Abstract:
The design and evaluation of computer systems rely heavily upon simulation. Simulation is also a major bottleneck in the iterative design process. Applications that may be executed natively on physical systems in a matter of minutes may take weeks or months to simulate. As designs incorporate increasingly higher numbers of processor cores, it is expected the times required to simulate future systems will become an even greater issue. Simulation exhibits a tradeoff between speed and accuracy. By basing experimental procedures upon known statistical methods, the simulation of systems may be dramatically accelerated while retaining reliable methods to estimate error. This thesis focuses on the acceleration of simulation through statistical processes. The first two techniques discussed in this thesis focus on accelerating single-threaded simulation via cluster sampling. Cluster sampling extracts multiple groups of contiguous population elements to form a sample. This thesis introduces techniques to reduce sampling and non-sampling bias components, which must be reduced for sample measurements to be reliable. Non-sampling bias is reduced through the Reverse State Reconstruction algorithm, which removes ineffectual instructions from the skipped instruction stream between simulated clusters. Sampling bias is reduced via the Single Pass Sampling Regimen Design Process, which guides the user towards selected representative sampling regimens. Unfortunately, the extension of cluster sampling to include multi-threaded architectures is non-trivial and raises many interesting challenges. Overcoming these challenges will be discussed. This thesis also introduces thread skew, a useful metric that quantitatively measures the non-sampling bias associated with divergent thread progressions at the beginning of a sampling unit. 
Finally, the Barrier Interval Simulation method is discussed as a technique to dramatically decrease the simulation times of certain classes of multi-threaded programs. It segments a program into discrete intervals, separated by barriers, which are leveraged to avoid many of the challenges that prevent multi-threaded sampling.
APA, Harvard, Vancouver, ISO, and other styles
13

Agnew, Robert J. "Assessment of the variability of indoor viable airborne mold sampling using the Anderson N-6 single stage impactor." Oklahoma City : [s.n.], 2002. http://library.ouhsc.edu/epub/theses/Agnew-Robert-J.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Bergsten, Johannes. "Taxonomy, phylogeny, and secondary sexual character evolution of diving beetles, focusing on the genus Acilius." Doctoral thesis, Umeå : Univ., Ekologi, miljö och geovetenskap, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-527.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Hiner, Stephen W. "Analyses of Two Aspects of Study Design for Bioassessment With Benthic Macroinvertebrates: Single Versus Multiple Habitat Sampling and Taxonomic Identification Level." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/9716.

Full text
Abstract:
Bioassessment is the concept of evaluating the ecological condition of habitats by surveying the resident assemblages of living organisms. Conducting bioassessment with benthic macroinvertebrates is still evolving and continues to be refined. There are strongly divided opinions about study design, sampling methods, laboratory analyses, and data analysis. Two issues currently being debated about study design for bioassessment in streams were examined here: 1) what habitats within streams should be sampled; and 2) is it necessary to identify organisms to the species level? The influence of habitat sampling design and level of taxonomic identification on the interpretation of the ecological conditions of ten small streams in western Virginia was examined. Cattle watering and grazing heavily affected five of these streams (impaired sites). The other five streams, with no recent cattle activity or other impact by man, were considered to be reference sites because they were minimally impaired and represented best attainable conditions. Inferential and non-inferential statistical analyses concluded that the multiple habitat sampling design was more effective than a single habitat design (riffle only) at distinguishing impaired conditions, regardless of taxonomic level. It appeared that sampling design (riffle habitat versus multiple habitats) is more important than taxonomic identification level for distinguishing reference and impaired ecological conditions in this bioassessment study. All levels of taxonomic resolution studied showed that the macroinvertebrate assemblages at the reference and impaired sites were very different and that the assemblages at the impaired sites were adversely affected by perturbation. This study supported the sampling of multiple habitats and identification to the family level as a design for best determining the ecological condition of streams in bioassessment.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
16

Spring, Justin Benjamin. "Single photon generation and quantum computing with integrated photonics." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:b08937c7-ec87-47f8-b5ac-902673f87ce2.

Full text
Abstract:
Photonics has consistently played an important role in the investigation of quantum-enhanced technologies and the corresponding study of fundamental quantum phenomena. The majority of these experiments have relied on the free space propagation of light between bulk optical components. This relatively simple and flexible approach often provides the fastest route to small proof-of-principle demonstrations. Unfortunately, such experiments occupy significant space, are not inherently phase stable, and can exhibit significant scattering loss which severely limits their use. Integrated photonics offers a scalable route to building larger quantum states of light by surmounting these barriers. In the first half of this thesis, we describe the operation of on-chip heralded sources of single photons. Loss plays a critical role in determining whether many quantum technologies have any hope of outperforming their classical analogues. Minimizing loss leads us to choose Spontaneous Four-Wave Mixing (SFWM) in a silica waveguide for our source design; silica exhibits extremely low scattering loss and emission can be efficiently coupled to the silica chips and fibers that are widely used in quantum optics experiments. We show there is a straightforward route to maximizing heralded photon purity by minimizing the spectral correlations between emitted photon pairs. Fabrication of identical sources on a large scale is demonstrated by a series of high-visibility interference experiments. This architecture offers a promising route to the construction of nonclassical states of higher photon number by operating many on-chip SFWM sources in parallel. In the second half, we detail one of the first proof-of-principle demonstrations of a new intermediate model of quantum computation called boson sampling. 
While likely less powerful than a universal quantum computer, boson sampling machines appear significantly easier to build and may allow the first convincing demonstration of a quantum-enhanced computation in the not-distant future. Boson sampling requires a large interferometric network, which is challenging to build with bulk optics; we therefore perform our experiment on-chip. We model the effect of loss on our postselected experiment and implement a circuit characterization technique that accounts for this loss. Experimental imperfections, including higher-order emission from our photon pair sources and photon distinguishability, are modeled and found to explain the sampling error observed in our experiment.
APA, Harvard, Vancouver, ISO, and other styles
17

Teixeira, Filho Carlos Augusto. "Analysis of the effects of ionospheric sampling of reflection points near-path for high-frequency single-site-location direction finding systems." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA245950.

Full text
Abstract:
Thesis (M.S. in Systems Engineering (Electronic Warfare))--Naval Postgraduate School, December 1990.
Thesis Advisor(s): Adler, Richard W. Second Reader: Jauregui, Stephen. "December 1990." Description based on title screen as viewed on March 30, 2010. DTIC Descriptor(s): Ionosphere, Parameters, Electron Density, Ionospheric Disturbances, Theses, Estimates, Sampling, Value, Measurement, Paths. DTIC Identifier(s): Ionospheric Disturbances, Radio Direction Finders, Atmospheric Refraction, Theses. Author(s) subject terms: Single-Site-Location, Direction-Finding, High-Frequency, Estimation, Sampling. Includes bibliographical references (p. 57-58). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
18

Hedtjärn, Håkan. "Dosimetry in brachytherapy : application of the Monte Carlo method to single source dosimetry and use of correlated sampling for accelerated dose calculations /." Linköping : Univ, 2003. http://www.bibl.liu.se/liupubl/disp/disp2003/med790s.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Wei, Xiaoyao. "Extensions of the theory of sampling signals with finite rate of innovation, performance analysis and an application to single image super-resolution." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/45649.

Full text
Abstract:
Sampling is the reduction of a continuous-time signal to a discrete sequence. The classical sampling theorem limits the signals that can be perfectly reconstructed to bandlimited signals. In 2002, the theory of finite rate of innovation (FRI) emerged and broadened the classical sampling paradigm to classes of signals with a finite number of parameters per unit of time, which includes certain classes of non-bandlimited signals. In this thesis we analyse the performance of the FRI reconstruction algorithm and present extensions of the FRI theory. We also extend the FRI theory to the application of image upsampling. First, we explain the breakdown phenomenon in FRI reconstruction by subspace swap and work out at which noise level the FRI reconstruction algorithm is guaranteed to achieve the optimal performance given by the Cramér-Rao bound. Our prediction of the breakdown PSNR is directly related to the distance between adjacent Diracs, the sampling rate and the order of the sampling kernel, and its accuracy is verified by simulations. Next, we propose an algorithm that can estimate the rate of innovation of the input signals; this extends the current FRI framework to a universal one that works with an arbitrary, unknown rate of innovation. Moreover, we improve the current identification scheme for “parametrically sparse” systems, i.e. systems that are fully specified by a small number of parameters. Inspired by the denoising technique used for FRI signals, we propose the modified Cadzow denoising algorithm, which leads to robust system identification. We also show the possibility of perfectly identifying the input signal and the system simultaneously, and we propose a reliable algorithm for simultaneous identification of both in the presence of noise.
Lastly, by noting that lines of images can be modelled as piecewise smooth signals, we propose a novel image upsampling scheme based on our proposed method for reconstructing piecewise smooth signals which fuses the FRI method with the classical linear reconstruction method. We further improve our upsampled image by learning from the errors of our upsampled results at lower resolution levels. The proposed algorithm outperforms the state-of-the-art algorithms.
APA, Harvard, Vancouver, ISO, and other styles
20

McCullough, Mollie Marie. "Improving Elementary Teachers’ Well-Being through a Strengths-Based Intervention: A Multiple Baseline Single-Case Design." Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5990.

Full text
Abstract:
Teaching is considered to be one of the most highly demanding professions, and one that is associated with high levels of stress and sometimes deleterious outcomes. Although research demonstrates that burnout and attrition are often associated with specific characteristics of the occupation (e.g., challenging workload, standardized testing, merit-based salary), minimal research focuses on how to better support teachers’ well-being. The field of positive psychology affords a new perspective on how to attain quality mental health without solely focusing on psychopathology within a deficits-based approach. This includes the implementation of interventions (i.e., positive psychology interventions [PPIs]) that target constructs of well-being (e.g., character strengths, hope, optimism, gratitude, etc.) and are associated with positive changes in authentic happiness. This study examined how a strengths-based PPI entitled Utilizing Signature Strengths in a New Way (Seligman, Steen, Park, & Peterson, 2005) impacts dimensions of teacher well-being, as well as other relevant outcomes (i.e., flourishing, burnout) within the school context. Previous research has shown the strengths-based intervention to be the PPI with the most substantial impact and the longest lasting outcomes (Seligman et al., 2005). Utilizing a concurrent multiple baseline single-case design with eight teachers, the study evaluated the effects of the strengths-based PPI on teachers’ overall happiness (i.e., subjective well-being) as indicated by self-report measures of life satisfaction and positive and negative affect. The teachers exhibited significant gains in life satisfaction and reductions in negative affect from pre- to post-intervention that were also evident one month following the intervention. Although positive affect did not significantly change from pre- to post-intervention, a significant gain was apparent at one-month follow-up.
Single-case analytic strategies (i.e., visual analysis, masked visual analysis, and hierarchical linear modeling) found that the intervention positively impacted teachers’ overall subjective well-being (composite of standardized life satisfaction, positive affect, and negative affect scores). Results for single indicators of subjective well-being found variability in basic effects among different individuals (i.e., some teachers benefited more than others) further supporting the theory of person-activity fit. Regarding the intervention’s effects on secondary outcomes that were examined only at pre, post, and one-month follow-up time points, findings indicated the teachers experienced a significant increase in work satisfaction immediately following the intervention, as well as a significant increase in feelings of flourishing at follow-up. Significant decreases in negative dimensions of teachers’ mental health including stress and burnout (i.e., emotional exhaustion) were also demonstrated. Findings from the current study provide initial support for the efficacy of a teacher-focused, strengths-based intervention and its ability to improve multiple components of teacher well-being within an elementary school. Implications for school psychologists and policy, contributions to the literature, and future directions are discussed.
APA, Harvard, Vancouver, ISO, and other styles
21

Geijer, Matilda, and Joachim Persson. ""Femtio olika skäggarter?" : En studie om karaktärsskapande i singleplayerspel." Thesis, Södertörns högskola, Institutionen för naturvetenskap, miljö och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-29444.

Full text
Abstract:
This study examines how the graphical aspects of character creation affect players and their involvement in single-player games. The purpose of the study is to investigate what significance the appearance of player-created characters has for players' commitment and whether it affects their gaming. Semi-structured interviews were conducted with six participants, in which themes such as inspiration, motivation, form vs. function and role-playing arose. The conclusion was that the appearance of the player-created character had great significance for players and contributed to their involvement in the game. The study can help game developers create games with character creation that are appreciated by players and give them a meaningful experience.
APA, Harvard, Vancouver, ISO, and other styles
22

Dai, Yu. "Genetic association studies : exploiting SNP-haplotype selection and covariate independence /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/9582.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Macey, Deborah Ann 1970. "Ancient archetypes in modern media: A comparative analysis of "Golden Girls", "Living Single", and "Sex and the City"." Thesis, University of Oregon, 2008. http://hdl.handle.net/1794/8583.

Full text
Abstract:
xii, 214 p.
Recombinant television, a common television practice involving recycled, prepackaged formulas updated to create programming that is perceived as novel, impacts more than industry processes. While the industry uses recombinants to reduce risk by facilitating aspects of production and audience affiliation, the inadvertent outcomes include a litany of narratives and characters that influence our worldview. As did the myths of earlier oral societies, television serves as one of our modern storytellers, teaching what we value and helping us make sense of our culture. This study focuses on how the prevalence of recombinant television limits portrayals of women and the discourse of feminism in three popular, female-cast American sitcoms. This study comparatively examines the recombinant narratives and characters in Golden Girls, Living Single, and Sex and the City. While these programs are seemingly about very different modern women (older White women in suburban Florida; twenty-something African-American women in Brooklyn; and thirty-something, White, professional women in Manhattan, respectively), the four main characters in each show represent feminine archetypes found throughout Western mythology: the iron maiden, the sex object, the child, and the mother. First, a content analysis determines if a relationship exists between the characters and archetypes. Then, a comparative textual analysis reveals the deeper meanings the archetypes carry. Finally, a comparative narrative analysis examines the similarities and differences among the series. The findings reveal that a relationship exists between each modern character and her corresponding ancient archetype, reflecting particular meanings and discourses. The iron maiden archetypes, for example, generally bring forth a feminist discourse, whereas the child archetypes exhibit traditional values.
While the sex object archetypes are self-absorbed, consumed with their own beauty and sexual conquests, the mother archetypes seek psychological wellness for themselves and those around them, generally providing much of the emotional work for the group. As reflected in these popular U.S. television series, the similarities among the archetypes and narratives depict limited views of women's lives, while the variance indicates differences among age, race, and class demographics. These recombinant portrayals of ancient archetypes as modern women suggest that our understanding of women's lives remains antiquated, reductionist, and conventional.
Adviser: Debra Merskin
APA, Harvard, Vancouver, ISO, and other styles
24

Gough, Richard D. "Player attitudes to avatar development in digital games : an exploratory study of single-player role-playing games and other genres." Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/13540.

Full text
Abstract:
Digital games incorporate systems that allow players to customise and develop their controllable in-game representative (avatar) over the course of a game. Avatar customisation systems represent a point at which the goals and values of players interface with the intentions of the game developer, forming a dynamic and complex relationship between system and user. With the proliferation of customisable avatars through digital games and the ongoing monetisation of customisation options through digital content delivery platforms, it is important to understand the relationship between player and avatar in order to provide a better user experience and to develop an understanding of the cultural impact of the avatar. Previous research on avatar customisation has focused on the users of virtual worlds and massively multiplayer games, leaving single-player avatar experiences largely unexamined. These past studies have also typically focused on one particular aspect of avatar customisation, and those that have looked at all factors involved in avatar customisation have done so with a very small sample. This research aimed to address this gap in the literature by focusing primarily on avatar customisation features in single-player games, investigating the relationship between player and customisation systems from the perspective of the players of digital games. To fulfil the research aims and objectives, the qualitative approach of interpretative phenomenological analysis was adopted. Thirty participants were recruited using snowball and purposive sampling (the criterion being that participants had played games featuring customisable avatars) and accounts of their experiences were gathered through semi-structured interviews. Through this research, strategies of avatar customisation were explored in order to demonstrate how people use such systems. Shortcomings in game mechanics and user interfaces were highlighted so that future games can improve the avatar customisation experience.
APA, Harvard, Vancouver, ISO, and other styles
25

Peyrard, Clément. "Single image super-resolution based on neural networks for text and face recognition." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI083/document.

Full text
Abstract:
Cette thèse porte sur les méthodes de super-résolution (SR) pour l’amélioration des performances des systèmes de reconnaissance automatique (OCR, reconnaissance faciale). Les méthodes de Super-Résolution (SR) permettent de générer des images haute résolution (HR) à partir d’images basse résolution (BR). Contrairement à un rééchantillonage par interpolation, elles restituent les hautes fréquences spatiales et compensent les artéfacts (flou, crénelures). Parmi elles, les méthodes d’apprentissage automatique telles que les réseaux de neurones artificiels permettent d’apprendre et de modéliser la relation entre les images BR et HR à partir d’exemples. Ce travail démontre l’intérêt des méthodes de SR à base de réseaux de neurones pour les systèmes de reconnaissance automatique. Les réseaux de neurones à convolutions sont particulièrement adaptés puisqu’ils peuvent être entraînés à extraire des caractéristiques non-linéaires bidimensionnelles pertinentes tout en apprenant la correspondance entre les espaces BR et HR. Sur des images de type documents, la méthode proposée permet d’améliorer la précision en reconnaissance de caractère de +7.85 points par rapport à une simple interpolation. La création d’une base d’images annotée et l’organisation d’une compétition internationale (ICDAR2015) ont souligné l’intérêt et la pertinence de telles approches. Pour les images de visages, les caractéristiques faciales sont cruciales pour la reconnaissance automatique. Une méthode en deux étapes est proposée dans laquelle la qualité de l’image est d’abord globalement améliorée, pour ensuite se focaliser sur les caractéristiques essentielles grâce à des modèles spécifiques. Les performances d’un système de vérification faciale se trouvent améliorées de +6.91 à +8.15 points. 
Enfin, pour le traitement d’images BR en conditions réelles, l’utilisation de réseaux de neurones profonds permet d’absorber la variabilité des noyaux de flous caractérisant l’image BR, et produire des images HR ayant des statistiques naturelles sans connaissance du modèle d’observation exact
This thesis is focused on super-resolution (SR) methods for improving automatic recognition systems (optical character recognition, face recognition) in realistic contexts. SR methods generate high-resolution images from low-resolution ones. Unlike upsampling methods such as interpolation, they restore spatial high frequencies and compensate for artefacts such as blur or jagged edges. In particular, example-based approaches learn and model the relationship between low- and high-resolution spaces via pairs of low- and high-resolution images. Artificial neural networks are among the most efficient systems to address this problem. This work demonstrates the value of SR methods based on neural networks for improved automatic recognition systems. By adapting the data, it is possible to train such machine learning algorithms to produce high-resolution images. Convolutional neural networks are especially efficient, as they are trained to simultaneously extract relevant non-linear features while learning the mapping between low- and high-resolution spaces. On document text images, the proposed method improves OCR accuracy by +7.85 points compared with simple interpolation. The creation of an annotated image dataset and the organisation of an international competition (ICDAR2015) highlighted the interest and the relevance of such approaches. Moreover, if a priori knowledge is available, it can be exploited by a suitable network architecture. For facial images, face features are critical for automatic recognition. A two-step method is proposed in which image resolution is first improved globally, after which specialised models focus on the essential features. An off-the-shelf face verification system has its performance improved by +6.91 up to +8.15 points. Finally, to address the variability of real-world low-resolution images, deep neural networks can absorb the diversity of the blurring kernels that characterise low-resolution images. With a single model, high-resolution images are produced with natural image statistics, without any knowledge of the actual observation model of the low-resolution image.
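The three-stage convolutional pipeline the abstract describes (patch extraction, non-linear mapping, reconstruction) can be sketched as below. This is a toy, untrained illustration in plain numpy: the channel counts (8 and 4) and the random weights are placeholder assumptions, not the trained models of the thesis, which would be learned from pairs of low- and high-resolution images.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w, b):
    """'Same' 2-D convolution of a (C_in, H, W) input with (C_out, C_in, k, k) filters."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    H, W = x.shape[1:]
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for u in range(k):
                for v in range(k):
                    out[o] += w[o, i, u, v] * xp[i, u:u + H, v:v + W]
        out[o] += b[o]
    return out

def sr_forward(y, params):
    """SRCNN-style pipeline: feature extraction -> non-linear mapping -> reconstruction."""
    (w1, b1), (w2, b2), (w3, b3) = params
    h = np.maximum(conv2d(y, w1, b1), 0)   # 9x9 patch extraction + ReLU
    h = np.maximum(conv2d(h, w2, b2), 0)   # 1x1 non-linear mapping + ReLU
    return conv2d(h, w3, b3)               # 5x5 reconstruction

# Random (untrained) weights, only to show the layer shapes.
params = [
    (0.01 * rng.standard_normal((8, 1, 9, 9)), np.zeros(8)),
    (0.01 * rng.standard_normal((4, 8, 1, 1)), np.zeros(4)),
    (0.01 * rng.standard_normal((1, 4, 5, 5)), np.zeros(1)),
]
lr_interpolated = rng.random((1, 16, 16))   # an interpolated low-resolution image would go here
sr = sr_forward(lr_interpolated, params)
```

The network operates on an already-upsampled input, so the output keeps the input's spatial size; training would minimise the pixel error against the true high-resolution image.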
APA, Harvard, Vancouver, ISO, and other styles
26

Börke, Peter. "Untersuchungen zur Quantifizierung der Grundwasserimmission von polyzyklischen aromatischen Kohlenwasserstoffen mithilfe von passiven Probennahmesystemen." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2007. http://nbn-resolving.de/urn:nbn:de:swb:14-1191577528885-16227.

Full text
Abstract:
The core of this work was the development of a fluxmeter passive-sampler unit for hydrophobic organic substances in the groundwater of contaminated sites, together with its testing in the field. Ceramic dosimeters were also deployed under identical field conditions. The results of the two passive sampling systems were compared with, and evaluated against, conventional groundwater sampling using submersible motor pumps. The use of the passive-sampler unit as a "mass flux meter" rested on knowledge of the volumetric flow in the borehole and the reduced volumetric flow within the sampler unit on the one hand, and, on the other, of the spatial distribution of hydraulic conductivity and the resulting heterogeneous velocity distribution, i.e. the volumetric flows across so-called control planes and partial balance volumes. Numerical model studies allowed the filter resistance of the passive-sampler unit and the flow distribution in model control planes and in the field to be determined approximately. The volumetric flow at the investigated site was determined both by numerical model studies on stochastically generated quasi-3-dimensional models with hydrodynamic boundary conditions and kf-value distributions from the field, and by single-borehole methods. The single-borehole methods comprised an optical colloid-logging technique (a groundwater-flow visualisation system) and a modified fluid-logging method using a salt tracer.
APA, Harvard, Vancouver, ISO, and other styles
27

Braun, Stefan K. "Aspekte des „Samplings“." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-147027.

Full text
Abstract:
Mash-ups (also called bootlegs, bastard pop, or collages) have enjoyed growing popularity for years. While in the early 1990s they usually combined just two pop songs, whose vocal and instrumental tracks were remixed into one another, today there are multi-mash-ups with several dozen mixed and sampled songs, artists, video sequences and effects. Combining the most diverse styles into new danceable tracks from the charts is a challenge in itself. The mash-up project Pop Danthology, for example, packs 68 different artists, including Bruno Mars, Britney Spears, Rihanna and Lady Gaga, into a recent music clip of barely six minutes. Using and sampling third-party music and video titles can constitute copyright infringement. The composers of the track "Nur mir", performed by singer Sabrina Setlur, lost a legal dispute that went all the way to the German Federal Court of Justice (BGH). According to the BGH, by sampling from a sound recording they infringed the phonogram producers' rights of the plaintiffs (the band Kraftwerk): they had taken two bars of a rhythm sequence from the track "Metall auf Metall" and laid them under their own piece. Rapid technical progress now makes it ever easier, faster and better to edit and alter music, film and image recordings. Computers with editing software have replaced keyboards, synthesizers and analogue multitrack technology. Sampling differs from the classic pirated copy in that the adopted sample undergoes extensive transformation and reworking, whereas a pirated copy is characterised by an unaltered takeover of the original. Unlawful sampling affects the copyrights and neighbouring rights of performing artists as well as the neighbouring rights of phonogram producers. Under certain circumstances, violations of general personality rights and competition law are also the subject of litigation.
APA, Harvard, Vancouver, ISO, and other styles
28

Börke, Peter. "Untersuchungen zur Quantifizierung der Grundwasserimmission von polyzyklischen aromatischen Kohlenwasserstoffen mithilfe von passiven Probennahmesystemen." Doctoral thesis, Technische Universität Dresden, 2006. https://tud.qucosa.de/id/qucosa%3A23978.

Full text
Abstract:
The core of this work was the development of a fluxmeter passive-sampler unit for hydrophobic organic substances in the groundwater of contaminated sites, together with its testing in the field. Ceramic dosimeters were also deployed under identical field conditions. The results of the two passive sampling systems were compared with, and evaluated against, conventional groundwater sampling using submersible motor pumps. The use of the passive-sampler unit as a "mass flux meter" rested on knowledge of the volumetric flow in the borehole and the reduced volumetric flow within the sampler unit on the one hand, and, on the other, of the spatial distribution of hydraulic conductivity and the resulting heterogeneous velocity distribution, i.e. the volumetric flows across so-called control planes and partial balance volumes. Numerical model studies allowed the filter resistance of the passive-sampler unit and the flow distribution in model control planes and in the field to be determined approximately. The volumetric flow at the investigated site was determined both by numerical model studies on stochastically generated quasi-3-dimensional models with hydrodynamic boundary conditions and kf-value distributions from the field, and by single-borehole methods. The single-borehole methods comprised an optical colloid-logging technique (a groundwater-flow visualisation system) and a modified fluid-logging method using a salt tracer.
APA, Harvard, Vancouver, ISO, and other styles
29

Brandt, Stephan Peter. "Zelltyp-spezifische Mikroanalyse von Arabidopsis thaliana-Blättern." Phd thesis, Universität Potsdam, 2001. http://opus.kobv.de/ubp/volltexte/2005/40/.

Full text
Abstract:
Im ersten Teil der Arbeit wurden Strategien zur Analyse von Transkripten erarbeitet. Die ersten Versuche zielten darauf ab, in mit Glaskapillaren genommenen Einzelzellproben verschiedener Gewebeschichten RT-PCR durchzuführen, um spezifische Transkripte nachweisen zu können. Dies gelang für eine Reihe von Genen aus verschiedenen Pflanzenspezies. Dabei konnten sowohl Transkripte stark wie auch schwach exprimierter Gene nachgewiesen werden.
Für die Erstellung von Gewebe-spezifischen Expressionsprofilen war es notwendig, die in vereinigten Zellproben enthaltene mRNA zunächst zu amplifizieren, um eine ausreichende Menge für Arrayhybridisierungen zu erhalten. Vor der Vermehrung wurde die mRNA revers transkribiert. Es wurden daran anschließend verschiedene Amplifikationsstrategien getestet: Die neben Tailing, Adapterligation und anderen PCR-basierenden Protokollen getestete Arbitrary-PCR hat sich in dieser Arbeit als einfache und einzige Methode herausgestellt, die mit so geringen cDNA-Mengen reproduzierbar arbeitet. Durch Gewebe-spezifische Array-hybridisierungen mit der so amplifizierten RNA konnten schon bekannte Expressionsmuster verschiedener Gene, vornehmlich solcher, die an der Photosynthese beteiligt sind, beobachtet werden. Es wurden aber auch eine ganze Reihe neuer offensichtlich Gewebe-spezifisch exprimierter Gene gefunden. Exemplarisch für die differentiell exprimierten Gene konnte das durch Arrayhybridisierungen gefundene Expressionsmuster der kleinen Untereinheit von Rubisco verifiziert werden. Hierzu wurden Methoden zum Gewebe-spezifischen Northernblot sowie semiquantitativer und Echtzeit-Einzelzell-RT-PCR entwickelt.
Im zweiten Teil der Arbeit wurden Methoden zur Analyse von Metaboliten einschließlich anorganischer Ionen verwendet. Es stellte sich heraus, daß die multiparallele Methode der Gaschromatographie-Massenspektrometrie keine geeignete Methode für die Analyse selbst vieler vereinigter Zellinhalte ist. Daher wurde auf Kapillarelektrophorese zurückgegriffen. Eine Methode, die mit sehr kleinen Probenvolumina auskommt, eine hohe Trennung erzielt und zudem extrem geringe Detektionslimits besitzt. Die Analyse von Kohlenhydraten und Anionen erfordert eine weitere Optimierung. Über UV-Detektion konnte die K+-Konzentration in verschiedenen Geweben von A. thaliana bestimmt werden. Sie lag in Epidermis und Mesophyll mit ca. 25 mM unterhalb der für andere Pflanzenspezies (Solanum tuberosum und Hordeum vulgare) publizierten Konzentration. Weiter konnte gezeigt werden, daß zwölf freie Aminosäuren mittels einer auf Kapillarelektrophorese basierenden Methode in vereinigten Zellproben von Cucurbita maxima identifiziert werden konnten. Die Übertragung der Methode auf A. thaliana-Proben muß jedoch weiter optimiert werden, da die Sensitivität selbst bei Laser induzierter Fluoreszenz-Detektion nicht ausreichte.
Im dritten und letzten Teil der Arbeit wurde eine Methode entwickelt, die die Analyse bekannter wie unbekannter Proteine in Gewebe-spezifischen Proben ermöglicht. Hierzu wurde zur Probennahme mittels mechanischer Mikrodissektion eine alternative Methode zur Laser Capture Microdissection verwendet, um aus eingebetteten Gewebeschnitten distinkte Bereiche herauszuschneiden und somit homogenes Gewebe anzureichern. Aus diesem konnten die Proteine extrahiert und über Polyacrylamidgelelektrophorese separariert werden. Banden konnten ausgeschnitten, tryptisch verdaut und massenspektrometrisch die Primärsequenz der Peptidfragmente bestimmt werden. So konnten als Hauptproteine im Mesophyll die große Untereinheit von Rubisco sowie ein Chlorophyll bindendes Protein gefunden werden.
Die in dieser Arbeit entwickelten und auf die Modellpflanze Arabidopsis thaliana angewandten Einzelzellanalysetechniken erlauben es in Zukunft, physiologische Prozesse besser sowohl räumlich als auch zeitlich aufzulösen. Dies wird zu einem detaillierteren Verständnis mannigfaltiger Vorgänge wie Zell-Zell-Kommunikation, Signalweiterleitung oder Pflanzen-Pathogen-Interaktionen führen.
The subject of this thesis was the analysis of single plant cells in respect of their contents of i) transcripts, ii) inorganic cations and anions, iii) metabolites such as amino acids and carbohydrates, and iv) proteins. One task was the transfer of existing methods to single-cell analysis of leaf tissues of the model plant Arabidopsis thaliana L.; the second was the refinement and development of new protocols for the analysis of such picolitre samples. For cell-type-specific sampling, two complementary methods were applied: using glass microcapillaries, the contents of specific single cells could be harvested from intact plants, with typical sample volumes in the picolitre range; even the sampling of inner cell types such as companion cells could be demonstrated. Using mechanical microdissection of embedded tissue, a larger amount of homogeneous tissue could be collected.
Because single-cell samples contain only femtogram amounts of mRNA, direct detection of transcripts is impossible. Therefore, two amplification protocols were applied to the cell samples. The first makes use of specifically primed RT-PCR for amplification; several genes derived from different plants and tissues could be detected after successful RT-PCR, including highly as well as lowly expressed genes. The second method was developed to monitor the activity of many genes in parallel using array hybridisation with filters containing the cDNA of as many as 16,000 ESTs. For this purpose, unspecific RT-PCR, as applied in differential display, was used to amplify different transcripts in just one reaction. In these tissue-specific array hybridisations the expression patterns of several hundred genes could be monitored. These included known tissue-specific expression patterns (of mainly photosynthesis-related genes) as well as a number of previously unknown expression patterns. To verify the tissue specificity of gene activity, some results were re-examined using tissue-specific northern blot hybridisations and real-time RT-PCR, respectively.
Secondly, metabolites (including inorganic ions) were investigated. Because gas chromatography-mass spectrometry does not provide the sensitivity necessary for the analysis of even multiple pooled single-cell samples, capillary electrophoresis was applied for these studies. This method has high potential, as it needs only small amounts of starting material, has exceptionally low detection limits and exhibits a high number of theoretical plates.
The analysis of inorganic anions and carbohydrates needs further optimisation. Using UV-absorption detection, potassium could be detected in different cell types, with concentrations in mesophyll and epidermis of around 25 mM each. These concentrations are lower than in other species such as Solanum tuberosum or Hordeum vulgare. For investigations of amino acids, the cell samples were derivatised to enable laser-induced fluorescence detection. In samples derived from pumpkin (Cucurbita maxima) mesophyll, twelve amino acids could be detected and identified. The transfer of this method to A. thaliana-derived samples yielded no results, which may be due to the low concentration of free amino acids in these plants.
Finally, a method was developed with which known and unknown proteins in tissue-specific samples could be monitored. For this, mechanical microdissection was used: after embedding and sectioning, the tissue of interest was cut out with a vibrating steel chisel to obtain homogeneous samples. The proteins contained in these tissue pieces were extracted and separated by one-dimensional SDS polyacrylamide gel electrophoresis. Several protein bands could be detected after staining with either silver or Coomassie blue. These bands were cut out and sequenced by mass spectrometry. The large subunit of Rubisco as well as one chlorophyll-binding protein could be identified as the major proteins within the mesophyll.
The single-cell analysis methods developed and applied to the model plant A. thaliana in this thesis allow physiological processes to be resolved with better spatial as well as temporal resolution. This will lead to a more detailed understanding of processes such as cell-to-cell communication, signalling and plant-pathogen interactions.
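For the relative quantification step that real-time RT-PCR verification typically relies on, the standard 2^-ddCt calculation can be sketched as follows. The Ct values below are made-up illustration data, not measurements from the thesis, and the 2^-ddCt method (Livak & Schmittgen) is a common convention rather than necessarily the exact procedure used there.

```python
# Relative expression from real-time RT-PCR threshold cycles (Ct) via 2^-ddCt.
# Hypothetical Ct values: lower Ct = earlier amplification = more transcript.
target_ct = {"mesophyll": 18.2, "epidermis": 24.9}   # gene of interest (e.g. a RBCS-like target)
ref_ct    = {"mesophyll": 18.0, "epidermis": 17.8}   # reference/housekeeping gene

def fold_change(tissue, calibrator="epidermis"):
    """Expression of the target in `tissue`, relative to `calibrator`."""
    d_ct_tissue = target_ct[tissue] - ref_ct[tissue]        # normalise to reference gene
    d_ct_cal = target_ct[calibrator] - ref_ct[calibrator]
    return 2.0 ** -(d_ct_tissue - d_ct_cal)                 # 2^-ddCt

mesophyll_vs_epidermis = fold_change("mesophyll")
```

With these illustrative numbers the target appears roughly a hundredfold enriched in mesophyll, matching the kind of tissue contrast the abstract reports for photosynthesis genes.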
APA, Harvard, Vancouver, ISO, and other styles
30

Bousselham, Abdel Kader. "FPGA based data acquisition and digital pulse processing for PET and SPECT." Doctoral thesis, Stockholm University, Department of Physics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-6618.

Full text
Abstract:

The most important aspects of nuclear medicine imaging systems such as Positron Emission Tomography (PET) or Single Photon Emission Computed Tomography (SPECT) are the spatial resolution and the sensitivity (detector efficiency in combination with the geometric efficiency). Considerable efforts have been spent during the last two decades in improving the resolution and the efficiency by developing new detectors. Our proposed improvement technique is focused on the readout and electronics. Instead of using traditional pulse height analysis techniques we propose using free running digital sampling by replacing the analog readout and acquisition electronics with fully digital programmable systems.

This thesis describes a fully digital data acquisition system for KS/SU SPECT, new algorithms for high-resolution timing for PET, and a modular FPGA-based decentralized data acquisition system with optimal timing and energy estimation. The necessary signal processing algorithms for energy assessment and high-resolution timing are developed and evaluated. The implementation of the algorithms in field-programmable gate arrays (FPGAs) and digital signal processors (DSPs) is also covered. Finally, modular decentralized digital data acquisition systems based on FPGAs and Ethernet are described.
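As a sketch of the free-running-sampling idea, the snippet below recovers a pulse arrival time offline by interpolating the crossing of a constant-fraction threshold on a synthetic scintillation-like pulse. The pulse shape, 200 MS/s rate and 0.5 fraction are illustrative assumptions, not the algorithms evaluated in the thesis; the appeal of a constant-fraction estimate is that its offset from the true arrival time is independent of pulse amplitude.

```python
import numpy as np

# Free-running sampling of a scintillation-like pulse; timing is recovered
# offline from the stored samples rather than by analog discriminators.
fs = 200e6                        # assumed 200 MS/s ADC
t = np.arange(256) / fs
t0_true = 312.7e-9                # true arrival time (between sample points)
tau_r, tau_f = 8e-9, 40e-9        # rise/fall time constants

pulse = np.where(t > t0_true,
                 np.exp(-(t - t0_true) / tau_f) - np.exp(-(t - t0_true) / tau_r),
                 0.0)

def cfd_time(samples, t, fraction=0.5):
    """Time at which the pulse crosses `fraction` of its peak (linear interpolation)."""
    thr = fraction * samples.max()
    i = np.argmax(samples >= thr)        # first sample at/above threshold
    s0, s1 = samples[i - 1], samples[i]  # bracket the crossing
    return t[i - 1] + (thr - s0) / (s1 - s0) * (t[i] - t[i - 1])

t_hat = cfd_time(pulse, t)
t_hat_scaled = cfd_time(2.0 * pulse, t)  # doubling the amplitude leaves the estimate unchanged
```

The estimate sits a few nanoseconds after `t0_true` (a fixed, shape-dependent offset), but because the threshold scales with the peak, the offset does not shift with pulse amplitude, which is what matters for coincidence timing.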

APA, Harvard, Vancouver, ISO, and other styles
31

Marklund, Emil. "Bayesian inference in aggregated hidden Markov models." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-243090.

Full text
Abstract:
Single-molecule experiments study the kinetics of molecular biological systems. Many such studies generate data that can be described by aggregated hidden Markov models, hence there is a need to do inference on such data and models. In this study, model selection in aggregated hidden Markov models was performed with a criterion of maximum Bayesian evidence. Variational Bayes inference was seen to underestimate the evidence for aggregated model fits. Estimation of the evidence integral by brute-force Monte Carlo integration theoretically always converges to the correct value, but far from tractably so. Nested sampling is a promising method for solving this problem by doing faster Monte Carlo integration, but it was seen here to have difficulties generating uncorrelated samples.
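To make the evidence (marginal likelihood) criterion concrete, here is a minimal brute-force Monte Carlo estimate for a toy conjugate Gaussian model, where the evidence is also available in closed form for comparison. This illustrates the integral being estimated, not the aggregated-HMM computation of the thesis; for peaked likelihoods in higher dimensions this estimator is exactly what becomes intractably slow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: data x_i ~ N(theta, 1) with prior theta ~ N(0, 1).
data = rng.normal(0.5, 1.0, size=20)
n = len(data)

# Brute-force Monte Carlo: Z = E_prior[p(data | theta)], averaged over prior draws.
thetas = rng.normal(0.0, 1.0, size=200_000)
log_ls = (-0.5 * ((data[None, :] - thetas[:, None]) ** 2).sum(axis=1)
          - 0.5 * n * np.log(2 * np.pi))
m = log_ls.max()
log_evidence = m + np.log(np.exp(log_ls - m).mean())   # log-sum-exp for stability

# Closed-form log evidence for this conjugate model, to check the estimate.
s = data.sum()
log_evidence_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
                      - 0.5 * ((data ** 2).sum() - s ** 2 / (n + 1)))
```

With 2 x 10^5 prior draws the estimate agrees with the exact value to a few thousandths of a nat here; the point of nested sampling is to retain this unbiased-integration spirit while spending samples far more efficiently.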
APA, Harvard, Vancouver, ISO, and other styles
32

Dénes, Francisco Voeroes. "Abundância de aves de rapina no Cerrado e Pantanal do Mato Grosso do Sul e os efeitos da degradação de hábitat: perspectivas com métodos baseados na detectabilidade." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/41/41133/tde-15012015-152016/.

Full text
Abstract:
A urbanização e a expansão das fronteiras agrícolas na região Neotropical estão entre as principais forças causadoras da degradação ambiental em hábitats abertos naturais. Inferências e estimativas de abundância são críticas para quantificação de dinâmicas populacionais e impactos de mudanças ambientais. Contudo, a detecção imperfeita e outros fenômenos que causam inflação de zeros podem induzir erros de estimativas e dificultar a identificação de padrões ecológicos. Examinamos como a consideração desses fenômenos em dados de contagens de indivíduos não marcados pode informar na escolha do método apropriado para estimativas populacionais. Revisamos métodos estabelecidos (modelos lineares generalizados [GLMs] e amostragem de distância [distance sampling]) e emergentes que usam modelos hierárquicos baseados em misturas (N-mixture; modelo de Royle-Nichols [RN], e N-mixture básico, zero inflacionado, espacialmente explicito, visita única, e multiespécies) para estimar a abundância de populações não marcadas. Como estudo de caso, aplicamos o método N-mixture baseado em visitas únicas para modelar dados de contagens de aves de rapina em estradas e investigar como transformações de habitat no Cerrado e Pantanal do Mato Grosso do Sul afetaram as populações de 12 espécies em uma escala regional (>300.000 km2). Os métodos diferem nos pré-requisitos de desenho amostral, e a sua adequabilidade depender da espécie em questão, da escala e objetivos do estudo, e considerações financeiras e logísticas, que devem ser avaliados para que verbas, tempo e esforço sejam utilizados com eficiência. No estudo de caso, a detecção de todas as espécies foi influenciada pela horário de amostragem, com efeitos congruentes com expectativas baseadas no comportamentos de forregeamento e de voo. A vegetação fechada e carcaças também influenciaram a detecção de algumas espécies. 
A abundância da maioria das espécies foi negativamente influenciada pela conversão de habitats naturais para antrópicos, particularmente pastagens e plantações de soja e cana-de-açúcar, até mesmo para espécies generalistas consideradas como indicadores ruins da qualidade de hábitats. A proteção dos hábitats naturais remanescentes é essencial para prevenir um declínio ainda maior das populações de aves de rapina na área de estudo, especialmente no domínio do Cerrado
Urbanization and the expansion of agricultural frontiers are among the main forces driving the degradation of natural habitats in Neotropical open landscapes. Inference and estimates of abundance are critical for quantifying population dynamics and the impacts of environmental change. Yet imperfect detection and other phenomena that cause zero inflation can induce estimation error and obscure ecological patterns. We examine how detection error and zero inflation in count data of unmarked individuals inform the choice of analytical method for estimating population size. We review established methods (GLMs and distance sampling) and emerging methods that use N-mixture models (the Royle-Nichols model, and basic, zero-inflated, temporary-emigration, beta-binomial, generalized open-population, spatially explicit, single-visit and multispecies variants) to estimate the abundance of unmarked populations. As a case study, we employed a single-visit N-mixture approach to model roadside raptor count data and investigate how land-use transformations in the Cerrado and Pantanal domains in Brazil have affected the populations of 12 species on a regional scale (>300,000 km2). Methods differ in their sampling design requirements, and their suitability will depend on the study species, the scale and objectives of the study, and financial and logistical considerations, which should be evaluated to use funds, time and effort efficiently. In the case study, detection of all species was influenced by time of day, with effects that follow expectations based on foraging and flying behavior. Closed vegetation and carcasses found during surveys also influenced detection of some species. Abundance of most species was negatively influenced by conversion of natural Cerrado and Pantanal habitats to anthropogenic uses, particularly pastures and soybean and sugar cane plantations, even for generalist species usually considered poor habitat-quality indicators. Protection of the remaining natural habitats is essential to prevent further decline of raptor populations in the study area, especially in the Cerrado domain.
APA, Harvard, Vancouver, ISO, and other styles
33

Chahid, Makhlad. "Echantillonnage compressif appliqué à la microscopie de fluorescence et à la microscopie de super résolution." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0426/document.

Full text
Abstract:
Mes travaux de thèse portent sur l'application de la théorie de l'échantillonnage compressif (Compressed Sensing ou Compressive Sampling, CS) à la microscopie de fluorescence, domaine en constante évolution et outil privilégié de la recherche fondamentale en biologie. La récente théorie du CS a démontré que pour des signaux particuliers, dits parcimonieux, il est possible de réduire la fréquence d'échantillonnage de l'information à une valeur bien plus faible que ne le prédit la théorie classique de l'échantillonnage. La théorie du CS stipule qu'il est possible de reconstruire un signal, sans perte d'information, à partir de mesures aléatoires fortement incomplètes et/ou corrompues de ce signal à la seule condition que celui-ci présente une structure parcimonieuse. Nous avons développé une approche expérimentale inédite de la théorie du CS à la microscopie de fluorescence, domaine où les signaux sont naturellement parcimonieux. La méthode est basée sur l'association d'une illumination dynamique structurée à champs large et d'une détection rapide à point unique. Cette modalité permet d'inclure l'étape de compression pendant l'acquisition. En outre, nous avons montré que l'introduction de dimensions supplémentaires (2D+couleur) augmente la redondance du signal, qui peut être pleinement exploitée par le CS afin d'atteindre des taux de compression très importants. Dans la continuité de ces travaux, nous nous sommes intéressés à une autre application du CS à la microscopie de super résolution, par localisation de molécules individuelles (PALM/STORM). Ces nouvelles techniques de microscopie de fluorescence ont permis de s'affranchir de la limite de diffraction pour atteindre des résolutions nanométriques.
Nous avons exploré la possibilité d'exploiter le CS pour réduire drastiquement les temps d'acquisition et de traitement. Mots clefs : échantillonnage compressif, microscopie de fluorescence, parcimonie, microscopie de super résolution, redondance, traitement du signal, localisation de molécules uniques, bio-imagerie
My PhD work deals with the application of Compressed Sensing (or Compressive Sampling, CS) in fluorescence microscopy as a powerful toolkit for fundamental biological research. The recent mathematical theory of CS has demonstrated that, for a particular type of signal, called sparse, it is possible to reduce the sampling frequency to rates well below that which the sampling theorem classically requires. Its central result states it is possible to losslessly reconstruct a signal from highly incomplete and/or inaccurate measurements if the original signal possesses a sparse representation. We developed a unique experimental approach of a CS implementation in fluorescence microscopy, where most signals are naturally sparse. Our CS microscope combines dynamic structured wide-field illumination with fast and sensitive single-point fluorescence detection. In this scheme, the compression is directly integrated in the measurement process. Additionally, we showed that introducing extra dimensions (2D+color) results in extreme redundancy that is fully exploited by CS to greatly increase compression ratios. The second purpose of this thesis is another appealing application of CS for super-resolution microscopy using single molecule localization techniques (e.g. PALM/STORM). This new powerful tool has made it possible to break the diffraction barrier down to nanometric resolutions. We explored the possibility of using CS to drastically reduce acquisition and processing times.
APA, Harvard, Vancouver, ISO, and other styles
34

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single object scenarios where ground truth is available and in three multi object scenarios without ground truth. Results from the two single object scenarios show that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle.
For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
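The smoothing idea described above can be illustrated in one dimension. A minimal sketch (my own illustration with assumed noise parameters, not the thesis's multi-sensor setup) of a causal Kalman filter followed by a non-causal Rauch-Tung-Striebel (RTS) backward pass:

```python
def kalman_rts(zs, q=0.01, r=1.0):
    # 1-D, identity-dynamics Kalman filter followed by an RTS
    # backward smoothing pass.
    # zs: noisy scalar measurements; q: process noise; r: measurement noise.
    x, p = zs[0], r
    xs_f, ps_f, xs_pred, ps_pred = [], [], [], []
    for z in zs:
        xp, pp = x, p + q                        # predict (identity dynamics)
        k = pp / (pp + r)                        # Kalman gain
        x, p = xp + k * (z - xp), (1 - k) * pp   # measurement update
        xs_f.append(x); ps_f.append(p)
        xs_pred.append(xp); ps_pred.append(pp)
    xs_s = xs_f[:]                               # RTS backward pass
    for t in range(len(zs) - 2, -1, -1):
        c = ps_f[t] / ps_pred[t + 1]             # smoother gain
        xs_s[t] = xs_f[t] + c * (xs_s[t + 1] - xs_pred[t + 1])
    return xs_f, xs_s
```

Running the two passes on noisy measurements shows the backward pass pulling each estimate toward information from future samples, which is what makes the smoothed states usable as validation references for online trackers.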
APA, Harvard, Vancouver, ISO, and other styles
35

"Optimal single variable sampling plans." Chinese University of Hong Kong, 1989. http://library.cuhk.edu.hk/record=b5886152.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Kwen-Haw, Lee, and 李冠澔. "The Selection between Single Sampling Plan and Repetitive Group Sampling Plan." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/32529984087094839927.

Full text
Abstract:
Master's thesis
Huafan University
Department of Industrial Engineering and Management Information (Master's program)
102 (2013)
In quality control, the sampling plan is an important tool. Sampling plans were first proposed by Dodge and Romig (1929). Generally speaking, there are four types of sampling plans: single sampling plans, double sampling plans, multiple sampling plans and sequential sampling plans. Among these, the single sampling plan is the most extensively used in the manufacturing industries. Later, Sherman (1965) proposed the repetitive group sampling plan as an alternative, showing that it not only has a smaller average sample number but is also easier to use than single and double sampling plans. In this paper, we carry out a comparative analysis between the sampling plan proposed by Sherman (1965) and the single sampling plan. Given the specified producer's risk α, consumer's risk β, acceptance quality level and lot tolerance percent defective, the relevant parameters of these two sampling plans are determined by minimizing the average sample number. In addition, under rectifying inspection, we use these parameters to further analyze the performance indices average outgoing quality and average total inspection. With the output of this research, a company can select the suitable sampling plan for lot sentencing according to its preference among the performance indices.
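The two plans being compared can be put side by side numerically. A minimal sketch under a binomial lot model (the plan parameters below are illustrative, not taken from the thesis's tables): the single plan accepts when the defect count in a sample of n is at most c, while Sherman's repetitive group plan accepts on d ≤ a, rejects on d ≥ r, and otherwise draws a fresh sample, which is what drives its average sample number (ASN):

```python
from math import comb

def binom_cdf(d, n, p):
    # P(D <= d) for D ~ Binomial(n, p)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(d + 1))

def single_plan(n, c, p):
    # Single sampling plan: accept the lot if the number of defectives
    # in a sample of n is <= c.  The ASN is always n.
    return binom_cdf(c, n, p), n

def rgs_plan(n, a, r, p):
    # Sherman's repetitive group sampling plan: accept if d <= a,
    # reject if d >= r, otherwise discard the sample and repeat.
    p_acc = binom_cdf(a, n, p)            # accept on one group
    p_rej = 1 - binom_cdf(r - 1, n, p)    # reject on one group
    p_dec = p_acc + p_rej                 # probability a group decides
    return p_acc / p_dec, n / p_dec       # overall P(accept), ASN
```

For small fraction defective the repetitive plan reaches a comparable acceptance probability with a smaller per-group sample, at the cost of occasionally resampling (ASN > n).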
APA, Harvard, Vancouver, ISO, and other styles
37

Huang, Pei-Ching, and 黃珮菁. "Single Sampling Inspection Plans based on Fuzzy Theory." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/87751164036241964280.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Statistics (Master's and PhD programs)
94 (2005)
Sampling inspection is an important subject in quality control. Owing to restrictions of cost, time or practical feasibility, the manufacturer is unable to inspect all products, so samples are inspected at random and compared with a test criterion to judge whether the lot should be accepted or rejected. Through this process, we can assure the quality of outgoing products. The observations in traditional sampling inspection are assumed to be precise numbers, for instance the fraction defective p, the producer's risk, the consumer's risk, and so on. However, it is difficult to keep a process in control at a fixed process level for a long period of time, so we take the fraction defective p as a vague number instead of a precise one. The producer's risk and consumer's risk, being set by the producer and consumer, are still treated as precise numbers. Based on the above assumptions, fuzzy theory is applied to single sampling inspection plans in this thesis, including the sampling plan of given strength and the Dodge and Romig LTPD system. We also compare the proposed methods with traditional and other fuzzy sampling inspection methods. The results show that the proposed sampling plans of given strength have steadier producer's and consumer's risks than JIS Z9002, and that the proposed Dodge and Romig LTPD system sampling plan using the center-of-area method can avoid the problem of choosing delta values that arises with the delta-cut method.
APA, Harvard, Vancouver, ISO, and other styles
38

Shue, Yo-Tin, and 許育榳. "Bandpass sampling of multiple single-side-band RF signals." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/44193759892113065240.

Full text
Abstract:
Master's thesis
Tatung University
Graduate Institute of Communication Engineering
93 (2004)
The kernel concept of the software defined radio (SDR) is to eliminate the mixers and to place the analog-to-digital converters as near the antenna as possible [2], so that most of the SDR functionality can be implemented on a programmable microprocessor or signal processor. Bandpass sampling is a helpful method for direct digital downconversion without mixers. Although the bandpass sampling theory for a single RF signal is well developed, the theory for two or more RF signals is relatively immature. In this thesis, we derive the general equations related to the bandpass sampling of multiple single-side-band RF signals. These equations lend themselves to programmatic computation, in keeping with the central idea behind the software radio architecture.
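For the single-signal case the thesis describes as well developed, the admissible sampling rates are the classical bandpass-sampling ranges. A minimal sketch (my own illustration, not the thesis's multi-signal equations) that enumerates them for a band [f_L, f_H]:

```python
def bandpass_ranges(f_l, f_h):
    # Valid uniform sampling rates fs for a single bandpass signal
    # occupying [f_l, f_h], from the classical constraint
    #   2*f_h / n <= fs <= 2*f_l / (n - 1),  n = 1 .. floor(f_h / B),
    # where B = f_h - f_l (n = 1 gives the ordinary region fs >= 2*f_h).
    b = f_h - f_l
    ranges = []
    for n in range(1, int(f_h // b) + 1):
        lo = 2 * f_h / n
        hi = 2 * f_l / (n - 1) if n > 1 else float("inf")
        ranges.append((lo, hi))
    return ranges
```

For a 20-25 MHz band this yields five windows, the last collapsing to the minimum rate fs = 2B = 10 MHz; the thesis's contribution is the analogous, programmable conditions when several single-side-band signals must be digitized together.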
APA, Harvard, Vancouver, ISO, and other styles
39

Shen, Yufeng. "Single Complex Image Matting." Master's thesis, 2010. http://hdl.handle.net/10048/1168.

Full text
Abstract:
Single image matting refers to the problem of accurately estimating the foreground object given only one input image. It is a fundamental technique in many image editing applications and has been extensively studied in the literature. Various matting techniques and systems have been proposed and impressive advances have been achieved in efficiently extracting high quality mattes. However, existing matting methods usually perform well for relatively uniform and smooth images only but generate noisy alpha mattes for complex images. The main motivation of this thesis is to develop a new matting approach that can handle complex images. We examine the color sampling and alpha propagation techniques in detail, which are two popular techniques employed by many state-of-the-art matting methods, to understand the reasons why the performance of these methods degrade significantly for complex images. The main contribution of this thesis is the development of two novel matting algorithms that can handle images with complex texture patterns. The first proposed matting method is aimed at complex images with homogeneous texture pattern background. A novel texture synthesis scheme is developed to utilize the known texture information to infer the texture information in the unknown region and thus alleviate the problems introduced by textured background. The second proposed matting algorithm is for complex images with heterogeneous texture patterns. A new foreground and background pixels identification algorithm is used to identify the pure foreground and background pixels in the unknown region and thus effectively handle the challenges of large color variation introduced by complex images. Our experimental results, both qualitative and quantitative, show that the proposed matting methods can effectively handle images with complex background and generate cleaner alpha mattes than existing matting methods.
APA, Harvard, Vancouver, ISO, and other styles
40

Chang, Ting-Pei, and 張庭培. "Sampling Completeness and Image Quality Analysis of Single Photon Emission Microscope." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/rzz9v4.

Full text
Abstract:
Master's thesis
National Central University
Department of Optics and Photonics
106 (2017)
In this thesis, we use the seven-pinhole single photon emission microscope (SPEM) system, which is a version of a single photon emission computed tomography (SPECT) system, to simulate the sampling completeness coefficient and analyze the image quality. In order to lessen the development cost, we simulated the system architecture to predict the results and make sure that the system works properly. In the simulation process, the sampling completeness coefficient (SCC) is based on Tuy's condition for a cone-beam CT system. We use it to evaluate the SCC of the seven-pinhole SPEM system when an object is projected with circular and helical orbits onto a scintillator of 40 mm diameter. To acquire high quality images, we need an accurate imaging system matrix, called the H matrix, which is established from the flux and width models using experimental data. The H matrix is used to forward-project the simulated phantoms, and the Maximum Likelihood Expectation Maximization (MLEM) algorithm is used for image reconstruction. The results show that the SCC values are consistent with the reconstructed image quality. Therefore, SCC analysis can be used to evaluate the system architecture, in terms of geometry designs and orbit paths, before building the actual system, to lessen cost and save time.
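The MLEM algorithm used for reconstruction has a compact multiplicative update. A toy sketch of it (the 3x2 system matrix below is hypothetical, not the SPEM H matrix):

```python
def mlem(H, y, x, iters=50):
    # Maximum Likelihood Expectation Maximization update:
    #   x_j <- x_j / s_j * sum_i H[i][j] * y[i] / (H x)_i,
    # where s_j = sum_i H[i][j] is the sensitivity of voxel j.
    m, n = len(H), len(x)
    sens = [sum(H[i][j] for i in range(m)) for j in range(n)]
    for _ in range(iters):
        proj = [sum(H[i][j] * x[j] for j in range(n)) for i in range(m)]  # forward projection
        ratio = [y[i] / proj[i] for i in range(m)]                        # measured / estimated
        x = [x[j] / sens[j] * sum(H[i][j] * ratio[i] for i in range(m))
             for j in range(n)]
    return x
```

With consistent, noise-free data the iterates converge to the activity that exactly reproduces the projections; in the SPEM system the same update runs with the measured H matrix and the pinhole projection data.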
APA, Harvard, Vancouver, ISO, and other styles
41

Cahya, Suntara. "Sampling properties of optimal operating conditions of single and multiple response surface systems." 2002. http://www.etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-121/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Nugroho, Bintang, and 唐启明. "Resubmitted Lots with Single Sampling Plan by Attributes Under the Conditions of Poisson Distribution." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/q975ym.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Industrial Management
105 (2016)
Quality is among the most important factors in consumers' selection among competing products and services. In the application of statistical methods to quality engineering, it is typical to classify data on quality characteristics as either attributes or variables data. One of the most important statistical tools widely used for this purpose is acceptance sampling. Acceptance sampling plans state the required sample size and the acceptance or rejection criteria for lot sentencing. This thesis proposes a resubmitted lots single sampling plan (RSSP) by attributes under the conditions of the Poisson distribution. The attributes RSSP under the Poisson distribution has the advantage of a smaller sample size than the traditional single sampling plan at the same level of protection for both producer and consumer, which is especially beneficial when inspection is costly and destructive. Tables of the required sample size and critical acceptance value for various combinations of quality levels, producer's risk and consumer's risk are constructed. Furthermore, the behavior of the RSSP under the Poisson distribution in various conditions is shown and discussed. A numerical example is provided to demonstrate how to use the RSSP under the Poisson distribution in real applications.
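The resubmission scheme has a simple closed form for its overall operating characteristic. A minimal sketch under the Poisson model of the thesis (the n, c and m values used below are illustrative, not the thesis's tabulated designs):

```python
from math import exp, factorial

def poisson_cdf(c, mu):
    # P(X <= c) for X ~ Poisson(mu)
    return sum(exp(-mu) * mu**k / factorial(k) for k in range(c + 1))

def rssp_accept(n, c, p, m):
    # Resubmitted single sampling plan: a rejected lot may be
    # resubmitted, up to m submissions in total; the lot is finally
    # rejected only if every submission fails.
    pa = poisson_cdf(c, n * p)        # single-submission acceptance
    return 1 - (1 - pa) ** m
```

Because a lot is finally rejected only when all m submissions fail, resubmission raises the acceptance probability at every quality level, which is what allows a smaller sample size at the same producer's risk.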
APA, Harvard, Vancouver, ISO, and other styles
43

Howard, John Edward. "The detection reliability of a single-bit sampling cross- correlator for detecting random Gaussian reflections." Thesis, 2015. http://hdl.handle.net/10539/18198.

Full text
Abstract:
A thesis submitted to the Faculty of Engineering, University of the Witwatersrand, Johannesburg, for the Degree of Doctor of Philosophy, Johannesburg, 1974
In this thesis the detection reliability of a single-bit, digital, sampling cross-correlator used for detecting either single-bit or analog bandlimited Gaussian signals is investigated. This is done by deriving the exact output probability mass function of the cross-correlator, which directly yields the detection and false alarm probabilities. The cross-correlator output mass function is derived for the following cases: (a) a single-bit bandlimited Gaussian signal cross-correlated with an attenuated reflection corrupted by wideband Gaussian noise; (b) the corresponding case where the interfering signal is periodic in nature, as it can be in many applications; (c) a single-bit bandlimited Gaussian signal cross-correlated with an attenuated reflection corrupted by a random phase sinusoid. In all cases except (d), the cross-correlation function is derived first, and then the probability mass functions are derived for both burst and continuous transmitted signal operation. In (d) the cross-correlation function cannot be derived in a closed form, and a series approximation is given. However, the zero delay (i.e. peak) cross-correlation function is derived exactly, and this yields information on the detection probabilities to be expected. The cross-correlator output probability mass functions are discussed qualitatively in this case. It is found that in general the detection reliability obtained using single-bit bandlimited Gaussian signals is higher than that achievable with analog signals, and that a random phase sine wave has a more adverse effect on the cross-correlator's detection performance than wideband Gaussian noise has. The theoretical derivations of (a), (b) and (c) are verified by extremely close agreement with experimental results taken on a specially built single-bit, sampling cross-correlator.
The cross-correlator's performance under multiple reflection conditions is considered, and the cross-correlation function of a single-bit or an analog bandlimited Gaussian signal with two attenuated reflections corrupted by wideband Gaussian noise is derived. An extension of the theory to more than two reflections is discussed in both cases. The derivation of the cross-correlator output mass functions is considered for both burst and continuous signal operation. It is shown that under conditions where there are two overlapping single-bit reflections in a low extraneous noise environment, there is a high probability of missing the smaller of the two reflections completely, even though it may be only slightly smaller than the larger one. This defect does not occur with analog Gaussian signals, and, although the peaks in their case are not so sharp or well-defined, under these conditions analog signals offer a distinct advantage over single-bit signals. The practical application of the detection scheme to acoustics is briefly discussed, and it is found that the Gaussian signal centre frequency and the cross-correlator sampling frequency must be matched. A sampling frequency of between one and ten times the signal centre frequency yields satisfactory results. There are several constraints on the signal bandwidth, and an octave bandwidth is found to offer a good compromise.
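The polarity-coincidence correlator analyzed in the thesis is easy to simulate. A toy sketch (the delay, attenuation and noise level below are assumed for illustration, not taken from Howard's experiments):

```python
import random

def sign(x):
    return 1 if x >= 0 else -1

def single_bit_xcorr(tx, rx, max_lag):
    # Single-bit (polarity-coincidence) cross-correlator: both signals
    # are hard-limited to +/-1 before the multiply-and-accumulate.
    n = len(tx)
    return [sum(sign(tx[i]) * sign(rx[i + lag]) for i in range(n - max_lag))
            for lag in range(max_lag + 1)]

# Toy demonstration (hypothetical numbers): a reflection attenuated to
# half the transmitted amplitude, delayed by 37 samples and buried in
# unit-variance Gaussian noise.
random.seed(0)
n, delay, max_lag = 2000, 37, 60
tx = [random.gauss(0.0, 1.0) for _ in range(n)]
rx = [(0.5 * tx[i - delay] if i >= delay else 0.0) + random.gauss(0.0, 1.0)
      for i in range(n)]
corr = single_bit_xcorr(tx, rx, max_lag)
peak = max(range(len(corr)), key=corr.__getitem__)   # lag of largest output
```

Hard-limiting both channels reduces each multiply to a sign comparison, which is what made single-bit correlators attractive in hardware; the correlogram still peaks at the true reflection delay.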
APA, Harvard, Vancouver, ISO, and other styles
44

Subtil, Joana Filipa Azevedo Sampaio Diz. "Bilateral inferior petrosal sinus sampling in the diagnosis of cushing's syndrome: a single-center experience." Master's thesis, 2018. https://hdl.handle.net/10216/112628.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Subtil, Joana Filipa Azevedo Sampaio Diz. "Bilateral inferior petrosal sinus sampling in the diagnosis of cushing's syndrome: a single-center experience." Dissertação, 2018. https://repositorio-aberto.up.pt/handle/10216/112628.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, Hsien, and 吳嫻. "The homophonic effect in word recognition processes of Chinese single characters and two-character words." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/16357920955111739503.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Tseng, Jiunn-Chin, and 曾俊欽. "A 8-bit 280MS/s Multiple Sampling Single Conversion CMOS A/D Converter for IQ Demodulation." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/58317040590469174874.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Electronics
84 (1995)
In this thesis, a CMOS 70 MHz IF quadrature demodulator is presented. This IF demodulator uses a sample-and-hold circuit operated at 4 times the IF frequency, 280 MHz. The IF modulated signal is sampled successively onto different capacitors in the sampling capacitor arrays of the I and Q channels (the capacitor ratios in the sampling capacitor array represent the coefficients of the IQ channel filter). The filter function is achieved by analog charge addition in the sampling capacitors. The resultant discrete-time baseband I and Q signals are digitized by an 8-bit successive approximation ADC. The data output rate in each channel is 1.09 MS/s. This IF quadrature demodulator, which includes the mixer, lowpass filters and successive approximation A/D converter and utilizes the multiple-sampling, single-conversion architecture, is fabricated in a UMC 0.8 DPDM CMOS process. The whole chip area is 5000 um x 4000 um, and the average power dissipation is about 100 mW.
APA, Harvard, Vancouver, ISO, and other styles
48

Arifiansyah, Vinno, and Vinno Arifiansyah. "Resubmitted Lots with Single Sampling Plan by Attributes Under the Conditions of Zero Truncated Poisson Distribution." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/gj69c3.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Industrial Management
105 (2016)
There is a certain amount of variability in every product; consequently, no two products are ever identical. Since variability can only be described in statistical terms, statistical methods play a central role in quality improvement efforts. Acceptance sampling is one of the statistical methods in the quality control area that helps ensure that the output of a process meets requirements. The main purpose of acceptance sampling is to decide whether to accept or reject a lot of product. When we want to control the proportion of a substance present in food products, such as a preservative, we can assume that the occurrence of the substance in the food product follows a Zero Truncated Poisson (ZTP) distribution. This research attempts to design single and resubmitted sampling plans under the conditions of the ZTP distribution. Sampling procedures and operating characteristic curves with tables are created and organized in this research. For illustrative purposes, an example is presented to demonstrate the determination of the single sampling plan and the resubmitted sampling plan by attributes under the conditions of the ZTP distribution.
APA, Harvard, Vancouver, ISO, and other styles
49

Cheng, Chien-Chuan, and 鄭健川. "The comparative of expected total quality cost between traditional single sampling plan and the economic design." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/smsc7s.

Full text
Abstract:
Master's thesis
National Pingtung University of Science and Technology
Department of Industrial Management
106 (2017)
In the quality inspection practice of the consumer electronics industry, the MIL-STD-105E sampling table is viewed as the basis for sampling plans. This traditional quality inspection plan decides the sample size and rejection rule based on the lot size, the consumer's and producer's risks, and the acceptable quality level (AQL). Traditional sampling plans do not consider internal and external quality costs. Quality costs were considered in many previous studies, but comparisons between the traditional and the economic design of single sampling plans remain rare. In this paper, we study a setting in which the vendor simulates the buyer's incoming inspection before acceptance, and the costs of inspection, rework, replacement, and external failure are considered. We compare the economic design with the traditional single sampling plan in terms of expected total quality cost. This study can serve as a reference for future studies and practical applications.
APA, Harvard, Vancouver, ISO, and other styles
50

Nguyen, Thutrang Thi. "Examining peak height ratios in low template DNA samples with and without sampling using a single-tube extraction protocol." Thesis, 2015. https://hdl.handle.net/2144/13979.

Full text
Abstract:
The developments of the polymerase chain reaction (PCR) and the short tandem repeat multiplex kits increased the ease and lowered the time and sample quantity required for deoxyribonucleic acid (DNA) typing compared to previous methods. However the amplification of low mass of DNA can lead to increased stochastic effects, such as allele drop-out (ADO) and heterozygous peak height (PH) imbalance, which make it difficult to determine the true donor profile. These stochastic effects are believed to be due to: 1) pre-PCR sampling from pipetting and sample transferal of dilute samples prior to amplification resulting in unbalanced heterozygous allele templates in the amplification reaction, and 2) the kinetics of the PCR process where, when few target templates are available, there is uneven amplification of heterozygous alleles during early PCR cycles. This study looks to examine the contribution of PCR chemistry and pre-PCR sampling errors on stochastic effects by utilizing a single-tube DNA extraction and direct amplification method. Cells were collected into tubes using the McCrone and Associates, Inc. cell transfer method, which allowed for approximation of DNA mass without quantification. The forensicGEM® Saliva Kit was used to lyse the cells and inactivate nucleases without inhibiting downstream amplification. The samples were then directly amplified with the AmpFLSTR® Identifiler® Plus PCR Amplification Kit. These samples should only show the effects of PCR chemistry since pipetting and tube transferal steps prior to amplification were removed with the expectation that equal numbers of heterozygous alleles are present in the sample pre-amplification. Comparisons of PH imbalance were made to samples extracted with forensicGEM® but had one or more pipetting and tube transferal steps prior to amplification. 
These samples were either created through the dilution of stock DNA or from the cell transfer method where aliquots were then taken for amplification; thus these samples would exhibit the effects of both pre-PCR sampling and PCR chemistry errors and inefficiencies. The use of carrier ribonucleic acid (cRNA) was also added to cell transfer samples prior to the amplification of samples to see if it assisted with amplification and increased signal. Results show that the samples with only PCR chemistry generally have significantly higher mean peak height ratios (PHRs) than samples with both pre-PCR sampling and PCR chemistry except in cases where there were large numbers of ADOs. When compared to the diluted samples, the cell transfer samples had significantly higher mean PHR at 0.0625 ng and 0.125 ng, and higher mean PHR at 0.0375 ng when PHs from ADOs are included. Average peak heights (APHs) in the cell transfer samples were also significantly higher in these comparisons. When compared to aliquots taken from cell transfer samples, mean PHR was significantly higher at 0.0625 ng in cell transfer samples with only PCR chemistry than cell transfer samples with both pre-PCR sampling and PCR chemistry; however APH for the samples with only PCR chemistry was also significantly higher in one experiment and not significantly different in another. In a third experiment, the difference in mean PHR was not significant while APH was significantly higher in the samples with pre-PCR sampling and PCR chemistry; however there were also a large numbers of ADOs. Our results also found quantification of dilute samples unreliable but cell counting through the cell transfer method is an appropriate alternative for DNA mass approximation. Also there were no significant changes in PHR or APH in the presence or absence of cRNA.
APA, Harvard, Vancouver, ISO, and other styles
