Dissertations / Theses on the topic 'Sampling – Software'


Consult the top 47 dissertations / theses for your research on the topic 'Sampling – Software.'


1

Narayanappa, Harish B. "Monitoring software using property-aware program sampling." [Ames, Iowa : Iowa State University], 2010. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1476328.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

奚永忻 and Yung-shing Paul Hsi. "On proportional sampling strategies in software testing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B3122443X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hsi, Yung-shing Paul. "On proportional sampling strategies in software testing." Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22713293.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sun, Yi-Ran. "Generalized Bandpass Sampling Receivers for Software Defined Radio." Doctoral thesis, Stockholm, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sandoval Alcocer, Juan. "Horizontal profiling: A sampling technique to identify performance regressions." Tesis, Universidad de Chile, 2016. http://repositorio.uchile.cl/handle/2250/143666.

Full text
Abstract:
Doctor of Science, Computer Science concentration
Continuous changes to a program's source code can inadvertently introduce runtime performance regressions. Such regressions are situations in which a program's performance degrades compared to previous versions, even though the new version works correctly. Running benchmarks on every version of a program is the traditional technique for identifying regressions at an early stage. Although effective, this exhaustive approach is difficult to carry out in practice, mainly because of the high overhead it demands. In this thesis, we conduct an empirical study across a variety of programs to assess how a program's performance evolves over time as it is modified. Guided by this study, we propose Horizontal Profiling, a sampling technique that infers whether a new version of a program introduces a performance variation, using execution information from previous versions. The goal of Horizontal Profiling is to reduce the overhead of monitoring the performance of every version by running the benchmarks only on versions that contain costly source-code changes. We present an evaluation in which we apply Horizontal Profiling to identify performance regressions in a number of programs written in the Pharo programming language. For the applications we analyzed, we found that Horizontal Profiling is able to detect more than 80% of the performance regressions while running the benchmarks on less than 20% of the versions. In addition, we describe the patterns identified during our empirical study and detail how we addressed the many challenges we faced in completing our experiments.
This work was partially funded by CONICYT through grant CONICYT-PCHA/Doctorado Nacional para extranjeros/2013-63130199, OBJECT PROFILE and LAM RESEARCH.
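The core idea of Horizontal Profiling, as the abstract describes it, is to benchmark a new version only when its source changes touch code that was expensive in earlier versions. A minimal sketch of that filtering decision follows; the function name, profile format, and cost threshold are illustrative assumptions, not the thesis's actual implementation:

```python
# Sketch of the version-filtering idea: benchmark a new version only when a
# changed method was expensive in a previous version's execution profile.
# All names and the 5% threshold are illustrative assumptions.

def should_benchmark(changed_methods, profile, threshold=0.05):
    """Return True if any changed method consumed more than `threshold`
    (as a fraction of total runtime) in a previous version's profile."""
    total = sum(profile.values()) or 1
    return any(profile.get(m, 0) / total > threshold for m in changed_methods)

# Profile of version N-1: method name -> accumulated runtime (ms).
profile = {"render": 900, "parse": 80, "log": 20}

print(should_benchmark({"render"}, profile))  # costly change -> benchmark
print(should_benchmark({"log"}, profile))     # cheap change -> skip
```

Under this scheme, the benchmarking cost is paid only for the small fraction of versions whose changes plausibly affect performance.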
APA, Harvard, Vancouver, ISO, and other styles
6

Patel, Milan. "Investigations in bandpass sampling for wireless communications and software defined radio." Thesis, University College London (University of London), 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.412490.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yip, Wang. "Towards a proportional sampling strategy according to path complexity: a simulation study." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22713232.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yip, Wang, and 葉弘. "Towards a proportional sampling strategy according to path complexity: a simulation study." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31225500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kolsmyr, Johan, and Andreas Löfberg. "Software enhancement for improving sampling frequency and BLE communication in an embedded system." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-30928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Staunton, Richard C. "Visual inspection : image sampling, algorithms and architectures." Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/108898/.

Full text
Abstract:
The thesis concerns the hexagonal sampling of images, the processing of industrially derived images, and the design of a novel processor element that can be assembled into pipelines to effect fast, economic and reliable processing. A hexagonally sampled two-dimensional image can require 13.4% fewer sampling points than a square-sampled equivalent. The grid symmetry results in simpler processing operators that compute more efficiently than square grid operators. Computation savings approaching 44% are demonstrated. New hexagonal operators are reported, including a Gaussian smoothing filter, a binary thinner, and an edge detector with comparable accuracy to that of the Sobel detector. The design of hexagonal arrays of sensors is considered. Operators requiring small local areas of support are shown to be sufficient for processing controlled-lighting and industrial images. Case studies show that small features in hexagonally processed images maintain their shape better, and that processes can tolerate a lower signal-to-noise ratio, than for equivalent square-processed images. The modelling of small defects in surfaces has been studied in depth. The flexible programmable processor element can perform the low-level local operators required for industrial image processing on both square and hexagonal grids. The element has been specified and simulated by a high-level computer program. A fast communication channel allows for dynamic reprogramming by a control computer, and the video-rate element can be assembled into various pipeline architectures that may eventually be adaptively controlled.
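The 13.4% figure quoted in the abstract is the classic lattice-geometry result for circularly band-limited images: an optimal hexagonal grid needs sqrt(3)/2 of the samples a square grid needs. A quick check (an illustration, not taken from the thesis):

```python
import math

# For a circularly band-limited image, a hexagonal sampling lattice needs
# sqrt(3)/2 ~= 86.6% of the samples of the equivalent square lattice,
# i.e. roughly 13.4% fewer sampling points.
saving = 1 - math.sqrt(3) / 2
print(f"{saving:.1%}")  # -> 13.4%
```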
APA, Harvard, Vancouver, ISO, and other styles
11

Hörmann, Wolfgang, and Josef Leydold. "Black-Box Algorithms for Sampling from Continuous Distributions." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2006. http://epub.wu.ac.at/1042/1/document.pdf.

Full text
Abstract:
For generating non-uniform random variates, black-box algorithms are powerful tools that allow drawing samples from large classes of distributions. We give an overview of the design principles of such methods and show that they have advantages compared to specialized algorithms even for standard distributions, e.g., the marginal generation times are fast and depend mainly on the chosen method and not on the distribution. Moreover these methods are suitable for specialized tasks like sampling from truncated distributions and variance reduction techniques. We also present a library called UNU.RAN that provides an interface to a portable implementation of such methods. (author's abstract)
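UNU.RAN implements far more sophisticated universal methods (e.g., transformed density rejection) than the naive envelope below; this sketch only illustrates the black-box *interface* the abstract describes, where the sampler needs nothing but density evaluations and a bound. The specific density and bound are illustrative assumptions:

```python
import math, random

def blackbox_sample(pdf, lo, hi, pdf_max, rng=random):
    """Naive black-box rejection sampling: draws from a density known only
    through evaluations of `pdf` on [lo, hi], given a bound `pdf_max`."""
    while True:
        x = rng.uniform(lo, hi)
        if rng.uniform(0, pdf_max) <= pdf(x):
            return x

random.seed(1)
# Truncated standard normal on [-2, 2]; the same call works for any bounded
# density, which is the point of a black-box design.
pdf = lambda x: math.exp(-x * x / 2)
xs = [blackbox_sample(pdf, -2, 2, 1.0) for _ in range(10_000)]
mean = sum(xs) / len(xs)
print(round(mean, 2))  # near 0 by symmetry
```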
Series: Research Report Series / Department of Statistics and Mathematics
APA, Harvard, Vancouver, ISO, and other styles
12

Henderson, Tim A. D. "Frequent Subgraph Analysis and its Software Engineering Applications." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1496835753068605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Shu, Gang. "Statistical Estimation of Software Reliability and Failure-causing Effect." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1405509796.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Xiong, Xiaoyu. "Adaptive multiple importance sampling for Gaussian processes and its application in social signal processing." Thesis, University of Glasgow, 2017. http://theses.gla.ac.uk/8542/.

Full text
Abstract:
Social signal processing aims to automatically understand and interpret social signals (e.g. facial expressions and prosody) generated during human-human and human-machine interactions. Automatic interpretation of social signals involves two fundamentally important aspects: feature extraction and machine learning. So far, machine learning approaches applied to social signal processing have mainly focused on parametric approaches (e.g. linear regression) or non-parametric models such as support vector machines (SVMs). However, these approaches fall short of taking into account any uncertainty resulting from model misspecification, or lack interpretability for analyses of scenarios in social signal processing. Consequently, they are less able to understand and interpret human behaviours effectively. Gaussian processes (GPs), which have gained popularity in data analysis, offer a solution to these limitations through their attractive properties: being non-parametric enables them to flexibly model data, and being probabilistic makes them capable of quantifying uncertainty. In addition, a proper parametrisation of the covariance function makes it possible to gain insights into the application under study. However, these appealing properties of GP models hinge on an accurate characterisation of the posterior distribution with respect to the covariance parameters. This is normally done by means of standard MCMC algorithms, which require repeated expensive calculations involving the marginal likelihood. Motivated by the desire to avoid the inefficiencies of MCMC algorithms rejecting a considerable number of expensive proposals, this thesis has developed an alternative inference framework based on adaptive multiple importance sampling (AMIS). 
In particular, this thesis studies the application of AMIS for Gaussian processes in the case of a Gaussian likelihood, and proposes a novel pseudo-marginal-based AMIS (PM-AMIS) algorithm for non-Gaussian likelihoods, where the marginal likelihood is unbiasedly estimated. Experiments on benchmark data sets show that the proposed framework outperforms the MCMC-based inference of GP covariance parameters in a wide range of scenarios. The PM-AMIS classifier, based on Gaussian processes with a newly designed group-automatic relevance determination (G-ARD) kernel, has been applied to predict whether a Flickr user is perceived to be above the median or not with respect to each of the Big-Five personality traits. The results show that, apart from the high prediction accuracies achieved (up to 79% depending on the trait), the parameters of the G-ARD kernel allow the identification of the groups of features that better account for the classification outcome and provide indications about cultural effects through their weight differences. This demonstrates the value of the proposed non-parametric probabilistic framework for social signal processing. Feature extraction in signal processing is dominated by various methods based on the short-time Fourier transform (STFT). Recently, Hilbert spectral analysis (HSA), a signal representation fundamentally different from STFT, has been proposed. This thesis is also the first attempt to investigate the extraction of features from this newly proposed HSA and its application in social signal processing. The experimental results reveal that, using features extracted from the Hilbert spectrum of voice data of female speakers, prediction accuracies of up to 81% can be achieved when predicting their Big-Five personality traits, and hence show that HSA can work as an effective alternative to STFT for feature extraction in social signal processing.
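The adaptive importance sampling idea the abstract builds on can be sketched in a greatly simplified form: draw from a proposal, weight by target/proposal, then re-fit the proposal to the weighted sample. Full AMIS also recycles past samples with deterministic-mixture weights and the thesis targets GP covariance parameters; everything below (the toy 1-D target, the mean-only adaptation) is an illustrative assumption:

```python
import math, random

random.seed(0)

def log_target(x):            # unnormalised toy target: N(3, 1)
    return -0.5 * (x - 3.0) ** 2

# Deliberately poor initial Gaussian proposal; each round re-fits its mean
# to the self-normalised weighted sample (a bare-bones adaptation step).
mu, sigma = 0.0, 2.0
for _ in range(5):
    xs = [random.gauss(mu, sigma) for _ in range(2000)]
    logw = [log_target(x) - (-0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma))
            for x in xs]
    m = max(logw)                       # stabilise before exponentiating
    w = [math.exp(lw - m) for lw in logw]
    mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)

print(round(mu, 1))  # drifts towards the target mean, 3.0
```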
APA, Harvard, Vancouver, ISO, and other styles
15

Kulkarni, Mandar Shashikant. "Implementation of a 1GHZ frontend using transform domain charge sampling techniques." [College Station, Tex.: Texas A&M University], 2008. http://hdl.handle.net/1969.1/ETD-TAMU-3158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Patterson, Mason Foushee. "Standardization of Street Sampling Units to Improve Street Tree Population Estimates Derived by I-Tree Streets Inventory Software." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/42687.

Full text
Abstract:
Street trees are a subpopulation of the urban forest resource and exist in the rights-of-way adjacent to public roads in a municipality. Benefit-cost analyses have shown that the annual benefits provided by the average street tree far outweigh the costs of planting and maintenance. City and municipal foresters spend a majority of their time and resources managing street tree populations. Sample street tree inventories are a common method of estimating municipal street tree populations for the purposes of making urban forest policy, planning, and management decisions. i-Tree Streets is a suite of software tools capable of producing estimates of street tree abundance and value from a sample of street trees taken along randomly selected sections (segments) of public streets. During sample street tree inventories conducted by Virginia Tech Urban Forestry, it was observed that the lengths of the sample streets recommended by i-Tree varied greatly within most municipalities, leading to concern about the impact of street length variation on sampling precision. This project was conducted to improve i-Tree Streets by changing the recommended sampling protocol without altering the software. Complete street tree censuses were obtained from 7 localities and standardized using GIS. The effects of standardizing street segments to 3 different lengths prior to sampling on the accuracy and precision of i-Tree Streets estimates were investigated through computer simulations and through analysis of changes in the variation in the number of trees per street segment, as a basis for recommending procedural changes. It was found that standardizing street segments significantly improved the precision of i-Tree Streets estimates. Based on the results of this investigation, it is generally recommended that street segments be standardized to 91 m (300 ft) prior to conducting a sample inventory. 
Standardizing to 91m will significantly reduce the number of trees, the number of street segments, and the percentage of total street segments that must be sampled to achieve an estimate with a 10% relative standard error. The effectiveness of standardization and the associated processing time can be computed from municipal attributes before standardization so practitioners can gauge the marginal gains in field time versus costs in processing time. Automating standardization procedures or conducting an optimization study of segment length would continue to increase the efficiency and marginal gains associated with street segment standardization.
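The mechanism behind the abstract's finding is that equal-length segments have less segment-to-segment variation in tree counts, which directly lowers the relative standard error of the sample-based total. A sketch of that RSE calculation for simple random sampling of segments (illustrative only; i-Tree's estimator details may differ):

```python
import math

def relative_standard_error(sample_counts, total_segments):
    """RSE of an estimated city-wide street-tree total from a simple random
    sample of street segments (with finite population correction)."""
    n = len(sample_counts)
    mean = sum(sample_counts) / n
    var = sum((x - mean) ** 2 for x in sample_counts) / (n - 1)
    fpc = 1 - n / total_segments
    se_total = total_segments * math.sqrt(fpc * var / n)
    return se_total / (total_segments * mean)   # SE divided by the estimate

# Uniform segments (as after standardization) vs. highly uneven ones,
# both with the same mean of 10 trees per segment.
even = [9, 10, 11, 10, 9, 11, 10, 10]
uneven = [0, 2, 35, 1, 0, 40, 1, 1]
print(relative_standard_error(even, 400) < relative_standard_error(uneven, 400))  # -> True
```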
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
17

Stenner, Jack Eric. "Public news network: digital sampling to create a hybrid media feed." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/456.

Full text
Abstract:
A software application called Public News Network (PNN) is created in this thesis, which functions to produce an aesthetic experience in the viewer. The application engenders this experience by presenting a three-dimensional virtual world that the viewer can navigate using the computer mouse and keyboard. As the viewer navigates the environment she sees irregularly shaped objects resting on an infinite ground plane, and hears an ethereal wind. As the viewer nears the objects, the sound transforms into the sound of television static and text is displayed which identifies this object as representative of an episode of the evening news. The viewer "touches" the episode and a "disembodied" transcript of the broadcast begins to scroll across the screen. With further interaction, video of the broadcast streams across the surfaces of the environment, distorted by the shapes upon which it flows. The viewer can further manipulate and repurpose the broadcast by searching for words contained within the transcript. The results of this search are reassembled into a new, re-contextualized display of video containing the search terms stripped from their original, pre-packaged context. It is this willful manipulation that completes the opportunity for true meaning to appear.
APA, Harvard, Vancouver, ISO, and other styles
18

Kaspříková, Nikola. "Výpočty plánů pro statistické přejímky měřením" [Calculation of plans for acceptance sampling by variables]. Doctoral thesis, Vysoká škola ekonomická v Praze, 2006. http://www.nusl.cz/ntk/nusl-77069.

Full text
Abstract:
This thesis analyses several acceptance sampling procedures and suggests improvements to them. It addresses the efficient calculation of exact (P1, P2) single sampling plans for sampling by variables, and of exact LTPD and AOQL single sampling plans for sampling by variables when the rejected lots are inspected. The plans are calculated using the exact formula for the operating characteristic of an acceptance sampling plan, and the new plans have better characteristics than plans computed in the usual way from the only approximately valid formula for the operating characteristic. Tables of plans calculated according to the procedures suggested in this thesis are provided for several combinations of input parameters. Another output of this thesis is a set of tools for working with acceptance sampling plans, namely for their analysis and calculation. The R language and environment for statistical computing was chosen for the implementation. The tools are satisfactorily efficient and may easily be used even to compute tables of acceptance sampling plans. Newly defined R functions are supplied, including their source code and basic documentation, so that acceptance sampling plans can be computed on demand when the particular set of input parameters falls outside the scope covered by the tables of plans provided here.
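The operating characteristic (OC) central to the abstract gives the probability of accepting a lot as a function of its defective fraction. The thesis computes exact OCs for plans by *variables*; the sketch below shows the much simpler *attributes* case (accept if at most c defectives in a sample of n), purely to illustrate the OC concept:

```python
from math import comb

def oc_attributes(n, c, p):
    """P(accept lot) for a single sampling plan by attributes (n, c):
    accept when at most c defectives appear in a sample of n items,
    with p the lot's defective fraction (binomial model)."""
    return sum(comb(n, d) * p**d * (1 - p) ** (n - d) for d in range(c + 1))

# Acceptance probability falls steeply as lot quality worsens.
print(round(oc_attributes(50, 2, 0.01), 3))  # -> 0.986
print(round(oc_attributes(50, 2, 0.10), 3))  # -> 0.112
```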
APA, Harvard, Vancouver, ISO, and other styles
19

Anderson, Christopher R. "A Software Defined Ultra Wideband Transceiver Testbed for Communications, Ranging, or Imaging." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/29026.

Full text
Abstract:
Impulse Ultra Wideband (UWB) communications is an emerging technology that promises a number of benefits over traditional narrowband or broadband signals: extremely high data rates, extremely robust operation in dense multipath environments, low probability of intercept/detection, and the ability to operate concurrently with existing users. Unfortunately, most currently available UWB systems are based on dedicated hardware, preventing researchers from investigating algorithms or architectures that take advantage of some of the unique properties of UWB signals. This dissertation outlines the development of a general purpose software radio transceiver testbed for UWB signals. The testbed is an enabling technology that provides a development platform for investigating ultra wideband communication algorithms (e.g., acquisition, synchronization, modulation, multiple access), ranging or radar (e.g., precision position location, intrusion detection, heart and respiration rate monitoring), and could potentially be used in the area of ultra wideband based medical imaging or vital signs monitoring. As research into impulse ultra wideband expands, the need is greater now than ever for a platform that will allow researchers to collect real-world performance data to corroborate theoretical and simulation results. Additionally, this dissertation outlines the development of the Time-Interleaved Analog to Digital Converter array which served as the core of the testbed, along with a comprehensive theoretical and simulation-based analysis on the effects of Analog to Digital Converter mismatches in a Time-Interleaved Sampling array when the input signal is an ultra wideband Gaussian Monocycle. Included in the discussion is a thorough overview of the implementation of both a scaled-down prototype as well as the final version of the testbed. 
This dissertation concludes by evaluating the performance of the transceiver testbed in terms of the narrowband dynamic range, the accuracy with which it can sample and reconstruct a UWB pulse, and the bit error rate performance of the overall system.
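The ADC-mismatch effect analysed in the dissertation can be illustrated with a toy model: in a time-interleaved array, each sub-ADC's error (here a DC offset) is applied to every M-th sample, so a channel mismatch becomes a periodic error component in the combined stream. All numbers below are illustrative assumptions, not the testbed's parameters:

```python
import math

# Toy 2-channel time-interleaved sampler with an offset mismatch between the
# sub-ADCs; sample n comes from channel n % 2. Values are illustrative.
fs = 8e9                      # aggregate sample rate (assumed)
offsets = [0.00, 0.02]        # per-channel DC offsets (the mismatch)

def sample(signal, n_samples):
    return [signal(n / fs) + offsets[n % 2] for n in range(n_samples)]

sig = lambda t: math.sin(2 * math.pi * 1e9 * t)   # 1 GHz test tone
x = sample(sig, 16)

# The error alternates with the channel index, i.e. it is periodic at fs/2,
# which is where an offset-mismatch spur appears in the spectrum.
err = [x[n] - sig(n / fs) for n in range(16)]
print(round(err[0], 6), round(err[1], 6))  # -> 0.0 0.02
```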
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
20

Blais, Antoine. "Feasibility of a Direct Sampling Dual-Frequency SDR Galileo Receiver for Civil Aviation." PhD thesis, Toulouse, INPT, 2014. http://oatao.univ-toulouse.fr/14271/1/Blais.pdf.

Full text
Abstract:
This thesis studies the relevance of DS SDR architectures applied to Galileo receivers in the specific context of Civil Aviation, characterized in particular by strict requirements of robustness to interference, notably interference caused by DME or CW signals. The Software Defined Radio concept reflects the broad tendency, inside the receiver, to move the demodulation part from analog technology to digital signal processing, that is, software. This kind of design is nearly universal in new receiver architectures, so it is assumed in this work. The Direct Sampling method consists in digitizing the signal as close as possible to the antenna, typically after the LNA and the associated RF bandpass filter. This technique therefore does not use any conversion to an intermediate frequency, relying as much as possible on the bandpass sampling principle in order to minimize the sampling frequency and consequently the downstream computational costs. Moreover, since this thesis aims at the greatest possible simplification of the analog part of the receiver, the decision was made to suppress the analog AGC which equips receivers of classical architecture: only fixed-gain amplifiers should precede the ADC. This document presents the work done to determine whether these choices can apply to a multifrequency (E5a and E1 signals) Galileo receiver intended for Civil Aviation use. The structure of the document reflects the approach used during this thesis: it progresses step by step from the antenna down to the digital signal, to be processed then by the SDR part. After an introduction detailing the problem to study and its context, the second chapter investigates the Civil Aviation requirements of robustness to interference that a satellite navigation receiver must comply with. This is the basis which completely conditions the design process. The third chapter is devoted to the determination of the sampling frequency. 
Two sampling architectures are proposed: the first implements coherent sampling of the two E5a and E1 bands, while the second uses separate sampling. In both cases the necessity to use extra RF filters is shown, and the minimum attenuation to be provided by these filters is specified. These requirements are strong enough to justify a feasibility investigation, which is the subject of chapter four, where an experimental study based on an off-the-shelf SAW filter chip is reported. The issue of sampling clock jitter, of concern with the Direct Sampling technique because of the high frequency of the signal to digitize, is investigated in chapter five. Some simulation results are presented and a dimensioning of the quality of the sampling clock is proposed. In chapter six, quantization, a byproduct of digitization, is detailed: specifically, the calculation of the number of bits the ADC must have to digitally represent the whole dynamic range of not only the useful signal but also the potential interference. Considering the high binary throughput highlighted in chapters three and six, chapter seven evaluates the possibility of reducing the coding dynamic of the digital signal at the output of the ADC by means of compression functions. The last chapter is focused on the digital separation of the two E5a and E1 bands in the coherent sampling architecture presented earlier. Here also, specifications of minimum attenuation are given. Lastly, the conclusions synthesize the contributions of this thesis and propose ideas for future work to enrich them and, more generally, the subject of DS-SDR Galileo receivers for Civil Aviation.
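The sampling-frequency determination the abstract refers to rests on the textbook bandpass sampling constraint: for a real signal occupying [fL, fH], uniform rates satisfying 2fH/n <= fs <= 2fL/(n-1) avoid aliasing. A sketch of enumerating those ranges (the formula is the standard one, and the band edges below are rough illustrative figures, not the thesis's exact values):

```python
import math

def bandpass_fs_ranges(f_low, f_high):
    """Valid uniform sampling-rate ranges for a real bandpass signal on
    [f_low, f_high]: 2*f_high/n <= fs <= 2*f_low/(n-1), n = 1..floor(fH/B)."""
    ranges = []
    n_max = math.floor(f_high / (f_high - f_low))
    for n in range(1, n_max + 1):
        lo = 2 * f_high / n
        hi = float("inf") if n == 1 else 2 * f_low / (n - 1)
        if lo <= hi:
            ranges.append((lo, hi))
    return ranges

# Roughly the Galileo E1 band (illustrative edges, ~32 MHz wide):
for lo, hi in bandpass_fs_ranges(1559e6, 1591e6):
    print(f"{lo / 1e6:.1f} - {hi / 1e6:.1f} MHz")
```

The lowest admissible rate (largest n) is what makes Direct Sampling attractive: it is only slightly above twice the bandwidth, far below twice the carrier frequency.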
APA, Harvard, Vancouver, ISO, and other styles
21

Wagner, Laurent. "KARTOTRAK, integrated software solution for contaminated site characterization." Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2015. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-181693.

Full text
Abstract:
Kartotrak software allows optimal waste classification and avoids unnecessary remediation. It has been designed for those involved in environmental site characterization projects (site owners, safety authorities, or contractors) who need to locate and estimate contaminated soil volumes confidently.
APA, Harvard, Vancouver, ISO, and other styles
22

Liu, Xu. "FIT refactoring: improving the quality of FIT acceptance test." Bowling Green, Ohio : Bowling Green State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=bgsu1182669280.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Hjalmarsson, Felicia. "Predicting the Helpfulness of Online Product Reviews." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-22144.

Full text
Abstract:
Review helpfulness prediction has attracted the growing attention of researchers, who have proposed various solutions using Machine Learning (ML) techniques. Most studies used online reviews from Amazon to predict helpfulness, where each review is accompanied by information indicating how many people found the review helpful. This research aims to analyze the complete process of modelling review helpfulness from several perspectives. Experiments are conducted comparing different methods for representing the review text, as well as analyzing the importance of data sampling for regression compared to using non-sampled datasets. Additionally, a set of review, review meta-data, and product features are evaluated on their ability to capture the helpfulness of reviews. Two Amazon product review datasets are utilized for the experiments, along with two widely used machine-learning models: Linear Regression and a Convolutional Neural Network (CNN). The experiments empirically demonstrate that the choice of representation of the textual data has an impact on performance, with tf-idf and word2Vec obtaining the lowest Mean Squared Error (MSE) values. The importance of data sampling is also evident from the experiments, as the imbalanced ratios in the unsampled dataset negatively affected the performance of both models, with biased predictions in favor of the majority group of high ratios in the dataset. Lastly, the findings suggest that review features such as unigrams of the review text and title, length of the review text in words, and polarity of the title, along with rating as a review meta-data feature, are the most influential features for determining the helpfulness of reviews.
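The tf-idf representation the abstract found effective weights a term by its frequency in one review, discounted by how many reviews contain it. A minimal hand-rolled sketch (the thesis used library implementations and much larger corpora; the toy reviews below are invented):

```python
import math

# Toy corpus of tokenized reviews (invented for illustration).
docs = [
    "great battery life great screen".split(),
    "battery died after a week".split(),
    "screen is sharp and bright".split(),
]

def tfidf(term, doc, docs):
    """tf-idf weight of `term` in `doc`: term frequency times the log of
    inverse document frequency across the corpus."""
    tf = doc.count(term) / len(doc)
    df = sum(term in d for d in docs)
    idf = math.log(len(docs) / df)
    return tf * idf

# "great" occurs only in the first review, so it outweighs "battery",
# which also appears elsewhere in the corpus.
print(round(tfidf("great", docs[0], docs), 3))    # -> 0.439
print(round(tfidf("battery", docs[0], docs), 3))  # -> 0.081
```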
APA, Harvard, Vancouver, ISO, and other styles
24

Baquero, Oswaldo Santos. "Manejo populacional de cães e gatos: métodos quantitativos para caracterizar populações, identificar prioridades e estabelecer indicadores" [Population management of dogs and cats: quantitative methods to characterize populations, identify priorities, and establish indicators]. Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/10/10134/tde-16122015-105959/.

Full text
Abstract:
Dog and cat population management is a set of strategies to control and prevent problems related to the coexistence between those animals and human beings. In this thesis, a workflow based on quantitative methods is proposed to support the planning, implementation and monitoring of companion animal population management programs. Following the workflow, it is possible to collect data to characterize populations, analyze that data to propose interventions, and assess the effect of interventions. The proposal was based on the articulation of five studies. In the first study, a complex sampling design was implemented to characterize the owned dog and cat populations of Votorantim, São Paulo. In the second study, which was based on data from the first, public opinion profiles regarding dog and cat abandonment were identified using multiple correspondence analysis. In the third study, the validity of the sampling design used in the first study was assessed through stochastic simulations. In the fourth study, a mathematical model of population management was developed; with that model, it is possible to prioritize interventions according to the effect they produce. In the fifth study, a mathematical model was developed to assess the efficiency of reproductive control based on contraceptives with reversible effect. The models of the last two studies were based on systems of coupled differential equations and on global and local sensitivity analysis. The proposal was implemented in open source software, the R package capm, which can be incorporated into the working routine of sectors involved with companion animal population management.
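The coupled-ODE modelling mentioned in the abstract can be illustrated with a toy two-compartment system: intact animals N with logistic births, sterilized animals S, and a sterilization rate moving animals from N to S. The capm models are far richer; every parameter value and the Euler scheme below are illustrative assumptions:

```python
# Toy sketch of a coupled ODE population model with sterilization,
# integrated with a simple Euler scheme. All parameters are invented.

def simulate(years, n0=900.0, s0=100.0, b=0.4, d=0.1, ster=0.15, k=2000.0, dt=0.01):
    """Intact (n) and sterilized (st) animals; logistic births from the
    intact group, common death rate d, sterilization rate ster."""
    n, st = n0, s0
    for _ in range(int(years / dt)):
        total = n + st
        dn = b * n * (1 - total / k) - d * n - ster * n
        ds = ster * n - d * st
        n, st = n + dn * dt, st + ds * dt
    return n, st

n, st = simulate(20)
print(round(st / (n + st), 2))  # fraction sterilized after 20 simulated years
```

Running a parameter sweep over `ster` is a crude analogue of the sensitivity analyses the thesis uses to prioritize interventions.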
APA, Harvard, Vancouver, ISO, and other styles
25

Oliveira, Thiago de Paula. "Modelos mistos para a análise da tonalidade da cor da casca de mamão (Carica papaya L.) cv. \"Sunrise Solo\", avaliada ao longo do tempo por meio de um scanner e de um colorímetro." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-04042014-085453/.

Full text
Abstract:
Papayas (Carica papaya L.) of the "Sunrise Solo" variety present gradual and uneven changes in peel color, which goes from green to yellow. As a consequence, quantifying their color with a colorimeter is subjective, because the result depends on the number of observed points and on their position on the fruit. As an alternative, scanned images of the whole fruit peel, obtained with a flatbed scanner, were proposed to quantify color. To assess the precision of these methods, an experiment with 20 replicates was carried out, each replicate consisting of one papaya fruit kept under controlled temperature and relative humidity. The peel color was assessed daily with both instruments: both sides of each fruit were scanned, and four equidistant points on its equatorial region were measured with the colorimeter. Since each fruit was assessed repeatedly over time, the data are longitudinal; linear mixed-effects models were therefore used to study the behavior of the average hue, as this technique allows different covariance structures for the random effects and the errors. Model selection used likelihood-ratio tests and the Akaike and Bayesian information criteria, which resulted in the same linear predictor and covariance matrices for both color-quantification methods. The final model had a quadratic linear predictor with random effects for the intercept and for the linear and quadratic terms, an unstructured variance-covariance matrix for the random effects, and heteroscedastic variance components for the errors. The scanner revealed two distinct physiological maturation groups, possibly related to harvest time, a pattern that was not evident with the colorimeter. In general, the scanner yielded a precise assessment of fruit maturation and more consistent observations, making it a more efficient methodology for studying the average peel hue of fruits with uneven coloration.
APA, Harvard, Vancouver, ISO, and other styles
26

Hsu, Chung-Yuan. "Formative Research on an Instructional Design Model for the Design of Computer Simulation for Teaching Statistical Concepts." Ohio University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1258048389.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Pezzott, George Lucas Moraes. "Estimação do tamanho populacional a partir de um modelo de captura-recaptura com heterogeneidade." Universidade Federal de São Carlos, 2014. https://repositorio.ufscar.br/handle/ufscar/4585.

Full text
Abstract:
Financiadora de Estudos e Projetos
In this work, we consider estimating the number of errors in a software product drawn from a closed population. The population size is estimated by the capture-recapture method, which consists of having a number of reviewers examine the software in parallel. The probabilistic model adopted accommodates situations in which reviewers are independent and homogeneous (equally efficient) and each error belongs to one class of a disjoint partition with respect to its detection probability. We propose an iterative procedure to obtain maximum likelihood estimates, in which the EM algorithm is used to estimate the nuisance parameters. Estimates of the population parameters were also obtained under the Bayesian approach, using Markov chain Monte Carlo (MCMC) simulation via the Gibbs sampling algorithm with latent variables inserted into the conditional posterior distributions. Both approaches were applied to simulated data and to two real data sets from the literature.
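For intuition, the simplest capture-recapture estimator for two independent reviewers is the bias-corrected (Chapman) Lincoln-Petersen form. This is a classical baseline, not the thesis's EM-based model with heterogeneous detection classes, and the counts below are invented.

```python
def chapman_estimate(n1, n2, m):
    """Bias-corrected Lincoln-Petersen estimate of the total error count:
    n1 errors found by reviewer A, n2 by reviewer B, m found by both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical review: A finds 30 errors, B finds 25, 15 are found by both.
estimated_total = chapman_estimate(30, 25, 15)
```

A large overlap `m` relative to `n1` and `n2` suggests the reviewers have nearly exhausted the population; a small overlap inflates the estimate.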
APA, Harvard, Vancouver, ISO, and other styles
28

Jovanovic, Aleksandar, and Cong Vu. "Triggningskriterier i triggningsmodul för trådlösa dataloggern DL141E." Thesis, Högskolan Kristianstad, Avdelningen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-18292.

Full text
Abstract:
The DL141E data logger can continuously log measurement data from sensors at up to 30k samples per second and transfer them to a mobile phone over Bluetooth. This is far too much data per unit time for technically constrained mobile phones to receive, so only relevant measurement data should be logged in bulk to reduce the unnecessary data volume. In this study a new approach is proposed in which a specific, smaller set of discrete pre-samples is logged in sequence. Each set of pre-samples is processed by comparing it against user-defined trigger criteria; met criteria trigger the logging of a large set of samples only for interesting signal deviations. The trigger criteria are the crossing of a specific signal level and a specific signal direction, each in combination with a required number of consecutive samples. The study examines how the signal-processing method known as Lebesgue sampling can be applied with these criteria to achieve good accuracy at a reasonable processing time on mobile phones. This is evaluated with the data logger's most common signal types, pulse and ramp, in an environment where small noise and transients occur. Accuracy and processing load are taken into account to judge the efficiency of the Lebesgue method and to estimate how many pre-samples per set are sufficient. The implementation is written in Java for the Android platform and then integrated into a digital triggering module with a graphical user interface (GUI).
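The combined trigger criteria (a level crossing in a given direction sustained for a run of consecutive samples) can be sketched as a small scanning routine. This is an illustrative reconstruction in Python, not the thesis's Java/Android implementation, and the function name and parameters are assumptions.

```python
def trigger_index(samples, level, direction, consecutive):
    """Return the index where `consecutive` successive samples lie beyond
    `level` in the given direction ("above" or "below"), or None.
    The run requirement suppresses triggers on isolated noise spikes."""
    count = 0
    for i, s in enumerate(samples):
        hit = s > level if direction == "above" else s < level
        count = count + 1 if hit else 0          # reset on any miss
        if count >= consecutive:
            return i - consecutive + 1            # start of the run
    return None
```

With `consecutive=3`, a single spike above the level does not trigger mass logging, which is the point of combining the criteria.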
APA, Harvard, Vancouver, ISO, and other styles
29

Partsch, Patrik. "Bezdrátový zvonek s digitálním přenosem hlasu." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217694.

Full text
Abstract:
This master's thesis deals with the design and layout of a wireless doorbell with digital voice transmission, and includes an analysis of the nRF9E5 circuit. The document first characterizes the problem in general; the next part describes an example application schematic, the printed circuit board layout, and the software design. The nRF9E5 is a true single-chip system with a fully integrated RF transceiver, an 8051-compatible microcontroller and a four-input 10-bit AD converter. Despite a study of the available literature, only very little specific information related to this matter was found. The main aim of this work is to build the wireless doorbell with digital voice transmission. Illustrated schematics are enclosed as an attachment to this thesis.
APA, Harvard, Vancouver, ISO, and other styles
30

Stevenson, Clint W. "A Logistic Regression Analysis of Utah Colleges Exit Poll Response Rates Using SAS Software." BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/1116.

Full text
Abstract:
In this study I examine voter response at an interview level using a dataset of 7562 voter contacts (including responses and nonresponses) in the 2004 Utah Colleges Exit Poll. In 2004, 4908 of the 7562 voters approached responded to the exit poll for an overall response rate of 65 percent. Logistic regression is used to estimate factors that contribute to a success or failure of each interview attempt. This logistic regression model uses interviewer characteristics, voter characteristics (both respondents and nonrespondents), and exogenous factors as independent variables. Voter characteristics such as race, gender, and age are strongly associated with response. An interviewer's prior retail sales experience is associated with whether a voter will decide to respond to a questionnaire or not. The only exogenous factor that is associated with voter response is whether the interview occurred in the morning or afternoon.
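The study fits its logistic regression in SAS; as a sketch of the underlying model, the code below fits a one-predictor logistic regression by hand with batch gradient descent. The binary predictor and the toy data are fabricated for illustration and are not from the exit-poll dataset.

```python
import math

def fit_logistic(xs, ys, lr=0.5, epochs=5000):
    """One-feature logistic regression fitted by batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += p - y
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Fabricated toy data: x = 1 for one voter group, 0 for another;
# y = 1 if the voter responded to the questionnaire.
xs = [0, 0, 0, 0, 1, 1, 1, 1]
ys = [0, 0, 1, 1, 1, 1, 1, 0]
w, b = fit_logistic(xs, ys)
```

The fitted coefficients recover the group response rates (2/4 and 3/4 here); in the real analysis the predictors are interviewer, voter, and exogenous characteristics.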
APA, Harvard, Vancouver, ISO, and other styles
31

Santos, Carlos Henrique da Silva. "Computação paralela aplicada a problemas eletromagneticos utilizando o metodo FDTD." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261767.

Full text
Abstract:
Advisor: Hugo Enrique Hernandez Figueroa
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
This work aims to develop high-performance computational solutions at low cost, following the Brazilian Federal Government's proposals encouraging the adoption of free software. These solutions make it possible to simulate efficiently the medium- and large-scale computational domains used in computational electromagnetics. The good results obtained in this work show the importance and efficiency of massively parallel computing on a Beowulf cluster for running the FDTD method applied to complex structures, at low financial cost. The performance of the system was demonstrated in experiments analyzing the SAR in a human head and studying the effects of metamaterial structures.
Master's degree
Telecommunications and Telematics
Master in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
32

Mehrez, Hanen. "Interface Radio SDR pour récepteur GNSS multi constellations pour la continuité de positionnement entre l’intérieur et l’extérieur." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLL008/document.

Full text
Abstract:
To improve the availability of the services provided by a receiver, designing a GNSS receiver able to collect several signals from all bands simultaneously appears to be the solution. An optimized RF subsampling architecture of the Software Defined Radio (SDR) type, comprising an integrable and reconfigurable RF stage and a digital processing stage with a software implementation of the baseband processing, is defined for this GNSS receiver, while meeting the specification requirements of the GNSS standards: GPS, Glonass, Galileo and Beidou. Discrete components were selected after system dimensioning, and an experimental validation prototype was built. We then characterize the RF chain in order to study the limitations caused by nonlinearity and to assess the stability of the proposed prototype. A digital processing stage for the IF signals captured at the ADC output is implemented in Matlab. The acquisition of these data allows the determination of the satellites visible at a given instant, which in turn allows a position to be determined.
APA, Harvard, Vancouver, ISO, and other styles
33

Dogaru, Emanuel. "Built-In Self-Test of Flexible RF Transmitters Using Nonuniform Undersampling." Thesis, CentraleSupélec, 2015. http://www.theses.fr/2015SUPL0004/document.

Full text
Abstract:
The advent of increasingly powerful integrated circuits (ICs) has led to the emergence of the Software Defined Radio (SDR) concept, which has brought the sector of secure mobile communications into a new era. The outstanding performance of these systems results from optimal trade-offs among advanced analog/radio-frequency (RF) circuitry, high-speed reconfigurable digital hardware and sophisticated real-time software. The inherent sophistication of such platforms poses a challenging problem for product testing. Currently deployed industrial test strategies face rising obstacles due to costlier RF test equipment, longer test times and lack of flexibility. Moreover, an SDR platform is field-upgradeable, which means it will support standards and scenarios not considered during the design phase; an in-field test strategy is therefore no longer a nice-to-have feature but a mandatory requirement. In this context, our research aims to develop a new test methodology able to guarantee the correct functioning of the SDR platform post-fabrication and over its operational lifetime, with the overall aim of reducing the post-manufacture test cost of SDR transceivers by leveraging the reconfigurability of the platform. For tactical radio units that must be field-upgradeable without specialized equipment, Built-In Self-Test (BIST) schemes are arguably the only way to ensure continued compliance with specifications. In this study we introduce a novel RF BIST architecture that uses Periodically Nonuniform Sampling (PNS2) of the transmitter (TX) output to evaluate compliance with spectral-mask specifications. Our solution supports a stand-alone implementation, is scalable across a wide set of complex specifications, and can easily be applied to in-field testing with little added hardware. Compared with existing analog/RF test techniques, this approach is not limited to a given TX architecture and does not rely on an ad hoc TX model, which makes it ideal for SDR testing.
APA, Harvard, Vancouver, ISO, and other styles
34

"Successive sampling and software reliability." Alfred P. Sloan School of Management, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/2365.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

蕭遠芳. "Threaded sampling review of software documents." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/58308148961599340804.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Upp, Brandon E. "Computer Program Instrumentation Using Reservoir Sampling & Pin++." Thesis, 2019. http://hdl.handle.net/1805/19977.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
This thesis investigates techniques for improving real-time software instrumentation techniques of software systems. In particular, this thesis investigates two aspects of this real-time software instrumentation. First, this thesis investigates techniques for achieving different levels of visibility (i.e., ensuring all parts of a system are represented, or visible, in final results) into a software system without compromising software system performance. Secondly, this thesis investigates how using multi-core computing can be used to further reduce instrumentation overhead. The results of this research show that reservoir sampling can be used to reduce instrumentation overhead. Reservoir sampling at a rate of 20%, combined with parallelized disk I/O, added 34.1% additional overhead on a four-core machine, and only 9.9% additional overhead on a sixty-four core machine while also providing the desired system visibility. Additionally, this work can be used to further improve the performance of real-time distributed software instrumentation.
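Reservoir sampling as used here is the classic Algorithm R, which maintains a uniform k-element sample over a stream of unknown length in a single pass; a minimal sketch:

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Algorithm R: keep a uniform random sample of k items from a stream
    whose length is not known in advance."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir first
        else:
            j = rng.randint(0, i)       # inclusive on both ends
            if j < k:
                reservoir[j] = item     # item i survives with probability k/(i+1)
    return reservoir
```

Applied to instrumentation events, this bounds the recorded volume (and hence overhead) regardless of how many events the program emits, while every event still has an equal chance of appearing in the final results.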
APA, Harvard, Vancouver, ISO, and other styles
37

Teixeira, David Rodrigues. "Deploying time-based sampling techniques in Software-Defined Networking." Master's thesis, 2017. http://hdl.handle.net/1822/55836.

Full text
Abstract:
Master's dissertation in Computer Science
Today's computer networks face demanding challenges from the proliferation of services and applications requiring constant access, low latency and high throughput from network infrastructures. The growing demand for such services requires continuous analysis and a network topology able to adapt to the dynamic nature of applications, in order to meet challenges of performance, security and flexibility. Software-Defined Networking (SDN) emerges as a solution to these challenges by using a network control plane, dissociated from the data plane, that has a global view of the topology and can act when required, depending on the variation in infrastructure congestion. Decisions involving different activities, such as network management and performance evaluation, rely on information about the state of the network, which in traditional networks involves a substantial amount of data. Traffic sampling is essential to provide valuable statistical data to applications and to enable appropriate control and monitoring decisions. In this context, this work proposes the application of time-based sampling techniques in an SDN environment to provide network statistics at the controller level, taking into account the underlying need to balance the reliability of the collected data against the computational burden of the sampling process. The results obtained show that it is possible to apply these sampling techniques using OpenFlow Group Mod messages, although packet losses can occur on the switch during periods of network congestion.
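A time-based sampling scheme of this kind keeps all packets observed during a short window at the start of each period. The toy sketch below selects over timestamped packets in Python; the actual work installs equivalent behavior in the switch via OpenFlow Group Mod rules, so the function and its parameters here are illustrative assumptions.

```python
def time_based_sample(packets, interval, window):
    """Systematic time-based sampling: keep packets whose timestamp falls in
    the first `window` seconds of each `interval`-second period.
    `packets` is an iterable of (timestamp, payload) pairs."""
    return [pkt for pkt in packets if (pkt[0] % interval) < window]

# One packet per second for 10 s; sample 2 s out of every 5 s period.
packets = [(t, f"pkt{t}") for t in range(10)]
sampled = time_based_sample(packets, interval=5, window=2)
```

Widening `window` relative to `interval` trades higher measurement fidelity for a heavier collection burden, which is exactly the balance the work studies.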
APA, Harvard, Vancouver, ISO, and other styles
38

Huang, Ching-Jung, and 黃清榮. "Sampling Techniques for Software-Defined-Radio Based Universal Handheld Terminals." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/42070325314399488278.

Full text
Abstract:
Master's thesis
National Taiwan Ocean University
Department of Electrical Engineering
95
The principal idea behind the design of a software defined radio (SDR) is that the analog-to-digital and digital-to-analog converters should be placed as near the antenna as possible, so that most of the radio functionality can be implemented on a digital signal processor. One way to achieve this is Digital Quadrature sampling Demodulation (DQD) of the desired RF signal, which shifts the RF signal to baseband. In this thesis, we present an efficient algorithm to compute the minimum quadrature sampling frequency for direct downconversion of multiple RF signals from real applications, such as GSM, CDMA2000, UMTS, DAB and DVB-H, simultaneously in a single terminal. We also present simulation results that verify the correctness of the proposed algorithm.
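For a single RF band, the admissible uniform bandpass-sampling rates are the classical intervals 2*f_high/n <= fs <= 2*f_low/(n-1). The thesis's algorithm intersects such constraints across several standards at once; the single-band sketch below only shows the per-band computation, and the band edges are arbitrary example values.

```python
import math

def valid_bandpass_rates(f_low, f_high):
    """Classical uniform bandpass-sampling constraint: for each integer n up
    to floor(f_high / B), with B = f_high - f_low, rates in
    [2*f_high/n, 2*f_low/(n-1)] avoid aliasing of the band onto itself."""
    bandwidth = f_high - f_low
    n_max = math.floor(f_high / bandwidth)
    return [(2 * f_high / n,
             2 * f_low / (n - 1) if n > 1 else float("inf"))
            for n in range(1, n_max + 1)]

# Hypothetical band from 20 to 25 (units arbitrary, e.g. MHz).
rates = valid_bandpass_rates(20.0, 25.0)
min_rate = min(low for low, high in rates)   # lowest admissible fs
```

Here the lowest admissible rate is 2B, reached only when f_high is an integer multiple of the bandwidth; a multi-signal terminal must pick an fs inside a valid interval of every band simultaneously.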
APA, Harvard, Vancouver, ISO, and other styles
39

Tseng, Kai-Shen, and 曾凱聲. "Estimation of Hill numbers in Quadrat Sampling and Software Development." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/38652973582591391879.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Tseng, Kuan-Hua, and 曾冠華. "A Dynamic Sampling Adjustment System in Software-Defined Networking Environment." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/9u9dpg.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Computer Science and Engineering
103
As an OpenFlow controller can install flow rules into switches, network flows can be balanced among communication paths to mitigate congestion. However, it is necessary to cope with various traffic conditions to provide efficient, low-latency packet forwarding in a software-defined networking (SDN) environment. For instance, OpenSample proposed a sampling-based measurement platform for SDN that reduces monitor-reporting time by using sFlow for TCP traffic. Inspired by OpenSample, this paper presents a Dynamic Sampling Adjustment System (DSAS) based on statistical analysis in an SDN environment. Following the sFlow protocol, we study how sampling accuracy varies as the sampling rate is adjusted dynamically, and we investigate the monitoring data of sFlow traffic. We also exploit the advantages of SDN, especially centralized management and scalability: in addition to the management capability available in the switch, such as OVSDB, management modules were developed in the control plane to achieve network functions virtualization (NFV) for monitoring and control purposes. Simulation results show that DSAS maintains acceptable sampling accuracy. Across various levels of bandwidth usage, under a fixed accuracy requirement such as 10%, DSAS can reduce the collected sFlow monitoring data by between 1 and 2.5 percent compared with sampling without dynamic adjustment. In addition, we propose a per-port monitoring solution based on sFlow in virtual switches; this additional feature improves the granularity of network management using DSAS.
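A dynamic adjustment rule in the spirit of DSAS picks a new 1-in-N sFlow sampling rate so that the expected sample volume stays near a target as traffic varies. The function, its parameters and the rate bounds below are assumptions for illustration, not the thesis's exact policy.

```python
def adjust_sampling_rate(observed_pps, target_samples_per_sec,
                         min_rate=1, max_rate=65536):
    """Choose a 1-in-N packet sampling rate so that the expected number of
    exported samples per second stays close to the target, clamped to the
    range a switch is assumed to accept."""
    desired = observed_pps / target_samples_per_sec
    return int(min(max(desired, min_rate), max_rate))
```

Under heavy traffic the rate rises (fewer packets sampled per packet seen), keeping collector load flat; under light traffic it falls toward 1-in-1 so accuracy does not collapse.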
APA, Harvard, Vancouver, ISO, and other styles
41

Chiu, Chih-Wei, and 邱志偉. "The Study of Software Radio Receiver based on Intermediate Frequency Sampling." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/87227378390601501288.

Full text
Abstract:
Master's thesis
National Ocean University
Department of Electrical Engineering
89
To cope with the many different mobile communication standards of recent years, software radio has gradually attracted attention as a way to improve the flexibility and upgradability of the receiver. Advances in high-speed digital signal processors and analog-to-digital converters have reduced its cost enough to make realization feasible. This thesis concentrates on a software radio receiver based on intermediate-frequency (IF) sampling. For a DS-CDMA system, the IF signal is digitized using bandpass sampling, and the effect of the noise introduced by IF sampling is studied. Using this structure, we analyze the system performance in single-user and multi-user environments. With the COST-207 multipath channel model, the system performance is analyzed for a conventional receiver, an equal-gain-combining RAKE receiver and a maximal-ratio-combining RAKE receiver. This thesis thus provides a performance-analysis model for software radio receivers based on intermediate-frequency sampling.
APA, Harvard, Vancouver, ISO, and other styles
42

Cheng, Qi-Ren, and 程麒任. "Statistical Estimation and Software Development of Ecosystem Functional Diversity in Quadrat Sampling." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/4rdje3.

Full text
Abstract:
Master's
National Tsing Hua University
Institute of Statistics
106
Functional diversity, or trait diversity, refers to the diversity of the values and ranges of species traits, and is a rapidly growing research topic in ecology. Functional diversity is essential for assessing ecosystem processes and their responses to environmental stress or disturbance. The higher the functional diversity, the more dissimilar the characteristics among species, and as a result the ecosystem as a whole can better adapt to environmental changes. Functional diversity is typically quantified using species abundances and species trait values, and many functional diversity measures have been proposed in the literature. The thesis comprises two parts. The first part focuses on adapting the Chao et al. (2018) abundance-based functional diversity measures to replicated incidence-based data under quadrat sampling for a single ecosystem. The proposed measures and estimators are formulated from attribute diversity (a generalization of Hill numbers) in terms of the diversity order q ≥ 0 and any positive threshold level of functional distinctiveness. Statistical estimators of the proposed incidence-based measures are derived, and their variances are estimated by a bootstrap method. The second part focuses on functional diversity decomposition and estimation across multiple communities, for both individual-based abundance data and incidence data. Functional (dis)similarity indices are derived as transformations of functional beta diversities. Simulation results comparing the proposed estimators with the conventional empirical diversities show that the proposed estimators achieve substantial improvements in bias and RMSE. Real data sets are used to illustrate the proposed functional diversity measures and related functional dissimilarity indices. To facilitate all computations, the online software “FunD” was developed with the R language and the Shiny package.
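Hill numbers, which the attribute diversity framework generalizes, can be computed directly from relative abundances (a minimal sketch of the classical measure only; the thesis's measures additionally weight species pairs by trait distance and a distinctiveness threshold):

```python
import math

def hill_number(abundances, q):
    """Hill number (effective number of species) of order q.

    q = 0 gives species richness, q = 1 the exponential of Shannon
    entropy, q = 2 the inverse Simpson concentration.
    """
    total = sum(abundances)
    p = [a / total for a in abundances if a > 0]
    if q == 1:  # limit q -> 1
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** q for pi in p) ** (1 / (1 - q))

# Four equally abundant species behave as exactly 4 "effective" species
# at every order q:
hill_number([10, 10, 10, 10], q=0)   # 4.0 (species richness)
hill_number([10, 10, 10, 10], q=2)   # 4.0 (inverse Simpson)
```

With a strongly dominant species (e.g. abundances 97, 1, 1, 1), richness stays at 4 while the order-2 Hill number drops near 1, which is why the diversity profile over q is reported rather than a single index.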
APA, Harvard, Vancouver, ISO, and other styles
43

Amin, Bilal. Surveying & Spatial Information Systems, Faculty of Engineering, UNSW. "Software radio global navigation satellite system (GNSS) receiver front-end design: sampling and jitter considerations." 2007. http://handle.unsw.edu.au/1959.4/40881.

Full text
Abstract:
This thesis examines the sampling and jitter specifications and considerations for Global Navigation Satellite System (GNSS) software receivers. Software radio (SWR) technologies are being used to implement communication receivers in general and GNSS receivers in particular. With the advent of new GPS signals, and a range of new Galileo and GLONASS signals soon becoming available, GNSS is an application where SWR and software-defined radio (SDR) are likely to have an impact. The sampling process is critical for SWR receivers, where it occurs as close to the antenna as possible. One way to achieve this is bandpass sampling (BPS), an undersampling technique that exploits aliasing to perform downconversion. In this thesis, the allowable sampling frequencies are calculated and analyzed for multiple-frequency BPS software radio GNSS receivers. The SNR degradation due to jitter is calculated, the allowable jitter standard deviation for each GNSS band of interest is evaluated, and a basic jitter budget is derived that could assist in the design of multiple-frequency SWR GNSS receivers. Analysis shows that psec-level jitter specifications are required to keep jitter noise well below the thermal noise in software radio satellite navigation receivers. Analysis of a BPSK system also shows that large errors occur if a jittered sample crosses a data-bit boundary. The signal processing techniques required for BOC modulation are much more challenging than those for traditional BPSK: BOC and AltBOC have more transitions per chip of spreading code, so jitter creates greater SNR degradation. This work derives expressions for jitter-induced noise that take the transition probability into account for QPSK, BOC, and AltBOC systems. Both simulations and analysis are used to give a better understanding of jitter effects on software radio GNSS receivers.
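The scale of the psec-level requirement can be illustrated with the standard aperture-jitter bound for a single sinusoid (a generic textbook bound, not the thesis's transition-probability expressions, which are tighter for BOC/AltBOC signals):

```python
import math

def jitter_snr_db(carrier_hz, jitter_rms_s):
    """Upper bound on SNR set by sampling-clock aperture jitter when
    sampling a sinusoid at `carrier_hz`:

        SNR = -20 * log10(2 * pi * f * sigma_j)
    """
    return -20 * math.log10(2 * math.pi * carrier_hz * jitter_rms_s)

# 1 ps RMS jitter while directly sampling the GPS L1 carrier (1575.42 MHz):
round(jitter_snr_db(1575.42e6, 1e-12), 1)   # 40.1 dB
```

At GNSS carrier frequencies even a picosecond of RMS jitter caps the SNR near 40 dB, which is why direct bandpass sampling of L-band signals demands clocks far cleaner than those used in baseband-sampling receivers.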
APA, Harvard, Vancouver, ISO, and other styles
44

Raviprakash, Karthik. "Reduced Area Discrete-Time Down-Sampling Filter Embedded With Windowed Integration Samplers." 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-08-8396.

Full text
Abstract:
Developing a flexible receiver that can be reconfigured to multiple standards is the key to embedding numerous, ever-changing functionalities in mobile handsets. The difficulty of efficiently reconfiguring the analog blocks of a receiver chain for multiple standards calls for moving the ADC as close to the antenna as possible, so that most of the processing is done in the DSP. Different standards are sampled at different frequencies, so programmable anti-aliasing filtering is needed. Windowed integration samplers have an inherent sinc filtering that creates nulls at multiples of fs. The attenuation provided by sinc filtering for a bandwidth B is directly proportional to the sampling frequency fs, so a high sampling rate is needed to meet anti-aliasing specifications. ADCs operating at such a high oversampling rate dissipate power to no good use. Hence, there is a need for a programmable discrete-time down-sampling circuit with strong inherent anti-aliasing capability; currently existing topologies use large numbers of switches and capacitors, which occupy a lot of die area. A novel technique for reducing die area in a discrete-time sinc² ↓2 filter for charge sampling is proposed. An SNR comparison of the conventional and proposed topologies reveals that the new technique saves 25 percent of the die area occupied by the filter's sampling capacitors. The proposed idea is also extended to higher down-sampling factors, with a greater percentage of area saved as the down-sampling factor increases. The proposed filter also has the topological advantage over previously reported works of allowing designers to use active integration to charge the capacitance, which is critical for high linearity. A novel technique to implement a discrete-time sinc³ ↓2 filter for windowed integration samplers is also proposed.
The topology reduces the idle time of the integration capacitors at the expense of a small complexity overhead in clock generation, thereby saving 33 percent of the die area on capacitors compared with the currently existing topology. Circuit-level simulations in 45 nm CMOS technology show good agreement with the predicted behaviour obtained from the analysis.
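The inherent sinc filtering of a windowed integration sampler, and why its anti-alias attenuation improves with fs, can be seen from its magnitude response (a generic sketch of the standard first-order sinc response, not circuitry from the thesis; the frequencies are assumed examples):

```python
import math

def integration_sampler_gain_db(f_hz, fs_hz):
    """Magnitude response (dB) of the inherent sinc filtering of a
    windowed integration sampler integrating over one period 1/fs;
    nulls fall at every multiple of fs."""
    x = f_hz / fs_hz
    if x == 0:
        return 0.0
    mag = abs(math.sin(math.pi * x) / (math.pi * x))
    return 20 * math.log10(mag) if mag > 0 else float("-inf")

# An alias landing near the first null (at fs - B) is strongly
# attenuated, and raising fs pushes it deeper into the null:
integration_sampler_gain_db(99e6, 100e6)    # about -40 dB
integration_sampler_gain_db(199e6, 200e6)   # deeper still
```

Because the attenuation at a fixed offset B from the null scales with fs, meeting a hard anti-aliasing spec with sinc filtering alone forces the high oversampling rates the abstract argues against, motivating the discrete-time sinc² and sinc³ down-sampling filters.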
APA, Harvard, Vancouver, ISO, and other styles
45

Forbes, Travis Michael 1986. "Circuit techniques for programmable broadband radio receivers." Thesis, 2013. http://hdl.handle.net/2152/28712.

Full text
Abstract:
The functionality provided by mobile devices such as cellular phones and tablets continues to increase over the years, with integration of an ever larger number of wireless standards within a given device. In several of these designs, each standard supported by a device requires its own IC receiver to be mounted on the device’s PCB. In multistandard and multimode radios, it is desirable to integrate all receivers onto the same IC as the digital processors for the standards, in order to reduce device cost and size. Ideally all the receivers should also share a single signal chain. Since each standard has its own requirements for linearity and noise figure, and each standard operates at a different RF carrier frequency, implementing such a receiver is very challenging. Such a receiver could be theoretically implemented using a broadband mixing receiver or by direct sampling by a high-speed analog-to-digital converter (ADC). Broadband mixing requires the use of a harmonic rejection mixer (HRM) or tunable band pass filter to remove harmonic mixing effects, which in the past have suffered from a large primary clock tuning range and high power consumption. However, direct sampling of the RF input requires a high-speed ADC with large dynamic range which is typically limited by clock timing skew, clock jitter, or harmonic folding. In this dissertation, techniques for programmable broadband radio receivers are proposed. A local oscillator (LO) synthesis method within HRMs is proposed which reduces the required primary clock tuning range in broadband receivers. The LO synthesis method is implemented in 130-nm CMOS. A clocking technique is introduced within the two-stage HRM, which helps in achieving state-of-the-art harmonic rejection performance without calibration or harmonic filtering. 
An analog frequency synthesis based broadband channelizer is proposed using the LO synthesis method which is capable of channelizing a broadband input using a single mixing stage and primary clock frequency. A frequency-folded ADC architecture is proposed which enables high-speed sampling with high dynamic range. A receiver based on the frequency-folded ADC architecture is implemented in 65-nm CMOS and achieves a sample rate of 2-GS/s, a mean 49-dB SNDR, and 8.5-dB NF.
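The harmonic rejection principle behind an HRM can be checked numerically: three square-wave LO paths shifted by 45° and weighted 1 : √2 : 1 cancel the 3rd and 5th harmonics (a textbook single-stage illustration, not the dissertation's two-stage, calibration-free implementation):

```python
import math

def hrm_harmonic_gain(n, weights=(1.0, math.sqrt(2), 1.0),
                      phases=(-math.pi / 4, 0.0, math.pi / 4)):
    """Relative gain of the n-th LO harmonic in a 3-path harmonic
    rejection mixer. A square wave's n-th harmonic has relative
    amplitude 1/n (n odd); the paths add with phase n*phi each."""
    acc = sum(w * complex(math.cos(n * p), math.sin(n * p))
              for w, p in zip(weights, phases))
    return abs(acc) / n

# The 3rd and 5th harmonics cancel exactly; the 7th survives and must
# be handled by a further stage or filtering:
round(hrm_harmonic_gain(3), 6)   # 0.0
round(hrm_harmonic_gain(5), 6)   # 0.0
```

In silicon the cancellation is limited by gain and phase mismatch between paths, which is why practical designs cascade stages or add calibration to reach state-of-the-art rejection.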
APA, Harvard, Vancouver, ISO, and other styles
46

Marchenko, Yulia V. "Multivariate Skew-t Distributions in Econometrics and Environmetrics." Thesis, 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8764.

Full text
Abstract:
This dissertation is composed of three articles describing novel approaches for analysis and modeling using multivariate skew-normal and skew-t distributions in econometrics and environmetrics. In the first article we introduce the Heckman selection-t model. Sample selection arises often as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. Heckman introduced a sample-selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample-selection models and recently studied families of extended skew-elliptical distributions. This then allows us to introduce a selection-t model, which models the error distribution using a Student’s t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the selection-t model to the Heckman selection model and apply it to analyze ambulatory expenditures. In the second article we introduce a family of multivariate log-skew-elliptical distributions, extending the list of multivariate distributions with positive support. We investigate their probabilistic properties such as stochastic representations, marginal and conditional distributions, and existence of moments, as well as inferential properties. We demonstrate, for example, that as for the log-t distribution, the positive moments of the log-skew-t distribution do not exist. 
Our emphasis is on two special cases, the log-skew-normal and log-skew-t distributions, which we use to analyze U.S. precipitation data. Many commonly used statistical methods assume that data are normally distributed. This assumption is often violated in practice, which has prompted the development of more flexible distributions. In the third article we describe two such multivariate distributions, the skew-normal and the skew-t, and present commands for fitting univariate and multivariate skew-normal and skew-t regressions in the statistical software package Stata.
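The skew-normal density underlying this family has the closed form f(x) = 2·φ(x)·Φ(αx), which is easy to evaluate directly (a minimal sketch of Azzalini's standard skew-normal; the skew-t replaces the normal kernel with a Student's t to obtain the heavier tails discussed above):

```python
import math

def skew_normal_pdf(x, alpha):
    """Azzalini's standard skew-normal density f(x) = 2*phi(x)*Phi(alpha*x).

    alpha = 0 recovers the standard normal; positive alpha skews the
    mass to the right, negative alpha to the left.
    """
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    big_phi = 0.5 * (1 + math.erf(alpha * x / math.sqrt(2)))
    return 2 * phi * big_phi

# alpha = 0 collapses to the standard normal density at the origin:
round(skew_normal_pdf(0.0, alpha=0.0), 6)   # 0.398942
```

The same construction extends to the multivariate and extended skew-elliptical cases via a multivariate kernel and a scalar skewing factor, which is the link to sample-selection models exploited in the first article.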
APA, Harvard, Vancouver, ISO, and other styles
47

Ulerich, Rhys David. "Reducing turbulence- and transition-driven uncertainty in aerothermodynamic heating predictions for blunt-bodied reentry vehicles." Thesis, 2014. http://hdl.handle.net/2152/26886.

Full text
Abstract:
Turbulent boundary layers approximating those found on the NASA Orion Multi-Purpose Crew Vehicle (MPCV) thermal protection system during atmospheric reentry from the International Space Station have been studied by direct numerical simulation, with the ultimate goal of reducing aerothermodynamic heating prediction uncertainty. Simulations were performed using a new, well-verified, openly available Fourier/B-spline pseudospectral code called Suzerain, equipped with a "slow growth" spatiotemporal homogenization approximation recently developed by Topalian et al. A first study aimed to reduce turbulence-driven heating prediction uncertainty by providing high-quality data suitable for calibrating Reynolds-averaged Navier–Stokes turbulence models to address the atypical boundary layer characteristics found in such reentry problems. The two data sets generated were Ma ≈ 0.9 and 1.15 homogenized boundary layers possessing Re_θ ≈ 382 and 531, respectively. Edge-to-wall temperature ratios, T_e/T_w, were close to 4.15, and wall blowing velocities, v_w⁺ = v_w/u_τ, were about 8 × 10⁻³. The favorable pressure gradients had Pohlhausen parameters between 25 and 42. Skin friction coefficients around 6 × 10⁻³ and Nusselt numbers under 22 were observed. Near-wall vorticity fluctuations show qualitatively different profiles than observed by Spalart (J. Fluid Mech. 187 (1988)) or Guarini et al. (J. Fluid Mech. 414 (2000)). Small or negative displacement effects are evident. Uncertainty estimates and Favre-averaged equation budgets are provided. A second study aimed to reduce transition-driven uncertainty by determining where on the thermal protection system surface the boundary layer could sustain turbulence.
Local boundary layer conditions were extracted from a laminar flow solution over the MPCV which included the bow shock, aerothermochemistry, heat shield surface curvature, and ablation. That information, as a function of leeward distance from the stagnation point, was approximated by Re_θ, Ma_e, [mathematical equation], v_w⁺, and T_e/T_w along with perfect gas assumptions. Homogenized turbulent boundary layers were initialized at those local conditions and evolved until either stationarity, implying the conditions could sustain turbulence, or relaminarization, implying they could not. Fully turbulent fields relaminarized subject to the conditions 4.134 m and 3.199 m leeward of the stagnation point. However, different initial conditions produced long-lived fluctuations at the leeward position of 2.299 m. Locations more than 1.389 m leeward of the stagnation point are predicted to sustain turbulence in this scenario.
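The Favre (density-weighted) averaging used in the reported equation budgets is defined as f̃ = ⟨ρf⟩/⟨ρ⟩, sketched below (a generic illustration of the definition, not code from Suzerain):

```python
def favre_average(rho, f):
    """Density-weighted (Favre) average of field samples `f` with
    corresponding density samples `rho`:  f_tilde = mean(rho*f)/mean(rho).

    In compressible flow this weighting prevents density fluctuations
    from contaminating the averaged transport equations.
    """
    return sum(r * fi for r, fi in zip(rho, f)) / sum(rho)

# Denser samples carry more weight than in a plain arithmetic mean:
favre_average([1.0, 2.0], [1.0, 2.0])   # 5/3, versus plain mean 1.5
```

With the strong wall cooling reported here (T_e/T_w ≈ 4.15), near-wall density varies severalfold, so Favre and Reynolds averages of the same field can differ noticeably, which is why the budgets are stated in Favre form.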
APA, Harvard, Vancouver, ISO, and other styles