Dissertations on the topic "Approximate sampling"
Format your sources in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 16 dissertations for research on the topic "Approximate sampling".
Nutini, Julie Ann. "A derivative-free approximate gradient sampling algorithm for finite minimax problems." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42200.
Rösch, Philipp, and Wolfgang Lehner. "Optimizing Sample Design for Approximate Query Processing." IGI Global, 2013. https://tud.qucosa.de/id/qucosa%3A72930.
Le, Quoc Do. "Approximate Data Analytics Systems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-234219.
Jahraus, Karen Veronica. "Using the jackknife technique to approximate sampling error for the cruise-based lumber recovery factor." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26419.
Повний текст джерелаForestry, Faculty of
Graduate
Rösch, Philipp. "Design von Stichproben in analytischen Datenbanken." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-22916.
Recent studies have shown the fast and multi-dimensional growth in analytical databases: Over the last four years, the data volume has risen by a factor of 10; the number of users has increased by an average of 25% per year; and the number of queries has been doubling every year since 2004. These queries have increasingly become complex join queries with aggregations; they are often of an explorative nature and interactively submitted to the system. One option to address the need for interactivity in the context of this strong, multi-dimensional growth is the use of samples and an approximate query processing approach based on those samples. Such a solution offers significantly shorter response times as well as estimates with probabilistic error bounds. Given that joins, groupings and aggregations are the main components of analytical queries, the following requirements for the design of samples in analytical databases arise: 1) The foreign-key integrity between the samples of foreign-key related tables has to be preserved. 2) Any existing groups have to be represented appropriately. 3) Aggregation attributes have to be checked for extreme values. For each of these sub-problems, this dissertation presents sampling techniques that are characterized by memory-bounded samples and low estimation errors. In the first of these presented approaches, a correlated sampling process guarantees the referential integrity while only using up a minimum of additional memory. The second illustrated sampling technique considers the data distribution, and as a result, any arbitrary grouping is supported; all groups are appropriately represented. In the third approach, the multi-column outlier handling leads to low estimation errors for any number of aggregation attributes. For all three approaches, the quality of the resulting samples is discussed and considered when computing memory-bounded samples.
In order to keep the computation effort - and thus the system load - at a low level, heuristics are provided for each algorithm; these are marked by high efficiency and minimal effects on the sampling quality. Furthermore, the dissertation examines all possible combinations of the presented sampling techniques; such combinations make it possible to further reduce estimation errors while at the same time increasing the range of applicability of the resulting samples. By combining all three techniques, a sampling technique is introduced that meets all requirements for the design of samples in analytical databases and merges the advantages of the individual techniques. Thereby, the approximate but very precise answering of a wide range of queries becomes a true possibility.
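The core idea behind sample-based approximate query processing as described in this abstract can be sketched in a few lines. The code below is an illustrative toy, not taken from the dissertation: it answers a SUM query from a uniform row sample and attaches a rough CLT-based 95% confidence half-width, the kind of probabilistic error bound the abstract mentions.

```python
import math
import random

def approximate_sum(table, sample_rate, seed=0):
    """Estimate SUM over a column from a uniform row sample and report
    a rough 95% confidence half-width via the central limit theorem."""
    rng = random.Random(seed)
    sample = [v for v in table if rng.random() < sample_rate]
    n, N = len(sample), len(table)
    scale = N / n                       # scale the sample sum up to the table
    estimate = scale * sum(sample)
    mean = sum(sample) / n
    var = sum((v - mean) ** 2 for v in sample) / (n - 1)
    half_width = 1.96 * N * math.sqrt(var / n)
    return estimate, half_width

table = list(range(1, 1001))            # true SUM = 500500
estimate, half_width = approximate_sum(table, sample_rate=0.2)
```

Scanning a 20% sample trades a small, quantified estimation error for a proportional reduction in work, which is the interactivity argument made above.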
Žakienė, Inesa. "Horvico ir Tompsono įvertinio dispersijos vertinimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120813_131528-29461.
In this master's thesis, the weights of the Horvitz & Thompson variance estimator are defined using several different distance functions and calibration equations. In this way, eight new estimators of the variance of the Horvitz & Thompson estimator were constructed. Using the Taylor linearization method, the approximate variances of the constructed estimators were derived, and estimators of these variances are proposed as well. We also perform mathematical modeling using MATLAB; the aim of this modeling is to compare the new estimators with each other and with a standard one, and to analyze how the accuracy of the estimators depends on the selected sampling design.
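For readers unfamiliar with the estimator underlying this thesis: the Horvitz-Thompson estimator of a population total weights each sampled value by the inverse of its inclusion probability. A minimal sketch (toy data, equal-probability design assumed for illustration; not from the thesis itself):

```python
import random

def horvitz_thompson_total(sample, inclusion_probs):
    """Horvitz-Thompson estimator of a population total: each sampled
    value y_i is weighted by 1 / pi_i, its inclusion probability."""
    return sum(y / pi for y, pi in zip(sample, inclusion_probs))

# Toy population; a without-replacement sample of size n under an
# equal-probability design has pi_i = n / N for every unit.
population = [4.0, 8.0, 15.0, 16.0, 23.0, 42.0]
N, n = len(population), 3
sample = random.Random(1).sample(population, n)
estimate = horvitz_thompson_total(sample, [n / N] * n)
```

The thesis's contribution concerns estimating the variance of this estimator, where the choice of distance function in the calibration step changes the weights.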
Heng, Jeremy. "On the use of transport and optimal control methods for Monte Carlo simulation." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:6cbc7690-ac54-4a6a-b235-57fa62e5b2fc.
Vo, Brenda. "Novel likelihood-free Bayesian parameter estimation methods for stochastic models of collective cell spreading." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/99588/1/Brenda_Vo_Thesis.pdf.
Cao, Phuong Thao. "Approximation of OLAP queries on data warehouses." PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00905292.
Sedki, Mohammed Amechtoh. "Échantillonnage préférentiel adaptatif et méthodes bayésiennes approchées appliquées à la génétique des populations." Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20041/document.
This thesis consists of two parts which can be read independently. The first part concerns the Adaptive Multiple Importance Sampling (AMIS) algorithm presented in Cornuet et al. (2012), which provides a significant improvement in stability and effective sample size thanks to the introduction of a recycling procedure. These numerical properties are particularly suited to the Bayesian paradigm in population genetics, where the modelling involves a large number of parameters. However, the consistency of the AMIS estimator remains largely open. In this work, we provide a novel Adaptive Multiple Importance Sampling scheme, a slight modification of the Cornuet et al. (2012) proposal, that preserves the above-mentioned improvements. Finally, using limit theorems for triangular arrays of conditionally independent random variables, we give a consistency result for the final particle system returned by our new scheme. The second part of this thesis lies in the ABC paradigm. Approximate Bayesian Computation has been successfully used in population genetics models to bypass the calculation of the likelihood. These algorithms provide an accurate estimator by comparing the observed dataset to a sample of datasets simulated from the model. Although parallelization is easily achieved, the computation times needed to ensure a suitable approximation quality of the posterior distribution are still long. To alleviate this issue, we propose a sequential algorithm, adapted from Del Moral et al. (2012), which runs twice as fast as traditional ABC algorithms. Its parameters are calibrated to minimize the number of simulations from the model.
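The likelihood-free idea that the second part builds on is easiest to see in plain ABC rejection, which the sequential algorithm of the thesis accelerates. The sketch below is that baseline only (toy Gaussian model, names chosen here for illustration), not the Del Moral-style sequential scheme:

```python
import random
import statistics

def abc_rejection(observed, simulate, prior_sample, n_draws, tol):
    """Plain ABC rejection: draw a parameter from the prior, simulate a
    summary statistic from the model, and keep the draw only if the
    simulated summary lies within `tol` of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(simulate(theta) - observed) <= tol:
            accepted.append(theta)
    return accepted

# Toy model: 20 observations from Normal(theta, 1); summary = mean.
rng = random.Random(0)
posterior = abc_rejection(
    observed=2.0,
    simulate=lambda t: statistics.fmean(rng.gauss(t, 1.0) for _ in range(20)),
    prior_sample=lambda: rng.uniform(-5.0, 5.0),
    n_draws=3000,
    tol=0.3,
)
```

No likelihood is ever evaluated; the accepted draws approximate the posterior, and the cost is dominated by the number of model simulations, which is exactly what the thesis's sequential algorithm is calibrated to minimize.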
Kramer, Stephan Christoph. "CUDA-based Scientific Computing." Doctoral thesis, Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2012. http://hdl.handle.net/11858/00-1735-0000-000D-FB52-0.
Craiu, Virgil Radu. "Multivalent framework for approximate and exact sampling and resampling /." 2001. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3006484.
Le, Quoc Do. "Approximate Data Analytics Systems." Doctoral thesis, 2017. https://tud.qucosa.de/id/qucosa%3A30872.
Повний текст джерелаYang, Shin-Ta, and 楊世達. "Improvement of the curve-based method of finding approximate repeating patterns: frame sampling and re-mapping." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/16264505268482025088.
Fu Jen Catholic University
Department of Computer Science and Information Engineering
Academic year 93 (ROC calendar)
In the music information retrieval field, the most important topic is extracting features that represent the content of music objects; such content features are useful for music analysis, music retrieval, and other services. In this paper, an application of feature extraction from music data is first introduced to motivate our research on finding approximate repeating patterns in sequence data, one of the key issues in music information retrieval. Liu proposed a curve-based algorithm to efficiently find nontrivial, approximate repeating patterns in music data. First, the given interval sequence is cut into interval substrings by a sliding window. By applying the DCT, each substring is mapped to a point in a feature space. Points that are near each other in Euclidean distance are self-joined into trails, so similar trails correspond to similar interval substrings, which are the candidates for approximate repeating patterns. A validation process then confirms the final result set of nontrivial repeating patterns. We introduce frame sampling to search for patterns more exactly, and we replace the self-joining step with a re-mapping method. Experiments are also performed to show the efficiency and effectiveness of our approach.
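The sliding-window-plus-DCT mapping described above can be sketched directly. This is a simplified illustration (truncated DCT-II, toy interval sequence), not the authors' implementation: identical interval substrings land on the same feature point, so repeating patterns show up as near-zero Euclidean distances.

```python
import math

def dct_features(window, k=4):
    """DCT-II coefficients of a window, truncated to the first k terms,
    so that similar interval substrings map to nearby feature points."""
    n = len(window)
    return [
        sum(x * math.cos(math.pi * (i + 0.5) * u / n) for i, x in enumerate(window))
        for u in range(min(k, n))
    ]

def sliding_windows(seq, width):
    """Cut the interval sequence into overlapping substrings."""
    return [seq[i:i + width] for i in range(len(seq) - width + 1)]

def euclid(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# The interval pattern [2, 2, -4] occurs twice (positions 0 and 3),
# so those two windows map to the same point in feature space.
intervals = [2, 2, -4, 2, 2, -4, 5]
points = [dct_features(w) for w in sliding_windows(intervals, 3)]
```

Grouping points by small pairwise distance then yields the candidate approximate repeating patterns that the validation step confirms.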
Le, Huu Minh. "New algorithmic developments in maximum consensus robust fitting." Thesis, 2018. http://hdl.handle.net/2440/115183.
Thesis (Ph.D.) (Research by Publication) -- University of Adelaide, School of Computer Science, 2018
Paz, Solange de Lemos. "Processamento aproximado de pesquisas para análise de Big Data." Master's thesis, 2019. http://hdl.handle.net/10316/87927.
Over the last ten years, the growth of digital data has increased exponentially. With the increase in the amount of data processed daily, using data analysis to quickly extract relevant information has become an increasingly important and difficult task. Current technologies for data analysis, which rely on relational database systems and data warehouses, have become incapable of handling large amounts of data efficiently. A query on these systems may take hours to return a result, hence the need to improve their performance in terms of cost and time. To improve this performance, approximate query processing systems have emerged. These systems ensure the rapid processing of large amounts of data, giving up 100% accuracy in the answer in exchange for shorter response times, by using only a portion of the data set. Over the last decades, several approximate query processing techniques have been proposed; however, they have limitations. This work proposes and evaluates a new approximate query processing technique that mitigates the following shortcomings of current approaches: it does not require any changes to the database, since it has a middleware architecture; it allows the confidence level and the maximum error allowed for a query answer to be parameterized; and it handles most types of queries. This technique, named JDBCApprox, is implemented as a Java library that uses simple random sampling without replacement to create samples of the database tables, and then uses a database with an in-memory configuration to speed up query response times. The experimental evaluation showed that JDBCApprox can be up to 24 times faster than PostgreSQL and, in most cases, returns more accurate answers than the best state-of-the-art system.
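The JDBCApprox recipe described above, a simple random sample without replacement loaded into an in-memory database behind a middleware layer, can be mimicked in a few lines. The sketch below uses Python and SQLite purely for illustration (the thesis's library is Java; all names here are invented):

```python
import random
import sqlite3

def build_sampled_db(rows, n, seed=0):
    """Load a simple random sample (without replacement) of `rows` into
    an in-memory SQLite database, mimicking a middleware layer that
    answers queries from a sample rather than the full table."""
    sample = random.Random(seed).sample(rows, n)
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE t (value REAL)")
    db.executemany("INSERT INTO t VALUES (?)", [(v,) for v in sample])
    return db, len(rows) / n    # scale factor for SUM/COUNT answers

rows = list(range(1, 10001))            # true SUM = 50005000
db, scale = build_sampled_db(rows, n=500)
(approx_sum,) = db.execute("SELECT SUM(value) * ? FROM t", (scale,)).fetchone()
```

Because the original database is never modified, only queried once to draw the sample, the approach stays a pure middleware: the application issues ordinary SQL against the sampled in-memory table and scales aggregate answers by N/n.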