Academic literature on the topic 'Approximate sampling'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Approximate sampling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Approximate sampling"

1

Von Collani, Elart. "Approximate a-optimal sampling plans." Statistics 18, no. 3 (January 1987): 333–44. http://dx.doi.org/10.1080/02331888708802025.

2

Carrizosa, Emilio. "On approximate Monetary Unit Sampling." European Journal of Operational Research 217, no. 2 (March 2012): 479–82. http://dx.doi.org/10.1016/j.ejor.2011.09.037.

3

Dimitrakakis, Christos, and Michail G. Lagoudakis. "Rollout sampling approximate policy iteration." Machine Learning 72, no. 3 (July 10, 2008): 157–71. http://dx.doi.org/10.1007/s10994-008-5069-3.

4

Rodrigues, G. S., David J. Nott, and S. A. Sisson. "Likelihood-free approximate Gibbs sampling." Statistics and Computing 30, no. 4 (March 11, 2020): 1057–73. http://dx.doi.org/10.1007/s11222-020-09933-x.

5

Ryan, Kenneth J. "Approximate Confidence Intervals for p When Double Sampling." The American Statistician 63, no. 2 (May 2009): 132–40. http://dx.doi.org/10.1198/tast.2009.0027.

6

Geng, Bo, HuiJuan Zhang, Heng Wang, and GuoPing Wang. "Approximate Poisson disk sampling on mesh." Science China Information Sciences 56, no. 9 (September 9, 2011): 1–12. http://dx.doi.org/10.1007/s11432-011-4322-8.

7

Wang, Z., J. K. Kim, and S. Yang. "Approximate Bayesian inference under informative sampling." Biometrika 105, no. 1 (December 18, 2017): 91–102. http://dx.doi.org/10.1093/biomet/asx073.

8

Shaltiel, Ronen, and Christopher Umans. "Pseudorandomness for Approximate Counting and Sampling." Computational Complexity 15, no. 4 (December 2006): 298–341. http://dx.doi.org/10.1007/s00037-007-0218-9.

9

Monaco, Salvatore, and Dorothée Normand-Cyrot. "Linearization by Output Injection under Approximate Sampling." European Journal of Control 15, no. 2 (January 2009): 205–17. http://dx.doi.org/10.3166/ejc.15.205-217.

10

Chaudhuri, Surajit, Gautam Das, and Vivek Narasayya. "Optimized stratified sampling for approximate query processing." ACM Transactions on Database Systems 32, no. 2 (June 2007): 9. http://dx.doi.org/10.1145/1242524.1242526.


Dissertations / Theses on the topic "Approximate sampling"

1

Nutini, Julie Ann. "A derivative-free approximate gradient sampling algorithm for finite minimax problems." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42200.

Abstract:
Mathematical optimization is the process of minimizing (or maximizing) a function. An algorithm is used to optimize a function when the minimum cannot be found by hand, or finding the minimum by hand is inefficient. The minimum of a function is a critical point and corresponds to a gradient (derivative) of 0. Thus, optimization algorithms commonly require gradient calculations. When gradient information of the objective function is unavailable, unreliable or ‘expensive’ in terms of computation time, a derivative-free optimization algorithm is ideal. As the name suggests, derivative-free optimization algorithms do not require gradient calculations. In this thesis, we present a derivative-free optimization algorithm for finite minimax problems. Structurally, a finite minimax problem minimizes the maximum taken over a finite set of functions. We focus on the finite minimax problem due to its frequent appearance in real-world applications. We present convergence results for a regular and a robust version of our algorithm, showing in both cases that either the function is unbounded below (the minimum is −∞) or we have found a critical point. Theoretical results are explored for stopping conditions. Additionally, theoretical and numerical results are presented for three examples of approximate gradients that can be used in our algorithm: the simplex gradient, the centered simplex gradient and the Gupal estimate of the gradient of the Steklov averaged function. A performance comparison is made between the regular and robust algorithm, the three approximate gradients, and the regular and robust stopping conditions. Finally, an application in seismic retrofitting is discussed.
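To make the approximate-gradient idea concrete, below is a minimal Python sketch of a centered simplex gradient taken over coordinate directions (for this choice of sample points it reduces to central differences), driving a crude backtracking descent on a finite minimax objective. All names and the descent loop are illustrative stand-ins, not the thesis's robust algorithm or its stopping conditions.

```python
import numpy as np

def centered_simplex_gradient(f, x, h=1e-4):
    """Approximate grad f(x) from the 2n sampled points x +/- h*e_i.
    For this coordinate simplex, the centered simplex gradient
    coincides with central finite differences."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def minimax_descent(fs, x0, iters=200, h=1e-4, step=0.5):
    """Crude descent for F(x) = max_i f_i(x): step against the
    approximate gradient of the currently active function, halving
    the step until F decreases (hypothetical stand-in for the
    thesis's active-set machinery)."""
    x = np.asarray(x0, dtype=float)
    F = lambda y: max(f(y) for f in fs)
    for _ in range(iters):
        active = max(fs, key=lambda f: f(x))        # function attaining the max
        g = centered_simplex_gradient(active, x, h)
        t, fx = step, F(x)
        while F(x - t * g) >= fx and t > 1e-12:     # backtracking line search
            t /= 2.0
        if t <= 1e-12:
            break                                   # no descent found
        x = x - t * g
    return x

# Example: minimize max(|x1| + x2, x1**2 - x2)
fs = [lambda z: abs(z[0]) + z[1], lambda z: z[0] ** 2 - z[1]]
print(minimax_descent(fs, np.array([2.0, 1.0])))
```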
2

Rösch, Philipp, and Wolfgang Lehner. "Optimizing Sample Design for Approximate Query Processing." IGI Global, 2013. https://tud.qucosa.de/id/qucosa%3A72930.

Abstract:
The rapid increase of data volumes makes sampling a crucial component of modern data management systems. Although there is a large body of work on database sampling, the problem of automatically determining the optimal sample for a given query has remained (almost) unaddressed. To tackle this problem, the authors propose a sample advisor based on a novel cost model. Primarily designed for advising samples for a few queries specified by an expert, the sample advisor is additionally extended in two ways. The first extension enhances applicability by utilizing recorded workload information and taking memory bounds into account. The second extension increases effectiveness by merging samples in the case of overlapping pieces of sample advice. For both extensions, the authors present exact and heuristic solutions. In their evaluation, the authors analyze the properties of the cost model and demonstrate the effectiveness and efficiency of the heuristic solutions with a variety of experiments.
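The mechanics the sample advisor builds on, answering an aggregate query approximately from a memory-bounded uniform sample with a probabilistic error bound, can be sketched in a few lines of Python. This is illustrative only (hypothetical synthetic data; the chapter's actual contribution is the cost model and the advice-merging advisor):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical fact-table measure column (one million rows)
population = rng.lognormal(mean=3.0, sigma=1.0, size=1_000_000)

n = 10_000                                    # memory-bounded sample size
sample = rng.choice(population, size=n, replace=False)

N = population.size
est_sum = N * sample.mean()                   # unbiased estimate of SUM(column)
se = N * sample.std(ddof=1) / np.sqrt(n)      # CLT standard error
                                              # (finite-population correction omitted)
print(f"estimate = {est_sum:.4e} +/- {1.96 * se:.4e}  (95% bound)")
print(f"true SUM = {population.sum():.4e}")
```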
3

Le, Quoc Do. "Approximate Data Analytics Systems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-234219.

Abstract:
Today, most modern online services make use of big data analytics systems to extract useful information from raw digital data. The data normally arrives as a continuous stream at high speed and in huge volumes. The cost of handling this massive data can be significant. Providing interactive latency in processing the data is often impractical because the data is growing exponentially, even faster than Moore's law predicts. To overcome this problem, approximate computing has recently emerged as a promising solution. Approximate computing is based on the observation that many modern applications are amenable to an approximate, rather than exact, output. Unlike traditional computing, approximate computing tolerates lower accuracy to achieve lower latency by computing over a partial subset instead of the entire input data. Unfortunately, advancements in approximate computing are primarily geared towards batch analytics and cannot provide low-latency guarantees in the context of stream processing, where new data continuously arrives as an unbounded stream. In this thesis, we design and implement approximate computing techniques for processing and interacting with high-speed and large-scale stream data to achieve low latency and efficient utilization of resources. To achieve these goals, we have designed and built the following approximate data analytics systems:
• StreamApprox—a data stream analytics system for approximate computing. This system supports approximate computing for low-latency stream analytics in a transparent way and can adapt to rapid fluctuations of input data streams. In this system, we designed an online adaptive stratified reservoir sampling algorithm to produce approximate output with bounded error.
• IncApprox—a data analytics system for incremental approximate computing. This system adopts approximate and incremental computing in stream processing to achieve high throughput and low latency with efficient resource utilization. In this system, we designed an online stratified sampling algorithm that uses self-adjusting computation to produce an incrementally updated approximate output with bounded error.
• PrivApprox—a data stream analytics system for privacy-preserving and approximate computing. This system supports high-utility, low-latency data analytics while preserving users' privacy at the same time. The system is based on the combination of privacy-preserving data analytics and approximate computing.
• ApproxJoin—an approximate distributed join system. This system improves the performance of joins, critical but expensive operations in big data systems. In this system, we employed a sketching technique (Bloom filters) to avoid shuffling non-joinable data items through the network, and proposed a novel sampling mechanism that executes during the join to obtain an unbiased representative sample of the join output.
Our evaluation, based on micro-benchmarks and real-world case studies, shows that these systems can achieve significant performance speedups compared to state-of-the-art systems while tolerating negligible accuracy loss in the analytics output. In addition, our systems allow users to systematically trade accuracy against throughput/latency, and require no or minor modifications to existing applications.
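As an illustration of the sampling primitive behind StreamApprox, the following Python sketch keeps a fixed-size reservoir per stratum (Vitter's Algorithm R within each stratum) and forms a stratified estimate of the stream mean. The fixed per-stratum budget is a simplification; the actual system adapts budgets to fluctuating input rates.

```python
import random
from collections import defaultdict

class StratifiedReservoir:
    """Per-stratum reservoir sampling: a simplified, non-adaptive
    sketch of stratified reservoir sampling."""

    def __init__(self, per_stratum_size):
        self.k = per_stratum_size
        self.reservoirs = defaultdict(list)
        self.counts = defaultdict(int)

    def add(self, stratum, item):
        self.counts[stratum] += 1
        r = self.reservoirs[stratum]
        if len(r) < self.k:
            r.append(item)
        else:
            j = random.randrange(self.counts[stratum])
            if j < self.k:                # replace with probability k/count
                r[j] = item

    def estimate_mean(self):
        """Stratified estimator: weight each stratum's sample mean by
        the stratum's observed share of the stream."""
        total = sum(self.counts.values())
        return sum(
            (c / total) * (sum(self.reservoirs[s]) / len(self.reservoirs[s]))
            for s, c in self.counts.items()
        )

random.seed(1)
sr = StratifiedReservoir(per_stratum_size=100)
for i in range(100_000):
    stratum = i % 3                       # e.g., a substream or source id
    sr.add(stratum, random.gauss(10.0 * stratum, 1.0))
print(sr.estimate_mean())                 # close to (0 + 10 + 20) / 3 = 10
```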
4

Jahraus, Karen Veronica. "Using the jackknife technique to approximate sampling error for the cruise-based lumber recovery factor." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26419.

Abstract:
Timber cruises in the interior of British Columbia are designed to meet precision requirements for estimating total net merchantable volume. The effect of this single objective design on the precision of other cruise-based estimates is not calculated. One key secondary objective, used in the stumpage appraisal of timber in the interior of the province, is estimation of the lumber recovery factor (LRF). The importance of the LRF in determining stumpage values and the fact that its precision is not presently calculated, prompted this study. Since the LRF is a complicated statistic obtained from a complex sampling design, standard methods of variance calculation cannot be applied. Therefore, the jackknife procedure, a replication technique for approximating variance, was used to determine the sampling error for LRF. In the four cruises examined, the sampling error for LRF ranged from 1.27 fbm/m³ to 15.42 fbm/m³. The variability in the LRF was related to the number of sample trees used in its estimation. The impact of variations in the LRF on the appraised stumpage rate was influenced by the lumber selling price, the profit and risk ratio and the chip value used in the appraisal calculations. In the cruises investigated, the change in the stumpage rate per unit change in the LRF ranged between $0.17/m³ and $0.21/m³. As a result, sampling error in LRF can have a significant impact on assessed stumpage rates. Non-sampling error is also a major error source associated with LRF, but until procedural changes occur, control of sampling error is the only available means of increasing the precision of the LRF estimate. Consequently, it is recommended that the cruise design objectives be modified to include a maximum allowable level of sampling error for the LRF.
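The jackknife itself is simple to state: recompute the statistic with each sampling unit deleted in turn, and combine the spread of the replicates into a variance approximation. Below is a minimal Python sketch with hypothetical per-tree data and an LRF-like ratio-of-sums statistic; the actual cruise design is more complex, so this shows only the idea, not the study's estimator.

```python
import numpy as np

def jackknife_se(stat, data):
    """Delete-one jackknife standard error of stat(data),
    treating rows as the sampling units."""
    n = data.shape[0]
    reps = np.array([stat(np.delete(data, i, axis=0)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))

# Hypothetical sample trees: lumber recovery (fbm) and net volume (m^3)
rng = np.random.default_rng(7)
lumber = rng.gamma(shape=5.0, scale=40.0, size=60)   # fbm per tree
volume = rng.gamma(shape=5.0, scale=0.2, size=60)    # m^3 per tree
data = np.column_stack([lumber, volume])

lrf = lambda d: d[:, 0].sum() / d[:, 1].sum()        # ratio-of-sums statistic
print(f"LRF = {lrf(data):.1f} fbm/m^3, "
      f"jackknife SE = {jackknife_se(lrf, data):.2f} fbm/m^3")
```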
5

Rösch, Philipp. "Design von Stichproben in analytischen Datenbanken." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-22916.

Abstract:
Recent studies have shown fast, multi-dimensional growth in analytical databases: over the last four years, the data volume has risen by a factor of 10; the number of users has increased by an average of 25% per year; and the number of queries has been doubling every year since 2004. These queries have increasingly become complex join queries with aggregations; they are often of an explorative nature and are submitted interactively to the system. One option to address the need for interactivity in the context of this strong, multi-dimensional growth is the use of samples and an approximate query processing approach based on those samples. Such a solution offers significantly shorter response times as well as estimates with probabilistic error bounds. Given that joins, groupings, and aggregations are the main components of analytical queries, the following requirements arise for the design of samples in analytical databases: 1) the foreign-key integrity between the samples of foreign-key-related tables has to be preserved; 2) any existing groups have to be represented appropriately; 3) aggregation attributes have to be checked for extreme values. For each of these sub-problems, this dissertation presents sampling techniques that are characterized by memory-bounded samples and low estimation errors. In the first of the presented approaches, a correlated sampling process guarantees referential integrity while using only a minimum of additional memory. The second sampling technique considers the data distribution and, as a result, supports arbitrary groupings with all groups appropriately represented. In the third approach, multi-column outlier handling leads to low estimation errors for any number of aggregation attributes. For all three approaches, the quality of the resulting sample is discussed and taken into account when computing memory-bounded samples. In order to keep the computation effort - and thus the system load - at a low level, heuristics are provided for each algorithm; these are marked by high efficiency and minimal effects on sample quality. Furthermore, the dissertation examines all possible combinations of the presented sampling techniques; such combinations allow estimation errors to be reduced further while at the same time widening the range of applicability of the resulting samples. With the combination of all three techniques, a sampling technique is introduced that meets all requirements for the design of samples in analytical databases and merges the advantages of the individual solutions, making it possible to answer a wide range of queries approximately but with high precision.
6

Žakienė, Inesa. "Horvico ir Tompsono įvertinio dispersijos vertinimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120813_131528-29461.

Abstract:
In this master's thesis, the weights of estimators of the variance of the Horvitz-Thompson estimator are derived using different distance functions and calibration equations. In this way, eight new estimators of the variance of the Horvitz-Thompson estimator are constructed. Using the Taylor linearization method, approximate variances of the constructed estimators are derived, and estimators of these variances are proposed as well. The thesis also performs mathematical modeling, with experiments carried out using MATLAB programs written by the author. The aim of the modeling is to compare the new estimators with each other and with the standard estimator, and to analyze how the accuracy of the estimators depends on the selected sampling design.
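For reference, the Horvitz-Thompson estimator and a textbook variance estimator are easy to state under Poisson sampling (independent inclusions), the simplest design for which the formulas close. The thesis's calibrated variance estimators refine this kind of baseline; the data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5_000
y = rng.gamma(shape=2.0, scale=50.0, size=N)        # study variable
pi = np.clip(y / y.sum() * 800, 0.01, 1.0)          # inclusion probabilities,
                                                    # roughly size-proportional

# Poisson sampling: unit i enters the sample independently with prob pi_i
sampled = rng.random(N) < pi

ht_total = np.sum(y[sampled] / pi[sampled])         # Horvitz-Thompson estimator

# Unbiased variance estimator under Poisson sampling:
#   Vhat = sum_{i in s} (1 - pi_i) * y_i^2 / pi_i^2
v_hat = np.sum((1 - pi[sampled]) * y[sampled] ** 2 / pi[sampled] ** 2)
print(f"true total {y.sum():.0f}, HT estimate {ht_total:.0f} "
      f"+/- {1.96 * np.sqrt(v_hat):.0f}")
```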
7

Heng, Jeremy. "On the use of transport and optimal control methods for Monte Carlo simulation." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:6cbc7690-ac54-4a6a-b235-57fa62e5b2fc.

Abstract:
This thesis explores ideas from transport theory and optimal control to develop novel Monte Carlo methods for efficient statistical computation. The first project considers the problem of constructing a transport map between two given probability measures. In the Bayesian formalism, this approach is natural when one introduces a curve of probability measures connecting the prior to the posterior by tempering the likelihood function. The main idea is to move samples from the prior using an ordinary differential equation (ODE), constructed by solving the Liouville partial differential equation (PDE) which governs the time evolution of measures along the curve. In this work, we first study the regularity that solutions of the Liouville equation should satisfy to guarantee the validity of this construction. We place an emphasis on understanding these issues, as they explain the difficulties associated with solutions that have been previously reported. After ensuring that the flow transport problem is well defined, we give a constructive solution. However, this result is only formal, as the representation is given in terms of integrals which are intractable. For computational tractability, we propose a novel approximation of the PDE which yields an ODE whose drift depends on the full conditional distributions of the intermediate distributions. Even when the ODE is time-discretized and the full conditional distributions are approximated numerically, the resulting distribution of mapped samples can be evaluated and used as a proposal within Markov chain Monte Carlo and sequential Monte Carlo (SMC) schemes. We then illustrate experimentally that the resulting algorithm can outperform state-of-the-art SMC methods at a fixed computational complexity. The second project aims to exploit ideas from optimal control to design more efficient SMC methods. The key idea is to control the proposal distribution induced by time-discretized Langevin dynamics so as to minimize the Kullback-Leibler divergence of the extended target distribution from the proposal. The optimal value functions of the resulting optimal control problem can then be approximated using algorithms developed in the approximate dynamic programming (ADP) literature. We introduce a novel iterative scheme to perform ADP, provide a theoretical analysis of the proposed algorithm, and demonstrate that the latter can provide significant gains over state-of-the-art methods at a fixed computational complexity.
8

Vo, Brenda. "Novel likelihood-free Bayesian parameter estimation methods for stochastic models of collective cell spreading." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/99588/1/Brenda_Vo_Thesis.pdf.

Abstract:
Biological processes underlying skin cancer growth and wound healing are governed by various collective cell spreading mechanisms. Using experimental data, this thesis develops new statistical methods to provide key insights into the mechanisms driving the spread of cell populations, such as motility, proliferation, and cell-to-cell adhesion. The new methods allow us to precisely estimate the parameters of such mechanisms, quantify the associated uncertainty, and investigate how these mechanisms are influenced by various factors. The thesis provides a useful tool to measure the efficacy of medical treatments that aim to influence the spread of cell populations.
9

Cao, Phuong Thao. "Approximation of OLAP queries on data warehouses." PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00905292.

Abstract:
We study approximate answers to OLAP queries on data warehouses. We consider the relative answers to OLAP queries on a schema as distributions with the L1 distance, and approximate the answers without storing the entire data warehouse. We first introduce three specific methods: uniform sampling, measure-based sampling, and the statistical model. We also introduce an edit distance between data warehouses, with edit operations adapted to data warehouses. Then, in the OLAP data exchange setting, we study how to sample each source and combine the samples to approximate any OLAP query. We next consider a streaming context, where a data warehouse is built from streams of different sources. We show a lower bound on the size of the memory necessary to approximate queries; in this case, we approximate OLAP queries with a finite memory. We also describe a method to discover statistical dependencies, a new notion we introduce, searching for them with decision trees. We apply the method to two data warehouses. The first simulates sensor data, providing weather parameters over time and location from different sources. The second is a collection of RSS feeds from websites on the Internet.
10

Sedki, Mohammed Amechtoh. "Échantillonnage préférentiel adaptatif et méthodes bayésiennes approchées appliquées à la génétique des populations." Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20041/document.

Abstract:
This thesis consists of two parts which can be read independently. The first part concerns the Adaptive Multiple Importance Sampling (AMIS) algorithm of Cornuet et al. (2012), which provides a significant improvement in stability and effective sample size due to the introduction of a recycling procedure. These numerical properties are particularly well suited to the Bayesian paradigm in population genetics, where the modelling involves a large number of parameters. However, the consistency of the AMIS estimator remains largely open. In this work, we provide a novel Adaptive Multiple Importance Sampling scheme, corresponding to a slight modification of the proposal of Cornuet et al. (2012), that preserves the above-mentioned improvements; on simulations, its numerical qualities are identical to those of the original scheme. Finally, using limit theorems on triangular arrays of conditionally independent random variables, we give a consistency result for the final particle system returned by our new scheme. The second part of this thesis lies within the Approximate Bayesian Computation (ABC) paradigm. ABC has been successfully used in population genetics models to bypass the calculation of the likelihood. These algorithms provide an accurate estimator by comparing the observed dataset to a sample of datasets simulated from the model. Although parallelization is easily achieved, the computation times needed to assure a suitable approximation quality of the posterior distribution are still long. To alleviate this issue, we propose a sequential algorithm adapted from Del Moral et al. (2012), optimized in the number of simulations from the model and equipped with a self-calibrated mechanism for choosing acceptance levels, which runs twice as fast as traditional ABC algorithms. We apply our algorithm to infer the parameters of a real, complex evolutionary scenario in population genetics.
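The ABC idea that the thesis accelerates can be shown with plain rejection sampling on a toy model where only forward simulation is available. This is a deliberately simple sketch; the contribution described above is a sequential ABC-SMC scheme with self-calibrated acceptance levels.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy model: n i.i.d. Gaussian draws; the summary statistic is the mean.
observed = rng.normal(loc=2.0, scale=1.0, size=100)
s_obs = observed.mean()

def simulate(theta, n=100):
    """Simulate data from the model: the only access to the likelihood."""
    return rng.normal(loc=theta, scale=1.0, size=n).mean()

# Basic ABC rejection: keep prior draws whose simulated summary lands
# within epsilon of the observed one.
epsilon, accepted = 0.1, []
for _ in range(50_000):
    theta = rng.uniform(-5.0, 5.0)                  # draw from a flat prior
    if abs(simulate(theta) - s_obs) <= epsilon:
        accepted.append(theta)

post = np.array(accepted)
print(f"accepted {post.size} draws; posterior mean ~ {post.mean():.2f}")
```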

Books on the topic "Approximate sampling"

1

Dattalo, Patrick. Strategies to Approximate Random Sampling and Assignment. New York: Oxford University Press, 2010.

2

Dattalo, Patrick. Strategies to Approximate Random Sampling and Assignment. Oxford University Press, 2009.

3

Rajeev, S. G. Spectral Methods. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198805021.003.0013.

Abstract:
The numerical solution of ordinary differential equations (ODEs) with boundary conditions is studied here. Functions are approximated by polynomials in a Chebyshev basis. Sections then cover spectral discretization, sampling, interpolation, differentiation, integration, and the basic ODE. Following Trefethen et al., differential operators are approximated as rectangular matrices. Boundary conditions add additional rows that turn them into square matrices. These can then be diagonalized using standard linear algebra methods. After studying various simple model problems, this method is applied to the Orr–Sommerfeld equation, deriving results originally due to Orszag. The difficulties of pushing spectral methods to higher dimensions are outlined.
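The differentiation-matrix machinery the chapter builds on is easiest to see in Trefethen's classic Chebyshev collocation matrix, sketched below in Python. Boundary-condition rows, as described above, would then augment or replace rows of D; this sketch only demonstrates spectral differentiation accuracy.

```python
import numpy as np

def cheb(N):
    """Chebyshev points and differentiation matrix on [-1, 1],
    following Trefethen's 'Spectral Methods in MATLAB'."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # Chebyshev points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal: negative row sums
    return D, x

# Spectral differentiation of u(x) = exp(x) * sin(5x): D @ u approximates u'
D, x = cheb(16)
u = np.exp(x) * np.sin(5 * x)
du_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
print(f"max error = {np.abs(D @ u - du_exact).max():.2e}")    # spectrally small
```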

Book chapters on the topic "Approximate sampling"

1

Bubley, Russ. "Techniques for Sampling and Approximate Sampling." In Randomized Algorithms: Approximation, Generation and Counting, 13–28. London: Springer London, 2001. http://dx.doi.org/10.1007/978-1-4471-0695-1_2.

2

Dodson, M. M. "Abstract Exact and Approximate Sampling Theorems." In New Perspectives on Approximation and Sampling Theory, 1–21. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-08801-3_1.

3

Park, Laurence A. F. "Fast Approximate Text Document Clustering Using Compressive Sampling." In Machine Learning and Knowledge Discovery in Databases, 565–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23783-6_36.

4

Xiao, Xingxing, and Jianzhong Li. "Sampling-Based Approximate Skyline Calculation on Big Data." In Combinatorial Optimization and Applications, 32–46. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64843-5_3.

5

Fu, Bin, Wenfeng Li, and Zhiyong Peng. "Sublinear Time Approximate Sum via Uniform Random Sampling." In Lecture Notes in Computer Science, 713–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38768-5_64.

6

Textor, Johannes. "Efficient Negative Selection Algorithms by Sampling and Approximate Counting." In Lecture Notes in Computer Science, 32–41. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32937-1_4.

7

Dimitrakakis, Christos, and Michail G. Lagoudakis. "Algorithms and Bounds for Rollout Sampling Approximate Policy Iteration." In Lecture Notes in Computer Science, 27–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-89722-4_3.

8

Sankowski, Piotr. "Multisampling: A New Approach to Uniform Sampling and Approximate Counting." In Algorithms - ESA 2003, 740–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39658-1_66.

9

Yang, Jiaoyun, Junda Wang, Wenjuan Cheng, and Lian Li. "Sampling to Maintain Approximate Probability Distribution Under Chi-Square Test." In Communications in Computer and Information Science, 29–45. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-0105-0_3.

10

Zhu, Junpeng, Hui Li, Mei Chen, Zhenyu Dai, and Ming Zhu. "Enhancing Stratified Graph Sampling Algorithms Based on Approximate Degree Distribution." In Advances in Intelligent Systems and Computing, 197–207. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91189-2_20.


Conference papers on the topic "Approximate sampling"

1

Kumar, Sanjiv, Mehryar Mohri, and Ameet Talwalkar. "On sampling-based approximate spectral decomposition." In Proceedings of the 26th Annual International Conference on Machine Learning. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1553374.1553446.

2

Huber, Mark. "Exact sampling and approximate counting techniques." In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing. New York, New York, USA: ACM Press, 1998. http://dx.doi.org/10.1145/276698.276709.

3

Liu, Hong, Zhenhua Sang, and Sameer Karali. "Approximate Quality Assessment with Sampling Approaches." In 2019 International Conference on Computational Science and Computational Intelligence (CSCI). IEEE, 2019. http://dx.doi.org/10.1109/csci49370.2019.00244.

4

Ahmed, Nesreen, Nick Duffield, and Liangzhen Xia. "Sampling for Approximate Bipartite Network Projection." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/456.

Abstract:
Bipartite graphs manifest as a stream of edges that represent transactions, e.g., purchases by retail customers. Recommender systems employ neighborhood-based measures of node similarity, such as the pairwise number of common neighbors (CN) and related metrics. While the number of node pairs that share neighbors is potentially enormous, only a relatively small proportion of them have many common neighbors. This motivates finding a weighted sampling approach to preferentially sample these node pairs. This paper presents a new sampling algorithm that provides a fixed size unbiased estimate of the similarity matrix resulting from a bipartite edge stream projection. The algorithm has two components. First, it maintains a reservoir of sampled bipartite edges with sampling weights that favor selection of high similarity nodes. Second, arriving edges generate a stream of similarity updates, based on their adjacency with the current sample. These updates are aggregated in a second reservoir sample-based stream aggregator to yield the final unbiased estimate. Experiments on real world graphs show that a 10% sample at each stage yields estimates of high similarity edges with weighted relative errors of about 1%.
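A minimal sketch of the weight-sensitive reservoir that the first component relies on is Efraimidis-Spirakis A-Res sampling, shown below in Python with a hypothetical edge stream. The paper's algorithm additionally aggregates the arriving similarity updates in a second reservoir-based stream aggregator, which is omitted here.

```python
import heapq
import random

def weighted_reservoir(stream, k):
    """Efraimidis-Spirakis A-Res: a fixed-size reservoir in which items
    are selected with probability proportional to their weights.
    Each item gets key u ** (1 / w) with u ~ Uniform(0, 1); keep the
    k largest keys in a min-heap."""
    heap = []                                   # min-heap of (key, item)
    for item, w in stream:
        key = random.random() ** (1.0 / w)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

random.seed(2)
# Hypothetical edge stream: (edge id, weight favoring high-similarity pairs)
stream = ((f"e{i}", 1.0 + (i % 10)) for i in range(100_000))
sample = weighted_reservoir(stream, k=1000)
print(sample[:5])
```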
5

Atkeson, Christopher G. "Randomly Sampling Actions In Dynamic Programming." In 2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning. IEEE, 2007. http://dx.doi.org/10.1109/adprl.2007.368187.

6

Graham, Rishi, and Jorge Cortes. "Cooperative adaptive sampling via approximate entropy maximization." In 2009 Joint 48th IEEE Conference on Decision and Control (CDC) and 28th Chinese Control Conference (CCC 2009). IEEE, 2009. http://dx.doi.org/10.1109/cdc.2009.5400511.

7

Kim, Hyun-Chul, Kyu-Hwan Jung, and Jaewook Lee. "Approximate Sampling Method for Locally Linear Embedding." In 2007 International Joint Conference on Neural Networks. IEEE, 2007. http://dx.doi.org/10.1109/ijcnn.2007.4371023.

8

Li, Lening, and Jie Fu. "Sampling-based approximate optimal temporal logic planning." In 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017. http://dx.doi.org/10.1109/icra.2017.7989157.

9

Cervellera, Cristiano, Mauro Gaggero, Danilo Maccio, and Roberto Marcialis. "Quasi-random sampling for approximate dynamic programming." In 2013 International Joint Conference on Neural Networks (IJCNN 2013 - Dallas). IEEE, 2013. http://dx.doi.org/10.1109/ijcnn.2013.6707065.

10

Lin, Juan K. "Approximate Inference based on Convex Set Sampling." In Bayesian Inference and Maximum Entropy Methods in Science and Engineering: 23rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. AIP, 2004. http://dx.doi.org/10.1063/1.1751360.


Reports on the topic "Approximate sampling"

1

Shiihi, Solomon, U. G. Okafor, Zita Ekeocha, Stephen Robert Byrn, and Kari L. Clase. Improving the Outcome of GMP Inspections by Improving Proficiency of Inspectors through Consistent GMP Trainings. Purdue University, November 2021. http://dx.doi.org/10.5703/1288284317433.

Abstract:
Approximately 90% of the pharmaceutical inspectors in a pharmacy practice regulatory agency in West Africa have not updated their training on Good Manufacturing Practice (GMP) inspection in at least eight years; over the last two years the inspectors have relied on learning-on-the-job skills. During this time, the agency introduced about 17% of its inspectors to hands-on GMP trainings. GMP is the part of quality assurance that ensures the production or manufacture of medicinal products is consistent, in order to control the quality standards appropriate for their intended use as required by the specification of the product. Inspection reports on the Agency's GMP inspection format between 2013 and 2019, across the six geopolitical zones in the country, were reviewed retrospectively for gap analysis. Sampling was done in two phases. During the first phase, reports were sampled by random selection, using a stratified sampling method. In the second phase, inspectors from the Regulatory Agency from different regions were contacted by phone and asked to send in four reports each by email. For those who forwarded four reports, two were selected; for those who forwarded one or two, all were considered. The Agency's inspection format/checklist was also compared with the World Health Organization (WHO) GMP checklist and the GMP practice observed. The purpose of this study was to evaluate the reporting skills and the ability of inspectors to interpret findings vis-à-vis their proficiency in inspection activities, and hence the efficiency of the system. Secondly, the study sought to establish shortfalls or adequacies of the Agency's checklist, with the aim of reviewing and improving it in line with best global practices. It was observed that different inspectors have different styles and methods of writing reports from the same checklist/inspection format, leading to non-conformances, and that interpretations of findings were subjective. However, inspection reports from the few inspectors with hands-on training in the last two years were more coherent. This indicates that pharmaceutical inspectors need to be trained regularly to increase their knowledge and skills and to keep them at the same pace. It was also observed that there is a slight deviation in placing sub-indicators under the GMP components in the Agency's GMP inspection format, as compared to the WHO checklist.
2

Kull, Kathleen, Craig Young, Jennifer Haack-Gaynor, Lloyd Morrison, and Michael DeBacker. Problematic plant monitoring protocol for the Heartland Inventory and Monitoring Network: Narrative, version 2.0. National Park Service, May 2022. http://dx.doi.org/10.36967/nrr-2293355.

Abstract:
Problematic species, which include invasive, exotic, and harmful species, fragment native ecosystems, displace native plants and animals, and alter ecosystem function. In National Parks, such species negatively affect park resources and visitor enjoyment by altering landscapes and fire regimes, reducing native plant and animal habitat, and increasing trail maintenance needs. Recognizing these challenges, Heartland Inventory and Monitoring (I&M) Network parks identified problematic plants as the highest-ranking vital sign across the network. Given the need to provide early detection of potential problematic plants (ProPs) and the size of network parks, the Heartland I&M Network opted to allocate available sampling effort to maximize the area searched. With this approach and the available sampling effort in mind, we developed realistic objectives for the ProP monitoring protocol. The monitoring objectives are:
1. Create a watch list of ProPs known to occur in network parks and a watch list of potential ProPs that may invade network parks in the future, and occasionally update these two lists as new information becomes available.
2. Provide early detection monitoring for all ProPs on the watch lists.
3. Search at least 0.75% and up to 40% of the reference frame for ProP occurrences in each park.
4. Estimate/calculate and report the abundance and frequency of ProPs in each park.
5. To the extent possible, identify temporal changes in the distribution and abundance of ProPs known to occur in network parks.
ProP watch lists are developed using the best available and most relevant state, regional, and national exotic plant lists; the lists are generated using the PriorityDB database. We designed the park reference frames (i.e., the areas to be monitored) to focus on accessible natural and restored areas. The field methods vary for small parks and large parks, defined as parks with reference frames less than and greater than 350 acres (142 ha), respectively. For small parks, surveyors make three equidistant passes through polygon search units that are approximately 2 acres (0.8 ha) in size. For large parks, surveyors record each ProP encountered along 200-m or 400-m line search units. The cover of each ProP taxon encountered in a search unit is estimated using the following scale: 0 = 0 m²; 1 = 0.1–0.9 m²; 2 = 1–9.9 m²; 3 = 10–49.9 m²; 4 = 50–99.9 m²; 5 = 100–499.9 m²; 6 = 500–999.9 m²; 7 = 1,000–4,999.9 m². The field data are managed in the FieldDB database. Monitoring is scheduled to revisit most parks every four years. The network will report the results to park managers and superintendents after completing ProP monitoring.
3

Evans, Julie, Kendra Sikes, and Jamie Ratchford. Vegetation classification at Lake Mead National Recreation Area, Mojave National Preserve, Castle Mountains National Monument, and Death Valley National Park: Final report (Revised with Cost Estimate). National Park Service, October 2020. http://dx.doi.org/10.36967/nrr-2279201.

Abstract:
Vegetation inventory and mapping is a process to document the composition, distribution and abundance of vegetation types across the landscape. The National Park Service’s (NPS) Inventory and Monitoring (I&M) program has determined vegetation inventory and mapping to be an important resource for parks; it is one of 12 baseline inventories of natural resources to be completed for all 270 national parks within the NPS I&M program. The Mojave Desert Network Inventory & Monitoring (MOJN I&M) began its process of vegetation inventory in 2009 for four park units as follows: Lake Mead National Recreation Area (LAKE), Mojave National Preserve (MOJA), Castle Mountains National Monument (CAMO), and Death Valley National Park (DEVA). Mapping is a multi-step and multi-year process involving skills and interactions of several parties, including NPS, with a field ecology team, a classification team, and a mapping team. This process allows for compiling existing vegetation data, collecting new data to fill in gaps, and analyzing the data to develop a classification that then informs the mapping. The final products of this process include a vegetation classification, ecological descriptions and field keys of the vegetation types, and geospatial vegetation maps based on the classification. In this report, we present the narrative and results of the sampling and classification effort. In three other associated reports (Evens et al. 2020a, 2020b, 2020c) are the ecological descriptions and field keys. The resulting products of the vegetation mapping efforts are, or will be, presented in separate reports: mapping at LAKE was completed in 2016, mapping at MOJA and CAMO will be completed in 2020, and mapping at DEVA will occur in 2021. The California Native Plant Society (CNPS) and NatureServe, the classification team, have completed the vegetation classification for these four park units, with field keys and descriptions of the vegetation types developed at the alliance level per the U.S. National Vegetation Classification (USNVC). We have compiled approximately 9,000 existing and new vegetation data records into digital databases in Microsoft Access. The resulting classification and descriptions include approximately 105 alliances and landform types, and over 240 associations. CNPS also has assisted the mapping teams during map reconnaissance visits, follow-up on interpreting vegetation patterns, and general support for the geospatial vegetation maps being produced. A variety of alliances and associations occur in the four park units. Per park, the classification represents approximately 50 alliances at LAKE, 65 at MOJA and CAMO, and 85 at DEVA. Several riparian alliances or associations that are somewhat rare (ranked globally as G3) include shrublands of Pluchea sericea, meadow associations with Distichlis spicata and Juncus cooperi, and woodland associations of Salix laevigata and Prosopis pubescens along playas, streams, and springs. Other rare to somewhat rare types (G2 to G3) include shrubland stands with Eriogonum heermannii, Buddleja utahensis, Mortonia utahensis, and Salvia funerea on rocky calcareous slopes that occur sporadically in LAKE to MOJA and DEVA. Types that are globally rare (G1) include the associations of Swallenia alexandrae on sand dunes and Hecastocleis shockleyi on rocky calcareous slopes in DEVA. 
Two USNVC vegetation groups hold the highest number of alliances: 1) Warm Semi-Desert Shrub & Herb Dry Wash & Colluvial Slope Group (G541) has nine alliances, and 2) Mojave Mid-Elevation Mixed Desert Scrub Group (G296) has thirteen alliances. These two groups contribute significantly to the diversity of vegetation along alluvial washes and mid-elevation transition zones.
4

Ray, Laura, Madeleine Jordan, Steven Arcone, Lynn Kaluzienski, Benjamin Walker, Peter Ortquist Koons, James Lever, and Gordon Hamilton. Velocity field in the McMurdo shear zone from annual ground penetrating radar imaging and crevasse matching. Engineer Research and Development Center (U.S.), December 2021. http://dx.doi.org/10.21079/11681/42623.

Abstract:
The McMurdo shear zone (MSZ) is a strip of heavily crevassed ice oriented in the south-north direction and moving northward. Previous airborne surveys revealed a chaotic crevasse structure superimposed on a set of expected crevasse orientations at 45 degrees to the south-north flow (due to shear stress mechanisms). The dynamics that produced this chaotic structure are poorly understood. Our purpose is to present our field methodology and provide field data that will enable validation of models of the MSZ's evolution, and here we present a method for deriving a local velocity field from ground penetrating radar (GPR) data towards that end. Maps of near-surface crevasses were derived from two annual GPR surveys of a 28 km² region of the MSZ using Eulerian sampling. Our robot-towed, GPS-navigated GPR enabled a dense survey grid, with transects of the shear zone at 50 m spacing. Each survey comprised multiple crossings of long (> 1 km) crevasses that appear en echelon on the western and eastern boundaries of the shear zone, as well as two or more crossings of shorter crevasses in the more chaotic zone between the western and eastern boundaries. From these maps, we derived a local velocity field based on the year-to-year movement of the same crevasses. Our velocity field varies significantly from fields previously established using remote sensing and provides more detail than one concurrently derived from a 29-station GPS network. Rather than the simple velocity gradient expected for crevasses oriented approximately 45 degrees to the flow direction, we find constant-velocity contours oriented diagonally across the shear zone with a wavy fine structure. Although our survey is based on near-surface crevasses, similar crevassing found in marine ice at 160 m depth leads us to conclude that this surface velocity field may hold through the body of meteoric and marine ice. Our success with robot-towed GPR with GPS navigation suggests we may greatly increase our survey areas.
5

Jorgensen, Frieda, John Rodgers, Daisy Duncan, Joanna Lawes, Charles Byrne, and Craig Swift. Levels and trends of antimicrobial resistance in Campylobacter spp. from chicken in the UK. Food Standards Agency, September 2022. http://dx.doi.org/10.46756/sci.fsa.dud728.

Abstract:
Campylobacter spp. are the most common bacterial cause of foodborne illness in the UK, with chicken considered to be the most important vehicle of transmission for this organism. It is estimated that there are 500,000 cases of campylobacteriosis in the UK annually, with Campylobacter jejuni (C. jejuni) and Campylobacter coli (C. coli) accounting for approximately 91% and 8% of infections, respectively. Severe infection in humans is uncommon and treatment is seldom needed, but when required it usually involves the administration of a macrolide (e.g., azithromycin) or a fluoroquinolone (e.g., ciprofloxacin). An increased rate of resistance to such antimicrobials in Campylobacter from chicken could limit effective treatment options for human infections, so it is important to monitor changes in rates of resistance over time. In this report we analysed trends in antimicrobial resistance (AMR) in C. jejuni and C. coli isolated from chicken in the UK. The chicken samples were from chicken reared for meat (i.e., broiler chicken, as opposed to layer chicken reared for egg production) and included chicken sampled at slaughterhouses as well as from retail stores in the UK. Datasets included AMR results from retail surveys of Campylobacter spp. on chicken sampled in the UK in various projects over the period from 2001 to 2020. In the retail surveys, samples were obtained from stores, including major and minor retail stores, throughout the UK (in proportion to the population size of each nation), and Campylobacter spp. testing was performed using standard methods, with the majority of isolates obtained by direct culture on standard media (mCCDA). Data from national-scale surveys of broiler chicken, sampling caecal contents and carcase neck skins at slaughterhouses, undertaken by APHA in 2007/2008 and between 2012 and 2018, were also included in the study. In the APHA-led surveys, Campylobacter were isolated using standard culture methods (culture onto mCCDA), and antimicrobial susceptibility testing was performed by a standard microbroth dilution method to determine the minimum inhibitory concentration (MIC) of isolates. Care was taken when comparing data from different studies, as in a small number of scenarios there had been changes to the threshold used to determine whether an isolate was susceptible or resistant to an antimicrobial. Harmonised thresholds (using epidemiological cut-off (ECOFF) values) were employed to assess AMR, with appropriate adjustments made where required to allow meaningful comparisons of resistance prevalence over time. Data from additional isolates, where resistance to antimicrobials was predicted from genome sequence data, were also considered.
6

Wozniakowska, P., D. W. Eaton, C. Deblonde, A. Mort, and O. H. Ardakani. Identification of regional structural corridors in the Montney play using trend surface analysis combined with geophysical imaging, British Columbia and Alberta. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/328850.

Abstract:
The Western Canada Sedimentary Basin (WCSB) is a mature oil and gas basin with an extraordinary endowment of publicly accessible data. It contains structural elements of varying age, expressed as folding, faulting, and fracturing, which provide a record of tectonic activity during basin evolution. Knowledge of the structural architecture of the basin is crucial to understanding its tectonic evolution; it also provides essential input for a range of geoscientific studies, including hydrogeology, geomechanics, and seismic risk analysis. This study focuses on an area defined by the subsurface extent of the Triassic Montney Formation, a region of the WCSB straddling the border between Alberta and British Columbia and covering an area of approximately 130,000 km². In terms of regional structural elements, this area is roughly bisected by the east-west trending Dawson Creek Graben Complex (DCGC), which initially formed in the Late Carboniferous, and is bordered to the southwest by the Late Cretaceous - Paleocene Rocky Mountain thrust and fold belt (TFB). The structural geology of this region has been extensively studied, but structural elements compiled from previous studies exhibit inconsistencies arising from the distinct subregions investigated, differences in the interpreted locations of faults, and inconsistent terminology. Moreover, in cases where faults are mapped based on unpublished proprietary data, many existing interpretations suffer from a lack of reproducibility. In this study, publicly accessible data - formation tops derived from well logs, LITHOPROBE seismic profiles, and regional potential-field grids - are used to delineate regional structural elements. Where seismic profiles cross key structural features, these features are generally expressed as multi-stranded or en echelon faults and structurally-linked folds, rather than discrete faults. Furthermore, even in areas of relatively tight well control, individual fault structures cannot be discerned in a robust manner, because the spatial sampling is insufficient to resolve fault strands. We have therefore adopted a structural-corridor approach, where structural corridors are defined as laterally continuous trends, identified using geological trend surface analysis supported by geophysical data, that contain co-genetic faults and folds. Such structural trends have been documented in laboratory models of basement-involved faults, and some types of structural corridors have been described as flower structures. The distinction between discrete faults and structural corridors is particularly important for induced seismicity risk analysis, as the hazard posed by a single large structure differs from the hazard presented by a corridor of smaller pre-existing faults. We have implemented a workflow that uses trend surface analysis based on formation tops, with extensive quality control, combined with validation using available geophysical data. Seven formations are considered, from the Late Cretaceous Basal Fish Scale Zone (BFSZ) to the Wabamun Group. This approach helped to resolve the problem of the limited spatial extent of available seismic data and provided broader spatial coverage, enabling the investigation of structural trends throughout the entirety of the Montney play. In total, we identified 34 major structural corridors and a number of smaller-scale structures, for which a GIS shapefile is included as a digital supplement to facilitate use of these features in other studies.
Our study also outlines two buried regional foreland lobes of the Rocky Mountain TFB, both north and south of the DCGC.
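As an illustration of the trend-surface workflow this abstract describes, the following is a minimal sketch assuming synthetic well locations and formation-top elevations; the study's actual data, polynomial order, and quality-control steps are not reproduced here. A low-order surface is fit by least squares, and coherent residual anomalies are candidates for structural trends.

```python
# A minimal sketch of trend surface analysis on formation tops, assuming
# synthetic data: fit a low-order polynomial surface z(x, y) by least
# squares, then inspect residuals for laterally continuous anomalies.
import numpy as np

def fit_trend_surface(x, y, z, order=2):
    """Least-squares polynomial trend surface; returns coefficients and residuals."""
    # Design matrix with all monomials x**i * y**j for i + j <= order.
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs, z - A @ coeffs

# Hypothetical well picks: a gently dipping top with a 25 m fault-like step.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 500), rng.uniform(0, 100, 500)
z = -1500.0 - 2.0 * x - 0.5 * y + 25.0 * (x > 60) + rng.normal(0, 1, 500)
_, resid = fit_trend_surface(x, y, z, order=2)
# Coherent residual anomalies aligned along a trend would be candidates
# for a structural corridor; here they cluster where x > 60.
print(f"residual std: {resid.std():.1f} m")
```

In this toy case the quadratic trend cannot absorb the step, so the fault signature survives in the residual map, which is the property the corridor-mapping approach exploits.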
APA, Harvard, Vancouver, ISO, and other styles
7

McCarthy, Noel, Eileen Taylor, Martin Maiden, Alison Cody, Melissa Jansen van Rensburg, Margaret Varga, Sophie Hedges, et al. Enhanced molecular-based (MLST/whole genome) surveillance and source attribution of Campylobacter infections in the UK. Food Standards Agency, July 2021. http://dx.doi.org/10.46756/sci.fsa.ksj135.

Full text
Abstract:
This human campylobacteriosis sentinel surveillance project was based at two sites in Oxfordshire and North East England, chosen (i) to be representative of the English population on the Office for National Statistics urban-rural classification and (ii) to provide continuity with genetic surveillance started in Oxfordshire in October 2003. Between October 2015 and September 2018, epidemiological questionnaires and genome sequencing of isolates from human cases were accompanied by sampling and genome sequencing of isolates from possible food animal sources. The principal aim was to estimate the contributions of the main sources of human infection and to identify any changes over time. An extension to the project focussed on antimicrobial resistance in study isolates and older archived isolates. These older isolates were from earlier years at the Oxfordshire site and the earliest available coherent set of isolates from the national archive at Public Health England (1997/8). The aim of this additional work was to analyse the emergence of the antimicrobial resistance that is now present among human isolates and to describe and compare antimicrobial resistance in recent food animal isolates. Having identified bias in population genetic attribution that was not addressed in the published literature, this study developed an approach to adjust for that bias, as well as an alternative approach to attribution using sentinel types. Using these approaches, the study estimated that approximately 70% of Campylobacter jejuni infections and just under 50% of C. coli infections in our sample were linked to the chicken source, and that this was relatively stable over time. Ruminants were identified as the second most common source for C. jejuni and the most common for C. coli, where there was also some evidence for pig as a source, although less common than ruminant or chicken. These genomic attributions in themselves make no inference about routes of transmission. However, those infected with isolates genetically typical of chicken origin were substantially more likely to have eaten chicken than those infected with ruminant types. Consumption of lamb’s liver was very strongly associated with infection by a strain genetically typical of a ruminant source. These findings support consumption of these foods as being important in the transmission of these infections and highlight a potentially important role for lamb’s liver consumption as a source of Campylobacter infection. Antimicrobial resistance was predicted from genomic data using a pipeline validated by Public Health England and using BIGSdb software. In C. jejuni this showed a nine-fold increase in resistance to fluoroquinolones from 1997 to 2018. Tetracycline resistance was also common, with higher initial resistance (1997) and less substantial change over time. Resistance to aminoglycosides or macrolides remained low in human cases across all time periods. Among C. jejuni food animal isolates, fluoroquinolone resistance was common among isolates from chicken and substantially less common among ruminants, ducks or pigs. Tetracycline resistance was common across chicken, duck and pig but lower among ruminant-origin isolates. In C. coli, resistance to all four antimicrobial classes rose from low levels in 1997.
The fluoroquinolone rise appears to have levelled off earlier, and among animals, levels are high in duck as well as chicken isolates, although these are based on small sample sizes. Macrolide and aminoglycoside resistance was substantially higher than for C. jejuni among humans and highest among pig-origin isolates. Tetracycline resistance is high in isolates from pigs and in the very small sample from ducks. Antibiotic use following diagnosis was relatively high (43.4%) among respondents in the human surveillance study. Moreover, it varied substantially across sites and was highest among non-elderly adults compared to older adults or children, suggesting opportunities for improved antimicrobial stewardship. The study also found evidence for stable lineages over time across human and source animal species, as well as some tighter genomic clusters that may represent outbreaks. The genomic dataset will allow extensive further work beyond the specific goals of the study. This has been made accessible on the web, with access supported by data visualisation tools.
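The abstract mentions an alternative attribution approach based on sentinel types. The sketch below is one hedged reading of that idea, not the study's published method: sequence types (STs) observed in exactly one animal source act as sentinels, and human isolates carrying a sentinel ST are attributed to that source. All data, names, and the proportional summary are illustrative assumptions.

```python
# A minimal sketch of sentinel-type attribution under the assumptions above.
from collections import Counter, defaultdict

def sentinel_attribution(source_isolates, human_sts):
    """source_isolates: list of (sequence_type, source); human_sts: list of STs."""
    sources_per_st = defaultdict(set)
    for st, source in source_isolates:
        sources_per_st[st].add(source)
    # A sentinel ST is one seen in exactly one source group.
    sentinel = {st: next(iter(s)) for st, s in sources_per_st.items() if len(s) == 1}
    attributed = Counter(sentinel[st] for st in human_sts if st in sentinel)
    total = sum(attributed.values())
    return {src: n / total for src, n in attributed.items()} if total else {}

sources = [(21, "chicken"), (21, "chicken"), (61, "ruminant"), (828, "pig"),
           (45, "chicken"), (45, "ruminant")]  # ST 45 occurs in two sources
humans = [21, 21, 61, 21, 828, 45, 99]         # 45 and 99 cannot be attributed
print(sentinel_attribution(sources, humans))   # {'chicken': 0.6, 'ruminant': 0.2, 'pig': 0.2}
```

A real analysis would work from many thousands of genomes and would need the bias adjustment the authors describe; this sketch only shows the counting logic.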
APA, Harvard, Vancouver, ISO, and other styles
8

Postabortion case load study in Egyptian public sector hospitals. Population Council, 1997. http://dx.doi.org/10.31899/rh1997.1016.

Full text
Abstract:
There is an absence of reliable data on the incidence of incomplete abortion in Egypt. A diagnostic, descriptive study that neither tests an experimental intervention nor comprehensively evaluates the quality of postabortion medical care was undertaken to address this issue. The study is a cross-sectional observation of the volume and nature of the postabortion case load in Egyptian public-sector hospitals, and it responds to the following objectives: 1) Accurately estimate the number of women who present for postabortion treatment in ob/gyn in-patient facilities as a percentage of ob/gyn admissions in a representative sample of Egyptian public-sector hospitals during one month; 2) Describe the medical and sociodemographic characteristics of the postabortion patients, including the cause(s) of the lost pregnancies, whether the pregnancy was wanted, the medical treatments received, and contraceptive-use history. As stated in this report, the study's sampling frame consists of the approximately 569 public-sector hospitals in Egypt. Approximately 15 percent of the hospitals were randomly selected, with the probability of selection proportionate to the average number of beds in each hospital, using standard sampling procedures.
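The final sentence describes probability-proportional-to-size (PPS) selection. As a minimal sketch of one standard procedure, systematic PPS, the code below selects roughly 15 percent of 569 hypothetical hospitals with probability proportional to bed count; the bed counts and exact sample size are assumptions, not study data.

```python
# A minimal sketch of systematic PPS sampling under the assumptions above.
import numpy as np

def systematic_pps(sizes, n_sample, rng):
    """Systematic PPS: cumulative sizes, random start, fixed skip interval."""
    sizes = np.asarray(sizes, dtype=float)
    cum = np.cumsum(sizes)
    interval = cum[-1] / n_sample
    start = rng.uniform(0, interval)
    picks = start + interval * np.arange(n_sample)
    # Each pick falls in one unit's cumulative-size segment; units larger
    # than the interval could be hit more than once (certainty units),
    # which this sketch does not handle specially.
    return np.searchsorted(cum, picks, side="right")

rng = np.random.default_rng(42)
beds = rng.integers(20, 400, size=569)               # hypothetical bed counts
sample = systematic_pps(beds, n_sample=85, rng=rng)  # 85 is ~15% of 569
print(len(sample), beds[sample].mean(), beds.mean()) # larger hospitals oversampled
```

The printed means illustrate the defining property of PPS designs: the sampled hospitals have a higher average bed count than the frame as a whole, which the estimation stage must later weight for.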
APA, Harvard, Vancouver, ISO, and other styles