Theses on the topic "ENUMERATIVE ANALYSIS"

To see other types of publications on this topic, follow the link: ENUMERATIVE ANALYSIS.

Create a correct reference in APA, MLA, Chicago, Harvard, and various other citation styles

Choose a source:

Consult the 37 best theses for your research on the topic "ENUMERATIVE ANALYSIS".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Carroll, Christina C. « Enumerative combinatorics of posets ». Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22659.

Full text
Abstract:
Thesis (Ph. D.)--Mathematics, Georgia Institute of Technology, 2008.
Committee Chair: Tetali, Prasad; Committee Member: Duke, Richard; Committee Member: Heitsch, Christine; Committee Member: Randall, Dana; Committee Member: Trotter, William T.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
2

Oliveira, Saullo Haniell Galvão de 1988. « On biclusters aggregation and its benefits for enumerative solutions = Agregação de biclusters e seus benefícios para soluções enumerativas ». [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259072.

Full text
Abstract:
Orientador: Fernando José Von Zuben
Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: Biclustering involves the simultaneous clustering of objects and their attributes, thus defining local models for the two-way relationship of objects and attributes. Just like clustering, biclustering has a broad set of applications, ranging from an advanced support for recommender systems of practical relevance to a decisive role in data mining techniques devoted to gene expression data analysis. Initially, heuristics have been proposed to find biclusters, and their main drawbacks are the possibility of losing some existing biclusters and the incapability of maximizing the volume of the obtained biclusters. Recently, efficient algorithms were conceived to enumerate all the biclusters, particularly in numerical datasets, so that they compose a complete set of maximal and non-redundant biclusters. However, the ability to enumerate biclusters revealed a challenging scenario: in noisy datasets, each true bicluster becomes highly fragmented and with a high degree of overlapping, thus preventing a direct analysis of the obtained results. Fragmentation will happen no matter the boundary condition adopted to specify the internal coherence of the valid biclusters, though the degree of fragmentation will be associated with the noise level. Aiming at reverting the fragmentation, we propose here two approaches for properly aggregating a set of biclusters exhibiting a high degree of overlapping: one based on single linkage and the other directly exploring the rate of overlapping. A pruning step is then employed to filter intruder objects and/or attributes that were added as a side effect of aggregation. Both proposals were compared with each other and also with the state of the art in several experiments, including real and artificial datasets. The two newly conceived aggregation mechanisms not only significantly reduced the number of biclusters, essentially defragmenting true biclusters, but also consistently increased the quality of the whole solution, measured in terms of precision and recall when the composition of the dataset is known a priori.
Master's degree
Computer Engineering
Master of Electrical Engineering
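The overlap-based aggregation described in the abstract, merging biclusters with a high overlap rate and then pruning, can be illustrated with a minimal sketch. This is an illustrative reading, not the thesis's implementation: biclusters are modelled as (row-set, column-set) pairs, overlap is taken as the Jaccard index of their cell sets, and the 0.5 threshold is an arbitrary choice.

```python
def overlap(b1, b2):
    """Jaccard overlap between two biclusters, each a (rows, cols) pair
    of sets; the cell set of a bicluster is the product rows x cols."""
    cells1 = {(i, j) for i in b1[0] for j in b1[1]}
    cells2 = {(i, j) for i in b2[0] for j in b2[1]}
    return len(cells1 & cells2) / len(cells1 | cells2)

def aggregate(biclusters, threshold=0.5):
    """Greedy single-linkage-style aggregation: repeatedly merge any two
    biclusters whose overlap rate reaches the threshold."""
    merged = [(set(r), set(c)) for r, c in biclusters]
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if overlap(merged[i], merged[j]) >= threshold:
                    merged[i] = (merged[i][0] | merged[j][0],
                                 merged[i][1] | merged[j][1])
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return merged

# Two fragments of one noisy bicluster, plus an unrelated one
fragments = [({1, 2}, {1, 2, 3}), ({1, 2}, {2, 3, 4}), ({7, 8}, {7, 8})]
result = aggregate(fragments, threshold=0.5)
```

The first two fragments share half of their cells and are merged into ({1, 2}, {1, 2, 3, 4}); the unrelated bicluster is left untouched. A pruning pass, omitted here, would then remove rows or columns whose coherence is poor in the merged result.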
3

Zinger, Aleksey 1975. « Enumerative algebraic geometry via techniques of symplectic topology and analysis of local obstructions ». Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8402.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 2002.
Includes bibliographical references (p. 239-240).
Enumerative geometry of algebraic varieties is a fascinating field of mathematics that dates back to the nineteenth century. We introduce new computational tools into this field that are motivated by recent progress in symplectic topology and its influence on enumerative geometry. The most straightforward applications of the methods developed are to enumeration of rational curves with a cusp of specified nature in projective spaces. A general approach for counting positive-genus curves with a fixed complex structure is also presented. The applications described include enumeration of rational curves with a (3,4)-cusp, genus-two and genus-three curves with a fixed complex structure in the two-dimensional complex projective space, and genus-two curves with a fixed complex structure in the three-dimensional complex projective space. Our constructions may be applicable to problems in symplectic topology as well.
by Aleksey Zinger.
Ph.D.
4

Fang, Wenjie. « Enumerative and bijective aspects of combinatorial maps : generalization, unification and application ». Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC312/document.

Full text
Abstract:
This thesis deals with the enumerative study of combinatorial maps and its application to the enumeration of other combinatorial objects. Combinatorial maps, or simply maps, form a rich combinatorial model. They have an intuitive and geometric definition, but are also related to some deep algebraic structures. For instance, a special type of maps called constellations provides a unifying framework for some enumeration problems concerning factorizations in the symmetric group. Standing on a position where many domains meet, maps can be studied using a large variety of methods, and their enumeration can also help us count other combinatorial objects. This thesis is a sampling from the rich results and connections in the enumeration of maps. This thesis is structured into four major parts. The first part, including Chapters 1 and 2, consists of an introduction to the enumerative study of maps. The second part, Chapters 3 and 4, contains my work on the enumeration of constellations, which are a special type of maps that can serve as a unifying model of some factorizations of the identity in the symmetric group. The third part, composed of Chapters 5 and 6, shows my research on the enumerative link from maps to other combinatorial objects, such as generalizations of the Tamari lattice and random graphs embeddable onto surfaces. The last part is the closing chapter, in which the thesis concludes with some perspectives and future directions in the enumerative study of maps.
5

Yang, Yingying. « An Application of Combinatorial Methods ». VCU Scholars Compass, 2005. http://scholarscompass.vcu.edu/etd/662.

Full text
Abstract:
Probability theory is a branch of mathematics concerned with determining the long-run frequency or chance that a given event will occur. This chance is determined by dividing the number of selected events by the number of total events possible, assuming these events are equally likely. Probability theory is simply enumerative combinatorial analysis when applied to finite sets. For a given finite sample space, probability questions are usually "just" a lot of counting. The purpose of this thesis is to provide some in-depth analysis of several combinatorial methods, including basic principles of counting, permutations and combinations, by specifically exploring one type of probability problem: given C equally likely possible elements and s independently sampled subjects, for r = 1, 2, 3, …, min(C, s), we want to know P(s subjects utilize exactly r distinct elements). This thesis gives a detailed step-by-step analysis of the techniques used to ultimately find a general formula that solves the above problem.
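The probability problem stated in the abstract is a classical occupancy computation; the following is a sketch using the standard inclusion-exclusion count of surjections, not necessarily the form of the general formula the thesis derives.

```python
from math import comb

def p_exactly_r_distinct(C: int, s: int, r: int) -> float:
    """P(s independent draws, each uniform over C equally likely
    elements, use exactly r distinct elements)."""
    # Surjections of s labelled draws onto r chosen elements,
    # by inclusion-exclusion: sum_k (-1)^k C(r,k) (r-k)^s
    surj = sum((-1) ** k * comb(r, k) * (r - k) ** s for k in range(r + 1))
    return comb(C, r) * surj / C ** s

# Example: rolling a fair die (C = 6) four times (s = 4)
probs = [p_exactly_r_distinct(6, 4, r) for r in range(1, 5)]
```

As a sanity check, the probabilities over all feasible r sum to 1, and the all-distinct case r = 4 gives 6·5·4·3 / 6^4 = 360/1296.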
6

Distler, Andreas. « Classification and enumeration of finite semigroups ». Thesis, St Andrews, 2010. http://hdl.handle.net/10023/945.

Full text
7

Melczer, Stephen. « Analytic Combinatorics in Several Variables : Effective Asymptotics and Lattice Path Enumeration ». Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEN013/document.

Full text
Abstract:
The field of analytic combinatorics, which studies the asymptotic behaviour of sequences through analytic properties of their generating functions, has led to the development of deep and powerful tools with applications across mathematics and the natural sciences. In addition to the now classical univariate theory, recent work in the study of analytic combinatorics in several variables (ACSV) has shown how to derive asymptotics for the coefficients of certain D-finite functions represented by diagonals of multivariate rational functions. This thesis examines the methods of ACSV from a computer algebra viewpoint, developing rigorous algorithms and giving the first complexity results in this area under conditions which are broadly satisfied. Furthermore, this thesis gives several new applications of ACSV to the enumeration of lattice walks restricted to certain regions. In addition to proving several open conjectures on the asymptotics of such walks, a detailed study of lattice walk models with weighted steps is undertaken.
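A tiny illustration of the diagonal-asymptotics setting that ACSV addresses (a textbook example, not drawn from the thesis): the diagonal coefficients of the rational function 1/(1 - x - y) are the central binomial coefficients, whose first-order asymptotics 4^n / sqrt(pi·n) the multivariate methods recover.

```python
from math import comb, pi, sqrt

# 1/(1 - x - y) = sum_k (x + y)^k, so the coefficient of x^n y^n
# (the diagonal) is the central binomial coefficient C(2n, n).
def diagonal_coeff(n: int) -> int:
    return comb(2 * n, n)

# Compare the exact coefficient with the first-order asymptotic
n = 50
exact = diagonal_coeff(n)
approx = 4 ** n / sqrt(pi * n)
ratio = exact / approx  # tends to 1 as n grows (error is O(1/n))
```

Already at n = 50 the first-order estimate is within about a quarter of a percent of the exact coefficient.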
8

Lladser, Manuel Eugenio. « Asymptotic enumeration via singularity analysis ». Connect to this title online, 2003. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1060976912.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2003.
Title from first page of PDF file. Document formatted into pages; contains x, 227 p.; also includes graphics. Includes bibliographical references (p. 224-227). Available online via OhioLINK's ETD Center.
9

Cook, Frederick K. « Rapid bioluminometric enumeration of microorganisms in ground beef ». Diss., Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/51933.

Full text
Abstract:
Use of the bioluminometric ATP assay was evaluated for estimating total bacterial counts in ground beef. Minimum sensitivity was found to be 10⁶ cfu/g using a double filtration procedure for sample preparation. Although ATP content per cfu decreased approximately 10 fold during storage, correlation of total aerobic plate count (APC) with microbial ATP content was 0.96. Selective non-microbial ATP extraction with ATPase treatment was evaluated for use in conjunction with the double filtration procedure to increase assay sensitivity. The new method was effective for removing additional non-microbial ATP without reducing ATP in bacteria. Estimated APC values were generally accurate to within ±0.50 log for ground beef samples above the detection limit of 5 x 10⁴ cfu/g. ATPase treatment increased sensitivity of the ATP assay and APC estimation by about 1 log while increasing assay time by 40 minutes, for a total of 60 minutes for 4 samples assayed in triplicate. The ATP assay was evaluated for use with ground beef patties inoculated with mixed ground beef spoilage flora, Pseudomonas, or Lactobacillus and stored at 2°C or 10°C using oxygen permeable or impermeable (vacuum) packaging. Excellent correlation (r²=0.95) was obtained for each inoculum and storage condition over the range of 5 x 10⁴ to 1 x 10⁹ cfu/g, when estimated APC values were compared with experimentally observed APC values. Usefulness of the ATP assay for estimating APC values of frozen ground beef was evaluated. Retail ground beef and Lactobacillus- and Pseudomonas-inoculated beef were frozen and thawed at different rates and examined for APC and microbial ATP content. Results indicated that, although freezing and thawing lowered numbers of Pseudomonas, APC values and microbial ATP content closely correlated. APC estimates were generally accurate to within 1/2 log. 
The importance of using an ATP assay standard to correct for variable enzyme activity and presence of quenching factors was demonstrated, and improved formulae were developed for optimum assay standard use. Alternate regression methods were evaluated for estimation of APC values but did not yield enhanced accuracy. Only one regression equation was needed for estimating APC values of ground beef containing different types of bacteria stored in various ways. Therefore, little knowledge of ground beef history is needed in order to rapidly and accurately estimate microbial numbers in ground beef using the bioluminometric ATP assay.
Ph. D.
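The internal-standard correction mentioned in the abstract ("an ATP assay standard to correct for variable enzyme activity and presence of quenching factors") works roughly as follows; the function and the numbers below are a hypothetical sketch, not the thesis's improved formulae.

```python
def corrected_atp(sample_rlu: float, spiked_rlu: float,
                  standard_atp: float) -> float:
    """Estimate sample ATP content from relative light units (RLU)
    measured before and after spiking a known amount of ATP standard.

    The light yield of the spike calibrates out quenching and
    variable luciferase activity in *this* sample's chemistry.
    """
    # RLU produced per unit of ATP, as observed in this sample
    yield_per_atp = (spiked_rlu - sample_rlu) / standard_atp
    return sample_rlu / yield_per_atp

# Hypothetical example: 500 RLU before the spike, 1500 RLU after
# spiking 1e-10 mol of ATP standard
atp_estimate = corrected_atp(500.0, 1500.0, 1e-10)  # mol of ATP
```

The spike raised the signal by 1000 RLU per 1e-10 mol, so the sample's 500 RLU corresponds to 5e-11 mol of ATP; a calibration curve would then convert this to an estimated aerobic plate count.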
10

Khansa, Wael. « Réseaux de Pétri P-Temporels : contribution à l'étude des systèmes à évènements discrets ». Chambéry, 1997. http://www.theses.fr/1997CHAMS005.

Full text
Abstract:
We show in this dissertation that, among the existing extensions of Petri nets, none has sufficient specification power to model and analyse discrete event systems with minimum and maximum sojourn-time constraints requiring synchronizations under obligation (as found, for example, in chemical processing industries). We are thus led to propose a new timed model for representing and analysing such systems, in which time intervals are associated with places, and which we call the p-time Petri net (p-RdP). The definition of a new tool requires methods for analysing its properties. We first define the strong properties to be extracted (liveness, boundedness of markings, liveness of tokens, ...). The specification power of this tool is then compared with that of other Petri net models. We provide enumerative analysis methods for studying the behaviour and verifying the properties of the modelled systems. Next, a structural analysis approach is established in order to study stationary behaviours and, consequently, the performance of the modelled systems. Moreover, systems may be subject to disturbances. It is then of interest to find robust controls that can absorb such disturbances. Knowledge of the margins on transition firing times can be a means of characterizing robustness. These margins are studied first for timed nets and then for the p-time model.
11

Wherrett, Mark. « A CCTV system for scene analysis facilitating personnel enumeration and tracking ». Thesis, University of Wolverhampton, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.401027.

Full text
12

O'Grady, Andrew Robert Francis. « Automated design of separation processes using implicit enumeration and interval analysis ». Thesis, University College London (University of London), 2005. http://discovery.ucl.ac.uk/1445752/.

Full text
Abstract:
This thesis concerns the automated synthesis of separation processes. A single multi-component stream is to be processed to give one or more pure component product streams. A list of units is available for the task, and the aim is to find the optimal flowsheet structure in terms of cost. Implicit enumeration (IE) has been used to tackle the synthesis problem. The main advantage of this approach is that IE does not require the development of a superstructure. A disadvantage of using IE is that it is necessary to discretise the values of unit operating conditions in order for there to be a finite search space (Fraga et al., 2000). The user may not have any idea of the effect of the discretisations on the quality of the solution. In addition, the optimal solution may be missed between the discrete values chosen. The purpose of this work is to address these issues. Interval analysis is used to bound the effects of this discretisation. This allows the cost of each particular flowsheet to be bounded based on the level of discretisation used. The technique is demonstrated by bounding the effect of discretisation on the synthesis of distillation flowsheets. The use of runs with progressively finer uniform discretisation leads to the isolation of the optimal structure. This result leads to the development of an adaptive algorithm that changes the discretisation profile in response to bounding information from downstream in the search. The algorithm operates recursively and isolates the optimal process structure for each stream encountered. This builds up to the isolation of the overall optimal process structure for the feed process stream. The effectiveness and performance of the new algorithm are evaluated using two very different separation problems. The first is a distillation sequence and the second a separation of a protein from a biological stream.
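The bounding idea can be sketched with naive interval arithmetic: evaluating a cost model over an interval of operating conditions yields an enclosure of every cost attainable inside one discretisation cell, so whole cells can be discarded when their lower bound exceeds the best known cost. This is an illustration only; the cost model is invented, and production interval libraries also control floating-point rounding.

```python
class Interval:
    """Closed interval [lo, hi] with the arithmetic needed below."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (hi if hi is not None else lo)
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# Hypothetical unit cost model cost(x) = 2x^2 - 3x + 10 over one
# discretisation cell x in [1, 2]; interval evaluation encloses
# every cost the unit could take anywhere inside the cell.
x = Interval(1.0, 2.0)
cost = Interval(2.0) * x * x + Interval(-3.0) * x + Interval(10.0)
```

Here the true range of the cost over [1, 2] is [9, 12], and the interval evaluation returns the valid (if loose) enclosure [6, 15]; refining the discretisation shrinks such enclosures, which is what drives the adaptive search described in the abstract.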
13

Collet, Gwendal. « Enumeration and analysis of models of planar maps via the bijective method ». Palaiseau, Ecole polytechnique, 2014. https://tel.archives-ouvertes.fr/tel-01084964/document.

Full text
Abstract:
Bijective combinatorics is a field which consists in studying the enumerative properties of some families of mathematical objects, by exhibiting bijections (ideally explicit) which preserve these properties between such families and already known objects. One can then apply any tool of analytic combinatorics to these new objects, in order to get explicit enumerations or asymptotic properties, or to perform random sampling. In this thesis, we will be interested in planar maps, that is, graphs drawn on the plane with no crossing edges. First, we will recover a simple formula, obtained by Eynard, for the generating series of bipartite maps and quasi-bipartite maps with boundaries of prescribed lengths, and we will give a natural generalization to p-constellations and quasi-p-constellations. In the second part of this thesis, we will present an original bijection between outertriangular simple maps (with no loops nor multiple edges) and Eulerian triangulations. We then use this bijection to design random samplers for rooted simple maps according to the number of vertices and edges. We will also study the metric properties of simple maps by proving the convergence of the rescaled distance-profile towards an explicit random measure related to the Brownian snake.
14

MacRae, Jean Dorothy. « Characterization of Caulobacters isolated from wastewater treatment systems and assay development for their enumeration ». Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/30112.

Full text
Abstract:
Caulobacters are gram-negative bacteria that have a biphasic life cycle consisting of a swarmer and a stalked stage. As a result they have elicited interest as a simple developmental model. Less attention has focussed on their role in the environment, although they have been found in almost every aquatic environment as well as in many soils. Caulobacters are often described as oligotrophic bacteria because of their prevalence in pristine waters but have now been isolated from the relatively nutrient-rich wastewater environment. In order to learn more about this population some basic characterization was carried out and an assay system to determine their prevalence in sewage plants was designed. Most of the organisms isolated from sewage treatment facilities had similar gross morphological features, but differed in holdfast composition, total protein profile, antibiotic resistance and restriction fragment length polymorphism, thereby indicating a greater diversity than originally assumed. Most of the organisms hybridized with flagellin and surface array genes that had previously been cloned, and only one of 155 non-Caulobacter sewage isolates hybridized with the flagellin gene probe; consequently these were used in a DNA-based enumeration strategy. DNA was isolated directly from sewage and probed with the flagellin and the surface array gene probes. The signals obtained were compared to standards made up of pooled Caulobacter DNA from the sewage isolates and non-Caulobacter DNA from organisms also present in sewage. Using this assay Caulobacters could only be detected above the 1% level, which was higher than their proportion in the wastewater environment. It appears that this approach will not be useful in monitoring Caulobacters in treatment plants unless a more highly conserved or higher copy number probe is found.
Faculty of Science
Department of Microbiology and Immunology
Graduate
15

Lockwood, Elise Nicole. « Student Approaches to Combinatorial Enumeration : The Role of Set-Oriented Thinking ». PDXScholar, 2011. https://pdxscholar.library.pdx.edu/open_access_etds/338.

Full text
Abstract:
Combinatorics is a growing topic in mathematics with widespread applications in a variety of fields. Because of this, it has become increasingly prominent in both K-12 and undergraduate curricula. There is a clear need in mathematics education for studies that address cognitive and pedagogical issues surrounding combinatorics, particularly related to students' conceptions of combinatorial ideas. In this study, I describe my investigation of students' thinking as it relates to counting problems. I interviewed a number of post-secondary students as they solved a variety of combinatorial tasks, and through the analysis of this data I defined and elaborated a construct that I call set-oriented thinking. I describe and categorize ways in which students used set-oriented thinking in their counting, and I put forth a model for relationships between the formulas/expressions, the counting processes, and the sets of outcomes that are involved in students' counting activity.
16

Ko, Han Il. « Noncoliform enumeration and identification in potable water, and their sensitivity to commonly used disinfectants ». Virtual Press, 1997. http://liblink.bsu.edu/uhtbin/catkey/1041914.

Full text
Abstract:
Tap water collected according to standard methods was examined for microbial presence. Epifluorescent diagnoses using the redox probe 5-cyano-2,3-ditolyl tetrazolium chloride (CTC), 4',6-diamidino-2-phenylindole (DAPI), and acridine orange (AO) were employed for direct evidence of microorganisms. Evidence of total (DAPI or AO) and respiring (CTC) bacteria, together with the heterotrophic plate count (HPC), was determined on multiple occasions during the summer, fall, and winter of 1996-1997. Pseudomonas aeruginosa, Acinetobacter sp., Bacillus licheniformis, and Methylobacterium rhodinum were isolated and identified by the API and Biolog systems using GN and GP procedures. On the basis of comparisons presented in this study between the CTC method and the standard HPC procedure, it appeared that the number of CTC-reducing bacteria in the tap water samples was typically higher than that determined by HPC, indicating that many respiring bacteria detected by the CTC reduction technique fail to produce visible colonies on the agar media used. In the seasonal data obtained by the CTC method, no difference was shown among respiring bacterial counts obtained from June through January. In the examination of P. aeruginosa viability in the presence of chlorine, the number of CTC-positive bacteria exceeded the number of CFU by more than 2 logs after exposure to chlorine, suggesting that reliance on HPC overestimates the efficacy of disinfection treatment. In inactivation assays using the Biolog MT plate, no sensitivity to chlorine or chloramine disinfectants was noted even at high concentration levels (5 mg/liter). Following an initial drop, bacterial activities increased as contact time increased. Thus, it appears that the MT microplate provides too low a cell concentration, too great a contact time, and/or too low a concentration of tetrazolium dye within the well for successful analysis of disinfectant capability against selected bacterial strains isolated from distribution water.
Department of Biology
17

SHRESTHA, JAYESH. « Static Program Analysis ». Thesis, Uppsala universitet, Informationssystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-208293.

Full text
18

Bonar, Michal Mateusz. « Rapid Enumeration, Sorting and Maturation Analysis of Single Viral Particles in HIV-1 Swarms by High-Resolution Flow Virometry ». Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case149944467787067.

Full text
19

Dhladhla, Busisiwe I. R. « Enumeration of insect viruses using microscopic and molecular analyses : South African isolate of Cryptophlebia leucotreta granulovirus as a case study ». Thesis, Nelson Mandela Metropolitan University, 2012. http://hdl.handle.net/10948/d1008395.

Texte intégral
Résumé :
Baculoviruses have been used as biocontrol agents to control insect pests in agriculture since the 1970s. Of the fifteen virus families known to infect insects, baculoviruses offer the greatest potential as insect biopesticides, due to their high host specificity, which makes them extremely safe to humans, other vertebrates, plants and non-target microorganisms. They comprise two genera: nucleopolyhedroviruses (NPVs) and granuloviruses (GVs). The South African isolate of Cryptophlebia leucotreta granulovirus (CrleGV-SA), which is infectious for the false codling moth (FCM), Thaumatotibia leucotreta (Meyrick) (Lepidoptera: Tortricidae), has been successfully developed into two commercial biopesticides, Cryptogran® and Cryptex®, for the control of FCM in citrus crops. The current method of enumeration used for CrleGV-SA virus particles in routine experiments during the production of the GV as a biopesticide is dark field microscopy. However, due to the small size of GVs (300-500 nm in length), the technique is not easy to perform on these viruses, and no systematic comparison has been made of potential alternative methods. Therefore, the main objective of this study was to develop a quantitative enumeration method for CrleGV-SA occlusion bodies (OBs) which is accurate, reliable, and feasible, and to compare the developed methods of enumeration to the current method. Purified and semi-purified CrleGV-SA viral stocks were prepared for enumeration studies using spectrophotometry, dark field microscopy, scanning electron microscopy (SEM) and real-time qPCR. Spectrophotometry was found to be an unreliable method for enumeration of GVs in the production, standardisation, and quality control of biopesticides. Dark field microscopy and SEM were found to be accurate and statistically comparable (p = 0.064) enumeration techniques. qPCR is currently being optimised for the enumeration of GVs.
This technique was demonstrated to generate accurate standard curves for absolute quantification of virus particles for pure and semi-pure virus preparations. qPCR offers the greatest potential as an accurate enumeration method because it is not affected by contamination with non-biological debris, nor by other biological material, owing to the specificity of the PCR primers. Further work is required to fully develop qPCR as an enumeration method for GVs. However, dark field microscopy has been successfully validated as an enumeration method. SEM, which has a high resolution compared to light microscopy, has an added advantage over dark field microscopy: the ability to distinguish virus particles in semi-pure viral stock preparations during counting. Therefore, SEM currently provides the most unambiguous and feasible enumeration method for GVs in both purified and semi-purified virus samples.
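The absolute quantification step described above can be illustrated with a short sketch. This is not the thesis's protocol, only the generic standard-curve arithmetic: a dilution series of known copy numbers gives a linear Ct-versus-log10(copies) fit, which is then inverted for unknown samples. All numbers below are illustrative.

```python
import math

def fit_standard_curve(copies, ct):
    """Least-squares fit of Ct = slope * log10(copies) + intercept."""
    x = [math.log10(c) for c in copies]
    n = len(x)
    mx, my = sum(x) / n, sum(ct) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, ct))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def copies_from_ct(ct_sample, slope, intercept):
    """Invert the standard curve to estimate the copy number of an unknown."""
    return 10 ** ((ct_sample - intercept) / slope)

# Illustrative 10-fold dilution series (copies per reaction) and Ct values.
dilutions = [1e7, 1e6, 1e5, 1e4, 1e3]
cts = [14.76, 18.08, 21.40, 24.72, 28.04]   # ~100% efficiency: slope ~ -3.32
slope, intercept = fit_standard_curve(dilutions, cts)
```

An amplification efficiency near 100% corresponds to a slope of about -3.32; real assays are validated against that benchmark before the curve is trusted for unknowns.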
Styles APA, Harvard, Vancouver, ISO, etc.
20

Machado, Lucas. « KL-cut based remapping ». reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/116138.

Texte intégral
Résumé :
Este trabalho introduz o conceito de cortes k e cortes kl sobre um circuito mapeado, em uma representação netlist. Esta nova abordagem é derivada do conceito de cortes k e cortes kl sobre AIGs (and inverter graphs), respeitando as diferenças entre essas duas formas de representar um circuito. As principais diferenças são: (1) o número de entradas em um nodo do grafo, e (2) a presença de inversores e buffers de forma explícita no circuito mapeado. Um algoritmo para enumerar cortes k e cortes kl é proposto e implementado. A principal motivação de usar cortes kl sobre circuitos mapeados é para realizar otimizações locais na síntese lógica de circuitos digitais. A principal contribuição deste trabalho é uma abordagem nova de remapeamento iterativo, utilizando cortes kl, reduzindo a área do circuito e respeitando as restrições de temporização do circuito. O uso de portas lógicas complexas pode potencialmente reduzir a área total de um circuito, mas elas precisam ser escolhidas corretamente de forma a manter as restrições de temporização do circuito. Ferramentas comerciais de síntese lógica trabalham melhor com portas lógicas simples e não são capazes de explorar eventuais vantagens em utilizar portas lógicas complexas. A abordagem proposta de remapeamento iterativo utilizando cortes kl é capaz de explorar uma quantidade maior de portas lógicas com funções lógicas diferentes, reduzindo a área do circuito, e mantendo as restrições de temporização intactas ao fazer uma checagem STA (análise temporal estática). Resultados experimentais mostram uma redução de até 38% de área na parte combinacional de circuitos para um subconjunto de benchmarks IWLS 2005, quando comparados aos resultados de ferramentas comerciais de síntese lógica. Outra contribuição deste trabalho é um novo modelo de rendimento (yield) para fabricação de circuitos integrados (IC) digitais, considerando problemas de resolução da etapa de litografia como uma fonte de diminuição do yield. 
O uso de leiautes regulares pode melhorar bastante a resolução da etapa de litografia, mas existe um aumento de área significativo ao se introduzir a regularidade. Esta é a primeira abordagem que considera o compromisso (trade off) de portas lógicas com diferentes níveis de regularidade e diferentes áreas durante a síntese lógica, de forma a melhorar o yield do projeto. A ferramenta desenvolvida de remapeamento tecnológico utilizando cortes kl foi modificada de forma a utilizar esse modelo de yield como função custo, de forma a aumentar o número de boas amostras (dies) por lâmina de silício (wafer), com resultados promissores.
This work introduces the concept of k-cuts and kl-cuts on top of a mapped circuit in a netlist representation. This new approach is derived from the concept of k-cuts and kl-cuts on top of AIGs (and-inverter graphs), respecting the differences between these two circuit representations. The main differences are: (1) the number of allowed inputs for a logic node, and (2) the presence of explicit inverters and buffers in the netlist. Algorithms for enumerating k-cuts and kl-cuts on top of a mapped circuit are proposed and implemented. The main motivation for using kl-cuts on top of mapped circuits is to perform local optimization in digital circuit logic synthesis. The main contribution of this work is a novel iterative remapping approach using kl-cuts, reducing area while keeping the timing constraints attained. The use of complex gates can potentially reduce the circuit area, but they have to be chosen wisely to preserve timing constraints. Commercial logic synthesis tools work better with simple cells and are not capable of taking full advantage of complex cells. The proposed iterative remapping approach can exploit a larger set of logic gates, reducing circuit area while respecting global timing constraints by performing an STA (static timing analysis) check. Experimental results show that this approach is able to reduce the area of the combinational portion of circuits by up to 38% for a subset of the IWLS 2005 benchmarks, when compared to results obtained from commercial logic synthesis tools. Another contribution of this work is a novel yield model for digital integrated circuit (IC) manufacturing, considering lithography printability problems as a source of yield loss. The use of regular layouts can improve lithography printability, but it results in a significant area overhead by introducing regularity.
This is the first approach that considers the trade-off between cells with different levels of regularity and different area overheads during logic synthesis, in order to improve overall design yield. The kl-cut-based technology remapping tool developed here was modified to use this yield model as its cost function, improving the number of good dies per wafer, with promising results.
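The k-cut enumeration underlying this kind of remapping follows a well-known bottom-up scheme (a generic sketch, not the author's exact algorithm): each node's cut set is its trivial cut plus every size-bounded union of one cut per fanin, with dominated cuts pruned. A minimal version over a toy netlist with invented node names:

```python
from itertools import product

def enumerate_k_cuts(netlist, k):
    """netlist: dict mapping node -> list of fanins (empty = primary input),
    given in topological order. Returns node -> list of cuts (frozensets)."""
    cuts = {}
    for node, fanins in netlist.items():
        if not fanins:
            cuts[node] = [frozenset([node])]
            continue
        merged = set()
        for combo in product(*(cuts[f] for f in fanins)):
            c = frozenset().union(*combo)
            if len(c) <= k:                       # respect the cut-size bound
                merged.add(c)
        # prune dominated cuts: a cut strictly containing another is redundant
        kept = [c for c in merged if not any(o < c for o in merged)]
        cuts[node] = [frozenset([node])] + sorted(kept, key=sorted)
    return cuts

# Toy mapped netlist: d = g(a, b), e = h(d, c).
netlist = {"a": [], "b": [], "c": [], "d": ["a", "b"], "e": ["d", "c"]}
cuts = enumerate_k_cuts(netlist, k=3)
```

A kl-cut of the real tool additionally carries an output boundary of at most l nodes; that bookkeeping is omitted in this sketch.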
Styles APA, Harvard, Vancouver, ISO, etc.
21

Domingues, Deborah Pereira. « Tópicos em combinatória ». [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/307514.

Texte intégral
Résumé :
Orientador: José Plínio de Oliveira Santos
Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Made available in DSpace on 2018-08-16T18:39:44Z (GMT). No. of bitstreams: 1 Domingues_DeborahPereira_M.pdf: 925996 bytes, checksum: 6a430acfaa4475e03a36ee7e09bbf42a (MD5) Previous issue date: 2010
Resumo: Neste trabalho estudamos dois importantes tópicos em combinatória. O primeiro deles é o Teorema Enumerativo de Pólya. No capítulo 2 é dada uma demonstração deste teorema usando o Teorema de Burnside. Também neste capítulo, encontram-se algumas de suas diversas aplicações. O segundo tópico trata de Teoria de Partições. Esta dissertação aborda alguns objetos de estudo desta área. O primeiro objeto é o método de Partition Analysis, usado para achar funções geradoras de vários tipos de interessantes funções de partição. Ainda relacionado a funções geradoras, o capítulo 3 aborda um pouco sobre q-séries. O segundo objeto é o método gráfico, que utiliza a representação gráfica de Ferrers para uma partição. Ainda neste capítulo, são usados os conceitos de quadrado de Durfee e símbolo de Frobenius para provar algumas identidades.
Abstract: This work presents two important topics in combinatorics. The first one is the Pólya Enumeration Theorem. In Chapter 2 a proof of this theorem is given using Burnside's Theorem; the same chapter also presents some of its many applications. The second topic is the Theory of Partitions. This dissertation addresses several objects of study in this area. The first is Partition Analysis, a method used to find the generating functions of various kinds of interesting partition functions. In the third chapter we also deal with q-series, which are likewise related to generating functions. The second is the graphical method, which uses the Ferrers graphical representation of a partition. In addition, we use the concepts of the Durfee square and the Frobenius symbol to prove some identities.
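As a concrete instance of the kind of counting the dissertation covers, Burnside's lemma applied to the cyclic group gives the classic necklace count. A small sketch (standard textbook material, not taken from the dissertation):

```python
from math import gcd

def count_necklaces(n, k):
    """Number of k-colorings of an n-bead necklace up to rotation.
    Burnside's lemma: average, over the n rotations r, the number of
    colorings fixed by r, which is k ** gcd(n, r)."""
    return sum(k ** gcd(n, r) for r in range(n)) // n
```

For example, count_necklaces(4, 2) gives 6, the number of binary necklaces with four beads; the full Pólya theorem refines this count by weighting colors with variables in the cycle index.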
Mestrado
Mestre em Matemática
Styles APA, Harvard, Vancouver, ISO, etc.
22

Nourbakhsh, Ghavameddin. « Reliability analysis and economic equipment replacement appraisal for substation and sub-transmission systems with explicit inclusion of non-repairable failures ». Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/40848/1/Ghavameddin_Nourbakhsh_Thesis.pdf.

Texte intégral
Résumé :
Modern society has come to expect electrical energy on demand, while many of the facilities in power systems are aging beyond repair and maintenance. The risk of failure increases with aging equipment and can pose serious consequences for continuity of electricity supply. As the equipment used in high-voltage power networks is very expensive, it may not be economically feasible to purchase and store spares in a warehouse for extended periods of time. On the other hand, there is normally a significant lead time between ordering equipment and receiving it. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, and can be of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is very complex because of the need to include detailed modelling and operation of substation and sub-transmission equipment using network flow evaluation, and to consider multiple levels of component failures. In this thesis a new model of aging equipment is developed that combines the standard treatment of random failures with a specific model for aging failures. This technique is applied to examine the impact of aging equipment on the system reliability of bulk supply loads and consumers in the distribution network over a defined range of planning years. The power system risk indices depend on many factors, such as the actual physical network configuration and operation, the aging condition of the equipment, and the relevant constraints.
The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contain valuable information that helps utilities better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual equipment to the power system risk indices, as part of a novel risk analysis tool. A new cost-worth approach was also developed that supports early planning decisions on the replacement of non-repairable aging components, in order to maintain a system reliability performance that is economically acceptable. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicit consideration of the effect of equipment entering a period of increased risk of a non-repairable failure.
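The thesis develops its own detailed model; purely to illustrate the generic idea of combining a constant random-failure hazard with an increasing aging hazard, one can sketch the following (the Weibull aging term and all parameter values are assumptions, not the thesis's model):

```python
import math

def survival(t, lam, beta, eta):
    """Probability a component survives to age t under a combined hazard:
    a constant rate lam (random failures) plus a Weibull aging term with
    shape beta > 1 and scale eta (non-repairable wear-out).
    Cumulative hazard: H(t) = lam * t + (t / eta) ** beta."""
    return math.exp(-(lam * t + (t / eta) ** beta))

def failure_prob_in_year(t, lam, beta, eta):
    """Probability of failing during year [t, t+1], given survival to age t."""
    return 1.0 - survival(t + 1, lam, beta, eta) / survival(t, lam, beta, eta)
```

With beta > 1 the yearly failure probability grows with age, which is what drives the early-replacement economics discussed above.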
Styles APA, Harvard, Vancouver, ISO, etc.
23

Brasil, Junior Nelson Gomes 1989. « Bijeções envolvendo os números de Catalan ». [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/307511.

Texte intégral
Résumé :
Orientador: José Plínio de Oliveira Santos
Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Matemática Estatística e Computação Científica
Made available in DSpace on 2018-08-25T04:32:08Z (GMT). No. of bitstreams: 1 BrasilJunior_NelsonGomes_M.pdf: 980636 bytes, checksum: dd8d61baeb633d5f598abc3523def800 (MD5) Previous issue date: 2014
Resumo: Neste trabalho, estudamos a sequência dos Números de Catalan, uma sequência que aparece como solução de vários problemas de contagem envolvendo árvores, palavras, grafos e outras estruturas combinatórias. Atualmente, são conhecidas cerca de 200 interpretações combinatórias distintas para os Números de Catalan, o que motiva o estudo de relações entre estas interpretações, isto é, entre conjuntos cuja cardinalidade é dada pelos termos desta sequência. O principal objetivo do nosso trabalho é, portanto, mostrar bijeções entre esses conjuntos. No início do texto fazemos uma pequena introdução histórica aos números de Catalan, assim como definimos algumas formas de representar a sequência estudada. Depois mostramos algumas bijeções clássicas entre conjuntos contados pela sequência de Catalan. Além disso, apresentamos outras bijeções entre conjuntos envolvendo diversos objetos combinatórios. No total, são exibidas 29 bijeções
Abstract: In this work, we study the sequence of Catalan Numbers, which appears as the solution of many counting problems involving trees, words, graphs and other combinatorial structures. Nowadays, about 200 different combinatorial interpretations of the Catalan Numbers are known, which motivates the study of relations between them, i.e., between sets whose cardinality is given by the terms of this sequence. The main objective of our work is therefore to show bijections between these sets. In the beginning, we give a short historical introduction to the Catalan Numbers and define some ways to represent the sequence. After that, we show some classical bijections between sets counted by the Catalan Numbers. Additionally, we exhibit other bijections between sets involving several combinatorial objects. Altogether, 29 bijections are presented.
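For concreteness, one of the classic Catalan families can be checked directly: a short sketch (standard material, not taken from the dissertation) that enumerates Dyck paths by brute force and compares the count with the closed form.

```python
from math import comb
from itertools import product

def catalan(n):
    """n-th Catalan number via the closed form C(2n, n) // (n + 1)."""
    return comb(2 * n, n) // (n + 1)

def dyck_paths(n):
    """All sequences of n up-steps (+1) and n down-steps (-1) whose partial
    sums never go below zero -- one of the sets counted by Catalan numbers."""
    found = []
    for steps in product((1, -1), repeat=2 * n):
        height = 0
        for s in steps:
            height += s
            if height < 0:
                break
        else:
            if height == 0:
                found.append(steps)
    return found
```

A bijection between two Catalan families is then an explicit map, checked to be injective and surjective, between two such sets of equal cardinality.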
Mestrado
Matematica Aplicada
Mestre em Matemática Aplicada
Styles APA, Harvard, Vancouver, ISO, etc.
24

Bernard, Jocelyn. « Gérer et analyser les grands graphes des entités nommées ». Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1067/document.

Texte intégral
Résumé :
Dans cette thèse nous étudierons des problématiques de graphes. Nous proposons deux études théoriques sur la recherche et l'énumération de cliques et quasi-cliques. Ensuite nous proposons une étude appliquée sur la propagation d'information dans un graphe d'entités nommées. Premièrement, nous étudierons la recherche de cliques dans des graphes compressés. Les problèmes MCE et MCP sont des problèmes rencontrés dans l'analyse des graphes. Ce sont des problèmes difficiles, pour lesquels des solutions adaptées doivent être conçues pour les grands graphes. Nous proposons de travailler sur une version compressée du graphe. Nous montrons les bons résultats obtenus par notre méthode pour l'énumération de cliques maximales. Secondement, nous étudierons l'énumération de quasi-cliques maximales. Nous proposons un algorithme distribué qui énumère l'ensemble des quasi-cliques maximales. Nous proposons aussi une heuristique qui liste des quasi-cliques plus rapidement. Nous montrons l'intérêt de l'énumération de ces quasi-cliques par une évaluation des relations en regardant la co-occurrence des noeuds dans l'ensemble des quasi-cliques énumérées. Troisièmement, nous travaillerons sur la diffusion d'événements dans un graphe d'entités nommées. De nombreux modèles existent pour simuler des problèmes de diffusion de rumeurs ou de maladies dans des réseaux sociaux ou des problèmes de propagation de faillites dans les milieux bancaires. Nous proposons de répondre au problème de diffusion d'événements dans des réseaux hétérogènes représentant un environnement économique du monde. Nous proposons un problème de diffusion, nommé problème de classification de l'infection, qui consiste à déterminer quelles entités sont concernées par un événement. Pour ce problème, nous proposons deux modèles inspirés du modèle de seuil linéaire auxquels nous ajoutons différentes fonctionnalités. Finalement, nous testons et validons nos modèles sur un ensemble d'événements
In this thesis we study graph problems: theoretical problems in pattern search and applied problems in information diffusion. We propose two theoretical studies on the detection and enumeration of dense subgraphs, such as cliques and quasi-cliques, and then an applied study on the propagation of information in a named-entities graph. First, we study the detection of cliques in compressed graphs. The maximal clique enumeration (MCE) and maximum clique (MCP) problems are encountered in the analysis of data graphs. These problems are difficult to solve (MCE is NP-hard and MCP is NP-complete), and adapted solutions must be found for large graphs. We propose to solve them by working on a compressed version of the initial graph, and we show the good results obtained by our method for the enumeration of maximal cliques on compressed graphs. Secondly, we study the enumeration of maximal quasi-cliques. We propose a distributed algorithm that enumerates the set of maximal quasi-cliques of the graph, as well as a heuristic that lists a set of quasi-cliques more quickly. We show the interest of enumerating these quasi-cliques through an evaluation of relations based on the co-occurrence of nodes in the set of enumerated quasi-cliques. Thirdly, we work on event diffusion in a named-entities graph. Many models exist to simulate the diffusion of rumors or diseases in social networks and of bankruptcies in banking networks. We address the diffusion of significant events in heterogeneous networks representing a global economic environment. We propose a diffusion problem, called the infection classification problem, which consists in determining which entities are affected by an event. To solve this problem we propose two models inspired by the linear threshold model, to which we add different features.
Finally, we test and validate our models on a set of events.
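The baseline that the two proposed models extend, the linear threshold model, can be sketched as follows. The graph, weights, and thresholds below are invented for illustration; the thesis's models add further features on top of this mechanism.

```python
def linear_threshold(in_neighbours, weight, threshold, seeds):
    """Linear threshold diffusion: an inactive node activates once the total
    weight of its active in-neighbours reaches its threshold. Iterate to a
    fixed point and return the final active set."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in in_neighbours.items():
            if node in active:
                continue
            influence = sum(weight[(u, node)] for u in nbrs if u in active)
            if influence >= threshold[node]:
                active.add(node)
                changed = True
    return active

# Toy economic network: the event starts at entity "a".
in_neighbours = {"a": [], "b": ["a"], "c": ["a", "b"], "d": ["c"]}
weight = {("a", "b"): 0.6, ("a", "c"): 0.3, ("b", "c"): 0.3, ("c", "d"): 0.2}
threshold = {"b": 0.5, "c": 0.5, "d": 0.5}
infected = linear_threshold(in_neighbours, weight, threshold, seeds={"a"})
```

Deciding which entities an event reaches (the infection classification problem above) then reduces to membership in the returned active set.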
Styles APA, Harvard, Vancouver, ISO, etc.
25

Bacher, Axel. « Chemins et animaux : applications de la théorie des empilements de pièces ». Phd thesis, Université Sciences et Technologies - Bordeaux I, 2011. http://tel.archives-ouvertes.fr/tel-00654805.

Texte intégral
Résumé :
The aim of this thesis is to establish enumerative results about certain classes of lattice paths and animals. These results are obtained by applying the theory of heaps of pieces developed by Viennot. We study discrete excursions (or generalized Dyck paths) of bounded height, obtaining combinatorial interpretations and extensions of results by Banderier, Flajolet and Bousquet-Mélou. We describe and enumerate several classes of self-avoiding walks, called weakly directed walks; these walks are more numerous than the prudent walks, which previously formed the largest natural class enumerated. We compute the average site perimeter of directed animals, proving conjectures of Conway and Le Borgne. Finally, we obtain new results on the enumeration of Klarner animals and of the multi-directed animals of Bousquet-Mélou and Rechnitzer.
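The bounded-height excursions studied here can be counted for small cases with a simple transfer-matrix style dynamic program; this is a generic sketch for checking values, not the thesis's heap-of-pieces machinery:

```python
def bounded_excursions(n, h):
    """Number of n-step paths with steps +1/-1 that start and end at
    height 0 and stay within [0, h] throughout."""
    counts = [1] + [0] * h            # counts[y]: paths so far ending at height y
    for _ in range(n):
        nxt = [0] * (h + 1)
        for y, c in enumerate(counts):
            if c:
                if y + 1 <= h:
                    nxt[y + 1] += c   # up step
                if y - 1 >= 0:
                    nxt[y - 1] += c   # down step
        counts = nxt
    return counts[0]
```

When h is at least n/2 the bound is inactive and the count is a Catalan number; lowering h thins the count, which is the regime the bounded-height results above concern.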
Styles APA, Harvard, Vancouver, ISO, etc.
26

Johnston, Michael David. « The Dominance of the Archaea in the Terrestrial Subsurface ». University of Akron / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=akron1384856797.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
27

McDonald, Andre Martin. « The analysis of enumerative source codes and their use in Burrows‑Wheeler compression algorithms ». Diss., 2010. http://hdl.handle.net/2263/27862.

Texte intégral
Résumé :
In the late 20th century the reliable and efficient transmission, reception and storage of information proved to be central to the most successful economies all over the world. The Internet, once a classified project accessible to a select few, is now part of the everyday lives of a large part of the human population, and as such the efficient storage of information is an important part of the information economy. The improvement in the information storage density of optical and electronic media has been remarkable, but the elimination of redundancy in stored data and the reliable reconstruction of the original data remain desired goals. The field of source coding is concerned with the compression of redundant data and its reliable decompression. The arithmetic source code, which was independently proposed by J. J. Rissanen and R. Pasco in 1976, revolutionized the field of source coding. Compression algorithms that use an arithmetic code to encode redundant data are typically more effective and computationally more efficient than compression algorithms that use earlier source codes such as extended Huffman codes. The arithmetic source code is also more flexible than earlier source codes, and is frequently used in adaptive compression algorithms. The arithmetic code remains the source code of choice, despite having been introduced more than 30 years ago. The problem of effectively encoding data from sources with known statistics (i.e. where the probability distribution of the source data is known) was solved with the introduction of the arithmetic code. The probability distribution of practical data is seldom available to the source encoder, however. The source coding of data from sources with unknown statistics is a more challenging problem, and remains an active research topic. Enumerative source codes were introduced by T. J. Lynch and L. D. Davisson in the 1960s.
These lossless source codes have the remarkable property that they may be used to effectively encode source sequences from certain sources without requiring any prior knowledge of the source statistics. One drawback of these source codes is the computationally complex nature of their implementations. Several years after the introduction of enumerative source codes, J. G. Cleary and I. H. Witten proved that approximate enumerative source codes may be realized by using an arithmetic code. Approximate enumerative source codes are significantly less complex than the original enumerative source codes, but are less effective than the original codes. Researchers have become more interested in arithmetic source codes than enumerative source codes since the publication of the work by Cleary and Witten. This thesis concerns the original enumerative source codes and their use in Burrows-Wheeler compression algorithms. A novel implementation of the original enumerative source code is proposed. This implementation has a significantly lower computational complexity than the direct implementation of the original enumerative source code. Several novel enumerative source codes are introduced in this thesis. These codes include optimal fixed-to-fixed length source codes with manageable computational complexity. A generalization of the original enumerative source code, which includes more complex data sources, is proposed in this thesis. The generalized source code uses the Burrows-Wheeler transform, which is a low-complexity algorithm for converting the redundancy of sequences from complex data sources to a more accessible form. The generalized source code effectively encodes the transformed sequences using the original enumerative source code. It is demonstrated and proved mathematically that this source code is universal (i.e. the code has an asymptotic normalized average redundancy of zero bits).
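The core of a Lynch-Davisson style enumerative code is combinatorial ranking: a sequence is transmitted as its index within the class of sequences sharing its length and weight, using about log2 C(n, w) bits and no source statistics. A minimal sketch for binary sequences (the thesis's low-complexity implementation and its generalizations go well beyond this):

```python
from math import comb

def enum_encode(bits):
    """Lexicographic rank of `bits` among binary sequences of the same
    length and the same number of ones."""
    index, ones_left = 0, sum(bits)
    for i, b in enumerate(bits):
        if b:
            # count sequences that put a 0 here and place all remaining ones later
            index += comb(len(bits) - i - 1, ones_left)
            ones_left -= 1
    return index

def enum_decode(index, n, weight):
    """Recover the unique length-n, weight-`weight` sequence with this rank."""
    bits, ones_left = [], weight
    for i in range(n):
        c = comb(n - i - 1, ones_left)
        if index >= c:
            bits.append(1)
            index -= c
            ones_left -= 1
        else:
            bits.append(0)
    return bits
```

The drawback mentioned above is visible here: the exact binomial coefficients grow with the block length, which is what makes direct implementations computationally heavy.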
AFRIKAANS : Die betroubare en doeltreffende versending, ontvangs en berging van inligting vorm teen die einde van die twintigste eeu die kern van die mees suksesvolle ekonomieë in die wêreld. Die Internet, eens op ’n tyd ’n geheime projek en toeganklik vir slegs ’n klein groep verbruikers, is vandag deel van die alledaagse lewe van ’n groot persentasie van die mensdom, en derhalwe is die doeltreffende berging van inligting ’n belangrike deel van die inligtingsekonomie. Die verbetering van die bergingsdigtheid van optiese en elektroniese media is merkwaardig, maar die uitwissing van oortolligheid in gebergde data, asook die betroubare herwinning van oorspronklike data, bly ’n doel om na te streef. Bronkodering is gemoeid met die kompressie van oortollige data, asook die betroubare dekompressie van die data. Die rekenkundige bronkode, wat onafhanklik voorgestel is deur J. J. Rissanen en R. Pasco in 1976, het ’n revolusie veroorsaak in die bronkoderingsveld. Kompressiealgoritmes wat rekenkundige bronkodes gebruik vir die kodering van oortollige data is tipies meer doeltreffend en rekenkundig meer effektief as kompressiealgoritmes wat vroeëre bronkodes, soos verlengde Huffman kodes, gebruik. Rekenkundige bronkodes, wat gereeld in aanpasbare kompressiealgoritmes gebruik word, is ook meer buigbaar as vroeëre bronkodes. Die rekenkundige bronkode bly na 30 jaar steeds die bronkode van eerste keuse. Die probleem om data wat afkomstig is van bronne met bekende statistieke (d.w.s. waar die waarskynlikheidsverspreiding van die brondata bekend is) doeltreffend te enkodeer is opgelos deur die instelling van rekenkundige bronkodes. Die bronenkodeerder het egter selde toegang tot die waarskynlikheidsverspreiding van praktiese data. Die bronkodering van data wat afkomstig is van bronne met onbekende statistieke is ’n groter uitdaging, en bly steeds ’n aktiewe navorsingsveld. T. J. Lynch en L. D. Davisson het tel-bronkodes in die 1960s voorgestel.
Tel-bronkodes het die merkwaardige eienskap dat bronsekwensies van sekere bronne effektief met hierdie foutlose kodes geënkodeer kan word, sonder dat die bronenkodeerder enige vooraf kennis omtrent die statistieke van die bron hoef te besit. Een nadeel van tel-bronkodes is die hoë rekenkompleksiteit van hul implementasies. J. G. Cleary en I. H. Witten het verskeie jare na die instelling van tel-bronkodes bewys dat benaderde tel-bronkodes gerealiseer kan word deur die gebruik van rekenkundige bronkodes. Benaderde tel-bronkodes het ’n laer rekenkompleksiteit as tel-bronkodes, maar benaderde tel-bronkodes is minder doeltreffend as die oorspronklike tel-bronkodes. Navorsers het sedert die werk van Cleary en Witten meer belangstelling getoon in rekenkundige bronkodes as tel-bronkodes. Hierdie tesis is gemoeid met die oorspronklike tel-bronkodes en die gebruik daarvan in Burrows-Wheeler kompressiealgoritmes. ’n Nuwe implementasie van die oorspronklike tel-bronkode word voorgestel. Die voorgestelde implementasie het ’n beduidend laer rekenkompleksiteit as die direkte implementasie van die oorspronklike tel-bronkode. Verskeie nuwe tel-bronkodes, insluitende optimale vaste-tot-vaste lengte tel-bronkodes met beheerbare rekenkompleksiteit, word voorgestel. ’n Veralgemening van die oorspronklike tel-bronkode, wat meer komplekse databronne insluit as die oorspronklike tel-bronkode, word voorgestel in hierdie tesis. Die veralgemeende tel-bronkode maak gebruik van die Burrows-Wheeler omskakeling. Die Burrows-Wheeler omskakeling is ’n lae-kompleksiteit algoritme wat die oortolligheid van bronsekwensies wat afkomstig is van komplekse databronne omskakel na ’n meer toeganklike vorm. Die veralgemeende bronkode enkodeer die omgeskakelde sekwensies effektief deur die oorspronklike tel-bronkode te gebruik. Die universele aard van hierdie bronkode word gedemonstreer en wiskundig bewys (d.w.s.
dit word bewys dat die kode ’n asimptotiese genormaliseerde gemiddelde oortolligheid van nul bisse het). Copyright
Dissertation (MEng)--University of Pretoria, 2010.
Electrical, Electronic and Computer Engineering
unrestricted
Styles APA, Harvard, Vancouver, ISO, etc.
28

Plitt, Ramona Teresa. « A Corpus-Based Analysis of Enumerative Existentials : From Grammatico-Semantic Features to Ariel’s Accessibility Theory ». 2018. https://tud.qucosa.de/id/qucosa%3A36603.

Texte intégral
Résumé :
Diese Arbeit befasst sich mit den grammatischen und semantischen Kontexten enumerativer 'there-existentials' im Englischen. Mithilfe von Korpusbelegen werden die Auftretenskontexte näher bestimmt. Im Anschluss werden die Ergebnisse anhand Mira Ariels 'Accessibility Theory' gegengeprüft und interpretiert.
This paper seeks to analyze the grammatical and semantic contexts of enumerative 'there-existentials' in English. By using corpus data, the contextual environment of 'there-existentials' will be defined more closely. Afterwards, the results will be checked and interpreted against Mira Ariel's 'Accessibility Theory'.
Styles APA, Harvard, Vancouver, ISO, etc.
29

PINZUTI, ALESSANDRO. « Compositional verification for Hierarchical Scheduling of Real-Time systems ». Doctoral thesis, 2013. http://hdl.handle.net/2158/799053.

Texte intégral
Résumé :
Hierarchical Scheduling (HS) techniques achieve resource partitioning among a set of Real-Time Applications, providing reduction of complexity, confinement of failure modes, and temporal isolation among system applications. This facilitates compositional analysis for architectural verification and plays a crucial role in all industrial areas where high-performance microprocessors allow growing integration of multiple applications on a single platform. We propose a compositional approach to formal specification and schedulability analysis of Real-Time Applications running under a Time Division Multiplexing (TDM) Global Scheduler and preemptive Fixed Priority (FP) Local Schedulers, according to the ARINC-653 standard. As a characterizing trait, each application is made of periodic, sporadic, and jittering tasks with offsets, jitters, and non-deterministic Execution Times, encompassing intra-application synchronizations through semaphores and mailboxes and inter-application communications among periodic tasks through message passing. The approach leverages the assumption of a TDM partitioning to enable compositional design and analysis based on the model of preemptive Time Petri Nets (pTPNs), which is expressly extended with a concept of Required Interface (RI) that specifies the embedding environment of an application through sequencing and timing constraints. This enables exact verification of intra-application constraints and approximate but safe verification of inter-application constraints. Experimentation illustrates results and validates their applicability on two challenging workloads in the field of safety-critical avionic systems.
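The setting can be made concrete with a toy discrete-time simulation. This is only an illustration of the TDM-plus-fixed-priority arrangement, not the pTPN-based analysis the thesis actually performs; the task set and slot sizes are invented, tasks are (period, wcet) pairs with rate-monotonic priorities and implicit deadlines.

```python
def tdm_fp_schedulable(tasks, slot, frame, horizon):
    """Simulate one application that owns the first `slot` ticks of every
    `frame` ticks; inside the slot, pending jobs run under preemptive
    fixed priority (shorter period = higher priority). Returns False as
    soon as a job is still unfinished when its next release, i.e. its
    deadline, arrives."""
    tasks = sorted(tasks)                 # sort by period: rate-monotonic order
    remaining = [0] * len(tasks)          # unfinished execution time per task
    for t in range(horizon):
        for i, (period, wcet) in enumerate(tasks):
            if t % period == 0:
                if remaining[i] > 0:      # previous job missed its deadline
                    return False
                remaining[i] = wcet
        if t % frame < slot:              # this application's TDM window
            for i in range(len(tasks)):   # pick the highest-priority pending job
                if remaining[i] > 0:
                    remaining[i] -= 1
                    break
    return True
```

With a 5-of-10 slot, the set {(10, 3), (20, 4)} fits exactly, while raising the second task's execution time to 6 overloads the partition; the compositional analysis above answers such questions exactly rather than by simulation.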
30

MARINO, ANDREA. « Algorithms for Biological Graphs : Analysis and Enumeration ». Doctoral thesis, 2013. http://hdl.handle.net/2158/803956.

Full text
31

Takalani, Ntendeni Annah. « q-Enumeration of permutations avoiding adjacent patterns ». Diss., 2009. http://hdl.handle.net/11602/1059.

Full text
32

Cleaton, Julie M. « Comparing Sight-Resight Methods for Dog Populations : Analysis of 2015 and 2016 Rabies Vaccination Campaign Data from Haiti ». 2017. http://scholarworks.gsu.edu/iph_theses/535.

Full text
Abstract:
INTRODUCTION: Sight-resight studies are performed to estimate population sizes, in this case dog populations in rabies-endemic areas. AIM: This study compares one- and two-day sight-resight methods, with two-day as the standard, to explore the feasibility and accuracy of the one-day method under different vaccination campaign strategies and dog population characteristics. METHODS: 2015 household survey data and sight-resight data are analyzed to find the percentages of free-roaming and confined dogs in the community, and these are used to adjust the population estimate formulas. 2016 sight-resight data are analyzed both as a two-day campaign and as if it had been a one-day campaign. In a sensitivity analysis, confidence intervals are explored in relation to vaccination coverage. RESULTS: Before missed-mark and proportion-free-roaming corrections, the one-day method yields population estimates slightly below those of the two-day method when the vaccination campaign is central point, above them when door-to-door, and far below them when capture, vaccinate, release. After corrections, door-to-door estimates were accurate, whereas central point and capture, vaccinate, release estimates substantially underestimated population sizes. DISCUSSION: Results suggest that the one-day mark-resight method could be used to conserve resources, depending on the vaccination method and estimated coverage.
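The population-estimation step this abstract describes is a mark-resight calculation. Below is a minimal sketch of the standard Chapman-corrected Lincoln-Petersen estimator, which is an assumed baseline; the thesis's missed-mark and free-roaming corrections are not reproduced here.

```python
def chapman_estimate(marked, sighted, resighted):
    """Chapman-corrected Lincoln-Petersen population estimate.

    marked    -- animals marked (e.g. vaccinated) in the first pass
    sighted   -- animals counted on the resight day
    resighted -- sighted animals that carried a mark

    The +1 terms are the Chapman correction, which reduces the bias
    of the plain marked * sighted / resighted estimator.
    """
    return (marked + 1) * (sighted + 1) / (resighted + 1) - 1
```

For instance, marking 100 dogs and later sighting 80 of which 40 are marked gives an estimate of roughly 199 dogs.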
33

Chakrabarti, Sujit Kumar. « Using Explicit State Space Enumeration For Specification Based Regression Testing ». Thesis, 2008. http://hdl.handle.net/2005/738.

Full text
Abstract:
Regression testing of an evolving software system involves significant challenges: it must maximise the probability of detecting whether the latest changes have broken an existing feature, and it must do so as economically as possible. A particularly important class of software systems is API libraries, which typically constitute central components of many software systems. High quality requirements make it imperative to continually optimise the internal implementation of such libraries without affecting the external interface, so it is preferable to guide their regression testing by some kind of formal specification of the library. The testing problem comprises three parts: computation of test data, execution of tests, and analysis of test results; current research mostly focuses on the first. The objective of test data computation is to maximise the probability of uncovering bugs with as few test cases as possible. For regression testing, this means selecting a subset of the original test suite whose execution suffices to detect bugs likely introduced by modifications made since the last round of testing. A variant of this problem is the regression testing of API libraries, which is usually done by making function calls so that the resulting sequence of calls satisfies a test specification; the test specification in turn embodies some notion of completeness. In this thesis, we focus on the problem of test sequence computation for the regression testing of API libraries. At the heart of the method lies the creation of a state space model of the API library, obtained by reverse engineering it through execution of the system with guidance from a formal API specification. Once the state space graph is obtained, it is used to compute test sequences satisfying a given test specification.
We analyse the theoretical complexity of test sequence computation and provide various heuristic algorithms for it. State space explosion is a classical problem encountered whenever one attempts to create a finite state model of a program, and our method faces this limitation too. We explore a simple and intuitive way of ameliorating the problem: reducing the size of the state vector. We develop theoretical insights into this technique and present experimental results indicating its practical effectiveness. Finally, we bring all of this together in the design and implementation of a tool called Modest.
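The idea of computing test sequences from a state space graph can be illustrated with a toy heuristic. This is my own greedy sketch under assumed names, not the thesis's algorithms or the Modest tool: given a reverse-engineered state graph, repeatedly walk to the nearest uncovered transition until every transition has been exercised.

```python
from collections import deque

def covering_sequence(graph, start):
    """Greedy call sequence that covers every transition at least once.

    graph -- {state: [(api_call, next_state), ...]}
    """
    uncovered = {(s, call, t) for s, outs in graph.items() for call, t in outs}
    seq, state = [], start
    while uncovered:
        path = _bfs_to_uncovered(graph, state, uncovered)
        if path is None:
            break  # remaining transitions are unreachable from here
        for call, nxt in path:
            seq.append(call)
            uncovered.discard((state, call, nxt))
            state = nxt
    return seq

def _bfs_to_uncovered(graph, start, uncovered):
    """BFS for the shortest walk ending in an uncovered transition."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        s, path = queue.popleft()
        for call, t in graph.get(s, []):
            step = path + [(call, t)]
            if (s, call, t) in uncovered:
                return step
            if t not in seen:
                seen.add(t)
                queue.append((t, step))
    return None
```

On a two-state stack model (`push` from "empty" to "nonempty", `pop` back, `push` looping on "nonempty"), the walk exercises all three transitions in four calls.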
34

Thimm, Georg [Verfasser]. « A graph theoretical approach to the analysis, comparison, and enumeration of crystal structures / vorgelegt von Georg Thimm ». 2008. http://d-nb.info/1001825144/34.

Full text
35

Schmidt, Philip J. « Addressing the Uncertainty Due to Random Measurement Errors in Quantitative Analysis of Microorganism and Discrete Particle Enumeration Data ». Thesis, 2010. http://hdl.handle.net/10012/5596.

Full text
Abstract:
Parameters associated with the detection and quantification of microorganisms (or discrete particles) in water such as the analytical recovery of an enumeration method, the concentration of the microorganisms or particles in the water, the log-reduction achieved using a treatment process, and the sensitivity of a detection method cannot be measured exactly. There are unavoidable random errors in the enumeration process that make estimates of these parameters imprecise and possibly also inaccurate. For example, the number of microorganisms observed divided by the volume of water analyzed is commonly used as an estimate of concentration, but there are random errors in sample collection and sample processing that make these estimates imprecise. Moreover, this estimate is inaccurate if poor analytical recovery results in observation of a different number of microorganisms than what was actually present in the sample. In this thesis, a statistical framework (using probabilistic modelling and Bayes’ theorem) is developed to enable appropriate analysis of microorganism concentration estimates given information about analytical recovery and knowledge of how various random errors in the enumeration process affect count data. Similar models are developed to enable analysis of recovery data given information about the seed dose. 
This statistical framework is used to address several problems: (1) estimation of parameters that describe random sample-to-sample variability in the analytical recovery of an enumeration method, (2) estimation of concentration, and quantification of the uncertainty therein, from single or replicate data (which may include non-detect samples), (3) estimation of the log-reduction of a treatment process (and the uncertainty therein) from pre- and post-treatment concentration estimates, (4) quantification of random concentration variability over time, and (5) estimation of the sensitivity of enumeration processes given knowledge about analytical recovery. The developed models are also used to investigate alternative strategies that may enable collection of more precise data. The concepts presented in this thesis are used to enhance analysis of pathogen concentration data in Quantitative Microbial Risk Assessment so that computed risk estimates are more predictive. Drinking water research and prudent management of treatment systems depend upon collection of reliable data and appropriate interpretation of the data that are available.
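The Bayesian machinery this abstract describes can be illustrated with a toy grid-based posterior. This is an illustrative simplification under assumptions of my own (a Poisson observation model thinned by a known mean recovery, and a flat prior), not the thesis's hierarchical model.

```python
import numpy as np

def concentration_posterior(count, volume, recovery, c_grid):
    """Posterior over candidate concentrations c_grid under a flat prior,
    assuming count ~ Poisson(c * volume * recovery); imperfect analytical
    recovery thins the expected count.
    """
    lam = c_grid * volume * recovery
    log_like = count * np.log(lam) - lam  # Poisson log-likelihood up to a constant
    post = np.exp(log_like - log_like.max())  # subtract max for numerical stability
    return post / post.sum()
```

With 10 organisms counted in 1 L at 50% recovery, the posterior mode sits near the naive recovery-corrected estimate of 10 / (1 x 0.5) = 20 organisms/L, but the full posterior also quantifies the uncertainty around it.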
36

Huang, Pei Ying, et 黃珮穎. « Industrial Analysis and Technology Valuation of the Liquid Biopsy : A Case Study of the Circulating Tumor Cell Enumeration Technology ». Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107CGU05105004%22.&searchmode=basic.

Full text
37

Idris, Muhammad. « Real-time Business Intelligence through Compact and Efficient Query Processing Under Updates ». 2018. https://tud.qucosa.de/id/qucosa%3A33726.

Full text
Abstract:
Responsive analytics are rapidly taking over the traditional data analytics dominated by post-fact approaches in traditional data warehousing. Recent advancements in analytics demand placing analytical engines at the forefront of the system, reacting to updates occurring at high speed to detect patterns, trends, and anomalies. Such solutions find applications in Financial Systems, Industrial Control Systems, Business Intelligence, and online Machine Learning, among others. These applications are usually associated with Big Data and require the ability to react to constantly changing data in order to obtain timely insights and take proactive measures. Generally, these systems specify the analytical results, or their basic elements, in a query language; the main task is then to maintain these results efficiently under frequent updates. The task of reacting to updates and analyzing changing data has been addressed in two ways in the literature: traditional business intelligence (BI) solutions focus on historical data analysis, where the data is refreshed periodically and in batches, while stream processing solutions process streams of data from transient sources as a flow (or set of flows) of data items. Both kinds of systems share the niche of reacting to updates (known as dynamic evaluation); however, they differ in architecture, query languages, and processing mechanisms. In this thesis, we investigate the possibility of a reactive and unified framework to model queries that appear in both kinds of systems. In traditional BI solutions, evaluating queries under updates has been studied under the umbrella of incremental evaluation, which is based on the relational incremental view maintenance model and mostly focuses on queries that feature equi-joins.
Streaming systems, in contrast, generally follow automaton-based models to evaluate queries under updates, and they generally process queries featuring comparisons of temporal attributes (e.g., timestamp attributes) along with comparisons of non-temporal attributes over streams of bounded size. Temporal comparisons constitute inequality constraints, while non-temporal comparisons can be either equality or inequality constraints, hence these systems mostly process inequality joins. As a starting point, we postulate the thesis that queries in streaming systems can also be evaluated efficiently based on the paradigm of incremental evaluation, just as in BI systems, in a main-memory model. The efficiency of such a model is measured in terms of runtime memory footprint and update processing cost. The existing approaches to dynamic evaluation in both kinds of systems present a trade-off between memory footprint and update processing cost: systems that avoid materialization of query (sub)results incur high update latency, while systems that materialize (sub)results incur a high memory footprint. We are interested in building a model that addresses this trade-off. In particular, we overcome it by investigating a practical dynamic evaluation algorithm for queries that appear in both kinds of systems, and we present a main-memory data representation that allows query (sub)results to be enumerated without materialization and maintained efficiently under updates. We call this representation the Dynamic Constant Delay Linear Representation (DCLR).
We devise DCLRs with the following properties: 1) they allow, without materialization, enumeration of query results with bounded delay (and with constant delay for a sub-class of queries); 2) they allow tuple lookup in query results with logarithmic delay (and with constant delay for conjunctive queries with equi-joins only); 3) they take space linear in the size of the database; 4) they can be maintained efficiently under updates. We first study DCLRs with these properties for the class of acyclic conjunctive queries featuring equi-joins with projections and present the dynamic evaluation algorithm. Then, we present the generalization of this algorithm to the class of acyclic queries featuring multi-way theta-joins with projections. The working of the dynamic algorithms over DCLRs is based on a particular variant of join trees, called Generalized Join Trees (GJTs), that guarantee the properties of DCLRs described above. We define GJTs and present algorithms to test a conjunctive query featuring theta-joins for acyclicity and to generate GJTs for such queries. To do this, we extend the classical GYO algorithm from testing a conjunctive query with equalities for acyclicity to testing a conjunctive query featuring multi-way theta-joins with projections, and we further extend it to generate GJTs for queries that are acyclic. We implemented our algorithms in a query compiler that takes SQL queries as input and generates executable Scala code: a trigger program that processes queries and maintains them under updates. We tested our approach against state-of-the-art main-memory BI and CEP systems. Our evaluation results show that our DCLR-based approach is over an order of magnitude more efficient than existing systems in both memory footprint and update processing cost. We have also shown that enumerating query results without materialization from DCLRs is comparable to (and in some cases faster than) enumerating from materialized query results.
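The classical GYO reduction mentioned in this abstract is concrete enough to sketch. The following illustrative version handles plain equi-join hypergraphs only (the thesis's extension to multi-way theta-joins is not reproduced): repeatedly delete vertices that occur in a single hyperedge and hyperedges subsumed by another; the query is alpha-acyclic iff everything disappears.

```python
def is_alpha_acyclic(hyperedges):
    """GYO reduction: a conjunctive query's hypergraph is alpha-acyclic
    iff repeated ear removal reduces it to nothing.

    hyperedges -- iterable of vertex sets, one per atom of the query
    """
    edges = [set(e) for e in hyperedges]
    changed = True
    while changed:
        changed = False
        # Rule 1: drop vertices that occur in exactly one hyperedge.
        for e in edges:
            lone = {v for v in e if sum(v in f for f in edges) == 1}
            if lone:
                e -= lone
                changed = True
        # Rule 2: drop hyperedges that are empty or contained in another.
        for i, e in enumerate(edges):
            rest = edges[:i] + edges[i + 1:]
            if not e or any(e <= f for f in rest):
                edges = rest
                changed = True
                break
    return not edges
```

For example, the triangle query R(a,b), S(b,c), T(a,c) has no ear to remove and is therefore cyclic, whereas the path query R(a,b), S(b,c) reduces to nothing and is acyclic.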