Academic literature on the topic 'Uncertain sequences'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Uncertain sequences.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Dissertations / Theses on the topic "Uncertain sequences"

1

Howing, Frank. "Analysis and measurement of motion in 2D medical imaging sequences exploiting uncertain knowledge." Thesis, University of South Wales, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.393062.

2

Bernardini, Giulia. "Combinatorial Methods for Biological Data." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2021. http://hdl.handle.net/10281/305220.

Abstract:
The main goal of this thesis is to develop new algorithmic frameworks to deal with (i) a convenient representation of a set of similar genomes and (ii) phylogenetic data, with particular attention to the increasingly accurate tumor phylogenies. A "pan-genome" is, in general, any collection of genomic sequences to be analyzed jointly or to be used as a reference for a population. A phylogeny, in turn, is meant to describe the evolutionary relationships among a group of items, be they species of living beings, genes, natural languages, ancient manuscripts or cancer cells. With the exception of one of the results included in this thesis, related to the analysis of tumor phylogenies, the focus of the whole work is mainly theoretical, the intent being to lay firm algorithmic foundations for the problems by investigating their combinatorial aspects, rather than to provide practical tools for attacking them. Deep theoretical insights into the problems allow a rigorous analysis of existing methods, identifying their strong and weak points, providing details on how they perform and helping to decide which problems need to be further addressed. In addition, it is often the case that new theoretical results (algorithms, data structures and reductions to other well-studied problems) can either be directly applied or adapted to fit the model of a practical problem, or at least serve as inspiration for developing new practical tools.
The first part of this thesis is devoted to methods for handling an elastic-degenerate text, a computational object that compactly encodes a collection of similar texts, such as a pan-genome. Specifically, we attack the problem of matching a sequence in an elastic-degenerate text, both exactly and allowing a certain number of errors, and the problem of comparing two degenerate texts. In the second part we consider both tumor phylogenies, describing the evolution of a tumor, and "classical" phylogenies, representing, for instance, the evolutionary history of living beings. In particular, we present new techniques to compare two or more tumor phylogenies, needed to evaluate the results of different inference methods, and we give a new, efficient solution to a longstanding problem on "classical" phylogenies: deciding whether, in the presence of missing data, it is possible to arrange a set of species in a phylogenetic tree that enjoys specific properties.
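The elastic-degenerate text matching problem this abstract studies can be stated with a tiny brute-force sketch. This is my own toy formulation for illustration, not the thesis's algorithm; an ED text is modeled as a list of segments, each a list of alternative strings, and a pattern matches if it occurs in some expansion.

```python
from itertools import product

def ed_text_matches(ed_text, pattern):
    """Check whether `pattern` occurs as a substring of some expansion of
    an elastic-degenerate text (a list of segments, each a list of
    alternative strings). Brute force, exponential in the number of
    segments: it states the problem, it does not solve it efficiently."""
    for choice in product(*ed_text):
        if pattern in "".join(choice):
            return True
    return False

# A toy ED text whose four expansions are ACGT, ACCT, AGGT and AGCT.
ed = [["A"], ["C", "G"], ["G", "C"], ["T"]]
print(ed_text_matches(ed, "ACGT"))  # True: choose C in segment 2, G in segment 3
print(ed_text_matches(ed, "TT"))    # False: no expansion contains TT
```

Efficient algorithms, such as those developed in the thesis, avoid this exponential expansion by processing the segments on the fly.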
3

Touati, Sarah. "Complexity, aftershock sequences, and uncertainty in earthquake statistics." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/6224.

Abstract:
Earthquake statistics is a growing field of research with direct application to probabilistic seismic hazard evaluation. The earthquake process is a complex spatio-temporal phenomenon, and has been thought to be an example of the self-organised criticality (SOC) paradigm, in which events occur as cascades on a wide range of sizes, each determined by fine details of the rupture process. As a consequence, deterministic prediction of specific event sizes, locations, and times may well continue to remain elusive. However, probabilistic forecasting, based on statistical patterns of occurrence, is a much more realistic goal at present, and is being actively explored and tested in global initiatives. This thesis focuses on the temporal statistics of earthquake populations, exploring the uncertainties in various commonly-used procedures for characterising seismicity and explaining the origins of these uncertainties. Unlike many other SOC systems, earthquakes cluster in time and space through aftershock triggering. A key point in the thesis is to show that the earthquake inter-event time distribution is fundamentally bimodal: it is a superposition of a gamma component from correlated (co-triggered) events and an exponential component from independent events. Volcano-tectonic earthquakes at Italian and Hawaiian volcanoes exhibit a similar bimodality, which in this case, may arise as the sum of contributions from accelerating and decelerating rates of events preceding and succeeding volcanic activity. Many authors, motivated by universality in the scaling laws of critical point systems, have sought to demonstrate a universal data collapse in the form of a gamma distribution, but I show how this gamma form is instead an emergent property of the crossover between the two components. The relative size of these two components depends on how the data is selected, so there is no universal form. 
The mean earthquake rate—or, equivalently, inter-event time—for a given region takes time to converge to an accurate value, and it is important to characterise this sampling uncertainty. As a result of temporal clustering and non-independence of events, the convergence is found to be much slower than the Gaussian rate of the central limit theorem. The rate of this convergence varies systematically with the spatial extent of the region under consideration: the larger the region, the closer to Gaussian convergence. This can be understood in terms of the increasing independence of the inter-event times with increasing region size as aftershock sequences overlap in time to a greater extent. On the other hand, within this high-overlap regime, a maximum likelihood inversion of parameters for an epidemic-type statistical model suffers from lower accuracy and a systematic bias; specifically, the background rate is overestimated. This is because the effect of temporal overlapping is to mask the correlations and make the time series look more like a Poisson process of independent events. This is an important result with practical relevance to studies using inversions, for example, to infer temporal variations in background rate for time-dependent hazard estimation.
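The bimodal inter-event time distribution described above (a gamma component from co-triggered events superposed on an exponential component from independent events) can be sketched with a toy mixture simulation. The mixing fraction and distribution parameters below are illustrative assumptions, not values fitted to any earthquake catalogue.

```python
import random

def sample_interevent_times(n, frac_correlated=0.6, gamma_shape=0.5,
                            gamma_scale=1.0, poisson_rate=0.1, seed=42):
    """Draw inter-event times from a two-component mixture: a gamma
    component (correlated, co-triggered events) and an exponential
    component (independent background events, i.e. a Poisson process)."""
    rng = random.Random(seed)
    times = []
    for _ in range(n):
        if rng.random() < frac_correlated:
            # Correlated (aftershock) component: gamma with shape < 1
            times.append(rng.gammavariate(gamma_shape, gamma_scale))
        else:
            # Independent background component: exponential waiting times
            times.append(rng.expovariate(poisson_rate))
    return times

times = sample_interevent_times(10000)
short = sum(t < 1.0 for t in times) / len(times)   # dominated by the gamma part
long_ = sum(t > 10.0 for t in times) / len(times)  # dominated by the exponential part
print(f"fraction below 1.0: {short:.2f}, fraction above 10.0: {long_:.2f}")
```

Because the two components dominate at different time scales, the relative sizes of the short-time and long-time tails shift with `frac_correlated`, mirroring the thesis's point that the apparent "universal" gamma form depends on how the data is selected.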
4

Hildebrandt, Jordan. "Calibrating M-Sequence and C-Sequence GPTSs with uncertainty quantification and cyclostratigraphy." Wittenberg University Honors Theses / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wuhonors1337876096.

5

Herman, Joseph L. "Multiple sequence analysis in the presence of alignment uncertainty." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:88a56d9f-a96e-48e3-b8dc-a73f3efc8472.

Abstract:
Sequence alignment is one of the most intensely studied problems in bioinformatics, and is an important step in a wide range of analyses. An issue that has gained much attention in recent years is the fact that downstream analyses are often highly sensitive to the specific choice of alignment. One way to address this is to jointly sample alignments along with other parameters of interest. In order to extend the range of applicability of this approach, the first chapter of this thesis introduces a probabilistic evolutionary model for protein structures on a phylogenetic tree; since protein structures typically diverge much more slowly than sequences, this allows for more reliable detection of remote homologies, improving the accuracy of the resulting alignments and trees, and reducing sensitivity of the results to the choice of dataset. In order to carry out inference under such a model, a number of new Markov chain Monte Carlo approaches are developed, allowing for more efficient convergence and mixing on the high-dimensional parameter space. The second part of the thesis presents a directed acyclic graph (DAG)-based approach for representing a collection of sampled alignments. This DAG representation allows the initial collection of samples to be used to generate a larger set of alignments under the same approximate distribution, enabling posterior alignment probabilities to be estimated reliably from a reasonable number of samples. If desired, summary alignments can then be generated as maximum-weight paths through the DAG, under various types of loss or scoring functions. The acyclic nature of the graph also permits various other types of algorithms to be easily adapted to operate on the entire set of alignments in the DAG. In the final part of this work, methodology is introduced for alignment-DAG-based sequence annotation using hidden Markov models, and RNA secondary structure prediction using stochastic context-free grammars. 
Results on test datasets indicate that the additional information contained within the DAG allows for improved predictions, resulting in substantial gains over simply analysing a set of alignments one by one.
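A summary alignment as a maximum-weight path through a DAG, as described above, can be computed with a standard topological-order dynamic program. A minimal sketch follows, with made-up node weights standing in for estimated posterior column probabilities; the thesis's DAG construction and loss functions are richer than this.

```python
from collections import defaultdict

def max_weight_path(nodes, edges, weight):
    """Return a maximum-weight path through a DAG whose nodes are, say,
    candidate alignment columns and whose weights are estimated posterior
    probabilities. `nodes` must be listed in topological order."""
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
    best = {n: weight[n] for n in nodes}   # best path weight ending at n
    back = {n: None for n in nodes}        # predecessor on that best path
    for u in nodes:  # topological order: best[u] is final when u is processed
        for v in succ[u]:
            if best[u] + weight[v] > best[v]:
                best[v] = best[u] + weight[v]
                back[v] = u
    end = max(nodes, key=best.get)         # best-scoring path endpoint
    path = []
    while end is not None:
        path.append(end)
        end = back[end]
    return path[::-1]

# Toy DAG: two alternative middle columns between a start and an end column.
nodes = ["s", "a", "b", "t"]
edges = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
weight = {"s": 0.1, "a": 0.9, "b": 0.2, "t": 0.5}
print(max_weight_path(nodes, edges, weight))  # ['s', 'a', 't']
```

Because the graph is acyclic, a single pass in topological order suffices, which is what makes it cheap to rerun the same DAG under different scoring functions.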
6

Herner, Alan Eugene. "Measuring Uncertainty of Protein Secondary Structure." Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1302305875.

7

Theorell, Axel. "Uncertainty-aware Tracking of Single Bacteria over Image Sequences with Low Frame Rate." Thesis, KTH, Optimeringslära och systemteori, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-173801.

Abstract:
In single-cell analysis, the physiological states of individual cells are studied. Some studies focus on the development over time of a particular cell characteristic. One way to obtain time-resolved single-cell data is to conduct an experiment on a cell population and record a sequence of images of the population over the course of the experiment. If a mapping is at hand that determines which cell gave rise to each measured cell in the image sequence, time-resolved single-cell data can be extracted. Such a mapping is called a lineage tree, and the process of creating it is called tracking. One aim of this work is to develop a tracking algorithm that incorporates organism-specific knowledge, such as average division time, into the tracking process. With respect to this aim, a Bayesian model that incorporates biological knowledge is derived, with which every hypothetical lineage tree can be assigned a probability. Additionally, two Monte Carlo algorithms are developed that approximate the probability distribution over lineage trees given by the Bayesian model. Once an approximate distribution is known, the most likely lineage tree, for example, can be extracted and used. In many cases, the information provided to an automatic tracking algorithm is insufficient for it to find the gold-standard lineage tree. In these cases, one possibility is to construct the gold-standard lineage tree by manually correcting the lineage tree produced by the tracking algorithm. A second aim of this work is to attach a confidence to every assignment in a lineage tree, in order to give the person doing manual corrections useful information about which assignments to change. Such a confidence is provided by the Monte Carlo tracking methods developed in this work.
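The per-assignment confidences provided by Monte Carlo tracking can be illustrated by a simple frequency estimate over sampled lineage trees. This is a generic sketch under the assumption that each sampled tree is a set of (parent, child) assignment pairs; the thesis's samplers and Bayesian model are of course more involved.

```python
from collections import Counter

def assignment_confidences(sampled_trees):
    """Estimate a confidence for each parent->child assignment as the
    fraction of sampled lineage trees that contain it. Each tree is
    represented as a set of (parent, child) assignment pairs."""
    counts = Counter()
    for tree in sampled_trees:
        counts.update(tree)
    n = len(sampled_trees)
    return {assignment: c / n for assignment, c in counts.items()}

# Four hypothetical Monte Carlo samples over cells c1..c4 (toy data).
samples = [
    {("c1", "c2"), ("c1", "c3")},
    {("c1", "c2"), ("c4", "c3")},
    {("c1", "c2"), ("c1", "c3")},
    {("c1", "c2"), ("c1", "c3")},
]
conf = assignment_confidences(samples)
print(conf[("c1", "c2")])  # 1.0: present in every sample, no correction needed
print(conf[("c1", "c3")])  # 0.75: contested, worth a manual look
```

Low-confidence assignments are exactly the ones a human curator should inspect first when correcting a tree.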
8

Repo, T. (Tapio). "Modeling of structured 3-D environments from monocular image sequences." Doctoral thesis, University of Oulu, 2002. http://urn.fi/urn:isbn:9514268571.

Abstract:
The purpose of this research has been to show with applications that polyhedral scenes can be modeled in real time with a single video camera. Sometimes this can be done very efficiently without any special image processing hardware. The developed vision sensor estimates its three-dimensional position with respect to the environment and models the environment simultaneously. Estimates become recursively more accurate as objects are approached and observed from different viewpoints. The modeling process starts by extracting interesting tokens, such as lines and corners, from the first image. Those features are then tracked in subsequent image frames. Some previously taught patterns can also be used in tracking. Only a few features per image are extracted, so processing can be done at video frame rate. New features that appear can also be added to the environment structure. Kalman filtering is used in estimation. The parameters in motion estimation are location and orientation and their first derivatives. The environment is treated as a rigid object with respect to the camera. The environment structure consists of the 3-D coordinates of the tracked features. The initial model lacks depth information. Relative depth is obtained by exploiting facts such as that closer points move faster on the image plane than more distant ones during translational motion. Additional information is needed to obtain absolute coordinates. Special attention has been paid to modeling uncertainties. Measurements with high uncertainty get less weight when the motion and environment model is updated. The rigidity assumption is exploited by using thin-pencil shapes for the initial model structure uncertainties. By continuously observing motion uncertainties, the performance of the modeler can be monitored. In contrast to the usual solution, the estimates are kept in separate state vectors, which allows motion and 3-D structure to be estimated asynchronously.
In addition to having a more distributed solution, this technique provides an efficient failure detection mechanism. Several trackers can estimate motion simultaneously, and only those with the most confident estimates are allowed to update the common environment model. Tests showed that motion with six degrees of freedom can be estimated in an unknown environment. The 3-D structure of the environment is estimated simultaneously. The achieved accuracies were millimeters at a distance of 1-2 meters, when simple toy-scenes and more demanding industrial pallet scenes were used in tests. This is enough to manipulate objects when the modeler is used to offer visual feedback.
9

Hanson-Smith, Victor. "Error and Uncertainty in Computational Phylogenetics." Thesis, University of Oregon, 2011. http://hdl.handle.net/1794/12151.

Abstract:
The evolutionary history of protein families can be difficult to study because the necessary ancestral molecules are often unavailable for direct observation. As an alternative, the field of computational phylogenetics has developed statistical methods to infer the evolutionary relationships among extant molecular sequences and their ancestral sequences. Typically, the methods of computational phylogenetic inference and ancestral sequence reconstruction are combined with other non-computational techniques in a larger analysis pipeline to study the inferred forms and functions of ancient molecules. Two big problems surrounding this analysis pipeline are computational error and statistical uncertainty. In this dissertation, I use simulations and analysis of empirical systems to show that phylogenetic error can be reduced by using an alternative search heuristic. I then use similar methods to reveal the relationship between phylogenetic uncertainty and the accuracy of ancestral sequence reconstruction. Finally, I provide a case study of a molecular machine in yeast to demonstrate all stages of the analysis pipeline. This dissertation includes previously published co-authored material.
Committee in charge: John Conery, Chair; Daniel Lowd, Member; Sara Douglas, Member; Joseph W. Thornton, Outside Member.
10

Taylor, Josh Ellis. "The Christchurch earthquake sequence : government decision-making and confidence in the face of uncertainty." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/45214.

Abstract:
Natural disasters can create significant uncertainty for individuals and entire cities. This thesis examines the role of government decision-making and uncertainty in disaster recovery, focusing on a case study of post-earthquake Christchurch, New Zealand. Beginning in September 2010, Christchurch has been shaken by a devastating sequence of earthquakes, stretching over 18 months. The most severe event took place on February 22, 2011, taking the lives of 185 people and causing significant damage throughout the city. Building damage has forced the closure of portions of the Central Business District (CBD) for over 2 years as of July 2013, and over 7,000 residential properties have been purchased by the government due to land damage. The duration of the earthquake sequence, combined with the scale of damage, has created significant uncertainty for the city, specifically for the future of the CBD and the local property market. This thesis seeks to examine how government decision-making can incentivize a community of self-interested actors facing uncertainty to pull together, and create an outcome that benefits all of them. A conceptual framework is developed through which three key government decisions in the Christchurch case are analyzed in terms of how uncertainty has been managed. The three decisions are: 1) maintaining a Cordon around the CBD, 2) Establishing the Christchurch Central Development Unit to plan the rebuild of the CBD, and 3) Establishing a system of zoning to classify land damage for residential properties. A detailed description of the earthquake sequence and context is also provided. The primary research for this thesis was collected during 23 semi-structured key informant interviews conducted in New Zealand in May of 2012. Interviewees were selected with expertise in a range of different recovery issues, as well as different roles in the recovery, from decision-makers to those implementing the decisions, and those impacted. 
In conclusion, this thesis argues that uncertainty has been a major driver in government decision-making, and that those decisions have had a significant impact in terms of reducing uncertainty. In particular, decisions have addressed uncertainty in terms of the residential property market, and the future of the CBD.