Dissertations / Theses on the topic 'Temporal Algorithms'
Consult the top 50 dissertations / theses for your research on the topic 'Temporal Algorithms.'
Chen, Xiaodong. "Temporal data mining : algorithms, language and system for temporal association rules." Thesis, Manchester Metropolitan University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297977.
Full text
Chen, Feng. "Efficient Algorithms for Mining Large Spatio-Temporal Data." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/19220.
Full text
…growing interest. Recent advances in remote sensing technology mean that massive amounts of spatio-temporal data are being collected, and their volume keeps increasing at an ever faster pace. It has become critical to design efficient algorithms for identifying novel and meaningful patterns in massive spatio-temporal datasets. Unlike other data sources, such data exhibit significant space-time statistical dependence, and the i.i.d. assumption is no longer valid. Exact modeling of the space-time dependence leads to exponential growth in model complexity as the data size increases. This research focuses on the construction of efficient and effective approaches using approximate inference techniques for three main mining tasks: spatial outlier detection, robust spatio-temporal prediction, and novel applications to real-world problems.
Spatial novelty patterns, or spatial outliers, are data points whose characteristics differ markedly from those of their spatial neighbors. There are two major branches of spatial outlier detection methodology: global Kriging-based and local Laplacian-smoothing-based. The former requires exact modeling of the spatial dependence, which is computationally expensive; the latter requires the i.i.d. assumption for the smoothed observations, which is not statistically sound. Both approaches are restricted to numerical data, but in real-world applications we often face a variety of non-numerical data types, such as count, binary, nominal, and ordinal. In summary, the main research challenges are: 1) how much spatial dependence can be eliminated via Laplacian smoothing; 2) how to effectively and efficiently detect outliers in large numerical spatial datasets; 3) how to generalize numerical detection methods into a unified outlier detection framework suitable for large non-numerical datasets; 4) how to achieve accurate spatial prediction even when the training data has been contaminated by outliers; and 5) how to handle spatio-temporal data in the preceding problems.
To address the first and second challenges, we mathematically validated the effectiveness of Laplacian smoothing for eliminating spatial autocorrelation. This work provides fundamental support for existing Laplacian-smoothing-based methods. We also discovered a nontrivial side effect of Laplacian smoothing: it introduces additional spatial variation into the data through convolution effects. To capture this extra variability, we proposed a generalized local statistical model and designed two fast forward and backward outlier detection methods that achieve a better balance between computational efficiency and accuracy than most existing methods and are well suited to large numerical spatial datasets.
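As a concrete illustration of the local branch described above, a Laplacian-smoothing-style detector can be sketched in a few lines: each cell is reduced to its residual against the mean of its spatial neighbors, and residuals with a large z-score are flagged. This is a minimal sketch under assumed inputs (gridded numerical data, a 4-neighbourhood, a z-score cutoff), not the thesis's forward/backward algorithms.

```python
# Minimal sketch of local (Laplacian-smoothing-style) spatial outlier
# detection on a grid; the 4-neighbourhood and z-score threshold are
# illustrative assumptions, not the thesis's forward/backward algorithms.
import statistics

def spatial_outliers(grid, z_thresh=2.0):
    """Flag cells whose value deviates strongly from their neighbours' mean."""
    rows, cols = len(grid), len(grid[0])
    residuals = {}
    for i in range(rows):
        for j in range(cols):
            nbrs = [grid[i + di][j + dj]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= i + di < rows and 0 <= j + dj < cols]
            # residual after "smoothing away" the local spatial trend
            residuals[(i, j)] = grid[i][j] - statistics.mean(nbrs)
    mu = statistics.mean(residuals.values())
    sd = statistics.pstdev(residuals.values())
    return [cell for cell, r in residuals.items()
            if sd > 0 and abs(r - mu) / sd > z_thresh]

grid = [[1, 1, 1, 1],
        [1, 9, 1, 1],   # value at (1, 1) is an injected outlier
        [1, 1, 1, 1]]
print(spatial_outliers(grid))   # [(1, 1)]
```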
We addressed the third challenge by mapping non-numerical variables to latent numerical variables via a link function, such as the logit function used in logistic regression, and then using error-buffer artificial variables, which follow a Student-t distribution, to capture the large deviations caused by outliers. We proposed a unified statistical framework that integrates the advantages of the spatial generalized linear mixed model, the robust spatial linear model, reduced-rank dimension reduction, and Bayesian hierarchical modeling. A linear-time approximate inference algorithm was designed to infer the posterior distribution of the error-buffer artificial variables conditioned on the observations. We demonstrated that traditional numerical outlier detection methods can be applied directly to the estimated artificial variables for outlier detection. To the best of our knowledge, this is the first linear-time outlier detection algorithm that supports a variety of spatial attribute types, such as binary, count, ordinal, and nominal.
To address the fourth and fifth challenges, we proposed a robust version of the Spatio-Temporal Random Effects (STRE) model, namely the Robust STRE (R-STRE) model. The regular STRE model is a recently proposed statistical model for large spatio-temporal data with linear time complexity, but it is not well suited to non-Gaussian and contaminated datasets. This deficiency can be systematically addressed by increasing the robustness of the model, using heavy-tailed distributions, such as the Huber, Laplace, or Student-t distribution, to model the measurement error instead of the traditional Gaussian. However, the resulting R-STRE model becomes analytically intractable, and direct application of approximate inference techniques still has cubic time complexity. To address this computational challenge, we reformulated the prediction problem as a maximum a posteriori (MAP) problem with a non-smooth objective function, transformed it into an equivalent quadratic programming problem, and developed an efficient interior-point numerical algorithm with near-linear complexity. This work presents the first near-linear-time robust prediction approach for large spatio-temporal datasets in both the offline and online cases.
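The intuition behind swapping the Gaussian for a heavy-tailed error model can be shown in one dimension: a Huber-loss location estimate, computed here by iteratively reweighted least squares, barely moves when a gross outlier contaminates the data, while the least-squares mean is dragged far off. This is an illustrative sketch only, not the R-STRE model or its interior-point solver.

```python
# Sketch: robust location estimation with a Huber loss via IRLS, contrasted
# with the Gaussian (least-squares) mean on contaminated data.
# Illustrative only -- not the R-STRE model from the thesis.

def huber_location(xs, delta=1.0, iters=50):
    mu = sum(xs) / len(xs)                      # start from the plain mean
    for _ in range(iters):
        # Huber weights: 1 inside the delta band, delta/|r| outside it
        w = [1.0 if abs(x - mu) <= delta else delta / abs(x - mu) for x in xs]
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return mu

data = [0.1, -0.2, 0.05, 0.15, -0.1, 100.0]     # one gross outlier
print(round(sum(data) / len(data), 2))          # mean is dragged to 16.67
print(round(huber_location(data), 2))           # Huber estimate stays near 0
```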
Ph. D.
Civelek, Ferda N. (Ferda Nur). "Temporal Connectionist Expert Systems Using a Temporal Backpropagation Algorithm." Thesis, University of North Texas, 1993. https://digital.library.unt.edu/ark:/67531/metadc278824/.
Full text
Zhu, Linhong, Dong Guo, Junming Yin, Greg Ver Steeg, and Aram Galstyan. "Scalable temporal latent space inference for link prediction in dynamic social networks (extended abstract)." IEEE, 2017. http://hdl.handle.net/10150/626028.
Full text
Beaumont, Matthew. "Handling Over-Constrained Temporal Constraint Networks." Griffith University. School of Information Technology, 2004. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20041213.084512.
Full text
Beaumont, Matthew. "Handling Over-Constrained Temporal Constraint Networks." Thesis, Griffith University, 2004. http://hdl.handle.net/10072/366603.
Full text
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Institute for Integrated and Intelligent Systems
Full Text
Schiratti, Jean-Baptiste. "Methods and algorithms to learn spatio-temporal changes from longitudinal manifold-valued observations." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX009/document.
Full text
We propose a generic Bayesian mixed-effects model to estimate the temporal progression of a biological phenomenon from manifold-valued observations obtained at multiple time points for an individual or a group of individuals. The progression is modeled by continuous trajectories in the space of measurements, which is assumed to be a Riemannian manifold. The group-average trajectory is defined by the fixed effects of the model. To define the individual trajectories, we introduce the notion of "parallel variations" of a curve on a Riemannian manifold. For each individual, the individual trajectory is constructed by considering a parallel variation of the average trajectory and reparametrizing this parallel variation in time. The subject-specific spatiotemporal transformations, namely parallel variation and time reparametrization, are defined by the individual random effects and make it possible to quantify the changes in direction and pace at which the trajectories are followed. The framework of Riemannian geometry allows the model to be used with any kind of measurements with smooth constraints. A stochastic version of the Expectation-Maximization algorithm, the Monte Carlo Markov Chain Stochastic Approximation EM (MCMC-SAEM) algorithm, is used to produce maximum a posteriori estimates of the parameters. The use of the MCMC-SAEM together with a numerical scheme for the approximation of parallel transport is discussed. In addition, the method is validated on synthetic data and in high-dimensional settings. We also provide experimental results obtained on health data.
Montana, Felipe. "Sampling-based algorithms for motion planning with temporal logic specifications." Thesis, University of Sheffield, 2019. http://etheses.whiterose.ac.uk/22637/.
Full text
Kobakian, Stephanie Rose. "New algorithms for effectively visualising Australian spatio-temporal disease data." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/203908/1/Stephanie_Kobakian_Thesis.pdf.
Full textEriksson, Leif. "Solving Temporal CSPs via Enumeration and SAT Compilation." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162482.
Full textCarse, Brian. "Artificial evolution of fuzzy and temporal rule based systems." Thesis, University of the West of England, Bristol, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267551.
Full textChopra, Smriti. "Spatio-temporal multi-robot routing." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53383.
Full textHorton, Michael. "Algorithms for the Analysis of Spatio-Temporal Data from Team Sports." Thesis, The University of Sydney, 2018. http://hdl.handle.net/2123/17755.
Full textStockman, Peter. "Upper Bounds on the Time Complexity of Temporal CSPs." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129778.
Full textDémarez, Alice. "Investigating proteostasis and ageing of Escherichia coli using spatio-temporal algorithms." Paris 5, 2011. http://www.theses.fr/2011PA05T060.
Full text
An increase in the probability of death and a decrease in the reproduction rate (both contributing to a decrease in fitness) are signatures of ageing in living organisms. Using a morphological criterion, it was possible to demonstrate that Escherichia coli, a symmetrically dividing micro-organism, is subject to ageing. Ageing is studied using time-lapse movies of the growth of microcolonies emanating from single cells. This results in a huge number of images to analyse. The duration of semi- or non-automated analysis of these images seriously limits the rate at which data become available for statistical and biological analysis. Hence, image processing had to be automated to speed up the process and make studies of large datasets possible. To address this key issue, I developed a new approach based on one main idea: considering segmentation and tracking at the same time, whereby the spatio-temporal segmentation takes advantage of the large temporal redundancy of the data, contrary to existing methods relying on successive spatial segmentation and tracking. Specifically, I applied these image analysis tools to address the role of protein aggregation in bacterial ageing. We were able to show, among other things, that protein aggregation is associated with the decrease in growth rate that accompanies ageing in E. coli. In conclusion, in this work I developed new image analysis methodologies that improved the speed, accuracy and reliability of the results on the one hand, and shed light on the dynamics and effects of natural aggregates in bacterial ageing on the other.
Wheeler, Brandon Myles. "Evaluating time-series smoothing algorithms for multi-temporal land cover classification." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/74313.
Full textMaster of Science
Martirosyan, Anahit. "Towards Design of Lightweight Spatio-Temporal Context Algorithms for Wireless Sensor Networks." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/19857.
Full textWanchaleam, Pora. "Algorithms and structures for spatial and temporal equalisation in TDMA mobile communications." Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322203.
Full textNilsson, Mikael. "Efficient Temporal Reasoning with Uncertainty." Licentiate thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119409.
Full textYongkang, Hu, Zhang Qishan, Kou Yanhong, and Yang Dongkai. "STUDY ON GPS RECEIVER ALGORITHMS FOR SUPPRESSION OF NARROWBAND INTERFERENCE." International Foundation for Telemetering, 2007. http://hdl.handle.net/10150/604582.
Full text
Despite the inherent resistance to narrowband interference afforded by GPS spread-spectrum modulation, the low level of GPS signals makes them susceptible to narrowband interference. This paper discusses the application of a pre-correlation adaptive temporal filter for stationary and non-stationary narrowband interference suppression. Various adaptive algorithms are studied and implemented, and their convergence and tracking behavior are compared.
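A pre-correlation adaptive temporal filter of the kind discussed above can be sketched as an LMS adaptive line enhancer: the narrowband interferer is predictable from past samples, so an adaptive predictor learns it and the residual retains the noise-like wideband signal. All parameters below (tap count, step size, signal model) are illustrative assumptions, not values from the paper.

```python
# Sketch of a pre-correlation adaptive temporal filter (LMS adaptive line
# enhancer): the narrowband interferer is predictable from past samples and
# is subtracted out. Signal model and LMS parameters are assumptions.
import math, random

random.seed(1)
N, taps, mu, delay = 4000, 16, 0.002, 1
# wideband (noise-like) signal plus a strong narrowband sinusoidal interferer
x = [random.gauss(0, 0.1) + 2.0 * math.sin(0.3 * n) for n in range(N)]

w = [0.0] * taps
err = []
for n in range(taps + delay, N):
    past = x[n - delay - taps:n - delay]          # delayed reference window
    y = sum(wi * xi for wi, xi in zip(w, past))   # predicted narrowband part
    e = x[n] - y                                  # residual ~ wideband signal
    w = [wi + mu * e * xi for wi, xi in zip(w, past)]   # LMS weight update
    err.append(e)

# After convergence the residual power falls far below the interferer
# power (~2.0), approaching the wideband noise power (~0.01).
tail = err[-500:]
print(sum(e * e for e in tail) / len(tail))
```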
Capresi, Chiara. "Algorithms for identifying clusters in temporal graphs and realising distance matrices by unicyclic graphs." Doctoral thesis, Università di Siena, 2022. http://hdl.handle.net/11365/1211314.
Full textZhang, Jun. "Nearest neighbor queries in spatial and spatio-temporal databases /." View abstract or full-text, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20ZHANG.
Full textWang, Ziyang. "Next Generation Ultrashort-Pulse Retrieval Algorithm for Frequency-Resolved Optical Gating: The Inclusion of Random (Noise) and Nonrandom (Spatio-Temporal Pulse Distortions) Error." Diss., Available online, Georgia Institute of Technology, 2005, 2005. http://etd.gatech.edu/theses/available/etd-04122005-224257/unrestricted/wang%5Fziyang%5F200505%5Fphd.pdf.
Full textYou, Li, Committee Member ; Buck, John A., Committee Member ; Kvam, Paul, Committee Member ; Kennedy, Brian, Committee Member ; Trebino, Rick, Committee Chair. Vita. Includses bibliographical references.
Kleisarchaki, Sofia. "Analyse des différences dans le Big Data : Exploration, Explication, Évolution." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM055/document.
Full text
Variability in Big Data refers to data whose meaning changes continuously. For instance, data derived from social platforms and from monitoring applications exhibits great variability. This variability is essentially the result of changes in the underlying data distributions of attributes of interest, such as user opinions/ratings, computer network measurements, etc. Difference Analysis aims to study variability in Big Data. To achieve that goal, data scientists need: (a) measures to compare data in various dimensions, such as age for users or topic for network traffic, and (b) efficient algorithms to detect changes in massive data. In this thesis, we identify and study three novel analytical tasks to capture data variability: Difference Exploration, Difference Explanation and Difference Evolution. Difference Exploration is concerned with extracting the opinion of different user segments (e.g., on a movie rating website). We propose appropriate measures for comparing user opinions in the form of rating distributions, and efficient algorithms that, given an opinion of interest in the form of a rating histogram, discover agreeing and disagreeing populations. Difference Explanation tackles the question of providing a succinct explanation of the differences between two datasets of interest (e.g., the buying habits of two sets of customers). We propose scoring functions designed to rank explanations, and algorithms that guarantee explanation conciseness and informativeness. Finally, Difference Evolution tracks change in an input dataset over time and summarizes change at multiple time granularities. We propose a query-based approach that uses similarity measures to compare consecutive clusters over time. Our indexes and algorithms for Difference Evolution are designed to capture different data arrival rates (e.g., low, high) and different types of change (e.g., sudden, incremental).
The utility and scalability of all our algorithms rely on hierarchies inherent in the data (e.g., time, demographics). We ran extensive experiments on real and synthetic datasets to validate the usefulness of the three analytical tasks and the scalability of our algorithms. We show that Difference Exploration guides end-users and data scientists in uncovering the opinions of different user segments in a scalable way. Difference Explanation reveals the need to parsimoniously summarize the differences between two datasets and shows that parsimony can be achieved by exploiting hierarchy in data. Finally, our study of Difference Evolution provides strong evidence that a query-based approach is well suited to tracking change in datasets with varying arrival rates and at multiple time granularities. Similarly, we show that different clustering approaches can be used to capture different types of change.
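Comparing user opinions expressed as rating distributions, as in Difference Exploration above, requires a distance between histograms over ordered bins. One plausible choice (an illustrative stand-in, not necessarily the thesis's own measure) is the one-dimensional earth mover's distance, which for normalized histograms reduces to a cumulative-sum difference:

```python
# Sketch: comparing user-segment opinions as rating histograms with a 1-D
# earth mover's distance. An illustrative measure, not necessarily the one
# proposed in the thesis.

def emd_1d(h1, h2):
    """Earth mover's distance between two histograms over the same ordered
    bins (e.g. 1-5 star ratings); histograms are normalized internally."""
    n1, n2 = sum(h1), sum(h2)
    p = [c / n1 for c in h1]
    q = [c / n2 for c in h2]
    dist = cum = 0.0
    for pi, qi in zip(p, q):
        cum += pi - qi        # mass that must still be moved rightwards
        dist += abs(cum)
    return dist

fans    = [1, 2, 5, 20, 72]   # counts of 1..5 star ratings
critics = [40, 30, 15, 10, 5]
print(emd_1d(fans, fans))     # 0.0: identical opinions
print(emd_1d(fans, critics))  # large: strongly disagreeing populations
```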
Cordeiro, Thiago da Silva. "Controle das características geométricas de nanopartículas de prata através da conformação temporal de pulsos ultracurtos utilizando algorítimos genéticos." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/85/85134/tde-18102013-154842/.
Full text
This work used ultrashort laser pulses to modify, in a controlled way, the dimensional characteristics of silver nanoparticles in aqueous solution. To reach this goal, a genetic algorithm and microfluidic circuits were used. A pulse shaper was used to create different temporal profiles for the ultrashort pulses used to irradiate the silver nanoparticle solutions. These temporal profiles were shaped in real time, aiming to optimize the experimental result, quantified by the decrease in the average diameter of the nanoparticles in the irradiated solutions. Since each nanoparticle diameter minimization experiment demanded hundreds of measurements, it was made possible by a microfluidic circuit built especially for this work. This circuit enables the use of small sample quantities, leading to short irradiation and measurement intervals, besides evident sample savings. To make this work possible, a genetic algorithm was created and tested. This genetic algorithm was interfaced with several instruments, including an acousto-optic programmable dispersive filter that modifies the temporal characteristics of the ultrashort pulses by introducing spectral phases into them. The genetic algorithm and the acousto-optic programmable dispersive filter were used together in experiments to temporally shorten the ultrashort pulses from the laser system, generating pulse durations close to the Fourier-transform-limited ones. In addition, experiments were performed with the Labview-coded genetic algorithm to optimize its evolutionary process. The silver nanoparticle irradiation experiments showed that temporal shaping of the ultrashort pulses allowed control of the particle dimensions, decreasing their mean size by a factor of 2. These experiments establish nanoparticle irradiation by ultrashort pulses as an important technique for controlling nanoparticle characteristics.
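The closed-loop optimization described above follows the standard genetic-algorithm pattern: candidate spectral-phase vectors are scored by a measured fitness (here the mean nanoparticle diameter) and evolved by selection, crossover and mutation. The sketch below substitutes a hypothetical quadratic stand-in for the real measurement; population sizes and operators are illustrative assumptions.

```python
# Sketch of a genetic-algorithm loop of the kind described above. The
# quadratic "mean_diameter" function is a hypothetical stand-in for the
# real spectrometer measurement; all GA parameters are assumptions.
import random

random.seed(0)
GENES, POP, GENS = 4, 30, 60
target = [0.8, -0.3, 0.5, 0.1]           # hypothetical optimal phase terms

def mean_diameter(phases):               # fitness: lower is better
    return sum((p - t) ** 2 for p, t in zip(phases, target))

def mutate(ind):
    return [g + random.gauss(0, 0.1) if random.random() < 0.3 else g
            for g in ind]

def crossover(a, b):
    cut = random.randrange(1, GENES)     # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.uniform(-1, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=mean_diameter)
    parents = pop[:POP // 2]             # elitist truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = min(pop, key=mean_diameter)
print(mean_diameter(best))               # close to 0 after evolution
```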
Saraiva, Gustavo Francisco Rosalin. "Análise temporal da sinalização elétrica em plantas de soja submetidas a diferentes perturbações externas." Universidade do Oeste Paulista, 2017. http://bdtd.unoeste.br:8080/jspui/handle/jspui/1087.
Full text
Plants are complex organisms with dynamic processes that, owing to their sessile way of life, are influenced by environmental conditions at all times. Plants can accurately perceive and respond to different environmental stimuli in an intelligent way, but this requires a complex and efficient signaling system. Electrical signaling in plants has been known for a long time, but it has recently gained prominence with the growing understanding of plant physiological processes. The objective of this thesis was to test the following hypotheses: that temporal series of data obtained from the electrical signaling of plants carry non-random information, with a dynamic and oscillatory pattern; that these dynamics are affected by environmental stimuli; and that there are specific patterns in the responses to stimuli. In a controlled environment, stressful environmental stimuli were applied to soybean plants, and electrical signaling data were collected before and after the application of each stimulus. The time series obtained were analyzed using statistical and computational tools to determine the frequency spectrum (FFT), the autocorrelation of the values and the approximate entropy (ApEn). To verify the existence of patterns in the series, classification algorithms from the area of machine learning were used. The analysis of the time series showed that the electrical signals collected from plants exhibit oscillatory dynamics with a power-law frequency distribution. The results make it possible to differentiate, with great efficacy, series collected before and after the application of the stimuli. The PSD and autocorrelation analyses showed a great difference in the dynamics of the electrical signals before and after the application of the stimuli. The ApEn analysis showed a decrease in signal complexity after the application of the stimuli.
The classification algorithms reached significant accuracy in pattern detection and classification of the time series, showing that there are mathematical patterns in the different electrical responses of the plants. It is concluded that the time series of bioelectrical signals of plants contain discriminant information. The signals have oscillatory dynamics, and their properties are altered by environmental stimuli. There are also mathematical patterns embedded in the plant responses to specific stimuli.
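The approximate entropy (ApEn) statistic used above quantifies the regularity of a time series: it compares how often length-m templates repeat (within a tolerance r) against length-(m+1) templates. A minimal sketch, with the conventional but here-assumed choices m = 2 and a fixed tolerance:

```python
# Sketch of the Approximate Entropy (ApEn) statistic used to quantify signal
# complexity; the template length m and tolerance r are illustrative choices.
import math, random

def apen(series, m=2, r=0.2):
    """Approximate entropy: lower values indicate a more regular signal."""
    def phi(mm):
        n = len(series) - mm + 1
        templates = [series[i:i + mm] for i in range(n)]
        total = 0.0
        for t in templates:
            # count templates matching t within tolerance r (Chebyshev norm)
            matches = sum(1 for u in templates
                          if max(abs(a - b) for a, b in zip(t, u)) <= r)
            total += math.log(matches / n)
        return total / n
    return phi(m) - phi(m + 1)

regular = [1, 2] * 50                     # perfectly periodic signal
random.seed(3)
noisy = [random.uniform(1, 2) for _ in range(100)]
print(apen(regular) < apen(noisy))        # True: the periodic signal is simpler
```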
Matthews, Stephen. "Learning lost temporal fuzzy association rules." Thesis, De Montfort University, 2012. http://hdl.handle.net/2086/8257.
Full textPilourdault, Julien. "Scalable algorithms for monitoring activity traces." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM040/document.
Full text
In this thesis, we study scalable algorithms for monitoring activity traces. In several domains, monitoring is a key ability for extracting value from data and improving a system. This thesis aims to design algorithms for monitoring two kinds of activity traces. First, we investigate temporal data monitoring. We introduce a new kind of interval join that features scoring functions reflecting the degree of satisfaction of temporal predicates. We study these joins in the context of batch processing: we formalize the Ranked Temporal Join (RTJ), which combines collections of intervals and returns the k best results. We show how to exploit the nature of temporal predicates and the properties of their associated scored semantics to design TKIJ, an efficient query evaluation approach on a distributed Map-Reduce architecture. Our extensive experiments on synthetic and real datasets show that TKIJ outperforms state-of-the-art competitors and provides very good performance for n-ary RTJ queries on temporal data. We also propose a preliminary study extending our work on TKIJ to stream processing. Second, we investigate monitoring in crowdsourcing. We advocate the need to incorporate motivation in task assignment. We propose an adaptive approach that captures workers' motivation during task completion and uses it to revise task assignment accordingly across iterations. We study two variants of motivation-aware task assignment: Individual Task Assignment (Ita) and Holistic Task Assignment (Hta). First, we investigate Ita, where we assign tasks to workers individually, one worker at a time. We model Ita and show it is NP-hard. We design three task assignment strategies that exploit various objectives. Our live experiments study the impact of each strategy on overall performance. We find that different strategies prevail for different performance dimensions.
In particular, the strategy that assigns random and relevant tasks offers the best task throughput, and the strategy that assigns tasks that best match a worker's compromise between task diversity and task payment yields the best outcome quality. Our experiments confirm the need for adaptive motivation-aware task assignment. Then, we study Hta, where we assign tasks to all available workers holistically. We model Hta and show it is both NP-hard and MaxSNP-hard. We develop efficient approximation algorithms with provable guarantees. We conduct offline experiments to verify the efficiency of our algorithms. We also conduct online experiments with real workers and compare our approach with various non-adaptive assignment strategies. We find that our approach offers the best compromise between performance dimensions, thereby confirming the need for adaptability.
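The ranked temporal join idea described above can be sketched in miniature: pairs of intervals are scored by how well they satisfy an "overlaps" predicate (here, plain overlap length) and only the top-k pairs are returned. A naive nested loop stands in for the distributed TKIJ evaluation; the scoring function is an illustrative assumption.

```python
# Sketch of a ranked temporal join: interval pairs are scored by overlap
# length and the k best pairs are kept. A naive stand-in for TKIJ.
import heapq

def ranked_temporal_join(r, s, k):
    """r, s: lists of (start, end) intervals; returns the k best-scoring
    pairs as (score, interval_from_r, interval_from_s) tuples."""
    def overlap(a, b):
        return max(0, min(a[1], b[1]) - max(a[0], b[0]))
    scored = ((overlap(a, b), a, b) for a in r for b in s)
    # keep only pairs that satisfy the predicate at all (score > 0)
    return heapq.nlargest(k, (x for x in scored if x[0] > 0))

r = [(0, 10), (20, 25)]
s = [(5, 12), (8, 30), (40, 50)]
print(ranked_temporal_join(r, s, 2))   # the two pairs with overlap 5
```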
Tuck, Terry W. "Temporally Correct Algorithms for Transaction Concurrency Control in Distributed Databases." Thesis, University of North Texas, 2001. https://digital.library.unt.edu/ark:/67531/metadc2743/.
Full textJakkula, Vikramaditya Reddy. "Enhancing smart home resident activity prediction and anomaly detection using temporal relations." Online access for everyone, 2007. http://www.dissertations.wsu.edu/Thesis/Fall2007/v_jakkula_102207.pdf.
Full textRossi, Alfred Vincent III. "Temporal Clustering of Finite Metric Spaces and Spectral k-Clustering." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1500033042082458.
Full textPallikarakis, Christos A. "Development of temporal phase unwrapping algorithms for depth-resolved measurements using an electronically tuned Ti:Sa laser." Thesis, Loughborough University, 2017. https://dspace.lboro.ac.uk/2134/23918.
Full textRex, David Bruce. "Object Parallel Spatio-Temporal Analysis and Modeling System." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/1278.
Full textMalik, Zohaib Mansoor. "Design and implementation of temporal filtering and other data fusion algorithms to enhance the accuracy of a real time radio location tracking system." Thesis, Högskolan i Gävle, Avdelningen för elektronik, matematik och naturvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-13261.
Full textSalvaggio, Carl. "Automated segmentation of urban features from Landsat-Thematic Mapper imagery for use in pseudovariant feature temporal image normalization /." Online version of thesis, 1987. http://hdl.handle.net/1850/11371.
Full textLamus, Garcia Herreros Camilo. "Models and algorithms of brain connectivity, spatial sparsity, and temporal dynamics for the MEG/EEG inverse problem." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/103160.
Full textThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 123-131).
Magnetoencephalography (MEG) and electroencephalography (EEG) are noninvasive functional neuroimaging techniques that provide high temporal resolution recordings of brain activity, offering a unique means to study fast neural dynamics in humans. Localizing the sources of brain activity from MEG/EEG is an ill-posed inverse problem, with no unique solution in the absence of additional information. In this dissertation I analyze how solutions to the MEG/EEG inverse problem can be improved by including information about temporal dynamics of brain activity and connectivity within and among brain regions. The contributions of my thesis are: 1) I develop a dynamic algorithm for source localization that uses local connectivity information and Empirical Bayes estimates to improve source localization performance (Chapter 1). This result led me to investigate the underlying theoretical principles that might explain the performance improvement observed in simulations and by analyzing experimental data. In my analysis, 2) I demonstrate theoretically how the inclusion of local connectivity information and basic source dynamics can greatly increase the number of sources that can be recovered from MEG/EEG data (Chapter 2). Finally, in order to include long distance connectivity information, 3) I develop a fast multi-scale dynamic source estimation algorithm based on the Subspace Pursuit and Kalman Filter algorithms that incorporates brain connectivity information derived from diffusion MRI (Chapter 3). Overall, I illustrate how dynamic models informed by neurophysiology and neuroanatomy can be used alongside advanced statistical and signal processing methods to greatly improve MEG/EEG source localization. More broadly, this work provides an example of how advanced modeling and algorithm development can be used to address difficult problems in neuroscience and neuroimaging.
by Camilo Lamus Garcia Herreros.
Ph. D.
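The dynamic source-estimation idea running through this abstract (sources evolving under a linear dynamic model, observed through a lead field) can be sketched with a plain Kalman filter. This is a minimal illustration rather than the thesis's algorithm; the dynamics matrix A, lead-field matrix C and noise covariances are hypothetical stand-ins.

```python
import numpy as np

def kalman_source_estimate(Y, A, C, Q, R, x0, P0):
    """Sketch of dynamic source estimation: a linear-Gaussian state-space
    model x_t = A x_{t-1} + w_t (source dynamics/connectivity encoded in A)
    observed through y_t = C x_t + v_t (C plays the role of the MEG/EEG
    lead field). Returns the filtered source estimate at each time step."""
    x, P = x0, P0
    estimates = []
    for y in Y:
        # Predict: propagate sources through the assumed dynamics
        x = A @ x
        P = A @ P @ A.T + Q
        # Update: correct with the current sensor measurement
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (y - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

Incorporating diffusion-MRI-derived connectivity, as in Chapter 3, would amount to structuring A (and the subsequent sparse recovery) accordingly.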
Mülâyim, Mehmet Oğuz. "Anytime Case-Based Reasoning in Large-Scale Temporal Case Bases." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/671283.
El enfoque de la metodología Case-Based Reasoning (CBR) para la resolución de problemas de que "problemas similares tienen soluciones similares" ha demostrado ser bastante favorable para muchas aplicaciones de inteligencia artificial industrial. Sin embargo, las mismas ventajas de CBR dificultan su desempeño ya que las bases de casos (CB) crecen más que tamaños razonables. Buscar casos similares es costoso. Esta desventaja a menudo hace que CBR sea menos atractivo para los entornos de datos abundantes de hoy en día, mientras que, en realidad, cada vez hay más razones para beneficiarse de esta metodología eficaz. En consecuencia, el enfoque tradicional de la comunidad CBR de controlar el crecimiento de la CB para mantener el rendimiento está cambiando hacia la búsqueda de nuevas formas de tratar con datos abundantes. Como contribución a estos esfuerzos, esta tesis tiene como objetivo acelerar el CBR aprovechando tanto los espacios de problemas como los de soluciones en los CB de gran escala que se componen de casos relacionados temporalmente, como en el ejemplo de las historias clínicas electrónicas. Para las ocasiones en las que la aceleración que logramos para obtener resultados exactos aún no sea factible, dotamos al sistema CBR con capacidades de algoritmos anytime para proporcionar resultados aproximados con confianza en caso de interrupción. Aprovechar la temporalidad de los casos nos permite alcanzar ganancias superiores en el tiempo de ejecución para los CB de millones de casos. Los experimentos con conjuntos de datos del mundo real disponibles públicamente fomentan el uso continuo de CBR en dominios en los que CBR históricamente sobresale como la atención médica; y a su vez, no sufriendo, sino disfrutando del big data.
Case-Based Reasoning (CBR) methodology's approach to problem solving, that "similar problems have similar solutions", has proved quite favorable for many industrial artificial intelligence applications. However, CBR's very advantages hinder its performance as case bases (CBs) grow beyond moderate sizes, because searching for similar cases becomes expensive. This handicap often makes CBR less appealing for today's ubiquitous data environments while, actually, there is ever more reason to benefit from this effective methodology. Accordingly, the CBR community's traditional approach of controlling CB growth to maintain performance is shifting towards finding new ways to deal with abundant data. As a contribution to these efforts, this thesis aims to speed up CBR by leveraging both the problem and solution spaces in large-scale CBs that are composed of temporally related cases, as in the example of electronic health records. For the occasions when even the speed-up we achieve for exact results is not sufficient, we endow the CBR system with anytime algorithm capabilities to provide approximate results, with confidence, upon interruption. Exploiting the temporality of cases allows us to reach superior gains in execution time for CBs of millions of cases. Experiments with publicly available real-world datasets encourage the continued use of CBR in domains where it historically excels, like healthcare; and this time, not suffering from, but enjoying big data.
Universitat Autònoma de Barcelona. Programa de Doctorat en Informàtica
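The anytime retrieval capability described in the abstract can be sketched as a budgeted nearest-neighbor scan that, on interruption, returns the best cases found so far together with a confidence value. A minimal sketch, assuming a user-supplied similarity function; using the fraction of the case base examined as the confidence proxy is an illustrative simplification, not the thesis's confidence model.

```python
import time
import heapq

def anytime_retrieve(query, case_base, similarity, k=3, budget_s=1.0):
    """Sketch of anytime case retrieval: scan the case base, keep the k most
    similar cases seen so far, and stop when the time budget is exhausted.
    Returns (best cases, confidence), where confidence is the fraction of
    the case base examined so far."""
    deadline = time.monotonic() + budget_s
    heap = []  # min-heap of (similarity, index) for the best-so-far cases
    seen = 0
    for i, case in enumerate(case_base):
        sim = similarity(query, case)
        if len(heap) < k:
            heapq.heappush(heap, (sim, i))
        elif sim > heap[0][0]:
            heapq.heapreplace(heap, (sim, i))
        seen = i + 1
        if time.monotonic() >= deadline:
            break  # interrupted: return the approximate result
    confidence = seen / len(case_base)
    best = sorted(heap, reverse=True)
    return best, confidence
```

With a generous budget the scan completes and the result is exact (confidence 1.0); with a tight budget the caller still gets a ranked partial answer.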
Stojkovic, Ivan. "Functional Norm Regularization for Margin-Based Ranking on Temporal Data." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/522550.
Ph.D.
Quantifying the properties of interest is an important problem in many domains, e.g., assessing the condition of a patient, estimating the risk of an investment, or the relevance of a search result. However, the properties of interest are often latent and hard to assess directly, making it difficult to obtain the classification or regression labels needed to learn predictive models from observable features. In such cases, it is typically much easier to obtain a relative comparison of two instances, i.e., to assess which one is more intense with respect to the property of interest. One framework able to learn from this kind of supervision is the ranking SVM, and it forms the basis of our approach. Applications on biomedical datasets typically pose specific additional challenges. The first, and major, one is the limited number of data examples, due to expensive measuring technology and/or the infrequency of the conditions of interest. Such a limited number of examples makes both the identification of patterns/models and their validation less reliable. Repeated samples from the same subject are collected on multiple occasions over time, which breaks the i.i.d. sample assumption and introduces a dependency structure that needs to be taken into account appropriately. Also, feature vectors are high-dimensional, typically of much higher cardinality than the number of samples, making models less useful and their learning less efficient. The hypothesis of this dissertation is that functional norm regularization can help alleviate these challenges by improving the generalization abilities and/or learning efficiency of predictive models, here specifically of approaches based on the ranking SVM framework. The temporal nature of the data was addressed with a loss that fosters temporal smoothness of the functional mapping, accounting for the assumption that temporally proximate samples are more correlated.
The large number of feature variables was handled using the sparsity-inducing L1 norm, such that most features have zero effect in the learned functional mapping. The proposed sparse (temporal) ranking objective is convex but non-differentiable; therefore a smooth dual form is derived, taking the form of a quadratic function with box constraints, which allows efficient optimization. For the case where there are multiple similar tasks, a joint learning approach based on matrix norm regularization, using the trace norm L* and the sparse-row L21 norm, was also proposed. Alternating minimization with a proximal optimization algorithm was developed to solve this multi-task objective. The generalization potential of the proposed high-dimensional and multi-task ranking formulations was assessed in a series of evaluations on synthetically generated and real datasets. The high-dimensional approach was applied to disease severity score learning from gene expression data in human influenza cases and compared against several alternative approaches. It resulted in a scoring function with improved predictive performance, as measured by the fraction of correctly ordered testing pairs, and a set of selected features of high robustness according to three similarity measures. The multi-task approach was applied to three human viral infection problems and to learning exam scores in Math and English. The proposed formulation with mixed matrix norms was overall more accurate than formulations with single-norm regularization.
Temple University--Theses
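The sparse ranking objective the abstract describes, a pairwise hinge loss with an L1 penalty, can be illustrated with a small subgradient-descent sketch. This is a toy stand-in, not the dissertation's smooth dual box-constrained solver; the function name and hyperparameters are illustrative.

```python
import numpy as np

def rank_svm_l1(X, pairs, lam=0.1, lr=0.01, epochs=200):
    """Sketch of a sparse linear ranking model in the spirit of ranking SVM:
    each ordered pair (i, j) means "instance i should score higher than
    instance j", penalized by the hinge loss max(0, 1 - w.(x_i - x_j))
    plus an L1 penalty on w, minimized by plain subgradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i, j in pairs:
            d = X[i] - X[j]
            if 1.0 - w @ d > 0.0:       # hinge is active for this pair
                w += lr * d             # move to increase the margin
            w -= lr * lam * np.sign(w)  # L1 shrinkage toward sparsity
    return w
```

Scoring is then just `X @ w`; the L1 term drives uninformative coordinates of `w` to zero, mirroring the feature selection reported in the abstract.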
Brighi, Marco. "Human Activity Recognition: A Comparative Evaluation of Spatio-Temporal Descriptors." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19436/.
Son, Young Baek. "POC algorithms based on spectral remote sensing data and its temporal and spatial variability in the Gulf of Mexico." Texas A&M University, 2003. http://hdl.handle.net/1969.1/5965.
Meyer, Dominik Jakob [Verfasser], Klaus [Akademischer Betreuer] Diepold, Matthias [Gutachter] Althoff, and Klaus [Gutachter] Diepold. "Accelerated Gradient Algorithms for Robust Temporal Difference Learning / Dominik Jakob Meyer ; Gutachter: Matthias Althoff, Klaus Diepold ; Betreuer: Klaus Diepold." München : Universitätsbibliothek der TU München, 2021. http://d-nb.info/1237413281/34.
Chalup, Stephan Konrad. "Incremental learning with neural networks, evolutionary computation and reinforcement learning algorithms." Thesis, Queensland University of Technology, 2001.
Wedge, Daniel John. "Video sequence synchronization." University of Western Australia. School of Computer Science and Software Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0084.
Maquet, Nicolas. "New algorithms and data structures for the emptiness problem of alternating automata." Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209961.
One of the most successful program verification techniques is model checking, in which programs are typically abstracted by a finite-state machine. After this abstraction step, properties (typically in the form of some temporal logic formula) can be checked against the finite-state abstraction, with the help of automated tools. Alternating automata play an important role in this context, since many temporal logics on words and trees can be efficiently translated into those automata. This property allows for the reduction of model checking to automata-theoretic questions and is called the automata-theoretic approach to model checking. In this work, we provide three novel approaches for the analysis (emptiness checking) of alternating automata over finite and infinite words. First, we build on the successful framework of antichains to devise new algorithms for LTL satisfiability and model checking, using alternating automata. These algorithms combine antichains with reduced ordered binary decision diagrams in order to handle the exponentially large alphabets of the automata generated by the LTL translation. Second, we develop new abstraction and refinement algorithms for alternating automata, which combine the use of antichains with abstract interpretation, in order to handle ever larger instances of alternating automata. Finally, we define a new symbolic data structure, coined lattice-valued binary decision diagrams, that is particularly well-suited for the encoding of transition functions of alternating automata over symbolic alphabets. All of these works are supported with empirical evaluations that confirm the practical usefulness of our approaches. / Ce travail traite de l'étude de nouveaux algorithmes et structures de données dont l'usage est destiné à la vérification de programmes. Les ordinateurs sont de plus en plus présents dans notre vie quotidienne et, de plus en plus souvent, ils se voient confiés des tâches de nature critique pour la sécurité.
Ces systèmes sont caractérisés par le fait qu'une panne ou un bug (erreur en jargon informatique) peut avoir des effets potentiellement désastreux, que ce soit en pertes humaines, dégâts environnementaux, ou économiques. Pour ces systèmes critiques, les concepteurs de systèmes industriels prônent de plus en plus l'usage de techniques permettant d'obtenir une assurance formelle de correction.
Une des techniques de vérification de programmes les plus utilisées est le model checking, avec laquelle les programmes sont typiquement abstraits par une machine à états finis. Après cette phase d'abstraction, des propriétés (typiquement sous la forme d'une formule de logique temporelle) peuvent être vérifiées sur l'abstraction à espace d'états fini, à l'aide d'outils de vérification automatisés. Les automates alternants jouent un rôle important dans ce contexte, principalement parce que plusieurs logiques temporelles peuvent être traduites efficacement vers ces automates. Cette caractéristique des automates alternants permet de réduire le model checking des logiques temporelles à des questions sur les automates, ce qui est appelé l'approche par automates du model checking. Dans ce travail, nous étudions trois nouvelles approches pour l'analyse (le test du vide) des automates alternants sur mots finis et infinis. Premièrement, nous appliquons l'approche par antichaînes (utilisée précédemment avec succès pour l'analyse d'automates) pour obtenir de nouveaux algorithmes pour les problèmes de satisfaisabilité et du model checking de la logique temporelle linéaire, via les automates alternants. Ces algorithmes combinent l'approche par antichaînes avec l'usage des ROBDD, dans le but de gérer efficacement la combinatoire induite par la taille exponentielle des alphabets d'automates générés à partir de LTL. Deuxièmement, nous développons de nouveaux algorithmes d'abstraction et raffinement pour les automates alternants, combinant l'usage des antichaînes et de l'interprétation abstraite, dans le but de pouvoir traiter efficacement des automates de grande taille. Enfin, nous définissons une nouvelle structure de données, appelée LVBDD (Lattice-Valued Binary Decision Diagrams), qui permet un encodage efficace des fonctions de transition des automates alternants sur alphabets symboliques. Tous ces travaux ont fait l'objet d'implémentations et ont été validés expérimentalement.
Doctorat en Sciences
info:eu-repo/semantics/nonPublished
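The antichain framework mentioned in the abstract rests on one core operation: maintaining only the minimal sets under subset inclusion, since the non-minimal ones are subsumed. A minimal sketch of that operation, independent of any particular automaton encoding:

```python
def antichain_insert(antichain, new_set):
    """Sketch of the core antichain operation used in antichain-based
    emptiness checking: keep only subset-minimal sets. A candidate is added
    only if no stored set is a subset of it, and stored supersets of the
    candidate are pruned. The represented subset-closed family stays
    canonical while far fewer sets are stored."""
    for s in antichain:
        if s <= new_set:        # an existing set already subsumes it
            return antichain
    # drop stored sets that the new, smaller set subsumes
    antichain = [s for s in antichain if not new_set <= s]
    antichain.append(new_set)
    return antichain
```

In the thesis's setting the elements would be sets of automaton states explored during emptiness checking; here plain Python sets stand in for them.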
Santos, Ramon Nóbrega dos. "Uma abordagem temporal para identificação precoce de estudantes de graduação a distância com risco de evasão utilizando técnicas de mineração de dados." Universidade Federal da Paraíba, 2015. http://tede.biblioteca.ufpb.br:8080/handle/tede/7844.
Made available in DSpace on 2016-02-15T18:37:51Z (GMT). No. of bitstreams: 1 arquivototal.pdf: 2981698 bytes, checksum: 6dfa47590c870db030e7c1cbea499120 (MD5) Previous issue date: 2015-05-29
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Through the use of data mining techniques, most commonly classification algorithms, it is possible to implement predictive models that can identify, early on, a student at risk of dropout. Several studies used data obtained from a Virtual Learning Environment (VLE) to implement predictive performance models for a single course discipline. However, no study had been carried out aimed at developing a dropout prediction model, for longer-duration distance undergraduate programs, that integrates works performing VLE-based performance prediction, allowing an early prediction during the first semester and throughout the following semesters. Thus, this work proposes a dropout identification approach for distance undergraduate programs that uses rule-based classification, first to identify the disciplines and grade thresholds with the highest influence on dropout, so that VLE performance prediction models can be used to detect students at risk of dropout along the whole program. Experiments were carried out with four rule-based classification algorithms: JRip, OneR, PART and Ridor. The advantages of the proposed temporal approach were confirmed, since better predictive performance was obtained along the semesters and important rules were discovered for the early identification of students at risk of dropout. Among the applied algorithms, JRip and PART obtained the best predictive results, with an average accuracy of 81% at the end of the first semester. Furthermore, with the proposed partition methodology, in which the attributes of the predictive models are applied incrementally, it was possible to discover rules potentially useful for dropout prevention.
Com a utilização de técnicas de mineração de dados, mais comumente os algoritmos de Classificação, pode-se construir modelos preditivos capazes de identificar precocemente um estudante com risco de evasão. Diversos estudos utilizaram dados obtidos de um Ambiente Virtual de Aprendizagem (AVA) para a construção de modelos preditivos de desempenho em uma disciplina de um curso. Porém, nenhum estudo foi realizado com o objetivo de desenvolver um modelo de predição de evasão, para um curso de graduação a distância de maior duração, que integre trabalhos que fazem a predição de desempenho a partir de um AVA, possibilitando uma predição da evasão antecipada durante o primeiro semestre e ao longo dos demais semestres. Assim, este trabalho propõe uma abordagem de identificação de evasão em um curso de graduação a distância a partir da utilização da técnica de classificação baseada em regras para, primeiramente, identificar as disciplinas e os limites de notas que mais influenciam na evasão para que os modelos preditivos de desempenhos em um AVA possam ser utilizados para a predição da evasão de um aluno com risco de evasão ao longo de todo o curso de graduação a distância. Foram realizados experimentos com quatro algoritmos de classificação baseados em regras: o JRip, o OneR, o PART e o Ridor. A partir da utilização da abordagem temporal proposta foi possível comprovar sua vantagem, uma vez que foram obtidos melhores desempenhos preditivos ao longo dos semestres e foram descobertas importantes regras para a identificação precoce de um estudante com risco de evasão. Entre os algoritmos estudados, JRip e PART obtiveram os melhores desempenhos preditivos com acurácia média de 81% ao final do primeiro semestre. A partir da metodologia proposta de partições, na qual os atributos dos modelos preditivos são aplicados de forma incremental, foi possível a descoberta de regras potencialmente úteis para prevenir a evasão.
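Among the rule learners the study compares, OneR is the simplest: build, for every attribute, a rule mapping each observed value to its majority label, then keep the single attribute whose rule makes the fewest errors. A toy sketch of that idea; the feature encoding below is hypothetical, not the study's actual VLE attributes:

```python
from collections import Counter, defaultdict

def one_r(rows, labels):
    """Toy sketch of the OneR idea: for each feature, map every observed
    value to its majority label, then keep the single feature whose rule
    makes the fewest training errors."""
    n_features = len(rows[0])
    best = None
    for f in range(n_features):
        by_value = defaultdict(Counter)
        for row, y in zip(rows, labels):
            by_value[row[f]][y] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
        errors = sum(y != rule[row[f]] for row, y in zip(rows, labels))
        if best is None or errors < best[0]:
            best = (errors, f, rule)
    errors, feature, rule = best
    return feature, rule, errors
```

The learned rule directly yields the kind of interpretable "discipline and grade threshold" statements the abstract reports, which is why rule-based learners were chosen over black-box classifiers.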
Bundala, Daniel. "Algorithmic verification problems in automata-theoretic settings." Thesis, University of Oxford, 2014. https://ora.ox.ac.uk/objects/uuid:60b2d507-153f-4119-a888-56ccd47c3752.
Romanenko, Ilya. "Novel image processing algorithms and methods for improving their robustness and operational performance." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/16340.
Sichtig, Heike. "The SGE framework discovering spatio-temporal patterns in biological systems with spiking neural networks (S), a genetic algorithm (G) and expert knowledge (E) /." Diss., Online access via UMI:, 2009.
Includes bibliographical references.
Rodriguez, Vila Juan Jose Franklin. "Clusterização e visualização espaço-temporal de dados georreferenciados adaptando o algoritmo marker clusterer: um caso de uso em Curitiba." Universidade Tecnológica Federal do Paraná, 2016. http://repositorio.utfpr.edu.br/jspui/handle/1/2832.
Cinquenta por cento da população mundial vive em cidades, e a expectativa para 2050 é de que essa porcentagem chegue a 70% (WHO, 2014). As cidades consomem 75% dos recursos naturais e de energia do mundo, e geram 80% dos gases-estufa responsáveis pelo efeito estufa; considerando que, ocupam apenas 2% do território mundial (Signori, 2008). As cidades são também o palco de grande parte dos problemas ambientais globais (Gomes, 2009), e é no contexto urbano onde a dimensão social, econômica e ambiental convergem mais intensamente (European Commission, 2007). Esse crescimento populacional, tem influências sociais, econômicas e ambientais que representam um grande desafio para o desenvolvimento sustentável do planejamento urbano. Os conceitos de sistemas de informação geográfica, cidades inteligentes, dados abertos, algoritmos de clusterização e visualização de dados, permitem entender diversas questões em relação a atividade urbana nas cidades. Em particular, se torna importante a variável "onde": onde existe tráfego e quais são os horários mais frequentes; onde é necessário realizar modelagem de espera residencial, comercial e industrial de acordo com o crescimento populacional para o plano de uso da terra; quais são os tipos de negócios que mais cresceram em cada bairro e qual é a relação entre eles. Para este fim, esta dissertação apresenta um sistema web-mobile que permite entender o crescimento espaço-temporal e econômico dos alvarás de restaurantes dos bairros Centro, Batel e Tatuquara da cidade de Curitiba nas últimas três décadas (1980 até 2015), realizando clusterização e visualização de uma grande quantidade de dados abertos georreferenciados.
Em termos de resultados alcançados destacam-se: 1) capacidade de resolver problemas computacionais de sobreposição de pontos sobre um mapa, 2) capacidade de entender o crescimento econômico dos alvarás e qual é a relação entre as diversas categorias e entre os bairros, 3) tempo de execução inferior a 3 segundos para 99% das consultas espaciais executadas, 4) 80,8% dos usuários em fase de avaliação consideram que a solução proposta permite uma melhor identificação e visualização de dados georreferenciados, e 5) possibilita a integração de novas fontes e tipos de dados.
Fifty percent of the world's population lives in cities, and the expectation for 2050 is that this percentage reaches 70% (WHO, 2014). Cities consume 75% of the world's natural resources and energy and generate 80% of the gases responsible for the greenhouse effect, while occupying only 2% of the world's territory (Signori, 2008). Cities are also the scene of most global environmental problems (Gomes, 2009), and it is in the urban context where the social, economic and environmental dimensions converge most intensely (European Commission, 2007). This population growth has social, economic and environmental implications that represent a great challenge for the sustainable development of urban planning. The concepts of geographic information systems, smart cities, open data, clustering algorithms and data visualization allow us to understand several questions regarding urban activity in cities, especially the variable "where" things happen. For example: where there is traffic and at which times it is most frequent; where residential, commercial and industrial standby modeling according to population growth is needed for the land-use plan; which types of businesses grew the most in each neighborhood, and what the relationship between them is. For this purpose, this thesis presents a web-mobile system for understanding the spatio-temporal and economic growth of restaurant licenses in the Centro, Batel and Tatuquara districts of Curitiba over the last three decades (1980 to 2015), performing clustering and visualization of a large amount of open georeferenced data.
In terms of achieved results, we can highlight: 1) the ability to solve the computational problem of overlapping points representing businesses on a map; 2) the ability to understand the economic growth of restaurant licenses and the relationships between different categories and between districts; 3) execution times under 3 seconds for 99% of the spatial queries executed; 4) 80.8% of users in the evaluation phase considered that the proposed solution allows better identification and visualization of georeferenced data; and 5) support for the integration of new data sources and types.
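The marker-clustering idea the title refers to is, at heart, grid-based: points are snapped to zoom-dependent grid cells, and each cell is rendered as a single marker with a count, which is what resolves the overlapping-points problem. A minimal sketch of that idea; `grid_cluster` and `cell_deg` are illustrative names, not the system's actual API:

```python
from collections import defaultdict

def grid_cluster(points, cell_deg=0.01):
    """Sketch of grid-based map clustering in the Marker Clusterer style:
    snap each (lat, lon) point to a grid cell and merge all points in a
    cell into one cluster drawn at their centroid, with a count. cell_deg
    is the cell size in degrees (zoom-dependent in a real map viewer)."""
    cells = defaultdict(list)
    for lat, lon in points:
        key = (int(lat // cell_deg), int(lon // cell_deg))
        cells[key].append((lat, lon))
    clusters = []
    for pts in cells.values():
        lat_c = sum(p[0] for p in pts) / len(pts)
        lon_c = sum(p[1] for p in pts) / len(pts)
        clusters.append({"center": (lat_c, lon_c), "count": len(pts)})
    return clusters
```

Recomputing the clusters with a smaller `cell_deg` as the user zooms in reproduces the split-and-merge behavior of marker clustering on an interactive map.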
Chevallier, Juliette. "Statistical models and stochastic algorithms for the analysis of longitudinal Riemanian manifold valued data with multiple dynamic." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX059/document.
Beyond transversal studies, the temporal evolution of phenomena is a field of growing interest. For the purpose of understanding a phenomenon, it appears more suitable to compare the evolution of its markers over time than to do so at a given stage. The follow-up of neurodegenerative disorders is carried out via the monitoring of cognitive scores over time. The same applies to chemotherapy monitoring: rather than tumor aspect or size, oncologists assess that a given treatment is effective from the moment it results in a decrease of tumor volume. The study of longitudinal data is not restricted to medical applications and proves successful in various fields such as computer vision, automatic detection of facial emotions, social sciences, etc. Mixed effects models have proved their efficiency in the study of longitudinal data sets, especially for medical purposes. Recent works (Schiratti et al., 2015, 2017) allowed the study of complex data, such as anatomical data. The underlying idea is to model the temporal progression of a given phenomenon by continuous trajectories in a space of measurements, which is assumed to be a Riemannian manifold. Then, both a group-representative trajectory and inter-individual variability are estimated. However, these works assume a unidirectional dynamic and fail to encompass situations like multiple sclerosis or chemotherapy monitoring. Indeed, such diseases follow a chronic course, with phases of worsening, stabilization and improvement, inducing changes in the global dynamic. This thesis is devoted to the development of methodological tools and algorithms suited for the analysis of longitudinal data arising from phenomena that undergo multiple dynamics, and to their application to chemotherapy monitoring. We propose a nonlinear mixed effects model which allows estimating a representative piecewise-geodesic trajectory of the global progression, together with spatial and temporal inter-individual variability.
Particular attention is paid to the estimation of the correlation between the different phases of the evolution. This model provides a generic and coherent framework for studying longitudinal manifold-valued data. Estimation is formulated as a well-defined maximum a posteriori problem, which we prove to be consistent under mild assumptions. Numerically, due to the non-linearity of the proposed model, the estimation of the parameters is performed through a stochastic version of the EM algorithm, namely the Markov chain Monte Carlo stochastic approximation EM (MCMC-SAEM). The convergence of the SAEM algorithm toward local maxima of the observed likelihood has been proved and its numerical efficiency has been demonstrated. However, despite appealing features, the limit position of this algorithm can strongly depend on its starting position. To cope with this issue, we propose a new version of the SAEM in which we do not sample from the exact distribution in the expectation phase of the procedure. We first prove the convergence of this algorithm toward local maxima of the observed likelihood. Then, in the spirit of simulated annealing, we propose an instantiation of this general procedure that favors convergence toward global maxima: the tempering-SAEM.
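The SAEM scheme the abstract builds on replaces the E-step of EM by simulating the latent variables and smoothing the sufficient statistics with a decreasing step size. A minimal sketch on a toy model (a balanced two-component Gaussian mixture with known common variance), not the thesis's piecewise-geodesic model; the function name and schedule are illustrative:

```python
import numpy as np

def saem_two_means(y, sigma=1.0, iters=300, burn_in=100, seed=0):
    """Minimal SAEM sketch: estimate the two means of a balanced Gaussian
    mixture. The E-step is replaced by sampling the latent labels, and the
    sufficient statistics are smoothed with a decreasing step size gamma
    (the stochastic approximation step)."""
    rng = np.random.default_rng(seed)
    mu = np.array([y.min(), y.max()], dtype=float)  # crude initialization
    stats = None  # running sufficient statistics: per-component (sum, count)
    for k in range(iters):
        # S-step: sample labels from their current posterior probabilities
        logp = -0.5 * ((y[:, None] - mu[None, :]) / sigma) ** 2
        p1 = 1.0 / (1.0 + np.exp(logp[:, 0] - logp[:, 1]))
        z = rng.random(len(y)) < p1                 # True -> component 1
        s = np.array([[y[~z].sum(), (~z).sum()], [y[z].sum(), z.sum()]])
        # SA-step: gamma = 1 before burn-in (plain stochastic EM), then decay
        gamma = 1.0 if k < burn_in else 1.0 / (k - burn_in + 1)
        stats = s if stats is None else stats + gamma * (s - stats)
        # M-step: recompute the means from the smoothed statistics
        mu = stats[:, 0] / np.maximum(stats[:, 1], 1e-9)
    return mu
```

The tempering idea mentioned at the end of the abstract would modify the sampling distribution in the S-step over iterations; this sketch only shows the plain SAEM skeleton.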