Dissertations on the topic "Temporal Algorithms"

To see other types of publications on this topic, follow the link: Temporal Algorithms.

Consult the top 50 dissertations for your research on the topic "Temporal Algorithms".

Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these details are present in the metadata.

Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

Chen, Xiaodong. "Temporal data mining : algorithms, language and system for temporal association rules." Thesis, Manchester Metropolitan University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297977.

Full text available
Abstract:
Studies on data mining are being pursued in many different research areas, such as Machine Learning, Statistics, and Databases. The work presented in this thesis is based on the database perspective of data mining. The main focuses are on the temporal aspects of data mining problems, especially association rule discovery, and issues in the integration of data mining and database systems. Firstly, a theoretical framework for temporal data mining is proposed in this thesis. Within this framework, not only potential patterns but also temporal features associated with the patterns are expected to be discovered. Calendar time expressions are suggested to represent temporal features, and the minimum frequency of patterns is introduced as a new threshold in the model of temporal data mining. The framework also emphasises the necessary components to support temporal data mining tasks. As a specialisation of the proposed framework, the problem of mining temporal association rules is investigated. The methodology adopted in this thesis is to discover potential temporal rules by alternately applying special search techniques to various restricted problems in an interactive and iterative process. Three forms of interesting mining tasks for temporal association rules with certain constraints are identified. These tasks are the discovery of valid time periods of association rules, the discovery of periodicities of association rules, and the discovery of association rules with temporal features. The search techniques and algorithms for these individual tasks are developed and presented in this thesis. Finally, an integrated query and mining system (IQMS) is presented, covering the description of an interactive query and mining interface (IQMI) supplied by the IQMS system, the presentation of an SQL-like temporal mining language (TML) with the ability to express various data mining tasks for temporal association rules, and the suggestion of an IQMI-based interactive data mining process. The implementation of this system demonstrates an alternative approach to the integration of DBMS and data mining functions.
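
To make the "valid time periods" task concrete, here is a minimal Python sketch: the support of an association is computed separately in each calendar period, and the periods in which it clears a support threshold are kept. The data layout, function name and threshold are illustrative assumptions, not Chen's TML/IQMS interface.

```python
from collections import defaultdict

def valid_periods(transactions, itemset, min_support=0.4):
    """transactions: iterable of (period, set_of_items) pairs."""
    counts = defaultdict(lambda: [0, 0])          # period -> [hits, total]
    for period, items in transactions:
        counts[period][1] += 1
        if itemset <= items:                      # itemset contained in basket
            counts[period][0] += 1
    return [p for p, (hit, tot) in sorted(counts.items())
            if hit / tot >= min_support]

data = [("1999-01", {"bread", "milk"}), ("1999-01", {"bread"}),
        ("1999-02", {"bread", "milk"}), ("1999-02", {"milk"})]
print(valid_periods(data, {"bread", "milk"}))     # -> ['1999-01', '1999-02']
```
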
2

Chen, Feng. "Efficient Algorithms for Mining Large Spatio-Temporal Data." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/19220.

Full text available
Abstract:
Knowledge discovery on spatio-temporal datasets has attracted growing interest. Recent advances in remote sensing technology mean that massive amounts of spatio-temporal data are being collected, and their volume keeps increasing at an ever faster pace. It becomes critical to design efficient algorithms for identifying novel and meaningful patterns from massive spatio-temporal datasets. Unlike other data sources, these data exhibit significant space-time statistical dependence, and the i.i.d. assumption is no longer valid. Exact modeling of space-time dependence leads to exponential growth of model complexity as the data size increases. This research focuses on the construction of efficient and effective approaches using approximate inference techniques for three main mining tasks: spatial outlier detection, robust spatio-temporal prediction, and novel applications to real-world problems.

Spatial novelty patterns, or spatial outliers, are those data points whose characteristics are markedly different from those of their spatial neighbors. There are two major branches of spatial outlier detection methodologies, which can be either global Kriging based or local Laplacian smoothing based. The former approach requires the exact modeling of spatial dependence, which is computationally expensive; the latter requires the i.i.d. assumption for the smoothed observations, which is not statistically solid. These two approaches are constrained to numerical data, but in real-world applications we are often faced with a variety of non-numerical data types, such as count, binary, nominal, and ordinal. To summarize, the main research challenges are: 1) how much spatial dependence can be eliminated via Laplacian smoothing; 2) how to effectively and efficiently detect outliers in large numerical spatial datasets; 3) how to generalize numerical detection methods and develop a unified outlier detection framework suitable for large non-numerical datasets; 4) how to achieve accurate spatial prediction even when the training data has been contaminated by outliers; 5) how to deal with spatio-temporal data in the preceding problems.

To address the first and second challenges, we mathematically validated the effectiveness of Laplacian smoothing in eliminating spatial autocorrelation. This work provides fundamental support for existing Laplacian-smoothing-based methods. We also discovered a nontrivial side effect of Laplacian smoothing, which introduces additional spatial variation into the data due to convolution effects. To capture this extra variability, we proposed a generalized local statistical model and designed two fast forward and backward outlier detection methods that achieve a better balance between computational efficiency and accuracy than most existing methods, and are well suited to large numerical spatial datasets.
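
As a rough illustration of the local Laplacian-smoothing idea above (not the thesis's forward and backward algorithms), the following sketch compares each observation with the mean of its spatial neighbours and thresholds the standardized residuals.

```python
import numpy as np

def spatial_outliers(values, neighbours, z_thresh=3.0):
    """values: 1-D array; neighbours: list of neighbour-index lists per site."""
    resid = np.array([values[i] - np.mean(values[nbrs])
                      for i, nbrs in enumerate(neighbours)])
    z = (resid - resid.mean()) / resid.std()      # standardized residuals
    return np.flatnonzero(np.abs(z) > z_thresh)   # indices of flagged sites

vals = np.array([1.0, 1.1, 0.9, 5.0, 1.05])
nbrs = [[1, 2], [0, 2], [0, 1], [1, 2, 4], [2, 3]]
print(spatial_outliers(vals, nbrs, z_thresh=1.5))  # flags site 3
```
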

We addressed the third challenge by mapping non-numerical variables to latent numerical variables via a link function, such as the logit function used in logistic regression, and then utilizing error-buffer artificial variables, which follow a Student-t distribution, to capture the large deviations caused by outliers. We proposed a unified statistical framework which integrates the advantages of the spatial generalized linear mixed model, the robust spatial linear model, reduced-rank dimension reduction, and Bayesian hierarchical modeling. A linear-time approximate inference algorithm was designed to infer the posterior distribution of the error-buffer artificial variables conditioned on observations. We demonstrated that traditional numerical outlier detection methods can be applied directly to the estimated artificial variables for outlier detection. To the best of our knowledge, this is the first linear-time outlier detection algorithm that supports a variety of spatial attribute types, such as binary, count, ordinal, and nominal.

To address the fourth and fifth challenges, we proposed a robust version of the Spatio-Temporal Random Effects (STRE) model, namely the Robust STRE (R-STRE) model. The regular STRE model is a recently proposed statistical model for large spatio-temporal data that has linear time complexity but is not well suited to non-Gaussian and contaminated datasets. This deficiency can be systematically addressed by increasing the robustness of the model using heavy-tailed distributions, such as the Huber, Laplace, or Student-t distribution, to model the measurement error instead of the traditional Gaussian. However, the resulting R-STRE model becomes analytically intractable, and direct application of approximate inference techniques still has cubic time complexity. To address the computational challenge, we reformulated the prediction problem as a maximum a posteriori (MAP) problem with a non-smooth objective function, transformed it into an equivalent quadratic programming problem, and developed an efficient interior-point numerical algorithm with near-linear complexity. This work presents the first near-linear-time robust prediction approach for large spatio-temporal datasets in both offline and online cases.
Ph. D.
3

Civelek, Ferda N. (Ferda Nur). "Temporal Connectionist Expert Systems Using a Temporal Backpropagation Algorithm." Thesis, University of North Texas, 1993. https://digital.library.unt.edu/ark:/67531/metadc278824/.

Full text available
Abstract:
Representing time has been considered a general problem for artificial intelligence research for many years. More recently, the question of representing time has become increasingly important in representing human decision-making processes through connectionist expert systems. Because most human behaviors unfold over time, any attempt to represent expert performance without considering its temporal nature can often lead to incorrect results. A temporal feedforward neural network model that can be applied to a number of neural network application areas, including connectionist expert systems, is introduced. The model has a multi-layer structure, i.e. the number of layers is not limited, and it has the flexibility of defining output nodes in any layer, which is especially important for connectionist expert system applications. A temporal backpropagation algorithm which supports the model has been developed. The model, along with the temporal backpropagation algorithm, makes it practical to define a wide range of artificial neural network applications. An approach for decreasing the memory space used by the weight matrix is also introduced. The algorithm was tested using a medical connectionist expert system to show how best to describe not only a disease but also its entire course. The system was first trained using a pattern encoded from the expert system's knowledge base rules. A series of experiments was then carried out using the temporal model and the temporal backpropagation algorithm. The first series of experiments was done to determine whether the training process worked as predicted. In the second series, the weight matrix in the trained system was defined as a function of time intervals before presenting the system with the learned patterns. The results of the two experiments indicate that both approaches produce correct results; the only difference was that compressing the weight matrix required more training epochs. To measure the correctness of the results, the squared error was summed over all patterns to give a total sum of squares.
4

Zhu, Linhong, Dong Guo, Junming Yin, Steeg Greg Ver, and Aram Galstyan. "Scalable temporal latent space inference for link prediction in dynamic social networks (extended abstract)." IEEE, 2017. http://hdl.handle.net/10150/626028.

Full text available
Abstract:
Understanding and characterizing the processes driving social interactions is one of the fundamental problems in social network research. A particular instance of this problem, known as link prediction, has recently attracted considerable attention in various research communities. Link prediction has many important commercial applications, e.g., recommending friends in an online social network such as Facebook and suggesting interesting pins in a collection sharing network such as Pinterest. This work is focused on the temporal link prediction problem: given a sequence of graph snapshots G1, ..., Gt from time 1 to t, how do we predict links at future time t + 1? To perform link prediction in a network, one needs to construct models for link probabilities between pairs of nodes. A temporal latent space model is proposed that is built upon the latent homophily assumption and the temporal smoothness assumption. First, the proposed model naturally incorporates the well-known homophily effect (birds of a feather flock together): each dimension of the latent space characterizes an unobservable homogeneous attribute, and shared attributes tend to create a link in the network.
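
A minimal sketch of the two modelling assumptions named in the abstract: link scores grow with the inner product of nonnegative latent positions (homophily), and positions are penalized for drifting between snapshots (temporal smoothness). The quadratic loss and random data below are illustrative; the authors' actual inference algorithm is not reproduced.

```python
import numpy as np

def model_loss(Z, A, lam=1.0):
    """Z: (T, n, k) nonnegative latent positions; A: (T, n, n) snapshots."""
    fit = sum(np.linalg.norm(A[t] - Z[t] @ Z[t].T) ** 2 for t in range(len(A)))
    drift = sum(np.linalg.norm(Z[t] - Z[t - 1]) ** 2 for t in range(1, len(A)))
    return fit + lam * drift               # homophily fit + temporal smoothness

def link_scores(Z_last):
    return Z_last @ Z_last.T               # higher score = likelier at t + 1

T, n, k = 3, 5, 2
A = np.random.randint(0, 2, (T, n, n)).astype(float)
A = np.maximum(A, A.transpose(0, 2, 1))    # symmetric 0/1 snapshots
Z = np.abs(np.random.randn(T, n, k))       # nonnegative positions
print(model_loss(Z, A), link_scores(Z[-1]).round(2))
```
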
5

Beaumont, Matthew, and n/a. "Handling Over-Constrained Temporal Constraint Networks." Griffith University. School of Information Technology, 2004. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20041213.084512.

Full text available
Abstract:
Temporal reasoning has been an active research area for over twenty years, with most work focussing on either enhancing the efficiency of current temporal reasoning algorithms or enriching the existing algebras. However, there has been little research into handling over-constrained temporal problems except to recognise that a problem is over-constrained and then to terminate. As many real-world temporal reasoning problems are inherently over-constrained, particularly in the scheduling domain, there is a significant need for approaches that can handle over-constrained situations. In this thesis, we propose two backtracking algorithms to gain partial solutions to over-constrained temporal problems. We also propose a new representation, the end-point ordering model, to allow the use of local search algorithms for temporal reasoning. Using this model we propose a constraint weighting local search algorithm as well as tabu and random-restart algorithms to gain partial solutions to over-constrained temporal problems. Specifically, the contributions of this thesis are: the introduction and empirical evaluation of two backtracking algorithms to solve over-constrained temporal problems, closing a gap in current temporal reasoning research; the representation of temporal constraint networks using the end-point ordering model, developed because current representation models are not suited to local search; the development of a constraint weighting local search algorithm for under-constrained problems, since constraint weighting has proven efficient for many CSPs; an empirical evaluation of constraint weighting local search against traditional backtracking algorithms, which finds that in many cases constraint weighting has superior performance; the extension of our constraint weighting algorithm, together with tabu search and random-restart local search, to over-constrained temporal problems; and an empirical evaluation of all three local search algorithms against the two backtracking algorithms on over-constrained temporal reasoning problems, which finds local search to be considerably superior.
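
A minimal sketch of constraint-weighting local search in the general style the abstract evaluates, assuming time-point variables and constraints given as predicates. The end-point ordering model and the thesis's move strategy are not reproduced; on an over-constrained instance the loop simply returns its best-effort assignment.

```python
import random

def weighted_cost(assign, constraints, weights):
    return sum(w for (pred, _), w in zip(constraints, weights)
               if not pred(assign))

def constraint_weighting_search(variables, constraints, steps=5000):
    assign = {v: random.random() for v in variables}   # random time points
    weights = [1.0] * len(constraints)
    for _ in range(steps):
        violated = [i for i, (pred, _) in enumerate(constraints)
                    if not pred(assign)]
        if not violated:
            return assign                              # complete solution
        i = random.choice(violated)
        v = random.choice(constraints[i][1])
        candidate = {**assign, v: random.random()}     # move one time point
        if weighted_cost(candidate, constraints, weights) < \
           weighted_cost(assign, constraints, weights):
            assign = candidate
        else:
            weights[i] += 1.0                          # escape local minimum
    return assign                                      # partial solution

# Over-constrained example: x < y and y < x cannot both hold.
cons = [(lambda a: a["x"] < a["y"], ["x", "y"]),
        (lambda a: a["y"] < a["x"], ["x", "y"]),
        (lambda a: a["y"] < a["z"], ["y", "z"])]
print(constraint_weighting_search(["x", "y", "z"], cons))
```
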
6

Beaumont, Matthew. "Handling Over-Constrained Temporal Constraint Networks." Thesis, Griffith University, 2004. http://hdl.handle.net/10072/366603.

Full text available
Abstract:
Temporal reasoning has been an active research area for over twenty years, with most work focussing on either enhancing the efficiency of current temporal reasoning algorithms or enriching the existing algebras. However, there has been little research into handling over-constrained temporal problems except to recognise that a problem is over-constrained and then to terminate. As many real-world temporal reasoning problems are inherently over-constrained, particularly in the scheduling domain, there is a significant need for approaches that can handle over-constrained situations. In this thesis, we propose two backtracking algorithms to gain partial solutions to over-constrained temporal problems. We also propose a new representation, the end-point ordering model, to allow the use of local search algorithms for temporal reasoning. Using this model we propose a constraint weighting local search algorithm as well as tabu and random-restart algorithms to gain partial solutions to over-constrained temporal problems. Specifically, the contributions of this thesis are: the introduction and empirical evaluation of two backtracking algorithms to solve over-constrained temporal problems, closing a gap in current temporal reasoning research; the representation of temporal constraint networks using the end-point ordering model, developed because current representation models are not suited to local search; the development of a constraint weighting local search algorithm for under-constrained problems, since constraint weighting has proven efficient for many CSPs; an empirical evaluation of constraint weighting local search against traditional backtracking algorithms, which finds that in many cases constraint weighting has superior performance; the extension of our constraint weighting algorithm, together with tabu search and random-restart local search, to over-constrained temporal problems; and an empirical evaluation of all three local search algorithms against the two backtracking algorithms on over-constrained temporal reasoning problems, which finds local search to be considerably superior.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Institute for Integrated and Intelligent Systems
Full Text
7

Schiratti, Jean-Baptiste. "Methods and algorithms to learn spatio-temporal changes from longitudinal manifold-valued observations." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX009/document.

Full text available
Abstract:
We propose a generic Bayesian mixed-effects model to estimate the temporal progression of a biological phenomenon from manifold-valued observations obtained at multiple time points for an individual or a group of individuals. The progression is modeled by continuous trajectories in the space of measurements, which is assumed to be a Riemannian manifold. The group-average trajectory is defined by the fixed effects of the model. To define the individual trajectories, we introduce the notion of "parallel variations" of a curve on a Riemannian manifold. For each individual, the individual trajectory is constructed by considering a parallel variation of the average trajectory and reparametrizing this parallel in time. The subject-specific spatio-temporal transformations, namely parallel variation and time reparametrization, are defined by the individual random effects and allow us to quantify the changes in direction and pace at which the trajectories are followed. The framework of Riemannian geometry allows the model to be used with any kind of measurements with smooth constraints. A stochastic version of the Expectation-Maximization algorithm, the Monte Carlo Markov Chains Stochastic Approximation EM algorithm (MCMC-SAEM), is used to produce maximum a posteriori estimates of the parameters. The use of the MCMC-SAEM together with a numerical scheme for the approximation of parallel transport is discussed. In addition, the method is validated on synthetic data and in high-dimensional settings. We also provide experimental results obtained on health data.
8

Montana, Felipe. "Sampling-based algorithms for motion planning with temporal logic specifications." Thesis, University of Sheffield, 2019. http://etheses.whiterose.ac.uk/22637/.

Full text available
9

Kobakian, Stephanie Rose. "New algorithms for effectively visualising Australian spatio-temporal disease data." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/203908/1/Stephanie_Kobakian_Thesis.pdf.

Full text available
Abstract:
This thesis contributes to improvements in effectively communicating population-related cancer distributions and the associated burden of cancer on Australian communities. It presents a new algorithm for creating an alternative map display of tessellating hexagons. Alternative map displays can emphasise statistics in countries that contain densely populated cities. The algorithm is accompanied by a software implementation that automates the choice of one hexagon to represent each geographic unit, ensuring the statistic for each is equitably presented. The case study comparing a traditional choropleth map to the alternative hexagon tile map contributes to a growing field of visual inference studies.
10

Eriksson, Leif. "Solving Temporal CSPs via Enumeration and SAT Compilation." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162482.

Full text available
Abstract:
The constraint satisfaction problem (CSP) is a powerful framework used in theoretical computer science for formulating a multitude of problems. The CSP over a constraint language Γ (CSP(Γ)) is the decision problem of verifying whether a set of constraints based on the relations in Γ admits a satisfying assignment or not. Temporal CSPs are a special subclass of CSPs frequently encountered in AI. Here, the relations are first-order definable in the structure (Q;<), i.e. the rationals with the usual order. These problems have previously often been solved by either enumeration or SAT compilation. We study a restriction of temporal CSPs where the constraint language is limited to logical disjunctions of <-, =-, ≠- and ≤-relations, and where each constraint contains at most k such basic relations (CSP({<,=,≠,≤}∨k)). Every temporal CSP with a finite constraint language Γ is polynomial-time reducible to CSP({<,=,≠,≤}∨k), where k depends only on Γ. As this reduction does not increase the number of variables, the time complexity of CSP(Γ) is never worse than that of CSP({<,=,≠,≤}∨k), which makes the complexity of CSP({<,=,≠,≤}∨k) interesting to study. We develop algorithms combining enumeration and SAT compilation to solve CSP({<,=,≠,≤}∨k), and study the asymptotic behaviour of these algorithms for different classes. Our results show that all finite constraint languages Γ first-order definable over (Q;<) are solvable in O*(((1/(eln(2))-ϵk)n)^n) time for some ϵk>0 dependent on Γ. This is strictly better than O*((n/(eln(2)))^n), i.e. O*((0.5307n)^n), achieved by enumeration algorithms. Some examples of upper bounds on time complexity achieved in the thesis are CSP({<}∨2) in O*((0.1839n)^n) time, CSP({<,=,≤}∨2) in O*((0.2654n)^n) time, CSP({<,=,≠}∨3) in O*((0.4725n)^n) time and CSP({<,=,≠,≤}∨3) in O*((0.5067n)^n) time. For CSP({<}∨2) this should be compared to the bound O*((0.3679n)^n) from previously known enumeration algorithms.
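
For contrast with the bounds above, a minimal sketch of the plain enumeration baseline: rank vectors over the variables (which redundantly cover every weak ordering) are tested against disjunctive constraints built from <-, =-, ≠- and ≤-literals. Purely illustrative; the thesis's enumeration/SAT-compilation hybrid is far more refined.

```python
from itertools import product
import operator

REL = {"<": operator.lt, "=": operator.eq,
       "!=": operator.ne, "<=": operator.le}   # ASCII stand-ins for =, ≠, ≤

def solve(n_vars, clauses):
    """clauses: list of disjunctions, each a list of (i, rel, j) literals."""
    for ranks in product(range(n_vars), repeat=n_vars):
        if all(any(REL[r](ranks[i], ranks[j]) for i, r, j in clause)
               for clause in clauses):
            return ranks                        # a satisfying weak ordering
    return None

# (x0 < x1 or x0 = x2) and (x1 <= x2):
print(solve(3, [[(0, "<", 1), (0, "=", 2)], [(1, "<=", 2)]]))
```
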
11

Carse, Brian. "Artificial evolution of fuzzy and temporal rule based systems." Thesis, University of the West of England, Bristol, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267551.

Full text available
12

Chopra, Smriti. "Spatio-temporal multi-robot routing." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53383.

Full text available
Abstract:
We analyze spatio-temporal routing under various constraints specific to multi-robot applications. Spatio-temporal routing requires multiple robots to visit spatial locations at specified time instants, while optimizing certain criteria like the total distance traveled, or the total energy consumed. Such a spatio-temporal concept is intuitively demonstrable through music (e.g. a musician routes multiple fingers to play a series of notes on an instrument at specified time instants). As such, we showcase much of our work on routing through this medium. Particular to robotic applications, we analyze constraints like maximum velocities that the robots cannot exceed, and information-exchange networks that must remain connected. Furthermore, we consider a notion of heterogeneity where robots and spatial locations are associated with multiple skills, and a robot can visit a location only if it has at least one skill in common with the skill set of that location. To extend the scope of our work, we analyze spatio-temporal routing in the context of a distributed framework, and a dynamic environment.
13

Horton, Michael. "Algorithms for the Analysis of Spatio-Temporal Data from Team Sports." Thesis, The University of Sydney, 2018. http://hdl.handle.net/2123/17755.

Full text available
Abstract:
Modern object tracking systems are able to simultaneously record trajectories—sequences of time-stamped location points—for large numbers of objects with high frequency and accuracy. The availability of trajectory datasets has resulted in a consequent demand for algorithms and tools to extract information from these data. In this thesis, we present several contributions intended to do this, and in particular, to extract information from trajectories tracking football (soccer) players during matches. Football player trajectories have particular properties that both facilitate and present challenges for the algorithmic approaches to information extraction. The key property that we look to exploit is that the movement of the players reveals information about their objectives through cooperative and adversarial coordinated behaviour, and this, in turn, reveals the tactics and strategies employed to achieve the objectives. While the approaches presented here naturally deal with the application-specific properties of football player trajectories, they also apply to other domains where objects are tracked, for example behavioural ecology, traffic and urban planning.
14

Stockman, Peter. "Upper Bounds on the Time Complexity of Temporal CSPs." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129778.

Full text available
Abstract:
The temporal constraint satisfaction problem (CSP) offers a formalized way to reason about the order in which tasks should be accomplished. With it we can model a wide range of specific problems, from scheduling to AI. In this thesis we present two algorithms, Algorithm A and Algorithm B, for solving temporal CSPs, focused on improving the worst-case time complexity. The first algorithm solves temporal CSPs by an exhaustive search of all weak orderings; its running time is within a polynomial factor of the corresponding Ordered Bell Number. We also show that it can solve CSPs in Allen's algebra within a polynomial factor of the corresponding number of relations between intervals on a line. The second algorithm also solves temporal CSPs, but for constraints with a bounded number of disjuncts: it assumes some order and then performs a backtracking search guided by the constraints, yielding a better worst-case bound. Finally, we show that this also improves the time complexity of CSPs in Allen's algebra.
15

Démarez, Alice. "Investigating proteostasis and ageing of Escherichia coli using spatio-temporal algorithms." Paris 5, 2011. http://www.theses.fr/2011PA05T060.

Full text available
Abstract:
An increase in the probability of death and a decrease in the reproduction rate (both amounting to a decrease in fitness) are signatures of ageing in living organisms. Using a morphological criterion, it was possible to demonstrate that Escherichia coli, a symmetrically dividing micro-organism, is subject to ageing. Ageing is studied using time-lapse movies of the growth of microcolonies emanating from single cells, which results in a huge amount of images to analyse. The duration of semi- or non-automated analysis of those images places a serious limit on the rate of data available for statistical and biological analysis. Hence, the processing of images had to be automated to speed up the process and make the study of large datasets possible. To address this key issue, I developed a new approach based on one main idea: considering segmentation and tracking at the same time, whereby the spatio-temporal segmentation takes advantage of the large temporal redundancy of the data, contrary to existing methods relying on successive spatial segmentation and tracking. Specifically, I applied the image analysis tools to address the role of protein aggregation in bacterial ageing. We were able to show, among other things, that protein aggregation is associated with the decrease of growth rate associated with ageing in E. coli. In conclusion, in this work I developed new image analysis methodologies that improved the speed, accuracy and reliability of the results on the one hand, and shed light on the dynamics and effects of natural aggregates in bacterial ageing on the other.
16

Wheeler, Brandon Myles. "Evaluating time-series smoothing algorithms for multi-temporal land cover classification." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/74313.

Full text available
Abstract:
In this study we applied the asymmetric Gaussian, double-logistic, and Savitzky-Golay filters to MODIS time-series NDVI data to compare the capability of smoothing algorithms for noise reduction, with the aims of improving land cover classification in the Great Lakes Basin and providing groundwork to support cyanobacteria and cyanotoxin monitoring efforts. We used inter-class separability and intra-class variability, at varying levels of pixel homogeneity, to evaluate the effectiveness of the three smoothing algorithms. Based on these initial tests, the algorithm which returned the best results was used to analyze how image stratification by ecoregion can affect filter performance. MODIS 16-day 250 m NDVI imagery of the Great Lakes Basin from 2001-2013 was used in conjunction with National Land Cover Database (NLCD) 2006 and 2011 data, and Cropland Data Layers (CDL) from 2008 to 2013, to conduct these evaluations. Inter-class separability was measured by Jeffries-Matusita (JM) distances between selected land cover classes (both general classes and specific crops), and intra-class variability was measured by calculating simple Euclidean distance for samples within a land cover class. Within the study area, it was found that the application of a smoothing algorithm can significantly reduce image noise, improving both inter-class separability and intra-class variability when compared to the raw data. Of the three filters examined, the asymmetric Gaussian filter consistently returned the highest values of inter-class separability, while all three filters performed very similarly for within-class variability. The ecoregion analysis based on the asymmetric Gaussian dataset indicated that the scale of the study area can heavily impact within-class variability. The criteria we established have potential for furthering our understanding of the strengths and weaknesses of different smoothing algorithms, thereby improving pre-processing decisions for land cover classification using time-series data.
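
As a pointer to one of the three filters compared here, a minimal sketch applying SciPy's Savitzky-Golay filter to a noisy NDVI-like annual curve; the window length and polynomial order are illustrative choices, not the thesis's calibrated settings.

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0, 2 * np.pi, 23)          # 23 sixteen-day composites, one year
ndvi = 0.5 + 0.3 * np.sin(t) + np.random.normal(0, 0.05, t.size)  # noisy signal
smoothed = savgol_filter(ndvi, window_length=7, polyorder=2)
print(np.round(smoothed[:5], 3))
```
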
Master of Science
17

Martirosyan, Anahit. "Towards Design of Lightweight Spatio-Temporal Context Algorithms for Wireless Sensor Networks." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/19857.

Full text available
Abstract:
Context represents any knowledge obtained from Wireless Sensor Networks (WSNs) about the object being monitored (such as the time and location of sensed events). Time and location are important constituents of context, as the information about events sensed in WSNs is comprehensive only when it includes spatio-temporal knowledge. In this thesis, we first concentrate on the development of a suite of lightweight algorithms for temporal event ordering and time synchronization, as well as localization, in WSNs. We then propose an energy-efficient clustering routing protocol for WSNs that is used for message delivery in the former algorithm. The two problems of temporal event ordering and synchronization are dealt with together, as both are concerned with preserving the temporal relationships of events in WSNs. The messages needed for synchronization are piggybacked onto the messages exchanged in the underlying algorithms. The synchronization algorithm is tailored to the clustered topology in order to reduce the overhead of keeping WSNs synchronized. The proposed localization algorithm aims to lower the overhead of DV-hop-based algorithms by reducing the number of floods in the initial position estimation phase. It also randomizes the iterative refinement phase to overcome the synchronicity of DV-hop-based algorithms. Position estimates with higher confidences are emphasized to reduce the impact of erroneous estimates on neighbouring nodes. The proposed clustering routing protocol employs nearest-neighbour nodes for inter-cluster communication and provides Quality of Service by forwarding high-priority messages via the paths with the least cost; it is also extended to the multiple-Sink scenario. The suite of algorithms proposed in this thesis provides the necessary tools for supplying spatio-temporal context to context-aware WSNs. The algorithms are lightweight, as they aim to satisfy WSN requirements primarily in terms of energy efficiency, low latency and fault tolerance. This makes them suitable for emergency response applications and ubiquitous computing.
18

Wanchaleam, Pora. "Algorithms and structures for spatial and temporal equalisation in TDMA mobile communications." Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322203.

Full text available
19

Nilsson, Mikael. "Efficient Temporal Reasoning with Uncertainty." Licentiate thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119409.

Full text available
Abstract:
Automated Planning is an active area within Artificial Intelligence. With the help of computers we can quickly find good plans in complicated problem domains, such as planning for search and rescue after a natural disaster. When planning in realistic domains the exact duration of an action generally cannot be predicted in advance. Temporal planning therefore tends to use upper bounds on durations, with the explicit or implicit assumption that if an action happens to be executed more quickly, the plan will still succeed. However, this assumption is often false. If we finish cooking too early, the dinner will be cold before everyone is at home and can eat. Simple Temporal Networks with Uncertainty (STNUs) allow us to model such situations. An STNU-based planner must verify that the temporal problems it generates are executable, which is captured by the property of dynamic controllability (DC). If a plan is not dynamically controllable, adding actions cannot restore controllability. Therefore a planner should verify after each action addition whether the plan remains DC, and if not, backtrack. Verifying dynamic controllability of a full STNU is computationally intensive. Therefore, incremental DC verification algorithms are needed. We start by discussing two existing algorithms relevant to the thesis. These are the very first DC verification algorithm, called MMV (by Morris, Muscettola and Vidal), and the incremental DC verification algorithm called FastIDC, which is based on MMV. We then show that FastIDC is not sound, sometimes labeling networks as dynamically controllable when they are not. We analyze the algorithm to pinpoint the cause and show how the algorithm can be modified to correctly and efficiently detect uncontrollable networks. In the next part we use insights from this work to re-analyze the MMV algorithm. This algorithm is pseudo-polynomial and was later subsumed by first an n^5 algorithm and then an n^4 algorithm. We show that the basic techniques used by MMV can in fact be used to create an n^4 algorithm for verifying dynamic controllability, with a new termination criterion based on a deeper analysis of MMV. This means that there is now a comparatively easy way of implementing a highly efficient dynamic controllability verification algorithm. From a theoretical viewpoint, understanding MMV is important since it acts as a building block for all subsequent algorithms that verify dynamic controllability. In our analysis we also discuss a change in MMV which reduces the amount of regression needed in the network substantially. In the final part of the thesis we show that the FastIDC method can result in traversing part of a temporal network multiple times, with constraints slowly tightening towards their final values. As a result of our analysis we then present a new algorithm with an improved traversal strategy that avoids this behavior. The new algorithm, EfficientIDC, has a time complexity which is lower than that of FastIDC. We prove that it is sound and complete.
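
Checking dynamic controllability of an STNU is considerably more involved than checking the consistency of an ordinary STN, but the latter is the standard background machinery: binary difference constraints become weighted edges, and Bellman-Ford detects a negative cycle. This sketch shows only that background check, not the MMV, FastIDC or EfficientIDC algorithms.

```python
def stn_consistent(n, edges):
    """edges: list of (u, v, w) meaning t_v - t_u <= w; nodes are 0..n-1."""
    dist = [0] * n                         # as if a 0-weight source reached all
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return all(dist[u] + w >= dist[v] for u, v, w in edges)  # no negative cycle

# t1 - t0 in [2, 5]  ==>  t1 - t0 <= 5 and t0 - t1 <= -2
print(stn_consistent(2, [(0, 1, 5), (1, 0, -2)]))  # True: consistent
```
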
20

Yongkang, Hu, Zhang Qishan, Kou Yanhong, and Yang Dongkai. "STUDY ON GPS RECEIVER ALGORITHMS FOR SUPPRESSION OF NARROWBAND INTERFERENCE." International Foundation for Telemetering, 2007. http://hdl.handle.net/10150/604582.

Full text available
Abstract:
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Despite the inherent resistance to narrowband signal interference afforded by GPS spread spectrum modulation, the low level of GPS signals makes them susceptible to narrowband interference. This paper discusses the application of a pre-correlation adaptive temporal filter for stationary and nonstationary narrowband interference suppression. Various adaptive algorithms are studied and implemented. Comparison of the convergence and tracking behavior of various algorithms is made.
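
A minimal sketch of the pre-correlation temporal filtering idea: narrowband interference is predictable from recent samples, so an adaptive one-step predictor estimates it, and the prediction error retains the wideband spread-spectrum component. The LMS update, the filter order and the step size are illustrative assumptions; the paper compares several adaptive algorithms.

```python
import numpy as np

def lms_suppress(x, order=16, mu=0.01):
    """Return the prediction error of an LMS one-step linear predictor."""
    w = np.zeros(order)
    out = np.zeros_like(x)
    for n in range(order, len(x)):
        past = x[n - order:n][::-1]        # most recent samples first
        e = x[n] - w @ past                # error = wideband residual
        w += mu * e * past                 # LMS weight update
        out[n] = e
    return out

fs = 1000.0
t = np.arange(4000) / fs
code = 0.1 * np.sign(np.random.randn(t.size))   # GPS-like wideband code
jammed = code + np.sin(2 * np.pi * 60 * t)      # plus a narrowband tone
cleaned = lms_suppress(jammed)                  # tone largely removed
```
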
21

Capresi, Chiara. "Algorithms for identifying clusters in temporal graphs and realising distance matrices by unicyclic graphs." Doctoral thesis, Università di Siena, 2022. http://hdl.handle.net/11365/1211314.

Full text available
Abstract:
In this thesis I present results, mainly concerning algorithms for graph problems, which I worked on during my Ph.D. under the supervision of my supervisors. As a first theme, a new temporal interpretation of the well-studied Cluster Editing problem is presented, which we call Edit Temporal Clique (ETC). In this regard, it is shown that the temporal counterpart of any NP-hard version of Cluster Editing is still NP-hard. It is then proved that ETC is NP-complete even if the possible inputs are restricted to the class of temporal graphs whose underlying graph is a path. Furthermore, a result is presented showing that ETC is tractable in polynomial time if the underlying graph is a path and the maximum number of appearances allowed for each of the edges of that path is fixed. Bearing in mind the known key observation that a static graph is a cluster graph if and only if it does not contain any induced P3, a local characterisation of cluster temporal graphs is presented. This characterisation establishes that a temporal graph is a cluster temporal graph if and only if every subset of at most five vertices induces a cluster temporal graph. Using this characterisation, we obtain an FPT algorithm for ETC parameterised simultaneously by the number of modifications and the lifetime (number of timesteps) of the input temporal graph. Furthermore, it is shown via a counterexample that cluster temporal graphs cannot be properly characterised by sets of at most four vertices. In the last chapter of this thesis, a result on the realisation of distance matrices by n-cycles is first proven, and then an algorithm is developed that establishes whether a given distance matrix D can be realised by a weighted unicyclic graph or at least by a weighted tree. In the affirmative case, a second part of the algorithm reconstructs that graph. The algorithm runs in O(n^4) time. Furthermore, it is shown that if the algorithm returns a unicyclic graph as a realisation of D, then this realisation is optimal.
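
The static observation underpinning the characterisation is easy to operationalize. A minimal sketch: a graph is a cluster graph exactly when no vertex has two non-adjacent neighbours, i.e. when there is no induced P3. The temporal five-vertex characterisation itself is not reproduced here.

```python
from itertools import combinations

def is_cluster_graph(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    for v, nbrs in adj.items():
        for a, b in combinations(nbrs, 2):
            if b not in adj[a]:            # a - v - b is an induced P3
                return False
    return True

print(is_cluster_graph({1: {2}, 2: {1, 3}, 3: {2}}))        # False: path 1-2-3
print(is_cluster_graph({1: {2, 3}, 2: {1, 3}, 3: {1, 2}}))  # True: a triangle
```
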
22

Zhang, Jun. "Nearest neighbor queries in spatial and spatio-temporal databases /." View abstract or full-text, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20ZHANG.

Full text available
23

Wang, Ziyang. "Next Generation Ultrashort-Pulse Retrieval Algorithm for Frequency-Resolved Optical Gating: The Inclusion of Random (Noise) and Nonrandom (Spatio-Temporal Pulse Distortions) Error." Diss., Available online, Georgia Institute of Technology, 2005, 2005. http://etd.gatech.edu/theses/available/etd-04122005-224257/unrestricted/wang%5Fziyang%5F200505%5Fphd.pdf.

Full text available
Abstract:
Thesis (Ph. D.)--Physics, Georgia Institute of Technology, 2005.
You, Li, Committee Member; Buck, John A., Committee Member; Kvam, Paul, Committee Member; Kennedy, Brian, Committee Member; Trebino, Rick, Committee Chair. Vita. Includes bibliographical references.
24

Kleisarchaki, Sofia. "Analyse des différences dans le Big Data : Exploration, Explication, Évolution." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM055/document.

Full text available
Abstract:
Variability in Big Data refers to data whose meaning changes continuously. For instance, data derived from social platforms and from monitoring applications exhibits great variability. This variability is essentially the result of changes in the underlying data distributions of attributes of interest, such as user opinions/ratings, computer network measurements, etc. Difference Analysis aims to study variability in Big Data. To achieve that goal, data scientists need: (a) measures to compare data in various dimensions, such as age for users or topic for network traffic, and (b) efficient algorithms to detect changes in massive data. In this thesis, we identify and study three novel analytical tasks to capture data variability: Difference Exploration, Difference Explanation and Difference Evolution. Difference Exploration is concerned with extracting the opinion of different user segments (e.g., on a movie rating website). We propose appropriate measures for comparing user opinions in the form of rating distributions, and efficient algorithms that, given an opinion of interest in the form of a rating histogram, discover agreeing and disagreeing populations. Difference Explanation tackles the question of providing a succinct explanation of the differences between two datasets of interest (e.g., the buying habits of two sets of customers). We propose scoring functions designed to rank explanations, and algorithms that guarantee explanation conciseness and informativeness. Finally, Difference Evolution tracks change in an input dataset over time and summarizes change at multiple time granularities. We propose a query-based approach that uses similarity measures to compare consecutive clusters over time. Our indexes and algorithms for Difference Evolution are designed to capture different data arrival rates (e.g., low, high) and different types of change (e.g., sudden, incremental). The utility and scalability of all our algorithms rely on hierarchies inherent in the data (e.g., time, demographics). We run extensive experiments on real and synthetic datasets to validate the usefulness of the three analytical tasks and the scalability of our algorithms. We show that Difference Exploration guides end-users and data scientists in uncovering the opinion of different user segments in a scalable way. Difference Explanation reveals the need to parsimoniously summarize differences between two datasets and shows that parsimony can be achieved by exploiting hierarchy in data. Finally, our study on Difference Evolution provides strong evidence that a query-based approach is well-suited to tracking change in datasets with varying arrival rates and at multiple time granularities. Similarly, we show that different clustering approaches can be used to capture different types of change.
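
To illustrate the Difference Exploration ingredient of comparing rating distributions, a minimal sketch using the earth mover's (Wasserstein) distance between the ratings of two segments. This particular measure is an assumption made for illustration, not necessarily one of the measures proposed in the thesis.

```python
import numpy as np
from scipy.stats import wasserstein_distance

young = np.random.choice([1, 2, 3, 4, 5], 500, p=[0.05, 0.1, 0.2, 0.3, 0.35])
older = np.random.choice([1, 2, 3, 4, 5], 500, p=[0.3, 0.3, 0.2, 0.1, 0.1])
print(wasserstein_distance(young, older))   # larger = more disagreeing segments
```
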
25

Cordeiro, Thiago da Silva. "Controle das características geométricas de nanopartículas de prata através da conformação temporal de pulsos ultracurtos utilizando algorítimos genéticos." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/85/85134/tde-18102013-154842/.

Full text available
Abstract:
This work used ultrashort laser pulses to modify, in a controlled way, the dimensional characteristics of silver nanoparticles in aqueous solution. To reach this goal, genetic algorithms and microfluidic circuits were used. A pulse shaper was used to create different temporal profiles for the ultrashort pulses that irradiated the silver nanoparticle solutions. These temporal profiles were shaped in real time, aiming to optimize the result of the experiment, quantified by the decrease of the average diameter of the nanoparticles in the irradiated solutions. Since each nanoparticle diameter minimization experiment demanded hundreds of measurements, it was made feasible by a microfluidic circuit specially built for this work. This circuit enables the use of small sample quantities, leading to short irradiation and measurement intervals, besides evident sample savings. To make this work possible, a genetic algorithm was created and tested, interfaced to several instruments, including an acousto-optic programmable dispersive filter that modifies the temporal characteristics of the ultrashort pulses by introducing spectral phase components into them. The genetic algorithm and the acousto-optic programmable dispersive filter were used together in experiments to temporally shorten the ultrashort pulses from the laser system, generating pulse durations close to the Fourier-transform limit. In addition, experiments were performed with the LabVIEW-coded genetic algorithm to optimize its evolutionary process. The silver nanoparticle irradiation experiments showed that temporally shaping the pulses used in the irradiations made it possible to control the dimensions of these particles, decreasing their mean size by a factor of 2. These experiments establish the irradiation of nanoparticles by ultrashort laser pulses as an important technique for controlling nanoparticle characteristics.
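
A minimal sketch of the closed-loop optimization the abstract describes: a genetic algorithm evolves spectral-phase vectors against a figure of merit. The fitness function below is a hypothetical stand-in; in the experiment the score was the measured mean nanoparticle diameter, not a formula.

```python
import numpy as np

def measured_fitness(phase):            # stand-in for the measured diameter
    return -np.sum(np.cos(phase))       # hypothetical: lower is better

def genetic_search(dim=32, pop_size=20, generations=50, sigma=0.3):
    pop = np.random.uniform(-np.pi, np.pi, (pop_size, dim))
    for _ in range(generations):
        order = np.argsort([measured_fitness(p) for p in pop])
        parents = pop[order[:pop_size // 2]]           # keep the best half
        cut = dim // 2
        kids = np.array([np.concatenate((parents[i % len(parents), :cut],
                                         parents[(i + 1) % len(parents), cut:]))
                         for i in range(pop_size - len(parents))])  # crossover
        kids += np.random.normal(0.0, sigma, kids.shape)            # mutation
        pop = np.vstack([parents, kids])
    return min(pop, key=measured_fitness)              # best phase profile

best_phase = genetic_search()
```
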
Стилі APA, Harvard, Vancouver, ISO та ін.
26

Saraiva, Gustavo Francisco Rosalin. "Análise temporal da sinalização elétrica em plantas de soja submetidas a diferentes perturbações externas." Universidade do Oeste Paulista, 2017. http://bdtd.unoeste.br:8080/jspui/handle/jspui/1087.

Повний текст джерела
Анотація:
Plants are complex organisms with dynamic processes that, owing to their sessile way of life, are influenced by environmental conditions at all times. Plants can perceive and respond to different environmental stimuli with precision, but this requires a complex and efficient signaling system. Electrical signaling in plants has been known for a long time, but it has recently gained prominence as its relationship to plant physiological processes has become better understood. The objective of this thesis was to test the following hypotheses: time series obtained from the electrical signaling of plants carry non-random information with a dynamic, oscillatory pattern; this dynamics is affected by environmental stimuli; and there are specific patterns in the responses to stimuli. In a controlled environment, stressful environmental stimuli were applied to soybean plants, and electrical signaling data were collected before and after each stimulus. The time series obtained were analyzed with statistical and computational tools to determine the frequency spectrum (FFT), the autocorrelation of the values, and the approximate entropy (ApEn). To verify the existence of patterns in the series, classification algorithms from machine learning were used. The analysis showed that the electrical signals collected from plants exhibit oscillatory dynamics with a power-law frequency distribution. The results make it possible to differentiate, with high efficacy, series collected before and after the application of the stimuli. The PSD and autocorrelation analyses showed a clear difference in the dynamics of the electrical signals before and after stimulation, and the ApEn analysis showed a decrease in signal complexity after stimulation. The classification algorithms reached significant accuracy in detecting patterns and classifying the time series, showing that there are mathematical patterns in the different electrical responses of the plants. It is concluded that time series of bioelectrical signals of plants contain discriminant information; the signals have oscillatory dynamics whose properties are altered by environmental stimuli; and there are mathematical patterns embedded in the plant's responses to specific stimuli.
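For reference, the approximate entropy used in this abstract has a compact standard definition (Pincus, 1991). The sketch below follows that formulation, with template length m, tolerance r, and the common 0.2·SD default for r assumed here.

import numpy as np

def approximate_entropy(x, m=2, r=None):
    """ApEn of a 1-D signal: phi(m) - phi(m+1), where phi(m) is the mean
    log fraction of length-m templates lying within Chebyshev distance r."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()  # common heuristic: 20% of the signal's SD
    def phi(m):
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)  # self-matches included, as in Pincus
        return np.log(c).mean()
    return phi(m) - phi(m + 1)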
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Matthews, Stephen. "Learning lost temporal fuzzy association rules." Thesis, De Montfort University, 2012. http://hdl.handle.net/2086/8257.

Повний текст джерела
Анотація:
Fuzzy association rule mining discovers patterns in transactions, such as shopping baskets in a supermarket or Web page accesses by a visitor to a Web site. Temporal patterns can be present in fuzzy association rules because the underlying process generating the data can be dynamic. However, existing solutions may not discover all interesting patterns because of a previously unrecognised problem that is revealed in this thesis: the contextual meaning of fuzzy association rules changes because of the dynamic nature of the data, so a static fuzzy representation and a traditional search method are inadequate. The Genetic Iterative Temporal Fuzzy Association Rule Mining (GITFARM) framework solves the problem by utilising flexible fuzzy representations from a fuzzy rule-based system (FRBS). The temporal, fuzzy and itemset spaces are searched simultaneously with a genetic algorithm (GA) to overcome the problem, and the framework transforms the dataset into a graph so that it can be searched efficiently. The choice of model in the fuzzy representation provides a trade-off between an approximate and a descriptive model. A method for verifying the solution to the hypothesised problem is presented, and the proposed GA-based solution is compared with a traditional approach that uses an exhaustive search method. It is shown how the GA-based solution discovered rules that the traditional approach did not, demonstrating that simultaneously searching for rules and membership functions with a GA is a suitable solution for mining temporal fuzzy association rules. In practice, more knowledge can thus be discovered for making well-informed decisions, knowledge that would otherwise be lost with a traditional approach.
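A rough sketch of the quantity such a GA might evaluate when it searches the temporal, fuzzy and itemset spaces together is given below. The data layout (timestamped transactions with item quantities) and the triangular membership functions are illustrative assumptions, not the GITFARM implementation.

def tri(x, a, b, c):
    """Triangular membership function: one fuzzy set of an FRBS."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def temporal_fuzzy_support(transactions, itemset, t0, t1, mf):
    """Fuzzy support of an itemset restricted to the window [t0, t1].
    transactions: list of (timestamp, {item: quantity});
    mf: maps an item name to its membership function."""
    window = [tx for t, tx in transactions if t0 <= t <= t1]
    if not window:
        return 0.0
    score = sum(min(mf[i](tx.get(i, 0)) for i in itemset) for tx in window)
    return score / len(window)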
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Pilourdault, Julien. "Scalable algorithms for monitoring activity traces." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM040/document.

Повний текст джерела
Анотація:
In this thesis, we study scalable algorithms for monitoring activity traces. In several domains, monitoring is a key ability for extracting value from data and improving a system. This thesis designs algorithms for monitoring two kinds of activity traces. First, we investigate temporal data monitoring. We introduce a new kind of interval join that features scoring functions reflecting the degree of satisfaction of temporal predicates. We study these joins in the context of batch processing: we formalize the Ranked Temporal Join (RTJ), which combines collections of intervals and returns the k best results. We show how to exploit the nature of temporal predicates and the properties of their associated scored semantics to design TKIJ, an efficient query evaluation approach on a distributed Map-Reduce architecture. Extensive experiments on synthetic and real datasets show that TKIJ outperforms state-of-the-art competitors and provides very good performance for n-ary RTJ queries on temporal data. We also propose a preliminary study extending our work on TKIJ to stream processing. Second, we investigate monitoring in crowdsourcing. We advocate the need to incorporate worker motivation in task assignment. We propose an adaptive approach that captures workers' motivation during task completion and uses it to revise task assignment across iterations. We study two variants of motivation-aware task assignment: Individual Task Assignment (Ita) and Holistic Task Assignment (Hta). First, we investigate Ita, where tasks are assigned to workers individually, one worker at a time. We model Ita and show it is NP-Hard. We design three task assignment strategies that exploit various objectives. Our live experiments study the impact of each strategy on overall performance. We find that different strategies prevail for different performance dimensions: the strategy that assigns random and relevant tasks offers the best task throughput, while the strategy that assigns tasks best matching a worker's compromise between task diversity and task payment yields the best outcome quality. Our experiments confirm the need for adaptive, motivation-aware task assignment. Then, we study Hta, where tasks are assigned to all available workers holistically. We model Hta and show it is both NP-Hard and MaxSNP-Hard. We develop efficient approximation algorithms with provable guarantees. We conduct offline experiments to verify the efficiency of our algorithms, and online experiments with real workers to compare our approach with various non-adaptive assignment strategies. We find that our approach offers the best compromise between performance dimensions, thereby confirming the need for adaptability.
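As a toy illustration of the RTJ idea (not the distributed TKIJ algorithm), a top-k scored interval join can be sketched in a few lines, here scoring each pair by the length of its temporal overlap.

import heapq

def ranked_temporal_join(R, S, k=5):
    """Toy Ranked Temporal Join: score every pair of (start, end)
    intervals by overlap length and return the k best-scoring pairs.
    A nested loop stands in for the distributed evaluation."""
    def overlap(a, b):
        return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    return heapq.nlargest(k, ((overlap(r, s), r, s) for r in R for s in S))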
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Tuck, Terry W. "Temporally Correct Algorithms for Transaction Concurrency Control in Distributed Databases." Thesis, University of North Texas, 2001. https://digital.library.unt.edu/ark:/67531/metadc2743/.

Повний текст джерела
Анотація:
Many activities are comprised of temporally dependent events that must be executed in a specific chronological order, and supportive software applications must preserve these temporal dependencies. Whenever such an application submits transactions to a database shared with other applications of the same kind, the transaction concurrency control mechanisms within the database must also preserve the temporal dependencies. A basis for preserving them is established by using, within the applications and databases, real-time timestamps to identify and order events and transactions. Optimistic approaches to transaction concurrency control can be undesirable in such situations, as they allow incorrect results for database read operations; although the incorrectness is detected prior to transaction commit and the corresponding transaction(s) restarted, the impact on the application or entity that submitted the transaction can be too costly. Three transaction concurrency control algorithms are proposed in this dissertation. They are based on timestamp ordering and are designed to preserve the temporal dependencies existing among data-dependent transactions. The algorithms produce execution schedules that are equivalent to temporally ordered serial schedules, where the temporal order is established by the transactions' start times, while still supporting concurrency in the form of out-of-order commits and reads. With respect to the stated concern about optimistic approaches, two of the proposed algorithms are risk-free and return only committed data-item values to read operations; risk with the third algorithm is greatly reduced by its conservative bias. All three algorithms avoid deadlock while providing risk-free or reduced-risk operation. The performance of the algorithms is determined analytically and through experimentation, using functional database management system models that implement the proposed algorithms and the well-known Conservative Multiversion Timestamp Ordering algorithm.
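The classical timestamp-ordering basis that such algorithms build on can be sketched as follows. This minimal version only shows the core accept/abort rule; the dissertation's algorithms additionally preserve temporal dependencies, allow out-of-order commits and reads, and avoid deadlock, none of which appears here.

class TimestampOrdering:
    """Basic timestamp-ordering check: each data item tracks the largest
    read and write timestamps seen; an operation arriving "too late"
    aborts its transaction, keeping the schedule equivalent to the
    serial order of transaction start times (timestamps assumed > 0)."""
    def __init__(self):
        self.read_ts = {}   # item -> largest ts that read it
        self.write_ts = {}  # item -> largest ts that wrote it

    def read(self, ts, item):
        if ts < self.write_ts.get(item, 0):
            return "abort"  # would read a version already overwritten
        self.read_ts[item] = max(self.read_ts.get(item, 0), ts)
        return "ok"

    def write(self, ts, item):
        if ts < self.read_ts.get(item, 0) or ts < self.write_ts.get(item, 0):
            return "abort"  # a younger transaction got there first
        self.write_ts[item] = ts
        return "ok"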
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Jakkula, Vikramaditya Reddy. "Enhancing smart home resident activity prediction and anomaly detection using temporal relations." Online access for everyone, 2007. http://www.dissertations.wsu.edu/Thesis/Fall2007/v_jakkula_102207.pdf.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Rossi, Alfred Vincent III. "Temporal Clustering of Finite Metric Spaces and Spectral k-Clustering." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1500033042082458.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Pallikarakis, Christos A. "Development of temporal phase unwrapping algorithms for depth-resolved measurements using an electronically tuned Ti:Sa laser." Thesis, Loughborough University, 2017. https://dspace.lboro.ac.uk/2134/23918.

Повний текст джерела
Анотація:
This thesis is concerned with (a) the development of a full-field, multi-axis, phase-contrast wavelength scanning interferometer, using an electronically tuned CW Ti:Sa laser, for depth-resolved measurements in composite materials such as GFRPs, and (b) the development of temporal phase unwrapping algorithms for depth-resolved measurements. Item (a) was part of the ultimate goal of extracting the 3-D, depth-resolved constituent parameters (Young's modulus E, Poisson's ratio v, etc.) that define the mechanical behaviour of composite materials like GFRPs. Considering the success of OCT as an imaging modality, a wavelength scanning interferometer (WSI) capable of imaging both the intensity and the phase of the interference signal was proposed as the preferred technique to provide the volumetric displacement/strain fields (displacement/strain fields are analogous to phase fields, which makes a phase-contrast interferometer of particular interest here). These fields would then be passed to the VFM to yield the sought parameters, provided the loading scheme is known. As a result, several key opto-mechanical hardware items were developed. First, a multiple-channel (x6) tomographic interferometer was built in a Mach-Zehnder arrangement: three of the channels provide the information needed to extract the three orthogonal displacement/strain components, while the other three are complementary and were included in the design to maximize the penetration depth (the sample is illuminated from both sides). Second, a miniature uniaxial (tensile and/or compression) loading machine was designed and built for introducing controlled, low-magnitude displacements. Last, a rotation stage was designed and built for the experimental determination of the sensitivity vectors and the re-registration of the volumetric data from the six channels. Unfortunately, due to the critical failure of the Ti:Sa laser, data collection using the last two items was not possible; however, preliminary results at a single wavelength suggest that the above items work as expected. Item (b) involved the development of an optical sensor for dynamically monitoring wavenumber changes during a full 100 nm scan. The sensor comprises a set of four wedges in a Fizeau interferometer setup and became part of the multi-axis interferometer (as a 7th channel). Its development became relevant due to the large number of mode hops present during a full scan of the Ti:Sa source; these are associated with the physics of the laser and have the undesirable effect of randomizing the signal, thus preventing successful depth reconstructions. The multi-wedge sensor was designed to provide, simultaneously, high resolution of wavenumber changes and immunity to the large wavenumber jumps of the Ti:Sa. The analysis algorithms for extracting the sought wavenumber changes were based on the 2-D Fourier transform method followed by temporal phase unwrapping. At first, the performance of the sensor was tested against that of a high-end commercial wavemeter for a limited scan of 1 nm: a root-mean-square (rms) difference in measured wavenumber shift between the two of ~4 m⁻¹ was achieved, equivalent to an rms wavelength shift error of ~0.4 pm. Second, by resampling the interference signal and the wavenumber-change axis onto a uniformly sampled k-space, depth resolutions close to the theoretical limits were achieved for scans of up to 37 nm.
Access to the full 100 nm range, characterised by wavelength steps down to the picometre level, was achieved by introducing a number of improvements to the original temporal phase unwrapping algorithm reported in ref [1], tailored to depth-resolved measurements. These involved the estimation and suppression of intensity background artefacts, improvements to the 2-D Fourier transform phase detection based on a previously developed algorithm in ref [2], and the introduction of two modifications to the original TPU. Both modifications are adaptive and involve re-referencing the signal at regular intervals throughout the scan. Their purpose is to compensate for systematic and non-systematic errors owing to a small error in the value of R (a scaling factor applied to the lower-sensitivity wedge phase-change signal, used to unwrap the higher-sensitivity one), or to small changes in R with wavelength due to a possible mismatch in the refractive dispersion curves of the wedges and/or in the wedge angles. A hybrid approach combining both methods was proposed and used to analyse the data from each of the four wedges. It gave the most robust results of all the techniques considered, with a clear Fourier peak at the expected frequency, significantly reduced spectral artefacts, and identical depth resolutions of 2.2 μm (measured at FWHM) for all four wedges. The ability of the phase unwrapping strategy to resolve the aforementioned issues was demonstrated by successfully measuring the absolute thickness of four fused silica glasses from real experimental data; the results were compared with independent micrometer measurements and showed excellent agreement. Finally, owing to the lack of additional experimental data, and to test the generality of the proposed temporal phase unwrapping strategy (the hybrid approach), a set of simulations closely matching the parameters of the real experimental data set was produced and analysed. The results of this final test indicate that the various fixes included in the hybrid approach did not evolve to solve the problems of one particular data set but are of a general nature, highlighting the approach's relevance for PC-WSI applications involving the processing and analysis of large scans.
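The core temporal phase unwrapping step described above, using a scaled lower-sensitivity phase to select the 2π multiple of a wrapped higher-sensitivity phase, can be sketched as follows; the scaling factor R and the two phase arrays play the roles named in the text, the rest is an illustrative assumption.

import numpy as np

def unwrap_with_reference(phi_high_wrapped, phi_low, R):
    """Temporal phase unwrapping step: R*phi_low (low-sensitivity,
    unwrapped) picks the integer number of 2π cycles to add to the
    wrapped high-sensitivity phase. Errors in R are what the adaptive
    re-referencing described above compensates for."""
    two_pi = 2 * np.pi
    k = np.round((R * phi_low - phi_high_wrapped) / two_pi)
    return phi_high_wrapped + two_pi * k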
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Rex, David Bruce. "Object Parallel Spatio-Temporal Analysis and Modeling System." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/1278.

Повний текст джерела
Анотація:
The dissertation will outline an object-oriented model from which a next-generation GIS can be derived. The requirements for a spatial information analysis and modeling system can be broken into three primary functional classes: data management (data classification and access), analysis (modeling, optimization, and simulation) and visualization (display of data). These three functional classes can be considered as the primary colors of the spectrum from which the different shades of spatial analysis are composed. Object classes will be developed which will be designed to manipulate the three primary functions as required by the user and the data.
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Malik, Zohaib Mansoor. "Design and implementation of temporal filtering and other data fusion algorithms to enhance the accuracy of a real time radio location tracking system." Thesis, Högskolan i Gävle, Avdelningen för elektronik, matematik och naturvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-13261.

Повний текст джерела
Анотація:
A general automotive navigation system is a satellite navigation system designed for use in automobiles. It typically uses GPS to acquire position data and locate the user on a road in the unit's map database. Recent improvements in the performance of small, lightweight micro-machined electromechanical systems (MEMS) inertial sensors have made the application of inertial techniques to such problems possible, resulting in an increased interest in inertial navigation. In location tracking systems, sensors are used either individually or in conjunction, as in data fusion; even so, their outputs remain noisy, so there is a need to gather as much data as possible and build an efficient system that can remove the noise and provide a better estimate. The task of this thesis was to take data from two sensors and use an estimation technique to provide an accurate estimate of the true location. The proposed sensors were an accelerometer and a GPS device; this thesis, however, deals with the accelerometer together with an estimation scheme, the Kalman filter. The report presents an insight into both proposed sensors and different estimation techniques. Within the scope of the work, the task was performed using the simulation software Matlab, and the Kalman filter's efficiency was examined under different noise levels.
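A minimal one-dimensional sketch of the setup described (accelerometer as control input, GPS-like position fixes as measurements) might look as follows; the noise levels q and r are assumed values, not those examined in the thesis.

import numpy as np

def kalman_1d(zs, accs, dt=0.1, q=0.5, r=4.0):
    """1-D position/velocity Kalman filter: predict with the measured
    acceleration, correct with a noisy position fix at each step."""
    F = np.array([[1, dt], [0, 1]])     # state transition (pos, vel)
    B = np.array([0.5 * dt**2, dt])     # control input: acceleration
    H = np.array([[1.0, 0.0]])          # we only measure position
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.zeros(2)
    P = np.eye(2)
    estimates = []
    for z, a in zip(zs, accs):
        x = F @ x + B * a               # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                   # innovation from position fix
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
        x = x + (K @ y).ravel()         # update
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return estimates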
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Salvaggio, Carl. "Automated segmentation of urban features from Landsat-Thematic Mapper imagery for use in pseudovariant feature temporal image normalization /." Online version of thesis, 1987. http://hdl.handle.net/1850/11371.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Lamus, Garcia Herreros Camilo. "Models and algorithms of brain connectivity, spatial sparsity, and temporal dynamics for the MEG/EEG inverse problem." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/103160.

Повний текст джерела
Анотація:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 123-131).
Magnetoencephalography (MEG) and electroencephalography (EEG) are noninvasive functional neuroimaging techniques that provide high temporal resolution recordings of brain activity, offering a unique means to study fast neural dynamics in humans. Localizing the sources of brain activity from MEG/EEG is an ill-posed inverse problem, with no unique solution in the absence of additional information. In this dissertation I analyze how solutions to the MEG/EEG inverse problem can be improved by including information about temporal dynamics of brain activity and connectivity within and among brain regions. The contributions of my thesis are: 1) I develop a dynamic algorithm for source localization that uses local connectivity information and Empirical Bayes estimates to improve source localization performance (Chapter 1). This result led me to investigate the underlying theoretical principles that might explain the performance improvement observed in simulations and by analyzing experimental data. In my analysis, 2) I demonstrate theoretically how the inclusion of local connectivity information and basic source dynamics can greatly increase the number of sources that can be recovered from MEG/EEG data (Chapter 2). Finally, in order to include long distance connectivity information, 3) I develop a fast multi-scale dynamic source estimation algorithm based on the Subspace Pursuit and Kalman Filter algorithms that incorporates brain connectivity information derived from diffusion MRI (Chapter 3). Overall, I illustrate how dynamic models informed by neurophysiology and neuroanatomy can be used alongside advanced statistical and signal processing methods to greatly improve MEG/EEG source localization. More broadly, this work provides an example of how advanced modeling and algorithm development can be used to address difficult problems in neuroscience and neuroimaging.
by Camilo Lamus Garcia Herreros.
Ph. D.
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Mülâyim, Mehmet Oǧuz. "Anytime Case-Based Reasoning in Large-Scale Temporal Case Bases." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/671283.

Повний текст джерела
Анотація:
Case-Based Reasoning (CBR) methodology's approach to problem solving, that "similar problems have similar solutions", has proved quite favorable for many industrial artificial intelligence applications. However, CBR's very advantages hinder its performance as case bases (CBs) grow beyond moderate sizes: searching for similar cases becomes expensive. This handicap often makes CBR less appealing in today's abundant-data environments while, actually, there is ever more reason to benefit from this effective methodology. Accordingly, the CBR community's traditional approach of controlling CB growth to maintain performance is shifting towards finding new ways to deal with abundant data. As a contribution to these efforts, this thesis aims to speed up CBR by leveraging both the problem and the solution spaces in large-scale CBs composed of temporally related cases, as in the example of electronic health records. For the occasions when the speed-up we achieve for exact results may still not be feasible, we endow the CBR system with anytime-algorithm capabilities, so that it provides approximate results with an associated confidence upon interruption. Exploiting the temporality of cases allows us to reach superior gains in execution time for CBs of millions of cases. Experiments with publicly available real-world datasets encourage the continued use of CBR in domains where it historically excels, such as healthcare; and this time, not suffering from, but enjoying big data.
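The anytime flavour of retrieval described above can be sketched as a generator that may be interrupted at any point and still return a best-so-far answer with a confidence value; the similarity function and the case representation are assumptions.

import heapq

def anytime_retrieve(query, case_base, similarity, k=3):
    """Anytime k-NN retrieval sketch: scan the case base incrementally,
    keep the best k cases seen so far, and yield after every case so
    the caller can stop at any time and still get an approximate answer
    plus a crude confidence (fraction of the CB examined)."""
    best = []  # min-heap of (similarity, index, case); index breaks ties
    for i, case in enumerate(case_base):
        s = similarity(query, case)
        if len(best) < k:
            heapq.heappush(best, (s, i, case))
        elif s > best[0][0]:
            heapq.heapreplace(best, (s, i, case))
        yield sorted(best, reverse=True), (i + 1) / len(case_base)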
Universitat Autònoma de Barcelona. Programa de Doctorat en Informàtica
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Stojkovic, Ivan. "Functional Norm Regularization for Margin-Based Ranking on Temporal Data." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/522550.

Повний текст джерела
Анотація:
Computer and Information Science
Ph.D.
Quantifying properties of interest is an important problem in many domains, e.g., assessing the condition of a patient, estimating the risk of an investment, or the relevance of a search result. However, the properties of interest are often latent and hard to assess directly, making it difficult to obtain the classification or regression labels needed to learn predictive models from observable features. In such cases, it is typically much easier to obtain a relative comparison of two instances, i.e., to assess which one is more intense with respect to the property of interest. One framework able to learn from this kind of supervision is the ranking SVM, and it forms the basis of our approach. Applications on biomedical datasets typically pose specific additional challenges. First, and foremost, the number of data examples is limited, due to expensive measuring technology and/or the infrequency of the conditions of interest; this makes both the identification of patterns/models and their validation less reliable. Repeated samples from the same subject are collected on multiple occasions over time, which breaks the IID sample assumption and introduces a dependency structure that needs to be taken into account appropriately. Also, feature vectors are high-dimensional, typically of much higher cardinality than the number of samples, making models less useful and their learning less efficient. The hypothesis of this dissertation is that functional norm regularization can help alleviate these challenges by improving the generalization ability and/or learning efficiency of predictive models, specifically of approaches based on the ranking SVM framework. The temporal nature of the data is addressed with a loss that fosters temporal smoothness of the functional mapping, encoding the assumption that temporally proximate samples are more correlated. The large number of feature variables is handled using the sparsity-inducing L1 norm, so that most features have zero effect in the learned functional mapping. The proposed sparse (temporal) ranking objective is convex but non-differentiable; a smooth dual form is therefore derived, taking the form of a quadratic function with box constraints, which allows efficient optimization. For the case of multiple similar tasks, a joint learning approach based on matrix norm regularization, using the trace norm L* and the sparse row norm L21, is also proposed, and an alternate minimization with a proximal optimization algorithm is developed to solve this multi-task objective. The generalization potential of the proposed high-dimensional and multi-task ranking formulations was assessed in a series of evaluations on synthetically generated and real datasets. The high-dimensional approach was applied to learning disease severity scores from gene expression data in human influenza cases and compared against several alternative approaches; it resulted in a scoring function with improved predictive performance, as measured by the fraction of correctly ordered testing pairs, and a set of selected features of high robustness according to three similarity measures. The multi-task approach was applied to three human viral infection problems and to learning exam scores in Math and English; the proposed formulation with mixed matrix norms was overall more accurate than formulations with a single norm regularization.
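A bare-bones sketch of a margin-based ranking objective with an L1 penalty is shown below. Note the thesis optimizes a smooth dual form with box constraints; this illustrative loop instead uses a naive subgradient step with proximal soft-thresholding for the L1 term.

import numpy as np

def rank_svm_l1(X, pairs, lam=0.01, lr=0.01, epochs=200):
    """Sparse pairwise ranking sketch: hinge loss on ordered pairs
    (i should rank above j) plus L1 shrinkage on the weights."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i, j in pairs:
            d = X[i] - X[j]
            if w @ d < 1:       # margin violated: hinge subgradient step
                w += lr * d
        # proximal step for the L1 penalty (soft-thresholding)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0)
    return w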
Temple University--Theses
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Brighi, Marco. "Human Activity Recognition: A Comparative Evaluation of Spatio-Temporal Descriptors." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19436/.

Повний текст джерела
Анотація:
The study presented in this thesis implements and evaluates three different features for human activity recognition (HAR). A feature is a piece of information extracted from raw data that is relevant for solving a particular task; HAR is the discipline concerned with automatically recognizing the activity carried out by an agent from collected data. Such systems can be built on data of different kinds and from different sources; this study uses visual data from purpose-recorded videos. Each feature is analyzed and evaluated in detail and implemented in a project that uses well-known computer vision and machine learning libraries. The effectiveness of each feature is assessed on two datasets recognized by the scientific community, considering not only recognition performance but also the computational cost of feature extraction. The final analysis is therefore the result of a compromise between the results obtained and the costs incurred.
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Son, Young Baek. "POC algorithms based on spectral remote sensing data and its temporal and spatial variability in the Gulf of Mexico." Texas A&M University, 2003. http://hdl.handle.net/1969.1/5965.

Повний текст джерела
Анотація:
This dissertation consists of three studies dealing with particulate organic carbon (POC). The first describes the temporal and spatial variability of particulate matter (PM) and POC, and the physical processes that affect their distribution, using synchronous remote sensing data. The second develops POC algorithms for the Gulf of Mexico based on satellite data using numerical methods, and compares POC estimates with spectral radiance. The third investigates climatological variations in the temporal and spatial POC estimates based on SeaWiFS spectral radiance and physical processes, and determines the physical mechanisms that affect the distribution of POC in the Gulf of Mexico. For the first and second studies, hydrographic data from the Northeastern Gulf of Mexico (NEGOM) study were collected on each of 9 cruises from November 1997 to August 2000 across 11 lines; remotely sensed data sets were obtained from NASA and NOAA using algorithms developed for interpreting ocean color data from various satellite sensors. For the third study, we use time series of POC estimates, sea surface temperature (SST), sea surface height anomaly (SSHA), sea surface wind (SSW), and precipitation rate (PR) that might drive climatological variability and physical processes. The distribution of surface PM and POC concentrations was affected by one or more factors such as river discharge, wind stress, stratification, and the Loop Current/eddies. To estimate POC concentration, empirical and model-based approaches were applied using regression and principal component analysis (PCA); simulated data were tested to find reasonable and suitable algorithms for Case 1 and Case 2 waters. Monthly mean POC concentrations were calculated with the PCA algorithms, and the spatial and temporal variations of POC and the physical forcing data were analyzed with the empirical orthogonal function (EOF) method. The results showed variations in the Gulf of Mexico on both annual and inter-annual time scales.
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Meyer, Dominik Jakob [Verfasser], Klaus [Akademischer Betreuer] Diepold, Matthias [Gutachter] Althoff, and Klaus [Gutachter] Diepold. "Accelerated Gradient Algorithms for Robust Temporal Difference Learning / Dominik Jakob Meyer ; Gutachter: Matthias Althoff, Klaus Diepold ; Betreuer: Klaus Diepold." München : Universitätsbibliothek der TU München, 2021. http://d-nb.info/1237413281/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Chalup, Stephan Konrad. "Incremental learning with neural networks, evolutionary computation and reinforcement learning algorithms." Thesis, Queensland University of Technology, 2001.

Знайти повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Wedge, Daniel John. "Video sequence synchronization." University of Western Australia. School of Computer Science and Software Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0084.

Повний текст джерела
Анотація:
[Truncated abstract] Video sequence synchronization is necessary for any computer vision application that integrates data from multiple simultaneously recorded video sequences. With the increased availability of video cameras as either dedicated devices, or as components within digital cameras or mobile phones, a large volume of video data is available as input for a growing range of computer vision applications that process multiple video sequences. To ensure that the output of these applications is correct, accurate video sequence synchronization is essential. Whilst hardware synchronization methods can embed timestamps into each sequence on-the-fly, they require specialized hardware and it is necessary to set up the camera network in advance. On the other hand, computer vision-based software synchronization algorithms can be used to post-process video sequences recorded by cameras that are not networked, such as common consumer hand-held video cameras or cameras embedded in mobile phones, or to synchronize historical videos for which hardware synchronization was not possible. The current state-of-the-art software algorithms vary in their input and output requirements and camera configuration assumptions. ... Next, I describe an approach that synchronizes two video sequences where an object exhibits ballistic motions. Given the epipolar geometry relating the two cameras and the imaged ballistic trajectory of an object, the algorithm uses a novel iterative approach that exploits object motion to rapidly determine pairs of temporally corresponding frames. This algorithm accurately synchronizes videos recorded at different frame rates and takes few iterations to converge to sub-frame accuracy. Whereas the method presented by the first algorithm integrates tracking data from all frames to synchronize the sequences as a whole, this algorithm recovers the synchronization by locating pairs of temporally corresponding frames in each sequence. Finally, I introduce an algorithm for synchronizing two video sequences recorded by stationary cameras with unknown epipolar geometry. This approach is unique in that it recovers both the frame rate ratio and the frame offset of the two sequences by finding matching space-time interest points that represent events in each sequence; the algorithm does not require object tracking. RANSAC-based approaches that take a set of putatively matching interest points and recover either a homography or a fundamental matrix relating a pair of still images are well known. This algorithm extends these techniques using space-time interest points in place of spatial features, and uses nested instances of RANSAC to also recover the frame rate ratio and frame offset of a pair of video sequences. In this thesis, it is demonstrated that each of the above algorithms can accurately recover the frame rate ratio and frame offset of a range of real video sequences. Each algorithm makes a contribution to the body of video sequence synchronization literature, and it is shown that the synchronization problem can be solved using a range of approaches.
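The frame-rate-ratio and frame-offset recovery described in the last algorithm can be illustrated with a small RANSAC loop over putative event correspondences (t1, t2), fitting t2 ≈ a·t1 + b; the matching of space-time interest points that would produce the correspondences is assumed, not shown.

import random

def sync_ransac(matches, iters=2000, tol=0.5):
    """Fit t2 ≈ a*t1 + b (frame-rate ratio a, frame offset b) by RANSAC;
    matches is a list of putatively corresponding (t1, t2) timestamps."""
    best, best_inliers = (1.0, 0.0), 0
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.sample(matches, 2)
        if x1 == x2:
            continue                     # degenerate sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(abs(y - (a * x + b)) <= tol for x, y in matches)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best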
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Maquet, Nicolas. "New algorithms and data structures for the emptiness problem of alternating automata." Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209961.

Повний текст джерела
Анотація:
This work studies new algorithms and data structures that are useful in the context of program verification. As computers have become more and more ubiquitous in our modern societies, an increasingly large number of computer-based systems are considered safety-critical. Such systems are characterized by the fact that a failure or a bug (computer error in the computing jargon) could potentially cause large damage, whether in loss of life, environmental damage, or economic damage. For safety-critical systems, the industrial software engineering community increasingly calls for using techniques which provide some formal assurance that a certain piece of software is correct.

One of the most successful program verification techniques is model checking, in which programs are typically abstracted by a finite-state machine. After this abstraction step, properties (typically in the form of some temporal logic formula) can be checked against the finite-state abstraction with the help of automated tools. Alternating automata play an important role in this context, since many temporal logics on words and trees can be efficiently translated into those automata. This property allows the reduction of model checking to automata-theoretic questions and is called the automata-theoretic approach to model checking. In this work, we provide three novel approaches for the analysis (emptiness checking) of alternating automata over finite and infinite words. First, we build on the successful framework of antichains to devise new algorithms for LTL satisfiability and model checking, using alternating automata; these algorithms combine antichains with reduced ordered binary decision diagrams in order to handle the exponentially large alphabets of the automata generated by the LTL translation. Second, we develop new abstraction and refinement algorithms for alternating automata, which combine the use of antichains with abstract interpretation, in order to handle ever larger instances of alternating automata. Finally, we define a new symbolic data structure, coined lattice-valued binary decision diagrams, that is particularly well suited for encoding the transition functions of alternating automata over symbolic alphabets. All of these contributions are supported by empirical evaluations that confirm the practical usefulness of our approaches.
Doctorat en Sciences
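The central antichain operation behind these algorithms, keeping only subset-minimal sets while pruning subsumed ones, can be sketched as follows (a plain-Python illustration, not the authors' implementation).

def antichain_insert(antichain, s):
    """Insert frozenset s into a ⊆-minimal antichain: skip s if an
    existing member already subsumes it, otherwise drop the members
    that s subsumes and add s."""
    if any(t <= s for t in antichain):
        return antichain
    return [t for t in antichain if not s <= t] + [s]

# e.g. antichain_insert([frozenset({1, 2})], frozenset({1}))
#      -> [frozenset({1})], since {1} ⊆ {1, 2}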

Стилі APA, Harvard, Vancouver, ISO та ін.
45

Santos, Ramon Nóbrega dos. "Uma abordagem temporal para identificação precoce de estudantes de graduação a distância com risco de evasão utilizando técnicas de mineração de dados." Universidade Federal da Paraíba, 2015. http://tede.biblioteca.ufpb.br:8080/handle/tede/7844.

Повний текст джерела
Анотація:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Using data mining techniques, most commonly classification algorithms, it is possible to build predictive models able to identify, early on, a student at risk of dropping out. Several studies have used data obtained from a Virtual Learning Environment (VLE) to build models that predict performance in a single course discipline. However, no study had been carried out aiming to develop a dropout prediction model for longer distance-learning degree programs that integrates work on VLE-based performance prediction, allowing an early prediction during the first semester and throughout the following semesters. This work therefore proposes a dropout identification approach for distance-learning degree programs that uses rule-based classification, first to identify the disciplines and grade thresholds with the greatest influence on dropout, so that VLE-based performance prediction models can then be used to detect students at risk of dropping out along the whole program. Experiments were carried out with four rule-based classification algorithms: JRip, OneR, PART and Ridor. The proposed temporal approach proved advantageous: better predictive performance was obtained over the semesters, and important rules were discovered for the early identification of students at risk. Among the algorithms applied, JRip and PART obtained the best predictive results, with an average accuracy of 81% at the end of the first semester. Furthermore, with the proposed partition methodology, in which the attributes of the predictive models are applied incrementally, it was possible to discover rules potentially useful for preventing dropout.
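The rule lists produced by algorithms such as JRip and PART amount to an ordered, first-match-wins classifier. A sketch with hypothetical attribute names follows (the real rules are learned from the data, not hand-written as here).

def classify(student, rules, default="continue"):
    """Ordered first-match-wins rule list, as produced by JRip/PART."""
    for condition, label in rules:
        if condition(student):
            return label
    return default

# Hypothetical rules for illustration only, e.g.
# "IF grade(Calculus) < 5.0 AND VLE logins < 10 THEN dropout".
rules = [
    (lambda s: s["calculus_grade"] < 5.0 and s["vle_logins"] < 10, "dropout"),
    (lambda s: s["avg_grade"] >= 7.0, "continue"),
]

print(classify({"calculus_grade": 4.2, "vle_logins": 6, "avg_grade": 5.1}, rules))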
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Bundala, Daniel. "Algorithmic verification problems in automata-theoretic settings." Thesis, University of Oxford, 2014. https://ora.ox.ac.uk/objects/uuid:60b2d507-153f-4119-a888-56ccd47c3752.

Повний текст джерела
Анотація:
Problems in formal verification are often stated in terms of finite automata and extensions thereof. In this thesis we investigate several such algorithmic problems. In the first part of the thesis we develop a theory of completeness thresholds in Bounded Model Checking. A completeness threshold for a given model M and a specification φ is a bound k such that, if no counterexample to φ of length k or less can be found in M, then M in fact satisfies φ. We settle a problem of Kroening et al. [KOS+11] in the affirmative, by showing that the linearity problem for both regular and ω-regular specifications (provided as finite automata and Büchi automata respectively) is PSPACE-complete. Moreover, we establish the following dichotomies: for regular specifications, completeness thresholds are either linear or exponential, whereas for ω-regular specifications, completeness thresholds are either linear or at least quadratic in the recurrence diameter of the model under consideration. Given a formula in a temporal logic such as LTL or MTL, a fundamental problem underpinning automata-based model checking is the complexity of evaluating the formula on a given finite word. For LTL, the complexity of this task was recently shown to be in NC [KF09]. In the second part of the thesis we present an NC algorithm for MTL, a quantitative (or metric) extension of LTL, and give an AC1 algorithm for UTL, the unary fragment of LTL. We then establish a connection between LTL path checking and planar circuits which, among other things, implies that the complexity of LTL path checking depends on the Boolean connectives allowed: adding Boolean exclusive or yields a temporal logic with a P-complete path-checking problem. In the third part of the thesis we study the decidability of the reachability problem for parametric timed automata. The problem was introduced over 20 years ago by Alur, Henzinger, and Vardi [AHV93]. It is known that for three or more parametric clocks the problem is undecidable. We translate the problem to reachability questions in certain extensions of parametric one-counter machines. By further reducing to satisfiability in Presburger arithmetic with divisibility, we obtain decidability results for several classes of parametric one-counter machines. As a corollary, we show that, in the case of a single parametric clock (with arbitrarily many nonparametric clocks), the reachability problem is NEXP-complete, improving on the nonelementary decision procedure of Alur et al. The case of two parametric clocks is open in general; here, we show that reachability is decidable in this case for automata with a single parameter.
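The naive path-checking recursion that the NC and AC1 algorithms above improve upon can be written directly from the LTL semantics on finite words; formulas are encoded as nested tuples and the word as a list of sets of atomic propositions (an illustrative encoding, not the thesis's circuit construction).

def holds(f, word, i=0):
    """Naive LTL path checking on a finite word. f is a nested tuple,
    e.g. ("U", ("ap", "p"), ("ap", "q")) for "p U q"; word[i] is the
    set of atomic propositions true at position i."""
    op = f[0]
    if op == "ap":
        return f[1] in word[i]
    if op == "not":
        return not holds(f[1], word, i)
    if op == "and":
        return holds(f[1], word, i) and holds(f[2], word, i)
    if op == "X":  # next, finite-trace semantics: false at the last position
        return i + 1 < len(word) and holds(f[1], word, i + 1)
    if op == "U":  # f[1] until f[2]
        return any(holds(f[2], word, k) and
                   all(holds(f[1], word, j) for j in range(i, k))
                   for k in range(i, len(word)))
    raise ValueError(op)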
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Romanenko, Ilya. "Novel image processing algorithms and methods for improving their robustness and operational performance." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/16340.

Full text source
Abstract:
Image processing algorithms have developed rapidly in recent years. Imaging functions are becoming more common in electronic devices, demanding better image quality and more robust image capture in challenging conditions. Increasingly complex algorithms are being developed in order to achieve better signal-to-noise characteristics, more accurate colours and a wider dynamic range, approaching the performance levels of the human visual system.
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Sichtig, Heike. "The SGE framework: discovering spatio-temporal patterns in biological systems with spiking neural networks (S), a genetic algorithm (G) and expert knowledge (E)." Diss., Online access via UMI, 2009.

Find full text source
Abstract:
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Bioengineering, Biomedical Engineering, 2009.
Includes bibliographical references.
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Rodriguez, Vila Juan Jose Franklin. "Clusterização e visualização espaço-temporal de dados georreferenciados adaptando o algoritmo marker clusterer: um caso de uso em Curitiba." Universidade Tecnológica Federal do Paraná, 2016. http://repositorio.utfpr.edu.br/jspui/handle/1/2832.

Full text source
Abstract:
CNPq; CAPES;
Fifty percent of the world's population lives in cities, and this share is expected to reach 70% by 2050 (WHO, 2014). Cities consume 75% of the world's natural resources and energy and generate 80% of the greenhouse gases responsible for the greenhouse effect, while occupying only 2% of the world's territory (Signori, 2008). Cities are also the scene of a large share of global environmental problems (Gomes, 2009), and it is in the urban context that the social, economic and environmental dimensions converge most intensely (European Commission, 2007). This population growth has social, economic and environmental consequences that pose a great challenge for the sustainable development of urban planning. The concepts of geographic information systems, smart cities, open data, clustering algorithms and data visualization make it possible to answer several questions about urban activity in cities. In particular, the variable "where" becomes important: where traffic occurs and at which times it is most frequent; where residential, commercial and industrial demand needs to be modelled, in line with population growth, for the land-use plan; and which types of business grew the most in each neighbourhood and how they relate to one another.

To this end, this dissertation presents a web-mobile system for understanding the spatio-temporal and economic growth of restaurant licences in the Centro, Batel and Tatuquara districts of Curitiba over the last three decades (1980 to 2015), performing clustering and visualization of a large amount of open georeferenced data. The main results are: 1) the ability to solve the computational problem of overlapping points on a map; 2) the ability to understand the economic growth of the licences and the relationships between categories and between districts; 3) execution times under 3 seconds for 99% of the spatial queries executed; 4) 80.8% of the users in the evaluation phase considered that the proposed solution allows better identification and visualization of georeferenced data; and 5) support for integrating new data sources and types.
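The dissertation adapts the Marker Clusterer algorithm, a grid-based clustering scheme popularized by the Google Maps utility library, to avoid overlapping points on a map. As a hedged sketch of the underlying idea (not the author's adapted version), the snippet below buckets georeferenced points into fixed-size grid cells and replaces each cell's points with a single cluster marker at their centroid; the grid size and the sample coordinates are assumptions.

```python
# Sketch: grid-based marker clustering in the spirit of Marker Clusterer
# (not the dissertation's adapted algorithm). Points falling in the same
# grid cell are merged into one cluster marker placed at their centroid.
from collections import defaultdict

def cluster_markers(points, cell_deg=0.01):
    """points: iterable of (lat, lon); cell_deg: grid cell size in degrees."""
    cells = defaultdict(list)
    for lat, lon in points:
        key = (int(lat // cell_deg), int(lon // cell_deg))  # grid cell index
        cells[key].append((lat, lon))
    clusters = []
    for pts in cells.values():
        lat_c = sum(p[0] for p in pts) / len(pts)   # centroid latitude
        lon_c = sum(p[1] for p in pts) / len(pts)   # centroid longitude
        clusters.append({"lat": lat_c, "lon": lon_c, "count": len(pts)})
    return clusters

# Hypothetical restaurant licences near downtown Curitiba (about -25.43, -49.27)
licences = [(-25.4284, -49.2733), (-25.4290, -49.2741), (-25.4412, -49.2760)]
for c in cluster_markers(licences):
    print(c)  # the two nearby points collapse into a cluster with count 2
```

The production Marker Clusterer operates on projected pixel coordinates and recomputes clusters per zoom level; a fixed degree-based grid is used here only to keep the sketch self-contained.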
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Chevallier, Juliette. "Statistical models and stochastic algorithms for the analysis of longitudinal Riemannian manifold valued data with multiple dynamic." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX059/document.

Full text source
Abstract:
Beyond cross-sectional studies, the temporal evolution of phenomena is a field of growing interest. To understand a phenomenon, it seems more suitable to compare the evolution of its markers over time than to compare them at a single stage. The follow-up of neurodegenerative disorders, for instance, is carried out by monitoring cognitive scores over time. The same applies to chemotherapy monitoring: rather than judging by the aspect or size of tumors, oncologists consider a treatment efficient as soon as it results in a decrease of tumor volume. The study of longitudinal data is not restricted to medical applications and proves successful in various fields such as computer vision, automatic detection of facial emotions, and the social sciences. Mixed-effects models have proved their efficiency in the study of longitudinal data sets, especially for medical purposes. Recent works (Schiratti et al., 2015, 2017) allowed the study of complex data, such as anatomical data. The underlying idea is to model the temporal progression of a phenomenon by continuous trajectories in a space of measurements, assumed to be a Riemannian manifold; both a group-representative trajectory and the inter-individual variability are then estimated. However, these works assume a unidirectional dynamic and fail to encompass situations like multiple sclerosis or chemotherapy monitoring, where the disease follows a chronic course, with phases of worsening, stabilization and improvement, inducing changes in the global dynamic.

This thesis is devoted to developing methodological tools and algorithms suited to the analysis of longitudinal data arising from phenomena that undergo multiple dynamics, and to applying them to chemotherapy monitoring. We propose a nonlinear mixed-effects model in which individual trajectories are viewed as spatiotemporal deformations of a representative piecewise-geodesic trajectory of the global progression, estimated together with the spatial and temporal inter-individual variability. Particular attention is paid to estimating the correlation between the different phases of the evolution. This model provides a generic and coherent framework for studying longitudinal manifold-valued data.

Estimation is formulated as a well-posed maximum a posteriori problem, which we prove to be consistent under mild assumptions. Numerically, owing to the nonlinearity of the proposed model, the parameters are estimated through a stochastic version of the EM algorithm, namely the Markov chain Monte Carlo stochastic approximation EM (MCMC-SAEM). The convergence of the SAEM algorithm toward local maxima of the observed likelihood has been proved and its numerical efficiency demonstrated. Despite these appealing features, however, the limit point of the algorithm can depend strongly on its starting position. To cope with this issue, we propose a new class of SAEM algorithms in which the simulation step samples from an approximation of the true conditional distribution, and we prove its convergence toward local maxima of the observed likelihood. Finally, in the spirit of simulated annealing, we propose a tempered version of the SAEM algorithm, the tempering-SAEM, to favor convergence toward global maxima.
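Since the abstract centres on the MCMC-SAEM procedure, it may help to recall the generic SAEM iteration it builds on, in textbook form (Delyon, Lavielle and Moulines, 1999), rather than the thesis's tempered variant. Here z denotes the latent variables, y the observations, S the sufficient statistics, Π a Markov kernel, and (γ_k) a step-size sequence with Σγ_k = ∞ and Σγ_k² < ∞:

```latex
% Generic SAEM iteration (simulation, stochastic approximation, maximization);
% in MCMC-SAEM the simulation step is one Markov-chain transition targeting
% the conditional law p(z | y, theta_k).
\begin{align*}
  z_{k+1} &\sim \Pi_{\theta_k}(z_k, \cdot)
      && \text{(simulation: one MCMC transition targeting } p(z \mid y, \theta_k)\text{)} \\
  s_{k+1} &= s_k + \gamma_k \bigl( S(y, z_{k+1}) - s_k \bigr)
      && \text{(stochastic approximation of the statistics)} \\
  \theta_{k+1} &= \operatorname*{arg\,max}_{\theta} \; \ell(s_{k+1}; \theta)
      && \text{(maximization, closed-form for exponential families)}
\end{align*}
```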
Styles: APA, Harvard, Vancouver, ISO, etc.