Dissertations / Theses on the topic 'Constrained exploration'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Constrained exploration.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Garcelon, Evrard. "Constrained Exploration in Reinforcement Learning." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAG007.

Abstract:
A major application of machine learning is to provide personalized content to different users. In general, the algorithms powering those recommendations are supervised learning algorithms, that is, the data used to train them are assumed to be sampled from the same distribution. However, the data are generated through interactions between the users and the recommendation algorithms themselves. Thus, recommendations made for a user at time t can have an impact on the set of pertinent recommendations at a later time. It is therefore necessary to take those interactions into account. This setting is reminiscent of online learning. Among online learning algorithms, bandit and Reinforcement Learning (RL) algorithms look the most promising to replace supervised learning algorithms for applications requiring a certain degree of personalization. The deployment of RL algorithms in production presents some challenges, such as guaranteeing a certain level of performance during exploration phases, or guaranteeing the privacy of the data collected by RL algorithms. In this thesis, we consider different constraints limiting the use of RL algorithms, and we provide both empirical and theoretical results on the impact of those constraints on the learning process.
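One such constraint studied in this line of work is conservative exploration, where the learner must remain provably close to a trusted baseline policy while it explores. The sketch below illustrates the idea in a bandit setting; the function and parameter names are invented, and the simplified performance check does not reproduce the thesis' exact conditions.

```python
import numpy as np

def conservative_action(ucb, lcb, cum_reward_lcb, baseline_arm,
                        baseline_mean, t, alpha=0.1):
    """Play the optimistic arm only if a pessimistic estimate of the
    cumulative reward after this round still exceeds (1 - alpha) times
    what the trusted baseline would have earned; otherwise fall back to
    the baseline arm. `ucb`/`lcb` are per-arm confidence bounds."""
    candidate = int(np.argmax(ucb))
    projected = cum_reward_lcb + lcb[candidate]       # worst-case total
    floor = (1.0 - alpha) * baseline_mean * (t + 1)   # performance floor
    return candidate if projected >= floor else baseline_arm
```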
2

Carvalho, Filho José Gilmar Nunes de. "Multi-robot exploration with constrained communication." reponame:Repositório Institucional da UFSC, 2016. https://repositorio.ufsc.br/xmlui/handle/123456789/171998.

Abstract:
Doctoral thesis – Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Automação e Sistemas, Florianópolis, 2016.
Over the last two decades, several methods for exploration with Multi-Robot Systems (MRS) have been proposed, most of them based on the allocation of frontiers (exploration targets) and typically applying local optimization policies. However, communication issues have usually been neglected. This thesis investigates multi-robot exploration under the assumption that robots have a limited communication radius. Two methods for sharing map information are proposed, one based on a flat network architecture (DSM) and another based on a hierarchical architecture (HSM). While DSM uses a propagation scheme to share information and synchronize the robots' maps, HSM organizes robots in a hierarchy where some robots act as leaders (clusterheads) and are responsible for synchronizing the maps of the robots in the network. A formal proof that both methods guarantee the synchronization of the maps of all robots in a network is presented. In addition, experiments were conducted on systems with different numbers of robots, network topologies and map sizes. The results show that both methods are able to synchronize the robots' maps even when communication links can be lost, but HSM usually achieves shorter convergence times, fewer exchanged messages and less transmitted data. We also propose Hierarchical K-Means (HKME), a method for multi-robot coordination in exploration tasks that handles communication problems such as link losses. To handle communication among robots, HKME arranges them into clusters and elects a leader for each. Clusters evolve dynamically as robots lose or establish communication with their peers. HKME uses HSM to guarantee that the robots' maps remain synchronized, and it uses the hierarchical organization of the robots to coordinate them so as to minimize the variance of the time at which they reach all regions of the workspace, while balancing their workload and decreasing the exploration time. Experiments were conducted with different types of workspace and communication radii. The results show that HKME behaves like a centralized algorithm when communication is guaranteed, while being able to withstand severe degradation of the communication radius.

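The map-synchronization step at the heart of both methods can be illustrated with a toy occupancy-grid merge. This is an assumed simplification, not the actual DSM/HSM protocols, which add propagation scheduling and clusterhead election.

```python
import numpy as np

UNKNOWN = -1  # cell not observed yet; 0 = free, 1 = occupied

def merge_maps(a, b):
    """Fill the unknown cells of grid `a` with whatever grid `b` knows."""
    return np.where(a == UNKNOWN, b, a)

def propagation_round(maps, links):
    """One flat, DSM-style round: every pair of robots sharing a live
    communication link exchanges and merges maps. Repeating rounds until
    no map changes synchronizes every connected component."""
    for i, j in links:
        merged = merge_maps(maps[i], maps[j])
        maps[i], maps[j] = merged, merged.copy()
```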
3

Duong, Khanh-Chuong. "Constrained clustering by constraint programming." Thesis, Orléans, 2014. http://www.theses.fr/2014ORLE2049/document.

Abstract:
Cluster analysis is an important task in Data Mining, with hundreds of different approaches in the literature. Over the last decade, cluster analysis has been extended to constrained clustering, also called semi-supervised clustering, so as to integrate prior knowledge on the data into clustering algorithms. In this dissertation, we explore Constraint Programming (CP) for solving constrained clustering tasks. The main principles of CP are: (1) users declaratively specify the problem as a Constraint Satisfaction Problem; (2) solvers find solutions by constraint propagation and search. Relying on CP has two main advantages: declarativity, which makes it easy to add new constraints, and the ability to find an optimal solution satisfying all the constraints (when one exists). We propose two CP-based models to address constrained clustering tasks. The models are flexible and general; they support instance-level must-link and cannot-link constraints and different cluster-level constraints, and they allow users to choose among different optimization criteria. In order to improve efficiency, various aspects are studied in the dissertation. Experiments on various classical datasets show that our models are competitive with other exact approaches. We show that our models can easily be embedded in a more general process, and we illustrate this on the problem of finding the Pareto front of a bi-criterion optimization process.
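To make the task concrete, here is a brute-force rendering of constrained clustering with must-link and cannot-link constraints. A CP model states the same requirements declaratively and prunes the assignment space by propagation instead of enumerating it; this sketch is illustrative and is not the thesis' models.

```python
from itertools import product
import numpy as np

def exact_constrained_clustering(X, k, must_link, cannot_link):
    """Enumerate all assignments of n points to k clusters, discard those
    violating a pairwise constraint, and return the feasible assignment
    minimizing the within-cluster sum of squares. Only usable on tiny
    instances (k**n assignments); a CP solver prunes instead."""
    X = np.asarray(X, dtype=float)
    best, best_cost = None, float("inf")
    for assign in product(range(k), repeat=len(X)):
        if any(assign[i] != assign[j] for i, j in must_link):
            continue
        if any(assign[i] == assign[j] for i, j in cannot_link):
            continue
        cost = 0.0
        for c in set(assign):
            members = X[[i for i, a in enumerate(assign) if a == c]]
            cost += ((members - members.mean(axis=0)) ** 2).sum()
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost
```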
4

Kettler, Daniel Terrance. "Mechanical design for the tactile exploration of constrained internal geometries." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/50272.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2009.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
MIT Institute Archives copy: with CD-ROM; divisional library copy with no CD-ROM.
Includes bibliographical references (p. 93-98).
Rising world oil prices and advanced oil recovery techniques have made it economically attractive to rehabilitate abandoned oil wells. This requires guiding tools through well junctions where divergent branches leave the main wellbore. The unknown locations and shapes of these junctions must be determined. Harsh down-well conditions prevent the use of ranged sensors. However, robotic tactile exploration using a manipulator is well suited to this problem. This tactile characterization must be done quickly because of the high costs of working on oil wells. Consequently, intelligent tactile exploration algorithms that can characterize a shape using sparse data sets must be developed. This thesis explores the design and system architecture of robotic manipulators for down-well tactile exploration. A design approach minimizing sensing is adopted to produce a system that is mechanically robust and suited to the harsh down-well environment. A feasibility study on down-well tactile exploration manipulators is conducted. This study focuses on the mature robotic technology of link and joint manipulators with zero or low kinematic redundancy. This study produces a field system architecture that specifies a unified combination of control, sensing, and kinematic solutions for down-well applications. An experimental system is built to demonstrate the proposed field system architecture and test control and intelligent tactile exploration algorithms. Experimental results to date have indicated acceptability of the proposed field system architecture and have demonstrated the ability to characterize geometry with sparse tactile data. Serpentine manipulators implemented using digital mechatronic actuation are also considered. Digital mechatronic devices use actuators with discrete output states and have the potential to be mechanically robust and inexpensive. The design of digital mechatronic devices is challenging. Design parameter optimization methods are developed and applied to a design case study of a manipulator in a constrained workspace. This research demonstrates that down-well tactile exploration with a manipulator is feasible. Experimental results show that the proposed field system architecture, a 4 degree-of-freedom anthropomorphic manipulator, can obtain accurate tactile data without using any sensor feedback besides manipulator joint angles.
by Daniel Terrance Kettler.
S.M.
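As an illustration of characterizing geometry from sparse contact data (an assumed example, not an algorithm taken from the thesis), a least-squares circle fit can estimate a wellbore cross-section from a handful of probe contacts:

```python
import numpy as np

def fit_circle(points):
    """Kasa least-squares circle fit. From x^2 + y^2 = 2ax + 2by + c,
    solve linearly for the center (a, b) and c = r^2 - a^2 - b^2."""
    P = np.asarray(points, dtype=float)
    A = np.column_stack([2 * P[:, 0], 2 * P[:, 1], np.ones(len(P))])
    rhs = (P ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (a, b), float(np.sqrt(c + a * a + b * b))

# Four sparse contacts on a unit circle centered at (2, 3):
print(fit_circle([(3, 3), (1, 3), (2, 4), (2, 2)]))  # ((2.0, 3.0), 1.0)
```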
5

Chung, Jen Jen. "Learning to soar: exploration strategies in reinforcement learning for resource-constrained missions." Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/11733.

Abstract:
An unpowered aerial glider learning to soar in a wind field presents a new manifestation of the exploration-exploitation trade-off. This thesis proposes a directed, adaptive and nonmyopic exploration strategy in a temporal difference reinforcement learning framework for tackling the resource-constrained exploration-exploitation task of this autonomous soaring problem. The complete learning algorithm is developed in a SARSA(λ) framework, which uses a Gaussian process with a squared exponential covariance function to approximate the value function. The three key contributions of this thesis form the proposed exploration-exploitation strategy. Firstly, a new information measure is derived from the change in the variance volume surrounding the Gaussian process estimate. This measure of information gain is used to define the exploration reward of an observation. Secondly, a nonmyopic information value is presented that captures both the immediate exploration reward due to taking an action as well as future exploration opportunities that result. Finally, this information value is combined with the state-action value of SARSA(λ) through a dynamic weighting factor to produce an exploration-exploitation management scheme for resource-constrained learning systems. The proposed learning strategy encourages either exploratory or exploitative behaviour depending on the requirements of the learning task and the available resources. The performance of the learning algorithms presented in this thesis is compared against other SARSA(λ) methods. Results show that actively directing exploration to regions of the state-action space with high uncertainty improves the rate of learning, while dynamic management of the exploration-exploitation behaviour according to the available resources produces prudent learning behaviour in resource-constrained systems.
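The resulting management scheme can be pictured as a resource-dependent blend of exploitation and exploration values. The sketch below assumes a simple linear weighting; the dynamic weighting factor in the thesis is its own contribution and need not take this form.

```python
import numpy as np

def blended_scores(q_values, info_values, resource_frac):
    """Blend SARSA(lambda) state-action values with nonmyopic information
    values. `resource_frac` in [0, 1] is the remaining mission resource
    (e.g., altitude for a glider): plentiful resources favor exploration,
    scarce ones favor exploitation."""
    w = float(np.clip(resource_frac, 0.0, 1.0))
    return (1.0 - w) * np.asarray(q_values) + w * np.asarray(info_values)

# With most of the resource left, the high-information action wins:
best = int(np.argmax(blended_scores([0.2, 0.5], [0.9, 0.1], 0.8)))
```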
6

Sun, Hua. "Throughput constrained and area optimized dataflow synthesis for FPGAs." Diss., Brigham Young University, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2276.pdf.

7

Bhattacharjee, Protim [Verfasser], Veniamin [Akademischer Betreuer] Morgenshtern, and Martin [Gutachter] Burger. "Compressed Sensing based Image Acquisition Methodologies for Constrained Autonomous Exploration Systems with Single Pixel Cameras / Protim Bhattacharjee ; Gutachter: Martin Burger ; Betreuer: Veniamin Morgenshtern." Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2021. http://d-nb.info/123348429X/34.

8

Williams, Nicholas Cory. "Geologically-constrained UBC–GIF gravity and magnetic inversions with examples from the Agnew-Wiluna greenstone belt, Western Australia." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2744.

Abstract:
Geologically-constrained inversion of geophysical data is a powerful method for predicting geology beneath cover. The process seeks 3D physical property models that are consistent with the geology and explain measured geophysical responses. The recovered models can guide mineral explorers to prospective host rocks, structures, alteration and mineralisation. This thesis provides a comprehensive analysis of how the University of British Columbia Geophysical Inversion Facility (UBC–GIF) gravity and magnetic inversions can be applied to subsurface mapping and exploration by demonstrating the necessary approach, data types, and typical results. The non-uniqueness of inversion demands that geological information be included. Commonly available geological data, including structural and physical property measurements, mapping, drilling, and 3D interpretations, can be translated into appropriate inversion constraints using tools developed herein. Surface information provides the greatest improvement in the reliability of recovered models; drilling information enhances resolution at depth. The process used to prepare inversions is as important as the geological constraints themselves. Use of a systematic workflow, as developed in this study, minimises any introduced ambiguity. Key steps include defining the problem, preparing the data, setting inversion parameters and developing geological constraints. Once reliable physical property models are recovered they must be interpreted in a geological context. Where alteration and mineralisation occupy significant volumes, the mineralogy associated with the physical properties can be identified; otherwise a lithological classification of the properties can be applied. This approach is used to develop predictive 3D lithological maps from geologically-constrained gravity and magnetic inversions at several scales in the Agnew-Wiluna greenstone belt in Australia’s Yilgarn Craton. These maps indicate a spatial correlation between thick mafic-ultramafic rock packages and gold deposit locations, suggesting a shared structural control. The maps also identify structural geometries and relationships consistent with the published regional tectonic framework. Geophysical inversion provides a framework into which geological and geophysical data sets can be integrated to produce a holistic prediction of the subsurface. The best possible result is one that cannot be dismissed as inconsistent with some piece of geological knowledge. Such a model can only be recovered by including all available geological knowledge using a consistent workflow process.
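The mathematical core of geologically-constrained inversion is a regularized least-squares problem in which prior geology enters through a reference model. A minimal Tikhonov-style sketch follows; the UBC–GIF codes additionally use depth weighting, smoothness terms and bound constraints.

```python
import numpy as np

def constrained_inversion(G, d, m_ref, beta):
    """Minimize ||G m - d||^2 + beta ||m - m_ref||^2, where G maps a
    physical-property model m to predicted gravity/magnetic data d and
    m_ref encodes geological knowledge (mapping, drilling, physical
    property measurements). Solved via the normal equations."""
    n = G.shape[1]
    A = G.T @ G + beta * np.eye(n)
    b = G.T @ d + beta * m_ref
    return np.linalg.solve(A, b)
```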
9

Clifford, Gayle. ""Am iz kwiin" (I'm his queen) : an exploration of mothers' disclosure of maternal HIV to their children in Kingston, Jamaica : using feminist Interpretative Phenomenological Analysis (IPA) in a resource-constrained context." Thesis, City, University of London, 2018. http://openaccess.city.ac.uk/21213/.

Abstract:
Introduction: World Health Organisation (WHO) policy presents parental HIV disclosure to children as beneficial and encourages parents to disclose. Most research on disclosure has been conducted in high income countries and tends to represent women's choices in terms of a disclosure/non-disclosure binary which, I argue, is premised on rationalist theory models of decision making and disclosure which fail to contextualise women's experiences, particularly those of women who live in the Global South. This research study aimed to address gaps in existing research by exploring the maternal disclosure experiences of HIV positive Jamaican mothers to their seronegative children, and it offers a critique of existing WHO policy. Methods: I carried out in-depth interviews with 15 HIV positive Jamaican women with at least one seronegative child aged over 10 years, associated with one clinic and one NGO in Kingston, Jamaica. I adopted a feminist approach to Interpretative Phenomenological Analysis (IPA) and applied Hochschild's concept of emotion work to make sense of women's experiences. In attending to the structural factors shaping health actions, a feminist approach highlights the relationship between the Jamaican contextual factors of poverty, violence and complex familial formations and women's disclosure decisions. Conceptual resources drew on feminist critiques of dominant discourses of motherhood, including governmentality and responsibilisation, which, I argue, underpin policy imperatives on disclosure to children. Results: Mothers' experiences of maternal disclosure to children occurred on a spectrum, rather than a disclosure/non-disclosure binary, and included: full disclosure, partial disclosure, non-disclosure, denial of HIV, differential disclosure (telling only some of their children) and disclosure by others. Experiences of disclosure were affected by financial risks and practical issues as well as consideration of children's long-term physical and mental health, education prospects and the impact on other family relationships. Mothering at a distance (mothers living apart from their child/ren) and the fear or reality of 'downfallment' (a child being HIV positive) further complicated disclosure experiences. The women described strategies which challenged negative characterisations of HIV positive women in order to present themselves as capable mothers and manage their own and their children's emotions. Conclusion: Disclosure of maternal HIV to children is a complex issue, carrying risks as well as benefits, which are particularly heightened in low income contexts. When women disclose, this could be seen as a form of governmentality, and when they do not disclose, their mothering is called into question within policy discourses predicated on evidence from the Anglo North. The oversimplistic disclosure/non-disclosure binary fails to consider the emotion work women engage in to manage their illness and their mothering identity in the context of their relationships with their children. This research adds to the HIV disclosure literature from low and middle income countries and extends maternal HIV disclosure research through the use of a novel approach, feminist IPA, to understand women's experiences. The research findings point to the need for a more nuanced policy on disclosure in low and middle income countries.
10

Craparo, Emily M. (Emily Marie) 1980. "Cooperative exploration under communication constraints." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/46558.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2008.
Includes bibliographical references (leaves 131-137).
The cooperative exploration problem necessarily involves communication among agents, while the spatial separation inherent in this task places fundamental limits on the amount of data that can be transmitted. However, the impact of limited communication on the exploration process has not been fully characterized. Existing exploration algorithms do not realistically model the tradeoff between expansion, which allows more rapid exploration of the area of interest, and maintenance of close relative proximity among agents, which facilitates communication. This thesis develops new algorithms applicable to the problem of cooperative exploration under communication constraints. The exploration problem is decomposed into two parts. In the first part, cooperative exploration is considered in the context of a hierarchical communication framework known as a mobile backbone network. In such a network, mobile backbone nodes, which have good mobility and communication capabilities, provide communication support for regular nodes, which are constrained in movement and communication capabilities but which can sense the environment. New exact and approximation algorithms are developed for throughput optimization in networks composed of stationary regular nodes, and new extensions are formulated to take advantage of regular node mobility. These algorithms are then applied to a cooperative coverage problem. In the second part of this work, techniques are developed for utilizing a given level of throughput in the context of cooperative estimation. The mathematical properties of the information form of the Kalman filter are leveraged in the development of two algorithms for selecting highly informative portions of the information matrix for transmission. One algorithm, a fully polynomial time approximation scheme, provides provably good results in computationally tractable time for problem instances of a particular structure. The other, a heuristic method applicable to instances of arbitrary matrix structure, performs very well in simulation for randomly-generated problems of realistic dimension.
by Emily M. Craparo.
Ph.D.
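For the estimation part, the selection of informative portions of the information matrix can be illustrated with a generic greedy heuristic scored by log-determinant gain. This is an illustration of the problem, not the thesis' FPTAS or its heuristic.

```python
import numpy as np

def greedy_information_selection(contributions, budget, dim):
    """Pick up to `budget` candidate information-matrix contributions,
    each a dim x dim PSD matrix, greedily maximizing the log-det of the
    fused information matrix (a standard informativeness proxy)."""
    fused = 1e-6 * np.eye(dim)        # small prior keeps the matrix invertible
    chosen = []
    remaining = list(range(len(contributions)))
    for _ in range(min(budget, len(contributions))):
        gains = [np.linalg.slogdet(fused + contributions[i])[1]
                 for i in remaining]
        best = remaining[int(np.argmax(gains))]
        fused += contributions[best]
        chosen.append(best)
        remaining.remove(best)
    return chosen, fused
```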
11

Kilian, Axel 1971. "Design exploration through bidirectional modeling of constraints." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/33803.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Architecture, 2006.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 315-324).
Today digital models for design exploration are not used to their full potential. The research efforts of the past decades have placed geometric design representations firmly at the center of digital design environments. In this thesis it is argued that models for design exploration that bridge different representations aid in the discovery of novel designs. Replacing commonly used analytical, uni-directional models for linking representations with bidirectional ones further supports design exploration. The key benefit of bidirectional models is the ability to swap the roles of driver and driven in the exploration. The thesis developed around a set of design experiments that tested the integration of bidirectional computational models in domain-specific designs. From the experiments three main exploration types emerged: branching explorations, for establishing constraints for an undefined design problem, illustrated in the design of a concept car; circular explorations, for the refinement of constraint relationships, illustrated in the design of a chair; and parallel explorations, for exercising well-understood constraints, illustrated in a form-finding model in architecture. A key contribution of the thesis is the novel use of constraint diagrams developed to construct design explorers for the experiments. The diagrams show the importance of translations between design representations in establishing design drivers from the set of constraints. The incomplete mapping of design features across different representations requires the redescription of the design for each translation. This redescription is a key aspect of exploration and supports design innovation. Finally, this thesis argues that the development of design-specific design explorers favors a shift in software design away from monolithic, integrated software environments and towards open software platforms that support user development.
by Axel Kilian.
Ph.D.
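The essence of a bidirectional model is that any quantity in a constraint can serve as driver or driven. A toy sketch with a single area constraint, purely illustrative (the thesis builds such models for a concept car, a chair and architectural form-finding):

```python
def area_constraint(width=None, height=None, area=None):
    """Bidirectional constraint area = width * height: supply any two
    values and the third is derived, so the designer can swap which
    quantity drives the exploration and which is driven."""
    if area is None:
        area = width * height
    elif width is None:
        width = area / height
    elif height is None:
        height = area / width
    return {"width": width, "height": height, "area": area}

print(area_constraint(width=2.0, height=3.0))  # area is driven: 6.0
print(area_constraint(area=6.0, height=3.0))   # width is driven: 2.0
```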
12

Cerf, Loïc. "Constraint-based mining of closed patterns in noisy n-ary relations." Lyon, INSA, 2010. http://theses.insa-lyon.fr/publication/2010ISAL0050/these.pdf.

Abstract:
Useful knowledge discovery processes can be based on patterns extracted from large datasets. Designing efficient data mining algorithms to compute collections of relevant patterns is an active research domain. Many datasets record whether some properties hold for some objects, e.g., whether an item is bought by a customer or whether a gene is over-expressed in a biological sample. Such datasets are binary relations and can be represented as 0/1 matrices. In such matrices, a closed itemset is a maximal rectangle of '1's modulo arbitrary permutations of the rows (objects) and the columns (properties). Thus, every closed itemset supports the discovery of a maximal subset of objects sharing the same maximal subset of properties. Efficiently extracting every closed itemset satisfying user-defined relevancy constraints has been extensively studied. Despite its success across many application domains, this framework often turns out to be too narrow. First of all, many datasets are n-ary relations, i.e., 0/1 tensors. Reducing their analysis to two dimensions means ignoring potentially interesting additional dimensions, e.g., where a customer buys an item (localized analysis) or when a gene expression is measured (kinetic analysis). The presence of noise in most real-life datasets is a second issue, which leads to the fragmentation of the patterns to discover. Generalizing the definition of a closed itemset to make it suit relations of higher arity and tolerate some noise is straightforward (a maximal hyper-rectangle with an upper bound on the '0's tolerated per hyperplane). On the contrary, generalizing their extraction is very hard. Indeed, classical algorithms exploit a mathematical property (the Galois connection) of closed itemsets that neither of the two generalizations preserves. That is why our extractor browses the candidate pattern space in an original way that does not favor any dimension. This search can be guided by a very broad class of relevancy constraints the patterns must satisfy. In particular, this thesis studies constraints specifically designed for mining almost-persistent cliques in dynamic graphs. Our extractor is orders of magnitude faster than known competitors that focus on exact patterns in ternary relations or on noise-tolerant patterns in binary relations. Despite these results, such an exhaustive approach often cannot, in a reasonable time, tolerate as much noise as the dataset contains. In this case, complementing the extraction with a hierarchical agglomeration of the (insufficiently noise-tolerant) patterns increases the quality of the returned collection of patterns.
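For the exact binary special case, the closure operation behind closed itemsets is easy to state; the thesis' contribution is precisely the n-ary, noise-tolerant generalization for which the underlying Galois connection no longer holds. A small sketch:

```python
import numpy as np

def closure(M, items):
    """Closure of a set of columns in a 0/1 matrix M: take every row
    having '1' in all selected columns, then every column having '1' in
    all those rows. `items` is a closed itemset iff it equals its own
    closure (a maximal rectangle of '1's up to permutations)."""
    rows = np.where(M[:, sorted(items)].all(axis=1))[0]
    if len(rows) == 0:
        return set(range(M.shape[1]))  # empty support: all columns
    return set(np.where(M[rows, :].all(axis=0))[0])

M = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
print(closure(M, {0}))  # {0, 1}: every row with item 0 also has item 1
```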
13

Mitašiūnaitė, Ieva. "Mining string data under similarity and soft-frequency constraints : application to promoter sequence analysis." Lyon, INSA, 2009. http://theses.insa-lyon.fr/publication/2009ISAL0036/these.pdf.

Abstract:
An inductive database is a database that contains not only data but also patterns. Inductive databases are designed to support the KDD process. Recent advances in inductive database research have given rise to generic solvers capable of answering inductive queries that are arbitrary Boolean combinations of anti-monotonic and monotonic constraints. They are designed to mine different types of patterns (i.e., patterns from different pattern languages). An instance of such a generic solver exists that is capable of mining string patterns from string datasets. In our main application, promoter sequence analysis, there is a requirement to handle fault tolerance, as the data intrinsically contain errors and the phenomenon we are trying to capture is fundamentally degenerate. Our research contribution to fault-tolerant pattern extraction in string datasets is the use of a generic solver, based on a non-trivial formalisation of fault-tolerant pattern extraction as a constraint-based mining task. We identified the stages in the extraction of such patterns where state-of-the-art strategies can be applied to prune the search space. We then developed a fault-tolerant pattern match function, InsDels, that generic constraint-solving strategies can soundly tackle. We also focused on making local patterns actionable. The bottleneck of most local pattern extraction methods is the burden of spurious patterns. As the analysis of patterns by application domain experts is time-consuming, we cannot afford to present patterns without any objective clue about their relevancy. Therefore we developed two methods of computing the expected number of patterns extracted in random datasets. If the number of extracted patterns is strongly different from the expected number in random datasets, one can state that the results exhibit local associations that are a priori relevant because they are unexpected. Among other applications, we have applied our approach to support the discovery of new motifs in gene promoter sequences, with promising results.
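A toy version of a fault-tolerant match predicate in the spirit of InsDels, assuming occurrences are defined by bounded numbers of insertions and deletions; the thesis' actual definition and its monotonicity analysis are more refined.

```python
from functools import lru_cache

def soft_match(pattern, window, max_ins, max_del):
    """Does `pattern` occur in `window` allowing at most `max_ins`
    inserted symbols in the window and `max_del` deleted pattern
    symbols? A simple DP over (pattern index, window index)."""
    @lru_cache(maxsize=None)
    def ok(i, j, ins, dels):
        if i == len(pattern):                       # pattern fully matched
            return True
        if j == len(window):                        # window exhausted
            return dels + (len(pattern) - i) <= max_del
        if pattern[i] == window[j] and ok(i + 1, j + 1, ins, dels):
            return True
        if ins < max_ins and ok(i, j + 1, ins + 1, dels):   # skip window symbol
            return True
        return dels < max_del and ok(i + 1, j, ins, dels + 1)  # skip pattern symbol

    return ok(0, 0, 0, 0)

print(soft_match("TATA", "TACTA", max_ins=1, max_del=0))  # True
```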
14

Chahdi, Hatim. "Apports des ontologies à l'analyse exploratoire des images satellitaires." Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS014/document.

Abstract:
Satellite images have become a valuable source of information for Earth observation. They are used to address and analyze multiple environmental issues such as landscape characterization, urban planning or biodiversity conservation, to cite a few. Despite the large number of existing knowledge extraction techniques, the complexity of satellite images, their large volume, and the specific needs of each community of practice give rise to new challenges and require the development of highly efficient approaches. In this thesis, we investigate the potential of intelligently combining knowledge representation systems with statistical learning. Our goal is to develop novel methods that allow automatic analysis of remote sensing images. We elaborate, in this context, two new approaches that consider the images as unlabeled quantitative data and examine the possible use of the available domain knowledge. Our first contribution is a hybrid approach that successfully combines ontology-based reasoning and semi-supervised clustering for semantic classification. An inference engine first reasons over the available domain knowledge in order to obtain semantically labeled instances. These instances are then used to generate constraints that guide and enhance the clustering. In this way, our method allows the labeling of existing classes to be improved while discovering new ones. Our second contribution focuses on scaling ontology reasoning over large datasets. We propose a two-step approach in which topographic clustering is first applied in order to summarize the data as a set of prototypes, thereby reducing the number of instances to be treated by the reasoner. The representative prototypes are then labeled using the ontology, and the labels are automatically propagated to all the input data. We applied our methods to the real-world problem of satellite image classification and interpretation, and the results obtained are very promising. They showed, on the one hand, that the quality of the classification can be improved by automatic knowledge integration and that the involvement of experts can be reduced, and, on the other hand, that the upstream use of topographic clustering avoids computing inferences over all the pixels of the image.
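The clustering half of the first approach can be sketched as seeded k-means: instances semantically labeled by the reasoner pin their clusters, while unlabeled pixels move freely. The seeded form below is an assumption for illustration; the thesis' constraint generation is richer.

```python
import numpy as np

def seeded_kmeans(X, seed_labels, k, iters=20):
    """Semi-supervised k-means. `seed_labels` is an integer NumPy array
    with seed_labels[i] in {0..k-1} if instance i was labeled by the
    ontology reasoner and -1 otherwise. Labeled instances never change
    cluster; each of the k classes needs at least one seed."""
    X = np.asarray(X, dtype=float)
    assign = seed_labels.copy()
    centers = np.stack([X[seed_labels == c].mean(axis=0) for c in range(k)])
    free = seed_labels == -1
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        assign[free] = d2[free].argmin(axis=1)
        centers = np.stack([X[assign == c].mean(axis=0) for c in range(k)])
    return assign, centers
```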
15

Spirin, Victor. "Multi-agent exploration of indoor environments under limited communication constraints." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:b337a9a7-91c7-451c-b32f-b1cd05ef983d.

Abstract:
This thesis considers cooperation strategies for teams of agents autonomously exploring an unknown indoor environment under limited communication constraints. The primary application considered is Urban Search-and-Rescue, although other applications are possible, such as surveying hazardous areas. We focus on developing cooperation strategies that enable periodic communication between the exploring agents and the base station (human operators). Such strategies involve an inherent trade-off between allocating team resources towards facilitating communication and increasing the speed of exploration. We propose two classes of approaches to address this problem: using opportunistic rendezvous to guide the team behaviour, and explicitly arranging rendezvous between agents. In the opportunistic approach, the allocation of team resources between exploration and communication can be indicated with a single numerical parameter between 0 and 1 -- the return ratio -- which leads to complex emergent cooperative behaviour. We show that in some operating environments agents can benefit from explicitly arranging rendezvous. We propose a novel definition of a rendezvous location as a tuple of points and show how such locations can be generated so that the topology of the environment and the communication ranges of agents can be exploited. We show how such rendezvous locations can be used to both improve the speed of exploration and to improve team connectivity by allowing relays to contribute to the overall exploration. We evaluate these approaches extensively in simulation and discuss their applicability in search-and-rescue scenarios.
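The proposed rendezvous definition can be made concrete with a small sketch: a rendezvous location is a tuple of points that need only be mutually within communication range, not identical. The `in_range` predicate here is a stand-in for the thesis' wall-aware link model.

```python
from itertools import combinations

def candidate_rendezvous(free_points, in_range):
    """Enumerate rendezvous locations as pairs (p, q) of distinct free
    points whose radio link holds; exploiting the topology, a relay can
    sit around a corner from the explorer rather than at its position."""
    return [(p, q) for p, q in combinations(free_points, 2) if in_range(p, q)]

pts = [(0, 0), (0, 4), (3, 0)]
near = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= 16  # range 4, assumed
print(candidate_rendezvous(pts, near))  # [((0, 0), (0, 4)), ((0, 0), (3, 0))]
```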
16

McIver, Donald A. "Epithermal precious metal deposits: physicochemical constraints, classification characteristics and exploration guidelines." Thesis, Rhodes University, 1997. http://hdl.handle.net/10962/d1005553.

Abstract:
Epithermal deposits include a broad range of precious metal, base metal, mercury, and stibnite deposits. These deposits exhibit a low temperature of formation (180-280°C) at pressures of less than a few hundred bars (equivalent to depths of 1.5-2.0 km). Epithermal gold deposits are the product of large-scale hydrothermal systems which mostly occur in convergent plate margin settings. Associated volcanism is largely of andesitic arc (calc-alkaline to alkaline) or rhyolitic back-arc type. Porphyry Cu-Mo-Au deposits form deeper in the same systems. Genetic processes within individual deposits take place in an extremely complex manner. The resultant mineral associations, alteration styles and metal deposition patterns are even more complicated. Many attempts have been made to classify epithermal deposits based on mineralogy and alteration, host rocks, deposit form, genetic models, and standard deposits. For the explorationist, the most useful classification schemes should be brief, simple, descriptive, observationally based, and informative. Ultimately, two distinct styles of epithermal gold deposits are readily recognised: high-sulphidation, acid-sulphate and low-sulphidation, adularia-sericite types. The terms high-sulphidation (HS) and low-sulphidation (LS) are based on the sulphidation state of associated sulphide minerals, which, along with characteristic hydrothermal alteration, reflect fundamental chemical differences in the epithermal environment. High-sulphidation-type deposits form in the root zones of volcanic domes from acid waters that contain residual magmatic volatiles. The low-sulphidation-type deposits form in geothermal systems where surficial waters mix with deeper, heated saline waters in a lateral flow regime, where neutral to weakly acidic, alkali chloride waters are dominant. The HS/LS classification, combined with a simple description of the form of the deposit, conveys a large amount of information on the mineralogy, alteration, and spatial characteristics of the mineralisation, and allows inferences to be drawn regarding likely regional controls and the characteristics of the ore-forming fluids. The modern understanding of these environments allows us to quite effectively identify the most probable foci of mineral deposition in any given district. Current knowledge of these deposits has been derived from studies of active geothermal systems. Through comparison with alteration zones within these systems, the exploration geologist may determine the potential distribution and types of ore in a fossil geothermal system. Alteration zoning specifically can be used as a guide towards the most prospective part of the system. Epithermal gold deposits of both HS and LS styles are nevertheless profoundly difficult exploration targets. Successful exploration must rely on the integration of a variety of exploration techniques, guided by an understanding of the characteristics of the deposits and the processes that form them. There are no simple formulae for success in epithermal exploration: what works best must be determined for each terrain and each prospect. On a regional scale, tectonic, igneous and structural settings can be used, together with assessment of the depth of erosion, to select areas for project-area-scale exploration. Integrated geological-geophysical interpretation derived from airborne geophysics provides a basis for targeting potential ore environments for follow-up. Geology, geochemistry and surface geophysics localise mineral concentrations within these target areas.
17

Acevedo, Valle Juan Manuel. "Sensorimotor exploration: constraint awareness and social reinforcement in early vocal development." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/667500.

Abstract:
This research is motivated by the benefits that knowledge regarding early development in infants may provide to different fields of science. In particular, early sensorimotor exploration behaviors are studied in the framework of developmental robotics. The main objective is to understand the role of motor constraint awareness and imitative behaviors during sensorimotor exploration. Particular emphasis is placed on prelinguistic vocal development, because during this stage infants start to master the motor systems that will later allow them to pronounce their first words. Previous works have demonstrated that goal-directed, intrinsically motivated sensorimotor exploration is an essential element for sensorimotor control learning. Moreover, evidence coming from the biological sciences strongly suggests that knowledge acquisition is shaped by the environment in which an agent is embedded and by the embodiment of the agent itself, including developmental processes that shape what can be learned and when. In this dissertation, we first provide a collection of theoretical evidence that supports the relevance of our study. Starting from concepts of cognitive and developmental sciences, we arrive at the conclusion that spoken language, i.e., early vocal development, must be studied as an embodied and situated phenomenon. Considering a synthetic approach allows us to use robots and realistic simulators as artifacts to study natural cognitive phenomena. In this work, we adopt a toy example to test our cognitive architectures, and a speech synthesizer that mimics the mechanisms by which humans produce speech. Next, we introduce a mechanism to endow embodied agents with motor constraint awareness. Intrinsic motivation has been studied as an important element to explain the emergence of structured developmental stages during early vocal development. However, previous studies failed to acknowledge the constraints imposed by embodiment and situatedness at the sensory, motor, cognitive and social levels. We assume that at the onset of sensorimotor exploratory behaviors, motor constraints are unknown to the developmental agent. Thus, the agent must discover and learn during exploration what those motor constraints are. The agent is endowed with a somesthetic system based on tactile information. This system generates a sensor signal indicating whether a motor configuration was reached or not. This information is later used to create a somesthetic model to predict constraint violations. Finally, we propose to include social reinforcement during exploration. Some works studying early vocal development have shown that environmental speech shapes the sensory space explored during babbling. More generally, imitative behaviors have been demonstrated to be crucial for early development in children, as they constrain the search space during sensorimotor exploration. Therefore, based on early interactions of infants and caregivers, we propose an imitative mechanism to reinforce intrinsically motivated sensorimotor exploration with sensory units relevant to communication. Thus, we modified the constraint-aware sensorimotor exploration architecture to include a social instructor, expert in sensor units relevant to communication, which interacts with the developmental agent. Interaction occurs when the learner's production is similar enough to one relevant to communication. In that case, the instructor perceives this similitude and reformulates with the relevant sensor unit. When the learner perceives an utterance by the instructor, it attempts to imitate it. In general, our results suggest that somesthetic senses and social reinforcement contribute to achieving better results during intrinsically motivated exploration: exploration becomes less redundant, exploration and evaluation errors decrease, and developmental transitions appear more clearly.
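A minimal sketch of the somesthetic constraint model described above, assuming a nearest-neighbour classifier over motor commands; the dissertation's architecture and learning machinery differ.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

class SomestheticModel:
    """Learns to predict constraint violations: each executed motor
    command is labeled by a touch-like signal (1 = configuration
    reached, 0 = blocked by a motor constraint). During exploration,
    candidate commands predicted infeasible are discarded unexecuted."""
    def __init__(self):
        self.X, self.y = [], []

    def update(self, command, reached):
        self.X.append(list(command))
        self.y.append(int(reached))

    def feasible(self, command):
        if len(set(self.y)) < 2:        # no contrast yet: stay optimistic
            return True
        clf = KNeighborsClassifier(n_neighbors=min(5, len(self.X)))
        clf.fit(np.array(self.X), np.array(self.y))
        return bool(clf.predict(np.array([list(command)]))[0])
```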
18

Belaid, Mohamed-Bachir. "Declarative Itemset Mining Based on Constraint Programming." Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTS004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Data mining is the art of discovering knowledge from databases. The user specifies the type of patterns to be mined, and the miner uses techniques to find the required patterns. Many techniques have been introduced for mining traditional patterns like frequent itemsets, association rules, etc. However, mining patterns with additional properties remains a bottleneck for specialists nowadays due to the algorithmic effort needed to handle these properties. Recently, researchers have taken advantage of the flexibility of constraint programming to model various data mining problems. In terms of CPU time, constraint programming-based methods have not yet competed with ad hoc algorithms. However, their flexibility allows the modeling of complex user queries without revising the solving process. In this thesis we propose to use constraint programming for modeling and solving some well-known data mining problems. Our first contribution is a constraint programming model for mining association rules. To implement our model, we introduce a new global constraint, CONFIDENT, for ensuring the confidence of rules. We prove that completely propagating CONFIDENT is NP-hard. We thus provide a non-complete propagator and a decomposition for CONFIDENT. We also capture the minimal non-redundant rules, a condensed representation of association rules, by introducing the global constraint GENERATOR. GENERATOR is used for mining itemsets that are generators. For this constraint, we propose a complete polynomial propagator. Our second contribution is a generic framework based on constraint programming to mine both borders of frequent itemsets, i.e. the positive border or maximal frequent itemsets and the negative border or minimal infrequent itemsets. One can easily decide which border to mine by setting a simple parameter. For this, we introduce two new global constraints, FREQUENTSUBS and INFREQUENTSUPERS, with complete polynomial propagators. We then consider the problem of mining borders with additional constraints. We prove that this problem is coNP-hard, ruling out the hope for the existence of a single CSP solving this problem (unless coNP is in NP)
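For readers unfamiliar with the two borders that FREQUENTSUBS and INFREQUENTSUPERS characterize, the following brute-force sketch computes them on a toy transaction database (illustration only; it reflects none of the thesis's propagators or complexity guarantees):

```python
from itertools import combinations

# Toy transaction database; theta is the minimum-support threshold.
D = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}, {'b', 'c'}, {'a', 'b', 'c', 'd'}]
items = sorted(set().union(*D))
theta = 3

def support(itemset):
    return sum(itemset <= t for t in D)

frequent = [frozenset(c) for k in range(1, len(items) + 1)
            for c in map(frozenset, combinations(items, k))
            if support(c) >= theta]

# Positive border: maximal frequent itemsets (no frequent proper superset).
pos_border = [x for x in frequent if not any(x < y for y in frequent)]

# Negative border: minimal infrequent itemsets (every proper subset frequent).
freq_set = set(frequent)
freq_set.add(frozenset())                   # the empty set is always frequent
neg_border = [frozenset(c) for k in range(1, len(items) + 1)
              for c in combinations(items, k)
              if support(frozenset(c)) < theta
              and all(frozenset(s) in freq_set for s in combinations(c, k - 1))]
print(pos_border, neg_border)
```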
19

Pillay, Poobalan. "An empirical exploration of supply chain constraints facing the construction industry in South Africa." Thesis, Vaal University of Technology, 2016. http://hdl.handle.net/10352/382.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The South African Construction Industry is one of the largest contributors to the gross domestic product of the country as well as to employment. It has, however, been experiencing significant challenges due to multifaceted factors. The main objective of this research was to identify the supply chain management constraints within the South African Construction Industry and how these can be overcome. This study is by nature descriptive and exploratory and contains qualitative elements. The problems were identified through a literature review, focus group discussions and interviews with major construction companies in South Africa. The findings indicate that the main supply chain management constraints are largely internal and typical of supply chain methodologies and approaches. These constraints include, among others, a lack of coordination, collaboration and commitment between suppliers and clients within the supply chain; poor leadership in key areas of systems; design problems (many changes and inconsistent information); deficient internal and external communication and information transfer; and inadequate management within the supply chain, mainly poor planning and control. A model based on supply chain system management as well as the Theory of Constraints (TOC) has been developed that can be a useful tool to address the constraints in the construction sector. The study yields applicable recommendations for South African construction industry supply chains, covering the key themes articulated in the study, particularly benchmarking against the Theory of Constraints. Such recommendations include further research into core components of the supply chain, such as collaboration and logistics, and into how each component can be linked to the performance of the supply chain management system.
20

Guerrera, Christine. "Flexibility and constraint in lexical access: Explorations in transposed-letter priming." Diss., The University of Arizona, 2004. http://hdl.handle.net/10150/280702.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In order to recognize a written word, the relative positions of its component letters must be encoded. Ultimately, this information must be precise enough to distinguish between anagrams such as causal and casual while retaining enough flexibility to recognize elehpant as elephant. The lexical decision experiments reported here used a more dramatic version of transposed letter priming than has previously been reported in order to identify the constraints on this flexibility. In light of the observed data, several current models of letter position coding were evaluated and suggestions for future models were proposed. The first goal of this research was to determine the degree of flexibility in word recognition in terms of how many transposed letters can be tolerated in the input. Reliable priming was observed throughout the experiments when as many as six of the eight letters had been transposed (most ps < .01). However, Experiments 5 and 6 identified the limit of this flexibility, in that fully transposed primes did not activate their target entries. The second goal was to identify letter position effects, or differences in the importance of various letter positions in lexical access. Experiments 1-4 supported Jordan et al.'s (2003) claim that the exterior letters of a word are the most crucial. Stronger priming was derived from primes with correctly placed exterior letters and transposed interior letters than from the reverse case. Support was also found for Inhoff et al.'s (2003) claim that a word's initial letters are more important to lexical access than later letters (Experiment 7). Overall, a trend of decreasing importance from left to right was observed, with the possible exception of the final letter. The observed data were compared to the predictions made by the BLIRNET model (Mozer, 1991), Grainger & van Heuven's (in press) open bigram coding scheme, the SOLAR model (Davis, 1999), and the SERIOL model (Whitney, 1999). This enabled us to identify particularly effective and problematic approaches to letter position coding. Finally, it is proposed that a visual word recognition system with two parallel, complementary processing streams best describes the data.
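As a concrete illustration of the stimulus manipulation (a toy generator, not the authors' construction procedure; the function name and pairing scheme are hypothetical), transposing three disjoint adjacent letter pairs of an eight-letter word yields a prime with six of its eight letters transposed:

```python
import random

def transpose_prime(word, k, seed=0):
    # Toy generator (hypothetical): swap k//2 disjoint adjacent letter pairs,
    # producing a prime with k of the target's letters transposed.
    rng = random.Random(seed)
    letters = list(word)
    for i in rng.sample(range(0, len(word) - 1, 2), k // 2):
        letters[i], letters[i + 1] = letters[i + 1], letters[i]
    return ''.join(letters)

print(transpose_prime('elephant', 6))   # six of the eight letters transposed
```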
21

Zhao, Jing. "Household debt service burden outlook an exploration on the effect of credit constraints /." Connect to this title online, 2003. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1054650767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--Ohio State University, 2003.
Title from first page of PDF file. Document formatted into pages; contains xii, 210 p.; also includes graphics (some col.). Includes bibliographical references (p. 203-210). Available online via OhioLINK's ETD Center.
22

Gharbi, Amna. "Constraint programming for design space exploration of dataflow applications on multi-bus architectures." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis is part of a collaboration between Télécom Paris and Nokia Bell Labs France. In this context, we focus on the system-level design space exploration of embedded systems for the execution of signal processing applications. In the systems we target, the design space exploration process intends to identify the allocation and scheduling of both application tasks and the data transfers between these tasks: this identification plays a key role in the overall performance (e.g. end-to-end latency) of these systems. While there are already multiple works for diverse communication architectures, this thesis focuses on multi-bus architectures, which are particularly well-suited for computation platforms of signal processing applications. For these platforms, we show that only limited contributions have been proposed so far. Three contributions are proposed to tackle the above-mentioned problem. 1) A satisfiability modulo theories (SMT) formulation which allows mapping and scheduling decisions on multi-bus architectures to be explored for latency optimization; we demonstrate its ability to produce a solution for well-known applications. 2) To mitigate the scalability limitations of the optimal solution search of this first contribution, we propose a technique to prune the design space of searched solutions; our evaluations demonstrate better scalability. Last, 3) communication allocation is enhanced with power consumption, and we show how to jointly optimize latency and power consumption. Our evaluation is again applied to a set of well-known signal processing applications and demonstrates how different trade-offs between latency and power consumption can be studied. Our contributions are integrated into a state-of-the-art modeling and verification tool for the system-level design of embedded systems (TTool). Perspectives are articulated along two main axes: 1) extending the current formulation to account for new design aspects (e.g., shared memory, throughput); 2) further improving the scalability of the optimal search
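To make the flavor of such an SMT formulation concrete, here is a minimal sketch in Z3's Python API (two tasks on two CPUs with one data transfer; all data and variable names are hypothetical and far simpler than the thesis's model):

```python
# pip install z3-solver
from z3 import Int, Optimize, If, And, sat

wcet = [4, 3]          # task execution times (hypothetical)
comm = 2               # transfer time if the tasks run on different CPUs

cpu = [Int(f'cpu{i}') for i in range(2)]    # allocation decision per task
start = [Int(f's{i}') for i in range(2)]    # scheduling decision per task

opt = Optimize()
for i in range(2):
    opt.add(And(cpu[i] >= 0, cpu[i] <= 1), start[i] >= 0)

# The data transfer costs time only when the tasks are mapped apart.
xfer = If(cpu[0] == cpu[1], 0, comm)
opt.add(start[1] >= start[0] + wcet[0] + xfer)   # precedence incl. communication

opt.minimize(start[1] + wcet[1])                 # end-to-end latency
if opt.check() == sat:
    print(opt.model())
```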
23

Le, Calvar Théo. "Exploration d’ensembles de modèles." Thesis, Angers, 2019. http://www.theses.fr/2019ANGE0035.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Model transformation has proven to be an effective technique for producing target models from source models. Most transformation approaches focus on generating a single target model from a given source model. However, there are situations where a collection of possible target models is preferred over a single one. Such situations arise when some choices cannot be encoded in the transformation. Search techniques can then be used to help select a target model having specific properties. In this thesis, we present an approach combining model transformation with constraint solving to generate and explore these model sets. Moreover, we present two implementations of this approach, along with multiple case studies showcasing these implementations and their usefulness
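The idea of leaving transformation choices open and enumerating every admissible target model can be sketched with a generic CSP library (a toy example with a hypothetical two-attribute "metamodel"; the thesis's implementations are considerably richer):

```python
# pip install python-constraint
from constraint import Problem

# Transformation choices left open become constraint variables: instead of
# one target model, the solver enumerates the whole admissible set.
p = Problem()
p.addVariable('layout', ['horizontal', 'vertical'])
p.addVariable('columns', [1, 2, 3])
p.addConstraint(lambda lay, col: col == 1 if lay == 'vertical' else col > 1,
                ('layout', 'columns'))

target_models = p.getSolutions()   # every model satisfying the constraints
for m in target_models:
    print(m)                       # the user explores and selects one
```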
24

Boonvisut, Pasu. "Active Exploration of Deformable Object Boundary Constraints and Material Parameters Through Robotic Manipulation Data." Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1369078402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Mthimkhulu, Alfred Mbekezeli. "Small enterprise development in South Africa : an exploration of the constraints and job creation potential." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/97117.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (PhD)--Stellenbosch University, 2015.
ENGLISH ABSTRACT: This thesis, presented in six thematic chapters, investigates an approach for promoting the growth of small businesses in South Africa. Chapter 1 motivates the thesis by discussing the contested role of small businesses in reducing unemployment and fostering social equity. Chapter 2 reviews the small business development policy in South Africa and explicates the socioeconomic conditions underpinning the policy. Chapters 3, 4 and 5 are empirical analyses using data from the World Bank Enterprise Surveys of 2003 and 2007, and the World Bank Financial Crisis Survey of 2010, to determine key impediments to the growth of small businesses and characteristics of firms creating and retaining most jobs in South Africa. Chapter 3 uses two methods to investigate the key impediments. The first method is based on a count of obstacles that entrepreneurs rate as seriously affecting enterprise operations. The second estimates the effects of the obstacles on growth through sequential multivariate regressions and identifies binding constraints for different categories of firms. It emerges that medium-sized firms are mildly affected by most obstacles but micro and small firms are significantly affected by crime, electricity and transportation problems. The chapter provides important insight on the sequencing of interventions to address the impediments to growth. Chapter 4 studies the finance constraint. It evaluates the importance of the constraint firstly by assessing whether firms rating finance as a serious problem underperform firms rating the problem as less important. Thereafter, the chapter studies the experiences of firms when seeking external finance and identifies four levels of the finance constraint. Using an ordered logit model and a binary logit model, the chapter explores the profile of financially constrained firms. Results show that firms owned by ethnic groups disadvantaged in the apartheid era are more likely to be credit-constrained. The results also suggest that the likelihood of being credit-constrained decreases with higher levels of formal education. The results inform policy on the types of firms that financial interventions must target. Chapter 5 builds on a growing body of evidence which shows that a small proportion of firms in an economy account for over 50 percent of net new jobs. The evidence from the literature suggests that such high-growth enterprises have distinct characteristics that could make it possible for interventions to nurture or for other firms to emulate. The chapter employs two methods to investigate the characteristics of high-growth firms. The first is logit regression, which the investigation uses to determine the characteristics of firms that create more jobs than the average firm. The characteristics are also interacted with one another to identify the interaction terms most associated with growth. The second method is quantile regression, which makes it possible to assess the importance of each characteristic for firms in different levels of growth rates. The results show that the typical high-growth firm is more likely to be black-owned. The results of the chapter, however, highlight the need for further research into characteristics that may explain high-growth firms more robustly than the variables in the survey instrument. The research ends with a summary, a discussion of areas of further research, and policy recommendations in Chapter 6.
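For illustration, the two estimation strategies of Chapter 5 can be sketched as follows (synthetic data and hypothetical covariate names, not the thesis's survey variables or estimates):

```python
# pip install statsmodels numpy
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
# Hypothetical firm-level covariates standing in for the survey variables.
age = rng.uniform(1, 30, n)
black_owned = rng.integers(0, 2, n)
exporter = rng.integers(0, 2, n)
growth = 0.05 - 0.01 * age + 0.04 * black_owned + rng.normal(0, 0.1, n)
high_growth = (growth > np.quantile(growth, 0.9)).astype(int)

X = sm.add_constant(np.column_stack([age, black_owned, exporter,
                                     black_owned * exporter]))  # interaction term

# Logit: which characteristics raise the odds of being a high-growth firm?
print(sm.Logit(high_growth, X).fit(disp=0).params)

# Quantile regression: does each characteristic matter more at high growth rates?
print(sm.QuantReg(growth, X).fit(q=0.9).params)
```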
26

Hernández, Vega Juan David. "Online path planning for autonomous underwater vehicles under motion constraints." Doctoral thesis, Universitat de Girona, 2017. http://hdl.handle.net/10803/457592.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The most common applications of autonomous underwater vehicles (AUVs) include imaging and inspecting different kinds of structures on the sea. Most of these applications require a priori information about the area or structure to be inspected, either to navigate at a safe and conservative altitude or to pre-calculate a survey path. However, there are other applications where it is unlikely that such information is available (e.g., exploring confined natural environments like underwater caves). In this respect, this thesis presents an approach that endows an AUV with the capability to move through unexplored environments. To do so, it proposes a computational framework for planning feasible and safe paths online. This approach allows the vehicle to incrementally build a map of its surroundings, while simultaneously (re)planning a feasible path to a specified goal. The framework takes motion constraints into account in planning feasible paths, i.e., those that meet the vehicle's motion capabilities
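The incremental map-and-replan loop can be caricatured on a toy grid (a sketch only: a BFS stand-in replaces the thesis's feasible-path planner, and the vehicle's motion constraints are omitted):

```python
from collections import deque
import numpy as np

world = np.zeros((20, 20), dtype=int)      # ground truth: 1 = obstacle
world[5:15, 10] = 1
known = np.zeros_like(world)               # incrementally built map
start, goal = (0, 0), (19, 19)

def plan(src):
    # Breadth-first search over cells not known to be occupied.
    prev, q = {src: None}, deque([src])
    while q:
        c = q.popleft()
        if c == goal:
            path = []
            while c: path.append(c); c = prev[c]
            return path[::-1]
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (c[0] + d[0], c[1] + d[1])
            if 0 <= n[0] < 20 and 0 <= n[1] < 20 and n not in prev and not known[n]:
                prev[n] = c; q.append(n)
    return None

pos, path = start, plan(start)
while pos != goal and path:
    r0, c0 = pos                           # sense: reveal a window around the vehicle
    known[max(0, r0-2):r0+3, max(0, c0-2):c0+3] = world[max(0, r0-2):r0+3, max(0, c0-2):c0+3]
    if any(known[c] for c in path):        # newly seen obstacle invalidates the plan
        path = plan(pos)                   # replan from the current position
        continue
    pos = path[path.index(pos) + 1]        # advance one step along the path
print('reached' if pos == goal else 'no path')
```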
27

Phan, Leon L. "A methodology for the efficient integration of transient constraints in the design of aircraft dynamic systems." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34750.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Transient regimes experienced by dynamic systems may have severe impacts on the operation of the aircraft. They are often regulated by dynamic constraints, requiring the dynamic signals to remain within bounds whose values vary with time. The verification of these peculiar types of constraints, which generally requires high-fidelity time-domain simulation, intervenes late in the system development process, thus potentially causing costly design iterations. The research objective of this thesis is to develop a methodology that integrates the verification of dynamic constraints in the early specification of dynamic systems. In order to circumvent the inefficiencies of time-domain simulation, multivariate dynamic surrogate models of the original time-domain simulation models are generated using wavelet neural networks (or wavenets). Concurrently, an alternate approach is formulated, in which the envelope of the dynamic response, extracted via a wavelet-based multiresolution analysis scheme, is subject to transient constraints. Dynamic surrogate models using sigmoid-based neural networks are generated to emulate the transient behavior of the envelope of the time-domain response. The run-time efficiency of the resulting dynamic surrogate models enables the implementation of a data farming approach, in which the full design space is sampled through a Monte-Carlo Simulation. An interactive visualization environment, enabling what-if analyses, is developed; the user can thereby instantaneously comprehend the transient response of the system (or its envelope) and its sensitivities to design and operation variables, as well as filter the design space to have it exhibit only the design scenarios verifying the dynamic constraints. The proposed methodology, along with its foundational hypotheses, is tested on the design and optimization of a 350VDC network, where a generator and its control system are concurrently designed in order to minimize the electrical losses, while ensuring that the transient undervoltage induced by peak demands in the consumption of a motor does not violate transient power quality constraints.
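The envelope idea can be sketched with a wavelet multiresolution analysis (a minimal sketch assuming the PyWavelets library; the decomposition depth, wavelet, and bound are illustrative, not the thesis's scheme):

```python
# pip install pywavelets numpy
import numpy as np
import pywt

t = np.linspace(0, 1, 1024)
# Hypothetical transient: decaying oscillation such as an undervoltage recovery.
x = np.exp(-5 * t) * np.sin(2 * np.pi * 50 * t)

# Multiresolution analysis: discard the fine-scale details and keep the
# coarse approximation of the rectified signal as a smooth envelope proxy.
coeffs = pywt.wavedec(np.abs(x), 'db4', level=6)
coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
env = pywt.waverec(coeffs, 'db4')[: len(x)]

# A transient constraint is then checked on the envelope instead of the
# full oscillatory response (bound values are illustrative).
bound = 0.9 * np.exp(-4 * t) + 0.05
print('constraint satisfied:', np.all(env <= bound))
```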
28

Fell, Alison. "Agoraphobia : mental disorder or societal constraint? : a gendered exploration of symptoms of agoraphobia in a non-clinical population." Thesis, University of East London, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.532493.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The first aim of the research was to extend the work of Gelfond (1991) to consider how far the symptoms of agoraphobia were present in a non-clinical sample of males and females. This was achieved using both statistical and qualitative (Interpretative Phenomenological Analysis) methods. The results indicated that both male and female participants experience discomfort when alone in public places, on a continuum with clinical descriptions of agoraphobia. Two differences between clinical and non-clinical accounts were identified. The first was that the differences appeared to be dimensional (e.g., intensity, preoccupation). The second was that non-clinical participants' accounts did not describe 'catastrophic misinterpretations' of physiological arousal as seen in clinical accounts. The second aim of the research related to how a gender analysis of male and female participants' accounts of their use of public places alone would contribute to our understanding of agoraphobia. Statistical results suggested that only female participants were significantly avoidant of public places alone compared to when they were accompanied. In addition, three qualitative tools of analysis (Interpretative Phenomenological Analysis, Rhetorical Discourse Analysis and Foucauldian Analysis) were adopted. These analyses highlighted social processes by which lone women may experience greater discomfort than lone men in public places, as well as exploring how such processes predispose males and females to react to discomfort in different ways. It is argued that these social processes 'prepare' women, in particular, for anxiety and avoidance on a continuum with symptoms of agoraphobia. This in turn provides an explanation as to why the majority of those diagnosed with agoraphobia are women. This poses questions for the assertion in DSM-IV (American Psychiatric Association, 1994) that social practices that restrict women's use of public places should be distinguished from agoraphobia. Clinical and research implications are discussed.
29

McLaughlin, Josetta S. "Operationalizing social contract: application of relational contract theory to exploration of constraints on implementation of an employee assistance program." Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/39741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Lasbouygues, Adrien. "Exploration robotique de l’environnement aquatique : les modèles au coeur du contrôle." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS078/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Underwater robots can nowadays operate in complex environments in a broad scope of missions where the use of human divers is difficult for cost or safety reasons. However, the complexity of aquatic environments requires giving the robotic vector sufficient autonomy to perform its mission while preserving its integrity. This requires designing control laws according to application requirements. They are built on knowledge from several scientific fields, underlining the interdisciplinarity inherent to robotics. Once the control law is designed, it must be implemented as control software running on a real-time software architecture. Nonetheless, the current conception of control laws as "monolithic" blocks makes it difficult to adapt a control law from one application to another and to integrate knowledge from scientific fields that control engineers often do not fully master. It also penalizes the implementation of control on software architectures, at least their modularity and evolution. To solve these problems, we seek a proper separation of knowledge, so that each knowledge item can be easily used and its role precisely defined, and we want to reify the interactions between the items. Moreover, this allows a more efficient projection onto the software architecture. We thus propose a new formalism for describing control laws as a modular composition of basic entities named Atoms, used to encapsulate the knowledge items. We also aim at building a better synergy between control and software engineering based on shared concerns such as temporal constraints and stability. Hence, we extend the definition of our Atoms with constraints carrying information related to their temporal behaviour. We also propose a methodology relying on our formalism to guide the implementation of control on a real-time middleware; we focus on the ContrACT middleware developed at LIRMM. Finally, we illustrate our approach on several robotic functionalities that can be used during the exploration of aquatic environments, especially wall avoidance during the exploration of a karst aquifer
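The Atom idea, knowledge items as composable units that carry their own temporal constraints, can be sketched as follows; every name and number here is hypothetical, not the thesis's formalism or the ContrACT API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Atom:
    name: str
    period_ms: float                       # temporal constraint carried by the Atom
    step: Callable[[Dict[str, float]], Dict[str, float]]

def depth_sensor(state):   return {'z': state.get('z_raw', 0.0) * 0.99}
def pid_depth(state):      return {'u': 1.5 * (state.get('z_ref', 5.0) - state.get('z', 0.0))}
def thruster_map(state):   return {'thrust': max(-1.0, min(1.0, state.get('u', 0.0)))}

controller: List[Atom] = [
    Atom('somesthesis', 10.0, depth_sensor),
    Atom('control', 20.0, pid_depth),
    Atom('actuation', 20.0, thruster_map),
]

# Composition: Atoms exchange data through an explicit shared state, which
# reifies the relations between knowledge items instead of hiding them in one
# monolithic control block; the declared periods are what a real-time
# middleware scheduler could check before deployment.
state: Dict[str, float] = {'z_raw': 4.2, 'z_ref': 5.0}
for atom in controller:
    state.update(atom.step(state))
print(state)
```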
31

Karakus, Mehmet. "From constraint to opportunity : an exploration of Ireland and Sweden's experience of relating neutrality to participation in EU's CFSP." Thesis, University of Newcastle Upon Tyne, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.251338.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Ventovaara, Marcus, and Arman Hasanbegović. "A Method for Optimised Allocation of System Architectures with Real-time Constraints." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-39492.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Optimised allocation of system architectures is a well-researched area, as it can greatly reduce the development cost of systems and increase performance and reliability in their respective applications. In conjunction with the recent shift from federated to integrated architectures in automotive, and the increasing complexity of computer systems in terms of both software and hardware, the applications of design space exploration and optimised allocation of system architectures are of great interest. This thesis proposes a method to derive architectures and their allocations for systems with real-time constraints. The method implements integer linear programming to solve for an optimised allocation of system architectures according to a set of linear constraints, while taking resource requirements, communication dependencies, and manual design choices into account. Additionally, this thesis describes and evaluates an industrial use case of the method, wherein the timing characteristics of a system were evaluated and the method was applied to simultaneously derive a system architecture and an optimised allocation of that architecture. This thesis presents evidence and validations that suggest the viability of the method and its use case in an industrial setting. The work in this thesis sets a precedent for future research and development, as well as future applications of the method in both industry and academia.
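A minimal flavor of such an integer linear program, using the PuLP library (toy tasks, capacities, and dependencies; the thesis's constraint set for real-time schedulability is far richer):

```python
# pip install pulp
import pulp

tasks = {'sense': 2, 'fuse': 3, 'act': 1}          # utilisation demands (hypothetical)
nodes = {'ecu0': 4, 'ecu1': 4}                     # node capacities
comm = [('sense', 'fuse'), ('fuse', 'act')]        # communication dependencies

prob = pulp.LpProblem('allocation', pulp.LpMinimize)
x = pulp.LpVariable.dicts('x', (list(tasks), list(nodes)), cat='Binary')  # task -> node
cut = pulp.LpVariable.dicts('cut', range(len(comm)), cat='Binary')

for t in tasks:                                    # every task placed exactly once
    prob += pulp.lpSum(x[t][n] for n in nodes) == 1
for n, cap in nodes.items():                       # resource capacity (schedulability proxy)
    prob += pulp.lpSum(tasks[t] * x[t][n] for t in tasks) <= cap
for k, (a, b) in enumerate(comm):                  # cut = 1 if a dependency crosses nodes
    for n in nodes:
        prob += cut[k] >= x[a][n] - x[b][n]

prob += pulp.lpSum(cut.values())                   # minimise inter-node communication
prob.solve(pulp.PULP_CBC_CMD(msg=0))
print({t: next(n for n in nodes if x[t][n].value() == 1) for t in tasks})
```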
33

Lartigue, Thomas. "Mixtures of Gaussian Graphical Models with Constraints. Gaussian Graphical Model exploration and selection in high dimension low sample size setting." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Describing the co-variations between several observed random variables is a delicate problem. Dependency networks are popular tools that depict the relations between variables through the presence or absence of edges between the nodes of a graph. In particular, conditional correlation graphs are used to represent the "direct" correlations between nodes of the graph. They are often studied under the Gaussian assumption and consequently referred to as "Gaussian Graphical Models" (GGM). A single network can be used to represent the overall tendencies identified within a data sample. However, when the observed data is sampled from a heterogeneous population, then there exist different sub-populations that all need to be described through their own graphs. What is more, if the sub-population (or "class") labels are not available, unsupervised approaches must be implemented in order to correctly identify the classes and describe each of them with its own graph. In this work, we tackle the fairly new problem of hierarchical GGM estimation for unlabelled heterogeneous populations. We explore several key axes to improve the estimation of the model parameters as well as the unsupervised identification of the sub-populations. Our goal is to ensure that the inferred conditional correlation graphs are as relevant and interpretable as possible. First, in the simple, homogeneous population case, we develop a composite method that combines the strengths of the two main state-of-the-art paradigms to correct their weaknesses. For the unlabelled heterogeneous case, we propose to estimate a mixture of GGMs with an Expectation Maximisation (EM) algorithm. In order to improve the solutions of this EM algorithm, and to avoid falling into sub-optimal local extrema in high dimension, we introduce a tempered version of the EM algorithm, which we study theoretically and empirically. Finally, we improve the clustering of the EM by taking into consideration the effect of external co-features on the position of the observed data in space
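The tempering idea can be sketched on a plain one-dimensional Gaussian mixture (illustration only: a generic GMM rather than a mixture of GGMs, and a hypothetical temperature profile):

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy 1-D data from two sub-populations (stand-in for the heterogeneous case).
X = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])

mu, var, pi = np.array([-0.5, 0.5]), np.ones(2), np.ones(2) / 2
for it in range(100):
    T = max(1.0, 5.0 * 0.9 ** it)            # temperature, decreasing towards 1
    # E step: tempered responsibilities flatten the posteriors early on,
    # discouraging premature commitment to a poor local optimum.
    logp = (np.log(pi) - 0.5 * np.log(2 * np.pi * var)
            - (X[:, None] - mu) ** 2 / (2 * var))
    r = np.exp(logp / T)
    r /= r.sum(axis=1, keepdims=True)
    # M step: standard weighted updates.
    Nk = r.sum(axis=0)
    mu = (r * X[:, None]).sum(axis=0) / Nk
    var = (r * (X[:, None] - mu) ** 2).sum(axis=0) / Nk
    pi = Nk / len(X)
print(mu)                                     # recovered component means
```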
34

Menezes, Jeffrey Louis. "Use of isoperformance, constraint programming, and mixed integer linear programming for architecture tradespace exploration of passive Optical Earth Observation Systems." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119313.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, in conjunction with the Leaders for Global Operations Program at MIT, 2018.
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2018. In conjunction with the Leaders for Global Operations Program at MIT.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 147-150).
This thesis presents work performed during the course of an internship at An Aerospace Company (AAC) and research performed at Massachusetts Institute of Technology (MIT) Lincoln Laboratory as part of a fellowship. Both efforts entailed the development of architecture tradespace exploration models for space systems. The tradespace exploration model developed at AAC, called the Earth Observation Architecture Isoperformance Model (EO-AIM), uses automation techniques, isoperformance, and constraint programming to rapidly construct potential space-based passive optical EO sensor architecture concepts which meet a given set of customer requirements. Cost estimates are also generated for each sensor concept via integration with stakeholder-trusted cost modeling software, allowing cost to be treated as both an independent variable and a consequence when evaluating various architecture solutions. The EO-AIM then uses simple algorithms to identify potential satellite bus options for hosting each sensor architecture in orbit. The total cost of populating an entire constellation based on the sensor architecture is finally estimated using cost estimates for the sensor, the satellite bus, and the best launch vehicle option capable of lifting the satellite(s) to orbit. In general, the EO-AIM seeks to bolster AAC's capabilities for conducting architecture tradespace exploration and initial proposal development given advancements in satellite bus, launch vehicle, and sensing technologies. The tradespace exploration model developed at MIT Lincoln Laboratory is a satellite network mixed integer linear program (MILP) which is used for making system architecture decisions and estimating final architecture cost. The satellite network MILP is formulated as both an assignment problem and a network maximum-flow problem which must send sensor-generated data to a ground user. Results of the MILP vary with the selected objective function and provide insights on the potential benefits of architecture decisions such as sensor disaggregation and the utility of introducing additional communication nodes into existing networks. The satellite network MILP is also capable of verifying network data volume throughput capacity and providing an optimized link schedule for the duration of the simulation. Overall, the satellite network MILP model explores the general problem of optimizing the use of limited resources for a given space-based sensor while ensuring mission data needs are met. It is a higher-fidelity alternative to the simple satellite bus and launch vehicle compatibility algorithm used in EO-AIM. Both models are shown to improve architecture tradespace exploration of space-based passive optical EO systems. A simple demonstration shows that the EO-AIM can increase the number of sensor architecture concepts generated by a factor of ten or more, since it creates all feasible sensor architecture concepts for given user inputs and settings. Furthermore, the use of the satellite network MILP to examine alternative network architecture options for NASA's HyspIRI mission resulted in a system architecture with 20% higher data throughput for marginally less cost.
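The network-flow core of the satellite MILP can be sketched as a small linear program (toy capacities and node names; the thesis's formulation adds assignment variables and link scheduling):

```python
# pip install pulp
import pulp

# Hypothetical link capacities: sensor -> relays -> ground user.
edges = {('sensor', 'relay1'): 8, ('sensor', 'relay2'): 5,
         ('relay1', 'ground'): 6, ('relay2', 'ground'): 7}
relays = {'relay1', 'relay2'}

prob = pulp.LpProblem('throughput', pulp.LpMaximize)
f = {e: pulp.LpVariable(f'f_{e[0]}_{e[1]}', 0, cap) for e, cap in edges.items()}

for n in relays:                 # flow conservation at intermediate nodes
    prob += (pulp.lpSum(f[e] for e in edges if e[1] == n)
             == pulp.lpSum(f[e] for e in edges if e[0] == n))

prob += pulp.lpSum(f[e] for e in edges if e[0] == 'sensor')  # data delivered downstream
prob.solve(pulp.PULP_CBC_CMD(msg=0))
print(pulp.value(prob.objective))   # network data-volume throughput capacity
```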
by Jeffrey Louis Menezes.
S.M.
M.B.A.
35

Von, Blottnitz Magali. "Dysfunctional market or insufficient creditworthiness? : an exploration of financial constraint experienced by small, medium and micro enterprises in South Africa." Doctoral thesis, University of Cape Town, 2009. http://hdl.handle.net/11427/5620.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Includes abstract.
Includes bibliographical references (p. 212-228).
The existence and prevalence of financial constraints has been extensively discussed in the international economic literature, and is implicit in debates on the performance and needs of South Africa’s Small, Medium and Micro Enterprises (SMMEs). However, there is little solid research measuring financial constraints among South African SMMEs. In addition, the reasons advanced for their financial constraints are often speculative and anecdotal rather than the result of sound research. The hypothesis of credit rationing, resulting from information asymmetries, is well established in theory but an additional explanatory hypothesis, the fragile financial structure of SMMEs, is often voiced by the South African finance community. With South African data being scarce and patchy, none of these hypotheses has been validated by empirical studies. The most likely reason for these gaps in literature is not a lack of interest, but the considerable difficulty of raising reliable data from SMMEs, a joint result of confidentiality, widespread informality in the sector, and the limitations of publicly available statistics in developing countries. Surveys of banks or SMMEs raise risks of partiality and limited ability of respondents to provide quantitative data, while accounting data are characterised by limited usability and reliability. This thesis attempts to address those challenges by exploring primary and secondary sources of data, combining the respective strengths of interview and financial data.
36

Zine, Elabidine Khouloud. "Méthode de prototypage virtuel permettant l'évaluation précoce de la consommation énergétique dans les systèmes intégrés sur puce." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066669/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Technological trends towards high-level integration, combined with increasing operating frequencies, have made embedded systems design more and more complex. The increase in the number of computing resources in integrated circuits (ICs) has led to over-constrained systems. In fact, SoC (System on Chip) designers must reduce overall system costs, including board space, power consumption and development time. Although much research has developed methodologies to deal with the emerging requirements of IC design, few of these efforts have focused on the power consumption constraint. While the highest accuracy is achieved at the lowest level, estimation time increases significantly when we move down to lower levels. Early power estimation is interesting since it allows the architectural design space to be widely explored during system-level partitioning and architectural design choices to be adjusted early. EDPE estimates power consumption at the system level, in particular at the CABA (Cycle Accurate Bit Accurate) and TLM (Transaction Level Modelling) levels. EDPE has been integrated into the SoCLib library. The main goal of EDPE (Early Design Power Estimation) is to compare the power consumption of different design partitioning alternatives and choose the best power/performance trade-off. Experimental results show that EDPE provides fast, yet accurate, early power estimation for MPSoCs (Multiprocessor Systems on Chip). EDPE uses few parameters per hardware component, is based on a homogeneous and easy characterization method, and is easily generalized to any virtual prototyping library
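The per-component estimation principle behind a tool like EDPE can be illustrated in a few lines (all counter names and energy costs are hypothetical, not EDPE's actual models): the energy dissipated is the activity of each component weighted by a characterized per-event cost.

```python
# A minimal sketch of per-component power models: components report activity
# counters during CABA/TLM simulation, and the estimator weights them by a
# per-event energy cost obtained from prior characterization.
energy_per_event = {            # nJ per event (hypothetical values)
    'cpu_instr': 0.15, 'cache_miss': 1.2, 'bus_transfer': 0.8, 'sram_access': 0.3,
}
counters = {                    # counters accumulated during simulation
    'cpu_instr': 2_000_000, 'cache_miss': 40_000, 'bus_transfer': 120_000,
    'sram_access': 500_000,
}
sim_time_s = 0.01               # simulated time window

energy_nj = sum(counters[e] * energy_per_event[e] for e in counters)
print(f'energy = {energy_nj / 1e9:.4f} J, '
      f'avg power = {energy_nj / 1e9 / sim_time_s:.3f} W')
```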
37

Rathnam, Ravi Kulan [Author], Andreas [Academic Supervisor] Birk, Kaustubh [Academic Supervisor] Pathak, and Dieter [Academic Supervisor] Kraus. "Distributive Cooperative 3D Exploration under Range Communication Constraints / Ravi Kulan Rathnam. Supervisor: Andreas Birk. Reviewers: Andreas Birk; Kaustubh Pathak; Dieter Kraus." Bremen : IRC-Library, Information Resource Center der Jacobs University Bremen, 2015. http://d-nb.info/1087324033/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Zine, Elabidine Khouloud. "Méthode de prototypage virtuel permettant l'évaluation précoce de la consommation énergétique dans les systèmes intégrés sur puce." Electronic Thesis or Diss., Paris 6, 2014. http://www.theses.fr/2014PA066669.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Depuis quelques années, les systèmes embarqués n’ont pas cessé d’évoluer. Cette évolution a conduit à des circuits de plus en plus complexes pouvant comporter plusieurs centaines de processeurs sur une même puce.Si la progression des techniques de fabrication des systèmes intégrés, a permis l’amélioration des performances de ces derniers en terme de temps et de capacité de traitement, elle a malheureusement amené une nouvelle contrainte de conception. En effet, cette nouvelle génération de systèmes consomme plus d’énergie et nécessite donc la prise en compte, pendant la phase de conception, des caractéristiques énergétiques dans le but de trouver le meilleur compromis (performance / énergie). Des études montrent qu’une estimation précoce de la consommation – i.e. au niveau comportemental – permet une meilleure diminution de l’énergie consommée par le système.L’outil EDPE (Early Design Power Estimation), objet de cette thèse, propose en réponse à ce besoin, une procédure permettant la caractérisation énergétique précoce d’une architecture de type MPSoC (MultiProcessor System on Chip) dans la phase de prototypage virtuel en System C. EDEP s’appuie sur des modèles de consommation par composant pour en déduire l’énergie dissipée par le système global lorsque le système est simulé au niveau CABA(Cycle Accurate Byte Accurate) ou encore TLM (Transaction Level Model). Les modèles proposés par EDPE, ont été intégrés dans la bibliothèque de prototypage virtuel SoClib. Ainsi, pendant la phase d’exploration architecturale, le concepteur dispose en plus des caractéristiques temporelles et spatiales de son circuit, d’une estimation précise de sa consommation énergétique.L’élaboration de modèles de consommation pour les différents composants matériels d’un système, à l’aide d’EDPE, est simple, homogène et facilement généralisable.Les résultats obtenus montrent la capacité d’EDPE à prédire la consommation énergétique de différentes applications logicielles déployées sur une même architecture matérielle de manière précise et rapide
Technological trends towards high-level integration, combined with increasing operating frequencies, have made embedded systems design more and more complex. The growing number of computing resources in integrated circuits (IC) has led to over-constrained systems. In fact, SoC (System on Chip) designers must reduce overall system costs, including board space, power consumption and development time. Although much research has developed methodologies to deal with the emerging requirements of IC design, little of it has focused on the power consumption constraint. While the highest accuracy is achieved at the lowest level, estimation time increases significantly as we move down to lower levels. Early power estimation is attractive since it allows the architectural design space to be explored widely during system-level partitioning and architectural design choices to be adjusted early. EDPE (Early Design Power Estimation) estimates power consumption at the system levels, in particular the CABA (Cycle Accurate Bit Accurate) and TLM (Transaction Level Modelling) levels. EDPE has been integrated into the SoCLib library. The main goal of EDPE is to compare the power consumption of different design partitioning alternatives and choose the best power/performance trade-off. Experimental results show that the EDPE method provides fast, yet accurate, early power estimation for MPSoCs (Multiprocessor System on Chip). EDPE uses few parameters per hardware component, relies on a homogeneous and simple characterization method, and is easily generalized to any virtual prototyping library
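To make the component-level characterization concrete, here is a minimal Python sketch of an activity-based power model in the spirit of EDPE: the simulation counts events per hardware component, and energy is the weighted sum of those counters. The component names and per-event energy costs are invented for illustration, not taken from the thesis.

```python
# Minimal sketch of component-level power estimation in the spirit of EDPE.
# Components, counters and per-event energy costs (nJ) are invented; a real
# flow would calibrate one small set of parameters per hardware component.

ENERGY_PER_EVENT_NJ = {
    "cpu":   {"active_cycle": 0.35, "idle_cycle": 0.05},
    "cache": {"read": 0.12, "write": 0.18, "miss": 0.60},
    "bus":   {"transaction": 0.25},
}

def estimate_energy_nj(activity_counters):
    """Sum per-component event counts weighted by their characterized costs."""
    total = 0.0
    for component, counters in activity_counters.items():
        costs = ENERGY_PER_EVENT_NJ[component]
        total += sum(costs[event] * count for event, count in counters.items())
    return total

# Counters as they could be collected during a CABA or TLM simulation run.
run = {
    "cpu":   {"active_cycle": 2_000_000, "idle_cycle": 500_000},
    "cache": {"read": 300_000, "write": 120_000, "miss": 15_000},
    "bus":   {"transaction": 80_000},
}
print(f"estimated energy: {estimate_energy_nj(run) / 1e6:.3f} mJ")
```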
39

Merino, Perez Irene. "Geophysical constraints on the nature of geological domains of continental rifted margins: examples from the West Iberia margin and Ligurian Basin." Doctoral thesis, Universitat de Barcelona, 2021. http://hdl.handle.net/10803/673631.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this PhD work, we carry out a geophysical and geological study of two classical examples of rifted continental margins: the Gulf of Lions (GoL), located in the Western Mediterranean, and the Tagus Abyssal Plain (TAP), located in the West Iberia margin. In spite of numerous previous studies of these regions, there is still a debate on their crustal structure and on the processes that led to the formation of the basins. Our study aims to 1) determine the nature of the rocks forming the main geological domains of each basin, 2) define the tectonic structure of the basement and 3) place constraints on, and discuss, the kinematics and the tectonic and magmatic mechanisms involved in basin formation. To study the GoL, we used a geophysical data set acquired during the SARDINIA-2006 survey by the Ifremer Institute (France). In particular, we used a Multi-Channel Seismic (MCS) line and a coincident Wide-Angle Seismic (WAS) profile. Altogether, these lines cover a seismic transect that runs NW-SE across the GoL to the central part of the Liguro-Provençal basin. The geophysical data used to study the TAP were collected during the FRAME-2018 survey within the framework of the FRAME project. We present spatially coincident MCS and WAS data along a 350 km-long, E-W trending profile located at 38º N, crossing the basin in the North-West Iberian margin. We apply joint refraction and reflection travel-time tomography (TT) that combines travel times from MCS and WAS data to provide new constraints on the structure and petrological nature of the basement domains along the margins. The result of this joint WAS-MCS tomography is a P-wave velocity (Vp) model of the margin that is fully consistent with the MCS image along the profile, making the geological interpretation less subjective. The processing of the MCS data provides the tectonic structure and geometry of the sedimentary basins. The results from the GoL support the existence of three geological domains: 1) a continental domain formed by normal faults that tilted the continental basement, 2) a ~100 km wide domain bounded by continental crust domains, characterized by a 4-5 km thick layer with high velocities and steep gradients that we interpret as a lens-shaped body of oceanic crust, and 3) a thin continental crust (<4 km). This configuration implies that the continent-ocean transition (COT) occurs abruptly (<10 km along the profile) on each side of the oceanic domain. In the case of the TAP, the models show that the crustal structure is more complex, presenting sharp boundaries between five different domains at the base of the continental slope and across the J-anomaly. The profile across the TAP shows that Domain I and Domain III are made of 4-6 km thick continental crust. Domain III shows a lower crust with comparatively higher velocities, possibly due to limited magmatic intrusions. Domain II, previously interpreted as oceanic crust, is shown to constitute a ~70 km wide domain of exhumed and serpentinized mantle. The westernmost 200 km of the profile include Domain IV and Domain V, with a basement made of oceanic crust. The new Vp model and seismic images support that the COT is located ~300 km offshore and that it occurs abruptly, over a width of 10 to 15 km. Based on these results, we discuss a new geodynamic scenario characterized by two main phases of crustal extension. 
According to the presented distribution of the basement, rifting in the TAP would have started with continental crust extension, continued with exhumation of the mantle, been followed by the formation of the oceanic crust of the J magnetic anomaly, and ended with the spreading of the oceanic crust of the Cretaceous Magnetic Quiet Zone. The interpretation of these results differs from current conceptual models of the formation of both examples of rifting systems. Their integration offers the opportunity to review the existing conceptual models of rifted margins that involve mantle exhumation, and indicates that the response of the continental lithosphere to extension processes may be more complex than previously assumed.
En esta tesis doctoral, se ha realizado un estudio geofísico y geológico de dos ejemplos clásicos de márgenes continentales: el margen de la Cuenca de Liguria y el margen de la Llanura Abisal del Tajo, ubicada en el margen Oeste de Iberia. A pesar de los diversos estudios previos de estas regiones, existe un debate abierto tanto sobre su estructura cortical como sobre los procesos que operaban durante su formación. Este trabajo de tesis ha tenido como objetivos: 1) determinar la naturaleza de las rocas que forman los principales dominios geológicos de ambos márgenes, 2) definir la estructura tectónica del basamento y 3) discutir la cinemática y la interacción de mecanismos tectónicos y magmáticos involucrados en la formación de los márgenes. Para conseguir estos objetivos, se han analizado e integrado diversos datos geofísicos. Los datos principales son de sísmica de reflexión de “streamer” multicanal y de sísmica de reflexión y refracción de gran ángulo marinos. También se han integrado datos batimétricos y gravimétricos. La parte metodológica más novedosa de esta tesis es la utilización de los tiempos de trayecto de fases sísmicas de datos de streamer y gran ángulo en una tomografía conjunta. Esta metodología permite determinar con más precisión que otros métodos las velocidades de las ondas sísmicas (Vp) a través del basamento a lo largo de los perfiles. El modelo resultante permite establecer la naturaleza petrológica con menos incertidumbre que los métodos más comúnmente usados. El análisis, procesamiento, modelado e interpretación de estos conjuntos de datos permite una interpretación novedosa de los aspectos relacionados con la estructura y naturaleza de la corteza, así como la discusión de nuevas propuestas para los procesos tectónicos que llevaron a la configuración actual de cada uno de los ejemplos de márgenes continentales. La interpretación desarrollada difiere en gran medida de modelos previos en cuanto a la formación de ambos sistemas de rifting. Por ello, proponemos que su integración ofrece la oportunidad de revisar modelos conceptuales existentes en la literatura. En particular, los resultados muestran que la respuesta de la litosfera continental a los procesos de extensión puede ser más compleja de lo que se suponía hasta ahora.
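As a toy illustration of the forward problem behind travel-time tomography: given a layered Vp model, predicted arrival times are compared with picked travel times, and the model is updated until they agree. The Python sketch below computes a vertical-incidence two-way travel time; the layer thicknesses and velocities are invented, not values from the thesis.

```python
# Toy forward model: vertical-incidence two-way travel time (TWT) through a
# stack of layers, the kind of prediction a travel-time inversion compares
# with picked MCS/WAS arrivals. Layer values are invented for illustration.

layers = [  # (thickness in km, P-wave velocity in km/s)
    (2.0, 2.1),  # sediments
    (4.0, 6.0),  # upper crust
    (3.0, 6.8),  # lower crust
]

def two_way_time(layer_stack):
    """TWT in seconds for a ray travelling straight down and back up."""
    return sum(2.0 * thickness / vp for thickness, vp in layer_stack)

print(f"predicted TWT: {two_way_time(layers):.2f} s")
# A tomographic inversion perturbs thicknesses and velocities until the
# predicted times match the observed picks within their uncertainty.
```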
40

Vigneron, Vincent. "Programmation par contraintes et découverte de motifs sur données séquentielles." Thesis, Angers, 2017. http://www.theses.fr/2017ANGE0028/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Des travaux récents ont montré l’intérêt de la programmation par contraintes pour la fouille de données. Dans cette thèse, nous nous intéressons à la recherche de motifs sur séquences, et en particulier à la caractérisation, à l’aide de motifs, de classes de séquences pré-établies. Nous proposons à cet effet un langage de modélisation à base de contraintes qui suppose une représentation matricielle du jeu de séquences. Un motif s’y définit comme un ensemble de caractères (ou de patrons) et pour chacun une localisation dans différentes séquences. Diverses contraintes peuvent alors s’appliquer : validité des localisations, couverture d’une classe de séquences, ordre sur les localisations des caractères commun aux séquences, etc. Nous formulons deux problèmes de caractérisation NP-complets : la caractérisation par motif totalement ordonné (e.g. sous-séquence exclusive à une classe) ou partiellement ordonné. Nous en donnons deux modélisations CSP qui intègrent des contraintes globales pour la preuve d’exclusivité. Nous introduisons ensuite un algorithme mémétique pour l’extraction de motifs partiellement ordonnés qui s’appuie sur la résolution CSP lors des phases d’initialisation et d’intensification. Cette approche hybride se révèle plus performante que l’approche CSP pure sur des séquences biologiques. La mise en forme matricielle de jeux de séquences basée sur une localisation des caractères peut être de taille rédhibitoire. Nous proposons donc de localiser des patrons plutôt que des caractères. Nous présentons deux méthodes ad-hoc, l’une basée sur un parcours de treillis et l’autre sur la programmation dynamique
Recent works have shown the relevance of constraint programming to tackle data mining tasks. This thesis follows this approach and addresses motif discovery in sequential data. We focus in particular, in the case of classified sequences, on the search for motifs that best fit each individual class. We propose a language of constraints over matrix domains to model such problems. The language assumes a preprocessing of the data set (e.g., by pre-computing the locations of each character in each sequence) and views a motif as the choice of a sub-matrix (i.e., characters, sequences, and locations). We introduce different matrix constraints (compatibility of locations with the database, class covering, location-based character ordering common to sequences, etc.) and address two NP-complete problems: the search for class-specific totally ordered motifs (e.g., exclusive subsequences) or partially ordered motifs. We provide two CSP models that rely on global constraints to prove exclusivity. We then present a memetic algorithm that uses the CSP model during initialisation and intensification. This hybrid approach proves competitive with the pure CSP approach, as shown by experiments carried out on protein sequences. Lastly, we investigate data set preprocessing based on patterns rather than characters, in order to reduce the size of the resulting matrix domain. To this end, we present and compare two alternative methods, one based on lattice search, the other on dynamic programming
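A brute-force sketch can make the notion of a class-exclusive totally ordered motif concrete: a motif must occur as an ordered subsequence in every sequence of the target class and in none of the other class. The toy sequences below are invented, and no CSP solver is involved; the thesis instead models this search with global constraints.

```python
from itertools import product

# Brute-force search for class-exclusive totally ordered motifs: ordered
# subsequences covering all positive sequences and no negative one.
# Data is invented; the thesis solves this as a CSP with global constraints.

def is_subsequence(motif, seq):
    it = iter(seq)
    return all(ch in it for ch in motif)  # consuming the iterator enforces order

def exclusive_motifs(positives, negatives, length):
    alphabet = sorted({c for s in positives for c in s})
    return ["".join(m) for m in product(alphabet, repeat=length)
            if all(is_subsequence(m, s) for s in positives)
            and not any(is_subsequence(m, s) for s in negatives)]

pos = ["ACDEF", "ABCDE", "XACDE"]  # toy sequences of the target class
neg = ["AEDCB", "EDCBA"]           # toy sequences of the other class
print(exclusive_motifs(pos, neg, 3))  # -> ['ACD', 'ACE', 'ADE', 'CDE']
```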
41

Guillame-Bert, Mathieu. "Apprentissage de règles associatives temporelles pour les séquences temporelles de symboles." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM081/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
L'apprentissage de modèles temporels constitue l'une des grandes problématiques de l'Exploration de Données (Data Mining). Dans cette thèse, nous avons développé un nouveau modèle temporel appelé TITA Rules (règles associatives temporelles basées sur des arbres d'intervalles). Ce modèle permet de décrire des phénomènes ayant un certain degré d'incertitude et/ou d'imprécision. Il permet entre autres d'exprimer la synchronicité entre évènements, les contraintes temporelles disjonctives et la négation temporelle. De par leur nature, les TITA Rules peuvent être utilisées pour effectuer des prédictions avec une grande précision temporelle. Nous avons aussi développé un algorithme capable de découvrir et d'extraire de manière efficace des TITA Rules dans de grandes bases de données temporelles. Le cœur de l'algorithme est basé sur des techniques de minimisation d'entropie, de filtrage par Apriori et d'analyse de co-dépendance. Notre modèle temporel et notre algorithme ont été appliqués et évalués sur plusieurs jeux de données issus de phénomènes réels et de phénomènes simulés. La seconde partie de cette thèse a consisté à étudier l'utilisation de notre modèle temporel pour la problématique de la Planification Automatique. Ces travaux ont mené au développement d'un algorithme de planification automatique. L'algorithme prend en entrée un ensemble de TITA Rules décrivant le fonctionnement d'un système quelconque, une description de l'état initial du système et un but à atteindre. En retour, l'algorithme calcule un plan décrivant la meilleure façon d'atteindre le but donné. Par la nature même des TITA Rules, cet algorithme est capable de gérer l'incertain (probabilités), l'imprécision temporelle, les contraintes temporelles disjonctives, ainsi que les événements exogènes prédictibles mais imprécis
The learning of temporal patterns is a major challenge of Data Mining. We introduce a temporal pattern model called Temporal Interval Tree Association Rules (Tita rules or Titar). This pattern model can express both the uncertainty and the temporal inaccuracy of temporal events. Among other things, Tita rules can express the usual time point operators, synchronicity, order, chaining, disjunctive time constraints, as well as temporal negation. Tita rules are designed to allow predictions with optimum temporal precision. Using this representation, we present the Titar learner algorithm that extracts Tita rules from large datasets expressed as symbolic time sequences. This algorithm is based on entropy minimization, Apriori pruning and statistical dependence analysis. We evaluate our technique on simulated and real-world datasets. The problem of temporal planning with Tita rules is then studied. We use Tita rules as world description models for a planning and scheduling task, and present an efficient temporal planning algorithm able to deal with uncertainty, temporal inaccuracy, discontinuous (or disjunctive) time constraints and predictable but imprecisely located exogenous events. We evaluate our technique by joining the learning algorithm and our planning algorithm into a simple reactive cognitive architecture that we apply to control a robot in a virtual world
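A minimal evaluator conveys the flavour of an interval-based temporal rule: a rule "A predicts B within [tmin, tmax]" is scored by its confidence over a symbolic time sequence. The events, bounds and confidence measure below are illustrative assumptions; real Tita rules are far richer (trees of conditions, disjunctive intervals, temporal negation).

```python
# Toy evaluator for an interval temporal rule "A => B within [tmin, tmax]"
# over a symbolic time sequence. Events and bounds are invented.

events = [("A", 1.0), ("B", 2.2), ("A", 5.0), ("C", 5.5),
          ("A", 8.0), ("B", 9.1)]  # (symbol, timestamp)

def confidence(events, antecedent, consequent, tmin, tmax):
    """Fraction of antecedent occurrences followed by the consequent in the window."""
    times = {s: [t for sym, t in events if sym == s]
             for s in (antecedent, consequent)}
    hits = sum(
        any(t + tmin <= tb <= t + tmax for tb in times[consequent])
        for t in times[antecedent]
    )
    return hits / len(times[antecedent]) if times[antecedent] else 0.0

print(confidence(events, "A", "B", 0.5, 2.0))  # 2 of the 3 A's are followed by B
```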
42

Bekkouche, Mohammed. "Combinaison des techniques de Bounded Model Checking et de programmation par contraintes pour l'aide à la localisation d'erreurs : exploration des capacités des CSP pour la localisation d'erreurs." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4096/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Un vérificateur de modèle peut produire une trace de contre-exemple, pour un programme erroné, qui est souvent difficile à exploiter pour localiser les erreurs dans le code source. Dans cette thèse, nous avons proposé un algorithme de localisation d'erreurs à partir de contre-exemples, nommé LocFaults, combinant les approches de Bounded Model Checking (BMC) et de problème de satisfaction de contraintes (CSP). Cet algorithme analyse les chemins du CFG (Control Flow Graph) du programme erroné pour calculer les sous-ensembles d'instructions suspectes permettant de corriger le programme. En effet, nous générons un système de contraintes pour les chemins du graphe de flot de contrôle pour lesquels au plus k instructions conditionnelles peuvent être erronées. Ensuite, nous calculons les MCSs (Minimal Correction Sets) de taille limitée sur chacun de ces chemins. La suppression de l'un de ces ensembles de contraintes donne un sous-ensemble satisfiable maximal, en d'autres termes, un sous-ensemble maximal de contraintes satisfaisant la postcondition. Pour calculer les MCSs, nous étendons l'algorithme générique proposé par Liffiton et Sakallah dans le but de traiter plus efficacement des programmes avec des instructions numériques. Cette approche a été évaluée expérimentalement sur des programmes académiques et réalistes
A model checker can produce a counter-example trace for an erroneous program, which is often difficult to exploit to locate errors in the source code. In this thesis, we propose an error localization algorithm based on counter-examples, named LocFaults, combining Bounded Model Checking (BMC) with Constraint Satisfaction Problems (CSP). This algorithm analyzes the paths of the CFG (Control Flow Graph) of the erroneous program to compute the subsets of suspicious instructions that allow the program to be corrected. Indeed, we generate a system of constraints for the paths of the control flow graph for which at most k conditional statements can be wrong. Then we compute the MCSs (Minimal Correction Sets) of limited size on each of these paths. The removal of one of these sets of constraints yields a maximal satisfiable subset, in other words a maximal subset of constraints satisfying the postcondition. To compute the MCSs, we extend the generic algorithm proposed by Liffiton and Sakallah in order to deal more efficiently with programs with numerical instructions. This approach has been experimentally evaluated on a set of academic and realistic programs
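A small brute-force sketch illustrates the notion of a Minimal Correction Set on an invented, deliberately inconsistent constraint system: an MCS is a minimal set of constraints whose removal makes the rest satisfiable. The constraints and domains below are illustrative; the thesis relies on a far more efficient extension of Liffiton and Sakallah's algorithm.

```python
from itertools import combinations, product

# Toy MCS enumeration: constraints are predicates over two integer variables;
# we search for minimal removal sets by brute force over a finite domain.
# The (deliberately inconsistent) system is invented for illustration.

constraints = {
    "c1": lambda x, y: x > 2,
    "c2": lambda x, y: y == x + 1,
    "c3": lambda x, y: x < 2,        # contradicts c1
    "c4": lambda x, y: y < 10,
}

def satisfiable(names, domain=range(-5, 15)):
    return any(all(constraints[n](x, y) for n in names)
               for x, y in product(domain, repeat=2))

def minimal_correction_sets(names):
    names, mcses = list(names), []
    for size in range(len(names) + 1):
        for removed in combinations(names, size):
            rest = [n for n in names if n not in removed]
            # Keep only removal sets that work and contain no smaller MCS.
            if satisfiable(rest) and not any(set(m) <= set(removed) for m in mcses):
                mcses.append(removed)
    return mcses

print(minimal_correction_sets(constraints))  # -> [('c1',), ('c3',)]
```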
43

Ngo, Tu. "Decision support system for enhancing health care services to reduce potentially avoidable hospitalizations." Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTS035.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Les hospitalisations potentiellement évitables (HPE) sont les admissions à l'hôpital qui auraient pu être évitées grâce à des traitements rapides et efficaces. Les taux élevés de HPE sont associés à de nombreux facteurs. Ces facteurs comprennent des taux de mortalité élevés, une faible densité de médecins de soins primaires, un faible revenu médian ou de faibles niveaux d'éducation, ainsi que des caractéristiques organisationnelles des systèmes de santé telles qu'une mauvaise coordination entre les fournisseurs de soins de santé. D'un autre côté, ces hospitalisations évitables sont associées à un coût de plusieurs centaines de millions d'euros pour l'assurance maladie. En d'autres termes, la réduction des HPE améliore non seulement la qualité de vie des patients, mais pourrait également faire économiser des coûts substantiels liés aux traitements des patients. Par conséquent, les autorités sanitaires sont très intéressées par des solutions améliorant les services de santé pour réduire les HPE. Certaines études récentes en France ont suggéré que l'augmentation du nombre d'infirmières dans certaines zones géographiques pourrait conduire à une réduction des taux de HPE. Dans notre approche, après avoir évalué certaines méthodes de régression courantes, nous avons étendu la machine à vecteurs de support pour la régression à l'information spatiale. Cette approche nous permet de sélectionner non seulement les zones géographiques, mais aussi le nombre d'infirmières à ajouter dans ces zones pour obtenir la plus forte réduction du nombre de HPE. Concrètement, notre approche est appliquée en Occitanie, en France, et les zones géographiques mentionnées ci-dessus sont les bassins de vie (BV). D'un autre côté, les températures extrêmes pourraient être un facteur potentiel associé à des taux élevés de HPE. Par conséquent, une partie de nos travaux consiste à mesurer l'impact des températures extrêmes sur les HPE ainsi qu'à inclure ces données environnementales dans notre approche ci-dessus. Dans nos travaux, nous avons utilisé les valeurs de température mesurées toutes les heures par des capteurs dans les stations météorologiques. Cependant, ces valeurs sont parfois discontinues et nous avons besoin d'une méthode d'imputation pour ces valeurs manquantes. Dans la littérature, les deux approches les plus populaires traitant de cette étape exploitent soit la composante spatiale, soit la composante temporelle des données de température. Respectivement, ces approches sont des méthodes d'interpolation spatiale telles que la pondération par l'inverse de la distance (IDW) et des modèles de séries temporelles tels que la moyenne mobile intégrée autorégressive (ARIMA). Pour nous aider à choisir la méthode la plus fiable, nous proposons une nouvelle approche qui combine les deux dimensions pour améliorer les performances en termes de qualité. Les résultats montrent que, par rapport aux méthodes IDW et ARIMA, notre approche fonctionne mieux pour 100 % et 99,8 % (604 sur 605) des stations météorologiques respectivement. De plus, l'amélioration de la coordination entre les prestataires de soins de santé pourrait conduire à la réduction des HPE. Dans le cas où les patients changeraient d'hôpital pour des traitements, afin d'assurer des traitements efficaces et de haute qualité, les médecins devraient avoir accès aux dossiers médicaux des patients dans les hôpitaux précédents. Par conséquent, nous proposons une approche basée sur les graphes pour résoudre ce problème. 
En particulier, nous modélisons d'abord les flux de patients entre les hôpitaux sous forme de graphe pondéré non orienté dans lequel les nœuds et les arêtes représentent respectivement les hôpitaux et la quantité de flux de patients. Ensuite, après avoir évalué deux méthodes de regroupement de graphes courantes, nous personnalisons celle qui convient le mieux à nos besoins. Notre résultat fournit des informations intéressantes par rapport aux approches basées sur les limites administratives
Potentially avoidable hospitalizations (PAHs) are the hospital admissions that could have been prevented with timely and effective treatments. High rates of PAHs are associated with many factors, including high mortality rates, a low density of primary care physicians, lack of continuity of care, lack of access to primary care, low median income or low education levels, as well as organizational features of health systems such as poor coordination between health care providers. On the other side, in France, there are about 300,000 PAHs every year. These preventable hospitalizations are associated with a cost of several hundred million Euros for the Health Insurance. In other words, reducing PAHs not only enhances patients’ quality of life but could also save substantial treatment costs. Therefore, health authorities are highly interested in solutions improving health care services to reduce PAHs. Some recent studies in France have suggested that increasing the number of nurses in selected geographic areas could lead to a reduction of the rates of PAHs in those areas. In our approach, after evaluating some common regression methods, we extended the support vector machine for regression to spatial information. This approach allows us to select not only the geographic areas but also the number of nurses to be added in these areas for the biggest reduction in the number of PAHs. Specifically, our approach is applied in the Occitanie region, France, and the geographic areas mentioned above are the living areas (fr. bassins de vie - BVs). However, our approach can be extended to the national level or to other regions or countries. On the other side, extreme temperature could be one potential factor associated with high rates of PAHs. Therefore, a part of our work is to measure the impact of extreme temperature on PAHs as well as to include this environmental data in the approach above. In our work, we used the temperature values measured hourly by sensors at the weather stations. However, these values are sometimes discontinuous and we need an imputation method for the missing values. In the literature, the two most popular approaches dealing with this processing step exploit either the spatial component or the temporal component of the temperature data. Respectively, these approaches are spatial interpolation methods such as Inverse Distance Weighting (IDW) and time-series models such as the Autoregressive Integrated Moving Average (ARIMA). To help us select the more reliable method, we first compare the performances of both approaches. In addition, we propose a novel approach that combines both dimensions to improve the performance in terms of quality. The results show that, compared with the IDW and ARIMA methods, our approach performs better at 100% and 99.8% (604 out of 605) of the weather stations, respectively. Finally, as mentioned at the beginning, improving the coordination between health care providers could lead to a reduction of PAHs. In the cases where patients change hospitals for treatments, to ensure efficient and high-quality treatments, doctors need access to the patients’ medical records at the previous hospitals. Therefore, health authorities are interested in building hospital communities whereby medical records can be shared among the hospitals. We propose a graph-based approach to address this problem. 
Particularly, we first model the flows of patients between hospitals as an undirected weighted graph in which nodes and edges represent the hospitals and the volume of patient flows, respectively. Then, after evaluating two common graph clustering methods, we customize the more suitable one for our needs. Our result provides interesting insights compared with approaches based on administrative boundaries
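A minimal sketch can illustrate the spatial side of the imputation step, Inverse Distance Weighting: a missing reading is estimated as a distance-weighted average of neighbouring stations. The station coordinates and readings below are invented; the thesis combines this spatial view with a temporal, ARIMA-style view to improve quality.

```python
import math

# Minimal IDW sketch for imputing a missing hourly temperature from
# neighbouring stations. Coordinates and readings are invented; treating
# lat/lon as planar coordinates is an acceptable shortcut at this scale.

neighbours = [  # (latitude, longitude, temperature in °C)
    (43.60, 3.88, 21.4),
    (43.30, 3.50, 22.1),
    (44.00, 4.10, 19.8),
]

def idw(lat, lon, samples, power=2.0):
    num = den = 0.0
    for slat, slon, temp in samples:
        d = math.hypot(lat - slat, lon - slon)
        if d == 0:
            return temp  # exact station match: no interpolation needed
        w = 1.0 / d ** power  # closer stations weigh more
        num += w * temp
        den += w
    return num / den

print(f"imputed value: {idw(43.61, 3.87, neighbours):.1f} °C")
```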
44

Ferdjoukh, Adel. "Une approche déclarative pour la génération de modèles." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT325/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Disposer de données dans le but de valider ou tester une approche ou un concept est d'une importance primordiale dans beaucoup de domaines différents. Malheureusement, ces données ne sont pas toujours disponibles, sont coûteuses à obtenir, ou bien ne répondent pas à certaines exigences de qualité, ce qui les rend inutiles dans certains cas de figure. Un générateur automatique de données est un bon moyen pour obtenir facilement et rapidement des données valides, de différentes tailles, pertinentes et diversifiées. Dans cette thèse, nous proposons une nouvelle approche complète, dirigée par les modèles et basée sur la programmation par contraintes, pour la génération de données
Having data available in order to validate or test an approach or a concept is of paramount importance in many different fields. Unfortunately, such data is not always available, is costly to obtain, or does not meet quality requirements, which makes it useless in some cases. An automated data generator is a good way to quickly and easily obtain valid, relevant and diverse data of different sizes. In this thesis, we propose a novel and complete model-driven approach, based on constraint programming, for automated data generation
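A toy sketch conveys the constraint-based generation idea: enumerate candidate instances and keep only those that satisfy the declared constraints. The variables, bounds and constraints below are invented; the thesis compiles a metamodel and its constraints into a real CSP handled by a solver.

```python
from itertools import product

# Toy constraint-based generator: candidate "models" are reduced to two
# integer features, and only those satisfying the constraints are yielded.
# Variables, bounds and constraints are invented for illustration.

def generate_models(max_classes=5, max_refs=6):
    for n_classes, n_refs in product(range(1, max_classes + 1),
                                     range(0, max_refs + 1)):
        connected = n_refs >= n_classes - 1  # enough references to connect all
        not_dense = n_refs <= 2 * n_classes  # cap the reference density
        if connected and not_dense:
            yield {"classes": n_classes, "references": n_refs}

for model in list(generate_models())[:5]:
    print(model)
```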
45

Zalila, Faiez. "Methods and tools for the integration of formal verification in domain-specific languages." Phd thesis, Toulouse, INPT, 2014. http://oatao.univ-toulouse.fr/14159/1/zalila.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Domain specific Modeling Languages (DSMLs) are increasingly used in the early phases of the development of complex systems, in particular for safety-critical systems. The goal is to be able to reason early in the development on these models and, in particular, to carry out verification and validation (V&V) activities. A widely used technique is exhaustive behavioral model verification using model checking: a translational semantics is defined to build a formal model from DSML-conforming models, so that the powerful tools available for the formal domain can be reused. Defining a translational semantics, expressing the formal properties to be assessed and analysing the verification results require such an expertise in formal methods that it restricts their adoption and may discourage designers. It is thus necessary to build, for each DSML, a toolchain which hides the formal aspects from DSML end-users. The goal of this thesis is to ease the development of such verification toolchains. Our contribution includes: 1) expressing behavioral properties at the DSML level by relying on TOCL (Temporal Object Constraint Language), a temporal extension of OCL; 2) an automated transformation of these properties into formal properties, reusing the key elements of the translational semantics; 3) the feedback of verification results thanks to a higher-order transformation and a language which defines mappings between the DSML and formal levels; 4) the implementation of the associated process. Our approach was validated by experimentation on a subset of the development process modeling language SPEM and on the Ladder Diagram language used to specify programmable logic controllers (PLCs), and by the integration of a formal intermediate language (FIACRE) into the verification toolchain. This last point reduces the semantic gap between DSMLs and formal domains.
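As a toy illustration of the kind of behavioral property a designer would state at the model level, the sketch below checks a simple response pattern ("every request is eventually followed by an ack") over a finite execution trace. The event names are invented and the count-based matching is a simplification; in the thesis such properties are written in TOCL and compiled automatically to the formal level.

```python
# Toy check of a response property over a finite execution trace: every
# 'request' must eventually be matched by a later 'ack'. Event names are
# invented; each ack is counted as answering exactly one pending request.

def responds(trace, trigger, response):
    pending = 0
    for event in trace:
        if event == trigger:
            pending += 1
        elif event == response and pending > 0:
            pending -= 1
    return pending == 0  # no request left unanswered at the end

ok_trace  = ["init", "request", "work", "ack", "request", "ack"]
bad_trace = ["init", "request", "work", "request", "ack"]
print(responds(ok_trace, "request", "ack"))   # True
print(responds(bad_trace, "request", "ack"))  # False
```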
46

Ben, Zakour Asma. "Extraction des utilisations typiques à partir de données hétérogènes en vue d'optimiser la maintenance d'une flotte de véhicules." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14539/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Le travail produit s'inscrit dans un cadre industriel piloté par la société 2MoRO Solutions. La réalisation présentée dans cette thèse doit servir à l'élaboration d'un service à haute valeur, permettant aux exploitants aéronautiques d'optimiser leurs actions de maintenance. Les résultats obtenus permettent d'intégrer et de regrouper les tâches de maintenance en vue de minimiser la durée d'immobilisation des aéronefs et d'en réduire les risques de panne.La méthode que nous proposons comporte trois étapes : (i) une étape de rationalisation des séquences afin de pouvoir les combiner [...]
The present work is part of an industrial project driven by the 2MoRO Solutions company. It aims to develop a high-value service enabling aircraft operators to optimize their maintenance actions. Given the large amount of data available around aircraft exploitation, we aim to analyse the historical events recorded for each aircraft in order to extract maintenance forecasts. The results are used to integrate and consolidate maintenance tasks in order to minimize aircraft downtime and the risk of failure. The proposed method involves three steps: (i) streamlining information in order to combine them, (ii) organizing this data for easy analysis and (iii) an extraction step of useful knowledge in the form of interesting sequences. [...]
47

Abboud, Yacine. "Fouille de motifs : entre accessibilité et robustesse." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0176/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
L'information occupe désormais une place centrale dans notre vie quotidienne, elle est à la fois omniprésente et facile d'accès. Pourtant, l'extraction de l'information à partir des données est un processus souvent inaccessible. En effet, même si les méthodes de fouilles de données sont maintenant accessibles à tous, les résultats de ces fouilles sont souvent complexes à obtenir et à exploiter pour l'utilisateur. La fouille de motifs combinée à l'utilisation de contraintes est une direction très prometteuse de la littérature pour à la fois améliorer l'efficience de la fouille et rendre ses résultats plus appréhendables par l'utilisateur. Cependant, la combinaison de contraintes désirée par l'utilisateur est souvent problématique car, elle n'est pas toujours adaptable aux caractéristiques des données fouillées tel que le bruit. Dans cette thèse, nous proposons deux nouvelles contraintes et un algorithme pour pallier ce problème. La contrainte de robustesse permet de fouiller des données bruitées en conservant la valeur ajoutée de la contrainte de contiguïté. La contrainte de clôture allégée améliore l'appréhendabilité de la fouille de motifs tout en étant plus résistante au bruit que la contrainte de clôture classique. L'algorithme C3Ro est un algorithme générique de fouille de motifs séquentiels intégrant de nombreuses contraintes, notamment les deux nouvelles contraintes que nous avons introduites, afin de proposer à l'utilisateur la fouille la plus efficiente possible tout en réduisant au maximum la taille de l'ensemble des motifs extraits. C3Ro rivalise avec les meilleurs algorithmes de fouille de motifs de la littérature en termes de temps d'exécution tout en consommant significativement moins de mémoire. C3Ro a été expérimenté dans le cadre de l’extraction de compétences présentes dans les offres d'emploi sur le Web
Information now occupies a central place in our daily lives; it is both ubiquitous and easy to access. Yet extracting information from data is often an inaccessible process. Indeed, even though data mining methods are now accessible to all, the results of such mining are often complex to obtain and to exploit for the user. Pattern mining combined with the use of constraints is a very promising direction in the literature, both to improve the efficiency of the mining and to make its results more apprehensible to the user. However, the combination of constraints desired by the user is often problematic because it does not always fit the characteristics of the mined data, such as noise. In this thesis, we propose two new constraints and an algorithm to overcome this issue. The robustness constraint makes it possible to mine noisy data while preserving the added value of the contiguity constraint. The extended closedness constraint improves the apprehensibility of the set of extracted patterns while being more noise-resistant than the conventional closedness constraint. The C3Ro algorithm is a generic sequential pattern mining algorithm that integrates many constraints, including the two new constraints we introduce, in order to provide the user with the most efficient mining possible while minimizing the size of the set of extracted patterns. C3Ro competes with the best pattern mining algorithms in the literature in terms of execution time while consuming significantly less memory. C3Ro has been evaluated on the extraction of competencies from job postings on the Web
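A small sketch illustrates what a robustness-style constraint buys: an occurrence test that keeps the spirit of contiguity while tolerating a bounded number of noisy items between pattern elements. The sequences and gap budget are invented; C3Ro integrates this kind of constraint with many others inside a much more efficient search.

```python
# Toy occurrence test for a "robust contiguity" constraint: the pattern must
# appear in order with at most max_gaps noisy items inserted between its
# elements. Data is invented for illustration.

def occurs_with_gaps(pattern, sequence, max_gaps):
    def search(p_idx, s_idx, gaps):
        if p_idx == len(pattern):
            return True                       # whole pattern matched
        if s_idx == len(sequence) or gaps > max_gaps:
            return False                      # sequence or gap budget exhausted
        if sequence[s_idx] == pattern[p_idx]:
            if search(p_idx + 1, s_idx + 1, gaps):
                return True
        # Skipping an item costs one gap once the pattern has started.
        return search(p_idx, s_idx + 1, gaps + (1 if p_idx > 0 else 0))

    return search(0, 0, 0)

print(occurs_with_gaps("abc", "axbyc", max_gaps=2))  # True: two noise items
print(occurs_with_gaps("abc", "axxbc", max_gaps=1))  # False: would need two gaps
```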
48

Boulil, Kamal. "Une approche automatisée basée sur des contraintes d’intégrité définies en UML et OCL pour la vérification de la cohérence logique dans les systèmes SOLAP : applications dans le domaine agri-environnemental." Thesis, Clermont-Ferrand 2, 2012. http://www.theses.fr/2012CLF22285/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Les systèmes d'Entrepôts de Données et OLAP spatiaux (EDS et SOLAP) sont des technologies d'aide à la décision permettant l'analyse multidimensionnelle de gros volumes de données spatiales. Dans ces systèmes, la qualité de l'analyse dépend de trois facteurs : la qualité des données entreposées, la qualité des agrégations et la qualité de l’exploration des données. La qualité des données entreposées dépend de critères comme la précision, l'exhaustivité et la cohérence logique. La qualité d'agrégation dépend de problèmes structurels (e.g. les hiérarchies non strictes qui peuvent engendrer le comptage en double des mesures) et de problèmes sémantiques (e.g. agréger les valeurs de température par la fonction Sum peut ne pas avoir de sens considérant une application donnée). La qualité d'exploration est essentiellement affectée par des requêtes utilisateur inconsistantes (e.g. quelles ont été les valeurs de température en URSS en 2010 ?). Ces requêtes peuvent engendrer des interprétations erronées des résultats. Cette thèse s'attaque aux problèmes d'incohérence logique qui peuvent affecter les qualités de données, d'agrégation et d'exploration. L'incohérence logique est définie habituellement comme la présence de contradictions dans les données. Elle est typiquement contrôlée au moyen de Contraintes d'Intégrité (CI). Dans cette thèse nous étendons d'abord la notion de CI (dans le contexte des systèmes SOLAP) afin de prendre en compte les incohérences relatives aux agrégations et requêtes utilisateur. Pour pallier les limitations des approches existantes concernant la définition des CI SOLAP, nous proposons un Framework basé sur les langages standards UML et OCL. Ce Framework permet la spécification conceptuelle et indépendante des plates-formes des CI SOLAP et leur implémentation automatisée. Il comporte trois parties : (1) Une classification des CI SOLAP. (2) Un profil UML implémenté dans l'AGL MagicDraw, permettant la représentation conceptuelle des modèles des systèmes SOLAP et de leurs CI. (3) Une implémentation automatique qui est basée sur les générateurs de code Spatial OCL2SQL et UML2MDX qui permet de traduire les spécifications conceptuelles en code au niveau des couches EDS et serveur SOLAP. Enfin, les contributions de cette thèse ont été appliquées dans le cadre de projets nationaux de développement d'applications (S)OLAP pour l'agriculture et l'environnement
Spatial Data Warehouse (SDW) and Spatial OLAP (SOLAP) systems are Business Intelligence (BI) technologies allowing for interactive multidimensional analysis of huge volumes of spatial data. In such systems, the quality of analysis mainly depends on three components: the quality of warehoused data, the quality of data aggregation, and the quality of data exploration. The warehoused data quality depends on elements such as accuracy, completeness and logical consistency. The data aggregation quality is affected by structural problems (e.g., non-strict dimension hierarchies that may cause double-counting of measure values) and semantic problems (e.g., summing temperature values does not make sense in many applications). The data exploration quality is mainly affected by inconsistent user queries (e.g., what are the temperature values in the USSR in 2010?) leading to possibly meaningless interpretations of query results. This thesis addresses the problems of logical inconsistency that may affect the data, aggregation and exploration qualities in SOLAP. Logical inconsistency is usually defined as the presence of incoherencies (contradictions) in data; it is typically controlled by means of Integrity Constraints (IC). In this thesis, we extend the notion of IC (in the SOLAP domain) in order to take into account aggregation and query incoherencies. To overcome the limitations of existing approaches concerning the definition of SOLAP IC, we propose a framework based on the standard languages UML and OCL. Our framework permits a platform-independent conceptual design and an automatic implementation of SOLAP IC; it consists of three parts: (1) a SOLAP IC classification, (2) a UML profile implemented in the CASE tool MagicDraw, allowing for the conceptual design of SOLAP models and their IC, (3) an automatic implementation based on the code generators Spatial OCL2SQL and UML2MDX, which allows transforming the conceptual specifications into code. Finally, the contributions of this thesis have been experimented and validated in the context of French national projects aiming at developing (S)OLAP applications for agriculture and the environment
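A toy check conveys the idea of a semantic aggregation constraint of the kind the framework classifies: some aggregation functions are meaningless for some measures, such as summing temperatures. The measure types and allowed functions below are invented; the thesis states such constraints in OCL on the conceptual model and generates the implementation.

```python
# Toy semantic aggregation constraint: each measure declares its allowed
# aggregation functions. Measures and rules are invented for illustration.

AGGREGATION_RULES = {
    "temperature": {"avg", "min", "max"},         # non-additive measure
    "rainfall":    {"sum", "avg", "min", "max"},  # additive measure
}

def check_query(measure, aggregation):
    allowed = AGGREGATION_RULES.get(measure, set())
    if aggregation not in allowed:
        raise ValueError(
            f"aggregation '{aggregation}' violates the integrity "
            f"constraint on measure '{measure}' (allowed: {sorted(allowed)})")
    return f"{aggregation}({measure}) is a valid aggregation"

print(check_query("rainfall", "sum"))
try:
    print(check_query("temperature", "sum"))  # rejected: Sum(temperature)
except ValueError as err:
    print(err)
```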
49

Merabet, Massinissa. "Solutions optimales des problèmes de recouvrement sous contraintes sur le degré des nœuds." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20138/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Le travail que nous développons dans le cadre de cette thèse s'articule autour des problèmes de recherche de structure de recouvrement de graphes sous contrainte sur le degré des sommets. Comme l'arbre de recouvrement couvre les sommets d'un graphe connexe avec un minimum de liens, il est généralement proposé comme solution à ce type de problèmes. Cependant, pour certaines applications telles que le routage dans les réseaux optiques, les solutions ne sont pas nécessairement des sous-graphes. Nous supposons dans cette thèse que la contrainte sur le degré est due à une capacité limitée instantanée des sommets et que la seule exigence sur le recouvrement est sa connexité. Dans ce cas, la solution peut être différente d'un arbre. Nous reformulons ces problèmes de recouvrement en nous appuyant sur une extension du concept d'arbre appelée hiérarchie de recouvrement. Notre objectif principal est de démontrer son intérêt vis-à-vis de l'arbre en termes de faisabilité et de coût du recouvrement. Nous considérons deux types de contraintes sur le degré : des bornes sur le degré des sommets ou une borne sur le nombre de sommets de branchement et cherchons dans les deux cas un recouvrement de coût minimum. Nous illustrons aussi l'applicabilité des hiérarchies en étudiant un problème prenant davantage en compte la réalité du routage optique. Pour ces différents problèmes NP-difficiles, nous montrons, tant sur le coût des solutions optimales que sur la garantie de performance des solutions approchées, l'intérêt des hiérarchies de recouvrement. Ce constat se voit conforté par des expérimentations sur des graphes aléatoires
The work conducted in this thesis focuses on minimum spanning problems in graphs under constraints on the vertex degrees. As a spanning tree covers the vertices of a connected graph with a minimum number of links, it is generally proposed as a solution for this kind of problem. However, for some applications, such as routing in optical networks, the solution is not necessarily a sub-graph. In this thesis, we assume that the degree constraints are due to a limited instantaneous capacity of the vertices and that the only pertinent requirement on the spanning structure is its connectivity. In that case, the solution may be different from a tree. We propose a reformulation of this kind of spanning problem. To find the optimal coverage of the vertices, an extension of the tree concept called hierarchy is proposed. Our main purpose is to show its interest with respect to the tree in terms of feasibility and cost of the coverage. We take into account two types of degree constraints: either an upper bound on the degree of each vertex, or an upper bound on the number of branching vertices. In both cases we search for a minimum cost spanning hierarchy. Besides, we also illustrate the applicability of hierarchies by studying a problem that better reflects the reality of optical routing. For all these NP-hard problems, we show the interest of spanning hierarchies, both for the cost of optimal solutions and for the performance guarantees of approximate solutions. These results are confirmed by several experiments on random graphs
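A Prim-like heuristic sketch shows the effect of a degree bound on spanning structures (the graph below is invented): an edge is only accepted while both endpoints stay under the bound. When no degree-feasible spanning tree exists, the hierarchies studied in the thesis relax the sub-graph requirement by allowing a vertex to be duplicated.

```python
import heapq

# Prim-like heuristic for a degree-bounded spanning tree on an invented
# graph: greedily grow the tree, rejecting edges that would exceed the
# degree bound. Returns None when the heuristic finds no feasible tree.

def bounded_degree_spanning(edges, n, max_degree):
    adj = {u: [] for u in range(n)}
    for u, v, w in edges:
        adj[u].append((w, u, v))
        adj[v].append((w, v, u))
    degree, in_tree, tree = [0] * n, {0}, []
    heap = list(adj[0])
    heapq.heapify(heap)
    while heap and len(in_tree) < n:
        w, u, v = heapq.heappop(heap)
        if v in in_tree or degree[u] >= max_degree:
            continue  # edge unusable: endpoint covered or degree cap reached
        in_tree.add(v)
        degree[u] += 1
        degree[v] += 1
        tree.append((u, v, w))
        for e in adj[v]:
            heapq.heappush(heap, e)
    return tree if len(in_tree) == n else None

edges = [(0, 1, 1), (0, 2, 1), (0, 3, 1), (1, 2, 2), (2, 3, 2)]
print(bounded_degree_spanning(edges, 4, max_degree=2))
# -> [(0, 1, 1), (0, 2, 1), (2, 3, 2)]: vertex 0 respects the bound of 2
```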
50

Julea, Andreea Maria. "Extraction de motifs spatio-temporels dans des séries d'images de télédétection : application à des données optiques et radar." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00652810.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Les Séries Temporelles d'Images Satellitaires (STIS), visant la même scène en évolution, sont très intéressantes parce qu'elles acquièrent conjointement des informations temporelles et spatiales. L'extraction de ces informations pour aider les experts dans l'interprétation des données satellitaires devient une nécessité impérieuse. Dans ce mémoire, nous exposons comment adapter l'extraction de motifs séquentiels fréquents à ce contexte spatio-temporel dans le but d'identifier des ensembles de pixels connexes qui partagent la même évolution temporelle. La démarche originale est basée sur la conjonction de la contrainte de support avec différentes contraintes de connexité qui peuvent filtrer ou élaguer l'espace de recherche pour obtenir efficacement des motifs séquentiels fréquents groupés (MSFG) significatifs pour l'utilisateur. La méthode d'extraction proposée est non supervisée et opère au niveau pixel. Pour vérifier la généricité du concept de MSFG et la capacité de la méthode proposée à offrir des résultats intéressants à partir des STIS, des expérimentations sont réalisées sur des données réelles optiques et radar.
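A toy sketch conveys the "grouped" part of the MSFG concept: pixels sharing the same symbolic temporal evolution are grouped into connected regions of the image. The 3x3 grid of evolutions below is invented; the actual method mines frequent sequential patterns jointly with connectivity constraints rather than matching exact sequences.

```python
# Toy grouping of pixels by identical symbolic temporal evolution: a pattern
# is interesting only where the matching pixels form connected regions.
# The 3x3 grid of per-pixel evolutions is invented for illustration.

grid = [  # symbolic evolution of each pixel over time, e.g. land-cover codes
    ["AB", "AB", "CC"],
    ["AB", "AB", "CC"],
    ["DD", "CC", "CC"],
]

def connected_groups(grid, pattern):
    rows, cols = len(grid), len(grid[0])
    seen, groups = set(), []
    for start in ((r, c) for r in range(rows) for c in range(cols)
                  if grid[r][c] == pattern and (r, c) not in seen):
        stack, group = [start], set()
        while stack:  # flood fill over 4-connected matching pixels
            r, c = stack.pop()
            if (r, c) in group:
                continue
            group.add((r, c))
            seen.add((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == pattern:
                    stack.append((nr, nc))
        groups.append(group)
    return groups

print(connected_groups(grid, "AB"))  # one connected group of 4 pixels
print(connected_groups(grid, "CC"))  # one connected group of 4 pixels
```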
