Theses on the topic "Systèmes des données echantillones"
Create an accurate citation in APA, MLA, Chicago, Harvard and other styles
Consult the top 50 theses for your research on the topic "Systèmes des données echantillones".
Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.
Browse theses on a wide variety of disciplines and organize your bibliography correctly.
Falcón, Prado Ricardo. "Active vibration control of flexible structures under input saturation through delay-based controllers and anti-windup compensators". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG042.
In this work, the problem of active vibration control of flexible mechanical structures is addressed through infinite- and finite-dimensional techniques. The compared approaches are adjusted for an output feedback controller based on delayed proportional actions, through a quasipolynomial-based approach, and an optimal H∞ controller design computed with an LMI approach. They are compared in order to analyze their ability to damp certain vibrational modes in the frequency band of interest and to avoid the so-called "spillover" phenomenon. These controllers are synthesized from a finite-dimensional model, derived from a finite element analysis of the mechanical structure, combined with some reduction methods. The flexible structures considered here are, firstly, a flexible aluminium beam in the Euler-Bernoulli configuration and, secondly, an axisymmetric membrane. Both of them are equipped with two piezoelectric patches that are bonded and collocated on each face of the structure. We intend to examine and discuss the aforementioned performances in both simulation and experimental environments.
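Illustrative sketch (not from the thesis): the following toy Python simulation of a single vibration mode shows why a proportional action applied with a well-chosen delay can act as artificial damping, which is the general idea behind the delay-based controllers mentioned above. The gain, delay and mode parameters are invented for the example.

    import numpy as np

    def simulate_beam_mode(gain=5.0, delay=0.75, zeta=0.01, omega=2 * np.pi,
                           t_final=12.0, dt=1e-3):
        """One lightly damped vibration mode under the delayed proportional action
        u(t) = -gain * x(t - delay). Toy single-mode stand-in, not a model of the
        structures studied in the thesis."""
        n = int(t_final / dt)
        d = int(delay / dt)
        x, v = 1.0, 0.0                      # initial deflection, at rest
        hist = np.zeros(n)                   # past positions, for the delayed term
        for k in range(n):
            u = -gain * hist[k - d] if (gain and k >= d) else 0.0
            a = -2 * zeta * omega * v - omega ** 2 * x + u
            v += dt * a                      # semi-implicit Euler step
            x += dt * v
            hist[k] = x
        return hist

    open_loop = simulate_beam_mode(gain=0.0)
    closed_loop = simulate_beam_mode(gain=5.0)
    # Residual amplitude over the last second: the delayed action damps the mode.
    print(np.abs(open_loop[-1000:]).max(), np.abs(closed_loop[-1000:]).max())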
Jawad, Mohamed. "Confidentialité de données dans les systèmes P2P". Phd thesis, Université de Nantes, 2011. http://tel.archives-ouvertes.fr/tel-00638721.
Janyene, Abderrahmane. "Validation de données des systèmes dynamiques linéaires". Nancy 1, 1987. http://www.theses.fr/1987NAN10190.
Abdali, Abdelkebir. "Systèmes experts et analyse de données industrielles". Lyon, INSA, 1992. http://www.theses.fr/1992ISAL0032.
To analyze industrial process behavior, many kinds of information are needed. As they are mostly numerical, statistical and data analysis methods are well suited to this activity. Their results must be interpreted together with other knowledge about the analyzed process. Our work falls within the framework of the application of Artificial Intelligence techniques to Statistics. Its aim is to study the feasibility and development of statistical expert systems in an industrial process setting. The prototype ALADIN is a knowledge-based system designed to be an intelligent assistant helping a non-specialist user analyze data collected on industrial processes; written in Turbo-Prolog, it is coupled with the statistical package MODULAD. The architecture of this system is flexible, combining knowledge about plants in general, the studied process and statistical methods. Its validation is performed on continuous manufacturing processes (cement and cast iron processes). At present, we have limited ourselves to Principal Component Analysis problems.
Tos, Uras. "Réplication de données dans les systèmes de gestion de données à grande échelle". Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30066/document.
In recent years, the growing popularity of large-scale applications, e.g. scientific experiments, the Internet of Things and social networking, has led to the generation of large volumes of data. The management of this data presents a significant challenge as the data is heterogeneous and distributed on a large scale. In traditional systems, including distributed and parallel systems, peer-to-peer systems and grid systems, meeting objectives such as achieving acceptable performance while ensuring good availability of data are major challenges for service providers, especially when the data is distributed around the world. In this context, data replication, as a well-known technique, allows: (i) increased data availability, (ii) reduced data access costs, and (iii) improved fault tolerance. However, replicating data on all nodes is an unrealistic solution as it generates significant bandwidth consumption in addition to exhausting limited storage space. Defining good replication strategies is a solution to these problems. The data replication strategies that have been proposed for the traditional systems mentioned above are intended to improve performance for the user, and they are difficult to adapt to cloud systems. Indeed, cloud providers aim to generate a profit in addition to meeting tenant requirements. Meeting the performance expectations of the tenants without sacrificing the provider's profit, as well as managing resource elasticity with a pay-as-you-go pricing model, are the fundamentals of cloud systems. In this thesis, we propose a data replication strategy that satisfies the requirements of the tenant, such as performance, while guaranteeing the economic profit of the provider. Based on a cost model, we estimate the response time required to execute a distributed database query. Data replication is only considered if, for any query, the estimated response time exceeds a threshold previously set in the contract between the provider and the tenant. Then, the planned replication must also be economically beneficial to the provider. In this context, we propose an economic model that takes into account both the expenditures and the revenues of the provider during the execution of any particular database query. Once data replication is decided upon, a heuristic placement approach is used to find the placement of new replicas in order to reduce the access time. In addition, a dynamic adjustment of the number of replicas is adopted to allow elastic management of resources. The proposed strategy is validated in an experimental evaluation carried out in a simulation environment. Compared with another data replication strategy proposed for cloud systems, the analysis of the obtained results shows that the two compared strategies meet the performance objective for the tenant. Nevertheless, with our strategy, a data replica is created only if this replication is profitable for the provider.
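As a hedged illustration of the decision rule summarized in this abstract, here is a minimal Python sketch: a replica is created only when the estimated response time violates the agreed threshold and the provider remains profitable. The names and figures are hypothetical; the thesis's actual cost and economic models are not reproduced here.

    from dataclasses import dataclass

    @dataclass
    class QueryEstimate:
        response_time_s: float   # estimated response time of the query
        revenue: float           # what the tenant pays for this query
        replication_cost: float  # extra storage/transfer cost of a new replica
        saved_penalty: float     # SLA penalties avoided thanks to the replica

    def should_replicate(q: QueryEstimate, sla_threshold_s: float) -> bool:
        """Replicate only if the SLA is violated AND the provider stays profitable."""
        sla_violated = q.response_time_s > sla_threshold_s
        provider_profit = q.revenue + q.saved_penalty - q.replication_cost
        return sla_violated and provider_profit > 0.0

    # Hypothetical numbers, for illustration only.
    q = QueryEstimate(response_time_s=4.2, revenue=0.05,
                      replication_cost=0.02, saved_penalty=0.03)
    print(should_replicate(q, sla_threshold_s=3.0))  # True: slow query, profitable replica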
Voisard, Agnès. "Bases de données géographiques : du modèle de données à l'interface utilisateur". Paris 11, 1992. http://www.theses.fr/1992PA112354.
Jaff, Luaï. "Structures de Données dynamiques pour les Systèmes Complèxes". Phd thesis, Université du Havre, 2007. http://tel.archives-ouvertes.fr/tel-00167104.
Texto completola porte vers des applications en économie via les systèmes complexes.
Les structures de données que nous avons étudiées sont les permutations qui ne contiennent pas de sous-suite croissante de longueur plus que deux, les tableaux de Young standards rectangles à deux lignes, les mots de Dyck et les codes qui lient ces structures de données.
Nous avons proposé un modèle économique qui modélise le bénéfice d'un compte bancaire dont l'énumération des configurations possible se fait à l'aide d'un code adapté. Une seconde application
concerne l'évolution de populations d'automate génétique . Ces populations sont étudiées par analyse spectrale et des expérimentations sont données sur des automates probabilistes dont l'évolution conduit à contrôler la dissipation par auto-régulation.
L'ensemble de ce travail a pour ambition de donner quelques outils calculatoires liés à la dynamique de structures de données pour analyser la complexité des systèmes.
Delot, Thierry. "Accès aux données dans les systèmes d'information pervasifs". Habilitation à diriger des recherches, Université de Valenciennes et du Hainaut-Cambresis, 2009. http://tel.archives-ouvertes.fr/tel-00443664.
Heraud, Nicolas. "Validation de données et observabilité des systèmes multilinéaires". Vandoeuvre-les-Nancy, INPL, 1991. http://www.theses.fr/1991INPL082N.
The aim of this study is to investigate data validation and observability of multilinear systems in order to diagnose the instrumentation of a process. Data validation and observability in linear systems are first reviewed, and these notions are extended to multilinear systems. Different methods, such as hierarchical computation, constraint linearization and penalization functions, are presented to estimate true values when some values are missing. After comparing the different methods, a recurrent calculation of estimates using constraint linearization and penalization functions is developed. An observable system is required in order to perform data validation; thus, we developed an original method based on arborescent diagrams. The data validation technique has been successfully applied to a complex uranium processing plant owned by the French company Total Compagnie Minière France. On this partially instrumented process, measurements of volumetric flow, density and uranium content in both the solid and liquid phases are available. The analysis first allows coherent data to be obtained; furthermore, it can be used to detect sensor faults.
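For readers unfamiliar with data validation (reconciliation), here is a minimal Python sketch of the standard linear case only, with an invented one-node flow balance; the hierarchical, linearization and penalization methods for multilinear systems developed in the thesis are not shown.

    import numpy as np

    def reconcile_linear(x, V, A):
        """Classical linear data reconciliation: adjust measurements x (covariance V)
        so that the balance constraints A @ x_hat = 0 hold exactly, with a minimal
        correction in the Mahalanobis sense."""
        x = np.asarray(x, dtype=float)
        residual = A @ x
        gain = V @ A.T @ np.linalg.inv(A @ V @ A.T)
        return x - gain @ residual

    # Toy example: one node, flow_in - flow_out1 - flow_out2 = 0.
    A = np.array([[1.0, -1.0, -1.0]])
    x = np.array([10.3, 6.1, 3.9])          # raw (inconsistent) measurements
    V = np.diag([0.2, 0.1, 0.1])            # measurement variances
    x_hat = reconcile_linear(x, V, A)
    print(x_hat, A @ x_hat)                 # the reconciled values balance exactly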
Meyer, Michel. "Validation de données sur des systèmes incomplètement observés". Toulouse, INPT, 1990. http://www.theses.fr/1990INPT032G.
Liroz, Miguel. "Partitionnement dans les systèmes de gestion de données parallèles". Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2013. http://tel.archives-ouvertes.fr/tel-01023039.
Petit, Loïc. "Gestion de flux de données pour l'observation de systèmes". Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00849106.
Liroz-Gistau, Miguel. "Partitionnement dans les Systèmes de Gestion de Données Parallèles". Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2013. http://tel.archives-ouvertes.fr/tel-00920615.
Michel, François. "Validation de systèmes répartis : symétries d'architecture et de données". Toulouse, INPT, 1996. http://www.theses.fr/1996INPT099H.
Robin, Jean-Marc. "L'estimation des systèmes de demandes sur données individuelles d'enquêtes". Paris 1, 1988. http://www.theses.fr/1988PA010043.
The fact that not all households purchase all commodities during short periods of time is a source of trouble when estimating demand systems from household survey data. To avoid possible selection biases when selecting only households that did purchase during the recording period, we have to model purchasing behaviors explicitly. The various models proposed are analysed and then tested on the data of the French "Enquête consommation alimentaire en 1981".
Villamil, Giraldo María del Pilar. "Service de localisation de données pour les systèmes P2P". Grenoble INPG, 2006. http://www.theses.fr/2006INPG0052.
This thesis is concerned with querying in massively distributed systems. It proposes a data location service for peer-to-peer systems based on distributed hash tables. These systems are characterized by a high degree of distribution, a large set of heterogeneous peers, a very dynamic configuration and a "blind" distribution of the data. These characteristics make it difficult to provide efficient data management; in fact, it is almost impossible to have a coherent view of the global state of the system. Moreover, locating data shared in the system using declarative queries becomes very problematic. The objective of the proposed location service is to provide query management adapted to the peer-to-peer context. To this end, the service uses distributed indexing techniques, query evaluation models, caching and materialized queries. The query evaluation models permitted a theoretical performance analysis, complemented by prototype experiments in a large-scale system (1,300 peers were deployed). The observed behaviour shows good properties, particularly regarding the scalability of the solution with the number of participating sites, which is one of the critical issues for providing successful massively distributed systems.
Liroz, Gistau Miguel. "Partitionnement dans les systèmes de gestion de données parallèles". Thesis, Montpellier 2, 2013. http://www.theses.fr/2013MON20117/document.
During the last years, the volume of data that is captured and generated has exploded. Advances in computer technologies, which provide cheap storage and increased computing capabilities, have allowed organizations to perform complex analyses on this data and to extract valuable knowledge from it. This trend has been very important not only for industry, but has also had a significant impact on science, where enhanced instruments and more complex simulations call for an efficient management of huge quantities of data. Parallel computing is a fundamental technique in the management of large quantities of data as it leverages the concurrent utilization of multiple computing resources. To take advantage of parallel computing, we need efficient data partitioning techniques which are in charge of dividing the whole data and assigning the partitions to the processing nodes. Data partitioning is a complex problem, as it has to consider different and often contradictory issues, such as data locality, load balancing and maximizing parallelism. In this thesis, we study the problem of data partitioning, particularly in scientific parallel databases that are continuously growing and in the MapReduce framework. In the case of scientific databases, we consider data partitioning in very large databases in which new data is appended continuously, e.g. astronomical applications. Existing approaches are limited since the complexity of the workload and continuous appends restrict the applicability of traditional approaches. We propose two partitioning algorithms that dynamically partition new data elements by a technique based on data affinity. Our algorithms enable us to obtain very good data partitions in a low execution time compared to traditional approaches. We also study how to improve the performance of the MapReduce framework using data partitioning techniques. In particular, we are interested in efficient data partitioning of the input datasets to reduce the amount of data that has to be transferred in the shuffle phase. We design and implement a strategy which, by capturing the relationships between input tuples and intermediate keys, obtains an efficient partitioning that can be used to significantly reduce MapReduce's communication overhead.
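The following Python sketch is only a toy illustration of the general idea of partitioning input data by intermediate-key affinity to reduce shuffle traffic; it is not the algorithm designed in the thesis, and the word-count map function and balance cap are invented for the example.

    from collections import defaultdict

    def partition_by_key_affinity(records, map_fn, n_partitions):
        """Greedy sketch: route each input record to the partition that already holds
        the most records sharing its intermediate keys, under a crude balance cap."""
        key_owner = {}                                  # intermediate key -> partition id
        loads = [0] * n_partitions
        partitions = [[] for _ in range(n_partitions)]
        cap = 2 * (len(records) // n_partitions + 1)    # crude balance limit

        for rec in records:
            keys = [k for k, _ in map_fn(rec)]          # keys this record will emit
            votes = defaultdict(int)
            for k in keys:
                if k in key_owner:
                    votes[key_owner[k]] += 1
            candidates = sorted(range(n_partitions),
                                key=lambda p: (-votes[p], loads[p]))
            target = next((p for p in candidates if loads[p] < cap),
                          loads.index(min(loads)))
            partitions[target].append(rec)
            loads[target] += 1
            for k in keys:
                key_owner.setdefault(k, target)
        return partitions

    def wc_map(line):                                   # toy word-count map function
        return [(w, 1) for w in line.split()]

    data = ["a b", "b c", "c d", "a d", "e f", "e g"]
    print(partition_by_key_affinity(data, wc_map, 2))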
Madera, Cedrine. "L’évolution des systèmes et architectures d’information sous l’influence des données massives : les lacs de données". Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS071/document.
Data is at the heart of the digital transformation. The consequence is an acceleration of the evolution of the information system, which must adapt; the big data phenomenon plays the role of a catalyst of this evolution. Under its influence a new component of the information system appears: the data lake. Far from replacing the decision support systems that make up the information system, data lakes complement the information system's architecture. First, we focus on the factors that influence the evolution of information systems, such as new software and middleware and new infrastructure technologies, but also the usage of the decision support system itself. Under the influence of big data, we study the impact that this entails, especially with the appearance of new technologies such as Apache Hadoop, as well as the current limits of the decision support system. The limits encountered by current decision support systems force a change to the information system, which must adapt, and that gives birth to a new component: the data lake. In a second step we study this new component in detail, formalize our definition, and give our point of view on its positioning in the information system as well as with regard to the decision support system. In addition, we highlight a factor influencing the architecture of data lakes: data gravity, drawing an analogy with the law of gravity and focusing on the factors that may influence the data-processing relationship. We show, through a use case, that taking data gravity into account can influence the design of a data lake. We complete this work by adapting the software product line approach to bootstrap a method for formalizing and modeling data lakes. This method allows us: to establish a minimum list of components to be put in place to operate a data lake without turning it into a data swamp; to evaluate the maturity of an existing data lake; to quickly diagnose the missing components of an existing data lake that has become a data swamp; and to conceptualize the creation of data lakes while remaining "software agnostic".
Barbier, Sébastien. "Visualisation distance temps-réel de grands volumes de données". Grenoble 1, 2009. http://www.theses.fr/2009GRE10155.
Numerical simulations produce ever larger meshes that can reach tens of millions of tetrahedra. These datasets must be visually analyzed to understand the simulated physical phenomenon and draw conclusions. The computational power available for scientific visualization of such datasets is often smaller than that used for the numerical simulation; as a consequence, interactive exploration of massive meshes is barely achievable. In this document, we propose a new method to interactively explore massive tetrahedral meshes with over forty million tetrahedra. This method is fully integrated into the simulation process and is based on two meshes at different resolutions, one fine mesh and one coarse mesh, of the same simulation. A partition of the fine vertices is computed, guided by the coarse mesh. It allows the on-the-fly extraction of a mesh, called biresolution, mixing the two initial resolutions as in usual multiresolution approaches. The extraction of such meshes is carried out in main memory (CPU), on the latest generation of graphics cards (GPU) and with an out-of-core algorithm; they guarantee extraction rates never reached in previous work. To visualize the biresolution meshes, a new direct volume rendering (DVR) algorithm is fully implemented on graphics cards. Approximations can be performed and are evaluated in order to guarantee interactive rendering of any biresolution mesh.
Peerbocus, Mohamed Ally. "Gestion de l'évolution spatiotemporelle dans une base de données géographiques". Paris 9, 2001. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2001PA090055.
Allouti, Faryel. "Visualisation dans les systèmes informatiques coopératifs". Paris 5, 2011. http://www.theses.fr/2011PA05S003.
Clustering techniques and visualization tools for complex data are two recurring themes in the Data Mining and Knowledge Management community. At the intersection of these two themes are visualization methods such as multidimensional scaling or Self-Organizing Maps (SOM). The SOM is constructed using the K-means algorithm, to which a notion of neighborhood is added, thereby preserving the topology of the data. Thus, learning brings closer together, in the data space, the centers that are neighbors on a (generally two-dimensional) grid, to form a discrete surface which represents the distribution of the cloud to be explored. In this thesis, we are interested in visualization in a cooperative context, where cooperation is established via asynchronous communication and the medium is e-mail. This tool emerged with the advent of information and communication technologies. It is widely used in organizations: it allows immediate and fast distribution of information to several people at the same time, regardless of whether they are present. Our objective was to propose a tool for visual exploration of textual data, namely the files attached to electronic messages. To this end, we combined clustering and visualization methods. We investigated the mixture approach, which is a very useful contribution to classification; in our context, we used the multinomial mixture model (Govaert and Nadif, 2007) to determine the classes of files. In addition, we studied the visualization of the obtained classes and documents using multidimensional scaling and DC (Difference of Convex functions), as well as Kohonen Self-Organizing Maps.
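A minimal sketch of the Self-Organizing Map idea described above (K-means-style updates plus a neighborhood kernel on a 2-D grid), written in Python with invented toy data; it is not the clustering pipeline of the thesis.

    import numpy as np

    def train_som(data, grid_h=5, grid_w=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
        """Minimal Self-Organizing Map: K-means-like updates plus a neighborhood
        function on a 2-D grid, so neighboring units end up with similar centers."""
        rng = np.random.default_rng(seed)
        dim = data.shape[1]
        weights = rng.normal(size=(grid_h, grid_w, dim))
        grid = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                    indexing="ij"), axis=-1)
        for epoch in range(epochs):
            lr = lr0 * (1 - epoch / epochs)
            sigma = sigma0 * (1 - epoch / epochs) + 1e-3
            for x in rng.permutation(data):
                dists = np.linalg.norm(weights - x, axis=-1)
                bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best matching unit
                grid_dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
                h = np.exp(-grid_dist2 / (2 * sigma ** 2))             # neighborhood kernel
                weights += lr * h[..., None] * (x - weights)
        return weights

    # Toy 2-D data: two clusters.
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
    print(train_som(data).shape)   # (5, 5, 2): a 5x5 grid of 2-D prototypes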
Lassoued, Yassine. "Médiation de qualité dans les systèmes d'information géographique". Aix-Marseille 1, 2005. http://www.theses.fr/2005AIX11027.
Texto completoGarnerin, Mahault. "Des données aux systèmes : étude des liens entre données d’apprentissage et biais de performance genrés dans les systèmes de reconnaissance automatique de la parole". Thesis, Université Grenoble Alpes, 2022. http://www.theses.fr/2022GRALL006.
Machine learning systems contribute to the reproduction of social inequalities, because of the data they use and for lack of critical approaches, thus feeding a discourse on the "biases of artificial intelligence". This thesis aims at contributing to collective thinking on the biases of automatic systems by investigating the existence of gender biases in automatic speech recognition (ASR) systems. Thinking critically about the impact of systems requires taking into account both the notion of bias (linked with the architecture of the system and its data) and that of discrimination, defined at the level of each country's legislation. A system is considered discriminatory when it makes a difference in treatment on the basis of criteria defined as breaking the social contract. In France, sex and gender identity are among the 23 criteria protected by law. Based on theoretical considerations on the notions of bias, and in particular on predictive (or performance) bias and selection bias, we propose a set of experiments to try to understand the links between selection bias in training data and predictive bias of the system. We base our work on the study of an HMM-DNN system trained on a French media corpus, and an end-to-end system trained on audiobooks in English. We observe that a significant gender selection bias in the training data contributes only partially to the predictive bias of the ASR system, but that the latter nevertheless emerges when the speech data contain different utterance situations and speaker roles. This work has also led us to question the representation of women in speech data, and more generally to rethink the links between theoretical conceptions of gender and ASR systems.
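As a purely illustrative sketch of the kind of per-group performance analysis discussed here (not the thesis's corpora or systems), the following Python code computes a word error rate per speaker group from hypothetical transcription pairs:

    def word_error_rate(ref, hyp):
        """Standard WER via Levenshtein distance on word sequences."""
        r, h = ref.split(), hyp.split()
        d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
        for i in range(len(r) + 1):
            d[i][0] = i
        for j in range(len(h) + 1):
            d[0][j] = j
        for i in range(1, len(r) + 1):
            for j in range(1, len(h) + 1):
                sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
                d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
        return d[len(r)][len(h)] / max(len(r), 1)

    def wer_by_group(utterances):
        """Aggregate WER per speaker group; utterances = [(group, reference, hypothesis)]."""
        totals = {}
        for group, ref, hyp in utterances:
            errs, words = totals.get(group, (0.0, 0))
            n = len(ref.split())
            totals[group] = (errs + word_error_rate(ref, hyp) * n, words + n)
        return {g: e / w for g, (e, w) in totals.items()}

    # Hypothetical toy data, only to show the shape of a per-group analysis.
    data = [("F", "bonjour à tous", "bonjour à tous"),
            ("F", "il fait beau", "il fait chaud"),
            ("M", "merci beaucoup", "merci beaucoup"),
            ("M", "à demain", "demain")]
    print(wer_by_group(data))   # F: 1 error / 6 words, M: 1 error / 4 words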
Hajji, Hicham. "Gestion des risques naturels : une approche fondée sur l'intégration des données". Lyon, INSA, 2005. http://theses.insa-lyon.fr/publication/2005ISAL0039/these.pdf.
There is a huge amount of geographic data available, with many organizations having collected geographic data for centuries, but some of it is still in the form of paper maps or traditional files or databases; with the emergence of the latest technologies in software and data storage, some of it has been digitized and is stored in modern GIS systems. However, too often its reuse for new applications is a nightmare, due to the diversity of data sets and the heterogeneity of existing systems in terms of data modeling concepts, data encoding techniques, obscure data semantics, storage structures, access functionality, etc. Such difficulties are even more common in natural hazards information systems. In order to support advanced natural hazards management based on heterogeneous data, this thesis develops a new approach to the integration of semantically heterogeneous geographic information which is capable of addressing the spatial and thematic aspects of geographic information. The approach is based on the OpenGIS standard, which it uses as a common model for data integration. The proposed methodology takes into consideration a large number of the aspects involved in the construction and modelling of a natural hazards management information system. Another issue addressed in this thesis is the design of an ontology for natural hazards. Ontology design has been extensively studied in recent years; throughout this work we have tried to propose an ontology to deal with the semantic heterogeneity existing between different actors and to model the existing knowledge for this issue. The ontology contains the main concepts and the relationships between these concepts, expressed using the OWL language.
Kaplan, Stéphane. "Spécification algébrique de types de données à accès concurrent". Paris 11, 1987. http://www.theses.fr/1987PA112335.
Saidi, Selma. "Optimisation des transferts de données sur systèmes multiprocesseurs sur puce". Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00875582.
Siriopoulos, Costas-Panou. "Essai sur les systèmes experts et l'analyse confirmatoire des données". Aix-Marseille 3, 1988. http://www.theses.fr/1988AIX32010.
A) Artificial intelligence in statistical analysis: the key use of A.I. in statistics has been to enable statisticians to study strategies of data analysis. The existence today of a large number of statistical packages poses problems for potential users. Incorporating expert guidance in statistical software is technically challenging but a worthwhile undertaking. Different systems and projects have been suggested; recently, M. Egea and J. P. Marciano proposed the A.I.D.A. project in C.D.A. We consider the autocorrelation problem in a multiple linear model and propose a typology to study the required statistical knowledge. We also propose a corpus of 30 rules of thumb and 5 meta-rules. We conclude with a possible strategy for the detection and correction of the problem, in the form of a hierarchical tree in which each node is represented by a frame. B) Statistics in artificial intelligence: expert systems work with propositions that may be uncertain. Accepting a probabilistic nature of uncertainty, we have to make a crucial assumption, namely the additivity axiom. Once this assumption is dropped, we have other ways of characterizing uncertainty, in particular possibilistic uncertainty based on fuzzy sets.
Pradel, Bruno. "Evaluation des systèmes de recommandation à partir d'historiques de données". Paris 6, 2013. http://www.theses.fr/2013PA066263.
This thesis presents various experimental protocols leading to a better offline estimation of errors in recommender systems. As a first contribution, results from a case study of a recommender system based on purchase data are presented. Recommending items is a complex task that has mainly been studied considering ratings data only. In this study, we put the stress on predicting the purchase a customer will make rather than the rating he will assign to an item. While ratings data are not available in many industries and purchase data are widely used, very few studies have considered purchase data. In that setting, we compare the performances of various collaborative filtering models from the literature. We notably show that some changes in the training and testing phases, and the introduction of contextual information, lead to major changes in the relative performances of the algorithms. The following contributions focus on the study of ratings data. A second contribution presents our participation in the Challenge on Context-Aware Movie Recommendation. This challenge introduces two major changes in the standard rating prediction protocol: models are evaluated using rating metrics and tested on two specific periods of the year: Christmas and the Oscars. We provide personalized recommendations by modeling the short-term evolution of movie popularities. Finally, we study the impact of the observation process of ratings on ranking evaluation metrics. Users choose the items they want to rate and, as a result, ratings on items are not observed at random: first, some items receive many more ratings than others and, secondly, high ratings are more likely to be observed than poor ones because users mainly rate the items they like. We propose a formal analysis of these effects on evaluation metrics and experiments on the Yahoo! Music dataset, gathering standard and randomly collected ratings. We show that considering missing ratings as negative during the training phase leads to good performance on the Top-K task, but this performance can be misleading, favoring methods that model the popularity of items more than the real tastes of users.
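The following Python sketch illustrates one of the evaluation issues discussed above: a precision-at-K computation on toy data in which users tend to rate popular items, so a pure popularity ranking already looks better than random. The data and baselines are invented; this is not the thesis's experimental protocol.

    import numpy as np

    def precision_at_k(scores, test_positives, train_mask, k=10):
        """Average precision@k: rank unseen items by score and count how many of the
        top-k are test positives. Items seen in training are excluded from the ranking."""
        hits, n_users = 0, scores.shape[0]
        for u in range(n_users):
            s = scores[u].copy()
            s[train_mask[u]] = -np.inf                   # never re-recommend seen items
            topk = np.argpartition(-s, k)[:k]
            hits += np.isin(topk, test_positives[u]).sum()
        return hits / (n_users * k)

    rng = np.random.default_rng(0)
    n_users, n_items = 50, 200
    train_mask = rng.random((n_users, n_items)) < 0.05
    popularity = train_mask.sum(axis=0).astype(float)
    probs = (popularity + 1) / (popularity + 1).sum()    # users rate popular items more often
    test_positives = [rng.choice(n_items, size=5, replace=False, p=probs)
                      for _ in range(n_users)]
    print("popularity:", precision_at_k(np.tile(popularity, (n_users, 1)),
                                        test_positives, train_mask))
    print("random    :", precision_at_k(rng.random((n_users, n_items)),
                                        test_positives, train_mask))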
Akbarinia, Reza. "Techniques d'accès aux données dans des systèmes pair-à-pair". Nantes, 2007. http://www.theses.fr/2007NANT2060.
The goal of this thesis is to contribute to the development of new data access techniques for query processing services in P2P environments. We focus on novel techniques for two important kinds of queries: queries with currency guarantees and top-k queries. To improve data availability, most P2P systems rely on data replication, but without currency guarantees. However, for many applications which could take advantage of a P2P system (e.g. agenda management), the ability to get current data is very important. To support these applications, the query processing service must be able to efficiently detect and retrieve a current, i.e. up-to-date, replica in response to a user requesting a data item. The second problem we address is supporting top-k queries, which are very useful in large-scale P2P systems, e.g. they can reduce network traffic significantly. However, efficient execution of these queries is very difficult in P2P systems because of their special characteristics, in particular in DHTs. In this thesis, we first survey the techniques that have been proposed for query processing in P2P systems; we give an overview of the existing P2P networks and compare their properties from the perspective of query processing. Second, we propose a complete solution to the problem of current data retrieval in DHTs: a service called Update Management Service (UMS) which deals with updating replicated data and with the efficient retrieval of current replicas based on timestamping. Third, we propose novel solutions for top-k query processing in structured (i.e. DHT-based) and unstructured P2P systems. We also propose new algorithms for top-k query processing over sorted lists, which is a general model for top-k queries in many centralized, distributed and P2P systems, especially in super-peer networks. We validated our solutions through a combination of implementation and simulation, and the results show very good performance in terms of communication and response time.
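As background for "top-k query processing over sorted lists", here is the classical Threshold Algorithm (Fagin, Lotem and Naor) in a minimal Python form, with toy lists and sum as the scoring function; the algorithms proposed in the thesis are not reproduced here.

    import heapq

    def threshold_algorithm(sorted_lists, k, agg=sum):
        """Classical Threshold Algorithm over m lists sorted by descending local score.
        Each list is [(item, score), ...]; random access is simulated with dicts."""
        random_access = [dict(lst) for lst in sorted_lists]
        best = {}                                    # item -> aggregated score
        topk = []                                    # min-heap of (agg_score, item)
        for depth in range(max(len(lst) for lst in sorted_lists)):
            last_scores = []
            for lst in sorted_lists:
                if depth >= len(lst):
                    continue
                item, score = lst[depth]
                last_scores.append(score)
                if item not in best:
                    best[item] = agg(ra.get(item, 0.0) for ra in random_access)
                    heapq.heappush(topk, (best[item], item))
                    if len(topk) > k:
                        heapq.heappop(topk)
            threshold = agg(last_scores)             # best score any unseen item can reach
            if len(topk) == k and topk[0][0] >= threshold:
                break                                # early stop: the top-k is final
        return sorted(topk, reverse=True)

    # Toy example: two ranked lists (hypothetical scores).
    l1 = [("a", 0.9), ("b", 0.8), ("c", 0.5), ("d", 0.2)]
    l2 = [("b", 0.95), ("d", 0.7), ("a", 0.6), ("c", 0.1)]
    print(threshold_algorithm([l1, l2], k=2))   # [(1.75, 'b'), (1.5, 'a')]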
Alustwani, Husam. "Interactivité et disponibilité des données dans les systèmes multimédias distribués". Besançon, 2009. http://www.theses.fr/2009BESA2041.
The work in this thesis has been guided by two problems: (a) how to efficiently support fast browsing interactions in streamed multimedia presentations and (b) how to enhance data availability in pure P2P streaming systems? In order to enable quick browsing within streamed multimedia presentations, we proposed an approach that takes full advantage of object multiplicity in a multimedia presentation. Our approach allows, among other features, preserving the semantics of the presentation when a fast browsing interaction occurs. We then studied the performance of our approach through the proposal of a Content-Based Prefetching Strategy, called CPS. Our strategy considerably reduces the latency of a new interaction, that is, the response time of a fast browsing action. Data availability in P2P streaming systems differs fundamentally from that observed in classical systems, in the sense that the use of the data is time-dependent. Thus, this problem arises in terms of the opportunity for a peer (consumer) to receive a video content entirely, that is, to be able to watch the content to its end. However, spontaneous P2P systems are mainly characterised by the volatility of the peers. The unpredictable departure of peers poses the problem of the availability of peers that serve as streaming sources. We studied this problem by setting up a centralised caching mechanism to reduce the effects of peer departures and by replicating only the suffixes (last parts) of the videos being accessed. In a second step, we extended our approach towards a distributed virtual cache. The simulation results showed the relevance of the proposed approaches. Finally, we describe the design and implementation of a prototype that demonstrates the feasibility of a spontaneous P2P streaming system.
Saint-Joan, Denis. "Données géographiques et raisonnement : le système GEODES". Toulouse 3, 1995. http://www.theses.fr/1995TOU30179.
Texto completoBazin, Cyril. "Tatouage de données géographiques et généralisation aux données devant préserver des contraintes". Caen, 2010. http://www.theses.fr/2010CAEN2006.
Digital watermarking is a fundamental process for intellectual property protection. It consists in inserting a mark into a digital document by means of slight modifications. The presence of this mark allows the owner of a document to prove the priority of his rights. The originality of our work is twofold. On the one hand, we use a local approach to ensure a priori that the quality of constrained documents is preserved during watermark insertion. On the other hand, we propose a generic watermarking scheme. The manuscript is divided into three parts. Firstly, we introduce the basic concepts of digital watermarking for constrained data and the state of the art of geographical data watermarking. Secondly, we present our watermarking scheme for the digital vector maps often used in geographic information systems. This scheme preserves some topological and metric qualities of the document. The watermark is robust: it is resilient against geometric transformations and cropping. We give an efficient implementation that is validated by many experiments. Finally, we propose a generalization of the scheme for constrained data. This generic scheme will facilitate the design of watermarking schemes for new data types. We give a particular example of the application of the generic scheme to relational databases. In order to prove that it is possible to work directly on the generic scheme, we propose two detection protocols directly applicable to any implementation of the generic scheme.
Fénié, Patrick. "Graico : méthode de modélisation et de conception de systèmes d'exploitation de systèmes de production". Bordeaux 1, 1994. http://www.theses.fr/1994BOR10622.
Tahir, Hassane. "Aide à la contextualisation de l'administration de base de données". Paris 6, 2013. http://www.theses.fr/2013PA066789.
The complexity of database administration tasks requires the development of tools to support database experts. When problems occur, the database administrator (DBA) is frequently the first person blamed. Most DBAs work in a fire-fighting mode and have little opportunity to be proactive. They must be constantly ready to analyze and correct failures based on a large set of procedures. In addition, they continually readjust these procedures and develop practices to manage a multitude of specific situations that differ from the generic situation by a few contextual elements. These practices have to deal with these contextual elements in order to solve the problem at hand. This thesis aims to use the Contextual Graphs formalism in order to improve existing procedures used in database administration. The thesis also shows the benefits of using Contextual Graphs to capture user practices so that they can be reused in working contexts. Up to now, this improvement has been achieved by a DBA through practices that adapt procedures to the context in which tasks should be performed and incidents appear. This work will be the basis for designing and implementing a Context-Based Intelligent Assistant System (CBIAS) for supporting DBAs.
Heba, Nurja Ines. "Contributions à l'analyse statistique et économétrique des données géoréférencées". Toulouse 1, 2005. http://www.theses.fr/2005TOU10048.
Spatial analysis is a research topic that can develop the exploration capacity of geographical information systems. We study different aspects of georeferenced data modelling. 1) We build a toolbox called GEOXP (using Matlab), organized as statistical functions offering exploratory analysis of georeferenced data with a spatial dimension; these functions use statistical tools adapted to spatial data. 2) We study the theoretical context of a spatial analysis of real estate data in an urban environment to prepare empirical work. This study leads us to new research on weight matrix choice in spatial regression models, and we offer a new type of weight matrix built on location density. 3) We build a new methodology to classify data flows between geographical units by using two matrices, one describing the flows themselves and the other describing their neighbourhood relations.
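As context for the weight-matrix discussion, here is a standard k-nearest-neighbour spatial weight matrix in Python (the usual baseline, with invented coordinates); the density-based matrix proposed in the thesis is not reproduced here.

    import numpy as np

    def knn_weight_matrix(coords, k=4, row_standardize=True):
        """Standard k-nearest-neighbour spatial weight matrix: binary neighbour links,
        optionally row-standardized so each row sums to one."""
        coords = np.asarray(coords, dtype=float)
        n = len(coords)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)                 # no self-neighbours
        W = np.zeros((n, n))
        for i in range(n):
            W[i, np.argsort(d[i])[:k]] = 1.0
        if row_standardize:
            W /= W.sum(axis=1, keepdims=True)
        return W

    # Hypothetical coordinates of observation points.
    rng = np.random.default_rng(0)
    pts = rng.random((10, 2))
    W = knn_weight_matrix(pts, k=3)
    print(W.sum(axis=1))   # each row sums to 1 after standardization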
Coupaye, Thierry. "Un modèle d'exécution paramétrique pour systèmes de bases de données actifs". Phd thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00004983.
Walwer, Damian. "Dynamique non linéaire des systèmes volcaniques à partir des données géodésiques". Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE004/document.
We study the use of multichannel singular spectrum analysis (M-SSA) on GPS time series. This method allows a set of time series to be analyzed simultaneously in order to extract common modes of variability without any a priori assumption on the temporal or spatial structure of the geophysical fields. The extracted modes correspond to nonlinear trends, oscillations or noise. The method is applied to a set of GPS time series recorded at Akutan, a volcano located in the Aleutian arc in Alaska. Two types of signals are extracted: the first corresponds to seasonal deformations, and the other represents two successive cycles of inflation and subsidence of Akutan volcano. The inflations are fast and short and are followed by deflations that are slower and longer. In the second part we take advantage of M-SSA to analyze GPS time series recorded at several volcanoes. Okmok and Shishaldin in Alaska and Piton de la Fournaise in La Réunion have a part of their deformation history that is similar to that of Akutan volcano. The cyclic nature of the observed deformations leads us to make an analogy between the oscillatory regime of a simple nonlinear oscillator and the deformation cycles of these volcanoes. Geochemical, petrological and geophysical data available for Okmok and Piton de la Fournaise, combined with the constraints on the qualitative dynamics brought by the nonlinear oscillator, allow us to propose a physical model: two shallow reservoirs are connected by a cylindrical conduit in which the magma has a temperature-dependent viscosity. Such a system behaves like the nonlinear oscillator mentioned above. When the temperature gradient inside the conduit is large enough and the flux of magma entering the shallow system is bounded by values that can be determined analytically, a nonlinear oscillatory regime arises.
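A simplified, single-series sketch of the singular spectrum analysis idea (trajectory matrix, SVD, diagonal averaging) on an invented trend-plus-seasonal signal; the multichannel variant (M-SSA) actually used in the thesis stacks several series and is not shown here.

    import numpy as np

    def ssa_components(x, window, n_components=3):
        """Single-channel SSA: embed the series in a trajectory (Hankel) matrix,
        take its SVD, and reconstruct each leading component by diagonal averaging."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        K = n - window + 1
        traj = np.column_stack([x[i:i + window] for i in range(K)])   # window x K
        U, s, Vt = np.linalg.svd(traj, full_matrices=False)
        comps = []
        for j in range(n_components):
            elem = s[j] * np.outer(U[:, j], Vt[j])                    # rank-1 piece
            rec = np.array([np.mean(np.diag(elem[:, ::-1], off))      # diagonal averaging
                            for off in range(K - 1, -window, -1)])
            comps.append(rec)
        return np.array(comps)

    # Toy series: trend + seasonal cycle + noise (stand-in for a GPS coordinate series).
    t = np.arange(500)
    x = (0.01 * t + np.sin(2 * np.pi * t / 50)
         + 0.2 * np.random.default_rng(0).normal(size=t.size))
    print(ssa_components(x, window=100).shape)   # (3, 500): leading trend/oscillation modes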
Keller, Jean-Yves. "Contribution a la validation de données des systèmes statiques et dynamiques". Nancy 1, 1991. http://www.theses.fr/1991NAN10201.
Beaudenon, Vincent. "Diagrammes de décision de données pour la vérification de systèmes matériels". Paris 6, 2006. http://www.theses.fr/2006PA066337.
Boumediene, Mohamed Salah. "Définition d'un système générique de partage de données entre systèmes existants". Lyon, INSA, 2005. http://theses.insa-lyon.fr/publication/2005ISAL0125/these.pdf.
My thesis deals with database integration problems and the confidentiality of exchanged data; in particular, it aims to solve the problems related to mediator schema creation. We proposed a solution which generates a global view of the different databases while considerably reducing manual intervention. To achieve this, we first describe each schema using ontological terms. This description creates, for each database, an XML file which is then used to create the mediator schema and the matching rules. To exploit the mediator schema, we created a mediator that allows users to query the different databases through the global view. To lighten the data input process, we used the DRUID system, which allows users to input their data in the form of files which are then processed to populate the databases. To handle the confidentiality of data entry and access, we proposed the use of document models (DTDs) for each type of user profile, whether for writing or reading files. These DTDs are generated automatically from the database schema and then modified for each user type according to their rights on the database. Our solution was applied in the medical domain through the consultation of a distributed medical record.
Estrada, Garcia Hector Javier. "Commande de systèmes mécaniques avec retards dans la transmission de données". Nantes, 2008. http://www.theses.fr/2008NANT2063.
In this thesis, the development of a synchronization technique for dynamic systems with delays in the communication channel is investigated. It is assumed that the delays may be large but bounded. We study the problem of synchronizing two distant mechanical devices (one in Nantes, France, the other in Ensenada, Mexico) interconnected through a network. The mechanical system is underactuated and consists of an inverted pendulum linked to a transversal beam through a prismatic joint. The contributions of this thesis generalize the synchronization results available in the current literature.
Bard, Sylvain. "Méthode d'évaluation de la qualité de données géographiques généralisées : application aux données urbaines". Paris 6, 2004. http://www.theses.fr/2004PA066004.
Bellosta, Marie-Jo. "Systèmes d'interfaces pour la gestion d'objets persistants, Omnis". Paris 6, 1992. http://www.theses.fr/1992PA066034.
Postoyan, Romain. "Commande et construction d’observateurs pour des systèmes non linéaires incertains à données échantillonnées et en réseau". Paris 11, 2009. http://www.theses.fr/2009PA112163.
The rise of digital technologies has promoted the development of new controller implementations that have many advantages compared to traditional control structures. Indeed, digital controllers have become very popular due to their low cost and great flexibility in comparison with analog controllers. The implementation of control structures via a network also offers a new point of view: networks are generally easier to use and maintain than point-to-point wiring, and they allow one to significantly reduce data exchanges and, as a consequence, the energy cost. However, the induced communication constraints can have a significant impact on the dynamical behaviour of the system. In this thesis, we first propose adaptive and robust stabilisation methods for classes of nonlinear sampled-data systems affected by uncertainties; the main objective is to improve closed-loop performance compared to the emulation of a continuous-time control law. When data exchanges are sampled and time-scheduled via a network, we have developed a framework for observer design by emulation. It is shown that various observer designs (linear, high-gain, circle-criterion) and various network configurations fit our framework.
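A toy Python sketch of the "emulation" idea mentioned above: a control law designed in continuous time is applied only at sampling instants and held constant in between (zero-order hold). The plant, gains and sampling period are invented for the example and do not come from the thesis.

    import numpy as np

    def simulate_emulation(T_sample=0.05, T_final=5.0, dt=1e-3, x0=1.5):
        """Sampled-data emulation: the plant dx/dt = x^2 + u evolves continuously
        (small-step integration), while the feedback u = -x^2 - 2x is computed only
        every T_sample seconds and held constant in between (zero-order hold)."""
        steps = int(T_final / dt)
        x, u, t_last = x0, 0.0, -np.inf
        xs = []
        for k in range(steps):
            t = k * dt
            if t - t_last >= T_sample:          # controller update at sampling instants
                u = -x ** 2 - 2.0 * x           # continuous-time design, applied via ZOH
                t_last = t
            x += dt * (x ** 2 + u)              # Euler step of the plant
            xs.append(x)
        return np.array(xs)

    traj = simulate_emulation(T_sample=0.05)
    print(abs(traj[-1]) < 1e-2)   # True: the emulated controller still stabilizes the origin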
Toumani, Farouk. "Le raisonnement taxinomique dans les modèles conceptuels de données : application à la retro-conception des bases de données relationnelles". Lyon, INSA, 1997. http://www.theses.fr/1997ISAL0051.
Terminological logics, as modern knowledge representation formalisms, are acknowledged to be one of the most promising artificial intelligence techniques for database applications. They allow the development of new data models equipped with taxonomic reasoning abilities. However, these languages turned out to be inadequate in the conceptual modelling area, where the emphasis must be put on the accurate and natural description of the universe of discourse. In this work, we first examine the features of terminological logics with respect to the requirements of conceptual modelling. We show that terminological logics do not support the direct modelling requirement and that constructs in these formalisms are semantically overloaded. Then we propose a model, as a formalization of a semantic model, namely an Entity-Relationship (E/R) model, using terminological logics. We demonstrate that E/R schemas and M schemas are equivalent with respect to their information capacity measure. This result ensures that reasoning on an E/R schema can be reduced to reasoning on its equivalent M schemas. As an application of this work, we propose its use in a relational database reverse engineering process in order to support the automatic construction and enrichment of conceptual schemas and to maintain their correctness (consistency and minimality).
Rannou, Éric. "Modélisation explicative de connaissances à partir de données". Toulouse 3, 1998. http://www.theses.fr/1998TOU30290.
Kerhervé, Brigitte. "Vues relationnelles : implantation dans les systèmes de gestion de bases de données centralisés et répartis". Paris 6, 1986. http://www.theses.fr/1986PA066090.
Thièvre, Jérôme. "Cartographies pour la recherche et l'exploration de données documentaires". Montpellier 2, 2006. http://www.theses.fr/2006MON20115.
This thesis builds on information visualization techniques to explore and analyze documentary data. Two representations are studied from a theoretical and practical point of view: Venn-Euler diagrams and node-link diagrams. Venn-Euler diagrams are set-based representations; we use them as a graphical formulation interface for Boolean queries. Each diagram can also be seen as a map of the document base which provides information on its content and feedback on the quality of the search keywords. Node-link diagrams are used to visualize graphs. We studied layout, filtering and graphical encoding methods applicable to this kind of diagram. We designed a graph visualization API which allows us to evaluate the properties of various force models, such as the classics from Fruchterman-Reingold and Eades, or the visual clustering models from Noack. We implemented several filtering algorithms in order to enhance the readability of diagrams while controlling the loss of information. Graphical encoding is the use of various visual display elements, such as color, size and shape, to map data attributes. Customization of graphical encoding allows users to bring their objects of interest to the foreground within the visualization. The combination of these methods provides us with solutions to create interactive and customizable displays which are particularly useful for the exploration and visual analysis of various real complex graphs, such as web pages and bibliographical and documentary data networks.
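A minimal Python sketch of the Fruchterman-Reingold force model cited above (pairwise repulsion, attraction along edges, cooling temperature), run on an invented six-node graph; it is only a baseline illustration, not the visualization API developed in the thesis.

    import numpy as np

    def fruchterman_reingold(adj, iters=200, seed=0):
        """Minimal Fruchterman-Reingold layout: repulsion between all node pairs,
        attraction along edges, with a cooling temperature. adj is a symmetric 0/1 matrix."""
        rng = np.random.default_rng(seed)
        n = adj.shape[0]
        pos = rng.random((n, 2))
        k = np.sqrt(1.0 / n)                       # ideal edge length in the unit square
        t = 0.1                                    # initial "temperature"
        for _ in range(iters):
            delta = pos[:, None, :] - pos[None, :, :]
            dist = np.linalg.norm(delta, axis=-1) + 1e-9
            rep = (k ** 2 / dist ** 2)[..., None] * delta          # repulsive forces
            attr = (adj * dist / k)[..., None] * (-delta)          # attraction on edges
            disp = (rep + attr).sum(axis=1)
            length = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
            pos += disp / length * np.minimum(length, t)           # capped displacement
            t *= 0.99                                              # cool down
        return pos

    # Toy graph: two triangles joined by one edge.
    adj = np.zeros((6, 6))
    for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
        adj[a, b] = adj[b, a] = 1
    print(fruchterman_reingold(adj).round(2))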
Fernandez, Conception. "Modélisation des systèmes d'exploitation par HBDS". Paris 6, 1988. http://www.theses.fr/1988PA066235.
Lamenza, Catalina A. "Organisation physique des bases de données pour les champs continus". Lyon 1, 2003. http://www.theses.fr/2003LYO10191.