Dissertations / Theses on the topic 'Données Transactionnelles'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 23 dissertations / theses for your research on the topic 'Données Transactionnelles.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Crain, Tyler. "Faciliter l'utilisation des mémoires transactionnelles logicielles." PhD thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00861274.
Kirchgessner, Martin. "Fouille et classement d'ensembles fermés dans des données transactionnelles de grande échelle." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM060/document.
The recent increase of data volumes raises new challenges for itemset mining algorithms. In this thesis, we focus on transactional datasets (collections of item sets, for example supermarket tickets) containing at least a million transactions over hundreds of thousands of items. These datasets usually follow a "long tail" distribution: a few items are very frequent, and most items appear rarely. Such distributions are often truncated by existing itemset mining algorithms, whose results concern only a very small portion of the available items (usually the most frequent). Thus, existing methods fail to concisely provide relevant insights on large datasets. We therefore introduce a new semantics which is more intuitive for the analyst: browsing associations per item, for any item, and less than a hundred associations at once.

To address the items' coverage challenge, our first contribution is the item-centric mining problem. It consists in computing, for each item in the dataset, the k most frequent closed itemsets containing this item. We present an algorithm to solve it, TopPI. We show that TopPI efficiently computes interesting results over our datasets, outperforming simpler solutions or emulations based on existing algorithms, both in terms of run time and result completeness. We also show and empirically validate how TopPI can be parallelized, on multi-core machines and on Hadoop clusters, in order to speed up computation on large-scale datasets.

Our second contribution is CAPA, a framework allowing us to study which existing measures of association rules' quality are relevant for ranking results. This concerns results obtained from TopPI or from jLCM, our implementation of a state-of-the-art frequent closed itemsets mining algorithm (LCM). Our quantitative study shows that the 39 quality measures we compare can be grouped into 5 families, based on the similarity of the rankings they produce. We also involve marketing experts in a qualitative study, in order to discover which of the 5 families we propose highlights the most interesting associations for their domain.

Our close collaboration with Intermarché, one of our industrial partners in the Datalyse project, allows us to show extensive experiments on real, nation-wide supermarket data. We present a complete analytics workflow addressing this use case. We also experiment on Web data. Our contributions can be relevant in various other fields, thanks to the genericity of transactional datasets. Altogether, our contributions allow analysts to discover associations of interest in modern datasets. We pave the way for a more reactive discovery of items' associations in large-scale datasets, whether on highly dynamic data or for interactive exploration systems.
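The item-centric semantics described in this abstract can be illustrated with a small brute-force sketch in Python (an illustration only, not the actual TopPI algorithm, which relies on heavy pruning and parallelism; all names below are hypothetical): for every item, keep the k most frequent closed itemsets that contain it.

```python
from collections import defaultdict
from itertools import combinations

def closed_itemsets(transactions):
    """Brute-force enumeration of closed itemsets with their support.
    An itemset is closed if no strict superset has the same support."""
    items = sorted({i for t in transactions for i in t})
    support = {}
    for r in range(1, len(items) + 1):
        for cand in combinations(items, r):
            s = frozenset(cand)
            sup = sum(1 for t in transactions if s <= t)
            if sup > 0:
                support[s] = sup
    return {s: sup for s, sup in support.items()
            if not any(s < t and sup == support[t] for t in support)}

def item_centric_topk(transactions, k):
    """For each item, return the k most frequent closed itemsets
    containing that item, as (support, sorted_itemset) pairs."""
    per_item = defaultdict(list)
    for s, sup in closed_itemsets(transactions).items():
        for item in s:
            per_item[item].append((sup, sorted(s)))
    return {i: sorted(v, reverse=True)[:k] for i, v in per_item.items()}

# Four toy supermarket tickets.
tickets = [frozenset(t) for t in
           [{"bread", "milk"}, {"bread", "butter"},
            {"bread", "milk", "butter"}, {"beer"}]]
print(item_centric_topk(tickets, 2)["milk"])  # most frequent first
```

Note how the per-item view answers "what goes with milk?" even though "milk" is far less frequent than "bread" — exactly the long-tail coverage argument made above.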
Alchicha, Élie. "Confidentialité Différentielle et Blowfish appliquées sur des bases de données graphiques, transactionnelles et images." Thesis, Pau, 2021. http://www.theses.fr/2021PAUU3067.
Digital data plays a crucial role in our daily life: communicating, saving information, expressing our thoughts and opinions, and capturing our precious moments as digital pictures and videos. Digital data has enormous benefits in all aspects of modern life, but it also poses a threat to our privacy. In this thesis, we consider three types of online digital data generated by users of social media and e-commerce customers: graphs, transactional data, and images. The graphs are records of the interactions between users that help companies understand who the influential users are in their surroundings. The photos posted on social networks are an important source of data that requires effort to extract. The transactional datasets represent the operations that occurred on e-commerce services.

We rely on a privacy-preserving technique called Differential Privacy (DP) and its generalization, Blowfish Privacy (BP), to propose several solutions that let data owners benefit from their datasets without the risk of a privacy breach that could lead to legal issues. These techniques are based on the idea of hiding the existence or non-existence of any element in the dataset (tuple, row, edge, node, image, vector, ...) by adding small noise to the output, providing a good balance between privacy and utility.

In the first use case, we focus on graphs by proposing three different mechanisms to protect users' personal data before analyzing the datasets. For the first mechanism, we present a scenario to protect the connections between users (the edges in the graph) with a new approach where users have different privileges: VIP users need a higher level of privacy than standard users. The scenario for the second mechanism is centered on protecting a group of people (subgraphs) instead of nodes or edges, in a more advanced type of graph called a dynamic graph, where the nodes and the edges may change in each time interval. In the third scenario, we keep focusing on dynamic graphs, but this time the adversaries are more aggressive than in the past two scenarios, as they plant fake accounts in the dynamic graphs to connect to honest users and try to reveal their representative nodes in the graph.

In the second use case, we contribute to the domain of transactional data by presenting an existing mechanism called Safe Grouping. It relies on grouping the tuples in such a way as to hide the correlations between them that an adversary could use to breach the privacy of users. On the other hand, these correlations are important for data owners when analyzing the data to understand who might be interested in similar products, goods or services. For this reason, we propose a new mechanism that exposes these correlations in such datasets, and we prove that the level of privacy it provides is similar to that of Safe Grouping.

The third use case concerns the images posted by users on social networks. We propose a privacy-preserving mechanism that allows data owners to classify the elements in photos without revealing sensitive information. We present a scenario of extracting the sentiments on faces while preventing adversaries from recognizing the identity of the persons. For each use case, we present the results of experiments showing that our algorithms provide a good balance between privacy and utility, and that they outperform existing solutions on at least one of these two criteria.
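As background for the DP-based mechanisms this abstract mentions, the classic Laplace mechanism can be sketched in a few lines (a generic textbook illustration, not one of the thesis's mechanisms): a count query has sensitivity 1, so adding Laplace noise of scale 1/ε makes it ε-differentially private.

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = rng.random() - 0.5            # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    """epsilon-DP count query: a count has sensitivity 1 (adding or
    removing one record changes it by at most 1), so Laplace(1/epsilon)
    noise on the true count suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical e-commerce records: how many customers bought beer?
purchases = [{"user": u, "item": "beer"} for u in range(40)]
print(private_count(purchases, lambda r: r["item"] == "beer", epsilon=0.5))
```

Smaller ε means larger noise scale and thus stronger privacy but lower utility — the trade-off the abstract refers to.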
Amo, Sandra De. "Contraintes dynamiques et schémas transactionnels." Paris 13, 1995. http://www.theses.fr/1995PA132002.
Bogo, Gilles. "Conception d'applications pour systèmes transactionnels coopérants." S.l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00315574.
Bogo, Gilles. "Conception d'applications pour systèmes transactionnels coopérants." Habilitation à diriger des recherches, Grenoble INPG, 1985. http://tel.archives-ouvertes.fr/tel-00315574.
Fritzke Jr., Udo. "Les systèmes transactionnels répartis pour données dupliquées fondés sur la communication de groupes." Rennes 1, 2001. http://www.theses.fr/2001REN10002.
Fournié, Laurent Henri. "Stockage et manipulation transactionnels dans une base de données déductives à objets : techniques et performances." Versailles-St Quentin en Yvelines, 1998. http://www.theses.fr/1998VERS0017.
Billard, David. "La reprise dans les systèmes transactionnels exploitant la sémantique des opérations typées." Montpellier 2, 1995. http://www.theses.fr/1995MON20056.
Malta, Carmelo. "Les systèmes transactionnels pour environnements d'objets : principes et mise en oeuvre." Montpellier 2, 1993. http://www.theses.fr/1993MON20154.
Full textKanellou, Eleni. "Data structures for current multi-core and future many-core architectures." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S171/document.
Though a majority of current processor architectures rely on shared, cache-coherent memory, current prototypes that integrate large numbers of cores connected through a message-passing substrate indicate that architectures of the near future may have these characteristics. Either of these tendencies requires that processes execute in parallel, making concurrent programming a necessary tool. The inherent difficulty of reasoning about concurrency, however, may make the new processor architectures hard to program. In order to deal with such issues, we explore approaches for providing ease of programmability.

We propose WFR-TM, an approach based on transactional memory (TM), a concurrent programming paradigm that employs transactions in order to synchronize access to shared data. A transaction may either commit, making its updates visible, or abort, discarding its updates. WFR-TM combines desirable characteristics of pessimistic and optimistic TM. In a pessimistic TM, no transaction ever aborts; however, in order to achieve that, existing TM algorithms employ locks to execute update transactions sequentially, decreasing the degree of achieved parallelism. Optimistic TMs execute all transactions concurrently but commit them only if they have encountered no conflict during their execution. WFR-TM provides read-only transactions that are wait-free, without ever executing expensive synchronization operations (like CAS, LL/SC, etc.) and without sacrificing the parallelism between update transactions.

We further present Dense, a concurrent graph implementation. Graphs are versatile data structures that allow the implementation of a variety of applications. However, multi-process applications that rely on graphs still largely use sequential implementations. We introduce an innovative concurrent graph model that provides addition and removal of any edge of the graph, as well as atomic traversals of a part (or the entirety) of the graph. Dense achieves wait-freedom by relying on light-weight helping and provides the built-in capability of performing a partial snapshot on a dynamically determined subset of the graph.

We finally aim at predicted future architectures. In the interest of code reuse and of a common paradigm, there is recent momentum towards porting software runtime environments, originally intended for shared-memory settings, onto non-cache-coherent machines. The JVM, the runtime environment of the high-productivity language Java, is a notable example. Concurrent data structure implementations are important components of the libraries that such environments incorporate. With the goal of contributing to this effort, we study general techniques for implementing distributed data structures assuming they have to run on many-core architectures that offer either partially cache-coherent memory or no cache coherence at all, and we present implementations of stacks, queues, and lists.
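One common way to run a data structure without cache coherence, as discussed in the last paragraph of this abstract, is to give the structure a single owner that is reached only through messages. The sketch below mimics this with Python threads and queues standing in for cores and hardware channels (a toy illustration under that assumption, not code from the thesis):

```python
import queue
import threading

def stack_server(requests):
    """Owner thread: the only code that touches the stack, so clients
    never share memory with it -- all access goes through messages."""
    stack = []
    while True:
        op, arg, reply = requests.get()
        if op == "push":
            stack.append(arg)
            reply.put(None)
        elif op == "pop":
            reply.put(stack.pop() if stack else None)
        else:                              # "stop"
            reply.put(None)
            return

class RemoteStack:
    """Client-side stub turning push/pop calls into request messages."""
    def __init__(self):
        self.requests = queue.Queue()
        threading.Thread(target=stack_server,
                         args=(self.requests,), daemon=True).start()

    def _call(self, op, arg=None):
        reply = queue.Queue(maxsize=1)     # private channel for the answer
        self.requests.put((op, arg, reply))
        return reply.get()

    def push(self, x):
        self._call("push", x)

    def pop(self):
        return self._call("pop")

    def close(self):
        self._call("stop")
```

Linearizability comes for free here: the owner serializes every operation, which is why this pattern ports well to partially coherent or non-coherent machines.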
Gürgen, Levent. "Gestion à grande échelle de données de capteurs hétérogènes." Grenoble INPG, 2007. http://www.theses.fr/2007INPG0093.
This dissertation deals with the issues related to scalable management of heterogeneous sensor data. In fact, sensors are becoming less and less expensive, more and more numerous, and heterogeneous. This naturally raises the scalability problem and the need for integrating data gathered from heterogeneous sensors. We propose a distributed and service-oriented architecture in which data processing tasks are distributed at several levels in the architecture. Data management functionalities are provided in terms of "services", in order to hide sensor heterogeneity behind generic services. We equally deal with system management issues in sensor farms, a subject not yet explored in this context.
Machado, Javam de Castro. "Parallélisme et transactions dans les bases de données à objets." Université Joseph Fourier (Grenoble), 1995. https://tel.archives-ouvertes.fr/tel-00005039.
We implemented a first prototype of the transaction parallelization model, using the O2 object database system. Our prototype introduces parallelism by creating and synchronizing parallel activities within the O2 client process that executes an application. Since the system was developed on a single-processor machine, the parallelism-related functions rely on lightweight threads. We then applied our parallelization model to the NAOS rule system. Our approach considers the set of rules of an execution cycle, called candidate rules, for parallelization. We build an execution plan for the candidate rules of a cycle, which determines whether the rules are executed sequentially or in parallel.
Crain, Tyler. "On improving the ease of use of the software transactional memory abstraction." Thesis, Rennes 1, 2013. http://www.theses.fr/2013REN1S022/document.
Multicore architectures are changing the way we write programs. Writing concurrent programs is well known to be a difficult task. Traditionally, the use of locks allowing code to execute in mutual exclusion has been the most widely used abstraction for writing concurrent programs. Unfortunately, it is difficult to write correct concurrent programs that perform efficiently using locks. Additionally, locks present other problems, such as scalability issues. Transactional memory has been proposed as a promising solution to these difficulties of writing concurrent programs. Transactions can be viewed as a high-level abstraction or methodology for writing concurrent programs, allowing the programmer to declare which sections of code should be executed atomically, without having to worry about synchronization details. Unfortunately, although arguably easier to use than locks, transactional memory still suffers from performance and ease-of-use problems. In fact, many concepts surrounding the usage and semantics of transactions have no widely agreed-upon standards. This thesis specifically focuses on these ease-of-use problems by discussing how previous research has dealt with them and by proposing new solutions that put ease of use first. The thesis starts with a chapter giving a brief overview of software transactional memory (STM), as well as a discussion of the ease-of-use problem that is the focus of the later chapters. The research contributions are then divided into four main chapters, each looking at a different approach towards making transactional memory easier to use.
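The commit/abort semantics sketched in this abstract can be made concrete with a deliberately tiny word-based STM (a toy under simplifying assumptions — a single global commit lock and per-variable version validation — not an algorithm from the thesis):

```python
import threading

class TVar:
    """A transactional variable: a value plus a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()    # single global commit lock, for simplicity

def atomically(tx):
    """Run tx(read, write) as a transaction: writes are buffered, read
    versions are validated at commit, and the transaction retries
    (aborts, discarding its buffer) whenever validation fails."""
    while True:
        read_versions, write_buffer = {}, {}

        def read(var):
            if var in write_buffer:            # read-your-own-writes
                return write_buffer[var]
            read_versions.setdefault(var, var.version)
            return var.value

        def write(var, value):
            write_buffer[var] = value

        result = tx(read, write)
        with _commit_lock:
            if all(var.version == v for var, v in read_versions.items()):
                for var, value in write_buffer.items():
                    var.value = value
                    var.version += 1
                return result
        # conflict detected: fall through and re-execute the transaction
```

A transfer between two accounts then needs no explicit locking by the programmer: `atomically(lambda read, write: (write(a, read(a) - 3), write(b, read(b) + 3)))` either commits both updates or neither — which is precisely the ease-of-use argument made for transactions above.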
Declercq, Charlotte. "Conception et développement d'un service web de mise à jour incrémentielle pour les cubes de données spatiales." Thesis, Université Laval, 2008. http://www.theses.ulaval.ca/2008/25814/25814.pdf.
Guerni, Mahoui Malika. "L'impact des objets typés sur le modèle transactionnel à effets différés." Montpellier 2, 1995. http://www.theses.fr/1995MON20147.
Full textIhaddadene, Nacim. "Extraction de modèles de processus métiers à partir de journaux d'événements." Lille 1, 2007. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2007/50376-2007-143.pdf.
Martinez, José. "Contribution aux problèmes de contrôle de concurrence et de reprise dans les bases de données à objets." PhD thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 1992. http://tel.archives-ouvertes.fr/tel-00429663.
Full textAbdouli, Majeb. "Étude des modèles étendus de transactions : adaptation aux SGBD temps réel." Le Havre, 2006. http://www.theses.fr/2006LEHA0011.
Real-time database systems (RTDBS) are defined as systems whose objective is not only to respect the temporal constraints of transactions and data (as in real-time systems), but also to respect the logical consistency of the database (as in classical DBS). In a DBS, it is difficult to deal with real-time constraints in addition to the database's logical consistency. On the other hand, real-time systems are not designed to meet transactions' real-time constraints when there is a large amount of data. The majority of previous works on RTDBS are based on the flat transaction model, and their main aim is to respect the two kinds of constraints. In this model, a transaction is composed of two primitive operations, "read" and "write". If an operation fails, then the whole transaction is aborted and restarted, often leading the transaction to miss its deadline. We deduce from this that this model is not appropriate for RTDBS. Our contribution in this work has consisted of developing protocols to manage intra-transaction conflicts in both centralized and distributed environments. We have also developed a concurrency control protocol based on transaction urgency. Finally, we have proposed a hierarchical commit protocol which guarantees the uniform distributed transaction model based on imprecise computation. Each proposed protocol is evaluated and compared to the protocols proposed in the literature.
Cotard, Sylvain. "Contribution à la robustesse des systèmes temps réel embarqués multicœur automobile." Phd thesis, Université de Nantes, 2013. http://tel.archives-ouvertes.fr/tel-00936548.
Full textWalter, Benjamin. "Two essays on the market for Bitcoin mining and one essay on the fixed effects logit model with panel data." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLG002/document.
My dissertation consists of two independent parts. The first one deals with crypto-economics, whereas the second one is about theoretical econometrics. In the first chapter, I present a model which predicts bitcoin miners' total computing power using the bitcoin/dollar exchange rate. The second chapter builds on a simplified version of the preceding model to show to what extent the current Bitcoin protocol is inefficient, and suggests a simple solution to lower the cryptocurrency's electricity consumption. The third chapter explains how to identify and estimate the sharp bounds of the identification region of the average marginal effect in a fixed effects logit model with panel data.
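The link between the exchange rate and miners' total computing power can be conveyed with a stylized free-entry condition (a back-of-the-envelope sketch with made-up parameter values, not the chapter's actual model): miners add hardware until one extra hash earns exactly its electricity cost.

```python
def equilibrium_hashrate(price_usd, block_reward_btc=6.25,
                         block_interval_s=600.0, cost_per_hash_usd=1e-16):
    """Free entry: expected revenue of one hash equals its marginal cost.
        block_reward * price / (H * block_interval) = cost_per_hash
    =>  H = block_reward * price / (cost_per_hash * block_interval)
    so the equilibrium hashrate H is proportional to the exchange rate."""
    return block_reward_btc * price_usd / (cost_per_hash_usd * block_interval_s)

# Doubling the bitcoin price doubles the predicted total hashrate.
print(equilibrium_hashrate(10_000.0), equilibrium_hashrate(20_000.0))
```

In this stylized view, the protocol's difficulty adjustment keeps the block interval fixed, so a higher price buys only more electricity consumption, not more throughput — the inefficiency the second chapter targets.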
Huynh, Ngoc Tho. "A development process for building adaptative software architectures." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0026/document.
Adaptive software is a class of software which is able to modify its own internal structure, and hence its behavior, at runtime in response to changes in its operating environment. Adaptive software development has been an emerging research area of software engineering in the last decade. Many existing approaches use techniques issued from software product lines (SPLs) to develop adaptive software architectures. They propose tools, frameworks or languages to build adaptive software architectures, but do not guide developers in the process of using them. Moreover, they suppose that all elements specified in the SPL are available in the architecture for adaptation. Therefore, the adaptive software architecture may embed unnecessary elements (components that will never be used), thus limiting the possible deployment targets. On the other hand, replacing components at runtime remains a complex task, since it must ensure the validity of the new version in addition to preserving the correct completion of ongoing activities. To cope with these issues, this thesis proposes an adaptive software development process where tasks, roles, and associated artifacts are explicit. The process aims at specifying the necessary information for building adaptive software architectures. The result of such a process is an adaptive software architecture that only contains the elements necessary for adaptation. In addition, an adaptation mechanism is proposed, based on transaction management, to ensure consistent dynamic adaptation. Such adaptation must guarantee the consistency of the system state and ensure the correct completion of ongoing transactions. In particular, transactional dependencies are specified at design time in the variability model. Then, based on these dependencies, components in the architecture include the necessary mechanisms to manage transactions at runtime consistently.
Roncancio, Claudia Lucia. "Intergiciels et services pour la gestion de données distribuées." Habilitation à diriger des recherches, 2004. http://tel.archives-ouvertes.fr/tel-00007234.