Doctoral dissertations on the topic "Decentralized data management"




Consult the 21 best doctoral dissertations for your research on the topic "Decentralized data management".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online, provided the relevant details are available in its metadata.

Browse doctoral dissertations from many different disciplines and assemble the bibliography you need.

1

Essilfie-Conduah, Nana S. M. Massachusetts Institute of Technology. "A systems analysis of insider data exfiltration : a decentralized framework for disincentivizing and auditing data exfiltration". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122440.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 105-110).
It has become commonplace to hear of data breaches. Typically, we hear of external hackers as the perpetrators; in reality, however, threats from insiders within an organization are frequent, and the cost and difficulty of detecting them are considerable. The issue has affected companies in multiple private sectors (finance, retail), and the public sector is also at risk, as is apparent from the Edward Snowden and Chelsea Manning cases. This thesis explores the current space of insider threats in terms of frequency, cost, and complexity of attack assessment. It also explores the multiple perspectives and stakeholders that make up complex insider-threat systems. Insights from multiple insider-threat cases as well as subject-matter experts in cyber security were used to model and pinpoint the high-value metrics around access management and logging that aid audit efforts. Kill chains, blockchain technology, and hierarchical organization are then explored. The research findings highlight the wide reach of excessive privileges and the crucial role that resource access and event logging of stakeholder actions play in the success of insider-threat prevention. In response, a combined solution is proposed that aims to provide an easy and accessible interface for searching and requesting access to resources that scales with an organization. The proposal leverages the transparent and immutable properties of blockchain to ledger the requesting and approval of file access through dynamic, multi-user approval logic. The solution combines simple file-based resource access in an accessible manner with a multi-layered security approach that adds further hurdles for bad actors while providing a visible and reliable look back on an immutable audit path.
by Nana Essilfie-Conduah.
S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program
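
The audit mechanism this abstract describes, ledgering file-access requests and their multi-user approvals on an immutable chain, can be illustrated with a minimal hash-chain sketch in Python. This is a sketch under stated assumptions, not the thesis's implementation: the AccessLedger class, its fields, and the in-memory list are hypothetical stand-ins for a distributed blockchain.

```python
import hashlib
import json
import time


class AccessLedger:
    """Append-only hash chain of file-access requests and their approval rules.

    Illustrative only: a real deployment would replace the local list with a
    distributed blockchain, but the tamper-evidence idea is the same.
    """

    def __init__(self):
        self.entries = []  # each entry stores the hash of its predecessor

    def _hash(self, payload: dict) -> str:
        return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

    def request_access(self, user: str, resource: str, approvers: list[str], quorum: int):
        """Record a request together with its designated approvers and quorum."""
        entry = {
            "ts": time.time(),
            "user": user,
            "resource": resource,
            "approvers": sorted(approvers),
            "quorum": quorum,
            "prev": self.entries[-1]["hash"] if self.entries else "GENESIS",
        }
        entry["hash"] = self._hash(entry)  # hash computed before the key is added
        self.entries.append(entry)
        return entry["hash"]

    def verify(self) -> bool:
        """Recompute the chain; any retroactive edit breaks a link."""
        prev = "GENESIS"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or e["hash"] != self._hash(body):
                return False
            prev = e["hash"]
        return True


ledger = AccessLedger()
ledger.request_access("alice", "/finance/q3.xlsx", approvers=["bob", "carol"], quorum=2)
assert ledger.verify()
ledger.entries[0]["resource"] = "/finance/q4.xlsx"  # simulated insider tampering
assert not ledger.verify()  # the audit path exposes the edit
```
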
2

Hull, R. "Decentralized resource and data management in a fault tolerant distributed computer system". Thesis, University of Sussex, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.356505.

Full text
3

Fleming, Theodor. "Decentralized Identity Management for a Maritime Digital Infrastructure : With focus on usability and data integrity". Thesis, Linköpings universitet, Programvara och system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-155115.

Full text
Abstract:
When the Internet was created it did not include any protocol for identifying the person behind the computer. Instead, identification has primarily been established by trusting a third party. The rise of Distributed Ledger Technology, however, has made it possible to authenticate a digital identity and build trust without the need for a third party. The Swedish Maritime Administration is currently validating a new maritime digital infrastructure for the maritime transportation industry. The goal is to reduce the number of accidents, fuel consumption, and voyage costs. The actors involved have their identities stored in a central registry that relies on trust in a third party. This thesis investigates how a conversion from the centralized identity registry to a decentralized one affects usability and the risk of compromised data integrity. This is done by implementing a proof of concept of a decentralized identity registry that replaces the current centralized registry, and comparing the two. The decentralized proof of concept reduces the risk of compromised data integrity by 95.1% compared with the centralized registry, but this comes with a 53% loss in efficiency.
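
A minimal sketch of the core idea behind such a decentralized identity registry: an identifier is derived from a public key anchored in a shared registry, and authentication is a challenge-response signature check rather than an appeal to a trusted third party. This is illustrative only, not the thesis's proof of concept; it assumes the third-party `cryptography` package, and the `did:example` prefix and the registry dict are placeholders.

```python
# Sketch only; requires the third-party `cryptography` package.
import hashlib
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# A plain dict stands in for the decentralized registry: it maps a
# self-certifying identifier to the public key that controls it.
registry: dict[str, bytes] = {}


def register_identity():
    """An actor creates a key pair and anchors the public half in the registry."""
    private_key = Ed25519PrivateKey.generate()
    raw_public = private_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
    did = "did:example:" + hashlib.sha256(raw_public).hexdigest()[:16]
    registry[did] = raw_public
    return did, private_key


def authenticate(did: str, private_key: Ed25519PrivateKey) -> bool:
    """Challenge-response: the registry entry suffices, no third party vouches."""
    challenge = os.urandom(32)
    signature = private_key.sign(challenge)
    public_key = Ed25519PublicKey.from_public_bytes(registry[did])
    try:
        public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False


did, key = register_identity()
print(did, authenticate(did, key))  # identity proven without a trusted third party
```
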
4

Wang, Mianyu; Kam, Moshe; Kandasamy, Nagarajan. "A decentralized control and optimization framework for autonomic performance management of web-server systems". Philadelphia, Pa. : Drexel University, 2007. http://hdl.handle.net/1860/2643.

Full text
5

Ponnakanti, Hari Priya. "A Hyperledger based Secure Data Management and Disease Diagnosis Framework Design for Healthcare". University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627662565879478.

Full text
6

Diallo, El-hacen. "Study and Design of Blockchain-based Decentralized Road Traffic Data Management in VANET (Vehicular Ad hoc NETworks)". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG017.

Full text
Abstract:
The prominence of autonomous vehicles has imposed the need for more secure management of road traffic data (i.e., events related to accidents, traffic state, attack reports, etc.) in VANETs (Vehicular Ad hoc NETworks). Traditional centralized systems address this need by leveraging remote servers far from the vehicles. That is not an optimal solution, as road traffic data must be distributed and securely cached close to the vehicles, which improves latency and reduces the load on the communication network's bandwidth. Blockchain technology has emerged as a promising solution thanks to its decentralization property. Some questions nevertheless remain unanswered: how should blockchain-based validation of road traffic data be designed, given that it is more complex than a financial transaction? And what performance can be expected in realistic VANET scenarios? This thesis addresses those questions by designing road traffic data management adapted to the constraints imposed by the blockchain. The performance and validity of the proposed protocols are then evaluated through various simulations of scenarios taken from real road traffic. We first propose an adaptation of the Proof of Work (PoW) consensus mechanism to the VANET context, whereby the roadside units (RSUs) maintain a decentralized database of road traffic data. The proposed scheme is then rigorously evaluated in the presence of malicious vehicles. The results show that it enables a secure and decentralized database of road traffic data at the RSU level. Next, motivated by our findings, we adopt PBFT (Practical Byzantine Fault Tolerance), a voting-based consensus mechanism, to reduce the latency of the blockchain validation process. The RSUs that validate traffic data are selected dynamically based on where traffic events occur. We also propose a novel blockchain replication scheme between RSUs, which trades off latency against the replication frequency of the chain's blocks. Simulation results show better performance when the set of validating RSUs is kept to a minimum. In the last part of the thesis, we propose a trust model that minimizes the number of validators without compromising the decentralization and fairness of block creation. This trust model leverages geographical distance and RSU trust to dynamically form a group of validators for each block in the chain. We formalize and evaluate this reputation model, considering various scenarios with malicious RSUs. The results demonstrate the efficiency of the proposal in minimizing the validator group while isolating malicious RSUs.
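
The final contribution, a trust model combining geographical distance and RSU trust to form a per-block validator group, can be sketched roughly as follows. The scoring function trust / (1 + distance) and the update rule are illustrative assumptions, not the model formalized in the thesis.

```python
import math
import random
from dataclasses import dataclass


@dataclass
class RSU:
    name: str
    x: float
    y: float
    trust: float = 0.5  # starts neutral, updated after each block


def select_validators(rsus, event_xy, k=3):
    """Rank RSUs by trust weighted against distance to the traffic event."""
    ex, ey = event_xy

    def score(rsu):
        distance = math.hypot(rsu.x - ex, rsu.y - ey)
        return rsu.trust / (1.0 + distance)

    return sorted(rsus, key=score, reverse=True)[:k]


def update_trust(rsu, behaved_correctly, rate=0.1):
    """Reward honest validation, penalize detected misbehaviour."""
    target = 1.0 if behaved_correctly else 0.0
    rsu.trust += rate * (target - rsu.trust)


random.seed(42)
rsus = [RSU(f"rsu{i}", random.uniform(0, 10), random.uniform(0, 10)) for i in range(8)]
validators = select_validators(rsus, event_xy=(5.0, 5.0))
for v in validators:
    update_trust(v, behaved_correctly=(v.name != "rsu3"))  # rsu3 acts maliciously here
print([(v.name, round(v.trust, 2)) for v in validators])
```
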
7

Bögels, Machteld. "Digital Waste : ELIMINATING NON-VALUE ADDING ACTIVITIES THROUGH DECENTRALIZED APPLICATION DEVELOPMENT". Thesis, KTH, Skolan för industriell teknik och management (ITM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263903.

Full text
Abstract:
In an era where the network of interconnected devices is rapidly expanding, it is difficult for organizations to adapt to the increasingly data-rich and dynamic environment while remaining competitive. Employees experience that much of their time and resources is spent daily on repetitive, inefficient, and mundane tasks. Whereas lean manufacturing has established itself as a well-known optimization concept, lean information management and the removal of waste are not yet used to their full potential, as their direct value is less visible. A case study was conducted to define which types of non-value-adding activities can be identified within information flows and to determine whether decentralized application development can eliminate this digital waste. An internal information flow was modelled, analyzed, and optimized by developing customized applications on the Microsoft Power Platform. Based on literature from the fields of manufacturing and software development, a framework was developed to categorize digital waste as well as higher-order root causes in terms of business strategy and IT infrastructure. While decentralized app development provides the ability to significantly reduce operational digital waste in a simplified manner, it can also enable unnecessary expansion of a common data model, and it requires application lifecycle management efforts as well as edge security to ensure data compliance and governance. Although limited to one case study, the suggested framework could give insights to organizations that aim to optimize internal workflows by identifying and eliminating digital waste and its root causes.
8

Mehra, Varun S. M. Massachusetts Institute of Technology. "Optimal sizing of solar and battery assets in decentralized micro-grids with demand-side management". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108959.

Full text
Abstract:
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, 2017.
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 199-209).
Solar-based community micro-grids and individual home systems have been recognized as key enablers of electricity provision to the over one billion people living without energy access to date. Despite significant cost reductions in solar panels, these options can still be cost-prohibitive, mainly due to over-sizing of generation assets combined with a lack of ability to actively manage electricity demand. The main contribution is a methodology and optimization approach for finding least-cost combinations of generation asset sizes, in solar panels and batteries, subject to meeting reliability constraints; the results are based on a techno-economic modeling approach constructed for assessing decentralized micro-grids with demand-side management capabilities. The software model represents the technical characteristics of a low-voltage, direct-current network architecture and the computational capabilities of a power management device. The main use case of the model serves representative, aggregated, household-level load profiles combined with simulated power output from solar photovoltaic modules and the kinetic operating constraints of lead-acid batteries, at hourly timesteps over year-long simulations. The solution state-space is based on the solar module and battery capacities available from distributors in Jharkhand, India. Additional work extends to real-time operation of such isolated micro-grids with the requisite local computation. First, for load disaggregation and forecasting purposes, clustering algorithms and statistical learning techniques are applied to quantitative results from inferred load profiles based on data logged from off-grid solar home systems. Second, results from an optimization approach to accurately parametrize a lead-acid battery model for potential use in real-time field implementation are also shared. Economic results, sensitivity analyses around key technical and financial input assumptions, and comparisons of the cost reductions due to the optimization of solar and battery assets for decentralized micro-grids with demand-side management capabilities are subsequently presented. The work concludes with insights and policy implications on establishing differentiated willingness-to-pay, tiers of service, and dynamic price-setting in advanced micro-grids.
by Varun Mehra.
S.M. in Technology and Policy; S.M. in Electrical Engineering and Computer Science
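
The sizing problem described here, least-cost combinations of panel and battery capacities subject to a reliability constraint, can be illustrated with a toy exhaustive search over an hourly energy balance. All numbers (catalogue sizes, prices, load and solar shapes, the 95% threshold) are invented for illustration and are far simpler than the thesis's techno-economic model and kinetic battery constraints.

```python
import itertools
import math

# Hypothetical catalogue; sizes, prices, and profiles are invented.
PANEL_OPTIONS = [0.1, 0.2, 0.3, 0.4, 0.5]   # kW of PV
BATTERY_OPTIONS = [0.5, 1.0, 1.5, 2.0]      # kWh of storage
PANEL_COST, BATTERY_COST = 900, 250         # $/kW and $/kWh


def fraction_served(pv_kw, batt_kwh, hours=24 * 365):
    """Hourly energy balance; returns the fraction of demand actually served."""
    soc, served, demand = batt_kwh / 2, 0.0, 0.0
    for h in range(hours):
        sun = max(0.0, math.sin(math.pi * (h % 24 - 6) / 12))  # crude solar shape
        load = 0.06 if 18 <= h % 24 <= 22 else 0.02            # kWh in this hour
        soc = min(batt_kwh, soc + pv_kw * sun)                 # charge from PV
        supplied = min(load, soc)                              # discharge to the load
        soc -= supplied
        served += supplied
        demand += load
    return served / demand


best = None
for pv, batt in itertools.product(PANEL_OPTIONS, BATTERY_OPTIONS):
    if fraction_served(pv, batt) >= 0.95:                      # reliability constraint
        cost = pv * PANEL_COST + batt * BATTERY_COST
        if best is None or cost < best[0]:
            best = (cost, pv, batt)
print("least-cost sizing (cost, kW PV, kWh battery):", best)
```
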
9

Rodriguez, German Darío Rivas. "Decentralized Architecture for Load Balancing in District Heating Systems". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3329.

Full text
Abstract:
Context. In forthcoming years, sustainability will lead the development of society, and the implementation of innovative systems to make the world more sustainable is becoming one of the key points for science. Load balancing strategies aim to reduce the economic and ecological cost of heat production in district heating systems. Developing a decentralized solution serves the objective of making load balancing more accessible and attractive for the companies in charge of providing district heating services. Objectives. This master thesis aims to find a new alternative for implementing decentralized load balancing in district heating systems. Methods. The development of this master thesis involved a review of the state of the art on demand-side management in district heating systems and power networks. It also involved the design of the architecture, the creation of a software prototype, and a simulation of the system to measure its performance in terms of response time. Results. A decentralized demand-side management algorithm and communication framework, a software architecture description, and an analysis of the prototype simulation performance. Conclusions. The main conclusion is that it is possible to create a decentralized algorithm that performs load balancing without compromising individuals' privacy. The algorithm shows good levels of performance not only in the system's aggregated response time but also in individual performance, in terms of memory and CPU consumption.
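
As a rough illustration of decentralized load balancing that avoids disclosing individual consumption schedules, the sketch below uses gossip averaging: houses exchange only running averages and then scale their own load by a commonly derived factor. This is a simplification of the general idea, not the thesis's algorithm; real privacy-preserving aggregation would add masking, and the House class and the 10% cut are hypothetical.

```python
import random


class House:
    """One consumer node; peers exchange only running averages, never schedules."""

    def __init__(self, demand_kw):
        self.demand = demand_kw     # known locally
        self.estimate = demand_kw   # local estimate of the network-wide average


def gossip_round(houses):
    """Each house averages its estimate with one randomly chosen peer's."""
    for a in houses:
        b = random.choice(houses)
        a.estimate = b.estimate = (a.estimate + b.estimate) / 2


random.seed(1)
houses = [House(random.uniform(2, 8)) for _ in range(20)]
for _ in range(50):                # estimates converge to the true mean demand
    gossip_round(houses)

# The utility announces a 10% cut relative to the estimated total; each house
# then scales its own load using only local information, with no coordinator.
target_total = 0.9 * houses[0].estimate * len(houses)
for h in houses:
    h.planned = h.demand * target_total / (h.estimate * len(houses))
print(round(sum(h.planned for h in houses), 2), "kW planned after balancing")
```
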
10

Ribe-Baumann, Elizabeth [Verfasser], Kai-Uwe [Akademischer Betreuer] Sattler, Jochen [Akademischer Betreuer] Seitz and Manfred [Akademischer Betreuer] Hauswirth. "Resource and Location Aware Robust Decentralized Data Management / Elizabeth Ribe-Baumann. Gutachter: Jochen Seitz ; Manfred Hauswirth. Betreuer: Kai-Uwe Sattler". Ilmenau : Universitätsbibliothek Ilmenau, 2015. http://d-nb.info/1074139607/34.

Full text
11

Ribe-Baumann, Liz [Verfasser], Kai-Uwe [Akademischer Betreuer] Sattler, Jochen [Akademischer Betreuer] Seitz and Manfred [Akademischer Betreuer] Hauswirth. "Resource and Location Aware Robust Decentralized Data Management / Elizabeth Ribe-Baumann. Gutachter: Jochen Seitz ; Manfred Hauswirth. Betreuer: Kai-Uwe Sattler". Ilmenau : Universitätsbibliothek Ilmenau, 2015. http://d-nb.info/1074139607/34.

Full text
12

Schanzenbach, Martin [Verfasser], Claudia [Akademischer Betreuer] Eckert, Georg [Gutachter] Carle and Claudia [Gutachter] Eckert. "Towards Self-sovereign, decentralized personal data sharing and identity management / Martin Schanzenbach ; Gutachter: Georg Carle, Claudia Eckert ; Betreuer: Claudia Eckert". München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/1225479959/34.

Full text
13

Nicolae, Bogdan. "BlobSeer : towards efficient data storage management for large-scale, distributed systems". Phd thesis, Université Rennes 1, 2010. http://tel.archives-ouvertes.fr/tel-00552271.

Full text
Abstract:
With data volumes increasing at a high rate and the emergence of highly scalable infrastructures (cloud computing, petascale computing), distributed management of data becomes a crucial issue that faces many challenges. This thesis brings several contributions to address such challenges. First, it proposes a set of principles for designing highly scalable distributed storage systems that are optimized for heavy data-access concurrency. In particular, it highlights the potentially large benefits of using versioning in this context. Second, based on these principles, it introduces a series of distributed data and metadata management algorithms that enable a high throughput under concurrency. Third, it shows how to efficiently implement these algorithms in practice, dealing with key issues such as high-performance parallel transfers, efficient maintenance of distributed data structures, fault tolerance, etc. These results are used to build BlobSeer, an experimental prototype that is used to demonstrate both the theoretical benefits of the approach in synthetic benchmarks and the practical benefits in real-life application scenarios: as a storage backend for MapReduce applications, as a storage backend for deployment and snapshotting of virtual machine images in clouds, and as a quality-of-service-enabled data storage service for cloud applications. Extensive experiments on the Grid'5000 testbed show that BlobSeer remains scalable and sustains a high throughput even under heavy access concurrency, outperforming several state-of-the-art approaches by a large margin.
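
A toy version of the versioning principle highlighted above: every write publishes a new immutable snapshot of a chunk map (copy-on-write), so readers pinned to a version are never blocked or disturbed by concurrent writers. Class and method names are hypothetical; BlobSeer's actual design distributes chunks and metadata across many nodes.

```python
import itertools


class VersionedBlob:
    """Copy-on-write versioning sketch: every write publishes a new immutable
    snapshot of the chunk map, so readers never block on concurrent writers."""

    def __init__(self):
        self._chunks = {}            # chunk store: chunk id -> bytes
        self._ids = itertools.count()
        self.versions = [tuple()]    # version 0 is the empty blob

    def write(self, base_version, offset, data_chunks):
        """Derive a new version from `base_version`; old versions stay intact."""
        snapshot = list(self.versions[base_version])
        snapshot.extend([None] * (offset + len(data_chunks) - len(snapshot)))
        for i, chunk in enumerate(data_chunks):
            cid = next(self._ids)
            self._chunks[cid] = chunk          # chunks are immutable once stored
            snapshot[offset + i] = cid
        self.versions.append(tuple(snapshot))
        return len(self.versions) - 1          # the new version number

    def read(self, version):
        """Readers pin a version; later writes can never change what they see."""
        return [self._chunks.get(cid) for cid in self.versions[version]]


blob = VersionedBlob()
v1 = blob.write(0, 0, [b"aaaa", b"bbbb"])
v2 = blob.write(v1, 1, [b"XXXX"])             # overlapping update of chunk 1
assert blob.read(v1) == [b"aaaa", b"bbbb"]    # v1 is untouched by the new write
assert blob.read(v2) == [b"aaaa", b"XXXX"]
```
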
14

Nicolae, Bogdan. "BlobSeer : towards efficient data storage management for large-scale, distributed systems". Phd thesis, Rennes 1, 2010. http://www.theses.fr/2010REN1S123.

Full text
Abstract:
With data volumes increasing at a high rate and the emergence of highly scalable infrastructures (cloud computing, petascale computing), distributed management of data becomes a crucial issue that faces many challenges. This thesis brings several contributions to address such challenges. First, it proposes a set of principles for designing highly scalable distributed storage systems that are optimized for heavy data-access concurrency. In particular, it highlights the potentially large benefits of using versioning in this context. Second, based on these principles, it introduces a series of distributed data and metadata management algorithms that enable a high throughput under concurrency. Third, it shows how to efficiently implement these algorithms in practice, dealing with key issues such as high-performance parallel transfers, efficient maintenance of distributed data structures, fault tolerance, etc. These results are used to build BlobSeer, an experimental prototype that is used to demonstrate both the theoretical benefits of the approach in synthetic benchmarks and the practical benefits in real-life application scenarios: as a storage backend for MapReduce applications, as a storage backend for deployment and snapshotting of virtual machine images in clouds, and as a quality-of-service-enabled data storage service for cloud applications. Extensive experiments on the Grid'5000 testbed show that BlobSeer remains scalable and sustains a high throughput even under heavy access concurrency, outperforming several state-of-the-art approaches by a large margin.
15

Dufour, Luc. "Contribution à la mise au point d'un pilotage énergétique décentralisé par prédiction". Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2017. http://www.theses.fr/2017EMAC0004/document.

Full text
Abstract:
How can the energy needs of a population of nine billion people be met in 2050 in an economically viable way while minimizing the impact on the environment? Part of the answer is the integration of clean wind and photovoltaic generation, but its total dependence on climatic variations puts increased pressure on the grid. Traditional centralized, parametric predictive models struggle to capture sudden variations in production and consumption. The internet revolution now enables a convergence between the digital and energy domains, and for the past five years the focus of study in Europe has been local control of electricity: several smart districts have been created, but the control and prediction models they use often remain the property of the project partners. This thesis establishes an hourly energy balance to predict all the energy vectors of a system. The energy needs of a system such as a house are decomposed into heating, domestic hot water, lighting, ventilation, and useful specific electrical loads; the system may also include decentralized generation and storage, which increases its load-shedding capacity. For the control center, the objective is to obtain short-term scenarios of over-production or over-consumption for a given district. An hourly horizon requires fine-grained prediction of the different energy flows of a system, in particular heating and hot water, which represent the largest flexibility potential in buildings. Establishing the balance requires computing the energy flows inside the system: losses through the envelope and ventilation, solar gains, internal gains from people and appliances, storage, domestic hot water production, and useful specific electrical loads. Some of these quantities can be evaluated quite precisely as a function of time; for the others (hot water, specific electrical loads, internal gains, storage), the literature offers only global, time-independent methods, so a predictive, learning-based method is developed, with energy simulation models as the reference point. The tool must be non-intrusive, personalized, robust, and simple: useful appliances are identified from a single metering point, information is collected and analyzed locally, private data are never transmitted outside, and only energy predictions are sent to a higher level for district-wide aggregation. The predictions rely on machine learning methods such as neural networks and decision trees. Robustness is studied from a technological standpoint (several communication protocols were tested), a methodological standpoint (several collection methods), and a data storage standpoint (limiting the collection frequency); ease of use implies ease of installation, and minimizing the number of input data while keeping acceptable accuracy is the main axis of optimization. The approach is tested on real use cases in the residential and tertiary sectors: consumption is decomposed into heating, hot water, and electrical appliance consumption; heating and hot water are predicted every hour to estimate the thermal storage capacity; appliance consumption is characterized by non-intrusive disaggregation of the global load curve; and every 15 minutes the software provides a hot water prediction, a heating prediction, a decentralized production prediction, and a characterization of the electrical consumption. Comparison with the physical simulation models enables an evaluation of the error of the implemented models.
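
The non-intrusive disaggregation step, identifying appliances from a single metering point, can be illustrated with the classic combinatorial approach: choose the subset of known appliance signatures that best explains the meter reading. The signature values below are invented for illustration; the thesis uses learned models rather than a fixed table.

```python
from itertools import chain, combinations

# Hypothetical steady-state signatures in kW; a real system learns these.
SIGNATURES = {"heating": 2.0, "hot_water": 1.5, "oven": 2.3, "standby": 0.1}


def disaggregate(total_kw):
    """Return the appliance subset whose signature sum best explains the meter
    reading; brute force is fine for a handful of appliances."""
    names = list(SIGNATURES)
    subsets = chain.from_iterable(
        combinations(names, r) for r in range(len(names) + 1)
    )
    return min(subsets, key=lambda s: abs(total_kw - sum(SIGNATURES[n] for n in s)))


# Hourly readings from the single metering point at the main breaker:
for reading in [0.1, 3.6, 2.1, 3.9]:
    print(reading, "->", disaggregate(reading))
```
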
16

Utete, Simukai. "Network management in decentralised sensing systems". Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297308.

Full text
17

Kourtellis, Nicolas. "On the Design of Socially-Aware Distributed Systems". Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4107.

Full text
Abstract:
Social media services and applications enable billions of users to share an unprecedented amount of social information, which is further augmented by location and collocation information from mobile phones and can be aggregated to provide an accurate digital representation of the social world. This dissertation argues that social knowledge extracted from this wealth of information can be embedded in the design of novel distributed, socially-aware applications and services, consequently improving system response time, availability, and resilience to attacks, and reducing system overhead. To support this thesis, two research avenues are explored. First, this dissertation presents Prometheus, a socially-aware peer-to-peer service that collects social information from multiple sources, maintains it in a decentralized fashion on user-contributed nodes, and exposes it to applications through an interface that implements non-trivial social inferences. The system's socially-aware design leads to multiple improvements: 1) it increases service availability by allowing users to manage their social information via socially-trusted peers, 2) it improves social inference performance and reduces message overhead by exploiting naturally-formed social groups, and 3) it reduces the opportunity for attackers to influence application requests. These performance improvements are assessed via simulations and a prototype deployment on a local cluster and on a worldwide testbed (PlanetLab) under emulated application workloads. Second, this dissertation defines the projection graph, the result of decentralizing a social graph onto a peer-to-peer system such as Prometheus, and studies its network properties and how they can be used to design more efficient socially-aware distributed applications and services. In particular: 1) it analytically formulates the relation between centrality metrics such as degree centrality, node betweenness centrality, and edge betweenness centrality in the social graph and in the emerging projection graph, 2) it experimentally demonstrates on real networks that for small groups of users mapped on peers there is a high association of social and projection graph properties, 3) it shows how these properties of the (dynamic) projection graph can be accurately inferred from the properties of the (slower-changing) social graph, and 4) it demonstrates with two search application scenarios the usability of the projection graph in designing social search applications and unstructured P2P overlays. These research results lead to the formulation of lessons applicable to the design of socially-aware applications and distributed systems for improved application performance in areas such as social search, data dissemination, data placement, and caching, as well as for reduced system communication overhead and increased system resilience to attacks.
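
The projection graph defined in the second part is easy to state in code: peers become nodes, and two peers are adjacent whenever they host socially connected users. The toy graph and degree-centrality computation below are illustrative; the dissertation analyzes betweenness and other centralities analytically and on real networks.

```python
from collections import defaultdict

# Toy undirected social graph and a user -> hosting-peer assignment.
social_edges = [("ann", "bob"), ("bob", "cat"), ("dan", "eve"),
                ("eve", "ann"), ("bob", "eve")]
hosted_on = {"ann": "p1", "bob": "p1", "cat": "p2", "dan": "p3", "eve": "p3"}


def project(edges, mapping):
    """Projection graph: two peers are adjacent iff they host adjacent users."""
    adjacency = defaultdict(set)
    for u, v in edges:
        pu, pv = mapping[u], mapping[v]
        if pu != pv:                 # edges inside one peer stay local
            adjacency[pu].add(pv)
            adjacency[pv].add(pu)
    return adjacency


projection = project(social_edges, hosted_on)
degree = {peer: len(nbrs) for peer, nbrs in projection.items()}
print(degree)  # p1 hosts the best-connected users, so it has the highest degree
```
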
18

Deaves, R. H. "The management of communications in decentralised Bayesian data fusion system". Thesis, University of Bristol, 1998. http://hdl.handle.net/1983/ae6cb4d5-96e8-4af4-90c0-fddb4f188369.

Full text
19

Javet, Ludovic. "Privacy-preserving distributed queries compatible with opportunistic networks". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG038.

Full text
Abstract:
In today's society, where IoT and digital platforms are transforming our daily lives, personal data is generated in profusion and its usage is often beyond our control. Recent legislation like the GDPR in Europe proposes concrete solutions to regulate these new practices and protect our privacy. Meanwhile, on the technical side, new architectures are emerging to respond to this urgent need to reclaim our own personal data. This is the case of Personal Data Management Systems (PDMS), which offer a decentralized way to store and manage personal data, empowering individuals with greater control over their digital lives. This thesis explores the distributed use of these PDMS in an opportunistic network context, where messages are transferred from one device to another without the need for any infrastructure. The objective is to enable complex processing that crosses data from thousands of individuals, while guaranteeing the security and fault tolerance of the executions. The proposed approach leverages Trusted Execution Environments to define a new computing paradigm, called edgelet computing, that satisfies validity, resiliency, and privacy properties. Contributions include: (1) security mechanisms to protect executions from malicious attacks seeking to plunder personal data, (2) resiliency strategies to tolerate failures and message losses induced by the fully decentralized environment, and (3) extensive validations and practical demonstrations of the proposed methods.
20

Jaradat, Ward. "On the construction of decentralised service-oriented orchestration systems". Thesis, University of St Andrews, 2016. http://hdl.handle.net/10023/8036.

Full text
Abstract:
Modern science relies on workflow technology to capture, process, and analyse data obtained from scientific instruments. Scientific workflows are precise descriptions of experiments in which multiple computational tasks are coordinated based on the dataflows between them. Orchestrating scientific workflows presents a significant research challenge: they are typically executed in a manner such that all data pass through a centralised computer server known as the engine, which causes unnecessary network traffic that leads to a performance bottleneck. These workflows are commonly composed of services that perform computation over geographically distributed resources, and involve the management of dataflows between them. Centralised orchestration is clearly not a scalable approach for coordinating services dispersed across distant geographical locations. This thesis presents a scalable decentralised service-oriented orchestration system that relies on a high-level data coordination language for the specification and execution of workflows. This system's architecture consists of distributed engines, each of which is responsible for executing part of the overall workflow. It exploits parallelism in the workflow by decomposing it into smaller sub-workflows, and determines the most appropriate engines to execute them using computation placement analysis. This permits the workflow logic to be distributed closer to the services providing the data for execution, which reduces the overall data transfer in the workflow and improves its execution time. This thesis provides an evaluation of the presented system which concludes that decentralised orchestration provides scalability benefits over centralised orchestration, and improves the overall performance of executing a service-oriented workflow.
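
A minimal sketch of the decomposition idea described above: assign each task to an engine co-located with the service it calls, so that only dataflows crossing engines generate transfers, instead of routing all data through one central engine. The workflow dict, region labels, and decompose function are hypothetical illustrations, not the thesis's coordination language or placement analysis.

```python
from collections import defaultdict

# A workflow as task -> (region of the service it calls, upstream tasks).
workflow = {
    "fetch_eu": ("eu", []),
    "fetch_us": ("us", []),
    "clean_eu": ("eu", ["fetch_eu"]),
    "clean_us": ("us", ["fetch_us"]),
    "merge":    ("eu", ["clean_eu", "clean_us"]),
}


def decompose(workflow):
    """Place each task on the engine co-located with its service, so only
    cross-region dataflows leave an engine (no central engine in the path)."""
    subworkflows = defaultdict(list)
    transfers = []
    for task, (region, upstream) in workflow.items():
        subworkflows[region].append(task)
        for dep in upstream:
            dep_region = workflow[dep][0]
            if dep_region != region:
                transfers.append((dep, dep_region, "->", task, region))
    return dict(subworkflows), transfers


subs, transfers = decompose(workflow)
print(subs)       # {'eu': ['fetch_eu', 'clean_eu', 'merge'], 'us': [...]}
print(transfers)  # only the clean_us -> merge dataflow crosses regions
```
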
21

Yan, Jun (jyan@it.swin.edu.au). "A framework and coordination technologies for peer-to-peer based decentralised workflow systems". Swinburne University of Technology, 2004. http://adt.lib.swin.edu.au./public/adt-VSWT20050307.170020.

Full text
Abstract:
This thesis investigates an innovative framework and process coordination technologies for peer-to-peer based decentralised workflow systems. The aim of this work is to address some of the unsolved problems in contemporary workflow research at a fundamental, architectural level. The problems addressed in this thesis, i.e., bad performance, vulnerability to failures, poor scalability, user restrictions, unsatisfactory system openness, and lack of support for incompletely specified processes, have become major obstacles to wide deployment of workflow in the real world. After an in-depth analysis of the above problems, this thesis reveals that most of them are mainly caused by the mismatch between the application's nature, i.e., distributed, and the system design, i.e., centralised management. Thus, the old-fashioned client-server paradigm conventionally used in most of today's workflow systems should be replaced with a peer-to-peer based, open, collaborative and decentralised framework which reflects workflow's distributed nature more naturally. Combining workflow technology and peer-to-peer computing technology, SwinDeW, a genuinely decentralised workflow approach, is proposed in this thesis. The distinguishing design of SwinDeW removes both the centralised data repository and the centralised workflow engine from the system. Hence, workflow participants are facilitated by automated peers which communicate and collaborate with one another directly to fulfil both build-time and run-time workflow functions. To achieve this goal, an innovative data storage approach, known as "know what you should know", is proposed, which divides a process model into individual task partitions and distributes each partition to the relevant peers according to a capability match. Based on this data storage approach, novel mechanisms for decentralised process instantiation, instance execution and execution monitoring are explored. Moreover, SwinDeW is further extended to support incompletely specified processes in the decentralised environment, and new technologies for handling such processes at run time are presented. The major contributions of this research are an innovative, decentralised workflow system framework and the corresponding process coordination technologies for system functionality. Issues regarding system performance, reliability, scalability, user support, system openness, and support for incompletely specified processes are discussed in depth. This thesis also contributes the SwinDeW prototype, which implements and demonstrates this design and functionality for proof-of-concept purposes. With these outcomes, performance bottlenecks in workflow systems are likely to be eliminated, while increased resilience to failure, enhanced scalability, better user support and improved system openness are likely to be achieved, with support for both completely and incompletely specified processes. As a consequence, workflow systems can be expected to be widely deployable to real-world applications, which was infeasible before.
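
The "know what you should know" storage approach can be sketched as a simple capability-based partitioning: each peer receives only the task definitions it can perform, plus the names of the successor tasks it must hand work to. Task, capability, and peer names below are hypothetical illustrations, not SwinDeW's actual data model.

```python
# Process model: task -> (required capability, successor tasks).
process = {
    "draft":   ("writing", ["review"]),
    "review":  ("auditing", ["approve"]),
    "approve": ("management", []),
}
peers = {"peer_a": {"writing"}, "peer_b": {"auditing"}, "peer_c": {"management"}}


def distribute(process, peers):
    """'Know what you should know': each peer stores only the tasks it can
    perform plus the names of the successor tasks it hands work to."""
    partitions = {}
    for peer, capabilities in peers.items():
        partitions[peer] = {
            task: successors
            for task, (capability, successors) in process.items()
            if capability in capabilities
        }
    return partitions


for peer, partition in distribute(process, peers).items():
    print(peer, "->", partition)
# peer_a -> {'draft': ['review']}: it knows its own task and where the work
# goes next, but never sees the full process definition.
```
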
