Dissertations / Theses on the topic 'Data traceability'

Consult the top 28 dissertations / theses for your research on the topic 'Data traceability.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Maté, Alejandro. "Data Warehouses: Traceability and Alignment with Corporate Strategies." Doctoral thesis, Universidad de Alicante, 2013. http://hdl.handle.net/10045/36383.

2

Gemesi, Hafize Gunsu. "Food traceability information modeling and data exchange and GIS based farm traceability model design and application." [Ames, Iowa: Iowa State University], 2010. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1476294.

3

Pritchard, Jeffrey W. "The Advanced Traceability and Control system performance data analysis." Thesis, Naval Postgraduate School, Monterey, California, 1992. http://hdl.handle.net/10945/23520.

4

Ali, Mufajjul. "Provenance-based data traceability model and policy enforcement framework for cloud services." Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/393423/.

Abstract:
In the context of software, provenance holds the key to retaining a reproducible record of a service's execution, which can be replayed from the beginning. This entails the nature of the invocations that took place, how and where the data were created, modified, and updated, and the user's engagement with the service. With the emergence of the cloud and the benefits it brings, there has been a rapid proliferation of services developed and adopted by commercial businesses. However, these services expose very little of their internal workings to their customers and offer insufficient means to verify that they are operating correctly. This can cause transparency and compliance issues: in the event of a fault or violation, customers and providers are left pointing fingers at each other. Provenance-based traceability addresses part of this problem by capturing and querying events that occurred in the past to understand how and why they took place. On top of that, provenance-based policies are required to facilitate the validation and enforcement of business-level requirements for end-users' satisfaction. This dissertation makes four contributions to the state of the art: i) defining and implementing an enhanced provenance-based cloud traceability model (cProv) that extends the standardized Prov model to support characteristics of cloud services; the model can then conceptualize the traceability of a running cloud service; ii) creating a provenance-based policy language (cProvl) to facilitate the declaration and enforcement of business-level requirements; iii) developing a traceability framework that provides client- and server-side stacks for integrating service-level traceability and policy-based enforcement of business rules; iv) finally, implementing and evaluating the framework, which leverages standardized industry solutions. The framework is then applied to the commercial service `ConfidenShare' as a proof of concept.
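To make the mechanism concrete, the sketch below shows provenance-style event capture and a policy check over the captured trace. It is an illustration only: the record fields, the "created"/"modified" activity names, and the single-writer policy are invented for this example and are not the thesis's cProv model or cProvl language, which build on the W3C PROV standard.

    # Sketch: capture service events as provenance records, then query the
    # trace to validate a policy. Field names and the policy are invented.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ProvRecord:
        activity: str      # e.g. "created", "modified" (illustrative)
        entity: str        # the data item acted upon
        agent: str         # user or service component responsible
        timestamp: float   # when the event occurred

    @dataclass
    class ProvLog:
        records: List[ProvRecord] = field(default_factory=list)

        def record(self, activity, entity, agent, timestamp):
            self.records.append(ProvRecord(activity, entity, agent, timestamp))

        def trace(self, entity):
            """Replay every captured event for one data item, oldest first."""
            return sorted((r for r in self.records if r.entity == entity),
                          key=lambda r: r.timestamp)

    # A policy in the spirit of provenance-based enforcement: flag any
    # modification of an entity by an agent other than its creator.
    def violates_single_writer(log: ProvLog, entity: str) -> bool:
        events = log.trace(entity)
        creators = {r.agent for r in events if r.activity == "created"}
        return any(r.agent not in creators
                   for r in events if r.activity == "modified")

    log = ProvLog()
    log.record("created", "record-42", "alice", 1.0)
    log.record("modified", "record-42", "mallory", 2.0)
    print(violates_single_writer(log, "record-42"))   # True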
5

Rush, David, F. W. (Bill) Hafner, and Patsy Humphrey. "Development of a Requirements Repository for the Advanced Data Acquisition and Processing System (ADAPS)." International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/607313.

Abstract:
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Standards lead to the creation of requirements listings, and test verification matrices allow developer and acquirer to assure themselves and each other that the requested system is actually what is being constructed. Further, given the intricacy of the software test description, traceability of each test process to the requirement under test is mandated so that the acceptance test process can be accomplished efficiently. In the view of the logistician, the maintainability of the software and the repair of found faults is primary, and these statistics can be gathered by the producer to ultimately enhance the Capability Maturity Model (CMM) rating of the vendor.
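The requirement-to-test traceability mandated here boils down to a coverage check over a verification matrix. A minimal sketch, with invented requirement and test identifiers (not ADAPS data):

    # Sketch: a test verification matrix mapping acceptance tests to the
    # requirements they verify; untraced requirements are flagged.
    requirements = {"REQ-001", "REQ-002", "REQ-003"}
    test_matrix = {
        "TEST-A": {"REQ-001"},
        "TEST-B": {"REQ-001", "REQ-003"},
    }

    covered = set().union(*test_matrix.values())
    untraced = sorted(requirements - covered)
    if untraced:
        print("Requirements with no covering test:", untraced)
        # -> Requirements with no covering test: ['REQ-002']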
6

Seibel, Andreas. "Traceability and model management with executable and dynamic hierarchical megamodels." PhD thesis, Universität Potsdam, 2012. http://opus.kobv.de/ubp/volltexte/2013/6422/.

Abstract:
Nowadays, model-driven engineering (MDE) promises to ease software development by decreasing the inherent complexity of classical software development. In order to deliver on this promise, MDE increases the level of abstraction and automation through domain-specific models (DSMs) and model operations (e.g. model transformations or code generations). DSMs conform to domain-specific modeling languages (DSMLs), which increase the level of abstraction, and model operations are first-class entities of software development because they increase the level of automation. Nevertheless, MDE has to deal with at least two new dimensions of complexity, caused essentially by the increased linguistic and technological heterogeneity. The first dimension of complexity is setting up an MDE environment, an activity comprising the implementation or selection of DSMLs and model operations. Setting up an MDE environment is both time-consuming and error-prone because of the implementation or adaptation of model operations. The second dimension of complexity concerns applying MDE to actual software development. Applying MDE is challenging because a collection of DSMs, which conform to potentially heterogeneous DSMLs, is required to completely specify a complex software system. A single DSML can only describe a specific aspect of a software system at a certain level of abstraction and from a certain perspective. Additionally, DSMs are usually not independent but have inherent interdependencies, reflecting (partially) similar aspects of a software system at different levels of abstraction or from different perspectives. A subset of these dependencies are applications of various model operations, which are necessary to keep the degree of automation high. This becomes even harder when the first dimension of complexity is taken into account: due to continuous changes, all kinds of dependencies, including the applications of model operations, must also be managed continuously. This comprises maintaining the existence of these dependencies and the appropriate (re-)application of model operations. The contribution of this thesis is an approach that combines traceability and model management to address the aforementioned challenges of configuring and applying MDE for software development. The approach is considered a traceability approach because it supports capturing and automatically maintaining dependencies between DSMs. It is considered a model management approach because it supports managing the automated (re-)application of heterogeneous model operations. In addition, it is considered a comprehensive model management approach: since the decomposition of model operations is encouraged to alleviate the first dimension of complexity, the subsequent composition of model operations is required to counteract their fragmentation. A significant portion of this thesis is concerned with providing a method for the specification of decoupled yet highly cohesive complex compositions of heterogeneous model operations. The approach supports two different kinds of compositions: data-flow compositions and context compositions. Data-flow composition is used to define a network of heterogeneous model operations coupled solely by shared input and output DSMs. Context composition is related to a concept used in declarative model transformation approaches to compose individual model transformation rules (units) at any level of detail. In this thesis, context composition provides the ability to use a collection of dependencies as context for the composition of other dependencies, including model operations. In addition, the actual implementations of the model operations being composed do not need to address any composition concerns. The approach is realized by means of a formalism called an executable and dynamic hierarchical megamodel, based on the original idea of megamodels. This formalism supports specifying compositions of dependencies (traceability and model operations). On top of this formalism, traceability is realized by means of a localization concept, and model management by means of an execution concept.
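Data-flow composition as described above, operations coupled only through shared input and output DSMs, amounts to executing a dependency network in topological order. The sketch below illustrates that reading with invented operation and model names; it is not the thesis's megamodel formalism:

    # Sketch: model operations coupled only by shared input/output models,
    # (re-)applied in dependency order. All names are invented examples.
    from graphlib import TopologicalSorter

    # operation -> (input models, output models)
    operations = {
        "class2relational": ({"class.dsm"},  {"schema.dsm"}),
        "schema2sql":       ({"schema.dsm"}, {"ddl.sql"}),
        "class2java":       ({"class.dsm"},  {"Model.java"}),
    }

    # Operation b depends on operation a if a produces one of b's inputs.
    graph = {
        b: {a for a, (_, outs_a) in operations.items()
            if a != b and outs_a & operations[b][0]}
        for b in operations
    }

    for op in TopologicalSorter(graph).static_order():
        print("applying", op)   # the re-application point when inputs change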
7

Dobreva, Veneta Mateeva. "Efficient Management of RFID Traceability Data." Doctoral thesis, Technische Universität München, 2013. Supervisor: Alfons Kemper; reviewers: Alfons Kemper and Torsten Grust. http://d-nb.info/1043317163/34.

8

Danko, Charlott. "Traceability of Medical Devices Used During Surgeries: A Study of the Current Traceability System at the Karolinska University Hospital in Solna and Research of Improvement." Thesis, KTH, Medicinteknik och hälsosystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279135.

Abstract:
The development of technology over the last few decades has greatly affected healthcare. The implementation of technology in healthcare has advanced and improved it immensely, but it has also brought a new level of complexity. One of the modern issues introduced to healthcare is the traceability of medical devices. The main reason why traceability is becoming a more important matter in healthcare is patient safety. Patient safety is one of the greatest priorities in healthcare but is constantly challenged by new innovations. Enabling traceability of medical devices is part of the process of ensuring patient safety. The aim of this master thesis project was to research how medical devices used in surgeries are traced and how the routine can be improved. The idea of this thesis was based on the application of two new regulations, Regulation (EU) 2017/745 and Regulation (EU) 2017/746, both with the purpose of improving traceability. Qualitative methods such as observations, surveys, and interviews were used for this project. To gain multiple perspectives on the issue, different target groups were defined for the collection of data. The qualitative data were then analysed and conclusions drawn from them. The results of this project showed that the current traceability routine is lacking and that there is considerable potential for improvement. The computer systems that manage information regarding medical devices can enable proper traceability if combined with other systems. Improvements to features in the systems are suggested, as well as an integrated system that combines the functionality of other software. Some of the project's challenges are discussed and suggestions for how to develop the research further are presented.
9

Pister, Alexis. "Visual Analytics for Historical Social Networks: Traceability, Exploration, and Analysis." Electronic thesis or dissertation, Université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG081.

Abstract:
This thesis aims to identify, theoretically and concretely, how visual analytics can support historians in their social network analysis process. Historical social network analysis is a method to study social relationships between groups of actors (families, institutions, companies, etc.) through a reconstruction of relationships of the past from historical documents, such as marriage acts, migration forms, birth certificates, and censuses. The use of visualization and analytical methods lets social historians explore and describe the social structure shaping those groups while explaining sociological phenomena and individual behaviors through computed network measures. However, the inspection and encoding of the sources leading to a finalized network is intricate and often results in inconsistencies, errors, distortions, and traceability problems, and current visualization tools typically have usability and interpretability issues. For these reasons, social historians are not always able to draw thorough historical conclusions: many studies consist of qualitative descriptions of network drawings highlighting the presence of motifs such as cliques, components, and bridges. The goal of this thesis is therefore to propose visual analytics tools integrated into social historians' global workflow, with guided and easy-to-use analysis capabilities. From collaborations with historians, I formalize the workflow of historical network analysis, from the acquisition of sources to the final visual analysis. By highlighting recurring pitfalls, I point out that tools supporting this process should satisfy traceability, simplicity, and document-reality principles: they should ease back-and-forth movement between the different steps, be easy to manipulate, and not distort the content of sources with modifications and simplifications. To satisfy those properties, I propose to model historical sources as bipartite multivariate dynamic social networks with roles, as they provide a good tradeoff between simplicity and expressiveness while modeling the documents explicitly, hence letting users encode, correct, and analyze their data with the same abstraction and tools. I then propose two interactive visual interfaces to manipulate, explore, and analyze this data model, with a focus on usability and interpretability. The first system, ComBiNet, allows an interactive exploration leveraging the structure, time, localization, and attributes of the data model with the help of coordinated views and a visual query system, allowing users to isolate interesting groups and individuals and compare their positions, structures, and properties. It also lets them highlight erroneous and inconsistent annotations directly in the interface. The second system, PK-Clustering, is a concrete proposition to enhance the usability and effectiveness of clustering mechanisms in social network visual analytics systems. It consists of a mixed-initiative clustering interface that lets social scientists create meaningful clusters with the help of their prior knowledge, algorithmic consensus, and interactive exploration of the network. Both systems have been designed with continuous feedback from social historians, and aim to increase the traceability, simplicity, and document reality of visual-analytics-supported historical social network research. 
I conclude with discussions on the potential merging of both tools and, more globally, on research directions towards better integration of visual analytics systems into the whole workflow of social historians. Systems with a focus on those properties (traceability, simplicity, and document reality) can limit the introduction of bias while lowering the barriers to the use of quantitative methods for historians and social scientists, which has long been a controversial topic among practitioners.
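The data model argued for here, documents as first-class nodes linked to the persons they mention, with roles on the edges, can be sketched directly. The node attributes and role labels below are invented examples, not the internal schema of ComBiNet or PK-Clustering:

    # Sketch: a bipartite social network in which the historical document is
    # itself a node, so every social tie stays traceable to its source.
    import networkx as nx

    G = nx.Graph()
    G.add_node("marriage_act_1872", kind="document", year=1872)
    G.add_node("Jean Dupont",   kind="person")
    G.add_node("Marie Bernard", kind="person")

    G.add_edge("Jean Dupont",   "marriage_act_1872", role="groom")
    G.add_edge("Marie Bernard", "marriage_act_1872", role="bride")

    # Traceability: recover who appears in a document and in what role.
    doc = "marriage_act_1872"
    print([(p, G.edges[p, doc]["role"]) for p in G.neighbors(doc)])
    # -> [('Jean Dupont', 'groom'), ('Marie Bernard', 'bride')]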
10

Auwal, Bilyaminu Romo. "Improving the quality of bug data in software repositories." Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/13655.

Abstract:
Context: Researchers have increasingly recognised the benefit of mining software repositories to extract information. Integrating a version control tool (VC tool) and a bug tracking tool (BT tool) when mining software repositories, and synchronising missing bug tracking data (BT data) and version control logs (VC logs), is therefore of paramount importance for improving the quality of bug data in software repositories. In this way, researchers can do good-quality research that benefits software projects, especially open source software projects, where information is limited by distributed development and shared data for tracking project issues are not common. BT data often appear not to be mirrored in what developers logged as their actions, resulting in reduced traceability of defects in the development logs (VC logs). Version control (VC) data can be enhanced with data from the bug tracking system (BT system), because VC logs report past software development activities. When these VC logs and BT data are used together, researchers can have a more complete picture of a bug's life cycle, evolution and maintenance. However, current BT and VC systems provide insufficient support for cross-analysis of VC logs and BT data for researchers in empirical software engineering research: prediction of software faults, software reliability, traceability, software quality, effort and cost estimation, bug prediction, and bug fixing. Aims and objectives: The aim of the thesis is to design and implement a tool chain to support the integration of a VC tool and a BT tool, and to synchronise the missing VC logs and BT data of open-source software projects automatically. The syncing process, using Bicho (BT tool) and CVSAnalY (VC tool), is demonstrated and evaluated on a sample of 344 open source software (OSS) projects. Method: The tool chain was implemented and its performance evaluated semi-automatically. The SZZ algorithm approach was used to detect and trace BT data and VC logs. In its formulation, the algorithm looks for the terms "bugs" or "fixed" (case-insensitive) along with the '#' sign, which marks the ID of a bug in the VC system and BT system respectively. In addition, the SZZ algorithm was dissected in its formulation, and precision and recall were analysed for the use of "fix", "bug" or "# + digit" (e.g., #1234) when tracking possible bug IDs in the VC logs of the sample OSS projects. Results: The results of this analysis indicate that the use of "# + digit" (e.g., #1234) is more precise for bug traceability than the use of the "bug" and "fix" keywords. Such keywords are indeed present in the VC logs, but they are less useful when trying to connect the development actions with the bug traces – that is, their recall is high. Overall, the results indicate that VC logs and BT data retrieved and stored by automatic tools can be tracked and recovered with better accuracy using only a part of the SZZ algorithm. In addition, the results indicate that 80-95% of all the missing BT data and VC logs for the 344 OSS projects were synchronised into the Bicho and CVSAnalY databases respectively. Conclusion: The presented tool chain eliminates and avoids repetitive activities in traceability tasks, as well as in software maintenance and evolution. 
This thesis provides a solution towards the automation and traceability of BT data of software projects (in particular, OSS projects), using VC logs to complement and track missing bug data. Synchronising involves completing the missing data of bug repositories with the logs detailing the actions of developers. Synchronising benefits various branches of empirical software engineering research: prediction of software faults, software reliability, traceability, software quality, effort and cost estimation, bug prediction, and bug fixing.
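The part of the SZZ heuristic evaluated above, scanning VC log messages for "bug"/"fix" keywords versus "#" followed by digits, reduces to two regular expressions. A minimal sketch; the log message is invented:

    # Sketch: the keyword and bug-ID scans compared in the thesis.
    # "# + digit" yields the precise signal; bare keywords recall more noise.
    import re

    ID_PATTERN      = re.compile(r"#(\d+)")
    KEYWORD_PATTERN = re.compile(r"\b(?:fix(?:ed)?|bugs?)\b", re.IGNORECASE)

    def bug_ids(message: str):
        return ID_PATTERN.findall(message)

    def keyword_hits(message: str):
        return KEYWORD_PATTERN.findall(message)

    log = "Fixed NPE in parser, closes #1234"
    print(bug_ids(log))        # ['1234']  -- more precise for traceability
    print(keyword_hits(log))   # ['Fixed'] -- present, but noisier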
11

Öberg, Lena-Maria. "Traceable Information Systems: Factors That Improve Traceability Between Information and Processes Over Time." Licentiate thesis, Mid Sweden University, Department of Information Technology and Media, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-515.

Abstract:

Preservation of information is not a new issue, but preservation of digital information has a relatively short history. Since the 1960s, when computers began to be used within administration, digital information has had to be preserved over time. The problem addressed in this research is how to preserve understandable information over time. Information is context dependent, which means that without context it is not possible to use the information. Process is one part of that context, and an important issue when preserving information is therefore being able to trace an information object to the process wherein it was created and managed. Associating information with a particular process creates the possibility of relating information objects to each other and to the context in which the information was created and used. The aim of this thesis is to identify and structure factors that can improve the traceability between information and processes over time. A set of factors, based on case studies and a set of analytical methods, is presented that can improve traceability over time. These factors have been identified and structured using the Synergy-4 model. They fall within four different spheres: competence, management, organization/procedure and technology. The factors have further been structured into three time states: creation time, short and middle term, and long term. The research concludes that many factors influence the ability to preserve information. Preservation issues include selection of metadata standards, organizational culture, lack of understanding from management and formalization of documents. The conclusion is that if an organization wants to succeed in preserving traceable information, it has to build strategies that cover the issues from a range of different angles. This thesis suggests that the crucial angles are competence, management, organization/procedure and technology. Furthermore, the strategies must be in place at the time of creation of the information objects.

12

Ishikawa, Yoshiharu, and Fengrong Li. "Query Processing in a Traceable P2P Record Exchange Framework." Institute of Electronics, Information and Communication Engineers, 2010. http://hdl.handle.net/2237/14955.

13

Off, Thomas. "Durchgängige Verfolgbarkeit im Vorfeld der Softwareentwicklung von E-Government-Anwendungen: ein ontologiebasierter und modellgetriebener Ansatz am Beispiel von Bürgerdiensten." PhD thesis, Universität Potsdam, 2011. http://opus.kobv.de/ubp/volltexte/2012/5747/.

Abstract:
Public administration has been using electronic government (e-government) application systems for several years to support its processes with modern information and communication technology more intensively than ever before. This increases and broadens the relationship between the law and legislation executed by the administration on the one hand, and the requirements of the e-government application systems used to support administrative execution on the other. This relationship is the subject matter of pre-requirements specification traceability (pre-RS traceability). This work introduces an approach to pre-RS traceability for e-government applications. It combines research efforts and standards (i.e. of the World Wide Web Consortium and the Object Management Group) from different fields: traceability, the Semantic Web, ontology engineering and model-driven software engineering. Using this approach it is possible to attach semantics to elements of law and legislation texts using annotations. The annotation semantics is based on an ontology of public administration execution developed especially for this approach. A mapping from annotated text elements, as a special kind of ontology individuals, to elements of the Unified Modeling Language (UML) is created using the Ontology Definition Metamodel (ODM). This mapping results in a new model type referred to as the Pre-Requirements Model (PRM). This model uses elements that exist before requirements are explicitly documented in a requirements specification. Therefore it can primarily be used to formalize elements and their relationships in the pre-requirements scope. Through the mapping rules of ODM it keeps a traceable relationship from each model element to its corresponding annotated text elements. The PRM can also be used to model and refine elements that are not, or not completely, derived directly from the text of law and legislation. This work argues that Model Driven Architecture (MDA) might profit from extending the existing model types Computation Independent Model (CIM), Platform Independent Model (PIM) and Platform Specific Model (PSM) with a PRM. This extension leads to an architecture that starts with a pre-requirements viewpoint before any requirements are formalized and documented in models of type CIM. It also offers the opportunity to use model transformation to create an initial CIM from the PRM by applying the MOF Query View Transformation standard (QVT). Using QVT ensures the traceability of the model transformation, because the standard enforces the recording of traceability information. A transformation from PRM to CIM creates an initial requirements specification that can be refined using common techniques, methods and tools. To bridge the semantic gap between PRM and CIM, the approach follows the pattern of the PIM to PSM transformation, which uses the Platform Model (PM). Analogously, the PRM to CIM transformation uses special reference models for e-government developed in the project "E-LoGo" at the University of Potsdam. By recording traces of the mapping of annotations to elements in the PRM, and transforming elements of the PRM to elements in the CIM using reference models, continuous pre-RS traceability can be achieved. The approach uses simple Extensible Stylesheet Language Transformations (XSLT) to create a hypertext documentation that links all relevant elements. Navigating along these links makes it possible, for example, to start with an annotated element of a law text and follow it to all resulting requirements in a CIM. 
In the opposite direction, it is possible to see for each requirement from which text element of a law it is derived, or that it has no relation to the law at all. By integrating the graphical representation of a model element, this navigation can even start directly in a UML diagram. This illustrates that the approach offers vertical and horizontal traceability in forward and backward directions. Besides the obvious use cases that continuous pre-requirements specification traceability offers in general (i.e. impact analysis on changes of law and legislation, considering the full context of a requirement when prioritizing it), it also offers the chance to create feedback on the consequences of a change in law for existing e-government systems. As long as alternatives and the necessary scope in the legislative process are still open, such feedback can be used to choose an alternative with less effort or faster implementation. For German federal law it has been obligatory since 2011 to make a similar estimation, referred to as compliance cost ("Erfüllungsaufwand"). This work contributes to the first step of making a solid estimation of this kind of effort using pre-RS traceability.
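The backward navigation described above, from a requirement in the CIM to the annotated passage of law it derives from, presupposes that each step records its trace links. A schematic sketch follows; the identifiers are invented, and the thesis records these traces via ODM mappings and QVT rather than plain dictionaries:

    # Sketch: recorded trace links from annotated law text to PRM elements
    # and on to CIM elements, navigable in both directions. Names invented.
    annotation_to_prm = {"law §5(2), sentence 1": "PRM:ReportChangeOfAddress"}
    prm_to_cim        = {"PRM:ReportChangeOfAddress": "CIM:UC_ChangeAddress"}

    def sources_of(cim_element: str):
        """Backward traceability: which law passages ground this element?"""
        prms = {p for p, c in prm_to_cim.items() if c == cim_element}
        return [a for a, p in annotation_to_prm.items() if p in prms]

    print(sources_of("CIM:UC_ChangeAddress"))
    # -> ['law §5(2), sentence 1']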
14

Ramadan, Qusai. "Data Protection Assurance by Design: Support for Conflict Detection, Requirements Traceability and Fairness Analysis." Doctoral thesis, Koblenz, 2020. Supervisor: Jan Jürjens; reviewers: Jan Jürjens and Andreas Mauthe. http://d-nb.info/1212855779/34.

15

Kransell, Martin. "The Value of Data Regarding Traceable Attributes in a New Era of Agriculture: Bridging the Information Gap Between Consumers and Producers of Organic Meat." Thesis, Linnéuniversitetet, Institutionen för informatik (IK), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-35089.

Abstract:
Purpose – This study aims to explore, and suggest solutions to, the gap between the supply of information from organic meat producers and the demand for information from consumers regarding traceable characteristics (attributes) of meat in a limited geographical area, in order to maximize the utilization and value of collected data.
Design/methodology/approach – A mixed methods research design is applied to collect both quantitative data from consumers and qualitative data from suppliers to produce empirical results on the supply and demand of information. A theoretical framework of organic food purchase intent is used for the quantitative study, as well as the correlation between consumers' perceived importance of attributes and their willingness-to-pay for meat. The results of the empirical studies are compared to each other in an effort to expose a possible gap using a gap analysis.
Findings – Meat is shifting from a price-based commodity to a product based on characteristics. This study reveals that there is now a gap between the information made available by organic meat producers and the demand for information from consumers that needs to be recognized in order to maximize the value of collected data. Information regarding the environmental impact of raising and transporting the animals is not extensively collected. A substantial amount of data about attributes of perceived importance, such as safety and handling, animal welfare and medication or other treatments, is collected but not extensively shared with consumers.
Research limitations/implications – The small sample size in a unique area and the scope of the survey data do not provide a result that can be truly generalized. It is therefore suggested that future studies produce results from a larger sample that incorporates the perceived accessibility of important information for consumers.
Practical implications – This study contributes to the emerging literature on organic food production by comparing both the supply and the demand of information regarding attributes of meat. This information is valuable to organic meat producers and marketers as well as developers of agricultural systems and databases, which should shift their focus to consumer-oriented traceability systems.
Originality/value – This study goes beyond the substantial body of literature regarding attributes of organic food and consumers' preferences by comparing these factors to the available supply of information from meat producers and by suggesting solutions to bridge the gap between them.
Keywords – Organic meat, Organic agriculture, e-Agriculture, Traceability, Traceability systems, Consumer oriented, Consumer behavior, Willingness-to-pay, Supply and demand, Information gap, Gap analysis, Business development, United States of America, Sense-making theory, Mixed methods
Paper type – Research paper, Bachelor's thesis
16

Grimstad Bang, Tove, and Axel Johansson. "Responsible Sourcing via Blockchain in Mineral Supply Chains." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254186.

Abstract:
Manufacturers and suppliers in the tech industry, trading in and utilizing minerals, are often unable to conduct substantial supply chain due diligence, for reasons such as lack of competence, the scattered spread of information, and the fluid nature of their supply chains. Declaring whether a product has been responsibly sourced, or whether it contains conflict minerals, is almost impossible. This study is an exploration of the potential role of blockchain in mineral supply chain management, as a supplementary tool for carrying out due diligence. Well-performed supply chain due diligence should demand continuous status records of various measures of social sustainability, identifying impacts on human well-being. So, how may a blockchain solution for traceability in a mineral supply chain contribute towards ensuring responsible sourcing? Blockchain provides traceability of transactions through its immutable chain structure, and knowing an asset's origin is vital in order to carry out supply chain due diligence. While the blockchain network has the potential to provide information on the digitally registered flow of an asset, the validity of the information about the physical and social qualities of the asset remains dependent on the actor adding it to the blockchain. This leads to an inherent problem regarding the interface between the digital and the physical world when applying blockchain to supply chains. Through a background study and interviews with researchers and professionals, this study proposes a set of requirements to take into account when addressing responsible sourcing via a blockchain solution. The study finds that a blockchain alone cannot ensure responsible sourcing, and it further provides insight into the challenges and opportunities present in the industry and discusses the suitability of potential solutions.
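The immutability invoked here comes from each record committing to the hash of its predecessor, and the abstract's caveat can be made concrete in the same sketch: the chain detects digital tampering but cannot vouch for the truth of the physical claims written into it. Field names are invented:

    # Sketch: hash-chained supply-chain records. Rewriting an earlier record
    # invalidates its stored hash, but a false claim entered at the start
    # (the digital/physical gap) is indistinguishable from a true one.
    import hashlib, json

    def make_block(prev_hash: str, payload: dict) -> dict:
        body = {"prev": prev_hash, "payload": payload}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        return body

    genesis = make_block("0" * 64, {"mineral": "tantalum", "origin": "mine A"})
    shipped = make_block(genesis["hash"], {"event": "shipped", "to": "smelter B"})

    def chain_valid(chain) -> bool:
        return all(cur["prev"] == prev["hash"]
                   for prev, cur in zip(chain, chain[1:]))

    print(chain_valid([genesis, shipped]))        # True
    genesis["payload"]["origin"] = "mine X"       # tamper with history...
    rehashed = make_block("0" * 64, genesis["payload"])
    print(rehashed["hash"] == genesis["hash"])    # False: tampering detected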
17

Mammadova, Aynur. "Deforestation risk in bovine leather supply chain. Risk assessment through conceptualization, discourse and trade data analysis within the context of Italian-Brazilian leather trade." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3424866.

Abstract:
Large-scale industrial agricultural production and commodity trade are increasingly linked to deforestation and forest degradation in the tropics. This link is described via the concept of ‘deforestation risk’. Agricultural products whose production or extraction involves deforestation and native vegetation clearing are classified as forest-risk commodities. Beef, soybean, palm oil, and timber - the commodities with deforestation risk - are considered the “big four” of forest-risk commodities. Due to the complexity of global production and trade systems there are commodities that possess the risk of originating from deforested areas without being direct deforestation/forest degradation drivers. This dimension of the risk is either overlooked or held as secondary in the debates about commodity-driven deforestation. Differentiation between commodities with direct causal links and those with the exposure to deforestation in their supply chain has impact on how responsibility and accountability is constructed both through legal measures and self-regulatory voluntary standards. Better conceptualization is needed to approximate the usage of the terms both in grey and academic literature and to achieve science backed policy decisions. By referring to the case of bovine leather (hereinafter just leather) and the case of Brazilian leather production we aim to expand the conceptualization of deforestation risk. We focus on leather for multiple reasons. First, while the role of cattle in driving deforestation in Brazil is subject to increasing public scrutiny, the leather commodity chain largely remains in the shadow. Except for a few leading firms in leather goods, public discussion about transparency across the leather supply chain and associated deforestation risk is mostly absent. Second, leather supply chains are more complex compared to beef and involve many national and international players, including intermediary sellers, tanneries, fashion houses, etc. This creates traceability gaps and complicates identifying deforestation risk along the chain. Third, leather is a commodity with inherently uneven power relations among the actors in the supply chain and with costs and benefits unevenly distributed across the chain. Often considered a waste or by-product to beef meat, actors in the leather supply chain argue to lack important negotiation power to impose their standards and no deforestation conditions upon producers. At the same time, downstream actors of leather supply chain, such as fashion brands, are more susceptible to reputational risks compared to that of beef. While upstream farmers lack resources to adhere to sustainability standards and hardly get any financial compensation for the skin of their cattle, finished leather products are often regarded as luxury products presenting very high price margins for producing/trading brands. This research employs both primary and secondary data. Primary data is mostly qualitative and entails thirty-nine semi-structured, recorded, and transcribed interviews, in the form of both face-to-face and video call interviews conducted during extended field visit to Brazil in May-August 2018. This data is mainly used for the discourse analysis in the second chapter and for interpretative and contextual purposes to analyse the secondary quantitative data in the other chapters. 
Secondary information consists of extensive literature review, statistical data on annual slaughter, bovine hide/leather registry and annual deforestation, geospatial data on deforestation, slaughterhouse and tannery locations, as well as, trade statistics on Brazilian-Italian leather trade. No specific time frame was chosen to analyse the data and time series for each data set were selected according to availability and the specific requirements of each type of analysis. The results show that bovine leather supply chains possess significant risk of embedded deforestation despite leather not being a primary product of cattle ranching and driver of deforestation. The risk reveals itself in the link with cattle ranching, incomplete supply chain traceability, as well as in interstate and international leather trade. The Brazilian-Italian bovine leather has significant level of embedded deforestation due to intensive trade relations. Different discourses articulate deforestation risk of bovine leather differently and highlight how the storylines of each discourse bring attention both to what is made visible and invisible in relation to sustainability, legitimacy, and fairness. The results emphasise the importance of the role and voice of frontier settlers, by presenting how their storylines inform a political discourse on livelihoods. There is a need for increased public scrutiny of supply chains, including the leather one, and for special attention to unequal power relations and the importance of meaningful inclusion of vulnerable groups and populations. The leather industry and big brands need to be more proactive by sending clear market signals that deforestation and other illegalities are not tolerated. Full coverage and traceability of the supply chain and engagement with the producers is necessary if the industry wants to produce and trade deforestation-free products.
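To make the notion of 'embedded deforestation' concrete, here is a minimal sketch of one possible proportional-attribution rule: cattle-linked deforestation in a sourcing region is spread evenly over all hides produced there, and the share embedded in a trade flow scales with the number of hides traded. All figures and names are invented for illustration; the thesis's actual allocation method is not specified here.

```java
// Hypothetical proportional attribution of deforestation risk to traded hides.
public final class EmbeddedDeforestationSketch {
    public static void main(String[] args) {
        // Illustrative (invented) figures, not results from the thesis.
        double stateCattleDeforestationHa = 120_000; // annual cattle-linked deforestation in a sourcing state
        double hidesProducedInState = 4_000_000;     // hides produced there in the same year
        double hidesExportedToItaly = 600_000;       // of which exported to Italian tanneries

        // Risk per hide: spread the cattle-linked deforestation evenly over all hides.
        double haPerHide = stateCattleDeforestationHa / hidesProducedInState;

        // Deforestation embedded in the Brazil->Italy flow under this naive allocation.
        double embeddedHa = haPerHide * hidesExportedToItaly;

        System.out.printf("Embedded deforestation: %.0f ha (%.4f ha/hide)%n", embeddedHa, haPerHide);
    }
}
```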
19

Vaz, Monica Cristine Scherer. "Especificação de um framework para rastreabilidade da cadeia produtiva de grãos." UNIVERSIDADE ESTADUAL DE PONTA GROSSA, 2014. http://tede2.uepg.br/jspui/handle/prefix/171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Traceability allows the origin and route of a product to be identified at any point in its production chain, a fundamental requirement in standardization processes, certifications, and quality management systems. For food products, where contamination incidents involve health risks, traceability allows affected lots to be detected and withdrawn from the market quickly and safely, minimizing losses. Beyond the legal requirements surrounding traceability, final consumers increasingly want access to information about the food they are eating, motivating the development of technological solutions in this area. The goal of this dissertation is to present the specification of the RastroGrão framework for traceability of the grain production process. The framework is based on the quality regulations and standards applied to traceability and on existing grain traceability systems, and it records the events inherent to the production processes. These records can be changed according to the needs of each agent participating in the production chain: the data to be traced are defined by the users themselves, so the system does not require maintenance every time a new requirement appears. The framework was developed in Java for the web environment with a PostgreSQL database. The main contributions of this dissertation relate to the benefits offered by the framework: i) visibility of information, which can be accessed over the internet by all agents in the chain; ii) integration of all links of the production chain; and iii) availability of information to the final consumer through a QR code, which can be accessed on the internet or printed on the packaging.
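The abstract's central design point (trace attributes are declared by users at runtime, so new requirements need no code change) is commonly realised with an entity-attribute-value layout. Below is a minimal, hypothetical sketch in the Java/PostgreSQL setting the dissertation mentions; table and field names are illustrative, not taken from RastroGrão itself.

```java
import java.time.Instant;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of schema-free trace records: each chain agent registers whatever
// attributes it needs, so a new requirement does not force a code change.
public final class TraceEventSketch {
    // One possible relational layout (entity-attribute-value style), assumed here:
    static final String DDL = """
        CREATE TABLE trace_event (
          event_id    BIGSERIAL PRIMARY KEY,
          lot_code    TEXT NOT NULL,       -- grain lot being traced
          agent       TEXT NOT NULL,       -- producer, carrier, storage unit...
          recorded_at TIMESTAMPTZ NOT NULL
        );
        CREATE TABLE trace_attribute (
          event_id  BIGINT REFERENCES trace_event(event_id),
          name      TEXT NOT NULL,         -- user-defined, e.g. 'moisture_pct'
          value     TEXT NOT NULL
        );
        """;

    final String lotCode;
    final String agent;
    final Instant recordedAt = Instant.now();
    final Map<String, String> attributes = new LinkedHashMap<>();

    TraceEventSketch(String lotCode, String agent) {
        this.lotCode = lotCode;
        this.agent = agent;
    }

    public static void main(String[] args) {
        TraceEventSketch event = new TraceEventSketch("LOT-2014-0042", "farm-07");
        // Attributes are declared by the user at runtime, not hard-coded.
        event.attributes.put("moisture_pct", "12.5");
        event.attributes.put("pesticide", "none");
        System.out.println(event.lotCode + " @ " + event.agent + " -> " + event.attributes);
    }
}
```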
20

Azzi, Rita. "Blockchain Adoption in Healthcare : Toward a Patient Centric Ecosystem." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The healthcare sector evolves constantly, driven by technological advancement and innovative solutions. From remote patient monitoring to the Internet of Things (IoT), Artificial Intelligence (AI), personalized medicine, mobile health, and electronic record systems, technology has improved patient outcomes and enhanced care delivery. These technologies have shifted the healthcare ecosystem to be more patient-centered, focusing on meeting the patient's needs rather than the needs of the individual organizations within it. However, this transformative shift is associated with multiple challenges due to the inherent complexity and fragmentation of the healthcare ecosystem. This dissertation addresses three healthcare ecosystem challenges that significantly impact patients. The first is the problem of counterfeit or falsified drugs, which represent a threat to public health and result from vulnerabilities in the pharmaceutical supply chain, notably centralized data management and a lack of transparency. The second is the problem of healthcare data fragmentation, which thwarts care coordination and impacts clinical efficiency. This problem results from patients' dynamic and complex journeys through the healthcare system, shaped by their unique health needs and preferences: patient data are scattered across multiple healthcare organizations in centralized databases and are governed by policies that hinder data sharing and patients' control over their own data. The third is the confidentiality and privacy of healthcare data, which, if compromised, shatter the trust relationship between patients and healthcare stakeholders; this challenge results from poor data governance in healthcare organizations, which increases the risk of data breaches and unauthorized access to patient information. The blockchain has emerged as a promising solution to these critical challenges. It was introduced into the healthcare ecosystem with the promise of enforcing transparency, authentication, security, and trustworthiness. Through comprehensive analysis and case studies, this dissertation assesses the opportunities and addresses the challenges of adopting the blockchain in the healthcare industry. We start with a thorough review of the state of the art covering the blockchain's role in improving supply chain management and enhancing the healthcare delivery chain. Second, we combine theoretical and real-world application studies to develop a guideline that outlines the requirements for building a blockchain-based supply chain. Third, we propose a patient-centric framework that combines blockchain technology with Semantic Web technologies to help patients manage their health data. Our fourth contribution presents a novel approach to data governance: a blockchain-based framework that improves data security and empowers patients to participate actively in their healthcare decisions. In this final contribution, we widen the scope of the proposed framework to include a roadmap for its adoption across diverse domains (banking, education, transportation, logistics, etc.).
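One recurring pattern behind patient-centric blockchain designs like the one described here is to keep bulky health records off-chain and anchor only their hashes, plus the patient's consent decisions, on a shared ledger. The sketch below illustrates that general idea only; it uses a plain in-memory list as a stand-in ledger, and all identifiers are invented rather than drawn from the dissertation.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

// Health records stay off-chain; the ledger holds only hashes and consent.
public final class ConsentLedgerSketch {
    record Entry(String patientId, String recordHash, String grantee, boolean granted) {}

    static final List<Entry> LEDGER = new ArrayList<>(); // stand-in for a blockchain

    static String sha256(String data) throws Exception {
        byte[] h = MessageDigest.getInstance("SHA-256").digest(data.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : h) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String record = "2023-11-02: blood panel results ..."; // stays off-chain
        String hash = sha256(record);
        LEDGER.add(new Entry("patient-42", hash, "clinic-A", true));   // consent granted
        LEDGER.add(new Entry("patient-42", hash, "insurer-B", false)); // consent denied
        // Anyone holding the record can later prove it is the consented version:
        System.out.println("integrity ok: " + LEDGER.get(0).recordHash().equals(sha256(record)));
    }
}
```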
21

Toussaint, Marion. "Une contribution à l'industrie 4.0 : un cadre pour sécuriser l'échange de données standardisées." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The recent digital transformation of the manufacturing world has resulted in numerous benefits, from higher-quality products to enhanced productivity and shorter time to market. In this digital world, data has become a critical element in many critical decisions and processes within and across organizations, and data exchange is now a key process for organizations' communication, collaboration, and efficiency. Industry 4.0's adoption of modern communication technologies has made this data available and shareable at a quicker rate than we can consume or track it. This speed brings significant challenges, such as data interoperability and data traceability, two interdependent challenges that manufacturers face and must understand in order to adopt the best position to address them. On the one hand, data interoperability challenges delay faster innovation and collaboration. The growing volume of data exchange is associated with an increased number of heterogeneous systems that need to communicate with and understand each other. Information standards are a proven solution, yet their long and complex development process impedes them from keeping up with the fast-paced environment they need to support and provide interoperability for, slowing their adoption. This thesis proposes a transition from predictive to adaptive project management, using Agile methods to shorten development iterations and increase delivery velocity, thereby increasing standards adoption. While adaptive environments have been shown to be a viable solution for aligning standards with the fast pace of industry innovation, most project requirements management solutions have not evolved to accommodate this change; this thesis therefore also introduces a model to support better requirement elicitation during standards development, with increased traceability and visibility. On the other hand, data-driven decisions are exposed to the speed at which tampered data can propagate through organizations and corrupt those decisions. With the mean time to identify (MTTI) and mean time to contain (MTTC) such a threat already close to 300 days, the constant growth of data produced and exchanged will only push the MTTI and MTTC upwards. While digital signatures have already proven their use in identifying such corruption, a formal data traceability framework is still needed to track data exchange across large and complex networks of organizations in order to identify and contain the propagation of corrupted data. This thesis analyses existing cybersecurity frameworks and their limitations, and introduces a new standards-based framework, in the form of an extended NIST CSF profile, to prepare against, mitigate, manage, and track data manipulation attacks. The framework is accompanied by implementation guidance to facilitate its adoption by organizations of all sizes.
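The detection primitive the thesis builds on, digital signatures over exchanged data, can be illustrated with the standard java.security API. The following sketch signs a payload and shows verification failing after tampering; the surrounding NIST CSF profile work (preparation, mitigation, management, tracking) is organisational and is not represented in code. Payload contents are invented.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Signing a data exchange so any later manipulation becomes evident.
public final class SignedExchangeSketch {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair sender = gen.generateKeyPair();

        byte[] payload = "order=1200;part=XJ-7".getBytes(StandardCharsets.UTF_8);

        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(sender.getPrivate());
        signer.update(payload);
        byte[] sig = signer.sign();

        // Receiver verifies before acting on the data; a tampered payload fails.
        byte[] tampered = "order=9200;part=XJ-7".getBytes(StandardCharsets.UTF_8);
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(sender.getPublic());
        verifier.update(tampered);
        System.out.println("tampered payload accepted? " + verifier.verify(sig)); // false
    }
}
```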
22

Jenvald, Mattias, and Mikael Hovmöller. "Reducing Delays for Unplanned Maintenance of Service Parts in MRO Workshops : A case study at an aerospace and defence company." Thesis, Linköpings universitet, Produktionsekonomi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-167203.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Service parts sometimes break down unexpectedly and require maintenance. The irregular nature of the need for this type of maintenance makes forecasting difficult and unreliable. Saab currently experiences problems with long delays when performing unplanned maintenance of service parts used in the two models of the Gripen aircraft, Gripen C and Gripen D. These delays are a source of monetary waste, as late delivery of maintained service parts results in Saab having to pay penalty fines to its customers. The purpose of this master's thesis was to analyze data collected during a case study at Saab in Linköping and to suggest improvements for reducing these delays. The study focused on what caused the delays and on how the information provided by customers can be used by the operative planners at the Maintenance, Repair & Overhaul (MRO) workshops to work more efficiently. The data was collected during the case study through semi-structured interviews with 16 people working with the current system, as well as from historical data in an internal database at Saab. This data was analyzed in parallel with a literature study of relevant research on service parts supply chains, MRO workshops, and unplanned maintenance operations. The analysis showed four types of maintenance interruptions: internal stock-outs of spare parts, internal stock-outs of sub-units, external delays at the Original Equipment Manufacturer (OEM), and internal equipment breakdowns. A root cause analysis found four root causes of the delays: Saab does not have contracts that incentivize its OEMs to deliver on time; the data from the technical report is not used to provide the operative planners with information about incoming orders; the MRO workshops do not have a standardized system for prioritizing maintenance of service parts; and the MRO workshops currently lack a method for predicting certain types of machine breakdowns.
23

Johansson, Hanna. "Interdisciplinary Requirement Engineering for Hardware and Software Development : from a Hardware Development Perspective." Thesis, Linköpings universitet, Industriell miljöteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139097.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Complexity in products is increasing, and still there is a lack of a shared design language in interdisciplinary development projects. The research questions of the thesis concern differences and similarities in requirement handling, and in integration, both current and future. Future integration is given more focus, with a pair of research questions highlighting obstacles and enablers for increased integration. Interviews were performed at four different companies with complex development environments whose products originated from different fields: hardware, software, and service. The main conclusions of the thesis are: time frames in different development processes are very different and hard to unite; internal standards exist for overall processes, documentation, and modification handling; traceability is poorly covered in theory while being a big issue in companies; and companies understand that balancing and compromising of requirements is critical for a successful final product. The view on future increased interdisciplinary development is that there are more obstacles to overcome than enablers supporting it; dependency is seen as an obstacle in this regard, and certain companies strive to decrease it. The thesis has resulted in general conclusions, and further studies are suggested into more specific areas such as requirement handling tools, requirement types, and traceability.
24

Ekfeldt, Jonas. "Om informationstekniskt bevis." Doctoral thesis, Stockholms universitet, Juridiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-125286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Information technology evidence consists of a mix of representations of various applications of digital electronic equipment, and can be brought to the fore in all contexts that result in legal decisions. The occurrence of such evidence in legal proceedings, and in other legal decision-making, is a phenomenon previously not researched within legal science in Sweden. The thesis examines some of the consequences resulting from the occurrence of information technology evidence within Swedish practical legal and judicial decision-making. The thesis has three main focal points. The first consists of a broad identification of the legal problems that information technology evidence entails. The second examines the legal terminology associated with information technology evidence. The third consists of identifying sources of error pertaining to information technology evidence from the adjudicator's point of view. The examination adopts a Swedish legal viewpoint from the perspective of public trust in the courts. Conclusions include a number of legal problems in several areas, primarily in regard to the knowledge of the adjudicator, the qualification of different means of evidence, and the consequences of representational evidence for its evaluation. In order to properly evaluate information technology evidence, judges are, to a greater extent than for other types of evidence, in need of (objective) knowledge supplementary to that provided by parties and their witnesses and experts. Furthermore, the current Swedish evidence terminology has been identified as a complex of problems in and of itself; the thesis includes suggestions for certain additions to this terminology. Several sources of error have been identified as attributable to different procedures associated with the handling of information technology evidence, in particular in relation to computer forensic investigations. There is a general need for future research focused on both standards of proof for and the evaluation of information technology evidence. In addition, a need has been identified for deeper legal scientific studies aimed at evidence theory, inter alia regarding the extent to which frequency theories are applicable to information technology evidence. Related further discussion of emerging areas such as negative evidence and predictive evidence is also foreseen.
25

LAN, YUN-CHI (蘭韻綺). "Traceability System and Open Data in Taiwan." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/01432011630577052359.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

"Extended food supply chain traceability with multiple automatic identification and data collection technologies." 2008. http://library.cuhk.edu.hk/record=b5893508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Hu, Yong.
Thesis submitted in: October 2007.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (p. 127-129).
Abstracts in English and Chinese.
Table of contents:
Chapter 1. Introduction (p.1)
1.1 Background and Motivation (p.1)
1.2 Objectives of the Thesis (p.3)
1.3 Scope of the Thesis (p.6)
1.4 Structure of the Thesis (p.6)
Chapter 2. Review of Related Technologies (p.8)
2.1 Scope and Requirements of the Supply Chain Traceability (p.9)
2.2 Automatic Identification and Data Collection Technologies (p.14)
2.2.1 Introduction to the AIDC Technologies (p.14)
2.2.1.1 The Barcode (p.14)
2.2.1.2 The Radio Frequency Identification (RFID) (p.17)
2.2.1.3 The Sensors for Food (p.19)
2.2.1.4 The Global Positioning System (GPS) (p.23)
2.2.2 Frequencies of the RFID Systems (p.25)
2.2.3 Encoding Mechanisms for the RFID Tags and Barcode Labels (p.30)
2.3 Standards and Specifications of the EPCglobal (p.34)
2.3.1 The EPCglobal Architecture Framework (p.34)
2.3.2 The EPCglobal EPCIS Specification (p.39)
2.3.3 The EPCglobal Tag Data Standards (p.42)
2.4 RFID Applications in Food Supply Chain Management (p.43)
2.5 Anti-counterfeit Technologies and Solutions (p.45)
2.6 Data Compression Algorithms (p.47)
2.7 Shelf Life Prediction Models (p.49)
Chapter 3. Architecture and Scope of the Application System (p.54)
3.1 Application System Architecture (p.54)
3.2 Application System Scope (p.55)
Chapter 4. The Tracking and Tracing Management Module (p.60)
4.1 Overview (p.60)
4.2 AIDC Technologies Adopted for the Traceable Items (p.62)
4.3 Mechanism to Achieve the Nested Visibility (p.70)
4.4 Information Integration in the EPCIS (p.75)
4.5 Anti-counterfeit Mechanism (p.82)
Chapter 5. The Storage and Transportation Monitoring Module (p.90)
5.1 Overview (p.90)
5.2 Compression of the Sensor Data (p.93)
5.3 Management of the Sensor Data (p.95)
5.4 Responsive Warning Mechanism (p.102)
Chapter 6. The Sensor Networks Enabled Assessment Module (p.108)
6.1 Overview (p.108)
6.2 Management of the Sensor Network Data (p.110)
6.3 Active Warning Mechanism (p.114)
Chapter 7. Conclusions (p.122)
7.1 Contributions (p.122)
7.2 Future Work (p.124)
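Sections 2.7 and 5.4 of this outline concern shelf-life prediction and responsive warnings driven by sensor data. As a rough illustration of how such a mechanism can work, the sketch below applies a Q10-style rule in which each logged temperature segment consumes a fraction of the product's shelf life; the parameters are invented and the thesis's actual models are not reproduced here.

```java
// Hedged sketch of a Q10-style shelf-life consumption model over a cold-chain
// temperature log, one common way to turn sensor reads into warnings.
public final class ShelfLifeSketch {
    static final double SHELF_LIFE_DAYS_AT_4C = 10.0; // illustrative reference shelf life
    static final double Q10 = 2.5;                    // spoilage speed-up per +10 degC

    // Fraction of total shelf life consumed by spending 'hours' at 'tempC'.
    static double consumedFraction(double tempC, double hours) {
        double shelfLifeDays = SHELF_LIFE_DAYS_AT_4C * Math.pow(Q10, (4.0 - tempC) / 10.0);
        return (hours / 24.0) / shelfLifeDays;
    }

    public static void main(String[] args) {
        double[][] log = { {4, 48}, {12, 6}, {4, 72} }; // {degC, hours} from sensor reads
        double used = 0;
        for (double[] segment : log) {
            used += consumedFraction(segment[0], segment[1]);
            if (used >= 1.0) System.out.println("WARNING: predicted spoilage reached");
        }
        System.out.printf("shelf life consumed: %.0f%%%n", used * 100);
    }
}
```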
27

Lupienski, Jason. "Data analysis capability and traceability strategy throughout a cylinder head seat and valve guide process." 2007. http://proquest.umi.com/pqdweb?did=1320956681&sid=5&Fmt=2&clientId=39334&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.S.)--State University of New York at Buffalo, 2007.
Title from PDF title page (viewed on Nov. 15, 2007). Available through UMI ProQuest Digital Dissertations. Thesis adviser: Wobschall, Darold. Includes bibliographical references.
28

(8635641), Servio Ernesto Palacios Interiano. "Auditable Computations on (Un)Encrypted Graph-Structured Data." Thesis, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Graph-structured data is pervasive. Modeling large-scale network-structured datasets requires graph processing and management systems such as graph databases. Further, the analysis of graph-structured data often necessitates bulk downloads/uploads from/to the cloud or edge nodes. Unfortunately, experience has shown that malicious actors can compromise the confidentiality of highly sensitive data stored in the cloud or on shared nodes, even in encrypted form. For particular use cases (multi-modal knowledge graphs, electronic health records, finance), network-structured datasets can be highly sensitive and require auditability, authentication, integrity protection, and privacy-preserving computation in a controlled and trusted environment; that is, traditional cloud computation is not suitable for these use cases. Similarly, many modern applications utilize a "shared, replicated database" approach to provide accountability and traceability. Those applications often suffer from significant privacy issues because every node in the network can access a copy of the relevant contract code and data in order to guarantee the integrity of transactions and reach consensus, even in the presence of malicious actors.

This dissertation proposes breaking from the traditional cloud computation model and instead shipping certified, pre-approved trusted code closer to the data, to protect the confidentiality of graph-structured data. Further, our technique runs in a controlled environment on a trusted data owner node and provides proof of correct code execution. This computation can be audited in the future and provides the building block for automating a variety of real use cases that require preserving data ownership. This project utilizes trusted execution environments (TEEs) but does not rely solely on the TEE architecture to provide privacy for data and code. We thoughtfully examine the drawbacks of using trusted execution environments in cloud environments. Similarly, we analyze the privacy challenges exposed by the use of blockchain technologies to provide accountability and traceability.

First, we propose AGAPECert, an Auditable, Generalized, Automated, Privacy-Enabling Certification framework capable of performing auditable computation on private graph-structured data and reporting real-time aggregate certification status without disclosing the underlying private graph-structured data. AGAPECert utilizes a novel mix of trusted execution environments, blockchain technologies, and a real-time graph-based API standard to provide automated, oblivious, and auditable certification. This dissertation includes the invention of two core concepts that provide accountability, data provenance, and automation for the certification process: Oblivious Smart Contracts and Private Automated Certifications. Second, we contribute an auditable and integrity-preserving graph processing model called AuditGraph.io, which utilizes a unique block-based layout and a multi-modal knowledge graph, potentially improving the access locality, encryption, and integrity of highly sensitive graph-structured data. Third, we contribute a unique data store and compute engine, TruenoDB, that facilitates the analysis and presentation of graph-structured data and offers better throughput than the state of the art. Finally, this dissertation proposes integrity-preserving streaming frameworks at the edge of the network with personalized graph-based object lookup.
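A building block for the auditability this abstract describes is an append-only, hash-chained log of computations that an auditor can later re-verify end to end. The sketch below shows only that chaining idea; AGAPECert's TEE attestation, blockchain anchoring, and oblivious smart contracts are beyond its scope, and all operation names are illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

// Each record's hash covers the previous record, so history cannot be
// rewritten without breaking the chain.
public final class AuditChainSketch {
    record AuditRecord(String prevHash, String operation, String resultDigest, String hash) {}

    static final List<AuditRecord> CHAIN = new ArrayList<>();

    static String sha256(String s) throws Exception {
        byte[] d = MessageDigest.getInstance("SHA-256").digest(s.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    static void append(String operation, String resultDigest) throws Exception {
        String prev = CHAIN.isEmpty() ? "GENESIS" : CHAIN.get(CHAIN.size() - 1).hash();
        CHAIN.add(new AuditRecord(prev, operation, resultDigest,
                sha256(prev + "|" + operation + "|" + resultDigest)));
    }

    public static void main(String[] args) throws Exception {
        append("pagerank(graph-17)", sha256("ranks..."));
        append("certify(farm-audit-3)", sha256("PASS"));
        // An auditor re-hashes the chain to detect any rewritten record.
        String prev = "GENESIS";
        for (AuditRecord r : CHAIN) {
            boolean ok = r.prevHash().equals(prev)
                    && r.hash().equals(sha256(prev + "|" + r.operation() + "|" + r.resultDigest()));
            System.out.println(r.operation() + " verified: " + ok);
            prev = r.hash();
        }
    }
}
```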
