Dissertations / Theses on the topic 'Model of provenance'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 36 dissertations / theses for your research on the topic 'Model of provenance.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Tang, Yaobin. "Butterfly -- A model of provenance." Worcester, Mass. : Worcester Polytechnic Institute, 2009. http://www.wpi.edu/Pubs/ETD/Available/etd-031309-095511/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Thakur, Amritanshu. "Semantic construction with provenance for model configurations in scientific workflows." Master's thesis, Mississippi State : Mississippi State University, 2008. http://library.msstate.edu/etd/show.asp?etd=etd-07312008-092758.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Liu, Jun. "W7 MODEL OF PROVENANCE AND ITS USE IN THE CONTEXT OF WIKIPEDIA." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/145314.

Full text
Abstract:
Data provenance refers to the lineage or pedigree of data, including information such as its origin and key events that affect it over the course of its lifecycle. In recent years, provenance has become increasingly important as more and more people are using data that they themselves did not generate. Tracking data provenance helps ensure that data provided by many different providers and sources can be trusted and used appropriately. Data provenance also has several other critical uses, including data quality assessment, generating data replication recipes, data security management, etc. One of the major objectives of our research is to investigate the semantics or meaning of data provenance. We describe a generic ontology of data provenance called the W7 model that represents the semantics of data provenance. Formalized in the conceptual graph formalism, the W7 model represents provenance as a combination of seven interconnected elements including "what," "when," "where," "how," "who," "which" and "why." The W7 model is designed to be general and comprehensive enough to cover a broad range of provenance-related vocabularies. However, the W7 model alone, no matter how comprehensive it is, is insufficient for capturing all domain-specific provenance requirements. We hence present a novel approach to developing domain ontologies of provenance. This approach relies on various conceptual graph mechanisms, including schema definitions and canonical formation rules, and enables us to easily adapt and extend the W7 model to develop domain ontologies of provenance. The W7 model for data provenance has been widely adopted and adapted for use within Raytheon Missile Systems and the iPlant Collaborative, as well as the US Army's ATRAP IV (Asymmetric Threat Response and Analysis Program) system. We also developed a domain ontology of provenance for Wikipedia based on the W7 model. This domain ontology enables us to extract provenance for each Wikipedia article. We present a study in which we use their provenance to assess the quality of Wikipedia articles. Assessing and guaranteeing data quality has become a critical concern that, to a large extent, determines the future success and survival of Wikipedia since the quality of Wikipedia has been continuously called into question due to various incidents of vandalism and misinformation since its launch in 2001. Our study shows that the quality of Wikipedia articles depends not only on the different types of contributors but also on how they collaborate. We identify a number of contributor roles based on the provenance. Based on the roles and provenance, our research identifies several collaboration patterns that are preferable or detrimental for data quality, thus providing insights for designing tools and mechanisms to improve Wikipedia article quality.
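
As a rough illustration of the idea, the seven W7 elements can be captured as a simple record; the field names and example values below are illustrative assumptions, not the ontology's actual vocabulary, which the thesis formalizes in conceptual graphs rather than as a flat record.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class W7Record:
    """One provenance event described by the seven W7 elements (illustrative only)."""
    what: str       # the event affecting the data (e.g. a revision)
    when: datetime  # time of the event
    where: str      # location or system where it happened
    how: str        # action or process that brought it about
    who: str        # agent involved
    which: str      # instrument or software used
    why: str        # reason or justification

# A hypothetical provenance record for one Wikipedia article revision
revision = W7Record(
    what="revision",
    when=datetime(2011, 3, 1, 14, 30),
    where="en.wikipedia.org/wiki/Data_lineage",
    how="manual edit correcting a citation",
    who="User:ExampleEditor",
    which="MediaWiki web editor",
    why="fix an unsourced claim",
)
print(revision.who, revision.what, revision.when.isoformat())
```
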
APA, Harvard, Vancouver, ISO, and other styles
4

Ali, Mufajjul. "Provenance-based data traceability model and policy enforcement framework for cloud services." Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/393423/.

Full text
Abstract:
In the context of software, provenance holds the key to retaining a reproducible instance of the duration of a service, which can be replayed/reproduced from the beginning. This entails the nature of the invocations that took place, how/where the data were created, modified and updated, and the user's engagement with the service. With the emergence of the cloud and the benefits it encompasses, there has been a rapid proliferation of services being developed and adopted by commercial businesses. However, these services expose very little of their internal workings to their customers, and offer insufficient means to check that they are in proper working order. This can cause transparency and compliance issues; in the event of a fault or violation, customers and providers are left pointing fingers at each other. Provenance-based traceability provides a means to address part of this problem by capturing and querying events that have occurred in the past, to understand how and why they took place. On top of that, provenance-based policies are required to facilitate the validation and enforcement of business-level requirements for end-user satisfaction. This dissertation makes four contributions to the state of the art: i) defining and implementing an enhanced provenance-based cloud traceability model (cProv) that extends the standardized PROV model to support characteristics related to cloud services; the model is then able to conceptualize the traceability of a running cloud service; ii) creating a provenance-based policy language (cProvl) to facilitate the declaration and enforcement of business-level requirements; iii) developing a traceability framework that provides client- and server-side stacks for integrating service-level traceability and policy-based enforcement of business rules; and iv) implementing and evaluating the framework, which leverages standardized industry solutions. The framework is then applied to the commercial service 'ConfidenShare' as a proof of concept.
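
The thesis defines its own cProv vocabulary, which is not reproduced here; the sketch below only illustrates the general pattern of extending the W3C PROV vocabulary with cloud-service terms, using rdflib and an invented `cprov:` namespace with invented class and resource names.

```python
from rdflib import Graph, Namespace, RDF, RDFS

PROV = Namespace("http://www.w3.org/ns/prov#")
CPROV = Namespace("http://example.org/cprov#")   # hypothetical namespace

g = Graph()
g.bind("prov", PROV)
g.bind("cprov", CPROV)

# Invented subclass: a cloud service invocation specialises prov:Activity
g.add((CPROV.ServiceInvocation, RDFS.subClassOf, PROV.Activity))

# Trace one fictional invocation that generated a derived document
g.add((CPROV.inv42, RDF.type, CPROV.ServiceInvocation))
g.add((CPROV.doc1, RDF.type, PROV.Entity))
g.add((CPROV.doc1, PROV.wasGeneratedBy, CPROV.inv42))
g.add((CPROV.inv42, PROV.wasAssociatedWith, CPROV.tenantA))
g.add((CPROV.tenantA, RDF.type, PROV.Agent))

print(g.serialize(format="turtle"))
```
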
APA, Harvard, Vancouver, ISO, and other styles
5

Amanqui, Flor Karina Mamani. "Using a provenance model and spatiotemporal information to integrate heterogeneous biodiversity semantic data." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-30012018-093704/.

Full text
Abstract:
In the last few years, the Web of data has been rapidly populated with biodiversity data. However, when researchers need to retrieve, integrate, and visualize these data, they need to rely on semi-manual approaches. That is due to the fact that biodiversity repositories, such as GBIF, offer data only as strings in CSV-format spreadsheets. There is no machine-readable metadata that could add meaning (semantics) to the data. Without this metadata, automatic solutions are impossible and labor-intensive semi-manual approaches for data integration and visualization are unavoidable. To reduce this problem, we present a novel architecture, called STBioData, to automatically link spatiotemporal biodiversity data from heterogeneous data sources, enabling easier searching, visualization and downloading of relevant data. It supports the generation of interactive maps and the mapping between biodiversity data and the ontologies describing them (such as Darwin Core, DBpedia, GeoSPARQL, Time and PROV-O). A new biodiversity provenance model (BioProv), extending the W3C PROV Data Model, was proposed. BioProv enables applications that deal with biodiversity data to incorporate provenance data in their information. A web-based prototype, based on this architecture, was implemented. It supports biodiversity domain experts in tasks such as identifying a species' conservation status, by automating most of the necessary steps. It uses collection data from important Brazilian biodiversity research institutions, and species geographic distributions and conservation status from the IUCN Red List of Threatened Species. These data are converted to linked data, enriched and saved as RDF triples. Users can access the system using a web interface and search for collection and species distribution records based on species names, time ranges and geographic location. After a data set is retrieved, it can be displayed on an interactive map. The records' contents are also shown (including provenance data), together with links to the original records at GBIF and IUCN. Users can export datasets as a CSV or RDF file, or get a printout in PDF (including the visualizations). Choosing different time ranges, users can, for instance, verify the evolution of a species' distribution. The STBioData prototype was tested using use cases. For the tests, 46,211 collection records from SpeciesLink and 38,589 conservation status records (including maps) from IUCN, all for marine mammals, were converted to 2,233,782 RDF triples and linked using well-known ontologies. Ninety percent of the biodiversity experts using the tool to determine conservation status were able to find information about dolphin species, with a satisfactory retrieval time, and were able to understand the interactive map. In an information retrieval experiment, when compared with SpeciesLink's keyword-based search, the prototype's semantic-based search performed, on average, 24% better in precision tests and 22% better in recall tests. This does not take into account cases where only the prototype returned search results. These results demonstrate the value of having publicly available linked biodiversity data with semantics.
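
The abstract does not show the BioProv vocabulary or the project's datasets; the fragment below is only a generic sketch of the kind of SPARQL query such linked biodiversity data could support, with an illustrative species name and an otherwise empty graph, combining Darwin Core terms with a PROV derivation link.

```python
from rdflib import Graph

g = Graph()
# In the real system the graph would be loaded with the generated RDF triples
# (collection records, IUCN status, provenance); here it is empty, so the
# query simply returns no rows.

query = """
PREFIX dwc:  <http://rs.tdwg.org/dwc/terms/>
PREFIX prov: <http://www.w3.org/ns/prov#>

SELECT ?occurrence ?date ?source WHERE {
    ?occurrence dwc:scientificName "Sotalia guianensis" ;
                dwc:eventDate ?date ;
                prov:wasDerivedFrom ?source .
    FILTER (?date >= "2000-01-01"^^<http://www.w3.org/2001/XMLSchema#date>)
}
"""

for row in g.query(query):
    print(row.occurrence, row.date, row.source)
```
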
APA, Harvard, Vancouver, ISO, and other styles
6

Miksa, Elizabeth J. "A model for assigning temper provenance to archaeological ceramics with case studies from the American Southwest." Diss., The University of Arizona, 1998. http://hdl.handle.net/10150/288805.

Full text
Abstract:
Well-designed provenance studies form the basis from which questions of human economy and behavior are addressed. Pottery is often the subject of such studies, requiring geological and archaeological evidence to establish patterns of ceramic economy. A generalized theoretical and methodological framework for provenance studies is presented, followed by specific considerations for ceramic provenance studies. Four main sources of variation affect pottery composition: geological distribution of resources, geological resource variability, differential economic factors affecting resource use, and technological manipulation of materials. Post-depositional alteration is also considered. This ceramic provenance model provides explicit guidelines for the assessment of geological aspects of provenance, since geological resource availability affects acquisition by humans and thus archaeological research designs, in which interdependent geological and archaeological scalar factors must be balanced against budgets. Two case studies illustrate the model. The first is of sand-tempered pottery from the Tonto Basin, Arizona, where the bedrock geology is highly variable, giving rise to geographically unique sands. Zones with similar sand compositions are modeled using actualistic petrofacies, the Gazzi-Dickinson point-counting technique, and multivariate statistics. Methods used to create a petrofacies model are detailed, as is the model's application to sand-tempered utilitarian sherds from three Tonto Basin project areas. Data analysis reveals strong temporal and spatial ceramic production and consumption patterns. The second is of crushed-schist-tempered Hohokam pottery. Crushed schist was often used to temper pre-Classic Hohokam plain ware pottery in central Arizona's middle Gila River valley. Systematic investigation of rocks from the Pinal Schist terrane in the middle Gila River valley was conducted to assess how many sources were exploited prehistorically, and whether schist or schist-tempered pottery was exchanged. Chemical analysis shows that the sources can be statistically discriminated from one another. Schist source data were compared to schist extracted from plain ware sherds and to unmodified pieces of schist recovered from two archaeological sites. Preliminary indications are that schist was derived from several sources. This model provides a flexible, archaeologically relevant framework for assessing temper provenance. Hopefully, archaeologists and petrologists alike will use it to define ceramic provenance research problems and communicate effective solutions to one another.
APA, Harvard, Vancouver, ISO, and other styles
7

Valente, Wander Antunes Gaspar. "SciProv: uma arquitetura para a busca semântica em metadados de proveniência no contexto de e-Science." Universidade Federal de Juiz de Fora (UFJF), 2011. https://repositorio.ufjf.br/jspui/handle/ufjf/4417.

Full text
Abstract:
E-Science is characterized by the manipulation of huge data sets and the large-scale use of computing resources, often located in distributed environments. In this scenario, marked by high complexity and heterogeneity, it becomes important to treat data provenance, which describes the data generated during the execution of a scientific experiment and the transformation processes they underwent. Thus, lineage helps to form a view of the quality, validity and currency of data produced in a scientific research environment. SciProv is an architecture that interacts with scientific workflow management systems in order to capture and manage the generated provenance metadata. For this purpose, SciProv adopts an approach based on an abstract model for representing lineage. This model, the Open Provenance Model, gives SciProv the ability to set up a homogeneous and interoperable infrastructure for handling provenance metadata. As a result, SciProv provides a framework for querying the provenance generated in a complex and diverse e-Science scenario. More importantly, the architecture uses semantic web technology to process provenance metadata queries. In this context, using ontologies and inference engines, SciProv can make inferences about lineage and extract information beyond what is explicitly recorded in the managed data.
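
As a rough illustration of the kind of inference the abstract alludes to (deriving facts that are not explicitly recorded), the sketch below computes the transitive closure of a "was derived from" relation over a toy provenance graph; the artifact names and the use of plain Python rather than an ontology reasoner are assumptions made only for this example.

```python
# Toy Open-Provenance-Model-style edges: artifact -> artifacts it was
# directly derived from (names are invented).
derived_from = {
    "figure.png": ["results.csv"],
    "results.csv": ["simulation_output.nc"],
    "simulation_output.nc": ["input_params.json"],
}

def ancestors(artifact, edges):
    """All artifacts the given one transitively depends on."""
    seen, stack = set(), list(edges.get(artifact, []))
    while stack:
        a = stack.pop()
        if a not in seen:
            seen.add(a)
            stack.extend(edges.get(a, []))
    return seen

# The explicit metadata never states that figure.png depends on
# input_params.json; the inference makes that relationship available.
print(ancestors("figure.png", derived_from))
```
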
APA, Harvard, Vancouver, ISO, and other styles
8

Nel, Daniel Hermanus Greyling. "Performative digital asset management: To propose a framework and proof of concept model that effectively enables researchers to document, archive and curate their non-traditional research data." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/84761/3/Daniel_Nel_Exegesis.pdf.

Full text
Abstract:
This cross-disciplinary study was conducted as two research and development projects. The outcome is a multimodal and dynamic chronicle, which incorporates the tracking of spatial, temporal and visual elements of performative practice-led and design-led research journeys. The distilled model provides a strong new approach to demonstrating rigour in non-traditional research outputs, including provenance and an 'augmented web of facticity'.
APA, Harvard, Vancouver, ISO, and other styles
9

Saghafi, Salman. "A Framework for Exploring Finite Models." Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-dissertations/458.

Full text
Abstract:
This thesis presents a framework for understanding first-order theories by investigating their models. A common application is to help users, who are not necessarily experts in formal methods, analyze software artifacts, such as access-control policies, system configurations, protocol specifications, and software designs. The framework suggests a strategy for exploring the space of finite models of a theory via augmentation. Also, it introduces a notion of provenance information for understanding the elements and facts in models with respect to the statements of the theory. The primary mathematical tool is an information-preserving preorder, induced by the homomorphism on models, defining paths along which models are explored. The central algorithmic ideas consist of a controlled construction of the Herbrand base of the input theory, followed by the use of SMT solving to generate models that are minimal under the homomorphism preorder. Our framework for model exploration is realized in Razor, a model-finding assistant that provides the user with a read-eval-print loop for investigating models.
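
To make the homomorphism preorder concrete, here is a brute-force check of whether one small finite model maps homomorphically into another; the relational encoding (a domain plus a set of relation tuples) and the example models are illustrative, not Razor's internal representation or algorithm.

```python
from itertools import product

def is_homomorphism(h, rels_a, rels_b):
    """Check that mapping h preserves every relation tuple of model A in model B."""
    return all(tuple(h[x] for x in t) in rels_b[name]
               for name, tuples in rels_a.items()
               for t in tuples)

def exists_homomorphism(dom_a, rels_a, dom_b, rels_b):
    """Brute-force search over all functions dom_a -> dom_b."""
    for image in product(dom_b, repeat=len(dom_a)):
        h = dict(zip(dom_a, image))
        if is_homomorphism(h, rels_a, rels_b):
            return h
    return None

# Two tiny models of a single binary relation R (element names are invented).
A = (["a1", "a2"], {"R": {("a1", "a2")}})
B = (["b1", "b2", "b3"], {"R": {("b1", "b2"), ("b2", "b3")}})

print(exists_homomorphism(A[0], A[1], B[0], B[1]))
```
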
APA, Harvard, Vancouver, ISO, and other styles
10

Santos, Renata Ribeiro dos. "Modelo de procedência para auxiliar na análise da qualidade do dado geográfico." Universidade Federal de São Carlos, 2016. https://repositorio.ufscar.br/handle/ufscar/8609.

Full text
Abstract:
The quality of geographic data must be a relevant concern for providers and consumers of this type of data, because the manipulation and analysis of low-quality geographic data may result in errors, which will be propagated to the data derived from it. Thus, it is important to properly document the information that allows the quality of geographic data to be certified. In order to provide a minimum set of metadata for this purpose, this dissertation presents an approach based on the provenance of geographic data, which corresponds to the information about the history of the data, from its origin to the processes that resulted in its current state. For this purpose, a provenance model called ProcGeo was proposed, which defines a minimum set of metadata that must be considered when analysing the quality of a given geographic dataset. Although some works and geographic metadata standards, such as the Federal Geographic Data Committee (FGDC) standard and ISO 19115, consider provenance information in the analysis of geographic data quality, in the author's opinion some metadata considered important for this purpose are not adequately covered. In this work, the prototype of an interface called ProcGeoInter was also implemented, aiming to guarantee completeness and correctness when filling out the metadata defined in the ProcGeo model, as well as the visualization of their content. The validation of the ProcGeo model and of the ProcGeoInter interface was carried out through tests and surveys applied to providers and consumers of geographic data. As a means of comparison, the metadata editing and visualization interface provided by the Quantum GIS system (Metatools plugin), which implements the FGDC geographic metadata standard, was used. The results obtained indicated that the metadata defined in the ProcGeo model helped the geographic data provider in describing the provenance of such data, when compared to the metadata defined in the FGDC geographic metadata standard. From the consumer's perspective, it was possible to observe that the information filled out in the metadata defined by ProcGeo favored the analysis of the quality of the consumed data. It also became clear that neither providers nor consumers are in the habit of providing or consuming the information prescribed by the FGDC and ISO 19115 geographic metadata standards.
APA, Harvard, Vancouver, ISO, and other styles
11

Raghavan, Sriram. "A framework for identifying associations in digital evidence using metadata." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/72659/1/Sriram_Raghavan_Thesis.pdf.

Full text
Abstract:
Digital forensics concerns the analysis of electronic artifacts to reconstruct events such as cyber crimes. This research produced a framework to support forensic analyses by identifying associations in digital evidence using metadata. It showed that metadata based associations can help uncover the inherent relationships between heterogeneous digital artifacts thereby aiding reconstruction of past events by identifying artifact dependencies and time sequencing. It also showed that metadata association based analysis is amenable to automation by virtue of the ubiquitous nature of metadata across forensic disk images, files, system and application logs and network packet captures. The results prove that metadata based associations can be used to extract meaningful relationships between digital artifacts, thus potentially benefiting real-life forensics investigations.
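
A minimal sketch, under the assumption that artifact metadata has already been extracted into dictionaries, of how shared metadata values can reveal associations between otherwise heterogeneous artifacts; the field names and records are invented and are not the thesis's actual framework.

```python
from collections import defaultdict

# Hypothetical metadata extracted from a document, a mailbox and a log file.
artifacts = [
    {"id": "report.docx", "author": "jsmith", "modified": "2014-02-01T09:12"},
    {"id": "mail.pst",    "author": "jsmith", "modified": "2014-02-01T09:15"},
    {"id": "proxy.log",   "author": None,     "modified": "2014-02-01T09:14"},
]

def associate(artifacts, field):
    """Group artifact ids that share the same value for a metadata field."""
    groups = defaultdict(list)
    for a in artifacts:
        if a.get(field) is not None:
            groups[a[field]].append(a["id"])
    return {value: ids for value, ids in groups.items() if len(ids) > 1}

# Artifacts attributed to the same author form one candidate association.
print(associate(artifacts, "author"))
```
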
APA, Harvard, Vancouver, ISO, and other styles
12

MARINS, ANDRE LUIZ ALMEIDA. "PROVENANCE CONCEPTUAL MODELS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=11880@1.

Full text
Abstract:
Information systems, developed for several economic segments, increasingly demand data traceability functionality. To endow information systems with such capacity, we depend on data provenance modeling. Provenance enables legal compliance, experiment validation and quality control, among other uses. Provenance also helps identify participants (determinants or immanents) such as people, organizations and software agents, as well as their association with activities, events or processes. It can also be used to establish levels of trust for data transformations. This dissertation proposes a generic conceptual model for provenance, designed by aligning fragments of upper ontologies, international standards and broadly recognized projects. The contributions are in two directions: a well-documented provenance conceptual model that facilitates interoperability, and the application of a conceptual design methodology based on ontology alignment.
APA, Harvard, Vancouver, ISO, and other styles
13

Carata, Lucian. "Provenance-based computing." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/287562.

Full text
Abstract:
Relying on computing systems that become increasingly complex is difficult: with many factors potentially affecting the result of a computation or its properties, understanding where problems appear and fixing them is a challenging proposition. Typically, the process of finding solutions is driven by trial and error or by experience-based insights. In this dissertation, I examine the idea of using provenance metadata (the set of elements that have contributed to the existence of a piece of data, together with their relationships) instead. I show that considering provenance a primitive of computation enables the exploration of system behaviour, targeting both retrospective analysis (root cause analysis, performance tuning) and hypothetical scenarios (what-if questions). In this context, provenance can be used as part of feedback loops, with a double purpose: building software that is able to adapt for meeting certain quality and performance targets (semi-automated tuning) and enabling human operators to exert high-level runtime control with limited previous knowledge of a system's internal architecture. My contributions towards this goal are threefold: providing low-level mechanisms for meaningful provenance collection considering OS-level resource multiplexing, proving that such provenance data can be used in inferences about application behaviour and generalising this to a set of primitives necessary for fine-grained provenance disclosure in a wider context. To derive such primitives in a bottom-up manner, I first present Resourceful, a framework that enables capturing OS-level measurements in the context of application activities. It is the contextualisation that allows tying the measurements to provenance in a meaningful way, and I look at a number of use-cases in understanding application performance. This also provides a good setup for evaluating the impact and overheads of fine-grained provenance collection. I then show that the collected data enables new ways of understanding performance variation by attributing it to specific components within a system. The resulting set of tools, Soroban, gives developers and operation engineers a principled way of examining the impact of various configuration, OS and virtualization parameters on application behaviour. Finally, I consider how this supports the idea that provenance should be disclosed at application level and discuss why such disclosure is necessary for enabling the use of collected metadata efficiently and at a granularity which is meaningful in relation to application semantics.
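
The dissertation's Resourceful framework captures measurements at the OS level; the snippet below is only a coarse user-space analogy, assumed for illustration, that uses the standard-library `resource` module (Unix-only) to attribute CPU-time deltas to named application activities.

```python
import resource
from contextlib import contextmanager

@contextmanager
def activity(name, log):
    """Record the CPU time consumed while a named activity runs."""
    before = resource.getrusage(resource.RUSAGE_SELF)
    try:
        yield
    finally:
        after = resource.getrusage(resource.RUSAGE_SELF)
        log.append({
            "activity": name,
            "user_cpu_s": after.ru_utime - before.ru_utime,
            "sys_cpu_s": after.ru_stime - before.ru_stime,
        })

log = []
with activity("parse_request", log):
    sum(i * i for i in range(200_000))   # stand-in for real application work

print(log)
```
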
APA, Harvard, Vancouver, ISO, and other styles
14

Decou, Audrey [Verfasser], Hilmar von [Akademischer Betreuer] Eynatten, and Gerhard [Akademischer Betreuer] Wörner. "Provenance model of the Cenozoic siliciclastic sediments from the western Central Andes (16-21°S): implications for Eocene to Miocene evolution of the Andes / Audrey Decou. Gutachter: Hilmar von Eynatten ; Gerhard Wörner. Betreuer: Hilmar von Eynatten." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2011. http://d-nb.info/1042264899/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Nilsson, Marita. "Proveniensprincipen : Vara eller icke vara - det är frågan i en digitaliseradinformationsförvaltning." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-34413.

Full text
Abstract:
This essay describes the debate about the principle of provenance and its multiple forms, and the transformations of these forms due to the arrival of electronic information. The thesis intended to explain what the principle means in modern information management and to explore how such organisations operate proactively to assure provenance. The qualitative investigation was carried out at ten municipal final archives, where each municipality archivist was interviewed. The study expounds in what way digitisation has simplified the methods for ensuring provenance, where automated metadata shows the relationships of the information to function and process. The essay also discusses the difficulties that appear when digital information is organised in different ways than analogue information, and how this requires a new interpretation of the principle of provenance. The researcher concludes that the investigated archives ensure respect des fonds when it concerns the content of the archives, but not the whole content of the information management. The result of the study also shows that respect for the original order, as a reflection of the organisation, has to be understood across the whole content of the information management and its logical order, rather than only the visible content that the archives embrace. Furthermore, the thesis observes the importance of proactivity regarding the clarification of the relationships between the information and the processes that produce and use it. This could be achieved with early application of metadata and the development of systems that keep metadata through all processes. The conclusion of the essay is that this is not pursued to the extent that is required.
APA, Harvard, Vancouver, ISO, and other styles
16

Martin, Chris J. "Chemical models for, and the role of data and provenance in, an atmospheric chemistry community." Thesis, University of Leeds, 2009. http://etheses.whiterose.ac.uk/1596/.

Full text
Abstract:
This thesis presents research at the interface of the e-Science and atmospheric chemistry disciplines. Two inter-related research topics are addressed: first, the development of computational models of the troposphere (i.e. in silico experiments); and secondly, provenance capture and representation for data produced by these computational models. The research was conducted using an ethnographic approach, seeking to develop in-depth understanding of current working practices, which then informed the research itself. The research focused on the working practices of a defined research community: the users and developers of the MCM (Master Chemical Mechanism). The MCM is a key data and information repository used by researchers with an interest in atmospheric chemistry across the world. A computational modelling system, the OSBM (Open Source Box Model), was successfully developed to encourage researchers to make use of the MCM within their in silico experiments. Taking advantage of functionality provided by the OSBM, the use of in situ experimental data to constrain zero-dimensional box models was explored. Limitations of current methodologies for constraining zero-dimensional box models were identified, particularly those associated with the use of piecewise-constant interpolation and the averaging of constraint data. Improved methodologies for constraining zero-dimensional box models were proposed, tested and demonstrated to offer gains in the accuracy of the model results and the efficiency of the model itself. Current data generation and provenance-related working practices within the MCM community were mapped. An opportunity was identified to apply Semantic Web technologies to improve working practices associated with gathering and evaluating feedback from in silico experiments, to inform the ongoing development of the MCM. These envisioned working practices rely on researchers who perform in silico experiments with the MCM capturing data and provenance using an ELN (Electronic Laboratory Notebook). A prototype ELN, employing a user-oriented approach to provenance capture and representation, was then successfully designed, implemented and evaluated. The evaluation of this prototype ELN highlighted the importance of adopting a holistic approach to the development of provenance capture tools and the difficulty of balancing researchers' requirements for flexibility and structure in their scientific processes.
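
The thesis's actual constraint methodology is not given in the abstract; purely to illustrate the difference it refers to, the sketch below compares piecewise-constant and linear interpolation of a small set of invented hourly constraint observations onto a finer model time grid, using numpy.

```python
import numpy as np

# Hypothetical hourly observations of a constrained species (arbitrary units).
obs_time = np.array([0.0, 1.0, 2.0, 3.0])        # hours
obs_conc = np.array([1.0, 4.0, 2.0, 5.0])

model_time = np.linspace(0.0, 3.0, 13)           # finer model time steps

# Piecewise-constant: hold each observation until the next one arrives.
idx = np.clip(np.searchsorted(obs_time, model_time, side="right") - 1, 0, None)
piecewise_constant = obs_conc[idx]

# Linear interpolation between observations.
linear = np.interp(model_time, obs_time, obs_conc)

for t, pc, li in zip(model_time, piecewise_constant, linear):
    print(f"t={t:4.2f}h  step={pc:.2f}  linear={li:.2f}")
```
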
APA, Harvard, Vancouver, ISO, and other styles
17

Tomazela, Bruno. "MPPI: um modelo de procedência para subsidiar processos de integração." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-15042010-143510/.

Full text
Abstract:
Data provenance is the set of metadata that allows for the identification of the sources and transformations applied to data, from its creation to its current state. There are several advantages of incorporating data provenance into data integration processes, such as estimating data quality and reliability, performing data audits, establishing the copyright and ownership of data, and reproducing data integration decisions. In this master's thesis, we propose the MPPI, a novel data provenance model that supports data integration processes. The model focuses on systems in which only owners can update their data sources, i.e., the integration process cannot correct the sources according to integration decisions. The main goal of the MPPI model is to handle decisions taken by the user in previous integration processes, so they can be automatically reapplied in subsequent integration processes. The MPPI model introduces the following properties. It is based on mapping provenance data into operations of copy, edit, insert and remove, which are stored in an operation repository. It also provides four techniques to handle overlapping operations: blind, restrict, undo and redo. Furthermore, it identifies anomalies generated by sources that are updated between two data integration processes and proposes four validation approaches to avoid these anomalies: full validation, source validation, target validation and no validation. Moreover, it introduces two methods that perform the reapplication of operations according to decisions taken by the user, called the VRS (Validate and Reapply in Separate) and the VRT (Validate and Reapply in Tandem) methods, in addition to extending the VRT method with the safe reordering optimization. The MPPI model was validated through performance tests that investigated overlapping operations, the VRT method and the safe reordering optimization. The tests showed that the techniques proposed to handle overlapping operations are feasible for real integration systems. The results also demonstrated that the VRT method provided significant performance gains over data gathering when the goal is to reestablish previous integration results. The performance gains were at least 93%. Furthermore, the performance results also showed that reordering the operations before the reapplication process can further improve the performance of the VRT method.
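
The thesis defines the operation repository and the blind/restrict/undo/redo policies formally; the toy sketch below only shows the general shape of logging edit/remove operations and replaying them over a fresh copy of a source, with invented record structures rather than the MPPI's actual data model.

```python
import copy

# A local copy of an invented imported source, keyed by record id.
source = {1: {"name": "Sao Paulo"}, 2: {"name": "Riberao Preto"}}

# Operation repository: integration decisions recorded during a first run.
repository = [
    {"op": "edit",   "id": 2, "field": "name", "value": "Ribeirao Preto"},
    {"op": "remove", "id": 1},
]

def reapply(source, repository):
    """Replay logged operations over a fresh copy of the source data."""
    data = copy.deepcopy(source)
    for op in repository:
        if op["op"] == "edit" and op["id"] in data:
            data[op["id"]][op["field"]] = op["value"]
        elif op["op"] == "remove":
            data.pop(op["id"], None)
    return data

# A later integration run restores the earlier decisions automatically.
print(reapply(source, repository))
```
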
APA, Harvard, Vancouver, ISO, and other styles
18

Pelletier, Isabelle. "Étude comparative des modes d'acculturation chez des étudiants étrangers provenant d'une société individualiste et d'une société collectiviste." Master's thesis, Université Laval, 2003. http://hdl.handle.net/20.500.11794/44217.

Full text
Abstract:
This study examines the influence of cultural distance on the adaptation process of international students enrolled at Université Laval. The study seeks to verify the influence of the country of origin (collectivist/individualist) on two levels of the acculturation process (behaviour/value system) in an individualist context. The study also aims to verify the influence of perceived discrimination on the chosen acculturation mode and on its idiocentric or allocentric orientation. Forty-six French students and thirty-six Moroccan students took part in the study and answered a questionnaire inspired by the theoretical models of Berry (1997) and Schwartz (1998). The results show that both groups predominantly choose the integration mode. However, the Moroccan students do not experience the same changes as the French students and do not respond to the same determinants of adaptation. Among the French students, the main changes occur at the level of behaviour, while no change is observed at the level of values. Among the Moroccan students, an increase in individualist values and a perception of collective discrimination are observed, which nuances their acculturation mode. For both groups, perceived discrimination is not associated with the orientation of the acculturation mode (idiocentric/allocentric).
APA, Harvard, Vancouver, ISO, and other styles
19

Almeida, Dayse Silveira de. "AcCORD: um modelo colaborativo assíncrono para a reconciliação de dados." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-17112016-144747/.

Full text
Abstract:
Reconciliation is the process of providing a consistent view of the data imported from different sources. Despite some efforts reported in the literature to provide data reconciliation solutions with asynchronous collaboration, the challenge of reconciling data when multiple users work asynchronously over local copies of the same imported data has received less attention. In this thesis we investigate this challenge. We propose AcCORD, an asynchronous collaborative data reconciliation model. It stores users' integration decisions in logs, called repositories. Repositories keep data provenance, that is, the operations applied to the data sources that led to the current state of the data. Each user has her own repository for storing the provenance. That is, whenever inconsistencies among imported sources are detected, the user may autonomously take decisions to solve them, and integration decisions that are locally executed are registered in her repository. Integration decisions are shared among collaborators by importing each other's repositories. Since users may have different points of view, repositories may also be inconsistent. Therefore, AcCORD also introduces several policies that can be applied by different users in order to solve conflicts among repositories and reconcile their integration decisions. Depending on the applied policy, the final view of the imported sources may either be the same for all users, that is, a single integrated view, or result in distinct local views for each of them. Furthermore, AcCORD encompasses an integration decision propagation method, which aims to prevent a user from taking inconsistent decisions over the same data conflict present in different sources, thus guaranteeing a more effective reconciliation process. AcCORD was validated through performance tests that investigated the proposed policies and through user interviews that investigated not only the proposed policies but also the quality of the multiuser reconciliation. The results demonstrated the efficiency and efficacy of AcCORD, and highlighted its flexibility to generate a single integrated view or different local views. The interviews demonstrated different perceptions of the users with regard to the quality of the result provided by AcCORD, including aspects related to consistency, acceptability, correctness, time-saving and satisfaction.
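
As a loose illustration of reconciling two users' repositories, the sketch below merges per-conflict decisions by keeping the most recent one; this "latest wins" rule and the record structure are chosen only for illustration and are not among the policies defined in the thesis.

```python
# Each user's repository maps a conflict key (source, record, field) to the
# decision taken and a timestamp. Names and structure are invented.
alice = {("srcA", 7, "price"): {"value": 10.0, "ts": 3},
         ("srcA", 9, "title"): {"value": "Intro", "ts": 1}}
bob = {("srcA", 7, "price"): {"value": 12.0, "ts": 5}}

def reconcile_latest_wins(*repositories):
    """Merge repositories, keeping the most recent decision per conflict."""
    merged = {}
    for repo in repositories:
        for key, decision in repo.items():
            if key not in merged or decision["ts"] > merged[key]["ts"]:
                merged[key] = decision
    return merged

print(reconcile_latest_wins(alice, bob))
```
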
APA, Harvard, Vancouver, ISO, and other styles
20

Nairn, Steven Peter. "Testing alternative models of continental collision in Central Turkey by a study of the sedimentology, provenance and tectonic setting of Late Cretaceous-Early Cenozoic syn-tectonic sedimentary basins." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/5037.

Full text
Abstract:
In central Anatolia, Turkey, a strand of the former northern Neotethys Ocean subducted northwards under the Eurasian (Pontide) active margin during Late Cretaceous–Early Cenozoic time. Subduction and regional plate convergence were associated with the generation and emplacement of accretionary complexes and supra-subduction zone-type ophiolites onto former passive margins of microcontinents. The resultant suture zones contain Late Cretaceous to Middle Eocene basins (“The Central Anatolian Basins”) including: 1) the Kırıkkale Basin; 2) the Çankırı Basin, 3) the Tuz Gölü Basin and; 4) the Haymana - Polatlı Basin. Using stratigraphic logging, igneous geochemistry, micropalaeontology and provenance studies, this study tests two end-member models of basin evolution. In model one, the basins developed on obducted ophiolitic nappes following closure of a single northern Neotethys Ocean during the latest Cretaceous. In model two, northern Neotethys comprised two oceanic strands, the İzmir-Ankara-Erzincan Ocean to the north and the Inner Tauride Ocean to the south, separated by the Niğde-Kırşehir microcontinent, which was rifted from the Gondwana continent to the south. In this scenario, the basins developed as accretionary-type basins, associated with north-dipping subduction which persisted until the Middle Eocene when continental collision occurred. Where exposed, the basements of the Central Anatolian Basins comprise the Ankara Mélange, a mainly Upper Cretaceous subduction-accretion complex and the western/northern margin of the Niğde-Kırşehir microcontinent. New geochemical data from the composite basement of the Kırıkkale Basin identify mid ocean-ridge basalt (MORB), here interpreted to represent relict Upper Cretaceous Neotethyan oceanic crust. During the latest Cretaceous, the Kırıkkale and Tuz Gölü Basins initiated in deep water above relict MORB crust and ophiolitic mélange, bordered by the Niğde-Kırşehir microcontinent to the east where marginal facies accumulated. Further west, the Haymana-Polatlı Basin represents an accretionary-type basin constructed on the Ankara Mélange. To the north, the Çankırı Basin developed on accretionary mélange, bounded by the Pontide active margin to the north. Palaeocene sedimentation was dominated by marginal coralgal reef facies and siliciclastic turbidites. Latest Palaeocene–middle Eocene facies include shelf-type Nummulitid limestone, shallow-marine deltaic pebbly sandstones and siliciclastic turbidites. This thesis proposes a new model in which two north-dipping subduction zones were active during the late Mesozoic within northern Neotethys. In the south, ophiolites formed above a subduction zone consuming the Inner Tauride Ocean until the southward retreating trench collided with the northern margin of the Tauride continent emplacing ophiolites and mélange. In the north, subduction initiated outboard of the Eurasian margin triggering the genesis of supra-subduction zone ophiolites; the subduction zone rolled back southwards until it collided with the Niğde-Kırşehir microcontinent, again emplacing ophiolites during latest Cretaceous time. Neotethyan MORB still remained to the west of the Niğde-Kırşehir microcontinent forming the basement of the Kırıkkale and Tuz Gölü Basins. Latest Palaeocene–middle Eocene regional convergence culminated in crustal thickening, folding, uplift and strike-slip faulting which represent final continental collision and the geotectonic assembly of central Anatolia.
APA, Harvard, Vancouver, ISO, and other styles
21

Bendella, Meryem. "Fouille de données provenant des réseaux sociaux pour la détection et la recherche." Electronic Thesis or Diss., Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0612.

Full text
Abstract:
Social networks have gained considerable importance for society over the past decade. These platforms allow users to produce, share and exchange a wide variety of content. Twitter is one of the most popular social networks, letting users publish short messages called tweets. Tweets may contain offensive text, such as harassment or bullying messages, or information related to controversial topics, and many studies have shown how such content can affect users and cause psychological harm. A system for detecting this type of message is therefore needed to protect users and predict tragic events. In this thesis, we propose a suspicious-tweet detection system based on probabilistic topic models and fuzzy logic. To identify harassment tweets, we introduce a classification model that exploits a set of features and uses supervised learning algorithms. Users also search these platforms for posts that satisfy an information need, usually expressed as a textual query; however, tweets are short, and the variety and sheer volume of published content can make access to information difficult. The second part of this work therefore addresses social information retrieval and aims to improve tweet retrieval quality. We propose a query expansion approach, based on the extraction of frequent closed patterns and on word embeddings, to overcome the shortness of both user queries and tweets.
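As a purely illustrative sketch of the embedding side of the query-expansion idea described in the abstract above (the frequent-closed-pattern mining step is omitted, and the vocabulary, vectors and thresholds below are invented for the example rather than taken from the thesis), a short query can be expanded with the nearest neighbours of its terms in an embedding space:

    import numpy as np

    # Toy embedding table standing in for pre-trained word embeddings; a real
    # system would presumably learn these vectors from a large tweet corpus.
    EMBEDDINGS = {
        "harassment": np.array([0.9, 0.1, 0.0]),
        "bullying":   np.array([0.85, 0.15, 0.05]),
        "abuse":      np.array([0.8, 0.2, 0.1]),
        "weather":    np.array([0.0, 0.9, 0.4]),
        "rain":       np.array([0.05, 0.85, 0.45]),
    }

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def expand_query(query_terms, k=2):
        """Return the query plus the k nearest-neighbour terms of each query word."""
        expanded = list(query_terms)
        for term in query_terms:
            if term not in EMBEDDINGS:
                continue
            scores = [(other, cosine(EMBEDDINGS[term], vec))
                      for other, vec in EMBEDDINGS.items() if other != term]
            scores.sort(key=lambda pair: pair[1], reverse=True)
            expanded.extend(word for word, _ in scores[:k] if word not in expanded)
        return expanded

    print(expand_query(["harassment"]))   # ['harassment', 'bullying', 'abuse']

In practice one would expect the candidate expansion terms to be filtered further, for instance by the mined frequent closed patterns, before being added to the query.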
APA, Harvard, Vancouver, ISO, and other styles
22

Bouderrah, Mohamed. "Comparaison de deux modes de vitropropagation à partir de vitrosemis d'eucalyptus camaldulensis provenance lake albacutya : Micropropagation à partir de bourgeons axillaires, micropropagation à partir de bourgeons adventifs, et étude de la variabilité." Nancy 1, 1988. http://www.theses.fr/1988NAN10002.

Full text
Abstract:
Micro-cutting propagation from in vitro seedlings: multiplication by fragmentation-elongation. Multiplication by hyper-branching of adventitious buds induced by caulogenesis on in vitro plantlets. Clonal variability during the different phases of multiplication by adventitious buds.
APA, Harvard, Vancouver, ISO, and other styles
23

Bouderrah, Mohamed. "Comparaison de deux modes de vitropropagation à partir de vitrosemis d'Eucalyptus camaldulensis provenance Lake Albacutya micropropagation à partir de bourgeons axillaires : micropropagation à partir de bourgeons adventifs, et étude de la variabilité du comportement de différents clones /." Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb37612134n.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Triffault-Bouchet, Gaëlle. "Effets sur les écosystèmes aquatiques lentiques des émissions de polluants provenant de différents modes de valorisation/élimination de déchets - Application à des mâchefers d'IUOM et à des boues de dragage de canaux." Chambéry, 2004. http://www.theses.fr/2004CHAMS004.

Full text
Abstract:
Waste recovery has been encouraged in France by the law of 13 July 1992 and its definition of "ultimate" waste. The impact of two scenarios on lentic (still-water) ecosystems was assessed: the reuse of municipal solid waste incineration bottom ash (MSWI bottom ash) in road embankments, and the underwater disposal of canal dredged sediments in a gravel pit. The work was also intended to complement the waste ecocompatibility assessment methodology developed by ADEME. The toxic potential of the materials was evaluated with microcosm assays. Five dredged sediments were studied (i) under static conditions, (ii) at three water/sediment ratios and (iii) under three water-column renewal regimes. MSWI bottom-ash leachates, obtained with a lysimeter, were evaluated with four single-species assays, 2-litre static microcosm assays and a 100-litre microcosm assay. The five dredged sediments were ranked according to their toxic potential; four of them showed a high toxic potential, including a risk of eutrophication of the receiving environment. The measured effect criteria were also ranked, survival of Hyalella azteca being the most sensitive. The risks associated with underwater disposal are not acceptable for these five sediments, and recommendations were made concerning the volume of sediment to be submerged and the conditions of such deposits. An impact of the MSWI bottom-ash leachates was demonstrated for all test species; copper appears to be responsible for the measured effects and can be considered a major contaminant of these leachates. The risks associated with this reuse scenario are not negligible for the receiving lentic ecosystem, and recommendations were made for implementation under the best possible conditions. Finally, the study underlines the value of microcosm assays for assessing the impact of contaminated matrices on lentic ecosystems; some aspects of the protocol still need to be optimised in order to obtain acceptable variability levels for all monitored parameters.
APA, Harvard, Vancouver, ISO, and other styles
25

Feneyrol, Julien. "Pétrologie, géochimie et genèse des gisements de tsavorite associés aux gneiss et roches calco-silicatées graphiteux de Lemshuku et Namalulu, Tanzanie." Thesis, Université de Lorraine, 2012. http://www.theses.fr/2012LORR0348/document.

Full text
Abstract:
Tsavorite, a (V, Cr, Mn)-bearing green grossular, is hosted by graphitic gneisses and calc-silicate rocks, often associated with dolomitic marbles, belonging to the Neoproterozoic Mozambique metamorphic belt. Tsavorite is found either as nodules or in quartz veins (primary deposits), or in placers (secondary deposits). The mineralogical study of tsavorites leads to a new protocol for certifying their geographical origin, based on the V/Cr ratio, the Mn content and δ18O. The study of the Lemshuku and Namalulu deposits in Tanzania shows that metamorphism of the organic-matter- and evaporite-rich sedimentary protoliths occurred at P = 7.0 ± 0.4 kbar and T = 677 ± 14 °C, at 634 ± 22 Ma (U-Th-Pb dating on monazite). The metamorphic series cooled down at around 500 Ma (40Ar-39Ar dating on muscovite). Two metasomatic stages are linked to the formation of tsavorite: (i) diffusion metasomatism forming the nodules at P = 5.0-7.4 kbar and T = 580-691 °C; (ii) calcic infiltration metasomatism, contemporaneous with the quartz veins, at P = 3.6-4.9 kbar and T = 505-587 °C. The veins have been dated in situ by the Sm-Nd method at 606 ± 36 Ma. Continental evaporites, deposited in a coastal marine sabkha together with silico-calcareous sediments, were transformed into tsavorite in the case of the nodules, whereas molten salts are associated with the formation of the quartz veins. The mineralisation is controlled by lithostratigraphy and tectonics.
APA, Harvard, Vancouver, ISO, and other styles
26

Rebmann, Thierry. "Caractérisations pétroarchéologiques, provenances et aires de circulations des industries moustériennes différentes du silex en Région du Rhin Supérieur, entre la Moselle et le Jura : stations de Mutzig et Nideck (Alsace, France), de Lellig (Luxembourg), et Alle (Canton du Jura, Suisse) = Petroarchäologie, Herkunft und Rohmaterialversorgung der anderen Werkzeuge des Feuersteins im Mittelpaläolithikum des Hochrheingebietes zwischen der Mosel und dem Jura : prähistorische Stationen von Mutzig und des Nideck (Elsass, Frankreich), von Lellig (Luxemburg) und Alle (Kantons des Jura [sic], Schweiz) /." [S.l.] : [s.n.], 2007. http://edoc.unibas.ch/diss/DissB_7915.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Eschenbrenner-Diemer, Gersande. "Les « modèles » égyptiens en bois : matériau, fabrication, diffusion, de la fin de l’Ancien à la fin du Moyen Empire (env. 2350-1630 av. J.-C)." Thesis, Lyon 2, 2013. http://www.theses.fr/2013LYO20114.

Full text
Abstract:
The first volume is devoted to the analysis of the materials and techniques used to manufacture funerary "models", that is, the wooden figures and scenes of daily life typical of elite funerary equipment between the end of the Old Kingdom and the end of the Middle Kingdom (c. 2350-1630 BC). The first part, focused on material from Saqqara, Assiut and Meir, examines stylistic and technical features in order to define groups of objects and identify production workshops. Dating criteria are then established and compared with the other funerary furnishings discovered in the tombs studied. Following a chronological thread from the end of the Old Kingdom, the second part concentrates on workshops and interregional contacts, with particular attention to the relationship between royal power, elites and craftsmen as seen through the distribution of funerary equipment, and especially of the wooden models found from the Memphite region to Upper Egypt. The third part considers the social, economic and religious functions of the models and examines the close relationship between this furniture and funerary practices from the end of the Old Kingdom to the end of the Middle Kingdom. The second volume presents the corpus of wooden models examined; the third volume contains the appendices. The study of these wooden models, which reflect the profound political and religious changes that gave rise to new funerary customs and beliefs between the 6th and 13th Dynasties, clarifies the geographical, historical and social context of their manufacture and use, and refines our perception of the relationship between craftsmen and power, a relationship omnipresent in ancient Egyptian society from the Predynastic period onwards.
APA, Harvard, Vancouver, ISO, and other styles
28

Mehdi, Sarikhani Arash. "An adaptive provenance collection architecture in scientific workflow systems." Thesis, 2015. http://hdl.handle.net/2440/98122.

Full text
Abstract:
This thesis investigates adaptive provenance collection in the context of scientific workflow systems. In particular, we show how to design and implement an adaptive provenance system that operates at multiple levels of granularity. Scientists in different disciplines use scientific workflows as management and representational frameworks for distributed scientific computations. Scientific workflow systems need a scientific workflow management system (SWfMS) to manage the flow of work among (both local and distributed) participants and resources, and to coordinate user and system participants. Scientific workflow systems run over heterogeneous environments, which see changes over time in resources, requirements and policies (e.g. the cost of resources, or the provenance collection policy). Such changes may influence the way in which workflow mechanisms can best operate within their environments, and motivate our consideration of adaptive mechanisms to deal with them. SWfMSs run a scientist's experiments. They manage sequences of complex transformational processes; in particular, they collect provenance information at various levels of abstraction (or granularity). Provenance in SWfMSs is important because it enables scientists to have a clear understanding of results, and especially to reproduce and verify them. Provenance information can be collected at different levels of detail, typically coarse, medium and fine grained, using specific provenance collection mechanisms. We define a Model of Provenance (MoP) for each level to make explicit what is treated as provenance information at that level and how it is represented. We explore and survey provenance collection mechanisms and MoPs in order to provide a sufficient understanding of the design and development of suitable provenance mechanisms for workflow systems. We emphasize adaptability and interoperability as important and desirable properties of a provenance system, especially one running over distributed environments. We propose a novel provenance architecture for scientific workflow systems that benefits from the notion of separation of concerns, an important principle in middleware architecture. The design and development of our adaptive provenance architecture untangles the adaptive-granularity and provenance-collection concerns, so that we can more easily offer adaptive provenance collection mechanisms. We use reflection (the MetaObject Protocol, MOP) and Aspect-Oriented Programming (AOP) as two ways of realizing the separation of concerns in our adaptive provenance collection mechanisms. Both the MOP- and AOP-oriented adaptive provenance collection mechanisms are explored in our scientific workflow case study and implemented on a process-network-based workflow model. The case study demonstrates adaptive collection and representation of multiple levels of provenance granularity, according to our Model of Provenance (MoP). This MoP represents the various levels of provenance granularity in a format compatible with the generic Open Provenance Model, enabling interoperability.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2015.
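The abstract above describes separating the provenance-collection concern from task logic via reflection (MOP) or aspect-oriented programming, with the collection granularity adaptable at run time. The following minimal Python sketch illustrates only that general idea; the decorator stands in for an aspect, and the granularity levels and record fields are assumptions rather than the thesis' actual Model of Provenance or implementation:

    import functools
    import time

    PROVENANCE_LOG = []
    GRANULARITY = "coarse"   # switch to "fine" to adapt the collection policy

    def collect_provenance(task):
        """Aspect-like wrapper that records provenance around a workflow task."""
        @functools.wraps(task)
        def wrapper(*args, **kwargs):
            record = {"task": task.__name__, "inputs": (args, kwargs)}
            start = time.time()
            result = task(*args, **kwargs)
            record["output"] = result
            if GRANULARITY == "fine":          # finer grain: add timing metadata
                record["started_at"] = start
                record["duration_s"] = time.time() - start
            PROVENANCE_LOG.append(record)
            return result
        return wrapper

    @collect_provenance
    def normalise(values):
        total = sum(values)
        return [v / total for v in values]

    normalise([1, 2, 7])
    print(PROVENANCE_LOG)

Switching GRANULARITY between "coarse" and "fine" changes what is recorded without touching the task code, which is the kind of separation of concerns the thesis attributes to its MOP- and AOP-based mechanisms.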
APA, Harvard, Vancouver, ISO, and other styles
29

"A MULTI-FUNCTIONAL PROVENANCE ARCHITECTURE: CHALLENGES AND SOLUTIONS." Thesis, 2013. http://hdl.handle.net/10388/ETD-2013-12-1419.

Full text
Abstract:
In service-oriented environments, services are put together in the form of a workflow with the aim of distributed problem solving. Capturing the execution details of the services' transformations is a significant advantage of using workflows. These execution details, referred to as provenance information, are usually traced automatically and stored in provenance stores. Provenance data contains the data recorded by a workflow engine during a workflow execution. It identifies what data is passed between services, which services are involved, and how results are eventually generated for particular sets of input values. Provenance information is of great importance and has found its way into areas of computer science such as bioinformatics, databases, and social and sensor networks. Current exploitation and application of provenance data are very limited, because provenance systems were initially developed for specific applications. Yet applying learning and knowledge discovery methods to provenance data can provide rich and useful information about workflows and services. Therefore, in this work, the challenges with workflows and services are studied to discover the possibilities and benefits of providing solutions by using provenance data. A multifunctional architecture is presented that addresses the workflow and service issues by exploiting provenance data. These challenges include workflow composition, abstract workflow selection, refinement, evaluation, and graph model extraction. The specific contribution of the proposed architecture is that it provides a basis for exploiting the past execution details of services and workflows, together with artificial intelligence and knowledge management techniques, to resolve the major challenges regarding workflows. The architecture is application-independent and could be deployed in any area. The requirements for such an architecture and its building components are discussed, and the responsibilities of the components, related work and the implementation details of the architecture and of each component are presented.
APA, Harvard, Vancouver, ISO, and other styles
30

Decou, Audrey. "Provenance model of the Cenozoic siliciclastic sediments from the western Central Andes (16-21°S): implications for Eocene to Miocene evolution of the Andes." Thesis, 2011. http://hdl.handle.net/11858/00-1735-0000-0006-B303-A.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Haider, Viktoria L. "Evolution and decay of peneplains in the northern Lhasa terrane, Tibetan Plateau." Doctoral thesis, 2014. http://hdl.handle.net/11858/00-1735-0000-0023-990C-4.

Full text
Abstract:
This dissertation deals with the development of near-planar erosional surfaces, referred to throughout as "peneplains", and with the decay of this striking geomorphological feature in the southernmost part of the Tibetan Plateau, the so-called Lhasa block. In the course of this work, new insights into the uplift history and sediment distribution of the study area were obtained. These results contribute to a better understanding of the geodynamic evolution of Asia, which still raises many questions. At the end of the 19th century, peneplains were regarded as metastable geomorphological forms produced by large-scale erosion. The term peneplain and the concept behind it have, however, been debated within the geomorphological community ever since, and to this day there is no standardized or representative definition of this conspicuous, landscape-forming phenomenon. Accordingly, there are only a few approaches to modelling peneplains or computing them with geographic information systems. In this dissertation, idealized peneplains are understood as elevated, uniform, extensive plains with sloping flanks, even though peneplains in the landscape often appear tilted, disturbed by tectonic processes, or already attacked by progressive erosion. Well-preserved peneplains are characteristic particularly of the area around Nam Co, the highest-lying lake in the world, in the northern part of the Lhasa block on the Tibetan Plateau. The peneplains cut across the much older, predominantly granitic rocks exposed there and the adjacent metasediments. Geo- and thermochronological methods (zircon U-Pb, zircon (U-Th)/He, apatite (U-Th)/He and apatite fission-track dating) were applied to determine the cooling and exhumation ages of the granites; in addition to the uplift rate, the exhumation of the granitic rocks could be constrained. Zircon U-Pb dating identified two intrusion groups, at around 118 Ma and 85 Ma. Volcanic activity was also documented and dated to between 63 Ma and 58 Ma. Thermal models based on zircon and apatite (U-Th)/He ages and on apatite fission-track data from the studied granitoids indicate a period of exhumation and cooling from 75 Ma to 55 Ma at a rate of 300 m/Ma, which dropped sharply to 10 m/Ma between 55 Ma and 45 Ma. Cosmogenic nuclide data measured by our collaborators at the University of Münster show very low catchment-wide erosion rates of 6-11 m/Ma and 11-16 m/Ma over the last 10,000 years. These data testify to a still ongoing period of stability that contributes to the preservation of the peneplains. During the sustained phase of erosion and planation, between 3 km and 6 km of rock was removed and transported away from the study region before about 45 Ma. It is likely that the eroded material was carried almost entirely as sediment through the existing river system into the present-day oceans; only comparatively few sediments of this age can be identified within the Lhasa block.
All of the results obtained so far, together with the sediment provenance analysis, support the theory that peneplain formation and its erosional processes took place at low elevation, most probably at sea level. This process was stopped by the collision of the Indian continent with Asia. The resulting crustal thickening raised the peneplain-bearing landscape from sea level to elevations of 5,000 to 7,000 m. The ideal climatic conditions prevailing on the "roof of the world" have subsequently ensured the almost complete preservation of the peneplains. The second part of the dissertation deals with the development of a robust method for computing and mapping peneplains from digital elevation models (DEMs). Freely available DEMs make it possible to analyse and characterize the Earth's surface mathematically and statistically in a representative manner, and this kind of analysis offers an excellent means of characterizing and digitally mapping peneplains with meaningful algorithms. To delimit peneplains clearly from their surroundings in an algorithmic way, a completely new fuzzy-logic approach was applied. A 90 arcsec DEM from the Shuttle Radar Topography Mission (SRTM) was used as the DEM basis. With the aid of a geographic information system (GIS), algorithms were written that take into account four critical parameters describing peneplains: (i) slope, (ii) curvature, (iii) terrain roughness and (iv) relative elevation. To test the suitability of the method, mapping was carried out worldwide on the basis of the SRTM DEM and compared with peneplains already described in the literature. The results obtained for the Appalachians, the Andes, the Massif Central and New Zealand confirm that the model can be applied worldwide and independently of elevation.
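The fuzzy-logic mapping described above combines several DEM-derived criteria into a per-cell membership value. The sketch below is a hypothetical, two-parameter simplification (slope and local roughness only, with invented thresholds and a synthetic DEM); the thesis itself uses four parameters (slope, curvature, terrain roughness and relative elevation) computed from the SRTM DEM in a GIS:

    import numpy as np

    def linear_membership(x, full_at, zero_at):
        """1 where x <= full_at, 0 where x >= zero_at, linear in between."""
        return np.clip((zero_at - x) / (zero_at - full_at), 0.0, 1.0)

    def peneplain_membership(dem, cellsize=90.0):
        # Slope (degrees) from the elevation gradient.
        gy, gx = np.gradient(dem, cellsize)
        slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
        # Local roughness: standard deviation of elevation in a 3x3 neighbourhood.
        pad = np.pad(dem, 1, mode="edge")
        stacks = [pad[i:i + dem.shape[0], j:j + dem.shape[1]]
                  for i in range(3) for j in range(3)]
        roughness = np.std(np.stack(stacks), axis=0)
        # Assumed membership thresholds, chosen only for illustration.
        mu_slope = linear_membership(slope_deg, full_at=2.0, zero_at=10.0)
        mu_rough = linear_membership(roughness, full_at=5.0, zero_at=30.0)
        return np.minimum(mu_slope, mu_rough)   # fuzzy AND (minimum operator)

    dem = np.random.rand(50, 50) * 5 + 5000     # synthetic gentle high-altitude surface
    print(peneplain_membership(dem).mean())

Taking the minimum of the memberships implements a fuzzy AND; cells with values close to 1 would be candidate peneplain surfaces under these assumed thresholds.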
APA, Harvard, Vancouver, ISO, and other styles
32

Howard, K. E. "Provenance of Palaeoproterozoic metasedimentary rocks in the eastern Gawler Craton, Southern Australia: Implications for reconstruction models of Proterozoic Australia." Thesis, 2006. http://hdl.handle.net/2440/123593.

Full text
Abstract:
This item is only available electronically.
Detrital zircon ages obtained from the Corny Point Paragneiss and the Massena Bay Gneiss in the southeastern Gawler Craton, Australia, constrain their deposition to the interval ca. <1880 Ma. The presence of 2020 Ma, 2450 Ma and 2520 Ma detrital zircons within the Corny Point Paragneiss constrains the source region for the sedimentary protoliths to three possible domains within Australia; the Gawler Craton, the Glenburgh Orogen in the Western Australian Proterozoic, and the North Australian Craton, all of which contain rock systems with similar ages. Whole rock εNd (1850Ma) values from the Corny Point Paragneiss range from -1 to -5. These values potentially preclude the Late Archaean to mid Proterozoic crust of the Gawler Craton as a sole or major source region due to its highly evolved average εNd (1850Ma) of around -10. Preclusion of the Gawler Craton as a source is apparently confirmed by Hf isotopic compositions of 2020 Ma detrital zircons from the Corny Point Paragneiss, which have εHf (2020Ma) ranging between +3 to +7. This compares with εHf (2020Ma) of -1 to -4 for zircons from the 2020 Ma Miltalie Gneiss in the Gawler Craton. Available Nd isotopic data suggests that the Glenburgh Orogen is too crustally evolved to have provided the majority of sediment into the Corny Point Paragneiss protolith. The 2020 Ma detrital Hf isotopic compositions of the Corny Point Paragneiss are similar to the 2020 Ma Wildman Siltstone (εHf (2020Ma) +2 to +7) in the Pine Creek Orogen in the North Australian Craton. Two possible scenarios can be extrapolated from the detrital zircon and Nd isotopic data; (1) the Corny Point Paragneiss sediment was derived from a source region within the North Australian Craton and could share source regions with the Wildman Siltstone, or (2) the sediments were derived from a Gawler Craton source region that included a dominant juvenile component of the 2020 Ma Miltalie Gneiss in the adjacent Gawler Craton which has since been eroded. In the first scenario, the absence of connection to the Gawler Craton allows for the Betts and Giles (2006) plate reconstruction model, which proposes that the Corny Point Paragneiss formed part of the North Australia Craton, and was sutured to the Proto Gawler Craton at 1730-1700 Ma. The second scenario highlights a significant limitation in evaluating the significance of provenance data, particularly when considering old potential source terrains that have undergone significant levels of denudation. The proximity of the Corny Point Paragneiss to the rifted southern and eastern margins of the Australian Proterozoic means a thorough evaluation of the palaeogeographic significance of the Corny Point Paragneiss detrital signature requires corresponding datasets from regions such as Antarctica which were formerly contiguous with the Gawler Craton.
Thesis (B.Sc.(Hons)) -- University of Adelaide, School of Physical Sciences, 2006
APA, Harvard, Vancouver, ISO, and other styles
33

Mackey, Glen Nelson. "Provenance of the south Texas Paleocene-Eocene Wilcox Group, western Gulf of Mexico basin : insights from sandstone modal compositions and detrital zircon geochronology." Thesis, 2009. http://hdl.handle.net/2152/ETD-UT-2009-08-206.

Full text
Abstract:
Sandstone modal compositions and detrital zircon U-Pb analysis of the Paleocene-Eocene Wilcox Group of the southern Gulf Coast of Texas indicate long-distance sediment transport primarily from volcanic and basement sources to the west, northwest and southwest. The Wilcox Group of south Texas represents the earliest series of major post-Cretaceous pulses of sand deposition along the western margin of the Gulf of Mexico (GoM). Laramide basement uplifts have long been held to be the provenance of the Wilcox Group, implying that initiation of basement uplifts was the driving factor for this transition from carbonate sedimentation to clastic deposition. To determine the provenance of the Wilcox Group and test this conventional hypothesis, 40 thin sections were point-counted using the Gazzi-Dickinson method to determine sandstone composition and 10 detrital zircon samples were analyzed by LA-ICP-MS to determine U-Pb age spectra for each of the sampled areas. Modal data for sand grain populations suggest mixed sources including basement rocks, magmatic arc rocks and subordinate sedimentary rocks for the Wilcox Group. Zircon age spectra for these sandstones reveal a complex grain assemblage derived from older sediments and crystalline rocks ranging in age from Archean to Cenozoic. Sediment was primarily derived from Laramide uplifted crystalline blocks of the central and southern Rocky Mountains, the Cordilleran arc of western North America, and arc related extrusive and intrusive igneous rock of northern Mexico. Comparisons of Upper and Lower Wilcox zircon age spectra show that more arc related material was deposited in the Lower Wilcox, whereas more basement material was deposited in the Upper Wilcox.
APA, Harvard, Vancouver, ISO, and other styles
34

Gardner, David William. "Sedimentology, stratigraphy, and provenance of the upper Purcell Supergroup, southeastern British Columbia, Canada: implications for syn-depositional tectonism, basin models, and paleogeographic reconstructions." Thesis, 2008. http://hdl.handle.net/1828/911.

Full text
Abstract:
This thesis reports eight measured sections and >400 new detrital zircon U-Pb SHRIMP-II ages from the Mesoproterozoic (~1.4 Ga) upper Purcell Supergroup of southeastern British Columbia, Canada. The goal of my study is to constrain the depositional, tectonic and paleogeographic setting of the upper Purcell Supergroup. Stratigraphic sections across the Purcell Anticlinorium, constructed from measured sections, reveal three syn-depositional growth faults: (1) paleo-Hall Lake, (2) paleo-Larchwood Lake, and (3) paleo-Moyie. Stratigraphic sections were combined into a fence diagram, revealing a large north-northeast trending graben bound to the east by the paleo-Larchwood Lake fault and to the west by the paleo-Hall Lake fault. Five samples were collected for detrital zircon analysis along the eastern extent of exposed Purcell strata; one sample was collected from the western limit of strata. All samples are characterized by subordinate numbers of detrital zircons that yield Paleoproterozoic and Archean ages. Detrital zircon ages from the Sheppard Formation are dominated by 1500, 1700, 1750, and 1850 Ma grains. The overlying Gateway Formation is dominated by 1400-1450, 1700, 1850, and 1900 Ma zircon grains. The overlying Phillips, Roosville (east), and Mount Nelson formations are dominated by detrital zircon ages between 1375-1450 Ma and 1650-1800 Ma. Detrital zircon ages from the Roosville Formation (west) are dominated by 1500-1625 Ma grains. Based on the margin perpendicular orientation of the long axis of syn-depositional grabens relative to Laurentia, and on the presence of syn-depositional aged zircons through the entire sedimentary succession, we interpret the upper Purcell Supergroup to have been deposited in a transpressional pull-apart basin setting, adjacent to a convergent/translational plate margin bound to the west by terranes now located in northeastern Australia.
APA, Harvard, Vancouver, ISO, and other styles
35

Pelletier, Isabelle. "Étude comparative des modes d'acculturation chez des étudiants étrangers provenant d'une société individualiste et d'une société collectiviste /." 2003. http://proquest.umi.com/pqdweb?did=766706561&sid=7&Fmt=2&clientId=9268&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Maher, Anabelle. "Essays in resource economics." Thèse, 2015. http://hdl.handle.net/1866/13583.

Full text
Abstract:
This thesis consists of three essays in natural resource economics. Chapter 2 analyzes the effects of storing a natural resource on welfare and on the resource stock, in the context of rice-fish culture, in which fish are raised in rice paddies alongside the rice crop. I develop a general equilibrium model with three central components: an open-access renewable resource with logistic natural growth, two production sectors, and storage of the good produced from the resource. Consumers store the resource when they speculate that its price will be higher in the future. Storage has an ambiguous effect on welfare, a negative effect on the resource stock in the period in which storage takes place, and a positive effect on the stock in subsequent periods. Chapter 3 examines the effects of skilled-worker migration in a model of interregional trade in the presence of pollution. I develop a two-sector trade model incorporating both pollution and migration to show that interregional trade can affect the pollution level of a country composed of regions with different industrial structures. Worker mobility amplifies the effects of trade on environmental capital: the region with the less (more) polluting technology is affected positively (negatively) by trade, while migration does not affect the trade pattern. Interregional trade is always beneficial for the region with the less polluting technology, which is not always the case for the region with the more polluting technology; if the preference for manufactures is relatively low, the latter region can lose from trade in the long run. Finally, Chapter 4 is co-authored with Yves Richelle. We study the efficient allocation of the water of a lake among different potential users, considering two types of irreversibility: the irreversibility of an investment that creates a fixed damage to the ecosystem, and the irreversibility of water-use rights arising from water legislation (legislative irreversibility). We first determine the value of water for each user and then characterize the optimal allocation of water among users. We show that legislative irreversibility makes it sometimes optimal to reduce the quantity of water allocated to the firm even when there is no rivalry in use, and that it is not always optimal to prevent the damage created by the irreversible investment; we characterize the circumstances in which intervening to prevent the damage is optimal. More generally, we prove that with these irreversibilities the marginal value of water is no longer equalized across users at the efficient allocation, and we show that, in the absence of rivalry in use, unused water should not be treated as a limitless resource to be used in just any way.
APA, Harvard, Vancouver, ISO, and other styles