Dissertations on the topic "Exploitation of public data"

To view other types of publications on this topic, follow the link: Exploitation of public data.

Format your source according to APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the top 50 dissertations for your research on the topic "Exploitation of public data".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the abstract online, when these details are available in the metadata.

Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

Thompson, Julie. "Evolution of multiple alignments : Towards efficient data exploitation and knowledge extraction in the post-genomique era." Université Louis Pasteur (Strasbourg) (1971-2008), 2006. https://publication-theses.unistra.fr/public/theses_doctorat/2006/MAALOUM_Julie_2006.pdf.

Full text of the source
Abstract:
Genomics and proteomics technologies, together with the new systems biology strategies have led to a paradigm shift in bioinformatics. The traditional reductionist approach has been replaced by a more global, integrated view. In this context, new information management systems are now being introduced to collect, store and curate heterogeneous information in ways that will allow its efficient retrieval and exploitation. Multiple sequence alignments provide an ideal environment for the reliable integration of information from a complete genome to a gene and its related products. In the multiple alignment, patterns of conservation and divergence can be used to identify evolutionarily conserved features and important genetic events. In this thesis, three developments are described: (i) a new benchmark for the objective evaluation of multiple alignment algorithms, (ii) a multiple alignment ontology (MAO) for nucleic acid or protein sequences and structures, (iii) an information management system (MACSIMS) that exploits the multiple alignment and the organisation provided by the ontology. MACSIMS has been used in a variety of projects, including complete genome annotation, target characterisation for structural proteomics and the prediction of structural and functional effects of mutations involved in human pathologies. MACSIMS can also be used for the systematic testing of research hypotheses and the rationale is demonstrated by a study of the effectiveness of various sequence/structure characteristics for the prediction of functional sites in proteins. Other potential applications include such fields as the annotation of the numerous hypothetical proteins produced by the genome sequencing projects or the definition of characteristic motifs for specific protein folds. Hopefully, this will also have more wide-reaching consequences in areas such as protein engineering, metabolic modelling, or the development of new drug development strategies
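The abstract's central idea, that patterns of conservation in an alignment column carry usable information, can be illustrated with a toy conservation score. This sketch is not MACSIMS or the thesis's benchmark; the scoring function and example alignment are invented for illustration:

```python
from collections import Counter
from math import log2

def column_conservation(column):
    """Shannon-entropy-based conservation of one alignment column.

    Returns a value in [0, 1]: 1.0 for a fully conserved column,
    lower for more diverse columns. Gaps are counted as a symbol.
    """
    counts = Counter(column)
    n = len(column)
    entropy = -sum((c / n) * log2(c / n) for c in counts.values())
    max_entropy = log2(n) if n > 1 else 1.0
    return 1.0 - entropy / max_entropy

def profile(alignment):
    """Per-column conservation profile of an aligned block of sequences."""
    return [column_conservation(col) for col in zip(*alignment)]

# Three toy aligned protein sequences ('-' is a gap).
alignment = [
    "MKV-LT",
    "MKVALT",
    "MKIALS",
]
scores = profile(alignment)
# Column 0 ('M','M','M') is fully conserved and scores 1.0.
```

Conserved columns (high score) would then be candidate functional or structural sites, which is the kind of inference the multiple alignment environment enables.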
Citation styles: APA, Harvard, Vancouver, ISO, etc.
2

Kaplangil, Diren. "Les enjeux juridiques de l'Open Data : Les données publiques entre la patrimonialisation et dé-patrimonialisation." Thesis, Université Grenoble Alpes, 2022. http://www.theses.fr/2022GRALD001.

Full text of the source
Abstract:
The Legal Issues of Open Data: Public Data between Patrimonialisation and De-patrimonialisation. Since the French Government first committed to its open data policy a decade ago, the issue of the free circulation of public sector information has taken on a new dimension. Considered "open" in the sense of "free of any restrictive rights", public data do not serve only the democratic goal of consolidating the rights of citizens in their relations with the public authorities; rather, they provide an essential "information infrastructure" which constitutes the basis of today's emerging "digital" economy. However, this transformation in the apprehension of public data inevitably raises the question of their legal status, which is still far from being clarified. Whereas some adjustments brought under the open data regime make these data legally closer to the status of "commons", other monopolistic practices coming from public institutions (in particular cultural establishments or their concessionaires) in exploiting the content of their information reveal another approach, one quite close to a form of exclusive property rights. Moreover, opening public data does not only raise the question of the legal nature of this information. Indeed, this information lies at the heart of the French State's public policy on its intangible assets, which seeks to protect and exploit them in ways that may sometimes conflict with the principles of open data. At the crossroads of public and private law, and more specifically of intellectual property law, our thesis therefore focuses on the analysis of the conflictual relationship that has emerged around the process of the proactive release of public sector data online.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
3

Rychlá, Jana. "Marketing bankovních služeb v České spořitelně, a.s." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2008. http://www.nusl.cz/ntk/nusl-221806.

Full text of the source
Abstract:
This diploma thesis deals with the marketing of banking services. It describes the theory of the bank marketing mix, the creation of marketing campaigns, and the management of client relations. The goal is to identify the shortcomings of the marketing methods and campaigns used in Česká spořitelna, to evaluate these methods, and finally to propose a concept for resolving the situation.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
4

Arvizo, Adrian E., and Vincent J. Janowiak. "Field level computer exploitation package." Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion.exe/07Mar%5FArvizo.pdf.

Full text of the source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
5

Géraud, Rémi. "Advances in public-key cryptology and computer exploitation." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE057/document.

Full text of the source
Abstract:
Information security relies on the correct interaction of several abstraction layers: hardware, operating systems, algorithms, and networks. However, protecting each component of the technological stack has a cost; for this reason, many devices are left unprotected or under-protected. This thesis addresses several of these aspects, from a security and cryptography viewpoint. To that effect we introduce new cryptographic algorithms (such as extensions of the Naccache–Stern encryption scheme), new protocols (including a distributed zero-knowledge identification protocol), improved algorithms (including a new error-correcting code, and an efficient integer multiplication algorithm), as well as several contributions relevant to information security and network intrusion. Furthermore, several of these contributions address the performance of existing and newly-introduced constructions
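For readers unfamiliar with the Naccache–Stern scheme the abstract extends, here is a toy sketch of the original deterministic knapsack variant, not the thesis's refinements. The parameters are illustratively tiny and insecure: each message bit selects a small prime, encryption multiplies the primes' secret s-th roots modulo p, and decryption raises to the power s and reads the bits back by divisibility.

```python
from math import gcd, prod

def is_prime(n):
    """Trial-division primality test; fine for these toy sizes."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def keygen(k=8):
    """Public key: modulus p and roots v_i; secret key: exponent s."""
    small_primes = []
    cand = 2
    while len(small_primes) < k:
        if is_prime(cand):
            small_primes.append(cand)
        cand += 1
    p = prod(small_primes) + 1        # find a prime modulus larger than
    while not is_prime(p):            # the product of the small primes
        p += 1
    s = 3
    while gcd(s, p - 1) != 1:         # secret exponent, invertible mod p-1
        s += 2
    s_inv = pow(s, -1, p - 1)
    v = [pow(pi, s_inv, p) for pi in small_primes]  # v_i = p_i^(1/s) mod p
    return (p, v, small_primes), s

def encrypt(bits, public):
    p, v, _ = public
    c = 1
    for b, vi in zip(bits, v):
        if b:
            c = (c * vi) % p
    return c

def decrypt(c, public, s):
    p, _, small_primes = public
    m = pow(c, s, p)                  # equals the product of selected primes
    return [1 if m % pi == 0 else 0 for pi in small_primes]

public, secret = keygen()
bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert decrypt(encrypt(bits, public), public, secret) == bits
```

Decryption is exact because the product of the selected small primes stays below p, so the modular result is the integer product itself.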
Citation styles: APA, Harvard, Vancouver, ISO, etc.
6

Procopio, Pietro. "Foreground implications in the scientific exploitation of CMB data." Paris 7, 2009. http://www.theses.fr/2009PA077252.

Full text of the source
Abstract:
The first part of my thesis work focuses on the CMB photon distribution function. I show the implementations and updating phases characterizing a numerical integration code (KYPRIX) for the solution of the Kompaneets equation in a cosmological context. Physical implementations were also performed: the introduction of the cosmological constant; the choice of the primordial chemical abundances of H and He is now possible; the ionization fractions for the species involved have been introduced; an optional interface was created that links KYPRIX with codes, like RECFAST, in order to calculate a recombination history of the ionization fraction of H and He. All of the physical implementations contributed to more realistic simulations of the spectral distortions of the CMB. During my second stage at APC I performed several tests on the Planck Sky Model. The tests involved the latest two releases of the Galactic emission model, the Galactic foreground templates derived from WMAP data, and a clean CMB anisotropy map. The last release of the PSM total intensity prediction of the Galactic processes showed results consistent with the previous ones for almost all the frequencies tested, while it still needs some tuning at 23 GHz, where synchrotron emission and free-free emission are more prominent. I started using SMICA (a component separation technique) during my first stage at APC, in 2007. I used SMICA, and another filter (an FFT filter) I developed, for a reprocessing of the IRIS map set. The FFT filter was developed for this purpose and I used it only on localized regions, not on the full-sky maps. The dramatic improvements obtained on the IRIS maps are clearly visible by eye.
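For reference, the standard form of the Kompaneets equation that KYPRIX integrates (quoted from the general literature, not from the thesis itself) is:

```latex
% Kompaneets equation: evolution of the photon occupation number n(x)
% under Compton scattering, with x = h\nu / k_B T_e and y the Compton parameter
\frac{\partial n}{\partial y}
  = \frac{1}{x^{2}} \frac{\partial}{\partial x}
    \left[ x^{4} \left( \frac{\partial n}{\partial x} + n + n^{2} \right) \right]
```

The diffusion term describes Doppler broadening, while the n and n² terms account for electron recoil and stimulated scattering.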
Citation styles: APA, Harvard, Vancouver, ISO, etc.
7

Elmi, Saïda. "An Advanced Skyline Approach for Imperfect Data Exploitation and Analysis." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2017. http://www.theses.fr/2017ESMA0011/document.

Full text of the source
Abstract:
The main purpose of this thesis is to study an advanced database tool, the skyline operator, in the context of imperfect data modeled by the evidence theory. We first address the fundamental question of how to extend the dominance relationship to evidential data, and provide optimization techniques for improving the efficiency of the evidential skyline. We then introduce an efficient approach for querying and processing the evidential skyline over multiple and distributed servers. In addition, we propose efficient methods to maintain the skyline results in the evidential database context when a set of objects is inserted or deleted. The idea is to incrementally compute the new skyline, without restarting the initial operation from scratch. In a second step, we introduce the top-k skyline query over imperfect data and develop efficient algorithms for its computation. Furthermore, since the evidential skyline is often too large to be analyzed, we define the set SKY² to refine the evidential skyline and retrieve the best evidential skyline objects (the stars). We develop suitable algorithms based on scalable techniques to efficiently compute the evidential SKY². Extensive experiments were conducted to show the efficiency and the effectiveness of our approaches.
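As background for readers unfamiliar with the operator, a minimal skyline computation on exact (non-evidential) data can be sketched as follows; the evidential extension studied in the thesis is not reproduced here, and the example data are invented:

```python
def dominates(a, b):
    """a dominates b if a is at least as good in every dimension
    and strictly better in at least one (lower is better here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points):
    """Naive O(n^2) skyline: keep the points dominated by no other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hotels as (price, distance_to_beach): cheaper and closer are both better.
hotels = [(50, 8), (60, 2), (80, 1), (70, 6), (55, 9)]
sky = skyline(hotels)
# (70, 6) and (55, 9) are dominated and drop out of the skyline.
```

In the evidential setting each attribute value carries a mass function rather than a single number, so the `dominates` test itself must be redefined, which is precisely the thesis's first contribution.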
Citation styles: APA, Harvard, Vancouver, ISO, etc.
8

Hayashi, Shogo. "Information Exploration and Exploitation for Machine Learning with Small Data." Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263774.

Full text of the source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
9

Kurdej, Marek. "Exploitation of map data for the perception of intelligent vehicles." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2174/document.

Full text of the source
Abstract:
This thesis is situated in the domains of robotics and data fusion, and concerns geographic information systems. We study the utility of adding digital maps, which model the urban environment in which the vehicle evolves, as a virtual sensor improving the perception results. Indeed, maps contain a phenomenal quantity of information about the environment: its geometry, topology and additional contextual information. In this work, we extract road surface geometry and building models in order to deduce the context and the characteristics of each detected object. Our method is based on an extension of occupancy grids: the evidential perception grids. It permits us to model explicitly the uncertainty related to the map and sensor data. By this means, the approach also presents the advantage of representing homogeneously the data originating from various sources: lidar, camera or maps. The maps are handled on equal terms with the physical sensors. This approach allows us to add geographic information without imputing undue importance to it, which is essential in the presence of errors. In our approach, the information fusion result, stored in a perception grid, is used to predict the state of the environment at the next instant. Estimating the characteristics of dynamic elements does not satisfy the hypothesis of a static world, so it is necessary to adjust the level of certainty attributed to these pieces of information. We do so by applying temporal discounting. Because existing methods are not well suited for this application, we propose a family of discount operators that take into account the type of information handled. The studied algorithms have been validated through tests on real data. We have developed prototypes in Matlab and C++ software based on the Pacpus framework, with which we present the results of experiments performed in real conditions.
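The thesis's application-specific discount operators are not described in the abstract; the classical evidential (Dempster–Shafer) discounting they generalize can be sketched as follows. The occupancy frame and the discount rate are illustrative, not taken from the thesis:

```python
def discount(masses, alpha):
    """Classical Shafer discounting of a mass function.

    masses: dict mapping frozenset hypotheses -> belief mass (sums to 1).
    alpha:  discount rate in [0, 1]; 0 keeps the evidence, 1 discards it.
    The removed mass is transferred to total ignorance (the full frame).
    """
    frame = frozenset().union(*masses)   # union of all focal elements
    out = {A: (1 - alpha) * m for A, m in masses.items() if A != frame}
    out[frame] = (1 - alpha) * masses.get(frame, 0.0) + alpha
    return out

# Occupancy of one grid cell over the frame {free, occupied}.
m = {frozenset({"occupied"}): 0.7,
     frozenset({"free"}): 0.1,
     frozenset({"free", "occupied"}): 0.2}
m_aged = discount(m, 0.3)   # weaken the cell's evidence after one time step
```

Applied at every time step, this is exactly the "temporal discounting" idea: old evidence about dynamic cells decays toward ignorance instead of being trusted forever.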
Citation styles: APA, Harvard, Vancouver, ISO, etc.
10

Jacobs, Thomas Richard. "Exploitation of thread- and data-parallelism in video coding algorithms." Thesis, Loughborough University, 2007. https://dspace.lboro.ac.uk/2134/34963.

Full text of the source
Abstract:
MPEG-2, MPEG-4 and H.264 are currently the most popular video coding algorithms for consumer devices. The complexity and computational intensity of their respective encoding processes, and the associated power consumption, currently limit their full deployment in portable or cost-sensitive consumer devices. This thesis takes two approaches to addressing these performance issues: first, the static partitioning of applications' control-flow graphs using thread-level parallelism to share the computational load between multiple processors in a System-on-Chip multiprocessor configuration; second, two separate design methodologies, one founded in RTL and the other in SystemC, applied to investigate dedicated vector architectures for accelerating video encoding through data-level parallel techniques. By implementing two vector datapaths, one from each methodology, a comparison of the two is made. The key contributions of the work are summarised below: (1) demonstration of the reduction in computational workload per processor by exploiting thread-level parallelism; (2) static partitioning of three state-of-the-art video encoders, namely MPEG-2, MPEG-4 and H.264, to permit their execution in a multi-processor environment; (3) design of a vector datapath to accelerate MPEG-4 video encoding by implementing data-level parallelism; (4) a comparative study of the potential of the ESL language SystemC in the design methodology, in comparison with RTL.
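As a much simplified illustration of the thread-level parallelism the thesis exploits (not its static control-flow-graph partitioning), mutually independent frames can be encoded concurrently by a worker pool. The per-frame "encoder" below is a stand-in kernel, a toy run-length encoder, chosen only so the example is self-contained:

```python
from concurrent.futures import ThreadPoolExecutor

def encode_frame(frame):
    """Stand-in for a per-frame encoding kernel (hypothetical workload):
    run-length encode a list of pixel values into (value, count) pairs."""
    out = []
    for px in frame:
        if out and out[-1][0] == px:
            out[-1] = (px, out[-1][1] + 1)
        else:
            out.append((px, 1))
    return out

def encode_video(frames, workers=4):
    """Thread-level parallelism: frames with no mutual dependency
    (e.g. intra-coded frames) are encoded concurrently."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encode_frame, frames))

frames = [[0, 0, 1, 1, 1], [2, 2, 2, 2, 3]]
encoded = encode_video(frames)
```

A real encoder has inter-frame dependencies (motion prediction), which is why the thesis partitions the control-flow graph statically rather than treating every frame as independent.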
Citation styles: APA, Harvard, Vancouver, ISO, etc.
11

Ramljak, Dusan. "Data Driven High Performance Data Access." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/530207.

Full text of the source
Abstract:
Computer and Information Science
Ph.D.
Low-latency, high-throughput mechanisms to retrieve data become increasingly crucial as cyber and cyber-physical systems pour out increasing amounts of data that often must be analyzed in an online manner. Generally, as the data volume increases, the marginal utility of an "average" data item tends to decline, which requires greater effort in identifying the most valuable data items and making them available with minimal overhead. We believe that data-analytics-driven mechanisms have a big role to play in solving this needle-in-the-haystack problem. We rely on the claim that efficient pattern discovery and description, coupled with the observed predictability of complex patterns within many applications, offers significant potential to enable many I/O optimizations. Our research covers exploitation of the storage hierarchy for data-driven caching and tiering, reduction of the distance between data and computations, removal of redundancy in data, use of sparse representations of data, the impact of data access mechanisms on resilience, energy consumption and storage usage, and the enablement of new classes of data-driven applications. For caching and prefetching, we offer a powerful model that separates the process of access prediction from the data retrieval mechanism. Predictions are made on a data-entity basis and use the notions of "context" and its aspects, such as "belief", to uncover and leverage future data needs. This approach allows truly opportunistic utilization of predictive information. We elaborate on which aspects of context we use in areas other than caching and prefetching, and why each is appropriate to its situation. We present in more detail the methods we have developed: BeliefCache for data-driven caching and prefetching, and AVSC for pattern-mining-based compression of data.
In BeliefCache, using a belief, an aspect of context representing an estimate of the probability that a storage element will be needed, we developed the modular framework BeliefCache to make unified, informed decisions about that element or a group of elements. For the workloads we examined, we were able to capture complex non-sequential access patterns better than a state-of-the-art framework for optimizing cloud storage gateways. Moreover, our framework is able to adjust to variations in the workload faster. It also does not require a static workload to be effective, since its modular design allows it to discover and adapt to changes in the workload. In AVSC, using an aspect of context to gauge the similarity of events, we perform our compression by keeping relevant events intact and approximating other events. We do that in two stages: we first generate a summarization of the data, then approximately match the remaining events with the existing patterns if possible, or add the patterns to the summary otherwise. We show gains over plain lossless compression for a specified amount of accuracy for the purpose of identifying the state of the system, and a clear tradeoff between compressibility and fidelity. In the other research areas mentioned, we present challenges and opportunities in the hope that they will spur researchers to further examine those issues in the space of rapidly emerging data-intensive applications. We also discuss how ideas from our research could be applied in other domains to provide high-performance data access.
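The abstract does not give BeliefCache's internals; the following is only a hypothetical sketch of what belief-driven eviction could look like, with the class, names, and policy all assumed for illustration. The point it demonstrates is the separation of access prediction (the belief) from the retrieval mechanism (the cache):

```python
class BeliefDrivenCache:
    """Toy cache whose eviction is driven by a per-item 'belief': an
    externally supplied estimate of the probability the item will be
    needed soon. Unlike LRU, recency of use plays no direct role."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}      # key -> cached value
        self.belief = {}     # key -> probability the key is needed soon

    def update_belief(self, key, probability):
        """Predictor side: beliefs are updated independently of accesses."""
        self.belief[key] = probability

    def put(self, key, value):
        """Retrieval side: on overflow, evict the least-believed-in item."""
        if len(self.store) >= self.capacity and key not in self.store:
            victim = min(self.store, key=lambda k: self.belief.get(k, 0.0))
            del self.store[victim]
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)

cache = BeliefDrivenCache(capacity=2)
cache.update_belief("a", 0.9)
cache.update_belief("b", 0.1)
cache.update_belief("c", 0.5)
cache.put("a", "blk-a")
cache.put("b", "blk-b")
cache.put("c", "blk-c")   # evicts "b", the least-believed-in resident
```

Because beliefs arrive through a separate interface, any predictor (pattern mining, workload models) can drive the same cache, which is the "truly opportunistic" use of predictive information the abstract describes.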
Temple University--Theses
Citation styles: APA, Harvard, Vancouver, ISO, etc.
12

Lee, Kenneth Sydney. "Characterization and Exploitation of GPU Memory Systems." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/34215.

Full text of the source
Abstract:
Graphics Processing Units (GPUs) are workhorses of modern performance due to their ability to achieve massive speedups on parallel applications. The massive number of threads that can be run concurrently on these systems allows applications with data-parallel computations to achieve better performance than on traditional CPU systems. However, the GPU is not perfect for all types of computation; its massively parallel SIMT architecture can still be constraining in terms of achievable performance. GPU-based systems will typically only achieve between 40% and 60% of their peak performance. One of the major problems affecting this efficiency is the GPU memory system, which is tailored to the needs of graphics workloads instead of general-purpose computation. This thesis intends to show the importance of memory optimizations for GPU systems. In particular, this work addresses the problems of data transfer and global atomic memory contention. Using the novel AMD Fusion architecture, we gain overall performance improvements over discrete GPU systems for data-intensive applications. The fused architecture systems offer an interesting trade-off by increasing data transfer rates at the cost of some raw computational power. We characterize the performance of the different memory paths that are possible because of the shared memory space present on the fused architecture. In addition, we provide a theoretical model which can be used to correctly predict the comparative performance of memory movement techniques for a given data-intensive application and system. In terms of global atomic memory contention, we show improvements in scalability and performance for global synchronization primitives by avoiding contentious global atomic memory accesses. In general, this work shows the importance of understanding the memory system of the GPU architecture to achieve better application performance.
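The thesis's theoretical model is not reproduced in the abstract; a generic back-of-the-envelope estimate of when a discrete GPU's kernel speedup outweighs its bus-transfer cost (all rates and parameters assumed for illustration, not taken from the thesis) might look like:

```python
def discrete_gpu_wins(bytes_moved, flops, bus_gbps, gpu_gflops, cpu_gflops):
    """Back-of-the-envelope cost model (assumed, not the thesis's):
    the discrete GPU wins when its compute time plus the time to move
    the data over the bus beats the CPU's compute time."""
    transfer_s = bytes_moved / (bus_gbps * 1e9)
    gpu_s = flops / (gpu_gflops * 1e9) + transfer_s
    cpu_s = flops / (cpu_gflops * 1e9)
    return gpu_s < cpu_s

# A compute-heavy kernel amortizes its transfers ...
heavy = discrete_gpu_wins(1e8, 1e12, bus_gbps=8, gpu_gflops=1000, cpu_gflops=50)
# ... while a transfer-bound kernel may not.
light = discrete_gpu_wins(1e9, 1e9, bus_gbps=8, gpu_gflops=1000, cpu_gflops=50)
```

A fused architecture effectively raises `bus_gbps` while lowering `gpu_gflops`, which is exactly the trade-off the abstract describes for data-intensive applications.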
Master of Science
Citation styles: APA, Harvard, Vancouver, ISO, etc.
13

Fragkou, Aikaterini. "Greek Primary Educators' Perceptions of Strategies for Mitigating Cyber Child Exploitation." ScholarWorks, 2018. https://scholarworks.waldenu.edu/dissertations/5335.

Full text of the source
Abstract:
Cyber child exploitation is a problem in Greece due to the economic crisis and the resulting lack of government focus on social improvements. Research reveals the importance of educating school teachers about the potential for cyber exploitation of children and argues that early detection of child-focused cybercrimes will decrease the prevalence of child exploitation. The purpose of this interpretive qualitative study was to explore the phenomenon of cyber child exploitation in Greece and to identify strategies teachers may employ to identify and avert it. Grounded theory provided the framework for this research. The sample consisted of 20 school teachers from a private primary school in suburban Greece. The 20 teachers were over 21 years old, currently certified as teachers and working in a primary school, and willing to share, on a voluntary basis, information about their experiences and concerns with cyber child exploitation awareness among students as well as parents. One-to-one interviews were conducted to gather data. Coding was the procedure followed to divide the interview data and rearrange it based on common patterns. The resulting themes revealed that no consistent strategies were used to protect children, that teachers play a significant role in the prevention of cyber child exploitation, and that there is a need for the professional development of programs to protect children. Implications for positive social change suggest that educational institutions will help protect children as teachers become more knowledgeable about specific measures to effectively recognize cyber predators. With the guidance of well-informed teachers, students may learn to use the World Wide Web in an effective fashion while being able to avoid the dangers posed by cyber predators.
14

De, Vettor Pierre. "A Resource-Oriented Architecture for Integration and Exploitation of Linked Data." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1176/document.

Full text available
Abstract:
In this thesis, we focus on the integration of raw data coming from heterogeneous, multi-origin data sources on the Web. The global objective is to provide a generic and adaptive architecture able to analyze and combine this heterogeneous, informal, and sometimes meaningless data into a coherent smart data set. We define smart data as significant, semantically explicit data, ready to be used to fulfill the stakeholders' objectives. This work is motivated by a live scenario from the French Audience Labs company. In this report, we propose new models and techniques to adapt the combination and integration process to the diversity of data sources. We focus on transparency and dynamicity in data source management, scalability and responsiveness with respect to the number of data sources, adaptability to data source characteristics, and finally consistency of the produced data (coherent data, without errors or duplicates). In order to address these challenges, we first propose a meta-model to represent the variety of data source characteristics, related to access (URI, authentication), extraction (request format), or physical capabilities (volume, latency). Relying on this coherent formalization of data sources, we define different data access strategies in order to adapt access and processing to data source capabilities. With the help of these models and strategies, we propose a distributed resource-oriented software architecture, where each component is freely accessible through REST via its URI. The orchestration of the different tasks of the integration process can then be done in an optimized way with respect to data source and data characteristics: an adapted workflow is generated in which tasks are prioritized in order to speed up the process and limit the quantity of data transferred.
In order to improve the data quality of our approach, we then focus on the uncertainty that can appear in a Web context and propose a model to represent it. We introduce the concept of an uncertain Web resource, based on a probabilistic model where each resource can have several possible representations, each with a probability. This approach is the basis of a further optimization of the architecture, allowing uncertainty to be taken into account during the combination process.
15

Debuse, J. C. W. "Exploitation of modern heuristic techniques within a commercial data mining environment." Thesis, University of East Anglia, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389330.

Full text available
Abstract:
The development of information technology allows organisations to gather and store ever increasing quantities of data. This data, although not often collected specifically for such a purpose, may be processed to extract knowledge which is interesting, novel and useful. Such processing, known as data mining, demands algorithms which can efficiently 'mine' through the large volumes of data and extract patterns of interest. Modern heuristic techniques are a class of optimisation algorithms, which solve problems by searching through the space containing all possible solutions. They have been applied to a wide variety of such problems with great success, which suggests that they may also prove useful for data mining. Conducting a search through the space of all patterns within a database using such techniques is likely to yield useful information. Within this thesis, it is demonstrated that modern heuristic techniques may be successfully applied to a wide range of data mining problems. The results presented highlight the suitability of such algorithms for the demands of the commercial environment; as a consequence of this, much of the work undertaken has become incorporated within real business processes, bringing considerable savings. A variety of algorithmic enhancements are also investigated, yielding important results for both the data mining and heuristics fields.
16

Smith, Ashley Nicole. "End-to-End Classification Process for the Exploitation of Vibrometry Data." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1421104791.

Full text available
17

McGuinness, Christopher. "Characterizing Remote Sensing Data Compression Distortion for Improved Automated Exploitation Performance." University of Dayton / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1524844209730534.

Full text available
18

Ponchateau, Cyrille. "Conception et exploitation d'une base de modèles : application aux data sciences." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2018. http://www.theses.fr/2018ESMA0005/document.

Full text available
Abstract:
It is common practice in experimental science to use time series to represent experimental results, which usually come as lists of values in chronological order (indexed by time), generally obtained via sensors connected to the studied physical system. Those series are analyzed to obtain a mathematical model that allows describing the data and thus understanding and explaining the behavior of the studied system. Nowadays, storage and analysis technologies for time series are numerous and mature, but storage and management technologies for mathematical models, and for linking them to experimental numerical data, are both scarce and recent. Yet mathematical models have an essential role to play in the interpretation and validation of experimental results, and an adapted storage system would ease their management and reusability. This work aims at developing a models database to manage mathematical models and at providing a "query by data" system to help retrieve or identify a model from an experimental time series. In this work, I describe the design (from the modeling of the system to its software architecture) of the models database and the extensions that enable the "query by data" feature. Then, I describe the prototype of the models database that I implemented, as well as the results obtained from testing it.
19

Axetun, Magnus. "Securing hospitals from exploitation of hardware ports." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-143259.

Full text available
Abstract:
Electronic devices are widely used in today's hospitals and the possibilities they offer are increasing every day. The devices are often embedded systems that run outdated operating systems and need high uptime, which makes them vulnerable to malicious software (malware). This thesis examines the ways malware can propagate through the Universal Serial Bus (USB) with the help of social engineering. Valuable assets are defined and different threat scenarios against these assets are presented. Lastly, the scenarios are evaluated based on which assets they impact and how to effectively mitigate the threats they present. Short- and long-term mitigations are presented to secure the devices from a broader perspective.
20

Peloton, Julien. "Data analysis and scientific exploitation of the CMB B-modes experiment, POLARBEAR." Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCC154.

Full text available
Abstract:
Over the last two decades cosmology has been transformed from a data-starved to a data-driven, high-precision science. This transformation happened thanks to improved observational techniques, allowing the collection of progressively bigger and more powerful data sets. Studies of the Cosmic Microwave Background (CMB) anisotropies have played, and continue to play, a particularly important and impactful role in this process. The huge data sets produced by recent CMB experiments pose new challenges for the field due to their volume and complexity. Their successful resolution requires combining mathematical, statistical and computational methods, all of which form a keystone of modern CMB data analysis. In this thesis, I describe the data analysis of the first data set produced by one of the most advanced current CMB experiments, POLARBEAR, and the major results it produced. The POLARBEAR experiment is a leading CMB polarization experiment aiming at the detection and characterization of the so-called B-mode signature of the CMB polarization. This is one of the most exciting topics in current CMB research, which has only just started yielding new insights into cosmology, in part thanks to the results discussed hereafter. In this thesis I first describe the modern cosmological model, focusing on the physics of the CMB, in particular its polarization properties, and providing an overview of past and current experiments and results. Subsequently, I present the POLARBEAR instrument, the data analysis of its first-year data set and the scientific results drawn from it, emphasizing my major contributions to the overall effort. In the last chapter, in the context of the next generation of CMB B-mode experiments, I present a more systematic study of the impact of the so-called E-to-B leakage on the performance forecasts of CMB B-mode experiments, comparing several methods including the pure pseudospectrum method and the minimum variance quadratic estimator.
In particular, I detail how the minimum variance quadratic estimator can be used, in the case of azimuthally symmetric sky patches, to estimate cosmological parameters efficiently, and I present an efficient implementation based on existing parallel algorithms for computing spherical harmonic transforms.
21

Coble, Keith. "Drowning in Data, Starving for Knowledge: OMEGA Data Environment." International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/605343.

Full text available
Abstract:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The quantity of T&E data has grown in step with the increase in computing power and digital storage, but T&E data management and exploitation technologies have not kept pace with this exponential growth. New approaches to the challenges posed by this data explosion must accommodate continued growth while integrating seamlessly with the existing body of work. Object-oriented data management provides a framework that handles the continued rapid growth in computing speed and in the amount of data gathered, while also supporting legacy integration. The OMEGA Data Environment is one of the first commercially available examples of this emerging class of OODM applications.
22

Welte, Anthony. "Spatio-temporal data fusion for intelligent vehicle localization." Thesis, Compiègne, 2020. http://bibliotheque.utc.fr/EXPLOITATION/doc/IFD/2020COMP2572.

Full text available
Abstract:
Accurate localization is an essential capability for vehicles to navigate autonomously on the road. It can be achieved through already available sensors and new technologies (lidars, smart cameras); these sensors, combined with highly accurate maps, yield greater accuracy. In this work, the benefits of storing and reusing information in memory (in data buffers) are explored. Localization systems need to perform high-frequency estimation, map matching, calibration and error detection. A framework composed of several processing layers is proposed and studied. A main filtering layer estimates the vehicle pose while other layers address the more complex problems. High-frequency state estimation relies on proprioceptive measurements combined with GNSS observations. Calibration is essential to obtain an accurate pose. By keeping state estimates and observations in a buffer, the observation models of these sensors can be calibrated; this is achieved using smoothed estimates in place of a ground truth. Lidars and smart cameras provide measurements that can be used for localization but raise matching issues with map features. In this work, the matching problem is addressed over a spatio-temporal window, resulting in a more detailed picture of the environment. The state buffer is adjusted using the observations and all possible matches. Although using mapped features for localization enables greater accuracy, this holds only if the map can be trusted. An approach using the post-smoothing residuals has been developed to detect map changes and either mitigate or reject the affected features.
23

Skeppstedt, David. "Identification and Exploitation of Vulnerabilities in a Large-Scale IT System." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-261423.

Full text available
Abstract:
This thesis presents the results of a vulnerability assessment and exploit development targeting a large-scale IT system. Penetration testing and threat modelling were used to identify vulnerabilities in the system. This resulted in the identification of five vulnerabilities and the development of a reliable denial-of-service exploit using an authentication bypass and a stack-based buffer overflow. The consequences of the vulnerabilities and the exploit are discussed and set into a broader perspective. The conclusion is that the results from this thesis can help improve the security of the IT system; the identification of additional vulnerabilities could, however, lead to a more potent exploit.
24

Garnaud, Eve. "Dépendances fonctionnelles : extraction et exploitation." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00951619.

Full text available
Abstract:
Functional dependencies provide semantic information about the data in a table by highlighting the correlation links that bind them. In this thesis, we address the problem of extracting these dependencies by proposing a unified framework for discovering any type of functional dependency (key dependencies, conditional functional dependencies, with either exact or approximate validity). Our algorithm, ParaCoDe, runs in parallel over the candidates, thus reducing the overall computation time. As a result, it is highly competitive with the sequential approaches known to date. The dependencies satisfied on a table are then used to solve the problem of partially materializing the data cube. We present a characterization of the optimal solution, in which the cost of each query is bounded by a previously fixed performance threshold and whose size is minimal. This specification of the solution provides a single framework for describing, and hence formally comparing, data cube summarization techniques.
25

Lassalle, Guillaume. "Exploitation of hyperspectral data for assessing vegetation health under exposure to petroleum hydrocarbons." Thesis, Toulouse, ISAE, 2019. http://www.theses.fr/2019ESAE0030.

Full text available
Abstract:
Oil exploration and contamination monitoring remain limited in regions covered by vegetation. Natural seepages and oil leakages due to facility failures are often masked by the foliage, making the current technologies used for detecting crude oil and petroleum products ineffective. However, the exposure of vegetation to oil affects its health and, consequently, its optical properties in the [400:2500] nm domain. This suggests that seepages and leakages can be detected indirectly, by analyzing vegetation health through its spectral reflectance. Based on this assumption, this thesis evaluates the potential of airborne hyperspectral imagery with high spatial resolution for detecting and quantifying oil contamination in vegetated regions. To achieve this, a three-step multiscale approach was adopted. The first step aimed at developing a method for detecting and characterizing the contamination under controlled conditions, by exploiting the optical properties of Rubus fruticosus L. The proposed method combines 14 vegetation indices in classification and allows various oil contaminants to be detected accurately, from leaf to canopy scale. Its use under natural conditions was validated on a contaminated mud pit colonized by the same species. During the second step, a method for quantifying total petroleum hydrocarbons, based on inverting the PROSPECT radiative transfer model, was developed. The method exploits the pigment content of leaves, estimated from their spectral signature, to predict the level of hydrocarbon contamination in soils accurately. The last step of the approach demonstrated the robustness of the two methods using airborne imagery, where they proved effective for detecting and quantifying mud pit contamination. Another quantification method, based on multiple regression, was also proposed. At the end of this thesis, the three proposed methods had been validated for use in the field, at leaf and canopy scales, as well as on airborne hyperspectral images with high spatial resolution.
Their performance depends, however, on the species, the season and the level of soil contamination. A similar approach was conducted under tropical conditions, allowing the development of a contamination quantification method adapted to this context. With a view to operational use, a significant effort is still required to extend the scope of the methods to other contexts and to anticipate their use on satellite- and drone-borne hyperspectral sensors. Finally, the contribution of active remote sensing (radar and LiDAR) should be considered in further research, in order to overcome some of the limits specific to passive optical remote sensing.
26

Bouali, Tarek. "Platform for efficient and secure data collection and exploitation in intelligent vehicular networks." Thesis, Dijon, 2016. http://www.theses.fr/2016DIJOS003/document.

Full text available
Abstract:
Nowadays, the automotive sector is undergoing a tremendous evolution driven by the growth of communication technologies, environmental sensing and perception aptitudes, and the storage and processing capacities available in recent vehicles. Indeed, a car has become a kind of intelligent mobile agent able to perceive its environment, sense and process data using on-board systems, and interact with other vehicles or with existing roadside infrastructure. These advances stimulate the development of several kinds of applications that enhance driving safety and efficiency and make travel more comfortable. However, developing such advanced applications relies heavily on the quality of the collected data and can therefore be realized only with secure data collection and efficient data processing and analysis. Data collection in a vehicular network has always been a real challenge because of the specific characteristics of these highly dynamic networks (frequently changing topology, high vehicle speeds, and frequent network fragmentation), which lead to opportunistic and short-lived communications. Security remains another weak aspect of these wireless networks, since they are by nature vulnerable to various kinds of attacks aiming to falsify collected data and compromise their integrity. Furthermore, collected data are not meaningful by themselves and cannot be interpreted or understood if shown directly to a driver or sent to other nodes in the network; they must be processed and analyzed to extract the meaningful features needed to build reliable applications. In addition, applications always have different requirements regarding quality of service (QoS). Several research investigations and projects have been conducted to overcome the aforementioned challenges; however, they fall short of perfection and still suffer from some weaknesses.
For this reason, this thesis focuses on developing a platform for secure and efficient data collection and exploitation that provides vehicular network users with applications easing their travel, with protected and available connectivity. We first propose a solution to deploy an optimized number of data harvesters to collect data from an urban area. We then propose a new secure intersection-based routing protocol that relays data to a destination based on a monitoring architecture able to detect and evict malicious vehicles. This protocol is subsequently enhanced with an intrusion detection and prevention mechanism that uses Kalman filters to shrink the vulnerability window and detect attackers before they carry out their attacks. In the second part of the thesis, we concentrate on the exploitation of collected data by developing an application that computes the most economical itinerary, in a refined manner, for drivers and fleet-management companies. This solution is based on several factors that affect fuel consumption, provided by the vehicles themselves and by other sources on the Internet accessible via specific APIs, and aims to save both money and time. Finally, a spatio-temporal mechanism for choosing the best available communication medium is developed. It is based on fuzzy logic to ensure a smooth and seamless handover, and considers information collected from the network, users, and applications to preserve a high quality of service.
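The Kalman-filter-based intrusion prevention mentioned in this abstract can be illustrated with a minimal sketch. The thesis's actual state model and thresholds are not given here; the one-dimensional constant-state filter and the 3-sigma innovation gate below are assumptions for illustration only:

```python
import math

class ScalarKalman:
    """1-D constant-state Kalman filter over a vehicle's reported speed.

    Flags a report as suspicious when its innovation (measurement minus
    prediction) exceeds `k` standard deviations of the innovation variance.
    """

    def __init__(self, x0, p0=1.0, q=0.1, r=1.0, k=3.0):
        self.x, self.p = x0, p0   # state estimate and its variance
        self.q, self.r = q, r     # process and measurement noise variances
        self.k = k                # gating threshold in standard deviations

    def step(self, z):
        # Predict: the state is assumed constant, uncertainty grows by q.
        p_pred = self.p + self.q
        # Innovation (residual) and its variance.
        nu = z - self.x
        s = p_pred + self.r
        suspicious = abs(nu) > self.k * math.sqrt(s)
        if not suspicious:
            # Update the estimate only with plausible measurements.
            gain = p_pred / s
            self.x = self.x + gain * nu
            self.p = (1.0 - gain) * p_pred
        return suspicious

# Plausible speed reports are absorbed; a falsified jump is flagged.
f = ScalarKalman(x0=50.0)
flags = [f.step(z) for z in [50.5, 49.8, 50.2, 120.0, 50.1]]
```

A vehicle whose reported values drift plausibly keeps updating the filter; a falsified outlier fails the innovation gate and is flagged without corrupting the estimate.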
APA, Harvard, Vancouver, ISO, and other styles
27

Masson, Romain. "La valorisation des biens publics." Thesis, Paris 10, 2018. http://www.theses.fr/2018PA100094.

Full text of the source
Abstract:
This research aims to identify and define the concept of valorization as applied to public property, drawing on its double foundation: the right of property and the proper use of public funds. The concept rests on two components, exploitation and disposal, which bring out the multiple forms of valorization: economic, social, and environmental. These manifestations of valorization renew the analysis, allowing a better understanding of what is at stake in the reform of public property law, of how valorization has shaped that law, and of the developments to come. Thus, the convergence of the public-domain and private-domain regimes has made it possible to relax and modernize the valorization tools and the legal principles governing the public domain. This convergence should lead to a unification of jurisdiction in favour of the administrative judge. Moreover, under the impetus of valorization, new obligations are being imposed on public owners: competitive award of occupancies of the public domain, inventory of assets, and forward-looking valorization.
APA, Harvard, Vancouver, ISO, and other styles
28

Arnaud, Bérenger. "Exploitation et partage de données hétérogènes et dynamiques." Thesis, Montpellier 2, 2013. http://www.theses.fr/2013MON20025/document.

Full text of the source
Abstract:
In the context of industrial and digital data, developing a bespoke tool for a particular task is costly in many respects; conversely, adapting generic tools is also costly in customization (personalization, adaptation, extension, ...), for developers and end users alike. Our approach considers the different levels of interaction with data in order to improve the exploitation of data that is provided or generated collaboratively. The definitions and problems related to data usually depend on the domain in which the data are handled. In this work we adopt a holistic approach that considers a range of perspectives together; the result is a synthesis of the emerging concepts, showing the equivalences from one domain to another. The first contribution improves collaborative document mark-up. Two improvements are provided by our tool, Coviz. (1) Resource tagging is specific to each user, who organizes their terms in a personal named poly-hierarchy; each user can take other users' concepts into account through a sharing relation, and the system also supplies related content by harvesting open archives. (2) The tool applies the concept of faceted data to the interface and combines facets with keyword search; this last point is common to all users, and the system treats each individual action as an action of the group. The major contribution, which is confidential, is a framework named DIP, for Data Interaction and Presentation. Its goal is to increase the user's freedom of expression in interacting with and accessing data. It reduces hardware and software constraints by adding a new direct access path between the user and the available data, together with generic articulation points.
From the end user's point of view, the gains include richer filtering expressions, sharing, persistence of navigation state, automation of routine tasks, and so on. DIP has been stress-tested under real-world conditions of users and limited resources with the KeePlace software; KeePlace, indeed, initiated this thesis.
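The facet-plus-keyword combination described for Coviz can be sketched generically. The document fields, facet names, and sample records below are invented for the example and are not taken from the tool itself:

```python
# Invented sample corpus: each document carries metadata usable as facets.
documents = [
    {"title": "Ontology-driven tagging", "year": 2012, "type": "article"},
    {"title": "Collaborative mark-up tools", "year": 2013, "type": "thesis"},
    {"title": "Harvesting open archives", "year": 2013, "type": "article"},
]

def search(docs, keyword="", **facets):
    """Combine keyword search over titles with facet (field=value) filters."""
    hits = [d for d in docs if keyword.lower() in d["title"].lower()]
    for field, value in facets.items():
        hits = [d for d in hits if d.get(field) == value]
    return hits

# Keyword narrows by content, facets narrow by metadata; both compose.
result = search(documents, keyword="tag", type="article")
```

The design point is that facet filters and free-text search are independent axes, so they can be applied in any order with the same result.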
APA, Harvard, Vancouver, ISO, and other styles
29

Fraley, Hannah E. "School Nurses' Awareness and Attitudes Towards Commercial Sexual Exploitation of Children| A Mixed Methods Study." Thesis, University of Massachusetts Boston, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10264694.

Full text of the source
Abstract:

Human trafficking is a global problem and a multi-billion-dollar industry. Most victims are women and girls, and more than half are children. In the United States, many at-risk youth continue to attend school, with school nurses on the frontlines. Using the Peace and Power Conceptual Model, a mixed-methods study was conducted to explore school nurses' awareness, attitudes, and role perceptions in the prevention of commercial sexual exploitation of children (CSEC). Two factors were related to increased awareness and to positive attitudes and role perceptions towards preventing CSEC: prior exposure to working with vulnerable students, and prior education about CSEC. Two factors inhibited identification of CSEC: uncertainty in identifying CSEC, and a lack of collaboration with colleagues in schools. Four sub-themes were identified: 'exposure/knowledge', 'collaboration', 'role boundaries', and 'creating respite space'. Future research should target the multidisciplinary school team. Simultaneous policy efforts should focus on improving practice conditions for school nurses to support their role in identification and intervention to prevent CSEC among at-risk youth.

APA, Harvard, Vancouver, ISO, and other styles
30

Marshall, Dana T. "The exploitation of image construction data and temporal/image coherence in ray traced animation /." Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3008386.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
31

Yeremenko, Roman, and Valeri Badakh. "Public access data in aerospace industry." Thesis, ORT Publishing, 2019. http://er.nau.edu.ua:8080/handle/NAU/40228.

Full text of the source
Abstract:
In industries such as aerospace engineering, a large share of research is funded by corporations, which grant affiliated scientists special access to specific documentation and data and may provide internships to students of affiliated universities. The downside of this model is that it can hinder research requiring general statistical design and/or performance data across a wide range of products (such as aircraft or helicopters), complicating the prediction of trends in the field. This can be detrimental to the conceptual advancement of aerospace engineering as a whole, as well as to providing cutting-edge industry awareness to students, teachers, and researchers alike.
APA, Harvard, Vancouver, ISO, and other styles
32

Thorisson, Gudmundur A. "Database federation, resource interoperability and digital identity, for management and exploitation of contemporary biological data." Thesis, University of Leicester, 2011. http://hdl.handle.net/2381/8951.

Full text of the source
Abstract:
Modern research into the genetic basis of human health and disease is increasingly dominated by high-throughput experimentation and routine generation of large volumes of complex genotype to phenotype (G2P) information. Efforts to effectively manage, integrate, analyse and interpret this wealth of data face substantial challenges. This thesis discusses informatics approaches to addressing some of these challenges, primarily in the context of disease genetics. The genome-wide association study (GWAS) is widely used in the field, but translation of findings into scientific knowledge is hampered by heterogeneous and incomplete reporting, restrictions on sharing of primary data, publication bias and other factors. The central focus of the work was design and implementation of a core informatics infrastructure for centralised gathering and presentation of GWAS results. The resulting open-access HGVbaseG2P genetic association database and web-based tools for search, retrieval and graphical genome viewing increase overall usefulness of published GWAS findings. HGVbaseG2P conceptual modelling activities were also merged into a collaborative standardisation effort with international partners. A key outcome of this joint work is a minimal model for phenotype data which, together with ontologies and other standards, lays the foundation for a federated network of semantically and syntactically interoperable, distributed G2P databases. Attempts to gather complete aggregate representations of primary GWAS data into HGVbaseG2P were largely unsuccessful, chiefly due to concerns over re-identification of study participants. This led to a separate line of inquiry which explored - via in-depth field analysis, workshop organisation and other community outreach activities – potential applications of federated identity technologies for unambiguously identifying researchers online. 
Results suggest two broad use cases for user-centric researcher identities - i) practical, streamlined data access management and ii) tracking digital contributions for the purpose of attribution - which are critical to facilitating and incentivising sharing of GWAS (and other) research data.
APA, Harvard, Vancouver, ISO, and other styles
33

Silva, Ardemiro de Barros. "Remotely sensed, geophysical and geochemical data as aids to mineral exploitation in Bahia State, Brazil." Thesis, Open University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.304255.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
34

Brooks, Emma, and n/a. "Selectivity versus availability: patterns of prehistoric fish and shellfish exploitation at Triangle Flat, western Golden Bay." University of Otago. Department of Anthropology, 2002. http://adt.otago.ac.nz./public/adt-NZDU20070508.145145.

Full text of the source
Abstract:
This thesis examines issues of selectivity and availability in fishing and shellfish gathering by pre-European Maori at Triangle Flat in western Golden Bay. Faunal remains from four archaeological sites have revealed new and valuable information about economic subsistence practices in this region. It is proposed that exploitation of these important coastal resources was based on factors other than the availability of, or proximity to, resource patches. Evidence from the Triangle Flat sites is compared with that from Tasman Bay and the southern North Island to gain a regional perspective on fishing and shellfish-gathering strategies. The most definitive evidence for selective targeting is provided by tuatua, an open-beach species found to dominate in sites located adjacent to tidal mud and sand flats. Also of interest is the dominance of mud snail in a site adjacent to large cockle and pipi beds; when regional sites were examined, this pattern was also recorded for the site of Appleby in Tasman Bay. Selectivity in fishing strategies is also apparent, with red cod and barracouta dominating the Triangle Flat assemblages. This pattern conforms to evidence from both eastern Golden Bay and Tasman Bay but not to evidence from the southern North Island. Of particular interest is the apparent dearth of snapper in the sites at Triangle Flat, since snapper abounds in the area today; an explanation based on climatic change is considered the most feasible. This indicates that environmental availability was at least in part responsible for the archaeological evidence of fishing. The consistency of the catch of red cod and barracouta in Golden Bay, and the pattern of shellfishing preferentially for tuatua, suggest that cultural choice was also a significant selective factor.
APA, Harvard, Vancouver, ISO, and other styles
35

Lloyd, Sarah W. "Social workers' understandings of child sexual exploitation and sexually exploited girls." Thesis, University of Huddersfield, 2016. http://eprints.hud.ac.uk/id/eprint/34502/.

Full text of the source
Abstract:
In recent years, child sexual exploitation (CSE) has received significant attention in the UK politically, publicly, and in the media. In particular, high-profile cases involving groups of men and adolescent girls have resulted in criticism directed towards safeguarding services. Of specific concern is whether sexually exploited young people have been safeguarded as they should have been. If they have not been, is this because safeguarding professionals understand young people to be 'making a choice' to be in sexually exploitative situations and therefore 'leave them to it'? This doctoral research considers how social workers understand CSE, with a focus on their understandings of sexually exploited girls as choice-makers and agents. Eighteen social workers from three local authorities within one region in England were interviewed; the interviewees work in all areas of safeguarding. To further elicit the social workers' understandings of sexually exploited girls, the interviewees' understandings of girls sexually abused in the home were also explored, examining how girls' choice-making and agency are understood and responded to depending on where, and by whom, they have been abused or exploited. The methodology is qualitative and adopts a social constructionist, feminist approach utilising thematic analysis. The social workers understand that CSE happens to a certain 'type' of girl: one who is likely to be socially and economically deprived, and who is for that reason understood to be vulnerable to CSE. The interviewees have complex understandings of who is to blame for CSE, and the lack of overt blame placed on the perpetrators, alongside the significant culpability placed on the girls, is striking. Moreover, the confluence of choice-making and blame within the interviewees' epistemological framework concerning CSE and sexually exploited girls is of specific note.
The social workers 'wrestle' with their understandings of sexually exploited girls as choice-makers because they associate choice with blame, and this leaves them conflicted. They resolve this conflict by invalidating certain choices girls make which they understand to 'result' in their being exploited, in order not to blame them. The research concludes that social workers need to separate choice-making from blame and to recognise that sexually exploited girls make choices within a context, but should never be blamed for making those choices. Furthermore, the girls' agency needs to be encouraged and enabled in positive directions, and blame should always and unequivocally be placed on the perpetrators.
APA, Harvard, Vancouver, ISO, and other styles
36

Ocran, Emmanuel. "The faint low-frequency radio universe in continuum: exploitation of the pre-SKA deepest survey." Doctoral thesis, Faculty of Science, 2020. http://hdl.handle.net/11427/32898.

Full text of the source
Abstract:
This thesis presents a thorough study of the properties of radio sources as derived from deep 610-MHz GMRT data and ancillary multi-wavelength data. The faint radio sources at 610 MHz are detected out to distances such that the objects are seen as they were when the universe was less than half its current age. These data provide a first look at the faint radio sky at sensitivities that will soon be achieved by key programmes on the South African MeerKAT radio telescope, and thus take a first step in the exploration of the radio universe that will be made by the Square Kilometre Array. I report deep 610-MHz GMRT observations of the EN1 field, a region of 1.86 deg^2, achieving a nominal sensitivity of 7.1 µJy beam^-1. From our 610-MHz mosaic image, we recover 4290 sources, after accounting for multiple-component sources, down to a 5σ flux density limit of 35.5 µJy. From these data I derive the 610-MHz source counts, applying corrections for completeness, resolution bias and Eddington bias. The counts show a flattening at flux densities below 1 mJy; below this break they are higher than in previous observations at this frequency, but generally consistent with recent models of the low-frequency source population. Using ancillary multi-wavelength data in the field, I investigate the key issue of source-population classification using the deepest data at an intermediate-low frequency (higher than LOFAR, lower than the JVLA), where previous work has not been sensitive enough to reach the µJy population. By cross-matching against the multi-wavelength data, I identify reliable redshifts for 72% of the radio sample, of which 19% are spectroscopic. The classification yields 1685 Star-Forming Galaxies (SFGs), 281 Radio-Quiet (RQ) and 339 Radio-Loud (RL) Active Galactic Nuclei (AGN) for the sub-sample with redshifts and at least one multi-wavelength AGN diagnostic.
SFGs are mostly low-power radio sources, i.e. L(610 MHz) < 10^25 W Hz^-1, while RQ and RL AGN have radio powers L(610 MHz) > 10^25 W Hz^-1. From cross-matching my sample with other radio surveys (GMRT at 325 MHz, FIRST at 1.4 GHz and the JVLA at 5 GHz), I obtain median spectral indices of −0.80 ± 0.29 from 325 to 610 MHz, −0.83 ± 0.31 from 610 MHz to 1.4 GHz, and −1.12 ± 0.15 from 1.4 to 5 GHz; the main result is that the median spectral index appears to steepen at the highest frequency. With the above catalogue in hand, I use the non-parametric V/Vmax test and the radio luminosity function to investigate the cosmic evolution of the different source populations. I study SFGs and derive their IR-radio correlation and luminosity function as a function of redshift; by integrating the evolving SFG luminosity functions I also derive the cosmic star-formation-rate density out to z = 1.5. I address the long-standing question of the origin of radio emission in RQ AGN by comparing the star formation rate (SFR) derived from their far-infrared luminosity, as traced by Herschel, with the SFR computed from their radio emission. I find evidence that the main contribution to the radio emission of RQ AGN is the star-formation activity in their host galaxies. At high luminosities, however, both SFGs and RQ AGN display a radio excess when comparing radio and infrared star formation rates. The vast majority of our sample lie along the SFR-M* "main sequence" at all redshifts when infrared star formation rates are used. This result opens the possibility of using the radio band to estimate the SFR even in the hosts of bright AGN, where the optical-to-mid-infrared emission can be dominated by the AGN. I investigate the evolution of radio AGN out to z ∼ 1.5 with continuous models of pure density and pure luminosity evolution, with Φ* ∝ (1 + z)^[(2.25±0.38) − (0.63±0.35)z] and L(610 MHz) ∝ (1 + z)^[(3.45±0.53) − (0.55±0.29)z] respectively.
I also constrain the evolution of RQ and RL AGN separately with a continuous model of pure luminosity evolution. For both populations we find fairly mild evolution with redshift, best fitted by pure luminosity evolution with L(610 MHz) ∝ (1 + z)^[(2.81±0.43) − (0.57±0.30)z] for RQ AGN and L(610 MHz) ∝ (1 + z)^[(3.58±0.54) − (0.56±0.29)z] for RL AGN. These results reveal that the 610-MHz radio AGN population comprises two differently evolving populations whose radio emission is mostly SF-driven or AGN-driven, respectively. Finally, I probe the infrared-radio correlation and the radio spectral indices of the faint radio population using stacking. I stack infrared sources in the EN1 field using the MIPS 24-micron mid-infrared survey and radio maps at 325 MHz, 610 MHz and 1.4 GHz. The stacking experiment shows a variation in the absolute strength of the infrared-radio correlation between these three radio frequencies and the MIPS 24-micron band, and I find tentative evidence of a small deviation from the correlation at the faintest infrared flux densities. The stacked radio spectral-index analyses reveal that the majority of the median stacked sources exhibit steep spectra, with a spectral index that steepens with frequency between α(325-610 MHz) and α(610-1400 MHz). This work paves the way for upcoming radio surveys with SKA pathfinders and precursors.
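The two-point spectral indices quoted in this abstract (e.g. −0.80 ± 0.29 between 325 and 610 MHz) follow from the usual power-law convention S(ν) ∝ ν^α; a minimal sketch of the computation:

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha for S(nu) proportional to nu**alpha."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# A source fading from 1.0 mJy at 610 MHz to 0.5 mJy at 1400 MHz
# has a steep (negative) spectrum, as for most sources in the sample.
alpha = spectral_index(1.0, 610e6, 0.5, 1400e6)
```

Flux densities enter only as a ratio, so any consistent unit (mJy, µJy) works; the same holds for the frequencies.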
APA, Harvard, Vancouver, ISO, and other styles
37

Guenneau, Flavien. "Relaxation magnétique nucléaire de systèmes couplés et exploitation des données unidimensionnelles au moyen d'un logiciel convivial (RMNYW)." Nancy 1, 1999. http://docnum.univ-lorraine.fr/public/SCD_T_1999_0093_GUENNEAU.pdf.

Full text of the source
Abstract:
The work presented in this thesis concerns new methods for determining and exploiting nuclear magnetic relaxation parameters. In the first part, we describe the NMR data-processing software RMNYW, which we developed for PC-type computers. Beyond its user-friendliness, we emphasize its original aspects to show that it is a tool well suited to the laboratory's research themes; its ability to conveniently process relaxation experiments was, moreover, put to use for the data analysis in the third part of this thesis. The second chapter is built around the homonuclear decoupling experiment. We describe two methods for overcoming the peak distortion (phase-twist) that affects this experiment. The first method provides quantitative information through an algorithm based on an analysis of the time-domain data. The same problem can also be solved using power spectra, but at the expense of quantitativeness. This remarkably easy processing allowed us to use the spin-echo sequence as a black box for measuring the relaxation times T1 and T1rho in coupled systems. The first part of the third chapter deals with hexafluorobenzene. We show that determining the relaxation and cross-correlation rates gives access not only to the reorientation anisotropy but also to the various elements of the shielding tensors. The following study concerns methanoic (formic) acid and relies solely on cross-correlation rates; it confirms that this molecule exists as a dimer in the liquid state. For both examples, the liquid-phase data are compared with solid-state NMR results and quantum chemistry calculations.
APA, Harvard, Vancouver, ISO, and other styles
38

Louis, Ruwaid, and David Yu. "A study of the exploration/exploitation trade-off in reinforcement learning : Applied to autonomous driving." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254938.

Full text of the source
Abstract:
A world initiative was set in motion for decreasing the amount of traffic accidents. Autonomous driving is a field which contributes to the initiative. Following report examines exploration/exploitationtrade-off in reinforcement learning applied to decision making in autonomous driving. The approach consisted of modelling the problemas a Markov Decision Process which was solved with the Q-learning. Decision making utilized exploration greed approach. Scenarios consisted of different kinds of intersections, and was built using SUMO. The ego vehicle was controlled using TraCI. Goal was to discuss thetrade-off from two perspectives - time and safety, measured in numberof collision among other things - in the domain of autonomous driving. Furthermore, exploration prompted ego vehicle to pass the scenarios in less time. This lead to increased collisions, and thus decreased safety. In contrast, exploitation preferred deacceleration and stopping which resulted in increased safety but increased the passage time and traffic. Conclusion was to exploit previous experiences when applying reinforcement learning to decision making in autonomous driving because safety is the highest priority when it comes to autonomous driving and the world initiative.
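The trade-off the abstract describes can be made concrete in a few lines. The sketch below is not the thesis's SUMO/TraCI setup; it is a minimal, hypothetical corridor MDP solved with tabular Q-learning, where `epsilon` is the probability of taking a random (exploratory) action instead of the currently best one:

```python
import random

# Toy Q-learning on a 1-D corridor: states 0..4, goal at state 4.
# Action 0 = brake/stay (step penalty), action 1 = advance.
# `epsilon` controls the exploration/exploitation trade-off.

def q_learning(epsilon, episodes=500, alpha=0.5, gamma=0.9, seed=0):
    rng = random.Random(seed)
    n_states, goal = 5, 4
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] >= Q[s][1] else 1
            s2 = min(s + a, goal)
            r = 10.0 if s2 == goal else -1.0
            # standard Q-learning update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning(epsilon=0.1)
# Greedy policy after training: advance in every non-goal state.
greedy = [0 if q[0] >= q[1] else 1 for q in Q[:4]]
```

With `epsilon = 0` the agent can get stuck repeating an early, suboptimal choice; with large `epsilon` it keeps taking random (here, collision-analogous) actions, which mirrors the time-versus-safety tension discussed in the abstract.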
APA, Harvard, Vancouver, ISO, and other styles
39

Kroe, Elaine, and U.S. Department of Education, National Center for Education Statistics. "Data File, Public Use: Public Libraries Survey: Fiscal Year 2001 (Revised)." U.S. Department of Education, NCES 2003-398, 2003. http://hdl.handle.net/10150/105908.

Full text of the source
Abstract:
The Public Libraries Survey is conducted annually by the National Center for Education Statistics through the Federal-State Cooperative System for Public Library Data. The data are collected by a network of state data coordinators appointed by the chief officers of state library agencies in the 50 States, the District of Columbia, and the outlying areas. Data are collected on population of legal service area, service outlets, public service hours, library materials, total circulation, circulation of children's materials, reference transactions, library visits, children's program attendance, electronic services and information, staff, operating income, operating expenditures, capital outlay, and more.
APA, Harvard, Vancouver, ISO, and other styles
40

Mistretta, Anna E. "Risk Factors for Financial Exploitation among an Urban Adult Population in the United States." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/iph_theses/124.

Full text of the source
Abstract:
This thesis focuses on the growing problem of elder mistreatment in the United States and its related risk factors. In particular, focus is given to the problem of elder financial exploitation, using survey analysis of an urban adult sample in the United States.
APA, Harvard, Vancouver, ISO, and other styles
41

Beck, Dominic. "Challenges in CMB Lensing Data Analysis and Scientific Exploitation of Current and Future CMB Polarization Experiments." Thesis, Université de Paris (2019-....), 2019. https://wo.app.u-paris.fr/cgi-bin/WebObjects/TheseWeb.woa/wa/show?t=3973&f=25502.

Full text of the source
Abstract:
Next-generation cosmic microwave background (CMB) measurements will further establish cosmology as a high-precision science and continue opening new frontiers of fundamental physics. Cosmic-variance-limited measurements not only of the CMB temperature but also of its polarization down to arcminute scales will allow for precise measurements of our cosmological model, which is sensitive to the elusive physics of dark matter, dark energy, and neutrinos. Furthermore, a large-scale measurement of B-mode CMB polarization permits a determination of the power of primordial gravitational waves, generated by processes potentially happening in the very early universe at energies close to the scale of Grand Unified Theories. Entering a new sensitivity regime entails the necessity to improve our physical understanding and analysis methods of astronomical and instrumental systematics. Within this context, this thesis presents several analyses of potential astronomical and instrumental systematics, primarily focusing on CMB measurements related to weak gravitational lensing. The latter distorts the path of the primary CMB photons, such that the statistical properties of the measured signal deviate from the primary signal and hence have to be accounted for. This thesis describes the underlying physics, analysis methods, and applications to current data sets of the POLARBEAR CMB experiment in the context of CMB lensing science. It shows that future high-precision measurements of CMB lensing must account for the high complexity of this effect, primarily caused by multiple deflections within an evolving, non-linear large-scale structure distribution.
Furthermore, the impact of higher-order correlations introduced by galactic foregrounds and CMB lensing when jointly analyzing CMB data sets on both large and small scales is investigated, showing the need for small-scale multi-frequency observations and foreground-removal techniques to obtain an unbiased estimate of the tensor-to-scalar ratio.
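At leading order, the lensing effect discussed above remaps the primary temperature field by the gradient of the lensing potential; the standard expressions (textbook results, not specific to this thesis) are:

```latex
% Lensed temperature as a remapping of the unlensed field by the
% deflection angle \nabla\phi; \phi is the line-of-sight projection of
% the gravitational potential \Psi out to the last-scattering distance \chi_*.
\begin{align}
  \tilde{T}(\hat{\mathbf{n}}) &= T\!\left(\hat{\mathbf{n}} + \nabla\phi(\hat{\mathbf{n}})\right), \\
  \phi(\hat{\mathbf{n}}) &= -2 \int_0^{\chi_*} d\chi\,
     \frac{\chi_* - \chi}{\chi_*\,\chi}\,
     \Psi\!\left(\chi\hat{\mathbf{n}};\, \eta_0 - \chi\right).
\end{align}
```

The multiple-deflection corrections the thesis studies are precisely the terms this single-remapping approximation neglects.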
APA, Harvard, Vancouver, ISO, and other styles
42

Ba, Mouhamadou Lamine. "Exploitation de la structure des données incertaines." Electronic Thesis or Diss., Paris, ENST, 2015. http://www.theses.fr/2015ENST0013.

Full text of the source
Abstract:
This thesis addresses some fundamental problems inherent in the need for uncertainty handling in multi-source Web applications with structured information, namely uncertain version control in Web-scale collaborative editing platforms, integration of uncertain Web sources under constraints, and truth finding over structured Web sources. Its major contributions are: uncertainty management in version control of tree-structured data using a probabilistic XML model; initial steps towards a probabilistic XML data integration system for uncertain and dependent Web sources; precision measures for location data; and exploration algorithms for an optimal partitioning of the input attribute set during a truth-finding process over conflicting Web sources.
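As a rough illustration of the probabilistic XML idea mentioned above (a simplified independent-edge model; the tree, tags, and function below are hypothetical, not the thesis's own formalism), the probability that a query element exists in a random instance can be computed recursively over a tree whose edges carry independent existence probabilities:

```python
# Toy probabilistic XML tree: (tag, existence probability given the
# parent exists, children). Edges are assumed independent.
doc = ("person", 0.9, [("phone", 0.7, []), ("phone", 0.4, [])])

def prob_tag(node, tag):
    # Probability that at least one element labelled `tag` exists in the
    # random instance rooted at `node`.
    name, p, children = node
    if name == tag:
        return p
    # the node itself must survive, and at least one child subtree
    # must then produce an element labelled `tag`
    none_below = 1.0
    for child in children:
        none_below *= 1.0 - prob_tag(child, tag)
    return p * (1.0 - none_below)

p = prob_tag(doc, "phone")   # 0.9 * (1 - 0.3 * 0.6) = 0.738
```

Real probabilistic XML models also support correlated choices (mutually exclusive alternatives), which this independent-edge sketch omits.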
APA, Harvard, Vancouver, ISO, and other styles
43

Clevenger, Mark Allen. "Data encryption using RSA public-key cryptosystem." Virtual Press, 1996. http://liblink.bsu.edu/uhtbin/catkey/1014844.

Full text of the source
Abstract:
The RSA data encryption algorithm was developed by Ronald Rivest, Adi Shamir, and Leonard Adleman in 1978 and is considered a de facto standard for public-key encryption. This computer science thesis demonstrates the author's ability to engineer a software system based on the RSA algorithm. This adaptation of the RSA encryption process was devised to be used on any type of data file, binary as well as text. In the process of developing this computer system, software tools were constructed that allow the exploration of the components of the RSA encryption algorithm. The RSA algorithm was further adapted as a method of providing software licensing, that is, a manner in which authorization to execute a particular piece of software can be determined at execution time. This document summarizes the RSA encryption process and describes the tools utilized to construct a computer system based on this algorithm.
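The encryption process the thesis builds on can be sketched with textbook-sized parameters. This is illustrative only: the primes, exponent, and function names below are hypothetical choices, and real deployments use 1024+ bit primes plus padding such as OAEP:

```python
# Toy RSA key generation, encryption, and decryption with small primes.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_keypair(p, q, e=17):
    n = p * q
    phi = (p - 1) * (q - 1)
    g, d, _ = egcd(e, phi)          # d = modular inverse of e mod phi(n)
    assert g == 1, "e must be coprime to phi(n)"
    return (e, n), (d % phi, n)     # (public key, private key)

def crypt(m, key):
    # Encryption and decryption are both modular exponentiation.
    exp, n = key
    return pow(m, exp, n)

public, private = make_keypair(61, 53)   # n = 3233
cipher = crypt(42, public)
plain = crypt(cipher, private)           # recovers 42
```

Because the same `crypt` routine serves both directions, the scheme applies uniformly to any byte stream, which matches the thesis's goal of encrypting binary as well as text files.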
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
44

Craig, Heather (Heather Hult). "Interactive data narrative : designing for public engagement." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97993.

Full text of the source
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Comparative Media Studies, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 97-101).
Interactive data narrative, or the crafting of interactive online stories based upon new or existing data, has grown dramatically over the last several years. Data is increasingly available through such mechanisms as embedded sensor networks, remote sensing, and mobile data collection platforms. The affordances of mobile computing and increasing internet access enable widespread (and often citizen-powered) data collection initiatives. This proliferation of data raises the challenge of translating data into compelling and actionable stories. New data collection and online storytelling strategies foster a mode of communication that can reveal complexities, time-based shifts, and arcane patterns in newly available geolocated data. This thesis investigates interactive storytelling as a mode of communicating data and analyzes trends and opportunities for future innovation. Surveying the field and analyzing specific projects lays the foundation for a design intervention for adding a narrative layer to geolocated, citizen-collected data.
by Heather Craig.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
45

Yun, Catherine (Catherine T. ). "Splinter : practical private queries on public data." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113458.

Full text of the source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 39-43).
Every day, millions of people rely on third party services like Google Maps to navigate from A to B. With existing technology, each query provides Google and their affiliates with a track record of where they've been and where they're going. In this thesis, I design, engineer, and implement a solution that offers absolute privacy when making routing queries, through the application of the Function Secret Sharing (FSS) cryptographic primitive. I worked on a library in Golang that applied an optimized FSS protocol, and exposed an API to generate and evaluate different kinds of queries. I then built a system with servers that handle queries to the database, and clients that generate queries. I used DIMACS maps data and the Transit Node Routing (TNR) algorithm to obtain graph data hosted by the servers. Finally, I evaluated the performance of my system for practicality, and compared it to existing private map routing systems.
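Splinter's actual protocol relies on Function Secret Sharing, which compresses the query shares cryptographically. As a toy illustration of the two-server idea it builds on (an assumed setup with hypothetical names, not Splinter's API), here is classic XOR-based two-server private lookup, where each server alone sees only a uniformly random query vector and so learns nothing about which record was requested:

```python
import secrets

# Toy two-server private information retrieval. The client splits a
# one-hot query vector for index 2 into two XOR shares; each share on
# its own is uniformly random.

DB = [b"paris", b"tokyo", b"boston", b"oslo"]

def query_shares(index, n):
    share_a = [secrets.randbits(1) for _ in range(n)]
    share_b = [bit ^ (1 if i == index else 0) for i, bit in enumerate(share_a)]
    return share_a, share_b

def server_answer(db, share):
    # XOR together the records selected by the share's set bits.
    width = max(len(r) for r in db)
    acc = bytearray(width)
    for rec, bit in zip(db, share):
        if bit:
            for i, byte in enumerate(rec.ljust(width, b"\0")):
                acc[i] ^= byte
    return bytes(acc)

sa, sb = query_shares(2, len(DB))
answer = bytes(x ^ y for x, y in zip(server_answer(DB, sa),
                                     server_answer(DB, sb)))
record = answer.rstrip(b"\0")   # the shares differ only at index 2
```

XORing the two servers' answers cancels every record selected by both shares, leaving exactly the requested one; FSS replaces the length-n share vectors with short keys, which is what makes the approach practical at database scale.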
by Catherine Yun.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
46

He, Yurong. "Data sharing across research and public communities." Thesis, University of Maryland, College Park, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10242315.

Full text of the source
Abstract:

For several decades, the intensifying trend of researchers to believe that sharing research data is “good” has overshadowed the belief that sharing data is “bad.” However, sharing data is difficult even though an impressive effort has been made to solve data sharing issues within the research community, but relatively little is known about data sharing beyond the research community. This dissertation aims to address this gap by investigating how data are shared effectively across research and public communities.

The practices of sharing data with both researchers and non-professionals in two comparative case studies, Encyclopedia of Life and CyberSEES, were examined by triangulating multiple qualitative data sources (i.e., artifacts, documentation, participant observation, and interviews). The two cases represent the creation of biodiversity data, the beginning of the data sharing process in a home repository, and the end of the data sharing process in an aggregator repository. Three research questions are asked in each case:

• Who are the data providers?

• Who are the data sharing mediators?

• What are the data sharing processes?

The findings reveal the data sharing contexts and processes across research and public communities. Data sharing contexts are reflected by the cross-level data providers and human mediators rooted in different groups, whereas data sharing processes are reflected by the dynamic and sustainable collaborative efforts made by different levels of human mediators with the support of technology mediators.

This dissertation provides theoretical and practical contributions. Its findings refine and develop a new data sharing framework of knowledge infrastructure for different-level data sharing across different communities. Both human and technology infrastructure are made visible in the framework. The findings also provide insight for data sharing practitioners (i.e., data providers, data mediators, data managers, and data contributors) and information system developers and designers to better conduct and support open and sustainable data sharing across research and public communities.

APA, Harvard, Vancouver, ISO, and other styles
47

Kosaraju, Aravinda. "Attrition in cases involving crimes of child sexual exploitation in England." Thesis, University of Kent, 2017. https://kar.kent.ac.uk/66828/.

Full text of the source
Abstract:
This thesis is a critical exposition of attrition in cases involving crimes of child sexual exploitation in England. More specifically, this thesis offers an analysis of policy texts and empirical data, to interrogate the conditions of possibility for attrition in contemporary discourses on child sexual exploitation. It does so by employing a Foucauldian feminist theoretical framework and critical discourse analysis. It shows that knowledge statements within child sexual exploitation discourses around the notion of risk, about children as (un)knowing and as (a)sexual coupled with techniques of power such as the processes of assessing risk, the deployment of the rhetoric of consent and the requirement for an avowing subject, construct multiple subject positions which sexually exploited children come to occupy. It contends that specific rationalities underpinning the current forms of thinking within practitioners' discourse about the problem of attrition in child sexual exploitation cases in conjunction with the deployment within policy discourse of specific strategies for tackling crimes of child sexual exploitation, such as the disruption of perpetrators, lead to the de-prioritisation of prosecutions as a rational response to the crimes of child sexual exploitation. It stresses that children's experiences of sexual exploitation emerge into a discursive space enclosed by three axes namely: the fields of knowledge, processes of normalisation, and the modes of subject formation. It contends that these three axes enclosing the child sexual exploitation discursive space intersect at various sites within child sexual exploitation practice thereby producing the conditions in which attrition in these cases becomes possible.
APA, Harvard, Vancouver, ISO, and other styles
48

Adu-Prah, Samuel. "GEOGRAPHIC DATA MINING AND GEOVISUALIZATION FOR UNDERSTANDING ENVIRONMENTAL AND PUBLIC HEALTH DATA." OpenSIUC, 2013. https://opensiuc.lib.siu.edu/dissertations/657.

Full text of the source
Abstract:
Within the theoretical framework of this study, it is recognized that a very large amount of real-world facts and geospatial data are collected and stored. Decision makers cannot consider all the available disparate raw facts and data. Problem-specific variables, including complex geographic identifiers, have to be selected from this data and validated. The problems associated with environmental- and public-health data are that (1) geospatial components of the data are not considered in the analysis and decision-making process, (2) meaningful geospatial patterns and clusters are often overlooked, and (3) public health practitioners find it difficult to comprehend geospatial data. Inspired by the advent of geographic data mining and geovisualization in public and environmental health, the goal of this study is to unveil the spatiotemporal dynamics in the prevalence of overweight and obesity in United States youths at regional and local levels over a twelve-year study period. Specific objectives of this dissertation are to (1) apply regionalization algorithms effective for identifying meaningful, spatially uniform clusters of youth overweight and obesity, (2) use Geographic Information System (GIS), spatial analysis techniques, and statistical methods to explore the data sets for health outcomes, and (3) explore geovisualization techniques to transform discovered patterns in the data sets for recognition, flexible interaction, and improved interpretation. To achieve the goal and the specific objectives of this dissertation, we used data sets from the National Longitudinal Survey of Youth 1997 (NLSY'97) early release (1997-2004), the NLSY'97 current release (2005-2008), census 2000 data and yearly population estimates from 2001 to 2008, and synthetic data sets. The NLSY'97 cohort ranged from 6,923 to 8,565 individuals over the period.
At the beginning of the cohort study, the participating individuals were between 12 and 17 years old; in 2008, they were between 24 and 28 years old. As the data mining tool, we applied the Regionalization with Dynamically Constrained Agglomerative clustering and Partitioning (REDCAP) algorithms to identify hierarchical regions based on weight metrics of U.S. youths. The applied algorithms are single-linkage clustering (SLK), average-linkage clustering (ALK), complete-linkage clustering (CLK), and Ward's method. Moreover, we used GIS, spatial analysis techniques, and statistical methods to analyze the spatially varying association of overweight and obesity prevalence in the youth population and to visualize the results geographically. The methods used included the ordinary least squares (OLS) model, the spatial generalized linear mixed model (GLMM), Kulldorff's space-time scan analysis, and spatial interpolation techniques (inverse distance weighting). The three main findings of this study are as follows. First, among the four algorithms, ALK, Ward's method, and CLK identified regions more effectively than SLK, which performed very poorly. ALK provided more promising regions than the rest of the algorithms by producing spatial uniformity effectively related to the weight variable (body mass index). The ALK regionalization algorithm provided new insights into overweight and obesity by detecting new spatial clusters with over 30% prevalence. New meaningful clusters were detected in 15 counties, including Yazoo, Holmes, Lincoln, and Attala in Mississippi; Wise, Delta, Hunt, Liberty, and Hardin in Texas; St Charles, St James, and Calcasieu in Louisiana; and Choctaw, Sumter, and Tuscaloosa in Alabama. Demographically, these counties have a race/ethnic composition of about 75% White, 11.6% Black, and 13.4% other.
Second, results from this study indicated an upward trend in the prevalence of overweight and obesity in United States youths, both male and female. Male youth obesity increased from 10.3% (95% CI = 9.0, 11.0) in 1999 to 27.0% (95% CI = 26.0, 28.0) in 2008. Likewise, female obesity increased from 9.6% (95% CI = 8.0, 11.0) in 1999 to 28.9% (95% CI = 27.0, 30.0) over the same period. Youth obesity prevalence was higher among females than among males. Aging is a substantial factor with a highly statistically significant association (p < 0.001) with the prevalence of overweight and obesity. Third, significant cluster years for high rates were detected in 2003-2008 (relative risk 1.92, 3.4 annual prevalence cases per 100,000, p < 0.0001) and for low rates in 1997-2002 (relative risk 0.39, annual prevalence cases per 100,000, p < 0.0001). Three meaningful spatiotemporal clusters of obesity (p < 0.0001) were detected in counties located within the South, Lower North Eastern, and North Central regions. Counties identified as consistently experiencing a high prevalence of obesity, with the potential of becoming obesogenic environments in the future, are Copiah, Holmes, and Hinds in Mississippi; Harris and Chambers in Texas; Oklahoma and McClain in Oklahoma; Jefferson in Louisiana; and Chicot and Jefferson in Arkansas. Surprisingly, there were mixed trends in youth obesity prevalence patterns in rural and urban areas. Finally, from a public health perspective, this research has shown that in-depth knowledge of whether, and in what respect, certain areas have worse health outcomes can be helpful in designing effective community interventions to promote healthy living. Furthermore, specific information obtained from this dissertation can help guide geographically targeted programs, policies, and preventive initiatives for overweight and obesity prevalence in the United States.
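The linkage step in such agglomerative regionalization can be sketched in a few lines. This is a toy only: REDCAP additionally enforces spatial contiguity between merged regions, which this version omits, and the values below are hypothetical BMI-like numbers:

```python
# Toy agglomerative clustering with average linkage on 1-D values:
# repeatedly merge the pair of clusters with the smallest mean
# pairwise distance until k clusters remain.

def average_linkage(points, k):
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # average pairwise distance between the two clusters
                d = sum(abs(a - b) for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return [sorted(c) for c in clusters]

# two low values and two high values separate into two clusters
out = average_linkage([18.0, 19.0, 31.0, 33.0], k=2)
```

Single linkage (minimum pairwise distance) and complete linkage (maximum) differ only in the line computing `d`, which is why the abstract can compare SLK, ALK, and CLK on the same footing.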
APA, Harvard, Vancouver, ISO, and other styles
49

Bringer, Joy Deanne. "Sexual exploitation : swimming coaches' perceptions and the development of role conflict and role ambiguity." Thesis, University of Gloucestershire, 2002. http://eprints.glos.ac.uk/3038/.

Full text of the source
Abstract:
Public awareness about sexual abuse and sexual harassment in sport has greatly increased over the last 10 years. In England, the sport of swimming has been especially affected, first because of several high profile cases of swimming coaches being convicted of sexual abuse, and secondly because the Amateur Swimming Association (ASA) has taken a proactive stance to protect children in swimming. Much of the previous research examining sexual exploitation in sport has been from the perspective of the athlete. This qualitative study was designed to examine swimming coaches' constructions of appropriateness about coach/swimmer sexual relationships. Nineteen coaches participated in either an elite, national, or county level focus group. Coaches discussed the appropriateness of coach/athlete relationships as presented in 7 vignettes. Analysis was conducted in accordance with the constructivist revision of Grounded Theory (Charmaz, 1990; Strauss & Corbin, 1998) and organised with the assistance of the software programme, QSR NVIVO. The coaches report that sex with an athlete below the legal age of consent is inappropriate. Coaches' perceptions regarding "legal" relationships vary according to whether the coach is talking about himself versus other coaches. The emergent themes influencing perceptions of appropriateness are: reducing opportunities for false allegations, the influence of public scrutiny, evaluating consequences of relationships, maintaining professional boundaries, and reluctance to judge fellow coaches. After completing the initial analysis, the emergent themes were further explored in individual unstructured interviews with three purposively selected coaches. One coach was in a long-term relationship with a swimmer, another served a prison term for child sexual abuse of a swimmer he coached, and the third had allegations against him dropped. 
The secondary analysis reveals that the themes about appropriateness relate to the broader issue of coaches' attempts to resolve perceived role conflict and role ambiguity that has arisen from increased awareness of child protection. This is examined with reference to how awareness of sexual abuse in sport has provoked coaches to question their roles and coaching boundaries. Results are discussed in relation to organisational psychology theories of role conflict and role ambiguity and directions for future research are suggested.
APA, Harvard, Vancouver, ISO, and other styles
50

Salazar, Niño Elvis. "The Mining Concession and the Right of Exploitation: Seeking a Balance between the Public and the Private." Derecho & Sociedad, 2015. http://repositorio.pucp.edu.pe/index/handle/123456789/118164.

Full text of the source
Abstract:
The author presents the regulatory and political landscape of the mining concession and the institutional erosion it has suffered. He also describes the systems of mining tenure and the one our country has aptly adopted: the Public Domain (Dominalista) System. Finally, he analyses the mining concession and the right of exploitation as framed by the Constitution and the General Mining Law, explaining the regulatory changes needed to strengthen the concession as the fundamental enabling title for the development of mining activities in the country.
APA, Harvard, Vancouver, ISO, and other styles