Dissertations on the topic "Exploration noise"

Consult the top 32 dissertations for research on the topic "Exploration noise".

Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read its abstract online, provided the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Lloyd-Jones, Rebecca Louise. „Amid the Noise: A Percussionist's Exploration of Creative Practice“. Thesis, Griffith University, 2016. http://hdl.handle.net/10072/370333.

Abstract:
This exegesis explores the relationships between the roles of performer, interpreter and composer and the location of my own individual creativity as a percussionist/musician within this nexus. Accompanying this exegesis is a creative portfolio, consisting of original compositions, arrangements and recordings of selected Australian percussion repertoire that I have performed. Adopting a practice-based research method, I have aimed to document the multifaceted approach taken in my own music-making, detailing the outcomes reached on this journey. I have endeavoured to develop and present different perspectives for applying my acquired musical knowledge through composition, collaboration and performing, and this is presented through a creative portfolio, which accompanies this exegesis. I also am examining the expansion of my practice, through the integration of using recording as documentation, and the broadening outcome that has had on the product I produce. I also investigate what influence dedicating myself to performing others’ compositions has had upon my practice to date. This exegesis is about opening windows into new sonic terrain and examining some of the boundaries and traditions prevalent in Western Art Music. I have attempted to do this through the eyes of an active music-maker—making informed decisions about musical contour based on improvisation, exploration and intuition.
Thesis (Masters)
Master of Music Research (MMusRes)
Queensland Conservatorium
Arts, Education and Law
2

Parsons, Adrian. „Seismic exploration techniques applied to ultrasonic imaging within concrete“. Thesis, University of Liverpool, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368818.

3

Levine, Matthew Jason. „A framework for technology exploration of aviation environmental mitigation strategies“. Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54437.

Abstract:
The goal of this thesis was to develop a framework for modeling relevant environmental performance metrics and objectively simulating the future environmental impacts of aviation given the evolution of the fleet, the development of new technologies, and the expansion of airports. By exchanging fidelity for computational speed, a screening-level framework for assessing aviation's environmental impacts can be developed to observe new insights on fleet-level trends and inform environmental mitigation strategies. This was accomplished by developing per-class average "generic-vehicle" models that can reduce the fleet to a few representative aircraft models for predicting fleet results with reasonable accuracy. The method for Generating Emissions and Noise, Evaluating Residuals and using Inverse method for Choosing the best Alternatives (GENERICA) expands a previous generic vehicle formulation to additionally match DNL contours across a subset of airports. Designs of experiments, surrogate models, Monte Carlo simulations, and "desirability" scores were combined to set the vehicle design parameters and reduce the mean relative error across the subset of airports. Results show these vehicle models more accurately represented contours at busy airports operating a wide variety of aircraft as compared to a traditional representative-in-class approach. Additionally, a rapid method for assessing population exposure counts was developed and incorporated into the noise tool, and the generic vehicles demonstrated accuracy with respect to population exposure counts for the actual fleet in the baseline year. The capabilities of the enabled framework were demonstrated to show fleet-level trends and explore placement of new runways at capacity-constrained airports.
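As a rough illustration of the kind of calibration loop the abstract describes (surrogate models, Monte Carlo sampling and "desirability" scores used to pick generic-vehicle parameters), here is a minimal Python sketch. The surrogate, the parameter names (thrust_scale, weight_scale) and all numbers are invented stand-ins, not values or functions from the GENERICA method.

    # Illustrative sketch (not the GENERICA implementation): score candidate
    # generic-vehicle parameters by the mean relative contour error across a
    # set of airports, then keep the candidate with the best "desirability".
    import random

    AIRPORTS = ["A", "B", "C"]                                   # hypothetical airport subset
    TRUE_CONTOUR_AREA = {"A": 120.0, "B": 85.0, "C": 240.0}      # reference values (made up)

    def surrogate_contour_area(airport, thrust_scale, weight_scale):
        """Stand-in for a surrogate model fitted on a design of experiments."""
        base = TRUE_CONTOUR_AREA[airport]
        return base * (0.8 + 0.3 * thrust_scale) * (0.9 + 0.2 * weight_scale)

    def desirability(params):
        """Lower mean relative error across airports -> higher desirability."""
        errors = []
        for ap in AIRPORTS:
            pred = surrogate_contour_area(ap, *params)
            errors.append(abs(pred - TRUE_CONTOUR_AREA[ap]) / TRUE_CONTOUR_AREA[ap])
        mean_rel_err = sum(errors) / len(errors)
        return 1.0 / (1.0 + mean_rel_err)

    random.seed(0)
    candidates = [(random.random(), random.random()) for _ in range(5000)]  # Monte Carlo sampling
    best = max(candidates, key=desirability)
    print("best (thrust_scale, weight_scale):", best, "score:", round(desirability(best), 3))

In the thesis itself, each candidate would of course be scored against DNL contours produced by the actual noise tool rather than a toy surrogate.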
4

Miller, Nolan W. „Athenian Acoustics: A Sonic Exploration“. Ohio University Honors Tutorial College / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ouhonors1556289254557967.

5

Syversætre, Johannessen Vega. „White Noise : An exploration of tufted surfaces in relation to sound, physical contact and tactility“. Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-615.

Abstract:
This project is about exploring a textile surface with the design elements of sound, physical contact and tactility. It is interesting to analyse how audio and physical elements can help stimulate the human senses. The aim is to bring these elements into a design context and create a textile surface that can give people a sensory and spatial experience. Through tufting it is possible to work with long and short pile, which adds tactile values to the material. The outcome of this exploration is a vast tufted landscape that partly covers the wall and continues out on the floor. The surface has an abstract visual appearance with irregular shapes that define the different materials. The large scale has an overwhelming effect and invites people to interact with and explore the surface. This challenges the fundamental structures of architecture and increases the importance of tactile and human senses, such as curiosity, in spatial environments.
6

Rajendran, Aravind. „Noise Margin, Critical Charge and Power-Delay Tradeoffs for SRAM Design Space Exploration“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1307667225.

7

Setiawan, Panji. „Exploration and optimization of noise reduction algorithms for speech recognition in embedded devices /“. Aachen : Shaker, 2009. http://d-nb.info/99453583X/04.

8

Lautakoski, Johan. „Procedurell generering av terräng Perlin noise eller Diamond-Square : med fokus på exekveringstid och framkomlighet“. Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-12940.

9

Setiawan, Panji [Verfasser]. „Exploration and Optimization of Noise Reduction Algorithms for Speech Recognition in Embedded Devices / Panji Setiawan“. Aachen : Shaker, 2009. http://d-nb.info/1156517788/34.

10

White, Robert J. „Exploration of a Strategy for Reducing Gear Noise in Planetary Transmissions and Evaluation of Laser Vibrometry as a Means for Measuring Transmission Error“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=case1129928063.

11

Kitchen, Robert Raymond. „Exploration, quantification, and mitigation of systematic error in high-throughput approaches to gene-expression profiling : implications for data reproducibility“. Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/5691.

Abstract:
Technological and methodological advances in the fields of medical and life-sciences have, over the last 25 years, revolutionised the way in which cellular activity is measured at the molecular level. Three such advances have provided a means of accurately and rapidly quantifying mRNA, from the development of quantitative Polymerase Chain Reaction (qPCR), to DNA microarrays, and second-generation RNA-sequencing (RNA-seq). Despite consistent improvements in measurement precision and sample throughput, the data generated continue to be affected by high levels of variability due to the use of biologically distinct experimental subjects, practical restrictions necessitating the use of small sample sizes, and technical noise introduced during frequently complex sample preparation and analysis procedures. A series of experiments were performed during this project to profile sources of technical noise in each of these three techniques, with the aim of using the information to produce more accurate and more reliable results. The mechanisms for the introduction of confounding noise in these experiments are highly unpredictable. The variance structure of a qPCR experiment, for example, depends on the particular tissue-type and gene under assessment, while expression data obtained by microarray can be greatly influenced by the day on which each array was processed and scanned. RNA-seq, on the other hand, produces data that appear very consistent in terms of differences between technical replicates; however, there exist large differences when results are compared against those reported by microarray, which require careful interpretation. It is demonstrated in this thesis that by quantifying some of the major sources of noise in an experiment and utilising compensation mechanisms, either pre- or post-hoc, researchers are better equipped to perform experiments that are more robust, more accurate, and more consistent.
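The batch effects mentioned above (for instance, microarray results depending on the processing day) are typically quantified as variance components. The sketch below is a generic one-way random-effects decomposition on synthetic numbers; it is not code or data from the thesis.

    # Minimal sketch of quantifying a batch effect such as "processing day":
    # a one-way random-effects decomposition of synthetic expression values
    # into between-day and within-day (residual) variance components.
    import numpy as np

    rng = np.random.default_rng(1)
    n_days, n_reps = 4, 6
    day_effect = rng.normal(0.0, 0.8, size=n_days)             # hypothetical day-to-day shifts
    data = day_effect[:, None] + rng.normal(0.0, 0.5, size=(n_days, n_reps))

    grand_mean = data.mean()
    day_means = data.mean(axis=1)
    ss_between = n_reps * np.sum((day_means - grand_mean) ** 2)
    ss_within = np.sum((data - day_means[:, None]) ** 2)
    ms_between = ss_between / (n_days - 1)
    ms_within = ss_within / (n_days * (n_reps - 1))

    var_within = ms_within
    var_between = max((ms_between - ms_within) / n_reps, 0.0)   # standard ANOVA estimator
    print(f"between-day variance ~ {var_between:.2f}, within-day variance ~ {var_within:.2f}")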
12

Ars, Jean-Michel. „Inversion conjointe géophysique appliquée à l'exploration en géothermie profonde dans le Massif Central“. Thesis, Brest, 2018. http://www.theses.fr/2018BRES0025.

Abstract:
The development of geothermal energy has led to the exploitation of resources established in varied geological and geodynamic contexts. Geophysical exploration of these complex reservoirs requires the use of several complementary imaging methods. This PhD thesis focuses on the exploration of a geothermal resource located within the fractured basement in the French Massif Central using magnetotellurics, ambient noise tomography and gravimetry. Magnetotellurics is a 3D imaging method with good resolving power that is sensitive to the presence of water and hydrothermal weathering clays but is limited by its spatial coverage. Seismic noise tomography has good vertical resolution but does not resolve horizontal velocity variations well. This method is sensitive to variations in the mechanical properties of rocks and thus to fractured media. Finally, gravimetry constrains the lithological variations and has good lateral resolution but lacks vertical resolution. We present a method of joint inversion of seismic and gravimetric data under the constraint of a resistivity model obtained by independent magnetotelluric inversion. Joint inversion requires defining couplings between models. In the absence of prior knowledge of petrophysical relationships, we have coupled the density, resistivity and velocity models with a law that constrains the parameters to be correlated on average. This strategy aims to bring out the characteristic relationships of the geological objects of the geothermal resource. This joint inversion methodology has been tested on synthetic models. The application to the real data acquired in the Massif Central has made it possible to define a deep zone of high correlation interpreted as the brittle-ductile transition. The intermediate part of the models, more homogeneous, makes it possible to distinguish different geological units separated by a fault zone. Finally, the superficial part is distinguished by a strong heterogeneity of the parameters, probably resulting from surface alteration processes.
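The coupling idea described above, constraining density, resistivity and velocity to be correlated on average, can be pictured as a penalty term added to the usual data misfits. The following sketch is only a schematic of that idea with a generic correlation penalty and toy linear forward operators; the thesis's actual coupling law, forward models and regularisation are not reproduced here.

    # Illustrative sketch only: a joint-inversion style objective combining two
    # data misfits with a coupling term that penalises *lack* of correlation
    # between model parameter fields.
    import numpy as np

    def correlation_penalty(m1, m2):
        """1 - |Pearson correlation|: zero when the two fields are fully correlated."""
        c = np.corrcoef(m1.ravel(), m2.ravel())[0, 1]
        return 1.0 - abs(c)

    def joint_objective(velocity, density, d_seis, d_grav, G_seis, G_grav, lam=1.0):
        misfit_seis = np.sum((G_seis @ velocity - d_seis) ** 2)   # seismic data misfit
        misfit_grav = np.sum((G_grav @ density - d_grav) ** 2)    # gravity data misfit
        return misfit_seis + misfit_grav + lam * correlation_penalty(velocity, density)

    # Tiny synthetic example with linear forward operators.
    rng = np.random.default_rng(0)
    n = 20
    G_seis, G_grav = rng.normal(size=(15, n)), rng.normal(size=(10, n))
    true_v = np.linspace(2.0, 4.0, n)
    true_rho = 0.3 * true_v + 1.0                                 # correlated "on average"
    d_seis, d_grav = G_seis @ true_v, G_grav @ true_rho
    print(joint_objective(true_v, true_rho, d_seis, d_grav, G_seis, G_grav))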
13

Ventura, Raphaël. „Estimation de la pollution sonore en milieu urbain par assimilation d'observations mobiles“. Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS387.

Abstract:
Noise pollution is a major environmental health problem, and the determination of population exposure is needed. This can be done through noise mapping. Usually, maps are simulation-based and subject to high uncertainties. Observational data is distributed in space and time and hence conveys information that is complementary to simulation data. In this thesis, we propose data assimilation methods that allow one to merge prior noise maps issued by numerical simulation with phone-acquired (via the Ambiciti app) noise observations. We run a performance analysis that addresses the range, accuracy, precision and reproducibility of measurements. Conclusions of this evaluation lead us to the proposition of a calibration strategy that has been embedded in Ambiciti. The result of merging the prior map and the observations is called an analysis, and it is designed to have minimum error variance, based on the respective uncertainties of both data sources that we evaluated: spatial correlations for the prior error; measurement errors and time and location representativeness for the observations. We address the estimation problem on two different scales. The first method relies on the so-called "best linear unbiased estimator". It produces hourly noise maps, based on temporally averaged simulation maps and mobile phone audio data recorded at the neighborhood scale. The second method leverages the crowd-sensed Ambiciti user data available throughout the covered city. The observation set must be filtered and pre-processed in order to select only the observations that were generated in adequate conditions. The prior simulation map is then corrected in a global fashion.
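The first method mentioned above relies on the best linear unbiased estimator. A generic BLUE (optimal interpolation) update, which merges a prior map with observations weighted by their respective error covariances, looks roughly like the sketch below; the grid, covariances and measurements are synthetic, not Ambiciti data.

    # Generic BLUE (optimal interpolation) sketch: merge a prior noise map x_b
    # with point observations y, weighting by prior covariance B and
    # observation-error covariance R.
    import numpy as np

    x_b = np.array([62.0, 65.0, 70.0, 68.0])          # prior noise levels, dB(A), 4 grid cells
    idx = np.arange(4)
    B = 4.0 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)  # spatially correlated prior error
    H = np.array([[1.0, 0.0, 0.0, 0.0],               # two observations, at cells 0 and 2
                  [0.0, 0.0, 1.0, 0.0]])
    y = np.array([66.0, 73.0])                        # phone measurements, dB(A)
    R = np.diag([2.0, 2.0])                           # instrument + representativeness error variance

    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)      # gain minimising analysis error variance
    x_a = x_b + K @ (y - H @ x_b)                     # analysis
    print("analysis map:", np.round(x_a, 1))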
14

Setiawan, Panji [Verfasser], Harald [Akademischer Betreuer] Höge, Harald [Gutachter] Höge und Tim [Gutachter] Fingscheidt. „Exploration and Optimization of Noise Reduction Algorithms for Speech Recognition in Embedded Devices / Panji Setiawan ; Gutachter: Harald Höge, Tim Fingscheidt ; Akademischer Betreuer: Harald Höge ; Universität der Bundeswehr München, Fakultät für Elektrotechnik und Informationstechnik“. Neubiberg : Universitätsbibliothek der Universität der Bundeswehr München, 2009. http://d-nb.info/1191856364/34.

15

Setiawan, Panji [Verfasser], Harald [Akademischer Betreuer] Höge, Harald [Gutachter] Höge und Tim [Gutachter] Fingscheidt. „Exploration and Optimization of Noise Reduction Algorithms for Speech Recognition in Embedded Devices / Panji Setiawan ; Gutachter: Harald Höge, Tim Fingscheidt ; Akademischer Betreuer: Harald Höge ; Universität der Bundeswehr München, Fakultät für Elektrotechnik und Informationstechnik“. Neubiberg : Universitätsbibliothek der Universität der Bundeswehr München, 2009. http://nbn-resolving.de/urn:nbn:de:bvb:706-6109.

16

Setiawan, Panji [Verfasser], Harald [Akademischer Betreuer] Höge, Harald [Gutachter] Höge und Tim [Gutachter] Fingscheidt. „Exploration and Optimization of Noise Reduction Algorithms for Speech Recognition in Embedded Devices / Panji Setiawan ; Gutachter: Harald Höge, Tim Fingscheidt ; Akademischer Betreuer: Harald Höge ; Universität der Bundeswehr München, Fakultät für Elektrotechnik und Informationstechnik“. Neubiberg : Universitätsbibliothek der Universität der Bundeswehr München, 2009. http://d-nb.info/1191856364/34.

17

Wee, Bee Leng. „Death rattle : an exploration“. Thesis, University of Southampton, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.289908.

18

Cerf, Loïc. „Constraint-based mining of closed patterns in noisy n-ary relations“. Lyon, INSA, 2010. http://theses.insa-lyon.fr/publication/2010ISAL0050/these.pdf.

Abstract:
Useful knowledge discovery processes can be based on patterns extracted from large datasets. Designing efficient data mining algorithms to compute collections of relevant patterns is an active research domain. Many datasets record whether some properties hold for some objects, e.g., whether an item is bought by a customer or whether a gene is over-expressed in a biological sample. Such datasets are binary relations and can be represented as 0/1 matrices. In such matrices, a closed itemset is a maximal rectangle of '1's modulo arbitrary permutations of the lines (objects) and the columns (properties). Thus, every closed itemset supports the discovery of a maximal subset of objects sharing the same maximal subset of properties. Efficiently extracting every closed itemset satisfying user-defined relevancy constraints has been extensively studied. Despite its success across many application domains, this framework often turns out to be too narrow. First of all, many datasets are n-ary relations, i.e., 0/1 tensors. Reducing their analysis to two dimensions means ignoring potentially interesting additional dimensions, e.g., where a customer buys an item (localized analysis) or when a gene expression is measured (kinetic analysis). The presence of noise in most real-life datasets is a second issue, which leads to the fragmentation of the patterns to discover. Generalizing the definition of a closed itemset to make it suit relations of higher arity and tolerate some noise is straightforward (a maximal hyper-rectangle with an upper bound on the '0's tolerated per hyperplane). On the contrary, generalizing their extraction is very hard. Indeed, classical algorithms exploit a mathematical property (the Galois connection) of the closed itemsets that neither of the two generalizations preserves. That is why our extractor browses the candidate pattern space in an original way that does not favor any dimension. This search can be guided by a very broad class of relevancy constraints the patterns must satisfy. In particular, this thesis studies constraints specifically designed for mining almost-persistent cliques in dynamic graphs. Our extractor is orders of magnitude faster than known competitors focusing on exact patterns in ternary relations or on noise-tolerant patterns in binary relations. Despite these results, such an exhaustive approach often cannot, in a reasonable time, tolerate as much noise as the dataset contains. In this case, complementing the extraction with a hierarchical agglomeration of the (insufficiently noise-tolerant) patterns increases the quality of the returned collection of patterns.
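As a small illustration of the noise-tolerance condition sketched above (a pattern tolerates a bounded number of '0's per row and column of the selected sub-matrix), the following membership test can be written in a few lines; it only checks one candidate pattern and says nothing about the far harder exhaustive extraction performed by the thesis's algorithm.

    # Tiny illustration of the noise-tolerance condition: a set of objects x
    # properties forms a pattern if every selected row and column of the 0/1
    # sub-matrix contains at most `eps` zeros. Membership test only.
    import numpy as np

    def is_noise_tolerant_pattern(matrix, rows, cols, eps=1):
        sub = matrix[np.ix_(rows, cols)]
        row_ok = np.all((sub == 0).sum(axis=1) <= eps)
        col_ok = np.all((sub == 0).sum(axis=0) <= eps)
        return bool(row_ok and col_ok)

    relation = np.array([[1, 1, 1, 0],
                         [1, 0, 1, 1],
                         [1, 1, 1, 1],
                         [0, 0, 1, 0]])
    print(is_noise_tolerant_pattern(relation, rows=[0, 1, 2], cols=[0, 2, 3], eps=1))  # True
    print(is_noise_tolerant_pattern(relation, rows=[0, 1, 2, 3], cols=[0, 1], eps=1))  # False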
19

Wu, Wencen. „Bio-inspired cooperative exploration of noisy scalar fields“. Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/48940.

Abstract:
A fundamental problem in mobile robotics is the exploration of unknown fields that might be inaccessible or hostile to humans. Exploration missions of great importance include geological survey, disaster prediction and recovery, and search and rescue. For missions in relatively large regions, mobile sensor networks (MSN) are ideal candidates. The basic idea of MSN is that mobile robots form a sensor network that collects information, meanwhile, the behaviors of the mobile robots adapt to changes in the environment. To design feasible motion patterns and control of MSN, we draw inspiration from biology, where animal groups demonstrate amazingly complex but adaptive collective behaviors to changing environments. The main contributions of this thesis include platform independent mathematical models for the coupled motion-sensing dynamics of MSN and biologically-inspired provably convergent cooperative control and filtering algorithms for MSN exploring unknown scalar fields in both 2D and 3D spaces. We introduce a novel model of behaviors of mobile agents that leads to fundamental theoretical results for evaluating the feasibility and difficulty of exploring a field using MSN. Under this framework, we propose and implement source seeking algorithms using MSN inspired by behaviors of fish schools. To balance the cost and performance in exploration tasks, a switching strategy, which allows the mobile sensing agents to switch between individual and cooperative exploration, is developed. Compared to fixed strategies, the switching strategy brings in more flexibility in engineering design. To reveal the geometry of 3D spaces, we propose a control and sensing co-design for MSN to detect and track a line of curvature on a desired level surface.
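One common ingredient of cooperative exploration of a noisy scalar field is a gradient estimate obtained jointly from the agents' noisy samples, which the group then climbs toward the source. The sketch below illustrates that generic idea with a least-squares gradient fit; it is not the cooperative filters or controllers developed in the thesis, and the field, formation and gains are invented.

    # Minimal sketch of cooperative source seeking on a noisy scalar field:
    # agents in a small formation sample the field, a local gradient is fitted
    # by least squares, and the formation centre moves up-gradient.
    import numpy as np

    rng = np.random.default_rng(2)
    SOURCE = np.array([5.0, -3.0])

    def field(p, noise=0.3):
        """Unknown scalar field (peak at SOURCE) observed with sensor noise."""
        return -np.sum((p - SOURCE) ** 2) + rng.normal(0.0, noise)

    offsets = np.array([[0.5, 0.0], [-0.5, 0.0], [0.0, 0.5], [0.0, -0.5]])  # 4-agent formation
    centre = np.array([0.0, 0.0])
    for step in range(60):
        positions = centre + offsets
        values = np.array([field(p) for p in positions])
        A = np.hstack([positions - centre, np.ones((len(offsets), 1))])
        grad = np.linalg.lstsq(A, values, rcond=None)[0][:2]    # least-squares gradient estimate
        centre = centre + 0.05 * grad                            # climb the estimated gradient
    print("final centre estimate:", np.round(centre, 2), "true source:", SOURCE)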
20

Bahri, Emna. „Amélioration des procédures adaptatives pour l'apprentissage supervisé des données réelles“. Thesis, Lyon 2, 2010. http://www.theses.fr/2010LYO20089/document.

Abstract:
Machine learning must cope with various difficulties when confronted with real data. Indeed, these data are generally complex, voluminous and heterogeneous, coming from varied sources and often acquired automatically. Among the best-known problems are the sensitivity of the algorithms to noisy data and the treatment of data with an unbalanced class distribution. Overcoming these problems is a real challenge for improving the effectiveness of the learning process on real data. In this thesis, we have chosen to work on adaptive (boosting) procedures so that they remain effective in the presence of noise or unbalanced data. First, we are interested in making boosting robust to noise. Boosting procedures have contributed greatly to improving the predictive power of classifiers in data mining, but they are sensitive to noisy data. In this case, two problems arise: the over-fitting of noisy examples and the deterioration of the convergence rate of boosting. Against this double problem, we propose AdaBoost-Hybride, an adaptation of the AdaBoost algorithm based on smoothing the results of previous boosting hypotheses, which gave very satisfactory experimental results. Then, we are interested in another difficult problem, prediction when the class distribution is unbalanced. We therefore propose an adaptive boosting-type method based on associative classification, whose interest is that it allows the focus to be put on small groups of cases, which is well suited to unbalanced data. This method relies on three contributions: FCP-Growth-P, a supervised algorithm for generating frequent class itemsets, derived from FP-Growth, into which a pruning condition based on counter-examples is introduced for the specification of rules; W-CARP, an associative classification method which aims to give results at least equivalent to those of existing approaches for a much shorter execution time; and CARBoost, an adaptive associative classification method that uses W-CARP as a weak classifier. Finally, in a chapter devoted to the specific application of intrusion detection, we compared the results of AdaBoost-Hybride and CARBoost with those of reference methods (KDD Cup 99 data).
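For context, the sketch below shows a plain AdaBoost training loop on noisy labels, with a comment marking the weight update that AdaBoost-Hybride is described as tempering with the results of previous hypotheses; the exact smoothing rule is in the thesis and is not implemented here. The data are synthetic and scikit-learn decision stumps serve as weak learners.

    # Baseline AdaBoost loop on noisy labels (reference only, not AdaBoost-Hybride).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    y[rng.random(200) < 0.1] *= -1                       # 10% label noise

    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for t in range(20):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w[pred != y]) / np.sum(w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        ensemble.append((alpha, stump))
        # AdaBoost-Hybride is described as smoothing this update with the
        # results of previous hypotheses, to avoid over-weighting noisy
        # examples; the exact rule is given in the thesis.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()

    agg = np.sign(sum(a * s.predict(X) for a, s in ensemble))
    print("training accuracy:", np.mean(agg == y))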
21

Ballard, Susan Patricia Art College of Fine Arts UNSW. „Out of order: explorations in digital materiality“. Publisher: University of New South Wales. Art, 2008. http://handle.unsw.edu.au/1959.4/42596.

Abstract:
Digital art installation is the result of informatic materials entering gallery spaces and challenging the establishment of media forms. This thesis contends that the open, recursive and recombinatory process of looking at digital installation is in fact the result of noisy relations between information and the spatial temporal contexts of the art gallery. In order to focus on the processes of informatic materials within gallery spaces, this thesis identifies four key modulations of noise and materiality: emergence, feedback, entropy and delay. I demonstrate how these impact on a range of recent digital installations by Australian and New Zealand artists. The lens of digital materiality shifts from an informational context into that of art history, where it is found to highlight the systemic relationality of the installation. The thesis opens with a consideration of histories of media-specificity, and argues for a necessary separation of our concepts of media and materiality. This context provides a set of tools by which the remainder of the thesis investigates a range of digital material flows that are not tied to fixed media definitions. I draw on a range of theorists including Umberto Eco, Gilles Deleuze, Claude Shannon and Jack Burnham to further locate these material flows within two strands: experimental sound and information theory. This discussion forms the basis of the thesis's re-appraisal of media distinctions and highlights the complex relationship of informational materials to both sonic and visual histories. The second half of the thesis undertakes an appraisal of emergence, feedback, entropy and delay in specific works and suggests dimensionality, movement and duration as key determinants of the digital installation. These chapters demonstrate that what is at stake in digital installation is the viewer's implicit role in the shifting relationships of digital materiality. Overall, this thesis presents a framework for emergent materiality in digital installation. I develop a theory of emergent materiality as a process specific to digital installation, and argue that digital installation is in fact a subject-forming assemblage of information-noise in which relations of dimensionality, movement and duration coalesce without cohering, and within which gallery spaces begin to get noisy.
22

Abboud, Yacine. „Fouille de motifs : entre accessibilité et robustesse“. Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0176/document.

Abstract:
Information now occupies a central place in our daily lives; it is both ubiquitous and easy to access. Yet extracting information from data is often an inaccessible process. Indeed, even though data mining methods are now accessible to all, the results of this mining are often complex to obtain and to exploit for the user. Pattern mining combined with the use of constraints is a very promising direction in the literature, both to improve the efficiency of the mining and to make its results more apprehensible to the user. However, the combination of constraints desired by the user is often problematic, because it does not always fit the characteristics of the mined data, such as noise. In this thesis, we propose two new constraints and an algorithm to overcome this issue. The robustness constraint allows noisy data to be mined while preserving the added value of the contiguity constraint. The extended closedness constraint improves the apprehensibility of the set of extracted patterns while being more noise-resistant than the conventional closedness constraint. The C3Ro algorithm is a generic sequential pattern mining algorithm that integrates many constraints, including the two new constraints we introduce, to provide the user with the most efficient mining possible while minimising the size of the set of extracted patterns. C3Ro competes with the best pattern mining algorithms in the literature in terms of execution time while consuming significantly less memory. C3Ro has been evaluated on the extraction of competencies from web-based job postings.
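To give a concrete feel for a robustness-style constraint that relaxes strict contiguity in noisy sequences, here is a small, self-contained check that accepts a pattern occurrence when at most a bounded number of extra items is interleaved within the matched span. This is only an illustration of the general idea; it is not the C3Ro algorithm or its actual constraint definitions.

    # Illustration only: accept an occurrence of `pattern` inside `sequence`
    # when the pattern items appear in order with at most `max_noise` extra
    # items interleaved in the matched span.
    def occurs_with_noise(pattern, sequence, max_noise=1):
        n = len(pattern)
        for start in range(len(sequence)):
            i, pos = 0, start
            while pos < len(sequence) and i < n and (pos - start) - i <= max_noise:
                if sequence[pos] == pattern[i]:
                    i += 1
                pos += 1
            if i == n and (pos - start) - n <= max_noise:
                return True
        return False

    print(occurs_with_noise(["a", "b", "c"], ["x", "a", "b", "z", "c", "y"], max_noise=1))  # True
    print(occurs_with_noise(["a", "b", "c"], ["a", "z", "z", "b", "c"], max_noise=1))       # False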
23

Quetard, Boris. „Anticipation et accumulation active d'information sensorielle dans la prise de décision en situations de vision normale et dégradée“. Thesis, Université Clermont Auvergne‎ (2017-2020), 2018. http://www.theses.fr/2018CLFAL006/document.

Abstract:
Driving a vehicle in the fog requires the integration of noisy visual information with expectations about the visual road scene, in order to search for visual cues important for navigating. The visual search and identification of relevant objects can be seen as decision-making processes where sensory information is accumulated and where expectations about the target object and its context are integrated. The accumulation of information is often modelled as a passive process. This thesis focuses on the contribution of active mechanisms integrating expectations about the target (its identity, its location) with degraded sensory information (with fog or artificial noise). We used the mouse-tracking paradigm, which allows dynamic aspects of the decision-making process to be inferred from computer-mouse movements. Study 1 evaluates the effect of the context on categorizing a target and suggests a trade-off between the speed and accuracy of the evidence-accumulation process, which can be seen as actively influencing the decision. But this study cannot directly evaluate the active collection of evidence. In Studies 2 and 3, target detection and verification are measured directly through eye movements during visual search tasks in visually degraded scenes. We manipulated the expectations about the location (Study 2) and the identity (Study 3) of the target. These studies emphasize the contributions of the detection and verification processes to the accumulation of evidence toward the target-present and target-absent responses. In conclusion, we propose the draft of a decision-making model which integrates the dynamics between the accumulation of evidence and the oculomotor system.
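The evidence-accumulation view described above is often formalised with drift-diffusion style models, where degraded input (for example fog) corresponds to a lower drift rate and hence slower decisions. The sketch below simulates that generic mechanism with invented parameters; it is not the model or data used in the thesis.

    # Generic evidence-accumulation (drift-diffusion style) sketch: weaker
    # sensory evidence means a lower drift rate and slower, noisier decisions.
    import numpy as np

    def simulate_decision(drift, threshold=1.0, noise_sd=0.1, dt=0.01, max_t=10.0, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        evidence, t = 0.0, 0.0
        while abs(evidence) < threshold and t < max_t:
            evidence += drift * dt + rng.normal(0.0, noise_sd * np.sqrt(dt))
            t += dt
        return ("target present" if evidence > 0 else "target absent"), t

    rng = np.random.default_rng(3)
    for label, drift in [("clear scene", 0.8), ("foggy scene", 0.2)]:
        times = [simulate_decision(drift, rng=rng)[1] for _ in range(500)]
        print(f"{label}: mean response time ~ {np.mean(times):.2f} s")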
24

Wan, Tsimhei. „Predicting optimal self-assembly of patchy colloids via simulation : an exploration of local search metaheuristics on a noisy yield landscape“. Thesis, University of Bath, 2019. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.767600.

Abstract:
Self-assembly systems, based on polymers, nanoparticles, colloidal-sized particles or other derivatives, are an appealing way to create materials with high complexity and detail. These systems are highly sensitive to changes in the interactions between the system's components, leading to different emergent structures. In materials science, targeted design of new colloidal systems requires tuning the interactions such that the desired target structure or macro-behaviour emerges, and identifying a high-yielding self-assembly path. Changes to the interactions can be represented by a parameter space. Time-dependent simulations (or experiments) are necessary to find emergent systems in parameter space, and their complexity demands resources and time. The conventional systematic (brute-force) scan of parameter space is highly inefficient, with most resources spent evaluating low-yielding regions. Typically, to counter noisy measurements and accurately identify favourable parameter values, an average is taken across multiple simulations. We examine hill-climbing as a potential alternative for tuning parameter values to obtain high-yielding systems. As an example tuning problem, a two-dimensional short-ranged attractive patchy hard-disk model, important for coarse-grained modelling of polymers and biological systems, and a yield measure for quantifying our target structure (large, round, compact honeycomb clusters) are introduced. Varying the interaction strength and patch width, a noisy landscape is constructed from Monte Carlo simulation yield data. We show that a hill-climbing search on this landscape can locate the localised region of high-yielding assembly, and suggest situations where this is advantageous over a brute-force scan.
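The search strategy described above can be illustrated in a few lines of Python: a hill-climbing loop over two parameters in which every evaluation is an average of several noisy "simulations". The yield function below is an invented stand-in for the patchy hard-disk Monte Carlo model, so only the structure of the search is meaningful.

    # Minimal hill-climbing sketch on a noisy two-parameter yield landscape:
    # each point is evaluated by averaging several noisy "simulations", and
    # the search moves to a neighbour only if its averaged yield improves.
    import random

    OPTIMUM = (6.0, 0.4)                                # (interaction strength, patch width)

    def noisy_yield(eps, width, n_repeats=5):
        true_yield = max(0.0, 1.0 - 0.05 * (eps - OPTIMUM[0]) ** 2 - 4.0 * (width - OPTIMUM[1]) ** 2)
        samples = [true_yield + random.gauss(0.0, 0.05) for _ in range(n_repeats)]
        return sum(samples) / n_repeats                 # average to counter measurement noise

    random.seed(4)
    current = (3.0, 0.2)                                # arbitrary starting parameters
    current_score = noisy_yield(*current)
    for _ in range(200):
        candidate = (current[0] + random.uniform(-0.5, 0.5),
                     current[1] + random.uniform(-0.05, 0.05))
        score = noisy_yield(*candidate)
        if score > current_score:                       # greedy uphill move
            current, current_score = candidate, score
    print("located high-yield region near:", tuple(round(v, 2) for v in current))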
25

Aimon, Cassandre. „Effet de l'environnement sur les stratégies comportementales du bar Dicentrarchus labrax. Cas d'une pénurie de nourriture et d'une marée noire Food deprivation reduces social interest in the European sea bass Dicentrarchus labrax, in the Journal of Experimental Biology 222(3), February 2019“. Thesis, Brest, 2019. http://www.theses.fr/2019BRES0055.

Abstract:
Marine ecosystems are subject to a wide range of natural and anthropogenic forcings. In response to these forcings, marine organisms rely notably on phenotypic plasticity to preserve their fitness. In this thesis, I am particularly interested in the behavioural plasticity of juvenile European sea bass in response to two environmental stressors, food deprivation and exposure to petroleum hydrocarbons. The main objective of this work is to evaluate the consequences of these disturbances through an integrative approach that assesses direct effects at the individual level, but also possible indirect impacts at the population and community levels. The behavioural tests implemented allowed the evaluation of three behavioural traits: sociability, risk-taking and exploration. From an analytical point of view, a principal component analysis was applied in order to objectify the identification of behaviours and their interpretation. Experimental results show that fasting reduces sociability and that exposure to petroleum hydrocarbons can lead to alterations in the anti-predator response. These results suggest adverse effects on the fitness of individuals, with possible repercussions on ecological dynamics through altered intra-specific (gregariousness) and inter-specific (predator/prey) relationships. This research illustrates how behavioural regulation links the effects of environmental disturbances across multiple levels of organization, from the individual to the ecosystem.
26

Samson, Stéphanie. „En attendant l'or. Une histoire souterraine de la colonisation française en Afrique noire. Explorations, prospections, économie minière (1850-1940)“. Paris 10, 2009. http://www.theses.fr/2009PA100137.

Abstract:
Bambuk's gold lured French explorers into the Upper Senegal region in the 18th century. So why was there no Eldorado in the French African colonies south of the Sahara? This research focuses on mining investments: the failure of the Kéniéba mines (mid-19th century), planned by Faidherbe, then Governor of Senegal, who was haunted by the myth of Bambuk's gold; later, the rushes of Ivory Coast, Guinea and the Congo, driven by speculation in the British colonies and the success of the Belgian Congo; and, in the 1930s, the mines of Oubangui-Chari and Cameroon. The forms of colonial mining policy are studied through the objectives, means and methods of the administration, the use of science and techniques (cartography, geology), law reform, and the relationships with companies and African gold miners. At first, military officers and administrators, afraid of a possible gold rush, chose a restrictive mining law. France was skeptical about the mineral wealth of Africa and specialized these colonies in agricultural products. However, bauxite and iron were found. In the 1920s, Antonetti, Governor-General of French Equatorial Africa, favoured big business, which prospected for industrial minerals. In the 1930s, a new lobby created by mining and metallurgy companies (CSMM and Comité des Forges), led by Fernand Blondel, a mining engineer, promoted Africa as a strategic continent for minerals, pushing for state intervention. Nevertheless, in 1939, gold and diamonds remained the main mineral exports of AOF and AEF, with 70% of this gold coming from traditional African mining. This was considered a failure of the colonizing power.
27

Eude, Thibaut. „Forage des données et formalisation des connaissances sur un accident : Le cas Deepwater Horizon“. Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEM079/document.

Abstract:
Le forage de données, méthode et moyens développés dans cette thèse, redéfinit le processus d’extraction de données, de la formalisation de la connaissance et de son enrichissement notamment dans le cadre de l’élucidation d’évènements qui n’ont pas ou peu été documentés. L’accident de la plateforme de forage Deepwater Horizon, opérée pour le compte de BP dans le Golfe du Mexique et victime d’un blowout le 20 avril 2010, sera notre étude de cas pour la mise en place de notre preuve de concept de forage de données. Cet accident est le résultat d’un décalage inédit entre l’état de l’art des heuristiques des ingénieurs de forage et celui des ingénieurs antipollution. La perte de contrôle du puits MC 252-1 est donc une faillite d’ingénierie et il faudra quatre-vingt-sept jours à l’équipe d’intervention pour reprendre le contrôle du puits devenu sauvage et stopper ainsi la pollution. Deepwater Horizon est en ce sens un cas d’ingénierie en situation extrême, tel que défini par Guarnieri et Travadel.Nous proposons d’abord de revenir sur le concept général d’accident au moyen d’une analyse linguistique poussée présentant les espaces sémantiques dans lesquels se situe l’accident. Cela permet d’enrichir son « noyau de sens » et l’élargissement de l’acception commune de sa définition.Puis, nous amenons que la revue de littérature doit être systématiquement appuyée par une assistance algorithmique pour traiter les données compte tenu du volume disponible, de l’hétérogénéité des sources et des impératifs d’exigences de qualité et de pertinence. En effet, plus de huit cent articles scientifiques mentionnant cet accident ont été publiés à ce jour et une vingtaine de rapports d’enquêtes, constituant notre matériau de recherche, ont été produits. Notre méthode montre les limites des modèles d’accidents face à un cas comme Deepwater Horizon et l’impérieuse nécessité de rechercher un moyen de formalisation adéquat de la connaissance.De ce constat, l’utilisation des ontologies de haut niveau doit être encouragée. L’ontologie DOLCE a montré son grand intérêt dans la formalisation des connaissances à propos de cet accident et a permis notamment d’élucider très précisément une prise de décision à un moment critique de l’intervention. La population, la création d’instances, est le coeur de l’exploitation de l’ontologie et son principal intérêt mais le processus est encore très largement manuel et non exempts d’erreurs. Cette thèse propose une réponse partielle à ce problème par un algorithme NER original de population automatique d’une ontologie.Enfin, l’étude des accidents n’échappe pas à la détermination des causes et à la réflexion sur les « faits socialement construits ». Cette thèse propose les plans originaux d’un « pipeline sémantique » construit à l’aide d’une série d’algorithmes qui permet d’extraire la causalité exprimée dans un document et de produire un graphe représentant ainsi le « cheminement causal » sous-jacent au document. On comprend l’intérêt pour la recherche scientifique ou industrielle de la mise en lumière ainsi créée du raisonnement afférent de l’équipe d’enquête. 
Pour cela, ces travaux exploitent les avancées en Machine Learning et Question Answering et en particulier les outils Natural Language Processing.Cette thèse est un travail d’assembleur, d’architecte, qui amène à la fois un regard premier sur le cas Deepwater Horizon et propose le forage des données, une méthode et des moyens originaux pour aborder un évènement, afin de faire émerger du matériau de recherche des réponses à des questionnements qui échappaient jusqu’alors à la compréhension
Data drilling, the method and means developed in this thesis, redefines the process of data extraction and of formalizing and enriching knowledge, particularly for elucidating events that have been poorly documented or not documented at all. The Deepwater Horizon disaster, in which the drilling rig operated for BP in the Gulf of Mexico suffered a blowout on April 20, 2010, is our case study for implementing our proof of concept of data drilling. This accident resulted from an unprecedented gap between the state of the art of drilling engineers' heuristics and that of pollution-response engineers. The loss of control of the MC 252-1 well was therefore an engineering failure, and it took the response team eighty-seven days to regain control of the wild well and halt the pollution. Deepwater Horizon is in this sense a case of engineering in an extreme situation, as defined by Guarnieri and Travadel.
First, we revisit the general concept of an accident by means of an in-depth linguistic analysis presenting the semantic spaces in which the accident is situated. This enriches its "core meaning" and broadens the commonly accepted definition of the term.
Then, we argue that the literature review must be systematically supported by algorithmic assistance to process the data, given the available volume, the heterogeneity of the sources, and the quality and relevance requirements. Indeed, more than eight hundred scientific articles mentioning this accident have been published to date, and some twenty investigation reports, which constitute our research material, have been produced. Our method demonstrates the limitations of accident models when confronted with a case like Deepwater Horizon and the urgent need for an adequate way of formalizing knowledge.
From this observation, the use of upper-level ontologies should be encouraged. The DOLCE ontology proved highly valuable for formalizing knowledge about this accident and, in particular, made it possible to elucidate very precisely a decision taken at a critical moment of the response. Population, the creation of instances, is the heart of an ontology's exploitation and its main interest, but the process is still largely manual and not free of errors. This thesis offers a partial answer to this problem with an original NER algorithm for the automatic population of an ontology.
Finally, the study of accidents cannot avoid determining causes and reflecting on "socially constructed facts". This thesis presents the original design of a "semantic pipeline" built from a series of algorithms that extract the causality expressed in a document and produce a graph representing the "causal path" underlying it. The value for scientific or industrial research lies in thus exposing the reasoning of the investigation team. To do this, this work leverages advances in Machine Learning and Question Answering, and in particular Natural Language Processing tools.
In conclusion, this thesis is the work of an assembler and an architect: it offers both a first look at the Deepwater Horizon case and proposes data drilling, an original method and set of means for approaching an event, in order to draw out of the research material answers to questions that had previously escaped understanding.
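The "semantic pipeline" and the NER-based ontology population are described only at a high level in this abstract. Purely as a hedged illustration of the kind of output such a pipeline targets, a directed "causal path" graph extracted from report text, the sketch below uses naive textual cue patterns and the networkx library; the patterns, example sentences and function names are invented for illustration and are not the thesis's ML/QA-based algorithms.

```python
# Illustrative sketch only: naive cue-based causal extraction, not the thesis's pipeline.
import re
import networkx as nx

CAUSAL_CUES = [
    r"(?P<effect>.+?)\s+because\s+(?P<cause>.+)",
    r"(?P<cause>.+?)\s+led to\s+(?P<effect>.+)",
    r"(?P<cause>.+?)\s+caused\s+(?P<effect>.+)",
]

def extract_causal_edges(sentences):
    """Return (cause, effect) pairs found by simple textual cue patterns."""
    edges = []
    for sentence in sentences:
        for pattern in CAUSAL_CUES:
            match = re.match(pattern, sentence.strip().rstrip("."), flags=re.IGNORECASE)
            if match:
                edges.append((match.group("cause").strip(), match.group("effect").strip()))
                break
    return edges

def build_causal_graph(edges):
    """Build a directed graph whose paths approximate a document's 'causal path'."""
    graph = nx.DiGraph()
    graph.add_edges_from(edges)
    return graph

if __name__ == "__main__":
    # Hypothetical sentences standing in for an investigation report.
    report_sentences = [
        "The cement barrier failed because the slurry was unstable",
        "The failure of the cement barrier led to hydrocarbon influx into the well",
        "Hydrocarbon influx caused the blowout",
    ]
    g = build_causal_graph(extract_causal_edges(report_sentences))
    for cause, effect in g.edges:
        print(f"{cause}  -->  {effect}")
```

A real pipeline would replace the regular expressions with trained NLP models, but the end product, a graph whose edges encode "A caused B" statements found in the investigation reports, has the same shape.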
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Vu, Hong-Son, und 武黃山. „Broad-Bandwidth Active Noise Cancellation Integrated Circuit Design Exploration Targeting at High-Performance/Low-Power for In-ear Headphones“. Thesis, 2016. http://ndltd.ncl.edu.tw/handle/96683710387480231089.

Der volle Inhalt der Quelle
Annotation:
PhD dissertation
Feng Chia University (逢甲大學)
Ph.D. Program in Electrical and Communications Engineering
104 (ROC academic year, i.e. 2015–2016)
Conventional active noise control (ANC) headphones perform well at reducing low-frequency noise and rely on the earmuffs to isolate high-frequency noise passively. Existing ANC systems often use high-speed digital processors to cancel the disturbing noise, which results in high power consumption for a commercial ANC headphone. Although ANC headphone applications are strongly influenced by practical constraints, most previous work on algorithms for ANC headphones is based on simplified simulations only and neglects practical limitations. This dissertation proposes a dedicated ANC circuit implementation based on the well-known adaptive filtered-x least mean square (FxLMS) algorithm for high-fidelity in-ear headphones, including new techniques for designing a VLSI architecture that offers both versatility and scalability. The proposed design techniques, which include proper filter-length selection, a low-power storage mechanism for convolution, parallel processing, and a high-throughput pipelined architecture, provide optimization at the algorithmic, architectural, logic, and circuit levels to achieve both high noise-reduction performance and low power consumption. Using these techniques, this dissertation presents two design examples: (1) a low-power broad-bandwidth noise-cancellation VLSI circuit based on the feed-forward FxLMS algorithm, and (2) a high-performance feedback active noise-cancellation VLSI circuit. Design (1) attenuates broadband pink noise between 50 and 1500 Hz by 15 dB when operated at a 20 MHz clock frequency, at a cost of only 84.2 k gates and 6.59 mW of power consumption; its main drawbacks are higher cost and complexity, since the hardware structure requires two microphones. Design (2) achieves 15 dB of noise reduction with an attenuation bandwidth of up to 600 Hz while using only one microphone, and it is not affected by the causality constraint. Using TSMC 90-nm CMOS technology, the optimum operating frequency of both proposed designs is 20 MHz, which achieves good noise reduction, a high data throughput rate, and low power consumption. The success of this chip implementation demonstrates the correctness and practicability of the proposed design techniques.
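Because the abstract centres on the adaptive filtered-x LMS (FxLMS) algorithm that both chip designs implement, a minimal floating-point simulation of the feed-forward FxLMS update may help fix ideas. This is a hedged NumPy sketch under assumed primary/secondary acoustic paths, filter length and step size; it is not the dissertation's fixed-point VLSI architecture or its actual parameters.

```python
# Hedged simulation sketch of feed-forward FxLMS; all paths and parameters are assumptions.
import numpy as np

def fxlms(reference, disturbance, secondary_path, filter_len=64, mu=1e-3):
    """Adapt W so that the anti-noise (W*x passed through the secondary path)
    cancels the disturbance at the error microphone."""
    w = np.zeros(filter_len)                      # adaptive control-filter weights
    s_hat = np.asarray(secondary_path, float)     # assume a perfect secondary-path estimate
    x_buf = np.zeros(filter_len)                  # reference history for the control filter
    fx_buf = np.zeros(filter_len)                 # filtered-reference history for the update
    y_buf = np.zeros(len(s_hat))                  # anti-noise history for the secondary path
    x_sec = np.zeros(len(s_hat))                  # reference history filtered by s_hat
    errors = np.zeros(len(reference))

    for n in range(len(reference)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = reference[n]
        y = w @ x_buf                             # anti-noise sample
        y_buf = np.roll(y_buf, 1)
        y_buf[0] = y
        e = disturbance[n] + s_hat @ y_buf        # residual at the error microphone
        errors[n] = e
        x_sec = np.roll(x_sec, 1)
        x_sec[0] = reference[n]
        fx = s_hat @ x_sec                        # filtered reference x'(n)
        fx_buf = np.roll(fx_buf, 1)
        fx_buf[0] = fx
        w -= mu * e * fx_buf                      # FxLMS gradient-descent update
    return errors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(20000)                # reference noise at the outer microphone
    primary = np.array([0.0, 0.5, 0.3, 0.1])      # assumed primary acoustic path P(z)
    secondary = np.array([1.0, 0.4, 0.2])         # assumed secondary path S(z)
    d = np.convolve(x, primary)[: len(x)]         # disturbance at the error microphone
    e = fxlms(x, d, secondary)
    print("mean squared error, first vs last 1000 samples:",
          float(np.mean(e[:1000] ** 2)), float(np.mean(e[-1000:] ** 2)))
```

The multiply-accumulate-heavy loop body (convolution, error formation and the weight update w ← w − μ·e·x′) is the kind of computation that the filter-length selection, parallel-processing and pipelining techniques mentioned in the abstract target in hardware.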
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Leavitt, Sarah Van Ness. „Exploration of infectious disease transmission dynamics using the relative probability of direct transmission between patients“. Thesis, 2020. https://hdl.handle.net/2144/41503.

Der volle Inhalt der Quelle
Annotation:
The question “who infected whom” is a perennial one in the study of infectious disease dynamics. To understand characteristics of infectious diseases such as how many people one case will infect over the course of infection (the reproductive number), how much time elapses between the infections of two connected cases (the generation interval), and what factors are associated with transmission, one must ascertain who infected whom. The current best practices for linking cases are contact investigations and pathogen whole genome sequencing (WGS). However, these data sources cannot perfectly link cases, are expensive to obtain, and are often not available for all cases in a study. This lack of discriminatory data limits the use of established methods in many existing infectious disease datasets. We developed a method to estimate the relative probability of direct transmission between any two infectious disease cases. We used a subset of cases that have pathogen WGS or contact investigation data to train a model, and then used demographic, spatial, clinical, and temporal data to predict the relative transmission probabilities for all case pairs using a simple machine learning algorithm called naive Bayes. We adapted existing methods for estimating the reproductive number and the generation interval to use these probabilities. Finally, we explored the associations between various covariates and transmission and how they relate to the associations between covariates and pathogen genetic relatedness. We applied these methods to a tuberculosis outbreak in Hamburg, Germany, and to surveillance data in Massachusetts, USA. Through simulations we found that our estimated transmission probabilities accurately classified pairs as links or non-links and allowed us to accurately estimate the reproductive number and the generation interval. We also found that the association between covariates and genetic relatedness captures the direction but not the absolute magnitude of the association between covariates and transmission, and that this bias was reduced by using effect estimates from the naive Bayes algorithm. The methods developed in this dissertation can be used to explore transmission dynamics and estimate infectious disease parameters in established datasets where this was previously not feasible because of a lack of highly discriminatory information, thereby expanding our understanding of many infectious diseases.
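As a hedged sketch of the pair-level idea described above (not the dissertation's actual code, covariates or data), the following uses scikit-learn's BernoulliNB: a model is trained on case pairs whose link status is known from WGS or contact investigation data, and the fitted model then scores pairs that lack such data; normalising the scores yields relative transmission probabilities. The features, thresholds and training data are invented for illustration.

```python
# Illustrative sketch of pair-level naive Bayes scoring; all data below are invented.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Each row is one ordered case pair with binary covariate-agreement features, e.g.
# [same county, same lineage, onset gap < 1 year, shared social venue] (hypothetical).
X_train = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 0],
])
y_train = np.array([1, 1, 1, 0, 0, 0])   # 1 = probable link (e.g. matching WGS), 0 = non-link

model = BernoulliNB()
model.fit(X_train, y_train)

# Score case pairs that lack WGS or contact-investigation information.
X_unknown = np.array([
    [1, 1, 0, 0],
    [0, 0, 1, 1],
])
link_prob = model.predict_proba(X_unknown)[:, 1]

# Normalise within a putative infectee so the scores are *relative* probabilities;
# here both unknown pairs are assumed to share the same infectee.
relative_prob = link_prob / link_prob.sum()
print(relative_prob)
```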
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Baer, Michelle M. „Clowning around : an exploration of life behind the nose“. Thesis, 2008. http://spectrum.library.concordia.ca/975908/1/MR45274.pdf.

Der volle Inhalt der Quelle
Annotation:
This pilot project demonstrates how Henderson's (2005) method of mask and clown can be effectively adapted to the practice of drama therapy through a practical application with 10 adolescent high school students in a brief drama therapy series. Research was conducted in a phenomenological framework to reveal the authentic meaning of the participants' experience through their own narrative and arts-based documentation. Narrative data were collected from participant-observer notes, participants' journals, responses to a questionnaire, and video transcription. Arts-based data were collected from photographs and film footage of the mask-making and mask-character exploration, clown discovery, and the participants' public clown performance. Qualitative coding of the data, analysis of the outcome, implications for clinical practice, and suggestions for further research are presented.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Akilo, Olufemi David. „Exploration of an electro-magneto-responsive polymeric drug delivery system for enhanced nose-to-brain delivery“. Thesis, 2016. http://hdl.handle.net/10539/21526.

Der volle Inhalt der Quelle
Annotation:
A thesis submitted to the Faculty of Health Sciences, University of the Witwatersrand in fulfilment of the requirements for the degree of Doctor of Philosophy
Delivering drugs to the brain for the treatment of brain diseases has been hampered by low drug bioavailability due to the Blood-Brain Barrier (BBB). The intranasal (IN) route has been recognized as an alternative route for delivering drugs to the brain with improved bioavailability when the nose-to-brain option is considered. However, drugs administered through the nasal mucosa face challenges such as mucociliary clearance, enzymatic degradation, the inability to control drug release well enough to deliver a precise dose (resulting in frequent dosing), and absorption into the systemic circulation through the blood-rich vessels of the mucosa, which again raises the BBB challenge. The aim of this study was to develop a novel Nano-co-Plex (NCP), a magnetic nano-carrier loaded with a therapeutic agent and incorporated into a nasal thermosensitive electro-responsive mucogel (TERM) for in situ gelling, allowing electro-actuated release of the incorporated drug-loaded NCP in a controllable "on-off" pulsatile manner with the aid of an external electric stimulation (ES). The released drug-loaded NCP is then targeted to the brain via a direct nose-to-brain delivery pathway with the aid of an external magnetic field (MF) for rapid transport. The ES is applied as a 5 V potential difference (PD) using electrodes on the nose, and the external MF is applied by placing a magnetic headband on the head of the patient. In this research, the drug-loaded NCP was prepared by first synthesizing iron oxide nanoparticles (magnetite), which were then coated with Polyplex, a polymeric complex fabricated from polyvinyl alcohol (PVA), polyethyleneimine (PEI) and fluorescein isothiocyanate (FITC). The coated magnetite was thereafter loaded with Carmustine (BCNU), an effective drug commonly used in brain tumor treatment, to formulate the BCNU-NCP. The TERM was prepared by blending a thermosensitive polymer, Pluronic F127 (F127), with the mucoadhesive polymers chitosan (CS) and hydroxypropyl methylcellulose (HPMC). Polyaniline (PANI) was included in the blend as the electro-active moiety of the formulation. Finally, the BCNU-NCP was incorporated into the gel to form a Nanogel Composite. A Box–Behnken design model was employed for the optimization of the Nanogel Composite. The TERM, BCNU-NCP and Nanogel Composite were characterized using Thermogravimetric Analysis (TGA), Superconducting Quantum Interference Device (SQUID) magnetometry, Fourier Transform Infrared Spectroscopy (FTIR), Nuclear Magnetic Resonance (NMR), X-ray Diffractometry (XRD), Scanning Electron Microscopy (SEM), Cyclic Voltammetry (CV), Transmission Electron Microscopy (TEM), and rheological, porosimetry, textural and zeta-size analyses. In vitro drug release, ex vivo permeation and in vivo studies were performed. The BCNU-NCP was found to be paramagnetic with a magnetization value of 61 emu/g, consisting of a mixture of spherical and hexagonal core-shell nanoparticles of 30-50 nm with a zeta potential of +32 ± 2 mV. The NCP displayed a high degree of crystallinity with 32% Polyplex coating. The loading capacity of the NCP was 176.86 μg BCNU/mg of carrier, and a maximum release of 75.8% of the loaded BCNU was achieved after 24 hours. FTIR and NMR confirmed the conjugation of the PVA and PEI of the Polyplex at a ratio of 1:4. The cytotoxicity of the BCNU-loaded Nano-co-Plex towards human glioblastoma (HG) A170 cells was superior to that of conventional BCNU.
Cell studies revealed enhanced uptake and internalization of the BCNU-NCP in HG A170 cells in the presence of an external MF. The BCNU-NCP was found to be non-toxic to healthy brain cells. A thermally stable gel with desirable rheological and mucoadhesive properties was developed, with a gelation temperature of 27.5 ± 0.5 °C and a porous morphology. The Nanogel Composite possesses electro-active properties: it responds to ES and releases the incorporated BCNU-NCP in an "on-off" pulsatile release profile upon application of a 5 V PD. In vitro release studies showed an average release of 10.28% of the BCNU-NCP per release cycle. Ex vivo permeation studies were performed using freshly excised nasal tissue from New Zealand white rabbits; the BCNU-NCP permeated through the nasal tissue in a six-fold greater amount in the presence of an MF than in its absence. BCNU concentrations in the brain and CSF of the rabbit were higher when the Nanogel Composite was administered intranasally than after IV injection of conventional BCNU, and applying the MF further increased the BCNU concentration in the brain and CSF. Field Emission Electron Probe Micro-Analyzer (FE-EPMA) results further confirmed the presence of the NCP in the rabbit brain tissue. Histopathological results indicated mild lesions in the nasal mucosa of the rabbit after IN administration of the Nanogel Composite. The in vitro, ex vivo and in vivo results showed that the Nanogel Composite is superior to the conventional drug delivery system for delivering BCNU into the brain for the treatment of brain tumors, as it releases the therapeutic agent in a controllable manner. The applied MF allowed the drug to be targeted and rapidly transported to the brain via the direct nose-to-brain pathway, thereby circumventing the BBB and increasing the bioavailability of the drug in the brain. This vehicle may also be used to deliver other, similar therapeutic agents into the brain for the treatment of various brain diseases.
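The abstract states that a Box–Behnken design was used to optimize the Nanogel Composite. Purely as an illustration of what such a design looks like, the sketch below constructs the coded three-factor Box–Behnken runs and maps them onto hypothetical factor ranges; the factor names and levels are placeholders, not the study's actual formulation variables.

```python
# Illustrative construction of a 3-factor Box-Behnken design; factor ranges are hypothetical.
from itertools import combinations, product
import numpy as np

def box_behnken(n_factors=3, n_center=3):
    """Return coded (-1, 0, +1) runs: +/-1 for every factor pair, 0 elsewhere, plus centre points."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[i], run[j] = a, b
            runs.append(run)
    runs.extend([[0] * n_factors for _ in range(n_center)])
    return np.array(runs)

def decode(coded, lows, highs):
    """Map coded levels onto real factor values."""
    lows, highs = np.asarray(lows, float), np.asarray(highs, float)
    return lows + (coded + 1) / 2 * (highs - lows)

if __name__ == "__main__":
    design = box_behnken()                      # 12 edge runs + 3 centre points = 15 runs
    # Hypothetical factors, e.g. F127 %, HPMC %, PANI % (not the study's actual levels).
    print(decode(design, lows=[15, 0.5, 0.1], highs=[25, 2.0, 1.0]))
```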
MB2016
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Dahan, Jean-Jacques. „La démarche de découverte expérimentalement médiée par Cabri-Géomètre en mathématiques: un essai de formalisation à partir de l'analyse de démarches de résolutions de problèmes de boîtes noires“. Phd thesis, 2005. http://tel.archives-ouvertes.fr/tel-00356107.

Der volle Inhalt der Quelle
Annotation:
Our work focuses on the discovery process based on experiments carried out with Cabri-Géomètre. The analysis of a corpus extending beyond mathematics clarifies how discovery occurs or is transmitted, as well as the role of experimentation in these processes. It supports our hypothesis of decomposing the experimental discovery process into pre- and post-conjecture macro-stages, themselves decomposable into micro-stages of the exploration-interpretation type.
The analysis of the resolution of a particular black box allows us to refine our a priori model of the discovery process by specifying the role of the figure (Duval), the levels of geometry (Parzysz's G1 and G2 praxeologies) together with the extensions we develop (computer-based G1 and G2), the frameworks of investigation (Millar), and the place of experimental proof (Johsua).
The analyses of the experiments we set up yield an improved model that should allow teachers to have a minimal knowledge of the heuristic stages of their students' work, to design study and research activities with precise objectives linked to the formalized stages of our model, and to consider how these activities might be assessed.
Analyses of existing activities with our grid demonstrate the validity of the model studied. Proposed activities were constructed to encourage the emergence of particular phases of the research process; they show the viability of this model for designing didactic engineering that generates a process consistent with the one postulated.
APA, Harvard, Vancouver, ISO und andere Zitierweisen