To see the other types of publications on this topic, follow the link: Bayesův filtr.

Dissertations on the topic "Bayesův filtr"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the top 29 dissertations for your research on the topic "Bayesův filtr".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and a bibliographic reference to the chosen work will be generated automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read its online abstract whenever the metadata provide the relevant parameters.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Havelka, Martin. „Detekce aktuálního podlaží při jízdě výtahem“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-444988.

Annotation:
This diploma thesis deals with the detection of the current floor during an elevator ride. This functionality is necessary for a robot to move around a multi-floor building. The task is solved by fusing accelerometric data recorded during the ride with image data obtained from the information display inside the elevator cabin. The research part describes already implemented solutions, data fusion methods, and image classification options; based on it, suitable approaches for solving the problem were proposed. First, datasets from different types of elevator cabins were collected. An algorithm for processing the data from the accelerometric sensor was developed, and a convolutional neural network was selected and trained to classify the image data from the displays. Subsequently, the data fusion method was implemented. The individual parts were tested and evaluated and, based on this evaluation, integrated into one functional system, which was successfully verified and tested. The detection accuracy achieved during rides in different elevators was 97%.
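The display-and-accelerometer fusion described in the annotation can be pictured as a simple late fusion of two per-floor probability vectors. This is a hypothetical illustration (invented probabilities, naive Bayes style product fusion), not the thesis code:

```python
# Hypothetical late-fusion sketch: combine per-floor probabilities from an
# accelerometer-based motion model with probabilities from a display
# classifier by multiplying the two and renormalizing.

def fuse_floor_estimates(p_accel, p_display):
    """Element-wise product of two per-floor probability vectors, renormalized."""
    joint = [a * d for a, d in zip(p_accel, p_display)]
    total = sum(joint)
    if total == 0.0:
        # Sources fully disagree; fall back to the motion model alone.
        return list(p_accel)
    return [j / total for j in joint]

# Accelerometer integration suggests floor 2; the display classifier agrees.
p_accel = [0.1, 0.2, 0.6, 0.1]      # floors 0..3
p_display = [0.05, 0.15, 0.7, 0.1]
fused = fuse_floor_estimates(p_accel, p_display)
best_floor = max(range(len(fused)), key=fused.__getitem__)
```

In practice the display branch would only contribute when a readable display is detected, which is one reason the thesis also needs the accelerometer branch.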
2

Guňka, Jiří. „Adaptivní klient pro sociální síť Twitter“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-237052.

Annotation:
The goal of this project is to create a user-friendly Twitter client. It applies machine-learning methods, such as a naive Bayes classifier, to point out new tweets likely to interest the user. Hyperbolic trees, among other methods, are used to visualize these tweets.
3

Matula, Tomáš. „Techniky umělé inteligence pro filtraci nevyžádané pošty“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236060.

Annotation:
This thesis focuses on e-mail classification and describes the basic approaches to spam filtering. Bayesian spam classifiers and artificial immune systems are analyzed and applied. Furthermore, existing applications and evaluation metrics are described. The aim of the thesis is to design and implement an algorithm for spam filtering; finally, the results are compared with selected well-known methods.
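The Bayesian spam classification mentioned above can be sketched as a multinomial naive Bayes scorer with Laplace smoothing. This is a minimal illustration with made-up training messages, not the thesis implementation:

```python
import math
from collections import Counter

# Minimal multinomial naive Bayes spam scorer with Laplace smoothing,
# which avoids zero probabilities for unseen words.

def train(spam_docs, ham_docs):
    spam_counts = Counter(w for d in spam_docs for w in d.split())
    ham_counts = Counter(w for d in ham_docs for w in d.split())
    vocab = set(spam_counts) | set(ham_counts)
    return spam_counts, ham_counts, vocab

def log_odds_spam(msg, spam_counts, ham_counts, vocab, prior_spam=0.5):
    """Log-odds of spam vs. ham for one message (> 0 means 'more likely spam')."""
    n_spam = sum(spam_counts.values())
    n_ham = sum(ham_counts.values())
    v = len(vocab)
    score = math.log(prior_spam) - math.log(1.0 - prior_spam)
    for w in msg.split():
        p_w_spam = (spam_counts[w] + 1) / (n_spam + v)  # Laplace smoothing
        p_w_ham = (ham_counts[w] + 1) / (n_ham + v)
        score += math.log(p_w_spam) - math.log(p_w_ham)
    return score

spam_counts, ham_counts, vocab = train(
    ["win money now", "free money offer"],
    ["meeting at noon", "project status update"])
score = log_odds_spam("free money", spam_counts, ham_counts, vocab)
```

A production filter would add tokenization, header features, and a decision threshold tuned to the cost of false positives.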
4

Ravet, Alexandre. „Introducing contextual awareness within the state estimation process : Bayes filters with context-dependent time-heterogeneous distributions“. Thesis, Toulouse, INSA, 2015. http://www.theses.fr/2015ISAT0045/document.

Annotation:
Prevalent approaches for endowing robots with autonomous navigation capabilities require the estimation of a system state representation based on noisy sensor information. This system state usually comprises a set of dynamic variables, such as the position, velocity, and orientation, required for the robot to achieve a task. In robotics, as in many other fields, research efforts on state estimation have converged towards the popular Bayes filter. The primary reason for its success is its simplicity, from the mathematical tools required by the recursive filtering equations to the light and intuitive system representation provided by the underlying Hidden Markov Model. Recursive filtering also provides the most common and reliable method for real-time state estimation thanks to its computational efficiency. To keep computational complexity low, but also because real physical systems are never perfectly understood, and hence never faithfully represented by a model, Bayes filters usually rely on a minimal system state representation. Any unmodeled or unknown aspect of the system is then encompassed within additional noise terms. On the other hand, autonomous navigation requires robustness and adaptation capabilities in changing environments, which creates the need for introducing contextual awareness within the filtering process. In this thesis, we specifically focus on enhancing state estimation models to deal with context-dependent sensor performance alterations. The issue is then to establish a practical balance between computational complexity and realistic modelling of the system through the introduction of contextual information. We investigate achieving this balance by extending the classical Bayes filter to compensate for the optimistic assumptions made by modeling the system through time-homogeneous distributions, while still benefiting from the computational efficiency of recursive filtering.
Based on raw data provided by a set of sensors and any relevant information, we start by introducing a new context variable, without ever trying to characterize a concrete context typology. Within the Bayesian framework, machine learning techniques are then used to automatically define a context-dependent time-heterogeneous observation distribution by introducing two additional models: one providing observation noise predictions and one providing observation selection rules. The investigation also concerns the impact of the chosen training method. In the context of Bayesian filtering, the model is usually trained in a generative manner: optimal parameters are those that best explain the data observed in the training set. On the other hand, discriminative training can implicitly help compensate for mismodeled aspects of the system by optimizing the model parameters with respect to the ultimate system performance, the estimate accuracy. Going deeper into the discussion, we also analyse how the training method changes the meaning of the model, and how this property can be properly exploited. Throughout the manuscript, results obtained with simulated and representative real data are presented and analysed.
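The pairing of predicted observation noise with an observation selection rule can be illustrated with a toy recursive filter. Below, a one-dimensional Kalman filter's measurement variance R is predicted per step from a context feature, and a gating rule may skip a measurement entirely; the noise map and thresholds are made-up illustrations, not the learned models from the thesis:

```python
# Sketch of a context-dependent time-heterogeneous observation model inside
# a recursive filter: a 1-D Kalman filter whose measurement variance R is
# predicted from a context feature, with an observation-selection gate.

def predict_R(context):
    """Hypothetical learned map from a raw context feature to sensor noise."""
    return 0.5 + 4.0 * context  # context in [0, 1]: 0 = clean, 1 = degraded

def kalman_step(x, P, z, context, q=0.01, reject_above=3.0):
    P = P + q                       # time update (random-walk motion model)
    R = predict_R(context)          # time-heterogeneous observation noise
    if R > reject_above:            # selection rule: drop a degraded sensor
        return x, P
    K = P / (P + R)                 # Kalman gain
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1.0
for z, ctx in [(1.0, 0.1), (1.2, 0.9), (0.9, 0.2)]:
    x, P = kalman_step(x, P, z, ctx)
```

The second measurement is rejected because its predicted noise exceeds the gate, so only the two low-noise observations shape the estimate.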
5

Sontag, Ralph. „Hat Bayes eine Chance?“ Universitätsbibliothek Chemnitz, 2004. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200400556.

Annotation:
Workshop "Netz- und Service-Infrastrukturen". Does Bayes stand a chance? For some months or years now, Bayes filters have increasingly been used to separate useful e-mail ("ham") from unwanted "spam". However, they quickly reach their limits. A second section analyses a spam-filter test published by the magazine c't in more detail.
6

Fredborg, Johan. „Spam filter for SMS-traffic“. Thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94161.

Annotation:
Communication through text messaging, SMS (Short Message Service), is nowadays a huge industry with billions of active users. This huge user base has attracted many companies that try to market themselves through unsolicited messages in this medium, just as was previously done through e-mail, and SMS spam has now become a plague in many countries. This report evaluates several established machine learning algorithms to see how well they can be applied to the problem of filtering unsolicited SMS messages. Each filter is mainly evaluated by analyzing its accuracy on stored message data. The report also discusses and compares hardware requirements against performance, measured by how many messages can be evaluated in a fixed amount of time. The evaluation shows that a decision tree filter is the best choice among the filters considered: it has the highest accuracy as well as a message-processing rate high enough to be applicable. The decision tree filter found most suitable for the task in this environment has been implemented, and the accuracy of this new implementation is shown to be as high as that of the implementation used during the evaluation. Although the decision tree filter proved the best choice among those evaluated, its accuracy turned out not to be high enough to meet the specified requirements. The results are nevertheless promising for further work in this area using improved methods on the best-performing algorithms.
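A decision-tree filter of the kind evaluated in the report can be illustrated with a hand-built stump over two simple message features. The report learns such trees from data; the features and thresholds here are invented:

```python
# Illustrative decision-tree-style SMS filter: a hand-built stump over two
# message features (URL presence, uppercase ratio). A learned tree would
# pick its own features and split points from training data.

def features(msg):
    words = msg.lower().split()
    return {
        "has_url": any(w.startswith("http") for w in words),
        "caps_ratio": sum(c.isupper() for c in msg) / max(len(msg), 1),
    }

def classify(msg):
    f = features(msg)
    if f["has_url"]:          # first split: links are a strong spam signal
        return "spam"
    if f["caps_ratio"] > 0.5:  # second split: shouty messages
        return "spam"
    return "ham"
```

Each `if` corresponds to one internal node of the tree; accuracy then hinges on how well the learned splits generalize to unseen messages.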
7

Valová, Alena. „Optimální metody výměny řídkých dat v senzorové síti“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-318682.

Annotation:
This thesis is focused on object tracking by a decentralized sensor network using fusion-center-based and consensus-based distributed particle filters. The model includes clutter as well as missed detections of the object. The approach exploits the sparsity of the global likelihood function, which, by means of an appropriate sparse approximation and a suitable dictionary selection, can significantly reduce the communication requirements in the decentralized sensor network. The master's thesis contains a design of methods for exchanging sparse data in the sensor network and a comparison of the proposed methods in terms of accuracy and energy requirements.
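The distributed particle filters mentioned above build on the standard bootstrap particle filter. A single-node, one-dimensional sketch (illustrative motion and measurement models, not the thesis design) looks like this:

```python
import math
import random

# Bootstrap particle filter for 1-D object tracking: propagate, weight by
# the local measurement likelihood, then resample. A decentralized network
# would additionally exchange (sparsely approximated) likelihood information.

def particle_filter_step(particles, z, sigma_motion=0.5, sigma_obs=1.0):
    # Propagate particles through a random-walk motion model.
    particles = [p + random.gauss(0.0, sigma_motion) for p in particles]
    # Weight each particle by the Gaussian likelihood of the measurement.
    weights = [math.exp(-0.5 * ((z - p) / sigma_obs) ** 2) for p in particles]
    # Multinomial resampling (weights are normalized internally by choices).
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(-10.0, 10.0) for _ in range(500)]
for z in [2.0, 2.1, 1.9, 2.0]:
    particles = particle_filter_step(particles, z)
estimate = sum(particles) / len(particles)
```

After a few measurements near 2.0, the particle cloud concentrates around the true position; the communication cost the thesis attacks comes from sharing such clouds or likelihoods across nodes.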
8

Delobel, Laurent. „Agrégation d'information pour la localisation d'un robot mobile sur une carte imparfaite“. Thesis, Université Clermont Auvergne‎ (2017-2020), 2018. http://www.theses.fr/2018CLFAC012/document.

Annotation:
Most large modern cities suffer from pollution and traffic jams. A possible solution would be to regulate personal car access to city centers in favor of a public transport system of pollution-free autonomous shuttles that could dynamically change their planned trajectories to transport people in a fully on-demand scenario. Such vehicles could also transport employees across a large industrial facility or within a regulated-access critical infrastructure area. To perform such a task, a vehicle must be able to localize itself in its area of operation. Most popular localization methods in this setting are based on so-called "Simultaneous Localization and Mapping" (SLAM) methods, which dynamically construct a map of the environment and locate the vehicle within it. Although these methods have demonstrated their robustness, most implementations lack a map representation that can be shared between vehicles (map size, structure, etc.). Moreover, they frequently ignore already existing information, such as a city map, and instead build the map from scratch. To go beyond these limitations, we propose to use pre-existing semantic high-level maps, such as OpenStreetMap, as the a-priori map on which the vehicle localizes. Such maps can contain the locations of roads, traffic signs and traffic lights, buildings, and so on. They almost always come with some degree of positional imprecision and can also be wrong: real landmarks may be missing from the map data, and elements stored in the map may no longer exist. To manage such imperfections and still allow a vehicle to localize against this data, we propose a new strategy.
Firstly, to manage the classical data-incest problem of data fusion in the presence of strong correlations, together with the map scalability problem, we propose to manage the whole map with a Split Covariance Intersection filter. We also propose to remove landmarks that are still present in the map data but possibly absent in reality, by estimating their probability of existence from vehicle sensor detections and deleting those with a low score. Finally, we propose to periodically scan the sensor data for potential new landmarks that the map does not yet include, and to integrate them into the map data. Experiments show the feasibility of such a dynamic high-level map that can be updated on the fly.
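The landmark-existence bookkeeping described above can be sketched as a binary Bayes filter in log-odds form. The detection and false-alarm probabilities below are invented sensor characteristics, not values from the thesis:

```python
import math

# Binary Bayes filter (log-odds form) for the probability that a mapped
# landmark still exists, updated from detect / no-detect events.

P_DETECT = 0.8        # assumed P(detected | landmark exists)
P_FALSE_ALARM = 0.2   # assumed P(detected | landmark absent)

def update_log_odds(l, detected):
    if detected:
        l += math.log(P_DETECT / P_FALSE_ALARM)
    else:
        l += math.log((1 - P_DETECT) / (1 - P_FALSE_ALARM))
    return l

def prob(l):
    """Convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-l))

l = 0.0  # prior P(exists) = 0.5
for detected in [False, False, False, False]:  # landmark never re-observed
    l = update_log_odds(l, detected)
# prob(l) is now low; the map manager would drop this landmark.
```

A landmark repeatedly missed by the sensors drifts toward probability zero and is pruned, while repeated detections drive it toward one.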
9

Garcia, Elmar [Verfasser], und Tino [Akademischer Betreuer] Hausotte. „Bayes-Filter zur Genauigkeitsverbesserung und Unsicherheitsermittlung von dynamischen Koordinatenmessungen / Elmar Garcia. Gutachter: Tino Hausotte“. Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2014. http://d-nb.info/1054731764/34.

10

Dall'ara, Jacopo. „Algoritmi per il mapping ambientale mediante array di antenne“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14267/.

Annotation:
The ability to build maps carrying statistical information about an unknown environment, using sensors deployed within it, is a problem that has been the subject of extensive scientific research around the world over the last two decades, as it is linked to countless practical applications. The purpose of this thesis is to provide a brief theoretical introduction to this problem and then to propose a new, more efficient method that can replace, or partially complement, the tools in use today.
11

Obst, Marcus. „Untersuchungen zur kooperativen Fahrzeuglokalisierung in dezentralen Sensornetzen“. Master's thesis, Universitätsbibliothek Chemnitz, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200900264.

Annotation:
The dynamic estimation of a vehicle's position through sensor data fusion is one of the fundamental tasks for modern traffic applications such as driverless transport systems or pre-crash safety systems. This thesis presents a method for decentralized cooperative vehicle localization based on a general approach to fusing information from several participants. Both the local and the transmitted estimates are represented by particles. A simulation shows that the position estimates of the individual participants in the network improve compared to a purely GPS-based solution.
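One way to picture the cooperative fusion step is a vehicle reweighting its own particle cloud by the likelihood of a neighbor's transmitted estimate and then resampling. The Gaussian neighbor model and all numbers below are illustrative assumptions, not the method of the thesis:

```python
import math
import random

# Cooperative-localization sketch: fuse an own particle-based belief with a
# neighbor's transmitted (mean, std) estimate by importance reweighting.

def fuse_with_neighbor(particles, neighbor_mean, neighbor_std):
    weights = [math.exp(-0.5 * ((p - neighbor_mean) / neighbor_std) ** 2)
               for p in particles]
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
own = [random.gauss(10.0, 3.0) for _ in range(1000)]  # own noisy GPS belief
fused = fuse_with_neighbor(own, neighbor_mean=12.0, neighbor_std=1.0)
mean_after = sum(fused) / len(fused)
```

The fused mean lands between the two estimates, pulled toward the more confident neighbor, and the spread of the cloud shrinks, which mirrors the simulated improvement over a GPS-only solution.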
12

Arroyo, Negrete Elkin Rafael. „Continuous reservoir model updating using an ensemble Kalman filter with a streamline-based covariance localization“. Texas A&M University, 2006. http://hdl.handle.net/1969.1/4859.

Annotation:
This work presents a new approach that combines the comprehensive capabilities of the ensemble Kalman filter (EnKF) with the flow-path information from streamlines to eliminate and/or reduce some of the problems and limitations of using the EnKF for history matching reservoir models. The recent use of the EnKF for data assimilation and assessment of uncertainties in future forecasts in reservoir engineering seems promising. The EnKF provides ways of incorporating any type of production data or time-lapse seismic information in an efficient way. However, the use of the EnKF in history matching comes with its share of challenges and concerns. The overshooting of parameters leading to loss of geologic realism, a possible increase in the material-balance errors of the updated phase(s), and limitations associated with non-Gaussian permeability distributions are some of the most critical problems of the EnKF. The use of a larger ensemble size may mitigate some of these problems but is prohibitively expensive in practice. We present a streamline-based conditioning technique that can be implemented with the EnKF to eliminate or reduce the magnitude of these problems, allowing for a reduced ensemble size and thereby leading to significant time savings during field-scale implementation. Our approach involves no extra computational cost and is easy to implement. Additionally, the final history-matched model tends to preserve most of the geological features of the initial geologic model. A concise description of the procedure is provided that enables the integration of this approach into current EnKF implementations. Our procedure uses the streamline path information to condition the covariance matrix in the Kalman update. We demonstrate the power and utility of our approach with synthetic examples and a field case.
Our results show that, using the conditioning technique presented in this thesis, the overshooting/undershooting problems disappear and the limitations of working with non-Gaussian distributions are reduced. Finally, an analysis of the scalability of a parallel implementation of our computer code is given.
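The covariance-conditioning idea, restricting the Kalman update to state cells that the observation actually informs via a streamline-style mask, can be sketched with a toy ensemble update. All values and the mask are illustrative; this is not the authors' code:

```python
import random

# Toy ensemble Kalman update with covariance localization: only state cells
# flagged as lying on the flow path of the observation (the mask) receive
# an update, suppressing spurious sample correlations elsewhere.

def enkf_update(ensemble, z, obs_var, h_index, mask):
    n = len(ensemble)
    dim = len(ensemble[0])
    mean = [sum(m[i] for m in ensemble) / n for i in range(dim)]
    hx = [m[h_index] for m in ensemble]           # predicted observations
    hx_mean = sum(hx) / n
    # Sample cross-covariance between each state cell and the observation,
    # tapered by the localization mask.
    cov_xy = [sum((m[i] - mean[i]) * (m[h_index] - hx_mean)
                  for m in ensemble) / (n - 1) for i in range(dim)]
    var_y = sum((v - hx_mean) ** 2 for v in hx) / (n - 1)
    gain = [mask[i] * cov_xy[i] / (var_y + obs_var) for i in range(dim)]
    # Perturbed-observation update, one perturbation per ensemble member.
    innov = [z + random.gauss(0.0, obs_var ** 0.5) - m[h_index] for m in ensemble]
    return [[m[i] + gain[i] * d for i in range(dim)]
            for m, d in zip(ensemble, innov)]

random.seed(1)
ensemble = [[random.gauss(0.0, 1.0), random.gauss(5.0, 1.0)] for _ in range(200)]
mask = [1.0, 0.0]  # cell 1 is off the flow path: no spurious update there
updated = enkf_update(ensemble, z=2.0, obs_var=0.1, h_index=0, mask=mask)
```

With the mask in place, the observed cell is pulled toward the measurement while the off-path cell is left exactly as it was, which is the localization effect the thesis obtains from streamline trajectories.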
13

Closas, Gómez Pau. „Bayesian signal processing techniques for GNSS receivers: from multipath mitigation to positioning“. Doctoral thesis, Universitat Politècnica de Catalunya, 2009. http://hdl.handle.net/10803/6942.

Annotation:
This dissertation deals with the design of satellite-based navigation receivers. The term Global Navigation Satellite Systems (GNSS) refers to navigation systems based on a constellation of satellites that emit ranging signals useful for positioning. Although the American GPS is probably the most popular, the European contribution (Galileo) will be operative soon. Other global and regional systems exist, all with the same objective: to aid the user's positioning. Initially, the thesis provides the state of the art in GNSS: navigation signal structure and receiver architecture. The design of a GNSS receiver consists of a number of functional blocks. From the antenna to the final position calculation, the design poses challenges in many research areas. Although the radio-frequency chain of the receiver is commented on in the thesis, its main objective concerns the signal processing algorithms applied after signal digitization. These algorithms can be divided into two classes, synchronization and positioning, corresponding to the two main processes typically performed by a GNSS receiver. First, the relative distances between the receiver and the set of visible satellites are estimated. These distances are calculated by estimating the delay suffered by the signal traveling from its emission at the corresponding satellite to its reception at the receiver's antenna; estimation and tracking of these parameters are performed by the synchronization algorithm. Once the relative distances to the satellites are estimated, the positioning algorithm starts its operation. Positioning is typically performed by a process referred to as trilateration: intersection of a set of spheres centered at the visible satellites and with radii equal to the corresponding relative distances. Synchronization and positioning are thus performed sequentially and continuously. The thesis contributes to both topics, as expressed by the subtitle of the dissertation.

On the one hand, the thesis delves into the use of Bayesian filtering for the tracking of the synchronization parameters (time-delays, Doppler-shifts, and carrier-phases) of the received signal. One of the main sources of error in high-precision GNSS receivers is the presence of multipath replicas apart from the line-of-sight signal (LOSS); therefore, the algorithms proposed in this part of the thesis aim at mitigating the multipath effect on synchronization estimates. The dissertation provides an introduction to the basics of Bayesian filtering, including a compendium of the most popular algorithms. In particular, Particle Filters (PF) are studied as one of the promising alternatives to deal with nonlinear/non-Gaussian systems. PF are simulation-based algorithms, built on Monte Carlo methods, that provide a discrete characterization of the posterior distribution of the system. In contrast to other simulation-based methods, PF are supported by convergence results which make them attractive in cases where the optimal solution cannot be found analytically. In that vein, a PF that incorporates a set of features to enhance its performance and robustness with a reduced number of particles is proposed. First, the linear part of the system is optimally handled by a Kalman Filter (KF), a procedure referred to as Rao-Blackwellization. This reduces the variance of the particles and, thus, the number of particles required to attain a given accuracy when characterizing the posterior distribution. A second feature is the design of an importance density function (from which particles are generated) close to the optimal one, which is not available in general; the selection of this function is typically a key issue in PF design, and the dissertation proposes an approximation of the optimal importance function using Laplace's method. In parallel, algorithms such as the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF) are considered and compared with the proposed PF through numerical simulations.

On the other hand, one of the original contributions of the thesis is a new approach to the positioning problem. Whereas receivers usually operate in two steps (synchronization and positioning), the thesis proposes Direct Position Estimation (DPE) from the digitized signal. Given the novelty of the method, qualitative and quantitative motivations for using DPE instead of the conventional positioning approach are provided. The Maximum Likelihood (ML) estimator is studied, along with an algorithm for its practical implementation based on the Accelerated Random Search (ARS) algorithm. Numerical simulations show the robustness of DPE in scenarios where the conventional approach degrades, for instance in multipath-rich environments. One insight from the results is that jointly using the signals from all visible satellites improves the position estimate, since each signal is affected by an independent propagation channel. The thesis also extends DPE to the Bayesian framework: Bayesian DPE (BDPE). BDPE keeps the philosophy of DPE while including possible sources of a priori information about the receiver's motion, such as inertial navigation systems or atmospheric information, a list limited only by the imagination and the concrete application in which the BDPE framework is implemented.

Finally, the thesis addresses the theoretical accuracy limits of GNSS receivers. Some of these limits were already known, while others are derived here for the first time. The Cramér-Rao Bound (CRB) predicts the minimum variance attainable when estimating a parameter with an unbiased estimator. The thesis recalls the known CRB of the synchronization parameters. One contribution is the derivation of the CRB of the position estimator for both the conventional approach and the DPE approach, which provides an asymptotic comparison of the two GNSS positioning procedures. Likewise, the Posterior Cramér-Rao Bound (PCRB) of the synchronization parameters is presented as the theoretical limit of the Bayesian filters proposed in the thesis.
This dissertation deals with the design of satellite-based navigation receivers. The term Global Navigation Satellite Systems (GNSS) refers to those navigation systems based on a constellation of satellites which emit ranging signals useful for positioning. Although the American GPS is probably the most popular, the European contribution (Galileo) will be operative soon. Other global and regional systems exist, all with the same objective: to aid the user's positioning. Initially, the thesis provides the state of the art in GNSS: navigation signal structure and receiver architecture. The design of a GNSS receiver consists of a number of functional blocks. From the antenna to the final position calculation, the design poses challenges in many research areas. Although the radio-frequency chain of the receiver is discussed in the thesis, the main focus of the dissertation is on the signal processing algorithms applied after signal digitization. These algorithms can be divided into two groups, synchronization and positioning, corresponding to the two main processes typically performed by a GNSS receiver. First, the relative distances between the receiver and the set of visible satellites are estimated. These distances are calculated from the delay suffered by the signal traveling from its emission at the corresponding satellite to its reception at the receiver's antenna; the estimation and tracking of these parameters is performed by the synchronization algorithms. After the relative distances to the satellites are estimated, the positioning algorithm starts its operation. Positioning is typically performed by a process referred to as trilateration: the intersection of a set of spheres centered at the visible satellites and whose radii are the corresponding relative distances. Synchronization and positioning are therefore performed sequentially. The thesis contributes to both topics, as expressed by the subtitle of the dissertation.
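The trilateration step described above can be sketched as a small Gauss-Newton solver. The satellite geometry below is invented for illustration and, unlike a real receiver, ignores the clock-bias unknown and measurement noise:

```python
import numpy as np

def trilaterate(sat_pos, ranges, iters=10):
    """Gauss-Newton trilateration: intersect spheres centered at the satellites.

    sat_pos: (N, 3) satellite positions; ranges: N geometric distances.
    Idealized sketch: no receiver clock offset (a fourth unknown in real GNSS)
    and no noise model.
    """
    x = np.zeros(3)                          # initial guess at the origin
    for _ in range(iters):
        d = np.linalg.norm(sat_pos - x, axis=1)
        H = (x - sat_pos) / d[:, None]       # Jacobian of the range equations
        dx, *_ = np.linalg.lstsq(H, ranges - d, rcond=None)
        x = x + dx                           # Gauss-Newton update
    return x

# Hypothetical geometry: four "satellites" and a receiver at (1, 2, 3)
sats = np.array([[20.0, 0, 0], [0, 20.0, 0], [0, 0, 20.0], [15.0, 15.0, 15.0]])
true_pos = np.array([1.0, 2.0, 3.0])
ranges = np.linalg.norm(sats - true_pos, axis=1)
pos = trilaterate(sats, ranges)
```

With exact ranges the iteration recovers the assumed position to machine precision.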

On the one hand, the thesis delves into the use of Bayesian filtering for the tracking of the synchronization parameters (time-delays, Doppler-shifts and carrier-phases) of the received signal. One of the main sources of error in high-precision GNSS receivers is the presence of multipath replicas apart from the line-of-sight signal (LOSS). Accordingly, the algorithms proposed in this part of the thesis aim at mitigating the multipath effect on synchronization estimates. The dissertation provides an introduction to the basics of Bayesian filtering, including a compendium of the most popular algorithms. In particular, Particle Filters (PF) are studied as one of the promising alternatives for dealing with nonlinear/non-Gaussian systems. PF are a set of simulation-based algorithms built on Monte-Carlo methods, and they provide a discrete characterization of the posterior distribution of the system. In contrast to other simulation-based methods, PF are supported by convergence results which make them attractive in cases where the optimal solution cannot be found analytically. In that vein, a PF is proposed that incorporates a set of features to enhance its performance and robustness with a reduced number of particles. First, the linear part of the system is optimally handled by a Kalman Filter (KF), a procedure referred to as Rao-Blackwellization. The latter reduces the variance of the particles and, thus, the number of particles required to attain a given accuracy when characterizing the posterior distribution. A second feature is the design of an importance density function (from which particles are generated) close to the optimal one, which is not available in general. The selection of this function is typically a key issue in PF design. The dissertation proposes an approximation of the optimal importance function using Laplace's method.
In parallel, Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) algorithms are considered and compared with the proposed PF in computer simulations.
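The bootstrap particle filter that Rao-Blackwellized designs build upon can be sketched on a toy scalar model. The dynamics, AR coefficient and noise levels below are invented for illustration and are not taken from the thesis:

```python
import numpy as np

def bootstrap_pf(ys, n_particles=500, q=0.1, r=0.5, seed=0):
    """Minimal bootstrap particle filter for x_k = 0.9 x_{k-1} + v_k, y_k = x_k + w_k.

    Returns the posterior-mean state estimate at every step. Toy model only;
    the thesis uses a Rao-Blackwellized PF for GNSS synchronization tracking.
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in ys:
        # Propagate particles through the (assumed) dynamics
        particles = 0.9 * particles + rng.normal(0.0, q, n_particles)
        # Weight by the Gaussian measurement likelihood
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # Resample to avoid weight degeneracy
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return estimates

# Track a slowly varying truth observed in noise
rng = np.random.default_rng(1)
truth = [np.sin(0.1 * k) for k in range(50)]
ys = [x + rng.normal(0, 0.5) for x in truth]
est = bootstrap_pf(ys)
```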

On the other hand, a novel point of view on the positioning problem constitutes one of the original contributions of the thesis. Whereas conventional receivers operate in a two-step procedure (synchronization and positioning), the proposal of the thesis is a Direct Position Estimation (DPE) from the digitized signal. Considering the novelty of the approach, the dissertation provides both qualitative and quantitative motivations for the use of DPE instead of the conventional two-step approach. DPE is studied following the Maximum Likelihood (ML) principle, and an algorithm based on the Accelerated Random Search (ARS) is considered for a practical implementation of the derived estimator. Computer simulation results show the robustness of DPE in scenarios where the conventional approach fails, for instance in multipath-rich scenarios. One of the conclusions of the thesis is that joint processing of the satellites' signals provides enhanced positioning performance, since each satellite link experiences an independent propagation channel. The dissertation also presents the extension of DPE to the Bayesian framework: Bayesian DPE (BDPE). BDPE maintains DPE's philosophy while accounting for sources of side/prior information. Some examples are given, such as the use of inertial measurement systems and atmospheric models; nevertheless, the list is only limited by imagination and the particular application where BDPE is implemented. Finally, the dissertation studies the theoretical lower bounds on the accuracy of GNSS receivers. Some of those limits were already known; others see the light as a result of the research reported in the dissertation. The Cramér-Rao Bound (CRB) is the theoretical lower bound on the variance of any unbiased estimator of a parameter. The dissertation recalls the CRB of the synchronization parameters, a result already known. A novel contribution of the thesis is the derivation of the CRB of the position estimator for both the conventional and DPE approaches. These results provide an asymptotic comparison of the two GNSS positioning approaches. Similarly, the CRB of the synchronization parameters for the Bayesian case (Posterior Cramér-Rao Bound, PCRB) is given and used as a fundamental limit for the Bayesian filters proposed in the thesis.
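A minimal sketch of the Accelerated Random Search used for the practical DPE implementation, here maximizing an invented quadratic stand-in for the DPE cost surface rather than a real GNSS likelihood:

```python
import numpy as np

def ars_maximize(cost, x0, r_max=4.0, r_min=1e-3, contract=2.0, iters=2000, seed=0):
    """Accelerated Random Search: maximize `cost` by sampling around the incumbent.

    On improvement the search radius resets to r_max; otherwise it shrinks by
    `contract`, restarting at r_max once it falls below r_min. Illustrative
    sketch; the thesis applies ARS to the DPE cost in position space.
    """
    rng = np.random.default_rng(seed)
    x, fx, r = np.asarray(x0, float), cost(x0), r_max
    for _ in range(iters):
        cand = x + rng.uniform(-r, r, size=x.shape)
        fc = cost(cand)
        if fc > fx:
            x, fx, r = cand, fc, r_max   # accept and widen the search again
        else:
            r /= contract                # shrink around the incumbent
            if r < r_min:
                r = r_max
    return x, fx

# Toy "DPE-like" cost with its peak at the assumed true position (3, -2)
true_pos = np.array([3.0, -2.0])
cost = lambda p: -np.sum((np.asarray(p) - true_pos) ** 2)
est, val = ars_maximize(cost, x0=[0.0, 0.0])
```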
APA, Harvard, Vancouver, ISO und andere Zitierweisen
14

Alexandersson, Johan, und Olle Nordin. „Implementation of SLAM Algorithms in a Small-Scale Vehicle Using Model-Based Development“. Thesis, Linköpings universitet, Datorteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148612.

Der volle Inhalt der Quelle
Annotation:
As autonomous driving is rapidly becoming the next major challenge in the automotive industry, the problem of Simultaneous Localization And Mapping (SLAM) has never been more relevant than it is today. This thesis presents the idea of examining SLAM algorithms by implementing such an algorithm on a radio-controlled car which has been fitted with sensors and microcontrollers. The software architecture of this small-scale vehicle is based on the Robot Operating System (ROS), an open-source framework designed to be used in robotic applications. This thesis covers Extended Kalman Filter (EKF)-based SLAM, FastSLAM, and GraphSLAM, examining these algorithms in theoretical investigations, simulations, and real-world experiments. The method used in this thesis is model-based development, meaning that a model of the vehicle is first implemented in order to be able to perform simulations using each algorithm. A decision on which algorithm to implement on the physical vehicle is then made, backed by these simulation results as well as a theoretical investigation of each algorithm. This thesis has resulted in a dynamic model of a small-scale vehicle which can be used for simulation of any ROS-compliant SLAM algorithm, and this model has been simulated extensively in order to provide empirical evidence of which SLAM algorithm is most suitable for this application. Out of the algorithms examined, FastSLAM proved to be the best candidate and was, in the final stage, successfully implemented on the small-scale vehicle using the ROS package gMapping.
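The EKF predict/update cycle at the heart of EKF-based SLAM can be sketched generically; the 1-D cart example below is an invented toy, not the thesis model:

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One EKF predict/update cycle (the core loop shared by EKF-SLAM).

    f/h are the nonlinear motion and measurement models, F/H their Jacobians
    evaluated at the current estimate. Illustrative sketch only.
    """
    # Predict through the motion model
    x_pred = f(x, u)
    Fk = F(x, u)
    P_pred = Fk @ P @ Fk.T + Q
    # Update with the measurement
    Hk = H(x_pred)
    S = Hk @ P_pred @ Hk.T + R
    K = P_pred @ Hk.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new

# Toy example: 1-D cart with position/velocity state and a position measurement
f = lambda x, u: np.array([x[0] + x[1], x[1] + u])
F = lambda x, u: np.array([[1.0, 1.0], [0.0, 1.0]])
h = lambda x: x[:1]
H = lambda x: np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, u=0.1, z=np.array([0.2]), f=f, F=F, h=h, H=H, Q=Q, R=R)
```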
APA, Harvard, Vancouver, ISO und andere Zitierweisen
15

Bauer, Stefan. „Erhöhung der Qualität und Verfügbarkeit von satellitengestützter Referenzsensorik durch Smoothing im Postprocessing“. Master's thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-102106.

Der volle Inhalt der Quelle
Annotation:
This thesis investigates post-processing methods for increasing the accuracy and availability of satellite-based positioning techniques that do not rely on inertial sensors. The goal is to produce, even under difficult reception conditions such as those found in urban areas, a trajectory whose accuracy qualifies it as a reference for other methods. Two approaches are pursued: the use of IGS data, and smoothing that incorporates sensors from the vehicle odometry. It is shown that using IGS data reduces the error by 50% to 70%. Furthermore, the smoothing methods demonstrated that they are able to consistently achieve decimeter-level accuracy even under poor reception conditions.
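The fixed-interval smoothing idea behind such post-processing can be sketched as a linear Rauch-Tung-Striebel smoother run over stored Kalman-filter output. This is a linear-Gaussian toy; the thesis additionally fuses IGS data and vehicle odometry:

```python
import numpy as np

def rts_smooth(xs, Ps, xps, Pps, F):
    """Fixed-interval Rauch-Tung-Striebel smoother over stored filter output.

    xs/Ps: filtered means/covariances; xps/Pps: one-step predictions.
    Post-processing like this lets a reference trajectory exploit future data.
    """
    n = len(xs)
    x_s, P_s = [None] * n, [None] * n
    x_s[-1], P_s[-1] = xs[-1], Ps[-1]
    for k in range(n - 2, -1, -1):
        C = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])   # smoother gain
        x_s[k] = xs[k] + C @ (x_s[k + 1] - xps[k + 1])
        P_s[k] = Ps[k] + C @ (P_s[k + 1] - Pps[k + 1]) @ C.T
    return x_s, P_s

# Forward Kalman filter on a 1-D random walk observed in noise
rng = np.random.default_rng(0)
F = np.eye(1); Q = 0.05 * np.eye(1); R = 0.5 * np.eye(1)
truth = np.cumsum(rng.normal(0, 0.22, 100))
x, P = np.zeros(1), np.eye(1)
xs, Ps, xps, Pps = [], [], [], []
for t in truth:
    xp, Pp = F @ x, F @ P @ F.T + Q                    # predict
    K = Pp @ np.linalg.inv(Pp + R)                     # Kalman gain
    z = np.array([t + rng.normal(0, 0.7)])             # noisy measurement
    x, P = xp + K @ (z - xp), (np.eye(1) - K) @ Pp     # update
    xs.append(x); Ps.append(P); xps.append(xp); Pps.append(Pp)
x_s, P_s = rts_smooth(xs, Ps, xps, Pps, F)
```

The smoothed covariance is never larger than the filtered one, which is exactly why post-processing improves a reference trajectory.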
APA, Harvard, Vancouver, ISO und andere Zitierweisen
16

Havlíček, Martin. „Zkoumání konektivity mozkových sítí pomocí hemodynamického modelování“. Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-233576.

Der volle Inhalt der Quelle
Annotation:
Functional magnetic resonance imaging (fMRI), which uses the blood-oxygen-level-dependent (BOLD) effect as an indicator of local activity, is a very useful technique for identifying brain regions that are active during perception, cognition and action, but also during the resting state. Recently, there has also been growing interest in studying the connectivity between these regions, particularly in the resting state. This thesis presents a new and original approach to the problem of the indirect relationship between the measured hemodynamic response and its cause, i.e. the neuronal signal. This indirect relationship complicates the estimation of effective connectivity (causal influence) between different brain regions from fMRI data. The novelty of the presented approach lies in the use of a (generalized nonlinear) blind deconvolution technique, which allows the estimation of endogenous neuronal signals (i.e. system inputs) from the measured hemodynamic responses (i.e. system outputs). This means that the method enables data-driven assessment of effective connectivity at the neuronal level even when only noisy hemodynamic responses are measured. The solution of this difficult deconvolution (inverse) problem is achieved using nonlinear recursive Bayesian estimation, which provides a joint estimate of the unknown states and model parameters. The thesis is divided into three main parts. The first part proposes a method to solve the problem above. The method uses square-root forms of the nonlinear cubature Kalman filter and the cubature Rauch-Tung-Striebel smoother, extended to solve the so-called joint estimation problem, defined as the simultaneous estimation of states and parameters in a sequential manner. The method is designed primarily for continuous-discrete systems and achieves an accurate and stable solution of model discretization by combining the nonlinear (cubature) filter with a local linearization method. This inversion method is further complemented by adaptive estimation of the statistics of the measurement noise and the process noises (i.e. the noises of the unknown states and parameters).
The first part of the thesis focuses on model inversion of a single time course only, i.e. on estimating neuronal activity from the fMRI signal. The second part generalizes the proposed approach and applies it to multiple time courses in order to enable estimation of the coupling parameters of a neuronal interaction model, i.e. estimation of effective connectivity. This method represents an innovative stochastic formulation of dynamic causal modeling, which distinguishes it from previously introduced approaches. The second part also deals with Bayesian model selection methods and proposes a technique for detecting irrelevant coupling parameters in order to achieve improved parameter estimation. Finally, the third part is devoted to validating the proposed approach using both simulated and empirical fMRI data, and provides significant evidence of the very satisfactory performance of the proposed approach.
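The spherical-radial cubature rule underlying the cubature Kalman filter mentioned above can be sketched as follows. This is the generic rule only, not the square-root joint-estimation variant developed in the thesis:

```python
import numpy as np

def cubature_points(x, P):
    """Generate the 2n cubature points of the third-degree spherical-radial rule.

    Points are x ± sqrt(n) * (column of the Cholesky factor of P); each carries
    equal weight 1/(2n).
    """
    n = len(x)
    S = np.linalg.cholesky(P)
    return np.concatenate([x + np.sqrt(n) * S.T, x - np.sqrt(n) * S.T])

def cubature_transform(f, x, P):
    """Propagate a mean/covariance pair through a nonlinearity f with the rule."""
    pts = cubature_points(x, P)
    ys = np.array([f(p) for p in pts])
    mean = ys.mean(axis=0)
    cov = (ys - mean).T @ (ys - mean) / len(ys)
    return mean, cov

# Sanity check: a linear map must reproduce A x and A P A^T exactly
A = np.array([[1.0, 0.5], [0.0, 1.0]])
x, P = np.array([1.0, 2.0]), np.diag([0.3, 0.2])
m, C = cubature_transform(lambda p: A @ p, x, P)
```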
APA, Harvard, Vancouver, ISO und andere Zitierweisen
17

Mathema, Najma. „Predicting Plans and Actions in Two-Player Repeated Games“. BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8683.

Der volle Inhalt der Quelle
Annotation:
Artificial intelligence (AI) agents will need to interact with both other AI agents and humans. One way to enable effective interaction is to create models of associates that help to predict the modeled agents' actions, plans, and intentions. If AI agents are able to predict what other agents in their environment will do in the future and can understand the intentions of these other agents, they can use these predictions in their planning, decision-making, and assessment of their own potential. Prior work [13, 14] introduced the S# algorithm, designed as a robust algorithm for many two-player repeated games (RGs) that enables cooperation among players. Because S# generates actions, has (internal) experts that seek to accomplish an internal intent, and associates plans with each expert, it is a useful algorithm for exploring intent, plan, and action in RGs. This thesis presents a graphical Bayesian model for predicting the actions, plans, and intents of an S# agent. The same model is also used to predict human actions. The actions, plans and intentions associated with each S# expert are (a) identified from the literature and (b) grouped by expert type. The Bayesian model then uses its transition probabilities to predict the action and expert type from observing human or S# play. Two techniques were explored for translating probability distributions into specific predictions: Maximum A Posteriori (MAP) and an Aggregation approach. The Bayesian model was evaluated on three RGs (Prisoner's Dilemma, Chicken and Alternator) as follows. The prediction accuracy of the model was compared to predictions from machine learning models (J48, Multilayer Perceptron and Random Forest) as well as from the fixed strategies presented in [20]. Prediction accuracy was obtained by comparing the model's predictions against the actual player's actions. Accuracy for plan and intent prediction was measured by comparing predictions to the actual plans and intents followed by the S# agent.
Since the plans and the intents of human players were not recorded in the dataset, this thesis does not measure the accuracy of the Bayesian model against actual human plans and intents. Results show that the Bayesian model effectively models the actions, plans, and intents of the S# algorithm across the various games. Additionally, the Bayesian model outperforms other methods for predicting human actions. When the games do not allow players to communicate using so-called “cheap talk”, the MAP-based predictions are significantly better than the Aggregation-based predictions. There is no significant difference in the performance of MAP-based and Aggregation-based predictions for modeling human behavior when cheap talk is allowed, except in the game of Chicken.
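The difference between MAP and Aggregation predictions can be illustrated on a toy posterior; the expert names and probabilities below are invented, not taken from the S# literature:

```python
# Hypothetical posterior over expert types after observing play, and each
# expert's action distribution (all values made up for illustration).
posterior = {"bully": 0.5, "fair": 0.3, "random": 0.2}
action_dist = {
    "bully":  {"defect": 0.9, "cooperate": 0.1},
    "fair":   {"defect": 0.2, "cooperate": 0.8},
    "random": {"defect": 0.5, "cooperate": 0.5},
}

# MAP: commit to the single most probable expert, predict its likeliest action
map_expert = max(posterior, key=posterior.get)
map_action = max(action_dist[map_expert], key=action_dist[map_expert].get)

# Aggregation: mix the action distributions by the posterior, then take the mode
agg = {a: sum(posterior[e] * action_dist[e][a] for e in posterior)
       for a in ("defect", "cooperate")}
agg_action = max(agg, key=agg.get)
```

Here both techniques happen to agree; they diverge when the posterior mass is spread across experts with conflicting action preferences.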
APA, Harvard, Vancouver, ISO und andere Zitierweisen
18

Ribeiro, Eduardo da Silva. „Novas propostas em filtragem de projeções tomográficas sob ruído Poisson“. Universidade Federal de São Carlos, 2010. https://repositorio.ufscar.br/handle/ufscar/438.

Der volle Inhalt der Quelle
Annotation:
Financiadora de Estudos e Projetos
In this dissertation we present techniques for filtering tomographic projections with Poisson noise. For the filtering of the projections we use variations of three techniques: Bayesian estimation, Wiener filtering and thresholding in the wavelet domain. We used ten MAP estimators, each with a different probability density as prior information. Adaptive windowing was used to calculate the local estimates, and a hypothesis test was used to select the best probability density for each projection. We used the pointwise Wiener filter and the FIR Wiener filter; in both cases an adaptive scheme was used for the filtering. For thresholding in the wavelet domain, we tested the performance of four families of wavelet basis functions and four techniques for obtaining thresholds. The experiments were done with the Shepp-Logan phantom and five sets of projections of phantoms captured by a CT scanner developed by CNPDIA-EMBRAPA. The image reconstruction was made with the parallel POCS algorithm. The filtering was evaluated after reconstruction with the following error measures: ISNR, PSNR, SSIM and IDIV.
Nesta dissertação são apresentadas técnicas de filtragem de projeções tomográficas com ruído Poisson. Utilizamos variações de três técnicas de filtragem: estimação Bayesiana, filtragem de Wiener e limiarização no domínio Wavelet. Foram utilizados dez estimadores MAP, em cada um uma densidade de probabilidade diferente foi utilizada como informação a priori. Foi utilizado um janelamento adaptativo para o cálculo das estimativas locais e um teste de hipóteses para a escolha da densidade de probabilidade que melhor se adéqua a cada projeção. Utilizamos o filtro de Wiener nas versões pontual e FIR; em ambos os casos utilizamos um esquema adaptativo durante a filtragem. Para a limiarização no domínio Wavelet, verificamos o desempenho de quatro famílias de funções Wavelet e quatro técnicas de obtenção de limiares. Os experimentos foram feitos com o phantom de Shepp e Logan e cinco conjuntos de projeções de phantoms capturadas por um minitomógrafo no CNPDIA-EMBRAPA. A reconstrução da imagem foi feita com o algoritmo POCS paralelo. A avaliação da filtragem foi feita após a reconstrução com os seguintes critérios de medida de erro: ISNR, PSNR, IDIV e SSIM.
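Wavelet-domain thresholding of Poisson-noisy data can be sketched with a single-level Haar transform preceded by an Anscombe variance-stabilizing transform. This is an assumed simplification; the dissertation compares several wavelet families and threshold-selection rules:

```python
import numpy as np

def haar_denoise(signal, thresh):
    """One-level Haar soft-thresholding after an Anscombe transform.

    The Anscombe transform 2*sqrt(x + 3/8) makes Poisson noise approximately
    unit-variance Gaussian so a fixed threshold applies. Single-level sketch.
    """
    x = 2.0 * np.sqrt(np.asarray(signal, float) + 0.375)   # variance stabilization
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)                 # approximation coeffs
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)                 # detail coeffs
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2.0)                       # inverse Haar
    y[1::2] = (a - d) / np.sqrt(2.0)
    return (y / 2.0) ** 2 - 0.375                          # inverse Anscombe

rng = np.random.default_rng(0)
clean = np.full(64, 100.0)                 # a flat "projection" of intensity 100
noisy = rng.poisson(clean).astype(float)
den = haar_denoise(noisy, thresh=2.0)
```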
APA, Harvard, Vancouver, ISO und andere Zitierweisen
19

Obst, Marcus. „Bayesian Approach for Reliable GNSS-based Vehicle Localization in Urban Areas“. Doctoral thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-162894.

Der volle Inhalt der Quelle
Annotation:
Nowadays, satellite-based localization is a well-established technical solution to support several navigation tasks in daily life. Besides its use inside portable devices, satellite-based positioning is used for in-vehicle navigation systems as well. Moreover, due to its global coverage and the availability of inexpensive receiver hardware it is an appealing technology for numerous applications in the area of Intelligent Transportation Systems (ITSs). However, it has to be admitted that most of the aforementioned examples either rely on modest accuracy requirements or are not sensitive to temporary integrity violations. Although technical concepts of Advanced Driver Assistance Systems (ADASs) based on Global Navigation Satellite Systems (GNSSs) have been successfully demonstrated under open-sky conditions, practice reveals that such systems suffer from degraded satellite signal quality when put into urban areas. Thus, the main research objective of this thesis is to provide a reliable vehicle positioning concept which can be used in urban areas without the aforementioned limitations. Therefore, an integrated probabilistic approach which performs fault detection & exclusion, localization and multi-sensor data fusion within one unified Bayesian framework is proposed. From an algorithmic perspective, the presented concept is based on a probabilistic data association technique with explicit handling of the outlier measurements present in urban areas. By that approach, the accuracy, integrity and availability are improved at the same time; that is, a consistent positioning solution is provided. In addition, a comprehensive and in-depth analysis of typical errors in urban areas in the pseudorange domain is performed. Based on this analysis, probabilistic models are proposed and later used to support the positioning algorithm.
Moreover, the presented concept clearly targets mass-market applications based on low-cost receivers and hence aims to replace costly sensors by smart algorithms. The benefits of these theoretical contributions are implemented and demonstrated on the example of a real-time vehicle positioning prototype used within the European research project GAlileo Interactive driviNg (GAIN). This work describes all necessary parts of this system, including GNSS signal processing, fault detection and multi-sensor data fusion, within one processing chain. Finally, the performance and benefits of the proposed concept are examined and validated with both simulated and comprehensive real-world sensor data from numerous test drives.
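A crude stand-in for the outlier handling described above is innovation gating on normalized pseudorange residuals; the numbers below are invented for illustration, and the thesis uses full probabilistic data association rather than hard gating:

```python
import numpy as np

def gate_pseudoranges(residuals, sigmas, gate=3.0):
    """Flag pseudorange measurements whose normalized residual exceeds a gate.

    NLOS/multipath-contaminated ranges show up as large innovations and are
    excluded before the position fix. Hard-threshold sketch only.
    """
    z = np.abs(np.asarray(residuals)) / np.asarray(sigmas)
    return z < gate   # boolean mask: True = keep the measurement

residuals = [1.2, -0.8, 35.0, 2.1, -1.5]   # metres; third satellite is NLOS
sigmas = [3.0, 3.0, 3.0, 3.0, 3.0]         # assumed per-satellite noise std
keep = gate_pseudoranges(residuals, sigmas)
```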
APA, Harvard, Vancouver, ISO und andere Zitierweisen
20

Mahmoud, Mohamed. „Parking Map Generation and Tracking Using Radar : Adaptive Inverse Sensor Model“. Thesis, Linköpings universitet, Fluida och mekatroniska system, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-167084.

Der volle Inhalt der Quelle
Annotation:
Radar map generation using a binary Bayes filter, commonly known as an Inverse Sensor Model (which translates sensor measurements into grid-cell occupancy estimates), is a classical problem in several fields. In this work, the focus is on the development of an Inverse Sensor Model for parking spaces using a 77 GHz FMCW (Frequency Modulated Continuous Wave) automotive radar that can handle varying geometrical complexity of the environment in a parking space. There are two main types of Inverse Sensor Models, each with its own assumption about the sensor noise. One is fixed, similar to a lookup table, and is constructed from a combination of sensor-specific characteristics, experimental data and empirically determined parameters. The other is learned from ground-truth labeling of the grid-map cells, to capture the desired Inverse Sensor Model. In this work a new Inverse Sensor Model is proposed that combines the computational advantage of a fixed Inverse Sensor Model with occupancy estimation captured from ground-truth labeling. The occupancy grid mapping problem is first derived from the well-known SLAM (Simultaneous Localization and Mapping) problem using binary Bayes filtering. The Adaptive Inverse Sensor Model is then presented, which uses fixed occupancy estimation but adaptive occupancy shape estimation based on statistical analysis of the distribution of radar measurements across the acquisition environment. A pre-study of the noise characteristics of the radar used in this work is performed to obtain a common Inverse Sensor Model as a benchmark. The drawbacks of this Inverse Sensor Model are then addressed as sub-steps of the Adaptive Inverse Sensor Model, in order to obtain an optimal grid-map occupancy estimator.
Finally, the maps generated by the benchmark and the Adaptive Inverse Sensor Model are compared, showing that, when its assumptions are fulfilled, the Adaptive Inverse Sensor Model offers a more visually appealing map than the benchmark.
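The binary Bayes filter recursion behind any Inverse Sensor Model is conveniently run in log-odds form; a single-cell sketch with an assumed per-observation occupancy probability of 0.7:

```python
import math

def update_cell(logodds, p_occ):
    """Binary Bayes filter update of one grid cell in log-odds form.

    p_occ is the inverse sensor model's occupancy probability for this cell
    given the current measurement; adding its log-odds is the whole recursion.
    """
    return logodds + math.log(p_occ / (1.0 - p_occ))

def probability(logodds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

# A cell repeatedly observed as likely occupied (p = 0.7) converges toward 1
l = 0.0                      # prior log-odds: p = 0.5
for _ in range(5):
    l = update_cell(l, 0.7)
p = probability(l)
```

The log-odds form avoids renormalizing probabilities at every step, which is why occupancy grids almost always store log-odds per cell.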
APA, Harvard, Vancouver, ISO und andere Zitierweisen
21

Karlsson, Nicklas. „System för att upptäcka Phishing : Klassificering av mejl“. Thesis, Växjö University, School of Mathematics and Systems Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2073.

Der volle Inhalt der Quelle
Annotation:

Denna rapport tar en titt på phishing-problemet, något som många har råkat ut för med bland annat de falska Nordea- eller eBay-mejl som på senaste tiden har dykt upp i våra inkorgar, och ett eventuellt sätt att minska phishingens effekt. Fokus i rapporten ligger på klassificering av mejl och den huvudsakliga frågeställningen är: ”Är det, med hög träffsäkerhet, möjligt att med hjälp av ett klassificeringsverktyg sortera ut mejl som har med phishing att göra från övrig skräppost?” Det visade sig svårare än väntat att hitta phishing-mejl att använda i klassificeringen. I de klassificeringar som genomfördes visade det sig att både metoden Naive Bayes och Support Vector Machine kan hitta upp till 100 % av phishing-mejlen. Rapporten presenterar arbetsgången, teori om phishing och resultaten efter genomförda klassificeringstest.


This report takes a look at the phishing problem, something that many have come across with, for example, the fake Nordea or eBay e-mails that have lately shown up in our e-mail inboxes, and a possible way to reduce the effect of phishing. The focus of the report lies on the classification of e-mails and the main question is: “Is it possible, with high accuracy, to use a classification tool to sort phishing e-mails from other spam e-mails?” It was more difficult than expected to find phishing e-mails to use in the classification. The classifications that were made showed that it was possible to find up to 100% of the phishing e-mails with both Naive Bayes and Support Vector Machine. The report presents the work done, facts about phishing and the results of the classification tests.
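A multinomial Naive Bayes classifier of the kind used in such experiments can be sketched in a few lines; the tiny corpus below is invented and far smaller than anything used in the report:

```python
from collections import Counter
import math

def train_nb(docs):
    """Multinomial Naive Bayes with Laplace smoothing over word counts.

    docs: list of (tokens, label) pairs. Returns a classify(tokens) function
    that picks the label with the highest log-posterior.
    """
    labels = Counter(lbl for _, lbl in docs)
    words = {lbl: Counter() for lbl in labels}
    vocab = set()
    for toks, lbl in docs:
        words[lbl].update(toks)
        vocab.update(toks)

    def classify(toks):
        def score(lbl):
            total = sum(words[lbl].values())
            s = math.log(labels[lbl] / len(docs))          # log prior
            for t in toks:                                  # log likelihoods
                s += math.log((words[lbl][t] + 1) / (total + len(vocab)))
            return s
        return max(labels, key=score)
    return classify

# Invented miniature corpus: phishing vs. generic spam
docs = [
    ("verify account login bank".split(), "phishing"),
    ("urgent password confirm account".split(), "phishing"),
    ("cheap pills offer".split(), "spam"),
    ("lottery winner offer claim".split(), "spam"),
]
classify = train_nb(docs)
```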

APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Jüngel, Matthias. „The memory-based paradigm for vision-based robot localization“. Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2012. http://dx.doi.org/10.18452/16593.

Der volle Inhalt der Quelle
Annotation:
Für mobile autonome Roboter ist ein solides Modell der Umwelt eine wichtige Voraussetzung um die richtigen Entscheidungen zu treffen. Die gängigen existierenden Verfahren zur Weltmodellierung basieren auf dem Bayes-Filter und verarbeiten Informationen mit Hidden Markov Modellen. Dabei wird der geschätzte Zustand der Welt (Belief) iterativ aktualisiert, indem abwechselnd Sensordaten und das Wissen über die ausgeführten Aktionen des Roboters integriert werden; alle Informationen aus der Vergangenheit sind im Belief integriert. Wenn Sensordaten nur einen geringen Informationsgehalt haben, wie zum Beispiel Peilungsmessungen, kommen sowohl parametrische Filter (z.B. Kalman-Filter) als auch nicht-parametrische Filter (z.B. Partikel-Filter) schnell an ihre Grenzen. Das Problem ist dabei die Repräsentation des Beliefs. Es kann zum Beispiel sein, dass die gaußschen Modelle beim Kalman-Filter nicht ausreichen oder Partikel-Filter so viele Partikel benötigen, dass die Rechendauer zu groß wird. In dieser Dissertation stelle ich ein neues Verfahren zur Weltmodellierung vor, das Informationen nicht sofort integriert, sondern erst bei Bedarf kombiniert. Das Verfahren wird exemplarisch auf verschiedene Anwendungsfälle aus dem RoboCup (autonome Roboter spielen Fußball) angewendet. Es wird gezeigt, wie vierbeinige und humanoide Roboter ihre Position und Ausrichtung auf einem Spielfeld sehr präzise bestimmen können. Grundlage für die Lokalisierung sind bildbasierte Peilungsmessungen zu Objekten. Für die Roboter-Ausrichtung sind dabei Feldlinien eine wichtige Informationsquelle. In dieser Dissertation wird ein Verfahren zur Erkennung von Feldlinien in Kamerabildern vorgestellt, das ohne Kalibrierung auskommt und sehr gute Resultate liefert, auch wenn es starke Schatten und Verdeckungen im Bild gibt.
For autonomous mobile robots, a solid world model is an important prerequisite for decision making. Current state estimation techniques are based on Hidden Markov Models and Bayesian filtering. These methods estimate the state of the world (belief) in an iterative manner. Data obtained from perceptions and actions is accumulated in the belief which can be represented parametrically (like in Kalman filters) or non-parametrically (like in particle filters). When the sensor's information gain is low, as in the case of bearing-only measurements, the representation of the belief can be challenging. For instance, a Kalman filter's Gaussian models might not be sufficient or a particle filter might need an unreasonable number of particles. In this thesis, I introduce a new state estimation method which doesn't accumulate information in a belief. Instead, perceptions and actions are stored in a memory. Based on this, the state is calculated when needed. The system has a particular advantage when processing sparse information. This thesis presents how the memory-based technique can be applied to examples from RoboCup (autonomous robots play soccer). In experiments, it is shown how four-legged and humanoid robots can localize themselves very precisely on a soccer field. The localization is based on bearings to objects obtained from digital images. This thesis presents a new technique to recognize field lines which doesn't need any pre-run calibration and also works when the field lines are partly concealed and affected by shadows.
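The memory-based idea of storing raw bearings and solving only on demand can be illustrated by least-squares triangulation from noiseless bearings to known landmarks. This toy omits the motion model and noise handling of the thesis:

```python
import math
import numpy as np

def localize_from_bearings(landmarks, bearings):
    """Least-squares position from absolute bearings to known landmarks.

    Each bearing constrains the robot to a line through the landmark; two or
    more such lines intersect at the robot position.
    """
    A, b = [], []
    for (lx, ly), th in zip(landmarks, bearings):
        # The robot lies on the line through (lx, ly) with direction th:
        #   sin(th)*x - cos(th)*y = sin(th)*lx - cos(th)*ly
        A.append([math.sin(th), -math.cos(th)])
        b.append(math.sin(th) * lx - math.cos(th) * ly)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

# Robot at (2, 1); bearings are measured FROM the robot TO each landmark
robot = (2.0, 1.0)
landmarks = [(0.0, 0.0), (6.0, 0.0), (3.0, 5.0)]
bearings = [math.atan2(ly - robot[1], lx - robot[0]) for lx, ly in landmarks]
pos = localize_from_bearings(landmarks, bearings)
```

With noisy bearings the same least-squares system simply becomes overdetermined, which is where deferring the solve until enough memory has accumulated pays off.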
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Mosallam, Ahmed. „Remaining useful life estimation of critical components based on Bayesian Approaches“. Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2069/document.

Der volle Inhalt der Quelle
Annotation:
La construction de modèles de pronostic nécessite la compréhension du processus de dégradation des composants critiques surveillés afin d'estimer correctement leurs durées de fonctionnement avant défaillance. Un processus de dégradation peut être modélisé en utilisant des modèles de connaissance issus des lois de la physique. Cependant, cette approche nécessite des compétences pluridisciplinaires et des moyens expérimentaux importants pour la validation des modèles générés, ce qui n'est pas toujours facile à mettre en place en pratique. Une des alternatives consiste à apprendre le modèle de dégradation à partir de données issues de capteurs installés sur le système. On parle alors d'approche guidée par des données. Dans cette thèse, nous proposons une approche de pronostic guidée par des données. Elle vise à estimer à tout instant l'état de santé du composant physique et prédire sa durée de fonctionnement avant défaillance. Cette approche repose sur deux phases, une phase hors ligne et une phase en ligne. Dans la phase hors ligne, on cherche à sélectionner, parmi l'ensemble des signaux fournis par les capteurs, ceux qui contiennent le plus d'information sur la dégradation. Cela est réalisé en utilisant un algorithme de sélection non supervisé développé dans la thèse. Ensuite, les signaux sélectionnés sont utilisés pour construire différents indicateurs de santé représentant les différents historiques de données (un historique par composant). Dans la phase en ligne, l'approche développée permet d'estimer l'état de santé du composant test en faisant appel au filtre Bayésien discret. Elle permet également de calculer la durée de fonctionnement avant défaillance du composant en utilisant le classifieur k-plus proches voisins (k-NN) et le processus de Gauss pour la régression. La durée de fonctionnement avant défaillance est alors obtenue en comparant l'indicateur de santé courant aux indicateurs de santé appris hors ligne.
L'approche développée a été vérifiée sur des données expérimentales issues de la plateforme PRONOSTIA sur les roulements ainsi que sur des données fournies par le Prognostic Center of Excellence de la NASA sur les batteries et les turboréacteurs.
Constructing prognostics models relies upon understanding the degradation process of the monitored critical components to correctly estimate the remaining useful life (RUL). Traditionally, a degradation process is represented in the form of physical or expert models. Such models require extensive experimentation and verification that are not always feasible in practice. Another approach, which builds up knowledge about the system degradation over time from component sensor data, is known as data-driven. Data-driven models require that sufficient historical data have been collected. In this work, a two-phase data-driven method for RUL prediction is presented. In the offline phase, the proposed method builds on finding variables that contain information about the degradation behavior using an unsupervised variable selection method. Different health indicators (HI) are constructed from the selected variables, which represent the degradation as a function of time, and saved in the offline database as reference models. In the online phase, the method estimates the degradation state using a discrete Bayesian filter. The method finally finds the offline health indicator most similar to the online one, using a k-nearest neighbors (k-NN) classifier and Gaussian process regression (GPR), and uses it as a RUL estimator. The method is verified using PRONOSTIA bearing data as well as battery and turbofan engine degradation data acquired from the NASA data repository. The results show the effectiveness of the method in predicting the RUL.
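The k-NN matching stage of the online phase can be sketched as follows; the degradation histories below are invented, and the GPR and discrete Bayes filter stages of the method are omitted:

```python
import numpy as np

def rul_knn(online_hi, offline_his, offline_ruls, k=2):
    """Estimate RUL by matching the online health indicator to offline ones.

    Distance is Euclidean over the overlapping prefix; the prediction averages
    the RULs of the k nearest offline components. Sketch of the k-NN stage only.
    """
    t = len(online_hi)
    d = [np.linalg.norm(np.asarray(h[:t]) - online_hi) for h in offline_his]
    nearest = np.argsort(d)[:k]
    return float(np.mean([offline_ruls[i] for i in nearest]))

# Hypothetical degradation histories (health indicator rises to failure at 1.0)
offline_his = [np.linspace(0, 1, n) for n in (80, 100, 120)]
offline_ruls = [80 - 30, 100 - 30, 120 - 30]   # RUL of each unit at time index 30
online_hi = np.linspace(0, 1, 100)[:30]        # behaves like the 100-step unit
rul = rul_knn(online_hi, offline_his, offline_ruls, k=1)
```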
APA, Harvard, Vancouver, ISO and other citation styles
24

Maršál, Martin. „Elektronický modul pro akustickou detekci“. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-240831.

Full text of the source
Annotation:
This diploma thesis deals with the design and implementation of an electronic module for acoustic detection. The module's task is to detect predefined acoustic signals using a trained classification model, and it is intended mainly for security purposes. Detection and classification rely on machine learning techniques, and because the model can be retrained on a different set of sounds, the module serves as a universal sound detector. Sound is captured with a digital MEMS microphone, for which a conversion filter was designed and implemented. The resulting system is implemented as microcontroller firmware running a real-time operating system. The individual functions of the system are realized with possible optimizations in mind (a less powerful MCU or battery power). The module transmits detection results to a master station over an Ethernet network. When multiple modules are connected to the network, they form a distributed system designed for precise time synchronization using the PTP protocol defined by the IEEE 1588 standard.
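The conversion filter for a digital MEMS microphone turns a 1-bit pulse-density-modulated (PDM) stream into PCM samples. A much simplified boxcar decimation filter illustrates the idea; the actual filter design in the thesis is not specified here, and the decimation factor and bit pattern are invented for the example.

```python
def pdm_to_pcm(pdm_bits, decimation=64):
    """Convert a 1-bit PDM stream into PCM samples with a plain
    moving-average (boxcar) decimation filter."""
    pcm = []
    for start in range(0, len(pdm_bits) - decimation + 1, decimation):
        window = pdm_bits[start:start + decimation]
        # The mean of +/-1 values gives one PCM sample in [-1, 1].
        pcm.append(sum(1 if b else -1 for b in window) / decimation)
    return pcm

# A constant 75% ones density corresponds to a DC level of +0.5.
bits = [1, 1, 1, 0] * 64          # 256 bits -> 4 PCM samples
samples = pdm_to_pcm(bits)
```

Real implementations typically cascade CIC and FIR stages for better stopband attenuation; the boxcar above is only the simplest member of that family.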
APA, Harvard, Vancouver, ISO and other citation styles
25

Sarr, Ndey Binta, and 莎妮塔. „Hybrid of Filter Wrapper using Naive Bayes Algorithm and Genetic Algorithm“. Thesis, 2018. http://ndltd.ncl.edu.tw/handle/nqhgvw.

Full text of the source
Annotation:
Master's thesis
Yuan Ze University (元智大學)
Master's Degree Program in Biomedical Informatics
106 (academic year, Republic of China calendar)
Feature selection is an essential data preprocessing step and has been widely studied in data mining and machine learning. In this thesis, we present an effective feature selection approach based on a hybrid method: a filter method first selects the most informative features from the dataset, and a wrapper method with a genetic search then selects the relevant features and removes redundant ones. The resulting features are finally run through a combination of the two algorithms, Naïve Bayes and a Genetic Algorithm, using voting as the classifier in Weka. The experimental results show that our method has clear advantages in classification accuracy, error rate, Kappa statistic and number of selected features when compared with the filter method alone, the wrapper method alone, and other existing approaches. It is powerful, computationally inexpensive and easy to understand.
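The wrapper stage with a genetic search can be sketched as a genetic algorithm over feature bit masks. In the thesis the fitness would be cross-validated Naïve Bayes accuracy in Weka; here a toy fitness function, population size and operators are illustrative stand-ins.

```python
import random

def ga_feature_search(n_features, fitness, pop_size=20, generations=30, seed=1):
    """Wrapper-style feature-subset search with a tiny genetic algorithm.
    Individuals are bit masks; `fitness` would normally be cross-validated
    classifier accuracy on the masked feature set."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)     # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_features)] ^= 1  # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: features 0 and 2 are relevant, every other feature is noise.
def fitness(mask):
    return 2 * (mask[0] + mask[2]) - (mask[1] + sum(mask[3:]))

best = ga_feature_search(6, fitness)
# The search converges on a mask selecting the two relevant features.
```

Because elites are carried over unchanged, the best subset found so far is never lost between generations.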
APA, Harvard, Vancouver, ISO and other citation styles
26

Liu, Guoliang. „Bayes Filters with Improved Measurements for Visual Object Tracking“. Doctoral thesis, 2012. http://hdl.handle.net/11858/00-1735-0000-0006-B3F9-2.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
27

Hofmann, David. „Myoelectric Signal Processing for Prosthesis Control“. Doctoral thesis, 2014. http://hdl.handle.net/11858/00-1735-0000-0022-5DA2-9.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
28

Bauer, Stefan. „Erhöhung der Qualität und Verfügbarkeit von satellitengestützter Referenzsensorik durch Smoothing im Postprocessing“. Master's thesis, 2012. https://monarch.qucosa.de/id/qucosa%3A19821.

Full text of the source
Annotation:
This thesis investigates post-processing methods for increasing the accuracy and availability of satellite-based positioning methods that do not rely on inertial sensors. The goal is to produce, even under difficult reception conditions such as those found in urban areas, a trajectory whose accuracy qualifies it as a reference for other methods. Two approaches are pursued: the use of IGS data, and smoothing that incorporates sensors from the vehicle odometry. It is shown that using IGS data reduces the error by 50% to 70%. Furthermore, the smoothing methods demonstrated that they can consistently achieve decimeter-level accuracy even under poor reception conditions.
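Fixed-interval smoothing in post-processing uses future as well as past measurements at every epoch. A scalar Rauch-Tung-Striebel smoother is a minimal sketch of the idea; the process/measurement noise values and the measurement sequence are invented, and the thesis's actual models include odometry and are considerably richer.

```python
def rts_smooth(zs, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter followed by a Rauch-Tung-Striebel
    fixed-interval smoother (backward pass over the stored forward pass)."""
    xs, ps, xps, pps = [], [], [], []
    x, p = x0, p0
    for z in zs:                       # forward filtering pass
        xp, pp = x, p + q              # predict (identity dynamics)
        k = pp / (pp + r)              # Kalman gain
        x = xp + k * (z - xp)          # measurement update
        p = (1.0 - k) * pp
        xs.append(x); ps.append(p); xps.append(xp); pps.append(pp)
    smoothed = xs[:]                   # backward smoothing pass
    for t in range(len(zs) - 2, -1, -1):
        c = ps[t] / pps[t + 1]         # smoother gain
        smoothed[t] = xs[t] + c * (smoothed[t + 1] - xps[t + 1])
    return smoothed

noisy = [1.1, 0.9, 1.2, 0.8, 1.0, 1.05]
smoothed = rts_smooth(noisy, x0=noisy[0])
# The smoothed trajectory varies far less between epochs than the raw one.
```

Because the backward pass reuses the stored forward estimates, the smoother is strictly a post-processing step, matching the offline setting of the thesis.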
APA, Harvard, Vancouver, ISO and other citation styles
29

Obst, Marcus. „Bayesian Approach for Reliable GNSS-based Vehicle Localization in Urban Areas“. Doctoral thesis, 2014. https://monarch.qucosa.de/id/qucosa%3A20218.

Full text of the source
Annotation:
Nowadays, satellite-based localization is a well-established technical solution that supports many navigation tasks in daily life. Besides its use in portable devices, satellite-based positioning is employed in in-vehicle navigation systems as well. Moreover, thanks to its global coverage and the availability of inexpensive receiver hardware, it is an appealing technology for numerous applications in the area of Intelligent Transportation Systems (ITSs). However, it has to be admitted that most of the aforementioned examples either have modest accuracy requirements or are not sensitive to temporary integrity violations. Although technical concepts for Advanced Driver Assistance Systems (ADASs) based on Global Navigation Satellite Systems (GNSSs) have been successfully demonstrated under open-sky conditions, practice reveals that such systems suffer from degraded satellite signal quality in urban areas. Thus, the main research objective of this thesis is to provide a reliable vehicle positioning concept that can be used in urban areas without the aforementioned limitations. To that end, an integrated probabilistic approach is proposed which performs fault detection & exclusion, localization and multi-sensor data fusion within one unified Bayesian framework. From an algorithmic perspective, the presented concept is based on a probabilistic data association technique with explicit handling of the outlier measurements present in urban areas. This approach improves accuracy, integrity and availability at the same time, that is, it provides a consistent positioning solution. In addition, a comprehensive and in-depth analysis of typical errors in urban areas within the pseudorange domain is performed. Based on this analysis, probabilistic models are proposed and later used within the positioning algorithm.
Moreover, the presented concept clearly targets mass-market applications based on low-cost receivers and hence aims to replace costly sensors with smart algorithms. The benefits of these theoretical contributions are implemented and demonstrated with a real-time vehicle positioning prototype used in the European research project GAlileo Interactive driviNg (GAIN). This work describes all necessary parts of this system, including GNSS signal processing, fault detection and multi-sensor data fusion, within one processing chain. Finally, the performance and benefits of the proposed concept are examined and validated with both simulated and comprehensive real-world sensor data from numerous test drives.
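The explicit outlier handling in a probabilistic-data-association-style measurement update can be sketched on a scalar state: each observation is either a Gaussian inlier or a broad uniform outlier, and the gain is scaled by the posterior inlier probability. The noise values, outlier prior and outlier span below are illustrative assumptions, not the thesis's models.

```python
import math

def robust_update(prior_mean, prior_var, z, r=1.0, p_out=0.2, span=100.0):
    """Scalar measurement update with an explicit outlier hypothesis:
    the observation is either a Gaussian inlier or a broad uniform
    outlier, and the Kalman gain is scaled by the inlier probability."""
    s = prior_var + r                      # innovation variance
    innov = z - prior_mean
    # Evidence for each association hypothesis.
    l_in = (1.0 - p_out) * math.exp(-0.5 * innov ** 2 / s) / math.sqrt(2 * math.pi * s)
    l_out = p_out / span                   # flat outlier density
    w_in = l_in / (l_in + l_out)           # posterior inlier probability
    k = w_in * prior_var / s               # down-weighted Kalman gain
    return prior_mean + k * innov, (1.0 - k) * prior_var, w_in

good = robust_update(0.0, 1.0, 0.5)    # plausible pseudorange residual
bad = robust_update(0.0, 1.0, 50.0)    # gross outlier, nearly ignored
```

A gross outlier leaves the state estimate essentially unchanged instead of corrupting it, which is the behaviour the thesis exploits to keep accuracy, integrity and availability simultaneously.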
APA, Harvard, Vancouver, ISO and other citation styles
We offer discounts on all premium plans for authors whose works are included in thematic literature collections. Contact us to get a unique promo code!

To the bibliography