Dissertations on the topic "Bayesův filtr"
Create a reference in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 29 dissertations for your research on the topic "Bayesův filtr".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, whenever the relevant parameters are provided in the metadata.
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Havelka, Martin. „Detekce aktuálního podlaží při jízdě výtahem“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-444988.
Guňka, Jiří. „Adaptivní klient pro sociální síť Twitter“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-237052.
Matula, Tomáš. „Techniky umělé inteligence pro filtraci nevyžádané pošty“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236060.
Ravet, Alexandre. „Introducing contextual awareness within the state estimation process : Bayes filters with context-dependent time-heterogeneous distributions“. Thesis, Toulouse, INSA, 2015. http://www.theses.fr/2015ISAT0045/document.
Prevalent approaches for endowing robots with autonomous navigation capabilities require the estimation of a system state representation based on noisy sensor information. This system state usually comprises a set of dynamic variables such as the position, velocity and orientation required for the robot to achieve a task. In robotics, and in many other contexts, research efforts on state estimation have converged towards the popular Bayes filter. The primary reason for the success of Bayes filtering is its simplicity, from the mathematical tools required by the recursive filtering equations to the light and intuitive system representation provided by the underlying Hidden Markov Model. Recursive filtering also provides the most common and reliable method for real-time state estimation thanks to its computational efficiency. To keep computational complexity low, but also because real physical systems are never perfectly understood, and hence never faithfully represented by a model, Bayes filters usually rely on a minimal system state representation. Any unmodeled or unknown aspect of the system is then absorbed into additional noise terms. On the other hand, autonomous navigation requires robustness and adaptation capabilities with respect to changing environments. This creates the need for introducing contextual awareness into the filtering process. In this thesis, we specifically focus on enhancing state estimation models to deal with context-dependent sensor performance alterations. The issue is then to establish a practical balance between computational complexity and realistic modelling of the system through the introduction of contextual information. We investigate achieving this balance by extending the classical Bayes filter in order to compensate for the optimistic assumptions made by modeling the system through time-homogeneous distributions, while still benefiting from the computational efficiency of recursive filtering. Based on raw data provided by a set of sensors and any relevant information, we start by introducing a new context variable, while never trying to characterize a concrete context typology. Within the Bayesian framework, machine learning techniques are then used to automatically define a context-dependent time-heterogeneous observation distribution by introducing two additional models: a model providing observation noise predictions and a model providing observation selection rules. The investigation also concerns the impact of the chosen training method. In the context of Bayesian filtering, the model is usually trained in a generative manner: the optimal parameters are those that allow the model to best explain the data observed in the training set. On the other hand, discriminative training can implicitly help compensate for mismodeled aspects of the system by optimizing the model parameters with respect to the ultimate system performance: the estimate accuracy. Going deeper into the discussion, we also analyse how the training method changes the meaning of the model, and how we can properly exploit this property. Throughout the manuscript, results obtained with simulated and representative real data are presented and analysed.
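To make the core idea concrete, here is a minimal sketch (not the author's code) of a Bayes filter whose observation model is switched by a context variable, so that the observation distribution becomes time-heterogeneous; the context labels, noise values and measurements are illustrative assumptions.

```python
# A 1-D Kalman filter with a context-switched observation noise (illustrative only).
def context_noise_std(context):
    # Hypothetical context-dependent sensor noise: degraded sensing when "cluttered"
    return 2.0 if context == "cluttered" else 0.2

x, P = 0.0, 1.0                      # state estimate and its variance
q = 0.05                             # process noise variance (random-walk motion model)

# (measurement, context) pairs; values are invented for the example
measurements = [(0.1, "open"), (0.4, "open"), (3.0, "cluttered"), (0.7, "open")]

for z, context in measurements:
    P = P + q                        # prediction step
    r = context_noise_std(context) ** 2
    k = P / (P + r)                  # Kalman gain with context-dependent noise
    x = x + k * (z - x)              # measurement update
    P = (1.0 - k) * P
    print(f"{context:9s}  x={x:6.3f}  P={P:6.3f}")
```

Measurements taken in the "cluttered" context are automatically down-weighted because their assumed noise variance is larger.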
Sontag, Ralph. „Hat Bayes eine Chance?“ Universitätsbibliothek Chemnitz, 2004. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200400556.
Fredborg, Johan. „Spam filter for SMS-traffic“. Thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94161.
Valová, Alena. „Optimální metody výměny řídkých dat v senzorové síti“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-318682.
Delobel, Laurent. „Agrégation d'information pour la localisation d'un robot mobile sur une carte imparfaite“. Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC012/document.
Most large modern cities suffer from pollution and traffic jams. A possible solution could be to regulate private car access to the city centre and replace public transportation with pollution-free autonomous vehicles that dynamically change their planned trajectory to transport people in a fully on-demand scenario. Such vehicles could also be used to transport employees in a large industrial facility or in a regulated-access critical infrastructure area. In order to perform such a task, a vehicle must be able to localize itself in its area of operation. The most popular localization methods in such environments are based on so-called Simultaneous Localization and Mapping (SLAM) methods, which dynamically construct a map of the environment and locate the vehicle inside this map. Although these methods have demonstrated their robustness, most implementations do not use a map that could be shared between vehicles (map size, structure, etc.). Moreover, these methods frequently ignore already existing information, such as a city map, and instead build the map from scratch. To go beyond these limitations, we propose to use semantic high-level maps, such as OpenStreetMap, as a-priori maps and to let the vehicle localize itself on such a map. These maps can contain the location of roads, traffic signs, traffic lights, buildings, etc. Such maps almost always come with some degree of imprecision (mostly in position); they can also be wrong, lacking existing but undescribed elements (landmarks) or containing elements that no longer exist. To manage such imperfections and still allow a vehicle to localize based on this data, we propose a new strategy. Firstly, to handle the classical problem of data incest in data fusion under strong correlations, together with the map scalability problem, we manage the whole map using a Split Covariance Intersection filter. We also propose to detect landmarks that may no longer be present by estimating, from vehicle sensor detections, the probability that each mapped landmark still exists, and to remove those with a low score. Finally, we periodically scan sensor data to detect new landmarks that the map does not yet include and integrate them into the map. Experiments show the feasibility of such a dynamic high-level map that can be updated on the fly.
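As a rough illustration of the fusion building block mentioned above, the following sketch implements plain Covariance Intersection, which fuses two estimates with unknown cross-correlation; it is not the thesis's Split Covariance Intersection filter, and all numbers are made up.

```python
import numpy as np

def covariance_intersection(a, A, b, B, n_grid=101):
    """Fuse estimates (a, A) and (b, B) whose cross-correlation is unknown."""
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        # Convex combination in information (inverse-covariance) form
        info = w * np.linalg.inv(A) + (1.0 - w) * np.linalg.inv(B)
        C = np.linalg.inv(info)
        c = C @ (w * np.linalg.inv(A) @ a + (1.0 - w) * np.linalg.inv(B) @ b)
        # Keep the weight that minimizes the trace of the fused covariance
        if best is None or np.trace(C) < np.trace(best[1]):
            best = (c, C)
    return best

# Two invented landmark-position estimates with different uncertainties
a, A = np.array([2.0, 1.0]), np.diag([1.0, 4.0])
b, B = np.array([2.5, 0.5]), np.diag([3.0, 0.5])
c, C = covariance_intersection(a, A, b, B)
print(c, np.trace(C))
```

Unlike a naive Kalman update, the fused covariance never claims more confidence than is justified when the two inputs may already share information, which is exactly the "data incest" issue the abstract refers to.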
Garcia, Elmar [Verfasser], und Tino [Akademischer Betreuer] Hausotte. „Bayes-Filter zur Genauigkeitsverbesserung und Unsicherheitsermittlung von dynamischen Koordinatenmessungen / Elmar Garcia. Gutachter: Tino Hausotte“. Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2014. http://d-nb.info/1054731764/34.
Dall'ara, Jacopo. „Algoritmi per il mapping ambientale mediante array di antenne“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14267/.
Obst, Marcus. „Untersuchungen zur kooperativen Fahrzeuglokalisierung in dezentralen Sensornetzen“. Master's thesis, Universitätsbibliothek Chemnitz, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200900264.
Arroyo Negrete, Elkin Rafael. „Continuous reservoir model updating using an ensemble Kalman filter with a streamline-based covariance localization“. Texas A&M University, 2006. http://hdl.handle.net/1969.1/4859.
Closas, Gómez Pau. „Bayesian signal processing techniques for GNSS receivers: from multipath mitigation to positioning“. Doctoral thesis, Universitat Politècnica de Catalunya, 2009. http://hdl.handle.net/10803/6942.
This dissertation deals with the design of satellite-based navigation receivers. The term Global Navigation Satellite Systems (GNSS) refers to those navigation systems based on a constellation of satellites which emit ranging signals useful for positioning. Although the American GPS is probably the most popular, the European contribution (Galileo) will be operational soon. Other global and regional systems exist, all with the same objective: to aid user positioning. Initially, the thesis provides the state of the art in GNSS: the structure of the navigation signals and the receiver architecture. The design of a GNSS receiver consists of a number of functional blocks. From the antenna to the final position calculation, the design poses challenges in many research areas. Although the radio-frequency chain of the receiver is commented on in the thesis, the main focus of the dissertation is on the signal processing algorithms applied after signal digitization. These algorithms can be divided into two groups: synchronization and positioning. This classification corresponds to the two main processes typically performed by a GNSS receiver. First, the relative distance between the receiver and each of the visible satellites is estimated. These distances are calculated by estimating the delay suffered by the signal traveling from its emission at the corresponding satellite to its reception at the receiver's antenna. Estimation and tracking of these parameters is performed by the synchronization algorithm. After the relative distances to the satellites are estimated, the positioning algorithm starts its operation. Positioning is typically performed by a process referred to as trilateration: the intersection of a set of spheres centered at the visible satellites, with radii equal to the corresponding relative distances. Synchronization and positioning are thus performed sequentially and continuously. The thesis contributes to both topics, as expressed by the subtitle of the dissertation.
On the one hand, the thesis delves into the use of Bayesian filtering for the tracking of the synchronization parameters (time-delays, Doppler shifts and carrier phases) of the received signal. One of the main sources of error in high-precision GNSS receivers is the presence of multipath replicas apart from the line-of-sight signal (LOSS). The algorithms proposed in this part of the thesis therefore aim at mitigating the multipath effect on synchronization estimates. The dissertation provides an introduction to the basics of Bayesian filtering, including a compendium of the most popular algorithms. In particular, Particle Filters (PF) are studied as one of the promising alternatives for dealing with nonlinear/non-Gaussian systems. PF are simulation-based algorithms, built on Monte Carlo methods, that provide a discrete characterization of the posterior distribution of the system. In contrast to other simulation-based methods, PF are supported by convergence results which make them attractive in cases where the optimal solution cannot be found analytically. In that vein, a PF that incorporates a set of features to enhance its performance and robustness with a reduced number of particles is proposed. First, the linear part of the system is optimally handled by a Kalman Filter (KF), a procedure referred to as Rao-Blackwellization. This reduces the variance of the particles and, thus, the number of particles required to attain a given accuracy when characterizing the posterior distribution. A second feature is the design of an importance density function (from which particles are generated) close to the optimal one, which is not available in general. The selection of this function is typically a key issue in PF designs. The dissertation proposes an approximation of the optimal importance function using Laplace's method. In parallel, Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) algorithms are considered and compared with the proposed PF by computer simulations.
On the other hand, a novel point of view on the positioning problem constitutes one of the original contributions of the thesis. Whereas conventional receivers operate in a two-step procedure (synchronization and positioning), the thesis proposes Direct Position Estimation (DPE) from the digitized signal. Given the novelty of the approach, the dissertation provides both qualitative and quantitative motivations for using DPE instead of the conventional two-step approach. DPE is studied following the Maximum Likelihood (ML) principle, and an algorithm based on Accelerated Random Search (ARS) is considered for a practical implementation of the derived estimator. Computer simulation results show the robustness of DPE in scenarios where the conventional approach fails, for instance in multipath-rich scenarios. One of the conclusions of the thesis is that joint processing of the satellites' signals enhances positioning performance, owing to the independent propagation channels of the satellite links. The dissertation also presents the extension of DPE to the Bayesian framework: Bayesian DPE (BDPE). BDPE maintains DPE's philosophy while allowing sources of side/prior information to be taken into account. Some examples are given, such as the use of inertial measurement systems and atmospheric models; the list is, however, only limited by imagination and the particular application where BDPE is implemented. Finally, the dissertation studies the theoretical lower bounds on the accuracy of GNSS receivers. Some of these limits were already known; others see the light as a result of the research reported in the dissertation. The Cramér-Rao Bound (CRB) is the theoretical lower bound on the accuracy of any unbiased estimator of a parameter. The dissertation recalls the CRB of the synchronization parameters, a result already known. A novel contribution of the thesis is the derivation of the CRB of the position estimator for both the conventional and the DPE approaches. These results provide an asymptotic comparison of the two GNSS positioning approaches. Similarly, the CRB of the synchronization parameters for the Bayesian case (Posterior Cramér-Rao Bound, PCRB) is given and used as a fundamental limit for the Bayesian filters proposed in the thesis.
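For readers unfamiliar with the filtering machinery behind this work, the sketch below implements a generic bootstrap particle filter that tracks a slowly drifting time-delay from noisy scalar measurements; it illustrates the Bayesian filtering principle only and is not the Rao-Blackwellized design proposed in the thesis. All noise levels and dynamics are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500                                    # number of particles
q, r = 0.01, 0.5                           # process / measurement noise std (assumed)
particles = rng.normal(10.0, 1.0, N)       # initial delay hypotheses
weights = np.full(N, 1.0 / N)

true_delay = 10.0
for t in range(50):
    true_delay += rng.normal(0.0, q)       # simulated true dynamics
    z = true_delay + rng.normal(0.0, r)    # noisy pseudorange-like measurement

    particles += rng.normal(0.0, q, N)     # propagate particles through motion model
    weights *= np.exp(-0.5 * ((z - particles) / r) ** 2)   # Gaussian likelihood
    weights /= weights.sum()

    # Systematic resampling when the effective sample size drops too low
    if 1.0 / np.sum(weights ** 2) < N / 2:
        positions = (rng.random() + np.arange(N)) / N
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), N - 1)
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)

estimate = np.sum(weights * particles)     # posterior-mean delay estimate
print(round(true_delay, 3), round(estimate, 3))
```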
Alexandersson, Johan, und Olle Nordin. „Implementation of SLAM Algorithms in a Small-Scale Vehicle Using Model-Based Development“. Thesis, Linköpings universitet, Datorteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148612.
Bauer, Stefan. „Erhöhung der Qualität und Verfügbarkeit von satellitengestützter Referenzsensorik durch Smoothing im Postprocessing“. Master's thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-102106.
Havlíček, Martin. „Zkoumání konektivity mozkových sítí pomocí hemodynamického modelování“. Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-233576.
Mathema, Najma. „Predicting Plans and Actions in Two-Player Repeated Games“. BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8683.
Ribeiro, Eduardo da Silva. „Novas propostas em filtragem de projeções tomográficas sob ruído Poisson“. Universidade Federal de São Carlos, 2010. https://repositorio.ufscar.br/handle/ufscar/438.
Der volle Inhalt der QuelleFinanciadora de Estudos e Projetos
In this dissertation we present techniques for the filtering of tomographic projections corrupted by Poisson noise. For the filtering of the projections we use variations of three techniques: Bayesian estimation, Wiener filtering, and thresholding in the wavelet domain. We used ten MAP estimators, each with a different probability density as prior information. An adaptive windowing was used to compute the local estimates, and a hypothesis test was used to select the probability density best suited to each projection. We used the pointwise Wiener filter and the FIR Wiener filter, in both cases with an adaptive filtering scheme. For thresholding in the wavelet domain, we tested the performance of four families of wavelet basis functions and four techniques for obtaining thresholds. The experiments were carried out with the Shepp-Logan phantom and five sets of phantom projections captured by a CT scanner developed at CNPDIA-EMBRAPA. Image reconstruction was performed with the parallel POCS algorithm. The filtering was evaluated after reconstruction using the following error criteria: ISNR, PSNR, SSIM, and IDIV.
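As a hedged illustration of one of the three techniques (thresholding in the wavelet domain), the sketch below denoises a one-dimensional projection with PyWavelets, using an Anscombe transform to approximately stabilize the Poisson noise; the wavelet choice, decomposition level, and threshold rule are assumptions, not the dissertation's settings.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=3):
    # Forward DWT of the 1-D projection
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal (VisuShrink) threshold estimated from the finest detail band
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(signal.size))
    # Soft-threshold the detail coefficients, keep the approximation untouched
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: signal.size]

def denoise_poisson_projection(counts, wavelet="db4", level=3):
    # Anscombe transform: approximately variance-stabilizes Poisson counts
    stab = 2.0 * np.sqrt(np.asarray(counts, dtype=float) + 3.0 / 8.0)
    den = wavelet_denoise(stab, wavelet, level)
    # Simple algebraic inverse (the unbiased inverse is slightly different)
    return (den / 2.0) ** 2 - 3.0 / 8.0

# Toy projection: a smooth profile observed as Poisson counts
rng = np.random.default_rng(0)
clean = 200.0 + 150.0 * np.sin(np.linspace(0.0, np.pi, 256)) ** 2
noisy = rng.poisson(clean).astype(float)
denoised = denoise_poisson_projection(noisy)
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```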
Obst, Marcus. „Bayesian Approach for Reliable GNSS-based Vehicle Localization in Urban Areas“. Doctoral thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-162894.
Mahmoud, Mohamed. „Parking Map Generation and Tracking Using Radar : Adaptive Inverse Sensor Model“. Thesis, Linköpings universitet, Fluida och mekatroniska system, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-167084.
Karlsson, Nicklas. „System för att upptäcka Phishing : Klassificering av mejl“. Thesis, Växjö University, School of Mathematics and Systems Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2073.
This report takes a look at the phishing problem, something many have encountered through, for example, the fake Nordea or eBay e-mails that have lately shown up in our inboxes, and at a possible way to reduce the effect of phishing. The focus of the report lies on the classification of e-mails, and the main question is: "Is it possible, with high accuracy, to use a classification tool to sort phishing e-mails from other spam e-mails?" It was more difficult than expected to find phishing e-mails to use in the classification. The classifications that were made showed that it was possible to find up to 100 % of the phishing e-mails with both Naive Bayes and Support Vector Machine classifiers. The report presents the work done, background on phishing, and the results of the classification tests.
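A minimal sketch of the kind of Naive Bayes text classification discussed above, using scikit-learn; the toy e-mails and labels are invented, and the thesis's actual corpus and feature set are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus; in practice these would be full e-mail bodies with real labels.
emails = [
    "verify your account at nordea immediately or it will be closed",
    "your ebay password has expired click here to confirm your details",
    "buy cheap watches best prices limited offer",
    "win a free vacation claim your prize now",
]
labels = ["phishing", "phishing", "spam", "spam"]

# Bag-of-words features followed by a multinomial Naive Bayes classifier
model = make_pipeline(CountVectorizer(lowercase=True), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["please confirm your bank account details via this link"]))
```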
Jüngel, Matthias. „The memory-based paradigm for vision-based robot localization“. Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2012. http://dx.doi.org/10.18452/16593.
For autonomous mobile robots, a solid world model is an important prerequisite for decision making. Current state estimation techniques are based on Hidden Markov Models and Bayesian filtering. These methods estimate the state of the world (belief) in an iterative manner. Data obtained from perceptions and actions is accumulated in the belief, which can be represented parametrically (as in Kalman filters) or non-parametrically (as in particle filters). When the sensor's information gain is low, as in the case of bearing-only measurements, the representation of the belief can be challenging. For instance, a Kalman filter's Gaussian models might not be sufficient, or a particle filter might need an unreasonable number of particles. In this thesis, I introduce a new state estimation method which doesn't accumulate information in a belief. Instead, perceptions and actions are stored in a memory. Based on this, the state is calculated when needed. The system has a particular advantage when processing sparse information. This thesis presents how the memory-based technique can be applied to examples from RoboCup (autonomous robots playing soccer). In experiments, it is shown how four-legged and humanoid robots can localize themselves very precisely on a soccer field. The localization is based on bearings to objects obtained from digital images. This thesis also presents a new technique to recognize field lines which doesn't need any pre-run calibration and also works when the field lines are partly concealed or affected by shadows.
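The following sketch gives a flavour of bearing-based localization from a memory of observations: a robot pose is recovered from stored bearings to known landmarks by a single batch nonlinear least-squares solve. It is illustrative only; the thesis's memory-based algorithm, its odometry handling, and the RoboCup specifics are not reproduced, and the landmark layout is invented.

```python
import numpy as np
from scipy.optimize import least_squares

landmarks = np.array([[0.0, 0.0], [6.0, 0.0], [6.0, 4.0], [0.0, 4.0]])  # assumed field marks
true_pose = np.array([2.0, 1.5, 0.3])                                    # x, y, heading

def bearings(pose, lm):
    # Bearing of each landmark relative to the robot's heading
    dx, dy = lm[:, 0] - pose[0], lm[:, 1] - pose[1]
    return np.arctan2(dy, dx) - pose[2]

# "Memory" of stored bearing observations (simulated with small angular noise)
memory = bearings(true_pose, landmarks) + np.random.default_rng(2).normal(0.0, 0.02, 4)

def residuals(pose):
    err = bearings(pose, landmarks) - memory
    return np.arctan2(np.sin(err), np.cos(err))   # wrap angle differences to [-pi, pi]

# Pose computed on demand from the stored observations (odometry compensation omitted)
sol = least_squares(residuals, x0=np.array([3.0, 2.0, 0.0]))
print(np.round(sol.x, 3))
```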
Mosallam, Ahmed. „Remaining useful life estimation of critical components based on Bayesian Approaches“. Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2069/document.
Constructing prognostics models relies upon understanding the degradation process of the monitored critical components in order to correctly estimate the remaining useful life (RUL). Traditionally, a degradation process is represented in the form of physical or expert models. Such models require extensive experimentation and verification that are not always feasible in practice. Another approach, which builds up knowledge about the system degradation over time from component sensor data, is known as data driven. Data-driven models require that sufficient historical data have been collected. In this work, a two-phase data-driven method for RUL prediction is presented. In the offline phase, the proposed method builds on finding variables that contain information about the degradation behavior using an unsupervised variable selection method. Different health indicators (HI) are constructed from the selected variables, which represent the degradation as a function of time, and saved in the offline database as reference models. In the online phase, the method estimates the degradation state using a discrete Bayesian filter. The method finally finds the offline health indicator most similar to the online one, using a k-nearest neighbors (k-NN) classifier and Gaussian process regression (GPR), and uses it as a RUL estimator. The method is verified using PRONOSTIA bearing data as well as battery and turbofan engine degradation data acquired from the NASA data repository. The results show the effectiveness of the method in predicting the RUL.
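To illustrate the online phase described above, here is a minimal sketch of a discrete Bayes filter tracking a latent degradation level from a scalar health indicator; the state set, transition matrix, and observation model are assumptions for the example, not the thesis's learned models.

```python
import numpy as np

states = np.array([0, 1, 2, 3])            # 0 = healthy ... 3 = near failure
# Transition model: degradation never improves, small chance of worsening per step
T = np.array([[0.95, 0.05, 0.00, 0.00],
              [0.00, 0.95, 0.05, 0.00],
              [0.00, 0.00, 0.95, 0.05],
              [0.00, 0.00, 0.00, 1.00]])
hi_means = np.array([1.0, 0.8, 0.5, 0.2])  # expected health indicator per state (assumed)
hi_std = 0.1

def likelihood(z):
    # Gaussian observation model around each state's nominal indicator value
    return np.exp(-0.5 * ((z - hi_means) / hi_std) ** 2)

belief = np.array([1.0, 0.0, 0.0, 0.0])    # start from "healthy"
for z in [0.97, 0.85, 0.78, 0.55, 0.48, 0.25]:   # simulated indicator readings
    belief = T.T @ belief                  # prediction step
    belief *= likelihood(z)                # measurement update
    belief /= belief.sum()                 # normalize
    print(np.round(belief, 3))
```

The belief mass shifts from the healthy state towards the near-failure state as the indicator decays; a RUL estimate can then be read off by matching this trajectory against stored reference indicators, as the thesis does with k-NN and GPR.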
Maršál, Martin. „Elektronický modul pro akustickou detekci“. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-240831.
Sarr, Ndey Binta, und 莎妮塔. „Hybrid of Filter Wrapper using Naive Bayes Algorithm and Genetic Algorithm“. Thesis, 2018. http://ndltd.ncl.edu.tw/handle/nqhgvw.
元智大學 (Yuan Ze University), 生物與醫學資訊碩士學位學程 (Master's Degree Program in Biological and Medical Informatics), academic year 106.
Feature selection is an essential data preprocessing method and has been widely studied in data mining and machine learning. In this thesis, we present an effective feature selection approach using a hybrid method: a filter method selects the most informative features from the dataset, and a wrapper method with a genetic search then selects the relevant features and removes redundant ones. Finally, we run the selected features through a combination of the two algorithms, Naive Bayes and a Genetic Algorithm, using voting as the classifier in Weka. The experimental results show that our method has clear advantages in terms of classification accuracy, error rate, Kappa statistic, and number of selected features when compared with the filter method, the wrapper method, and other existing approaches. It is powerful, has a low computational cost, and is easy to understand.
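A minimal sketch of the filter-then-wrapper idea on synthetic data: a mutual-information filter keeps the most informative features, and a tiny genetic search scored by a Naive Bayes classifier picks the final subset. The dataset, population size, and GA operators are illustrative assumptions, not the thesis's Weka setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)

# Filter step: keep the 15 most informative features by mutual information
X_f = SelectKBest(mutual_info_classif, k=15).fit_transform(X, y)

def fitness(mask):
    # Cross-validated Naive Bayes accuracy on the selected columns
    if not mask.any():
        return 0.0
    return cross_val_score(GaussianNB(), X_f[:, mask], y, cv=3).mean()

# Wrapper step: tiny genetic search over binary feature masks
pop = rng.random((20, X_f.shape[1])) < 0.5
for _ in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]                 # selection of the best half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X_f.shape[1])                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(child.size) < 0.05                # mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best), "accuracy:", round(fitness(best), 3))
```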
Liu, Guoliang. „Bayes Filters with Improved Measurements for Visual Object Tracking“. Doctoral thesis, 2012. http://hdl.handle.net/11858/00-1735-0000-0006-B3F9-2.
Hofmann, David. „Myoelectric Signal Processing for Prosthesis Control“. Doctoral thesis, 2014. http://hdl.handle.net/11858/00-1735-0000-0022-5DA2-9.
Bauer, Stefan. „Erhöhung der Qualität und Verfügbarkeit von satellitengestützter Referenzsensorik durch Smoothing im Postprocessing“. Master's thesis, 2012. https://monarch.qucosa.de/id/qucosa%3A19821.
Obst, Marcus. „Bayesian Approach for Reliable GNSS-based Vehicle Localization in Urban Areas“. Doctoral thesis, 2014. https://monarch.qucosa.de/id/qucosa%3A20218.