Theses on the topic "Localization technique"

To see other types of publications on this topic, follow the link: Localization technique.

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the top 50 theses for your research on the topic "Localization technique".

Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online, when this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Khan, Adnan Umar. « Distributive time division multiplexed localization technique for WLANs ». Thesis, De Montfort University, 2012. http://hdl.handle.net/2086/7102.

Full text
Abstract:
This thesis presents research on solving a localization problem in indoor WLANs by introducing a distributive time-division-multiplexed localization technique based on convex semidefinite programming. Convex optimization has proven to give promising results, but its computational complexity limits the problem size it can handle; for the localization problem, the size is determined by the number of nodes to be localized. A convex localization technique therefore could not be applied to real-time tracking of mobile nodes within WLANs that already provide computationally intensive real-time multimedia services. We circumvent this problem with a distributive technique that divides a large network into computationally manageable smaller subnets. The division is based on the mobility levels of the nodes: a network contains two types of nodes, mobile and stationary. Mobile nodes are placed into subnets tagged as mobile, and stationary nodes into subnets tagged as stationary. The purpose of this classification is priority-based localization, with higher priority given to the mobile subnets. The classified subnets are then localized by scheduling them in a time-division-multiplexed way: a time frame is defined, consisting of a finite number of fixed-duration time slots, such that one subnet can be localized within a slot. The subnets are scheduled within the frames in a 1:n ratio pattern, i.e. within n frames each mobile subnet is localized n times while each stationary subnet is localized once. Using this priority-based scheduling, we achieve real-time tracking of mobile node positions with the computationally intensive convex optimization technique.
In addition, we show that the resulting distributive technique can handle networks of diverse node density: a network whose node count ranges from very few to very many can be localized by increasing the frame duration, making the technique scalable. Beyond computational complexity, another problem that arises when distance-based localization is formulated as a convex optimization problem is the high-rank solution. We also develop a solution based on virtual nodes to circumvent this problem: virtual nodes are not real nodes but are added to the network only to achieve a low-rank realization. Finally, we develop a distributive 3D real-time localization technique that exploits mobile user behaviour within multi-storey indoor environments. The height estimates obtained with this technique were found to be coarse, so it can only be used to identify the floor on which a node is located.
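The 1:n priority scheduling described above can be sketched as follows. This is a hypothetical illustration, not the thesis implementation: the subnet names, the per-frame layout, and the `build_schedule` helper are all invented.

```python
def build_schedule(mobile_subnets, stationary_subnets, n):
    """Return a list of time slots; each slot names the subnet localized in it.

    Over one scheduling round of n frames, every mobile subnet is localized
    once per frame (so n times in total), while stationary subnets are
    localized one per frame, i.e. once each: the 1:n priority pattern.
    Stationary subnets beyond the first n would roll into the next round.
    """
    schedule = []
    stationary_iter = iter(stationary_subnets)
    for _frame in range(n):
        # every mobile subnet gets a slot in every frame
        schedule.extend(mobile_subnets)
        # one stationary subnet per frame, if any remain
        nxt = next(stationary_iter, None)
        if nxt is not None:
            schedule.append(nxt)
    return schedule

slots = build_schedule(["M1", "M2"], ["S1", "S2"], n=2)
# each mobile subnet appears twice per round, each stationary subnet once
```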
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Yeluri, Sai Krishna. « Outdoor localization technique using landmarks to determine position and orientation ». [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0000828.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Handfield, Joseph J. « High resolution source localization in near-field sensor arrays by MVDR technique / ». Online version of thesis, 2007. http://hdl.handle.net/1850/5861.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Sheng, Jian. « VALUE-BASED FAULT LOCALIZATION IN JAVA NUMERICAL SOFTWARE WITH CAUSAL INFERENCE TECHNIQUE ». Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1543982972617959.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Ambarkutuk, Murat. « A Grid based Indoor Radiolocation Technique Based on Spatially Coherent Path Loss Model ». Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/80405.

Full text
Abstract:
This thesis presents a grid-based indoor radiolocation technique based on a Spatially Coherent Path Loss Model (SCPL). SCPL is a path loss model that characterizes radio wave propagation in an environment using only Received Signal Strength (RSS) fingerprints. Propagation is characterized by uniformly dividing the environment into grid cells and estimating the propagation parameters for each grid cell individually. Using SCPL and RSS fingerprints acquired at an unknown location, the distance between an agent and each access point in an indoor environment can be determined. A least-squares trilateration then provides a global fix of the agent's location, and its result is represented as a probability distribution function over the grid cells induced by SCPL. Since the proposed technique locally models the propagation, accounting for attenuation by non-uniform environmental irregularities, it may improve both the characterization of indoor path loss and the radiolocation results. The efficacy of the proposed technique was investigated with an experiment comparing SCPL against an indoor radiolocation technique based on a conventional path loss model.
Master of Science
This thesis presents a technique that uses radio waves to localize an agent in an indoor environment. By characterizing the difference between the transmitted and received power of the radio waves, the agent can determine how far it is from the transmitting antennas, i.e. access points, placed in the environment. Since the power difference mainly results from obstructions in the environment, the attenuation profile of the environment is of significant importance in radiolocation techniques. The proposed technique, called the Spatially Coherent Path Loss Model (SCPL), characterizes radio wave propagation, i.e. attenuation, separately for different regions of the environment, unlike conventional techniques that employ global attenuation profiles. The localization environment is represented with a grid-cell structure, and the SCPL parameters describing the extent of attenuation are estimated individually for each cell. After creating an attenuation profile of the environment, the agent localizes itself using SCPL together with the signal powers received from the access points. This scheme of attenuation profiling constitutes the main contribution of the proposed technique. The efficacy and validity of the proposed technique were investigated with an experiment comparing SCPL and an indoor radiolocation technique based on a conventional path loss model.
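The distance-inversion and least-squares trilateration steps described in both summaries can be sketched as follows. This is an illustrative toy, not the thesis code: it uses a single global (P0, n) pair for the standard log-distance model, where SCPL would use per-cell parameters, and it solves the exactly determined three-anchor case.

```python
def rss_to_distance(rss, p0, n):
    """Invert the log-distance model RSS = P0 - 10*n*log10(d) to a distance."""
    return 10 ** ((p0 - rss) / (10.0 * n))

def trilaterate(anchors, dists):
    """Least-squares-style fix from three anchors.

    Subtracting the first circle equation |p - a_i|^2 = d_i^2 from the other
    two linearizes the problem into a 2x2 system A [x, y]^T = b.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# agent at (3, 4) with anchors at three corners: distances are consistent,
# so the fix recovers the true position
pos = trilaterate([(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)],
                  [5.0, 65 ** 0.5, 45 ** 0.5])
```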
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Imam, Farasat. « Bluetooth Low Energy (BLE) based Indoor Localization using Fingerprinting Techniques ». Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Find full text
Abstract:
Positioning technologies have been developed over the last few decades to provide users with location and navigation services by exploiting advances in digital circuitry. The Global Positioning System (GPS), a satellite-based location and navigation system, was one of the first positioning systems created. Using GPS used to require special (expensive) hardware, but smartphone technology has made it possible to use GPS on handheld devices without any additional equipment; we all use GPS on our smartphones for navigation in daily life, and it has become the de facto standard for outdoor localization. However, due to the lack of line-of-sight (LoS) inside buildings, GPS cannot be used in indoor environments. Positioning systems for indoor environments are therefore being developed, since humans spend more time indoors than outdoors. Indoor positioning systems have been built on a variety of available signal technologies (such as WiFi, ZigBee, Bluetooth, UWB, and others), depending on the context and application scenario. The Bluetooth Low Energy (Bluetooth Smart) protocol, first released in 2010, was designed to be low-cost and energy-efficient. Apple Inc. and Aruba (Hewlett Packard Enterprise) introduced beacon technology, which uses the Bluetooth Low Energy standard to communicate with smartphones and provide context and location awareness. The fact that all new smartphones (and tablets) support the BLE protocol can be exploited to develop low-cost, energy-efficient, precise, and accurate indoor positioning systems using beacons and smartphones. The BLE protocol has the potential to become the de facto standard for the Internet of Things, so a BLE-based indoor localization system could become an integrated part of IoT.
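A common way to realize the fingerprinting approach named in this entry's title is k-nearest-neighbour matching in signal space: offline, RSS vectors are recorded at known reference points; online, an observed RSS vector is matched against them. The following is a minimal hypothetical sketch; the reference points, RSS values, and `knn_locate` helper are invented, not from the thesis.

```python
import math

def knn_locate(fingerprints, rss, k=2):
    """fingerprints: list of ((x, y), [rss per beacon]); rss: observed vector.

    Returns the centroid of the k reference points whose stored RSS vectors
    are closest to the observation (Euclidean distance in signal space).
    """
    ranked = sorted(fingerprints, key=lambda fp: math.dist(fp[1], rss))[:k]
    xs = [p[0][0] for p in ranked]
    ys = [p[0][1] for p in ranked]
    return (sum(xs) / k, sum(ys) / k)

# three reference points along a corridor, two beacons; the observation is
# closest in signal space to the first two points
fps = [((0.0, 0.0), [-40, -70]),
       ((5.0, 0.0), [-55, -55]),
       ((10.0, 0.0), [-70, -40])]
est = knn_locate(fps, [-42, -68], k=2)
```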
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Mi, Liang. « A Testbed for Design and Performance Evaluation of Visual Localization Technique inside the Small Intestine ». Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/620.

Full text
Abstract:
Wireless video capsule endoscopy (VCE) plays an increasingly important role in assisting clinical diagnosis of gastrointestinal (GI) diseases. It provides a non-invasive way to examine the entire small intestine, which other conventional endoscopic instruments can barely reach. Existing examination systems for VCE cannot track the location of an endoscopic capsule, which prevents the physician from identifying the exact location of disease. During the eight-hour examination, the video capsule continuously takes images at a frame rate of up to six frames per second, so it is possible to extract motion information from the content of the image sequence. Many attempts have been made to develop computer vision algorithms that detect the motion of the capsule from the small changes between consecutive video frames and then trace the capsule's location. However, validating those algorithms is challenging because conducting experiments on the human body is extremely difficult due to individual differences and legal issues. In this thesis, two validation approaches for motion tracking of the VCE are presented in detail. One approach is to build a physical testbed with a plastic pipe and an endoscopy camera; the other is to build a virtual testbed by creating a three-dimensional virtual small intestine model and simulating the motion of the capsule. Based on the virtual testbed, a physiological factor, intestinal contraction, has been studied in terms of its influence on visual localization algorithms, and a geometric model for measuring the amount of contraction is proposed and validated via the virtual testbed. The empirical results support the performance evaluation of other research on visual localization algorithms for VCE.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Erdem, Rengin. « Ag2S/2-MPA Quantum Dots ». Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614384/index.pdf.

Full text
Abstract:
Quantum dots are fluorescent semiconductor nanocrystals with unique optical properties such as high quantum yield and photostability. These nanoparticles are superior to organic dyes and fluorescent proteins in many respects and therefore show great potential for both in vivo and in vitro imaging and drug delivery applications. However, cytotoxicity remains one of the major problems associated with their biological applications. The aim of this study is the in vitro characterization and assessment of the biological application potential of a novel silver sulfide quantum dot coated with mercaptopropionic acid (2-MPA). The in vitro studies reported in this work were conducted on a mouse fibroblast cell line (NIH/3T3) treated with Ag2S/2-MPA quantum dots in the 10–600 µg/mL concentration range for 24 h. Various fluorescence spectroscopy and microscopy methods were used to determine the metabolic activity, proliferation rate, and apoptotic fraction of QD-treated cells, as well as QD internalization efficiency and intracellular localization. Metabolic activity and proliferation rate of the QD-treated cells were measured with XTT and CyQUANT® cell proliferation assays, respectively. Intracellular localization and qualitative uptake studies were conducted using confocal laser scanning microscopy. Apoptosis studies were performed with the Annexin V assay. Finally, we also conducted a quantitative uptake assay to determine the internalization efficiency of the silver sulfide particles. Correlated metabolic activity and proliferation assay results indicate that Ag2S/2-MPA quantum dots are highly cytocompatible, with no significant toxicity up to 600 µg/mL treatment. The optimal cell imaging concentration was determined to be 200 µg/mL. The particles displayed a punctate cytoplasmic distribution indicating endosomal entrapment. The in vitro characterization studies reported here indicate that Ag2S/2-MPA quantum dots have great biological application potential owing to their excellent spectral and cytocompatibility properties. The near-infrared emission of silver sulfide quantum dots provides a major advantage in imaging, since signal interference from the cells (autofluorescence), a typical problem in microscopic studies, is minimal in this part of the emission spectrum. The results of this study are presented in an article accepted by the Journal of Materials Chemistry, DOI: 10.1039/C2JM31959D.
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Tondreau, Gilles. « Damage localization in civil engineering structures using dynamic strain measurements ». Doctoral thesis, Universite Libre de Bruxelles, 2013. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209466.

Full text
Abstract:
This thesis focuses on the development of a new method for the continuous monitoring of civil engineering structures in order to locate small damages automatically. A review of the very wide literature on Structural Health Monitoring (SHM) first points out that the methods can be grouped into four categories, based on whether or not they need a numerical model and whether or not they need information from the damaged structure in order to be applied. This state of the art of SHM methods highlights what is required to reach each level of SHM; in particular, the localization of small damages in civil engineering structures needs a non-model-based, output-only, damage-sensitive feature extraction technique. The origin of the local sensitivity of strains to damage is also analyzed, which justifies their use for damage localization.

A new method is proposed, based on the modal filtering technique, which consists in combining the sensor responses linearly in a specific way to mimic a single-degree-of-freedom system and which was previously developed for damage detection. A very large network of dynamic strain sensors is deployed on the structure and split into several independent local sensor networks. Low-computational-cost, fast signal processing techniques are coupled to statistical control charts for robust and fully automated damage localization.

The efficiency of the method is demonstrated using time-domain simulated data on a simply supported beam and a three-dimensional bridge structure. The method is able to detect and locate very small damages, even in the presence of measurement noise and variability of the baseline structure, if strain sensors are used. The difficulty of locating damages from acceleration sensors is also clearly illustrated. The most common classical methods for damage localization are applied to the simply supported beam, and the results show that the modal filtering technique performs much better for an accurate localization of small damages and is easier to automate.

An improvement of the modal filters method, referred to as adaptive modal filters, is next proposed in order to enhance the ability to localize small damages, as well as to follow their evolution through modal filter updating. Based on this study, a new damage-sensitive feature is proposed and compared with other damage-sensitive features for detecting damages with modal filters, to demonstrate its interest. These expectations are verified numerically with the three-dimensional bridge structure, and the results show that the adaptation of the modal filters increases the sensitivity of local filters to damages.

Experimental tests were first conducted to check the feasibility of modal filters for detecting damages when they are used with accelerometers. Two case studies are considered. The first investigates experimental damage detection on a small aircraft wing equipped with a network of 15 accelerometers and one force transducer, excited with an electrodynamic shaker. Damage is introduced by replacing inspection panels with damaged panels. A modified version of the modal filtering technique is applied and compared with damage detection based on principal component analysis of FRFs as well as of transmissibilities. All three approaches succeed in detecting the damage, but we illustrate the advantage of the modal filtering algorithm as well as of the new damage-sensitive feature. The second experimental application aims at detecting both linear and nonlinear damage scenarios using the responses of four accelerometers installed on the three-storey frame structure previously developed and studied at Los Alamos National Labs. In particular, modal filters are shown to be sensitive to both types of damage, but cannot distinguish linear from nonlinear damages.

Finally, the new method is tested experimentally for locating damages, using cheap piezoelectric patches (PVDF) for dynamic strain measurements. Again, two case studies are investigated. The first involves a small clamped-free steel plate equipped with 8 PVDF sensors and excited with a PZT patch. A small damage is introduced at different locations by fixing a stiffener. The modal filters are applied on three local filters in order to locate the damage, and univariate control charts locate all the damage positions correctly and automatically. The last experimental investigation is devoted to a 3.78 m long steel I-beam equipped with 20 PVDF sensors and excited with an electrodynamic shaker. Again, a small stiffener is added to mimic the effect of a small damage, and five local filters are defined to locate it. The damage is correctly located for several positions, and the interest of including measurements under different environmental conditions in the baseline, as well as of overlapping the local filters, is illustrated.

The very good results obtained with these first experimental applications of strain-based modal filters show the real interest of this very low-computational-cost method for output-only, non-model-based, automated damage localization of real structures.
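The core modal-filter idea described above (combining sensor responses linearly so that the network behaves like a single-degree-of-freedom system) can be sketched with a two-sensor toy. This is a hypothetical illustration, not the thesis implementation: the mode shapes, frequencies, and filter weights are invented, and real filters would be computed from identified mode shapes.

```python
import math

def modal_filter(signals, weights):
    """g(t) = sum_k w_k * y_k(t): one combined SDOF-like output."""
    return [sum(w * y[t] for w, y in zip(weights, signals))
            for t in range(len(signals[0]))]

# toy structure: mode 1 has shape (1, 1) at the two sensors, mode 2 has
# shape (1, -1); the filter (0.5, 0.5) passes mode 1 and cancels mode 2,
# so the filtered output behaves like a single-degree-of-freedom system
t = [i * 0.01 for i in range(100)]
mode1 = [math.sin(2 * math.pi * 1.0 * x) for x in t]
mode2 = [math.sin(2 * math.pi * 3.0 * x) for x in t]
y1 = [a + b for a, b in zip(mode1, mode2)]   # sensor 1 sees both modes
y2 = [a - b for a, b in zip(mode1, mode2)]   # sensor 2 sees both modes
g = modal_filter([y1, y2], [0.5, 0.5])       # mode 2 is filtered out
```

Damage perturbs the mode shapes, so a filter tuned to the healthy structure no longer cancels the other modes; the resulting spurious peaks are what the damage-sensitive features monitor.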
Doctorate in Engineering Sciences

Styles: APA, Harvard, Vancouver, ISO, etc.
10

Kothakapa, Vijayvardhan Reddy. « Investigation on the use of time-modulation technique for an ultra-wideband reader ». Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14982/.

Full text
Abstract:
Ultra-wideband (UWB) technology is a promising candidate for future-generation radio frequency identification (RFID) systems, overcoming limitations of today's narrowband RFID technology such as reduced coverage, insufficient ranging resolution for accurate localization, sensitivity to interference, and limited multiple-access capability. The idea in practice is to apply the time-modulation technique, i.e. to place switches at the antenna ports. This procedure is typically adopted for narrowband antenna arrays, that is, arrays working at a single frequency; here we investigate whether this excitation technique can also be applied to ultra-wideband antennas. In our case, instead of two monopoles, we use two ultra-wideband antennas working in the lower European UWB band [3.1–4.8] GHz. For single narrowband antennas, one only considers the behaviour at a single frequency; with UWB antennas, we must split our 2 GHz band from 3 to 5 GHz into windows of 500 MHz. This dissertation focuses on two important characteristics, localization and power transmission, both realized by the time-modulated antenna array, and evaluates their application in the communication system. The localization experiment is first carried out on a computer using the simulation tool Computer Simulation Technology (CST) in the range from 3 GHz to 5 GHz; the results are then merged with a MATLAB program to extract the far-field results, using the Nonlin software developed by researchers at DEI. With this procedure we are able to evaluate the simulated far-field results while taking into account all the possible phenomena, both linear and non-linear, taking place in the radiating system under test.
Styles: APA, Harvard, Vancouver, ISO, etc.
11

Riley, H. Bryan. « Matched-field source detection and localization in high noise environments a novel reduced-rank signal processing approach ». Ohio : Ohio University, 1994. http://www.ohiolink.edu/etd/view.cgi?ohiou1173982711.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
12

Maamoun, Khaled M. « Deploying Monitoring Trails for Fault Localization in All-optical Networks and Radio-over-Fiber Passive Optical Networks ». Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23195.

Full text
Abstract:
Fault localization is the process of identifying the true source of a failure from a set of collected failure notifications. Isolating failure recovery within the optical domain of the network is necessary to resolve alarm storm problems. The introduction of the monitoring trail (m-trail) has been proven to deliver better performance by employing monitoring resources in the form of optical trails, a monitoring framework that generalizes all previously reported counterparts. In this dissertation, m-trail design is explored, with a focus on analyzing the use of m-trails with established lightpaths to achieve fault localization. This process saves network resources by reducing the number of m-trails required for fault localization and therefore the number of wavelengths used in the network. A novel approach is introduced, based on the Geographic Midpoint Technique, an adapted version of the Chinese Postman Problem (CPP) solution, and an adapted version of the Travelling Salesman Problem (TSP) solution algorithms. The dissertation also proposes desirable network architectures and enabling technologies for delivering future millimetre-waveband (mm-WB) Radio-over-Fiber (RoF) systems for wireless services integrated in Dense Wavelength Division Multiplexing (DWDM). For conceptual illustration, a DWDM RoF system with a channel spacing of 12.5 GHz is considered. The mm-WB radio frequency (RF) signal is obtained at each Optical Network Unit (ONU) by optical heterodyne photodetection between two optical carriers; the generated RF-modulated signal has a frequency of 12.5 GHz. This RoF system is simple, cost-effective, resistant to laser phase noise, and, in principle, reduces maintenance needs. A review of related RoF network proposals and experiments is also included.
A number of models for Passive Optical Networks (PON) / RoF-PON that combine both innovative and existing ideas are proposed, along with solutions to the m-trail design problem for these models. The comparison between the models uses the expected survivability function, which shows that they are suitable for implementation in new and existing PON / RoF-PON systems. The dissertation closes with recommended directions for future research in this area.
Styles: APA, Harvard, Vancouver, ISO, etc.
13

FERRARI, SIMONE. « LOCALIZATION TECHNIQUES FOR RENORMING ». Doctoral thesis, Università degli Studi di Milano, 2013. http://hdl.handle.net/2434/222237.

Full text
Abstract:
Renorming theory involves finding isomorphisms that improve the norm of a normed space X, i.e. making the geometrical and topological properties of the unit ball of the given normed space as close as possible to those of the unit ball of a Hilbert space. In this work we study several types of geometrical properties. In 1989 Hansell introduced the notion of a descriptive topological space; we do not state the definition here, since it is rather technical. Hansell pointed out the role played by the existence of sigma-isolated networks in these spaces: they can replace sigma-discrete topological bases, which, by the Bing-Nagata-Smirnov theorem, are exclusive to metrizable spaces. Hansell proved that a Banach space is descriptive with respect to the weak topology if, and only if, the norm topology has a sigma-isolated network with respect to the weak topology. He also proved that if a Banach space has a Kadec norm, then it is descriptive with respect to the weak topology. The main problem in Kadec renorming theory is whether the converse of this theorem holds: in fact, no example is known of a Banach space descriptive with respect to the weak topology that does not admit an equivalent Kadec norm. In this work we prove the following theorem: X is a descriptive Banach space with respect to the weak topology if, and only if, there exists an equivalent weak-lower-semicontinuous and weak-Kadec quasinorm q(.), i.e. a quasinorm such that the weak and norm topologies coincide on the set {x in X : q(x) = 1} and a||x|| < q(x) < b||x|| holds for some positive constants a and b. In the second part of this dissertation we state some results on rotund renormings.
In this work we give a characterization of rotund renormings in terms of the Gδ-diagonal property: X admits an equivalent, weak-lower-semicontinuous and rotund norm if, and only if, X admits an equivalent, weak-lower-semicontinuous norm ||.|| such that the set {x in X : ||x|| = 1} has a Gδ-diagonal with slices. We also prove some transference results. In the third part of the dissertation we begin a study of uniformly rotund renorming theory.
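Read symbolically, the main characterization stated in the abstract is the following (a restatement of the abstract's theorem, nothing added):

```latex
X \ \text{is descriptive w.r.t. the weak topology}
\iff
\exists\, q(\cdot)\ \text{equivalent, weak-lower semicontinuous, weak-Kadec quasinorm:}
\quad
\tau_{w}\big|_{\{x \,:\, q(x)=1\}} = \tau_{\|\cdot\|}\big|_{\{x \,:\, q(x)=1\}}
\quad\text{and}\quad
a\,\|x\| < q(x) < b\,\|x\| \ \ (a, b > 0).
```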
Styles: APA, Harvard, Vancouver, ISO, etc.
14

Celi, Guillaume. « Etude, applications et améliorations de la technique LVI sur les défauts rencontrés dans les technologies CMOS avancées 45nm et inférieur ». Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00904697.

Full text
Abstract:
Failure analysis plays an important role in improving the performance and manufacturing of integrated circuits. Failures can occur at any point in a product's life cycle, whether at the design stage, during product qualification, during production, or in the field. It is therefore important to study these defects in order to improve product reliability. Moreover, with the increasing density and complexity of chips, defects are becoming ever harder to localize, despite improvements in analysis techniques. This thesis work fits into this context and aims to study and develop a new failure analysis technique based on the study of the reflected laser wave, "Laser Voltage Imaging" (LVI), for the failure analysis of advanced technology nodes (45 nm and below).
Styles: APA, Harvard, Vancouver, ISO, etc.
15

Nasif, Ahmed O. « Opportunistic spectrum access using localization techniques ». Fairfax, VA : George Mason University, 2009. http://hdl.handle.net/1920/4572.

Full text
Abstract:
Thesis (Ph.D.)--George Mason University, 2009.
Vita: p. 146. Thesis director: Brian L. Mark. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering. Title from PDF t.p. (viewed Oct. 11, 2009). Includes bibliographical references (p. 138-145). Also issued in print.
Styles: APA, Harvard, Vancouver, ISO, etc.
16

Park, Sang Min. « Effective fault localization techniques for concurrent software ». Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53010.

Full text
Abstract:
Multicore and Internet cloud systems have been widely adopted in recent years, resulting in the increased development of concurrent programs. However, concurrency bugs are still difficult to test and debug, for at least two reasons: concurrent programs have a large interleaving space, and concurrency bugs involve complex interactions among multiple threads. Existing testing solutions for concurrency bugs have focused on exposing the bugs in the large interleaving space, but they often do not provide debugging information that helps developers understand the bugs. To address this problem, this thesis proposes techniques that help developers debug concurrency bugs, particularly by locating the root causes and explaining them, and presents a set of empirical user studies that evaluates the techniques. First, the thesis introduces a dynamic fault-localization technique, called Falcon, that locates single-variable concurrency bugs as memory-access patterns. Falcon uses dynamic pattern detection and statistical fault localization to report a ranked list of memory-access patterns as candidate root causes of concurrency bugs. The overall Falcon approach is effective: in an empirical evaluation, we show that Falcon almost always ranks the program fragments corresponding to the root cause of a concurrency bug as "most suspicious". In principle, such a ranking saves a developer's time by letting him or her quickly home in on the problematic code rather than sorting through many reports. Others have shown that single- and multi-variable bugs cover a high fraction of all concurrency bugs documented in a variety of major open-source packages; thus, being able to detect both is important. Because Falcon is limited to detecting single-variable bugs, we extend it to handle both single-variable and multi-variable bugs with a unified technique, called Unicorn.
Unicorn uses online memory monitoring and offline memory-pattern combination to handle multi-variable concurrency bugs, and the overall approach is effective in ranking memory-access patterns for single- and multi-variable concurrency bugs. To further assist developers in understanding concurrency bugs, the thesis presents a fault-explanation technique, called Griffin, that provides more context about the root cause than Unicorn does. Griffin reconstructs the root cause of a concurrency bug by grouping suspicious memory accesses, finding suspicious method locations, and presenting calling stacks along with the buggy interleavings. By providing this additional context, Griffin gives the developer more information at a higher level, allowing him or her to more readily diagnose complex bugs that may cross file or module boundaries. Finally, the thesis presents a set of empirical user studies investigating the effectiveness of the presented techniques; in particular, the studies compare a state-of-the-art debugging technique against our debugging techniques, Unicorn and Griffin. Among the findings, the user studies show that while the techniques are indistinguishable when the fault is relatively simple, Griffin is most effective for more complex faults. This observation further suggests that a spectrum of tools or interfaces may be needed, depending on the complexity of the underlying fault or even the background of the user.
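Statistical fault localization of the kind Falcon and Unicorn perform scores each memory-access pattern by how strongly its occurrence correlates with failing runs and then reports a ranked list. As a hedged illustration, the sketch below uses the standard Ochiai suspiciousness metric from the fault-localization literature; the thesis's exact formula, and the pattern names shown, may differ.

```python
import math

def ochiai(failed_with, passed_with, total_failed):
    """Suspiciousness of one pattern from run statistics.

    failed_with / passed_with: failing / passing runs in which the
    memory-access pattern occurred; total_failed: all failing runs.
    """
    denom = math.sqrt(total_failed * (failed_with + passed_with))
    return failed_with / denom if denom else 0.0

def rank_patterns(stats, total_failed):
    """stats: {pattern: (failed_with, passed_with)} -> list sorted by score."""
    scored = [(p, ochiai(f, s, total_failed)) for p, (f, s) in stats.items()]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)

# toy data: an unsynchronized write-read pattern appears mostly in failing
# runs, a benign read-read pattern mostly in passing runs
ranked = rank_patterns({"W->R": (9, 1), "R->R": (2, 8)}, total_failed=10)
```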
Styles APA, Harvard, Vancouver, ISO, etc.
17

Svensk, Linnea. « Localization Techniques, Yang-Mills Theory and Strings ». Thesis, Uppsala universitet, Teoretisk fysik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-280740.

Texte intégral
Résumé :
Equivariant localization techniques exploit symmetries of systems, represented by group actions on manifolds, and use them to evaluate certain partition functions exactly. In this master thesis we begin with the study of localization in finite dimensions. We then generalize this concept to infinite dimensions and study the partition function of two-dimensional quantum Yang-Mills theory and its relation to string theory. The partition function can be written as a sum over the critical point set and be related to the topology of the moduli space of flat connections. Furthermore, for large N the partition function of the gauge groups SU(N) and U(N) can be interpreted as a string perturbation series. The coefficients of the expansion are given by a sum over maps from a two-dimensional surface onto the two-dimensional target space, and thus the partition function is interpreted as a closed string theory. Also, a string theory action is discussed using topological field theory tools and localization techniques.
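The "sum over the critical point set" and the large-N string expansion both start from the standard heat-kernel (Migdal) form of the two-dimensional Yang-Mills partition function, reproduced here for context; the normalization of the coupling varies between references and may differ from the thesis's convention:

```latex
% Heat-kernel (Migdal) partition function of two-dimensional Yang-Mills
% theory on a closed orientable genus-g surface of area A. The sum runs
% over the irreducible representations R of the gauge group, with C_2(R)
% the quadratic Casimir and g_{YM} the coupling.
Z(g, A) \;=\; \sum_{R} \left(\dim R\right)^{2-2g}
              \, e^{-\tfrac{1}{2} g_{YM}^{2} A \, C_2(R)}
```

For U(N) or SU(N), expanding this sum in 1/N reorganizes it into a sum over branched covers of the target surface, which is the closed-string interpretation discussed in the abstract.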
Styles APA, Harvard, Vancouver, ISO, etc.
18

Du, Jinze. « Indoor localization techniques for wireless sensor networks ». Thesis, Nantes, 2018. http://www.theses.fr/2018NANT4001/document.

Texte intégral
Résumé :
In this thesis, the author focused on RSSI-based localization algorithms for indoor applications in wireless sensor networks. Firstly, from the observation of RSSI behavior in an experimental localization system, an experimental RSSI channel model is deduced, which is consistent with the popular lognormal shadowing path-loss model. Secondly, this thesis proposes three indoor localization algorithms based on multilateration and averaged RSSI. In these algorithms, the measured distances are weighted according to their assumed accuracy. Lastly, an RSSI-based parameter tracking strategy for constrained-position localization is proposed. To estimate the channel model parameters, the least mean squares (LMS) method is associated with the trilateration method. Quantitative criteria are provided to guarantee the efficiency of the proposed tracking strategy by providing a tradeoff between the constraint resolution and parameter variation. The simulation results show a good behavior of the proposed tracking strategy in the presence of space-time variation of the propagation channel. Compared with the existing RSSI-based algorithms, the proposed tracking strategy exhibits better localization accuracy but consumes more calculation time. In addition, an experimental tracking test is performed to validate the effectiveness of the proposed tracking strategy.
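The two building blocks the abstract describes, inverting the lognormal model to turn averaged RSSI into distance and accuracy-weighted multilateration, can be sketched as follows. The path-loss parameters and the 1/d² weighting heuristic are assumptions for illustration, not the thesis's calibrated values.

```python
# Sketch: (1) lognormal path-loss inversion for ranging,
# (2) linearized weighted-least-squares multilateration.
import numpy as np

P0, N_EXP, D0 = -40.0, 2.8, 1.0   # assumed RSSI at d0 and path-loss exponent

def rssi_to_distance(rssi):
    """Invert RSSI(d) = P0 - 10*n*log10(d/d0) for the distance d."""
    return D0 * 10 ** ((P0 - rssi) / (10 * N_EXP))

def multilaterate(anchors, dists, weights):
    """Subtract the first anchor's range equation to remove the quadratic
    terms, then solve the resulting linear system with weights."""
    ref_a, ref_d = anchors[0], dists[0]
    A = 2 * (anchors[1:] - ref_a)
    b = ref_d**2 - dists[1:]**2 + np.sum(anchors[1:]**2 - ref_a**2, axis=1)
    W = np.sqrt(np.diag(weights[1:]))
    x, *_ = np.linalg.lstsq(W @ A, W @ b, rcond=None)
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)
weights = 1.0 / dists**2           # assumed heuristic: trust short ranges more
print(multilaterate(anchors, dists, weights))   # ≈ [3, 4]
```

With noisy, RSSI-derived distances, the weighting matters: longer (less reliable) ranges contribute less to the fit.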
Styles APA, Harvard, Vancouver, ISO, etc.
19

Cruz, Edmanuel. « Robotics semantic localization using deep learning techniques ». Doctoral thesis, Universidad de Alicante, 2020. http://hdl.handle.net/10045/109462.

Texte intégral
Résumé :
The tremendous technological advances experienced in recent years have allowed the development and implementation of algorithms capable of performing different tasks that help humans in their daily lives. Scene recognition is one of the fields that has benefited most from these advances. Scene recognition gives different systems the ability to define a context for the identification or recognition of objects or places. In this same line of research, semantic localization allows a robot to identify a place semantically. Semantic classification is currently an exciting topic and the main goal of a large number of works. Within this context, it is a challenge for a system or a mobile robot to semantically identify an environment, either because the environment is visually different or because it has been gradually modified. Changing environments are challenging scenarios because, in real-world applications, the system must be able to adapt to them. This research focuses on recent techniques for categorizing places that take advantage of deep learning (DL) to produce a semantic definition for a zone. As a contribution to the solution of this problem, in this work a method capable of updating a previously trained model is designed. This method was used as a module of an agenda system to help people with cognitive problems in their daily tasks. An augmented-reality mobile phone application was also designed that uses DL techniques to determine a customer's location and provide useful information, thus improving their shopping experience. These solutions are described and explained in detail throughout the following document.
Styles APA, Harvard, Vancouver, ISO, etc.
20

Cho, Sangman. « SCALABLE TECHNIQUES FOR FAILURE RECOVERY AND LOCALIZATION ». Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/202953.

Texte intégral
Résumé :
Failure localization and recovery is one of the most important issues in network management to provide continuous connectivity to users. In this dissertation, we develop several algorithms for network failure localization and recovery. First, to achieve resilient multipath routing we introduce the concept of Independent Directed Acyclic Graphs (IDAGs). Link-independent (node-independent) DAGs satisfy the property that any path from a source to the root on one DAG is link-disjoint (node-disjoint) with any path from the source to the root on the other DAG. Given a network, we develop polynomial-time algorithms to compute link-independent and node-independent DAGs. The algorithm developed in this dissertation: (1) provides multipath routing; (2) utilizes all possible edges; (3) guarantees recovery from single-link failure; and (4) achieves all this with at most one bit per packet as overhead when routing is based on destination address and incoming edge. We show the effectiveness of the proposed IDAGs approach by comparing key performance indices to those of the independent trees and multiple pairs of independent trees techniques through extensive simulations. Secondly, we introduce the concept of monitoring tours (m-tours) to uniquely localize all possible failures of up to k links in arbitrary all-optical networks. We establish paths and cycles that can traverse the same link at most twice (backward and forward) and call them m-tours. An m-tour is different from other existing schemes such as m-cycle and m-trail, which traverse a link at most once. Closed (open) m-tours start and terminate at the same (distinct) monitor location(s). Each tour is constructed such that any shared risk link group (SRLG) failure results in the failure of a unique combination of closed and open m-tours. We prove that k-edge connectivity is a sufficient condition to localize all SRLG failures with up to k-link failures when only one monitoring station is employed.
We introduce an integer linear program (ILP) and a greedy scheme to find the placement of monitoring locations to uniquely localize any SRLG failures with up to k links. We provide a heuristic scheme to compute m-tours for a given network. We demonstrate the validity of the proposed monitoring method through simulations, which show that our approach using m-tours significantly reduces the number of required monitoring locations and contributes to reducing monitoring cost and network management complexity. Finally, this dissertation studies the problem of uniquely localizing single network element failures involving a link/node using monitoring cycles, paths, and tours. A monitoring cycle starts and ends at the same monitoring node. A monitoring path starts and ends at distinct monitoring nodes. A monitoring tour starts and ends at a monitoring station, but may traverse a link twice, once in each direction. The failure of any link/node results in the failure of a unique combination of cycles/paths/tours. We develop the necessary theory for monitoring single-element (link/node) failures using only one monitoring station and cycles/tours, respectively. We show that the scheme employing monitoring tours can decrease the number of monitors required compared to the scheme employing monitoring cycles and paths. With the efficient monitoring approach that uses monitoring tours, the problem of localizing up to k element (link/node) failures using only a single monitor is also considered. Through simulations, we verify the effectiveness of our monitoring algorithms.
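The unique-combination requirement at the heart of the m-tour scheme can be sketched in a few lines: each candidate failure must knock out a distinct set of monitoring tours, so that observing which tours failed identifies the failure. The toy network, tour routes, and helper names below are invented for illustration; the dissertation constructs the tours and monitor placements via the ILP and heuristics described above.

```python
# Toy check that failures of up to k links produce unique tour-failure
# signatures. Tours and links are invented.
from itertools import combinations

tours = {                      # tour id -> set of links it traverses
    "t1": {"a", "b"},
    "t2": {"b", "c"},
    "t3": {"a", "c", "d"},
}

def signature(srlg):
    """Set of tours that fail when every link in `srlg` fails."""
    return frozenset(t for t, links in tours.items() if links & set(srlg))

def uniquely_localizable(links, k):
    """True if all failures of up to k links have distinct signatures."""
    seen = {}
    for r in range(1, k + 1):
        for srlg in combinations(links, r):
            sig = signature(srlg)
            if sig in seen and seen[sig] != srlg:
                return False
            seen[sig] = srlg
    return True

print(uniquely_localizable(["a", "b", "c", "d"], k=1))  # True: all distinct
print(uniquely_localizable(["a", "b", "c", "d"], k=2))  # False: collisions
```

In this toy, every single-link failure is distinguishable, but some two-link failures collide, illustrating why more (or better-routed) tours are needed as k grows.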
Styles APA, Harvard, Vancouver, ISO, etc.
21

CATTANEO, DANIELE. « Machine Learning Techniques for Urban Vehicle Localization ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2020. http://hdl.handle.net/10281/263540.

Texte intégral
Résumé :
In this thesis, we present different approaches dealing with the localization of a road vehicle in urban settings. In particular, we make use of machine learning techniques to process the images coming from the onboard cameras of a vehicle. The developed systems aim at computing a pose, and therefore, in the case of deep neural networks, they are referred to as pose regression networks. To the best of our knowledge, some of the developed approaches are the first deep neural networks in the literature capable of performing visual pose regression based on 3D maps. Such 3D maps are usually built by means of LIDAR devices by large specialized companies, which make up the world of commercial map makers. It is therefore reasonable to expect a commercial development of very-high-definition maps, which will make it possible to use them for the localization of vehicles. From our contacts with industrial makers of autonomous driving systems for road vehicles, we know that LIDARs onboard vehicles are, as of today, not well accepted, mainly because state-of-the-art LIDARs are based on mechanical scanning systems and therefore are not capable of sustaining the accelerations and vibrations of a road vehicle. For this reason, and since today's vehicles already include many cameras, being able to visually localize a vehicle on high-definition maps is a very significant prospect, not only from a research point of view but also for real applications. Localization is an essential task for any mobile robot, especially for self-driving cars, where a wrong position estimate might lead to accidents and even fatal injuries for other road users. We cannot rely only on Global Navigation Satellite Systems, such as the Global Positioning System, because the accuracy and reliability of these systems are often inadequate for autonomous driving applications.
This is even truer in urban environments, where buildings may block or deflect the satellites' signals, leading to wrong localization. In this thesis, we propose different approaches to overcome the GNSS limitations, exploiting state-of-the-art Deep Neural Networks (DNNs) and machine learning techniques. First, we propose a probabilistic approach for estimating in which lane the vehicle is driving. Secondly, we integrate state-of-the-art Convolutional Neural Networks for pixel-level semantic segmentation and geometric reconstruction within a localization pipeline. We localize the vehicle by matching high-level features (road geometry and buildings) from an onboard stereo camera rig with their counterparts in the OpenStreetMap service. We handle the uncertainties in a probabilistic fashion using particle filtering. Afterward, we propose a novel end-to-end DNN for vehicle localization in LiDAR maps. Finally, we propose a novel DNN-based technique for localizing a vehicle in LiDAR maps without any prior information about its position. All the approaches proposed in this thesis have been validated using well-known autonomous driving datasets, such as KITTI and RobotCar.
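The particle-filtering step used above to handle uncertainty can be illustrated with a minimal one-dimensional bootstrap filter. The noise values and motion model below are invented, and the thesis's actual filter operates on vehicle poses matched against map features, not on a scalar position.

```python
# Minimal 1-D bootstrap particle filter: predict with a motion model,
# weight by measurement likelihood, resample. Toy parameters throughout.
import numpy as np

rng = np.random.default_rng(0)
N = 2000
particles = rng.uniform(0, 100, N)      # broad prior over position (metres)

def step(particles, control, measurement, motion_std=0.5, meas_std=2.0):
    # predict: propagate each particle through the motion model
    particles = particles + control + rng.normal(0, motion_std, N)
    # update: weight by a Gaussian measurement likelihood
    w = np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    w /= w.sum()
    # resample (multinomial; systematic resampling would be lower-variance)
    return particles[rng.choice(N, N, p=w)]

true_pos = 10.0
for _ in range(20):
    true_pos += 1.0                                  # vehicle moves 1 m/step
    z = true_pos + rng.normal(0, 2.0)                # noisy position fix
    particles = step(particles, control=1.0, measurement=z)
print(particles.mean())                              # close to true_pos
```

The posterior mean tracks the vehicle even though each individual measurement is noisy, which is exactly what makes the approach robust to intermittent GNSS errors.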
Styles APA, Harvard, Vancouver, ISO, etc.
22

Hu, Yongtao, et 胡永涛. « Multimodal speaker localization and identification for video processing ». Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/212633.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
23

Reid, Greg L. « Active binaural sound localization techniques, experiments and comparisons ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ39225.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
24

Hays, Mark A. « A Fault-Based Model of Fault Localization Techniques ». UKnowledge, 2014. http://uknowledge.uky.edu/cs_etds/21.

Texte intégral
Résumé :
Every day, ordinary people depend on software working properly. We take it for granted; from banking software, to railroad switching software, to flight control software, to software that controls medical devices such as pacemakers or even gas pumps, our lives are touched by software that we expect to work. It is well known that the main technique/activity used to ensure the quality of software is testing. Often it is the only quality assurance activity undertaken, making it that much more important. In a typical experiment studying these techniques, a researcher will intentionally seed a fault (intentionally breaking the functionality of some source code) in the hope that the automated techniques under study will be able to identify the fault's location in the source code. These faults are picked arbitrarily; there is potential for bias in the selection of the faults. Previous researchers have established an ontology for understanding or expressing this bias, called fault size. This research captures the fault-size ontology in the form of a probabilistic model. The results of applying this model to measure fault size suggest that many faults generated through program mutation (the systematic replacement of source code operators to create faults) are very large and easily found. Secondary measures generated in the assessment of the model suggest a new static analysis method, called testability, for predicting the likelihood that code will contain a fault in the future. While software testing researchers are not statisticians, they nonetheless make extensive use of statistics in their experiments to assess fault localization techniques. Researchers often select their statistical techniques without justification. This is a worrisome situation because it can lead to incorrect conclusions about the significance of research. This research introduces an algorithm, MeansTest, which helps automate some aspects of the selection of appropriate statistical techniques.
The results of an evaluation of MeansTest suggest that MeansTest performs well relative to its peers. This research then surveys recent work in software testing using MeansTest to evaluate the significance of researchers' work. The results of the survey indicate that software testing researchers are underreporting the significance of their work.
Styles APA, Harvard, Vancouver, ISO, etc.
25

Swar, Pranay P. « On the Performance of In-Body RF Localization Techniques ». Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-theses/881.

Texte intégral
Résumé :
Localization inside the human body using radio frequency (RF) transmission is gaining importance in a number of applications, such as wireless capsule endoscopy. The accuracy of RF localization depends on the technology adopted for this purpose. The two most common RF localization technologies use Received Signal Strength (RSS) and Time-Of-Arrival (TOA). This research first provides bounds on the accuracy of localization of an endoscopy capsule inside the human body as it moves through the gastrointestinal tract, with and without randomness in transmit power, using RSS-based localization with a triangulation algorithm. It is observed that, despite the presence of a large number of anchor nodes, the localization error is still in the range of a few centimetres, which is quite high; hence we resort to TOA-based localization. Due to the lack of a widely accepted model for TOA-based localization inside the human body, we use a computational technique for simulation inside and around the human body, named Finite Difference Time Domain (FDTD). We first show that our proprietary FDTD simulation software produces acceptable results when compared with real empirical measurements using a vector network analyzer. We then show that the FDTD method, which has been used extensively in all kinds of electromagnetic modeling due to its versatility and simplicity, suffers seriously from its demanding requirements on memory storage and computation time, which are due to its inherently recursive nature and the need for absorbing boundary conditions. In this research we suggest a novel, computationally efficient technique for simulation using FDTD by considering FDTD as a Linear Time Invariant (LTI) system. We then use the software to simulate the TOA of narrowband and wideband signals propagated inside the human body for RF localization, and to compare the accuracies of the two using this method.
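The recursive time-stepping that makes FDTD memory- and compute-hungry is visible even in the simplest one-dimensional Yee update below; it is this recursion the thesis sidesteps by treating FDTD as an LTI system, whose impulse response can be recorded once and reused via convolution. Grid sizes and the source are invented, in normalized units with the Courant number set to 1 (not the thesis's body model).

```python
# One-dimensional Yee FDTD update in normalized units (Courant number 1),
# free space with simple PEC walls at the ends, soft Gaussian source.
import numpy as np

nz, nt = 200, 400
ez = np.zeros(nz)        # E-field samples
hy = np.zeros(nz - 1)    # H-field samples, staggered half a cell

for n in range(nt):
    hy += ez[1:] - ez[:-1]                        # H update
    ez[1:-1] += hy[1:] - hy[:-1]                  # E update (ends held at 0)
    ez[50] += np.exp(-((n - 30) / 10.0) ** 2)     # inject source at cell 50
```

Every field sample at step n depends on step n-1, so long simulations cannot be shortened without a reformulation such as the LTI view proposed in the thesis.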
Styles APA, Harvard, Vancouver, ISO, etc.
26

Cremer, Markus. « Digital beamforming techniques for passive UHF RFID tag localization ». Thesis, London South Bank University, 2016. http://researchopen.lsbu.ac.uk/1819/.

Texte intégral
Résumé :
Radio-frequency identification (RFID) technology is on the way to substituting traditional bar codes in many fields of application. Especially the availability of passive ultra-high-frequency (UHF) RFID transponders (or tags) in the frequency band between 860 MHz and 960 MHz has fostered global application in supply chain management. However, the full potential of these systems will only be exploited if the identification of objects is complemented by accurate and robust localization. Passive UHF RFID tags are cost-effective, very small, extremely lightweight, maintenance-free, rugged, and can be produced as adhesive labels that can be attached to almost any object. Worldwide standards and frequency regulations have been established and a wide infrastructure of identification systems is operated today. However, the passive nature of the technology requires a simple communication protocol, which results in two major limitations with respect to its use for localization purposes: the small signal bandwidth and the small allocated frequency bandwidth. In the presence of multipath reflections, these limitations reduce the achievable localization accuracy and reliability. Thus, new methods have to be found to realize passive UHF RFID localization systems which provide sufficient performance in typical multipath situations. In this thesis, an enhanced transmission channel model for passive UHF RFID localization systems has been proposed which allows an accurate estimation of the channel's multipath behaviour. It has been used to design a novel simulation environment and to identify three solutions to minimize multipath interference: a) varying the channel interface parameters, b) applying diversity techniques, c) installing UHF absorbers. Based on the enhanced channel model, a new method for tag readability prediction with high reliability has been introduced. Furthermore, a novel way to rate the magnitude of multipath interference has been proposed.
A digital receiver beamforming localization method has been presented which uses the Root MUSIC algorithm for angulation of a target tag and multipath reducing techniques for an optimum localization performance. A new multiangulation algorithm has been proposed to enable the application of diversity techniques. A novel transmitter beamforming localization approach has been presented which exploits the precisely defined response threshold of passive tags in order to achieve high robustness against multipath. The basic technique has been improved significantly with respect to angular accuracy and processing times. Novel experimental testbeds for receiver and transmitter beamforming have been designed, built and used for verification of the localization performance in real-world measurements. All the improvements achieved contribute to an enhancement of the accuracy and especially the robustness of passive UHF RFID localization systems in multipath environments which is the main focus of this research.
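The angulation step can be illustrated with MUSIC on a uniform linear array. The thesis uses Root-MUSIC; the spectral-search variant below is shown instead for brevity, and the array geometry, snapshot count, and noise level are all invented.

```python
# Spectral MUSIC direction finding for one narrowband source on a uniform
# linear array (8 elements, half-wavelength spacing). Toy parameters.
import numpy as np

M, d, true_deg = 8, 0.5, 20.0       # elements, spacing/lambda, true DOA
rng = np.random.default_rng(1)

def steering(theta_deg):
    k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(M))

# simulate array snapshots: one source plus white noise
snaps = 200
s = rng.normal(size=snaps) + 1j * rng.normal(size=snaps)
X = np.outer(steering(true_deg), s) + 0.1 * (
    rng.normal(size=(M, snaps)) + 1j * rng.normal(size=(M, snaps)))

R = X @ X.conj().T / snaps                     # sample covariance
w, V = np.linalg.eigh(R)                       # eigenvalues ascending
En = V[:, :-1]                                 # noise subspace (1 source)

grid = np.arange(-90.0, 90.0, 0.1)
spectrum = [1 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
print(grid[int(np.argmax(spectrum))])          # peak near the true DOA
```

Root-MUSIC replaces the grid search with polynomial rooting of the same noise-subspace condition, which is what makes it attractive for the fast angulation the thesis needs.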
Styles APA, Harvard, Vancouver, ISO, etc.
27

Laaraiedh, Mohamed. « Contributions on hybrid localization techniques for heterogeneous wireless networks ». Rennes 1, 2010. https://tel.archives-ouvertes.fr/tel-00624436.

Texte intégral
Résumé :
Recent advancements in wireless networks and systems have seen the rise of localization techniques as a worthwhile and cost-effective basis for novel services. These location-based services have become more and more beneficial and profitable for telecommunications operators and companies. Various LBSs can be offered to the user, such as tracking, advertisement, security, and management. Wireless networks themselves may benefit from localization information to enhance the performance of the different network layers. Location-based routing, synchronization, and interference cancellation are some examples of fields where location information can be fruitful. A localization system must be able to perform two main tasks: measurement of location-dependent parameters (RSSI, TOA, and TDOA) and estimation of position using location estimation techniques. The main goal of this dissertation is the study of different location estimation techniques: algebraic and geometric. The studied algebraic techniques are least squares, maximum likelihood, and semidefinite programming. The proposed geometric technique, RGPA, is based on interval analysis and the geometric representation of location-dependent parameters. The focus is put on the effect of fusing different location-dependent parameters on the positioning accuracy. The estimation and measurement of location-dependent parameters are also investigated using a provided UWB measurement campaign in order to gain a complete understanding of the localization field.
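The geometric idea behind RGPA, intersecting interval constraints derived from location-dependent parameters, can be caricatured with a grid scan standing in for proper interval contraction. Each range measurement (with its error bound) keeps a ring of candidate positions, and the position estimate is the box of points consistent with all of them. Anchors, ranges, and the error bound below are invented.

```python
# Crude interval-intersection sketch: keep grid points inside every
# range ring, then report the bounding box of the feasible set.
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 9.0]])
ranges = np.array([5.0, 8.1, 5.4])     # TOA-derived distances (invented)
err = 0.5                              # assumed ranging uncertainty

xs, ys = np.meshgrid(np.linspace(0, 10, 201), np.linspace(0, 10, 201))
ok = np.ones_like(xs, dtype=bool)
for (ax, ay), r in zip(anchors, ranges):
    d = np.hypot(xs - ax, ys - ay)
    ok &= (r - err <= d) & (d <= r + err)   # intersect with this ring

print([xs[ok].min(), xs[ok].max()], [ys[ok].min(), ys[ok].max()])
```

Unlike a point estimator, this returns a guaranteed enclosure of the position given the error bounds, which is the appeal of the interval-analysis formulation.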
Styles APA, Harvard, Vancouver, ISO, etc.
28

Ollikainen, Vesa. « Simulation techniques for disease gene localization in isolated populations ». Helsinki : University of Helsinki, 2002. http://ethesis.helsinki.fi/julkaisut/mat/tieto/vk/ollikainen/.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
29

Batuman, Emrah. « Comparison And Evaluation Of Three Dimensional Passive Source Localization Techniques ». Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12612040/index.pdf.

Texte intégral
Résumé :
Passive source localization is the estimation of the positions of sources or emitters given the sensor data. In this thesis, some of the well-known methods for passive source localization are investigated and compared in a stationary emitter-sensor framework. These algorithms are discussed in detail in two and three dimensions for both single- and multiple-target cases. Passive source localization methods can be divided into two groups: two-step algorithms and single-step algorithms. Angle-of-Arrival (AOA) based Maximum Likelihood (ML) and Least Squares (LS) source localization algorithms, Time-Difference-of-Arrival (TDOA) based ML and LS methods, and AOA-TDOA based hybrid ML methods are presented as conventional two-step techniques. The Direct Position Determination (DPD) method is a well-known technique among the single-step approaches. In this thesis, a number of variants of the DPD technique with better computational complexity (the proposed methods do not need eigendecomposition in the grid search) are presented. These are the Direct Localization (DL) with Multiple Signal Classification (MUSIC), DL with Deterministic ML (DML), and DL with Stochastic ML (SML) methods. The evaluation of these algorithms is done by considering the Cramér-Rao Lower Bound (CRLB). Some of the CRLB expressions given in two dimensions in the literature are extended to three dimensions. Extensive simulations are performed and the effects of different parameters on the performance of the methods are investigated. It is shown that the performance of the single-step algorithms is good even at low SNR. The DL with MUSIC algorithm performs as well as DPD while offering significant savings in computational complexity. AOA, TDOA, and hybrid algorithms are compared in different scenarios. It is shown that the improvement achieved by single-step techniques may be acceptable when the system cost and complexity are ignored. The localization algorithms are compared for the multiple-target case as well.
The effect of sensor deployments on the location performance is investigated.
Styles APA, Harvard, Vancouver, ISO, etc.
30

Jönsson, Mattias. « Detecting the Many-Body Localization Transition with Machine Learning Techniques ». Thesis, KTH, Fysik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231327.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
31

Yu, Lei. « Fingerprinting based techniques for indoor localization exploiting UWB signal properties ». Rennes 1, 2011. http://www.theses.fr/2011REN1S096.

Texte intégral
Résumé :
Wireless localization systems are now regarded as a promising technology for future services. Various techniques have been proposed for both indoor and outdoor localization systems, and these techniques and systems have enabled a range of location-based services (LBSs). A localization system must perform two main processes: measuring location-dependent parameters (LDPs, such as RSSI and TOA) and estimating position from those measurements with a suitable localization technique. In this manuscript, the estimation and measurement of LDPs such as RSSI and TOA are investigated using a UWB measurement campaign. Four different TOA estimation techniques are proposed, and RSSI-based ranging techniques are also introduced. The main goal of this thesis is the study of fingerprinting-based techniques for indoor localization. A neural network is used to learn the fingerprinting database and to locate the target points; the construction of the neural networks and the adopted approaches are described. Both pre-measured and pre-simulated fingerprinting databases are established for use in the fingerprinting techniques, and different fingerprints and database sizes are used to evaluate positioning performance. The MultiWall model is proposed to predict the RSSI fingerprint from the real propagation environment. An adaptation of the classic MultiWall model that accounts for diffraction by metallic furniture is shown to improve positioning quality.
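The MultiWall model mentioned in this abstract predicts RSSI as a one-slope path-loss term plus a per-wall attenuation budget. A minimal sketch of that idea follows; the function name and the numeric values are illustrative assumptions, not taken from the thesis.

```python
import math

def multiwall_rssi(p0_dbm, path_loss_exp, d_m, wall_losses_db):
    """Predict RSSI (dBm) at distance d_m from the transmitter.

    p0_dbm: received power at 1 m; path_loss_exp: one-slope exponent;
    wall_losses_db: per-wall attenuation (dB) for each wall crossed.
    """
    return (p0_dbm
            - 10.0 * path_loss_exp * math.log10(d_m)
            - sum(wall_losses_db))

# Example: 10 m away, through one concrete (5 dB) and one plaster (3 dB) wall.
rssi = multiwall_rssi(-40.0, 2.0, 10.0, [3.0, 5.0])  # -> -68.0 dBm
```

A fingerprinting database can then be pre-simulated by evaluating such a model over a grid of candidate positions instead of measuring each point.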
Styles APA, Harvard, Vancouver, ISO, etc.
32

Huang, Yiteng (Arden). « Real-time acoustic source localization with passive microphone arrays ». Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/15024.

Full text
Styles APA, Harvard, Vancouver, ISO, etc.
33

Soldevila, Coma Adrià. « Robust leak localization in water distribution networks using machine learning techniques ». Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/668645.

Full text
Abstract:
This PhD thesis presents a methodology to detect, estimate and localize water leaks (with the main focus on the localization problem) in water distribution networks using hydraulic models and machine learning techniques. The current state of the art is reviewed, the theoretical basis of the machine learning techniques applied is explained, and the hydraulic model is detailed. The whole methodology is presented, tested on different water distribution networks and district metered areas in simulated and real case studies, and compared with published methods. The contributions aim at methods that are more robust to the uncertainties affecting leak diagnosis: for detection, self-similarity is used to create features that are monitored with the intersection-of-confidence-intervals change-detection technique; for localization, the problem is tackled with machine learning techniques. These techniques are expected to learn the leak behaviour, together with its uncertainty, during training, so that it can be exploited in the diagnosis stage. A leak detection method is presented that can estimate both the leak size and the time at which the leak appeared. It captures the normal, leak-free behaviour and contrasts it with new measurements to evaluate the state of the network; if the behaviour is abnormal, the method checks whether a leak is the cause. To make detection more robust, a dedicated validation layer is designed to operate specifically on leaks, in the temporal region where the leak is most apparent. A methodology that extends the current model-based leak localization approach by means of classifiers is proposed, using the non-parametric k-nearest neighbours classifier and the parametric multi-class Bayesian classifier.
A new data-driven approach that localizes leaks using multivariate regression, without a hydraulic model, is also introduced. It has a clear benefit over the model-based technique in that no hydraulic model is needed, although topological information is still required. Nor is information about the expected leaks needed, since knowledge of the expected hydraulic behaviour under a leak is exploited to find the most likely leak location. The method performs well in practice, but is very sensitive to the number of sensors in the network and to their placement. The proposed sensor placement techniques reduce the computational load required to account for the amount of data needed to model the uncertainty, compared with other optimization approaches, while being designed for the leak localization problem. More precisely, the proposed hybrid feature-selection technique for sensor placement works with any method that can be evaluated through a confusion matrix, while remaining specialized for the leak localization task. It performs well for a few sensors, but loses precision when the number of sensors to place is large. To overcome this, an incremental sensor placement technique is proposed, which behaves better for larger numbers of sensors but worse when the number is small.
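One of the classifiers this abstract proposes for model-based leak localization is k-nearest neighbours over leak signatures. A minimal sketch, assuming training samples are vectors of pressure residuals simulated for each candidate leak node (all names and data are hypothetical):

```python
import math
from collections import Counter

def knn_localize(train_residuals, leak_nodes, residual, k=3):
    """Return the candidate leak node voted by the k nearest training residuals.

    train_residuals: list of residual vectors (one per training sample);
    leak_nodes: the leak-node label of each training sample;
    residual: the currently observed residual vector.
    """
    order = sorted(range(len(train_residuals)),
                   key=lambda i: math.dist(train_residuals[i], residual))
    votes = Counter(leak_nodes[i] for i in order[:k])
    return votes.most_common(1)[0][0]
```

Robustness to uncertainty comes from training on many noisy samples per leak node, so the classifier learns the dispersion of each leak's signature.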
Styles APA, Harvard, Vancouver, ISO, etc.
34

Mirabdollah, Mohammad Hossein [Verfasser]. « Robust techniques for monocular simultaneous localization and mapping / Mohammad Hossein Mirabdollah ». Paderborn : Universitätsbibliothek, 2016. http://d-nb.info/1098210581/34.

Full text
Styles APA, Harvard, Vancouver, ISO, etc.
35

Papazyan, Ruslan. « Techniques for localization of insulation degradation along medium-voltage power cables / ». Stockholm, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-207.

Full text
Styles APA, Harvard, Vancouver, ISO, etc.
36

Cooper, Aron J. (Aron Jace). « A comparison of data association techniques for Simultaneous Localization and Mapping ». Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/32438.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2005.
Includes bibliographical references (p. 119-124).
The problem of Simultaneous Localization and Mapping (SLAM) has received a great deal of attention within the robotics literature, and the importance of solutions to this problem has been well documented for the successful operation of autonomous agents in a number of environments. Of the numerous solutions that have been developed for the SLAM problem, many of the most successful approaches continue to either rely on, or stem from, the Extended Kalman Filter (EKF) method. However, the newer FastSLAM algorithm has attracted attention for many properties not found in EKF-based methods. One such property is its ability to deal with unknown data association and its robustness to data association errors. The problem of data association has also received a great deal of attention in the robotics literature in recent years, and various solutions have been proposed. In an effort both to compare the performance of the EKF and FastSLAM under ambiguous data association situations and to compare the performance of three different data association methods, a comprehensive study of various SLAM filter-data association combinations is performed. This study pairs the EKF and FastSLAM filtering approaches with the Joint Compatibility, Sequential Compatibility Nearest Neighbor, and Joint Maximum Likelihood data association methods. The comparison is based on both contrived simulations and application to the publicly available Car Park data set. The simulated results demonstrate a heavy dependence on geometry, particularly landmark separation, for both filter performance and the data association algorithms used.
The real-world data set results demonstrate that some data association algorithms, when paired with an EKF, can give identical results. At the same time, a distinction in mapping performance between those pairings and the EKF paired with Joint Compatibility data association is shown. These EKF-based pairings are contrasted with the performance obtained for the FastSLAM-Sequential Nearest Neighbor pairing. Finally, the difficulties in applying the Joint Compatibility and Joint Maximum Likelihood data association methods using FastSLAM 1.0 for this data set are discussed.
by Aron J. Cooper.
S.M.
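The simplest of the data association strategies compared in this study, nearest neighbour with individual compatibility gating, can be sketched as follows. This is the textbook baseline, not the thesis code; for brevity it assumes a diagonal innovation covariance, and all names are hypothetical.

```python
import math

CHI2_GATE_2DOF = 5.99  # ~95% chi-square gate for a 2-D measurement

def nn_associate(z, predictions, sigmas):
    """Associate measurement z with the closest compatible landmark.

    z: observed (x, y); predictions: landmark id -> predicted (x, y);
    sigmas: per-axis innovation std devs (diagonal covariance assumed).
    Returns the landmark id, or None if nothing passes the gate
    (a None is typically treated as a potential new landmark).
    """
    best_id, best_d2 = None, CHI2_GATE_2DOF
    for lid, zhat in predictions.items():
        # squared Mahalanobis distance under the diagonal covariance
        d2 = sum(((zi - hi) / s) ** 2 for zi, hi, s in zip(z, zhat, sigmas))
        if d2 < best_d2:
            best_id, best_d2 = lid, d2
    return best_id
```

Joint Compatibility differs in that it gates whole sets of pairings jointly, which is what makes it robust when landmarks are closely spaced.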
Styles APA, Harvard, Vancouver, ISO, etc.
37

Gaddam, Sathvik Reddy. « Structural health monitoring system : filtering techniques, damage localization, and system design ». Thesis, California State University, Long Beach, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10144825.

Full text
Abstract:

Material testing is a major concern in many manufacturing and aeronautical industries, where structures require periodic inspection using equipment and manpower. Environmental Noise (EN) is the major concern when localizing the damage in real time. Inspecting underlying components involves destructive approaches. These factors can be alleviated using Non Destructive Testing (NDT) and a cost effective embedded sensor system.

This project involves an NDT implementation of Structural Health Monitoring (SHM) with filtering techniques in real time. A spectrogram and a scalogram are used to analyze the Lamb-wave response from an embedded array of piezoelectric transducers (PZTs). The project gives insights into implementing a real-time SHM system with a sensor placement strategy and addresses two main problems: filtering and damage localization. An Adaptive Correlated Noise Filter (ACNF) removes EN from the Lamb-wave response of a structure. A damage map is developed using the Short-Time Fourier Transform (STFT) and Continuous Wavelet Analysis (CWA).
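The STFT behind the damage map in this abstract slides a window along the signal and takes a spectrum per window. A naive O(N²) sketch for clarity (a real system would use an FFT routine; the function name and window sizes are illustrative):

```python
import cmath
import math

def stft_mag(x, win, hop):
    """Magnitude STFT of a real signal x.

    Returns one list of |DFT| values (bins 0..win//2) per window position;
    stacking them over time gives the time-frequency map used for damage maps.
    """
    frames = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win]
        mags = []
        for k in range(win // 2 + 1):
            # direct DFT of the windowed segment at bin k
            s = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                    for n in range(win))
            mags.append(abs(s))
        frames.append(mags)
    return frames
```

Damage localization then looks for time-frequency cells where the response of the inspected structure deviates from a healthy baseline.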

Styles APA, Harvard, Vancouver, ISO, etc.
38

Al-Olimat, Hussein S. « Optimizing Cloudlet Scheduling and Wireless Sensor Localization using Computational Intelligence Techniques ». University of Toledo / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1403922600.

Full text
Styles APA, Harvard, Vancouver, ISO, etc.
39

Stewart, Daniel Franklin. « Minimal time-frequency localization techniques and their application to image compression / ». The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487854314872656.

Full text
Styles APA, Harvard, Vancouver, ISO, etc.
40

Artemenko, Oleksandr [Verfasser], Andreas [Akademischer Betreuer] Mitschele-Thiel, Gunar [Gutachter] Schorcht et Mario [Gutachter] Gerla. « Localization in Wireless Networks : Improvement of Localization Techniques / Oleksandr Artemenko ; Gutachter : Gunar Schorcht, Mario Gerla ; Betreuer : Andreas Mitschele-Thiel ». Ilmenau : TU Ilmenau, 2013. http://d-nb.info/1178184013/34.

Full text
Styles APA, Harvard, Vancouver, ISO, etc.
41

Whitney, Ann M. « INDOOR-WIRELESS LOCATION TECHNIQUES AND ALGORITHMS UTILIZING UHF RFID AND BLE TECHNOLOGIES ». UKnowledge, 2019. https://uknowledge.uky.edu/me_etds/138.

Full text
Abstract:
The work presented herein explores the ability of Ultra High Frequency (UHF) radio frequency devices, specifically passive Radio Frequency Identification (RFID) tags and Bluetooth Low Energy (BLE) devices, to be used as tools to locate items of interest inside a building. Localization systems based on these technologies are commercially available, but have failed to be widely adopted due to significant drawbacks in the accuracy and reliability of state-of-the-art systems. It is the goal of this work to address that issue by identifying and potentially improving upon localization algorithms. The work presented here breaks the process of localization into distance estimation and trilateration algorithms that use those estimates to determine a 2D location. Distance estimation is the largest error source in trilateration. Several methods are proposed to improve the speed and accuracy of measurements using additional information from frequency variations and phase angle information. Adding information from the characteristic signature of multipath signals allowed a significant reduction in distance estimation error for both BLE and RFID, which was quantified using neural network optimization techniques. The resulting error reduction algorithm generalized to completely new environments with very different multipath behavior and is a significant contribution of this work. Another significant contribution is the experimental comparison of trilateration algorithms, which tested new and existing methods of trilateration for accuracy in a controlled environment using the same data sets. Several new or improved methods of trilateration are presented alongside traditional methods from the literature in the analysis. The Antenna Pattern Method represents a new way of compensating for the antenna radiation pattern and its potential impact on signal strength, which is also an important contribution of this effort.
The performance of each algorithm for multiple types of inputs is compared, and the resulting error matrix allows a potential system designer to select the best option given the particular system constraints.
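The trilateration step this abstract describes (turning per-anchor distance estimates into a 2D position) is commonly solved by linearising about one anchor and taking a least-squares solution. A self-contained sketch of that standard formulation, with hypothetical coordinates:

```python
def trilaterate(anchors, dists):
    """Least-squares 2-D trilateration.

    anchors: list of (x, y) anchor positions; dists: estimated ranges.
    Subtracting the first anchor's range equation from the others yields a
    linear system A p = b, solved here via the 2x2 normal equations.
    """
    (x0, y0), d0 = anchors[0], dists[0]
    rows, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        rows.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(d0 ** 2 - di ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)
    s11 = sum(r[0] * r[0] for r in rows)
    s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    t1 = sum(r[0] * bi for r, bi in zip(rows, b))
    t2 = sum(r[1] * bi for r, bi in zip(rows, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

# Three anchors, ranges measured from the true point (1, 2).
pos = trilaterate([(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)],
                  [5 ** 0.5, 13 ** 0.5, 5 ** 0.5])
```

With noisy ranges the same normal equations give the least-squares fit, which is why range error dominates the final position error.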
Styles APA, Harvard, Vancouver, ISO, etc.
42

Arif, Omar. « Robust target localization and segmentation using statistical methods ». Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33882.

Full text
Abstract:
This thesis aims to contribute to the area of visual tracking, the process of identifying an object of interest through a sequence of successive images. It explores kernel-based statistical methods, which map the data to a higher-dimensional space. A pre-image framework is provided to find the mapping from the embedding space back to the input space for several manifold learning and dimensionality reduction algorithms. Two visual tracking algorithms are developed that are robust to noise and occlusions. In the first algorithm, a kernel PCA-based eigenspace representation is used; the de-noising and clustering capabilities of the kernel PCA procedure lead to a robust algorithm. This framework is extended to incorporate background information in an energy-based formulation, which is minimized using graph cuts, and to track multiple objects using a single learned model. In the second method, a robust density comparison framework is developed and applied to visual tracking, where an object is tracked by minimizing the distance between a model distribution and given candidate distributions. The superior performance of kernel-based algorithms comes at the price of increased storage and computational requirements. A novel method is developed that takes advantage of the universal approximation capabilities of generalized radial basis function neural networks to reduce the computational and storage requirements of kernel-based methods.
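The density-comparison idea in this abstract amounts to scoring candidate windows by the similarity of their distribution to a model distribution. One common instantiation (not necessarily the thesis's exact metric) is the Bhattacharyya coefficient between normalised histograms; names below are illustrative.

```python
import math

def bhattacharyya(p, q):
    """Similarity of two normalised histograms: 1.0 means identical."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def best_candidate(model_hist, candidate_hists):
    """Pick the candidate window whose histogram best matches the model.

    candidate_hists: window id -> normalised histogram of that window.
    """
    return max(candidate_hists,
               key=lambda cid: bhattacharyya(model_hist, candidate_hists[cid]))
```

Tracking then repeats this search around the previous position at each frame, so the distance between distributions is effectively minimized over candidate locations.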
Styles APA, Harvard, Vancouver, ISO, etc.
43

Infante, Fulvio. « Development of magnetic microscopy techniques for failure localization on three-dimensional circuits ». Thesis, Bordeaux 1, 2011. http://www.theses.fr/2011BOR14394/document.

Full text
Abstract:
In this work, new developments in localization techniques for three-dimensional electronic components are shown and demonstrated. They are achieved through the introduction of simulations for an already existing technique: Magnetic Microscopy (MM). In the first part, the state of the art of three-dimensional component assembly is described, followed by an up-to-date description of the failure analysis (FA) process, kept as general as possible. A description of component reliability as a function of device usage time is given, allowing the reader to understand why the need for failure analysis arose in the first place. The whole FA process is then described in a general way, from the electrical characterization of the defect to the final results. The second part explains the Magnetic Microscopy technique in more detail. This technique uses the properties of the magnetic fields generated by the currents to precisely localize defects in standard electronic components. The third part of this work is dedicated to the Simulation Approach (SA): a new methodology developed to extend the capabilities of Magnetic Microscopy techniques. The basic principle is to compare magnetic simulations generated by hypothetical current distributions with magnetic acquisitions of the real current distribution. The evaluation of the correlation between the two then gives a measure of the distance between them. This approach is able to overcome the previous limitations of the technique: the defect can now be localized in three dimensions. Finally, in the fourth part, the new technique is applied and validated on a set of case studies.
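The Simulation Approach described in this abstract can be caricatured in a few lines: simulate the field map each hypothesized current path would produce, then keep the hypothesis whose map correlates best with the acquisition. The sketch below uses the field of a long straight wire (B = mu0*I / 2*pi*r) as the stand-in simulation; the geometry, current value and function names are all illustrative assumptions.

```python
import math

MU0_OVER_2PI = 2e-7  # T*m/A

def wire_field(y_wire, current, ys, height):
    """Simulated field magnitude along a scan line at the given height
    above a long straight wire located at lateral position y_wire."""
    return [MU0_OVER_2PI * current / math.hypot(y - y_wire, height)
            for y in ys]

def correlation(a, b):
    """Pearson correlation between two equal-length field profiles."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den

def best_hypothesis(measured, hypotheses):
    """Return the hypothesis id whose simulated map best matches."""
    return max(hypotheses,
               key=lambda h: correlation(measured, hypotheses[h]))
```

In the real technique the comparison is over 2-D field maps at several scan heights, which is what resolves the current path in depth as well.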
Styles APA, Harvard, Vancouver, ISO, etc.
44

ARESTEGUI, NILTON CESAR ANCHAYHUA. « COMPUTATIONAL INTELLIGENCE TECHNIQUES FOR VISUAL SELF-LOCALIZATION AND MAPPING OF MOBILE ROBOTS ». PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=31775@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
This thesis introduces a study of computational intelligence algorithms for the autonomous control of mobile robots. In this research, intelligent control systems are developed and implemented for a robot built in the Robotics Laboratory of PUC-Rio, based on a modification of the ER1 robot. The verification consists of two stages: the first includes simulation using the Player-Stage software for 2-D simulation of the robot, in which the computational intelligence navigation algorithms were developed; the second, the implementation of the algorithms on the real robot. The techniques implemented for the navigation of the mobile robot are based on computational intelligence algorithms such as neural networks, fuzzy logic and support vector machines (SVM); to give the mobile robot visual support, the computer vision algorithm called Scale Invariant Feature Transform (SIFT) was implemented. Together, these algorithms form an embedded system that endows the mobile robot with autonomous control. The simulations of these algorithms achieved the objective, but the implementation showed clear differences with respect to the simulation, owing to the time the microprocessor takes to process the algorithms.
Styles APA, Harvard, Vancouver, ISO, etc.
45

Callegati, Flavio <1990>. « Perception and localization techniques for navigation in agricultural environment and experimental results ». Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amsdottorato.unibo.it/9017/1/Tesi_v2.pdf.

Full text
Abstract:
Notoriously, the agricultural work environment is a hard one, in which the operator carries out every job manually, often in extreme weather or in any case in heat, cold and rain, with working hours that last from dawn to sunset. Recently, the application of automation in agriculture has led to the development of increasingly autonomous robots, able to take care of different tasks and avoid obstacles, to collaborate and interact with human operators, and to collect data from the surrounding environment. These data can then be shared with the user, informing him, for example, about the soil moisture or the critical health condition of a single plant. Thus was born the concept of precision agriculture, in which the robot performs its tasks according to the environmental conditions it detects, distributing fertilizers or water only where necessary and optimizing treatments and its energy resources. The proposed thesis project consists in the development of a tractor prototype able to act automatically in a semi-structured agricultural environment, such as orchards organized in rows, navigating autonomously by means of a laser scanner. In particular, the work is divided into three steps. The first consists in the design and construction of a tracked robot, realized entirely in the laboratory, from the mechanical, electric and electronic subsystems up to the software structure. The second is the development of a navigation and control system that makes a generic robot able to move autonomously in the orchard using a laser scanner as its main sensor. To achieve this goal, a localization algorithm based on row estimation has been developed, together with a control law that regulates the kinematics of the robot. Once defined, the navigation algorithm must be validated; the third step therefore consists of experimental tests of both the robot and the developed navigation algorithm.
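The row-estimation step this abstract relies on can be reduced to fitting a line to the laser points returned by the tree trunks on one side of the robot, then steering on the lateral offset from that line. A minimal least-squares sketch (the thesis's actual estimator is not public here; all names are illustrative):

```python
def fit_row(points):
    """Least-squares fit of y = m*x + c to 2-D laser points along a tree row,
    in the robot frame (x forward, y lateral)."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c

def lateral_error(m, c):
    """Signed distance from the robot (at the origin) to the fitted row line;
    a simple control law would steer proportionally to this error."""
    return c / (1 + m * m) ** 0.5
```

Feeding the slope and lateral error into the kinematic control law keeps the robot parallel to the row at the desired offset.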
Styles APA, Harvard, Vancouver, ISO, etc.
46

Wahalathantri, Buddhi Lankananda. « Damage assessment in reinforced concrete flexural members using modal strain energy based method ». Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/59509/1/Buddhi_Wahalathantri_Thesis.pdf.

Texte intégral
Résumé :
Damage assessment (damage detection, localization and quantification) in structures, followed by appropriate retrofitting, will enable the safe and efficient function of structures. In this context, many Vibration Based Damage Identification Techniques (VBDIT) have emerged with potential for accurate damage assessment. VBDITs have attracted significant research interest in recent years, mainly due to their non-destructive nature and their ability to assess inaccessible and invisible damage locations. Damage Index (DI) methods are also vibration based, but they do not rely on a structural model. DI methods are fast and inexpensive compared to model-based methods and can automate the damage detection process. A DI method analyses the change in the vibration response of the structure between two states so that damage can be identified. Extensive research has been carried out on applying DI methods to assess damage in steel structures. Comparatively, there has been very little research interest in using DI methods to assess damage in Reinforced Concrete (RC) structures, due to the complexity of simulating the predominant damage type, the flexural crack. Flexural cracks in RC beams are distributed non-linearly and propagate in all directions. Secondary cracks extend more rapidly along the longitudinal and transverse directions of an RC structure than existing cracks propagate in the depth direction, owing to the stress distribution caused by the tensile reinforcement. Simplified damage simulation techniques (such as reductions in modulus or section depth, or the use of rotational spring elements) that have been used extensively in research on steel structures cannot be applied to simulate flexural cracks in RC elements. This highlights a significant gap in knowledge, and as a consequence VBDITs have not been successfully applied to damage assessment in RC structures.
This research addresses the above gap in knowledge by developing and applying a modal strain energy based DI method to assess damage in RC flexural members. Firstly, the research evaluated different damage simulation techniques and recommended an appropriate technique to simulate the post-cracking behaviour of RC structures. The ABAQUS finite element package was used throughout the study with properly validated material models. The damaged plasticity model was recommended as the method that can correctly simulate the post-cracking behaviour of RC structures and was used in the rest of the study. Four different forms of Modal Strain Energy based Damage Indices (MSEDIs) were proposed to improve damage assessment capability by minimising the number and intensity of false alarms. The developed MSEDIs were then used to automate the damage detection process by incorporating programmable algorithms. The developed algorithms can identify common issues associated with the vibration properties, such as mode shifting and phase change. To minimise the effect of noise on the DI calculation process, this research proposed a sequential curve-fitting technique. Finally, a statistics-based damage assessment scheme was proposed to enhance the reliability of the damage assessment results. The proposed techniques were applied to locate damage in RC beams and in a slab-on-girder bridge model to demonstrate their accuracy and efficiency. The outcomes of this research make a significant contribution to the technical knowledge of VBDITs and enhance the accuracy of damage assessment in RC structures, enabling the safe and efficient performance of RC flexural members.
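A minimal sketch of one common modal strain energy damage index (a classic Stubbs-type formulation, not necessarily one of the four MSEDIs proposed in the thesis): the fractional modal strain energy at each point, computed from mode-shape curvature, is compared between the undamaged and damaged states and normalised to a z-score, so that peaks flag likely damage locations. The mode shape and the damage perturbation below are synthetic.

```python
import numpy as np

def strain_energy_index(phi_u, phi_d, x):
    """Stubbs-type modal strain energy damage index: ratio of the
    fractional (curvature-squared) strain energy in the damaged state
    to that in the undamaged state, normalised to a z-score."""
    h = x[1] - x[0]
    cu = np.gradient(np.gradient(phi_u, h), h)  # curvature, undamaged
    cd = np.gradient(np.gradient(phi_d, h), h)  # curvature, damaged
    num = cd**2 / (cd**2).sum()                 # fractional MSE, damaged
    den = cu**2 / (cu**2).sum()                 # fractional MSE, undamaged
    beta = num / den
    return (beta - beta.mean()) / beta.std()    # z-score normalisation

# Simply supported beam, first mode; a local perturbation mimics a
# flexural-crack-induced curvature change at x = 0.3 (synthetic data)
x = np.linspace(0.0, 1.0, 201)
phi_u = np.sin(np.pi * x)
phi_d = phi_u * (1.0 + 0.05 * np.exp(-((x - 0.3) / 0.02) ** 2))
z = strain_energy_index(phi_u, phi_d, x)
damage_location = x[np.argmax(z)]  # should peak near x = 0.3
```

In practice the mode shapes would come from modal testing of the two states, and a threshold on the z-score (e.g. z > 2) would separate damage from noise, which is where the statistics-based scheme mentioned above comes in.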
Styles APA, Harvard, Vancouver, ISO, etc.
47

Kiang, Kai-Ming Mechanical & Manufacturing Engineering Faculty of Engineering UNSW. « Natural feature extraction as a front end for simultaneous localization and mapping ». Awarded by: University of New South Wales. School of Mechanical and Manufacturing Engineering, 2006. http://handle.unsw.edu.au/1959.4/26960.

Texte intégral
Résumé :
This thesis is concerned with algorithms for finding natural features that are then used for simultaneous localisation and mapping, commonly known as SLAM in navigation theory. The task involves capturing raw sensory inputs, extracting features from these inputs and using the features for mapping and localising during navigation. The ability to extract natural features allows automatons such as robots to be sent to environments that no human being has previously explored, working in a way similar to how human beings understand and remember where they have been. When extracting natural features from images, the way features are represented and matched is a critical issue, since the computation involved can be wasted if the wrong method is chosen. While there are many techniques capable of matching pre-defined objects correctly, few of them can be used for real-time navigation in an unexplored environment, intelligently deciding what constitutes a relevant feature in the images. Normally, feature analysis that extracts relevant features from an image is a two-step process: first select interest points, then represent these points based on local region properties. A novel technique is presented in this thesis for extracting a set of natural features that is small enough yet robust enough for navigation purposes. The technique involves a three-step approach. The first step is an interest point selection method based on extrema of differences of Gaussians (DOG). The second step applies Textural Feature Analysis (TFA) to the local regions around the interest points. The third step selects distinctive features using Distinctness Analysis (DA), based mainly on the probability of occurrence of the extracted features. The additional DA step has been shown to attain a significant improvement in processing speed over previous methods.
Moreover, TFA/DA has been applied in a SLAM configuration observing an underwater environment, where the texture can be rich in natural features. The results demonstrate an improvement in loop-closure ability compared to traditional SLAM methods. This suggests that real-time navigation in unexplored environments using natural features is now a more plausible option.
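The Distinctness Analysis step can be illustrated with the following sketch, which is an assumption about the idea rather than the thesis implementation: descriptors are quantised, the occurrence probability of each quantised pattern is estimated from the image itself, and only the rarest fraction is kept, since rare patterns make the most distinctive landmarks for loop closure.

```python
import numpy as np

def distinctness_select(descriptors, keep=0.1, bins=8):
    """Distinctness Analysis sketch: quantise each descriptor vector,
    estimate how often each quantised pattern occurs, and keep the
    rarest fraction (lowest probability of occurrence)."""
    lo, hi = descriptors.min(), descriptors.max()
    q = np.floor((descriptors - lo) / (hi - lo + 1e-12) * bins).astype(int)
    keys = [tuple(row) for row in q]
    counts = {}
    for k in keys:
        counts[k] = counts.get(k, 0) + 1
    n = len(keys)
    prob = np.array([counts[k] / n for k in keys])  # occurrence probability
    order = np.argsort(prob)                        # rarest first
    return order[: max(1, int(keep * n))]

# Toy texture descriptors: 40 identical "sand" vectors plus 4 outliers
rng = np.random.default_rng(0)
common = np.tile([0.5, 0.5, 0.5, 0.5], (40, 1))
rare = rng.uniform(0.0, 1.0, (4, 4))
desc = np.vstack([common, rare])
picked = distinctness_select(desc)  # indices of the rare descriptors
```

In the full pipeline these descriptors would be the TFA outputs at DOG interest points; discarding common-texture features is what buys the reported speed-up.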
Styles APA, Harvard, Vancouver, ISO, etc.
48

Khan, Umair I. « Computational Techniques for Comparative Performance Evaluation of RF Localization inside the Human Body ». Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/647.

Texte intégral
Résumé :
Localization inside the human body using radio frequency (RF) transmission is gaining importance in a number of applications, such as capsule endoscopy. The accuracy of RF localization depends on the technology adopted for this purpose. The two most common RF localization technologies use received signal strength (RSS) and time of arrival (TOA). This research presents a comparison of the accuracy of TOA- and RSS-based localization inside human tissue, using computational techniques to simulate radio propagation inside human tissues. Computer simulation of the propagation of radio waves inside the human body is extremely challenging and computationally intensive. We designed a basic, MATLAB-coded, finite-difference time-domain (FDTD) simulator for radio propagation in and around the human body and compared its results with those of the commonly used, commercially available Finite Element Method (FEM) modeling in Ansoft HFSS (ANSYS). We first show that the FDTD analysis yields comparable results. We then use the software to simulate the RSS and TOA of wideband signals propagated inside the human body for RF localization, in order to compare the accuracies of the two methods. The accuracy of each technique is compared against the Cramér-Rao lower bound (CRLB), commonly used to bound the performance of localization techniques, and the effects of human body movements are examined.
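The kind of FDTD-based TOA experiment described above can be sketched in one dimension; this is a toy illustration (in Python rather than the thesis's MATLAB), with an assumed permittivity profile and threshold, not the thesis simulator. A Gaussian pulse is leapfrog-propagated through a tissue-like dielectric, and the first time step at which a probe cell sees the pulse gives a crude time of arrival, which is later in high-permittivity tissue than in free space.

```python
import numpy as np

def toa_fdtd_1d(eps_r, probe, steps, src=10, thresh=0.1):
    """Minimal 1-D FDTD (Yee leapfrog, Courant number 0.5): propagate
    a soft Gaussian source through the relative-permittivity profile
    eps_r and return the first step at which |Ez| at the probe cell
    exceeds thresh (a crude time-of-arrival), or None."""
    n = len(eps_r)
    ez, hy = np.zeros(n), np.zeros(n - 1)
    for t in range(steps):
        hy += 0.5 * np.diff(ez)                       # H update
        ez[1:-1] += 0.5 * np.diff(hy) / eps_r[1:-1]   # E update
        ez[src] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
        if abs(ez[probe]) > thresh:
            return t
    return None

n = 400
free_space = np.ones(n)
tissue = np.ones(n)
tissue[200:] = 4.0  # dielectric half-space standing in for tissue
t_free = toa_fdtd_1d(free_space, probe=300, steps=2000)
t_tissue = toa_fdtd_1d(tissue, probe=300, steps=2000)
# arrival through the dielectric is delayed relative to free space
```

The delay grows with the square root of the permittivity, which is why TOA ranging inside the body must account for the tissue profile along the path; real tissue permittivities (e.g. around 50 for muscle at capsule-endoscopy frequencies) make the effect much larger than in this sketch.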
Styles APA, Harvard, Vancouver, ISO, etc.
49

Ruangpayoongsak, Niramon. « Development of autonomous features and indoor localization techniques for car-like mobile robots ». [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=982279469.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
50

Chuasomboon, Sasit. « A comparison of ranging and localization techniques in indoor, urban, and tunnel environments ». Thesis, Linköpings universitet, Kommunikationssystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94517.

Texte intégral
Résumé :
Localization in wireless sensor networks is an attractive research area nowadays. It is widely used in many applications, e.g., indoor/outdoor asset tracking, intrusion detection, search-and-rescue, road traffic monitoring, and water quality monitoring. Accuracy and robustness to noise are important issues for localization and need to be studied to find the best solution. This thesis compares ranging and localization techniques in indoor, urban and tunnel environments through a high-performance ray-tracing simulator, Wireless InSite®. The ranging techniques are based on two standard distance-related measurement schemes, RSS and TOA. A linearized least squares technique with a reference node selection approach is chosen to estimate the positions of unknown nodes. The indoor and urban areas use a floor plan and terrain built into the simulator, while the tunnel is custom designed. In general, localization accuracy suffers from multipath and NLOS conditions, whose characteristics this thesis also examines from a ray-tracing perspective. Firstly, important simulation parameters such as the number of reflections/diffractions, the type of waveform, and the type of antenna are analyzed in each environment. Then, models for distance estimation based on RSS and TOA measurements are created using measurements in the simulated environments. The thesis proposes four scenarios for the distance estimation model: line-of-sight (LOS), non-line-of-sight (NLOS), a combination of LOS and NLOS, and NLOS with an obstacle. Models for all four scenarios are derived along with their error distributions to characterize the noise due to multipath and NLOS conditions. Finally, localization using only LOS measurements is tested in each environment and the results are compared in terms of accuracy.
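The linearized least squares technique mentioned above can be sketched as follows; the anchor layout and noise-free ranges are illustrative assumptions. Subtracting the squared-range equation of a reference anchor (here the first one) from the others cancels the quadratic term in the unknown position, turning the circle equations into a linear system solved in one shot.

```python
import numpy as np

def lls_localize(anchors, ranges):
    """Linearized least-squares localization: from ||p - a_i||^2 = r_i^2,
    subtract the i = 0 (reference anchor) equation to get the linear
    system 2 (a_i - a_0) . p = r_0^2 - r_i^2 + ||a_i||^2 - ||a_0||^2."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)  # noise-free TOA ranges
est = lls_localize(anchors, ranges)  # recovers true_pos exactly
```

With noisy RSS- or TOA-derived ranges the solution is only approximate, and the choice of reference anchor matters, which is what motivates the reference node selection approach used in the thesis.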
Styles APA, Harvard, Vancouver, ISO, etc.
