To see the other types of publications on this topic, follow the link: Sensor Fusion and Tracking.

Dissertations on the topic "Sensor Fusion and Tracking"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Sensor Fusion and Tracking".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the academic publication as a PDF and read an online annotation of the work, if the relevant parameters are available in its metadata.

Browse dissertations from a wide range of disciplines and assemble your bibliography correctly.

1

Mathew, Vineet. "Radar and Vision Sensor Fusion for Vehicle Tracking". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1574441839857988.

2

Sikdar, Ankita. "Depth based Sensor Fusion in Object Detection and Tracking". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1515075130647622.

3

Moemeni, Armaghan. "Hybrid marker-less camera pose tracking with integrated sensor fusion". Thesis, De Montfort University, 2014. http://hdl.handle.net/2086/11093.

Abstract:
This thesis presents a framework for a hybrid model-free marker-less inertial-visual camera pose tracking with an integrated sensor fusion mechanism. The proposed solution addresses the fundamental problem of pose recovery in computer vision and robotics and provides an improved solution for wide-area pose tracking that can be used on mobile platforms and in real-time applications. In order to arrive at a suitable pose tracking algorithm, an in-depth investigation was conducted into current methods and sensors used for pose tracking. Preliminary experiments were then carried out on hybrid GPS-Visual as well as wireless micro-location tracking in order to evaluate their suitability for camera tracking in wide-area or GPS-denied environments. As a result of this investigation a combination of an inertial measurement unit and a camera was chosen as the primary sensory inputs for a hybrid camera tracking system. After following a thorough modelling and mathematical formulation process, a novel and improved hybrid tracking framework was designed, developed and evaluated. The resulting system incorporates an inertial system, a vision-based system and a recursive particle filtering-based stochastic data fusion and state estimation algorithm. The core of the algorithm is a state-space model for motion kinematics which, combined with the principles of multi-view camera geometry and the properties of optical flow and focus of expansion, form the main components of the proposed framework. The proposed solution incorporates a monitoring system, which decides on the best method of tracking at any given time based on the reliability of the fresh vision data provided by the vision-based system, and automatically switches between visual and inertial tracking as and when necessary. The system also includes a novel and effective self-adjusting mechanism, which detects when the newly captured sensory data can be reliably used to correct the past pose estimates. 
The corrected state is then propagated through to the current time in order to prevent sudden pose estimation errors manifesting as a permanent drift in the tracking output. Following the design stage, the complete system was fully developed and then evaluated using both synthetic and real data. The outcome shows an improved performance compared to existing techniques, such as PTAM and SLAM. The low computational cost of the algorithm enables its application on mobile devices, while the integrated self-monitoring, self-adjusting mechanisms allow for its potential use in wide-area tracking applications.
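The recursive particle-filtering fusion described above can be illustrated in miniature. The sketch below is a generic bootstrap particle filter for a one-dimensional constant-velocity state, not the thesis's visual-inertial implementation; all parameter values are illustrative assumptions.

```python
import math
import random

def particle_filter_cv(measurements, n=500, dt=0.1, q=0.5, r=0.5, seed=0):
    """Bootstrap particle filter for a 1-D constant-velocity model.

    Each particle is a (position, velocity) pair; weights come from a
    Gaussian measurement likelihood with standard deviation r.
    """
    rng = random.Random(seed)
    # Initialise particles around the first measurement.
    parts = [(measurements[0] + rng.gauss(0, r), rng.gauss(0, 1.0))
             for _ in range(n)]
    estimates = []
    for z in measurements:
        # Propagate with the kinematic motion model plus process noise.
        parts = [(p + v * dt + rng.gauss(0, q * dt), v + rng.gauss(0, q))
                 for p, v in parts]
        # Weight by the measurement likelihood and normalise.
        w = [math.exp(-0.5 * ((z - p) / r) ** 2) for p, _ in parts]
        s = sum(w) or 1.0
        w = [x / s for x in w]
        estimates.append(sum(wi * p for wi, (p, _) in zip(w, parts)))
        # Resample to avoid weight degeneracy.
        parts = rng.choices(parts, weights=w, k=n)
    return estimates
```

The switching and self-adjusting mechanisms of the thesis would sit on top of such a filter, gating which measurements are trusted at each step.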
4

Lundquist, Christian. "Sensor Fusion for Automotive Applications". Doctoral thesis, Linköpings universitet, Reglerteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-71594.

Abstract:
Mapping stationary objects and tracking moving targets are essential for many autonomous functions in vehicles. In order to compute the map and track estimates, sensor measurements from radar, laser and camera are used together with the standard proprioceptive sensors present in a car. By fusing information from different types of sensors, the accuracy and robustness of the estimates can be increased. Different types of maps are discussed and compared in the thesis. In particular, road maps make use of the fact that roads are highly structured, which allows relatively simple and powerful models to be employed. It is shown how the information of the lane markings, obtained by a front looking camera, can be fused with inertial measurement of the vehicle motion and radar measurements of vehicles ahead to compute a more accurate and robust road geometry estimate. Further, it is shown how radar measurements of stationary targets can be used to estimate the road edges, modeled as polynomials and tracked as extended targets. Recent advances in the field of multiple target tracking have led to the use of finite set statistics (FISST) in a set theoretic approach, where the targets and the measurements are treated as random finite sets (RFS). The first order moment of an RFS is called the probability hypothesis density (PHD), and it is propagated in time with a PHD filter. In this thesis, the PHD filter is applied to radar data for constructing a parsimonious representation of the map of the stationary objects around the vehicle. Two original contributions, which exploit the inherent structure in the map, are proposed. A data clustering algorithm is suggested to structure the description of the prior and considerably improve the update in the PHD filter. Improvements in the merging step further simplify the map representation.
When it comes to tracking moving targets, the focus of this thesis is on extended targets, i.e., targets which potentially may give rise to more than one measurement per time step. An implementation of the PHD filter, which was proposed to handle data obtained from extended targets, is presented. An approximation is proposed in order to limit the number of hypotheses. Further, a framework to track the size and shape of a target is introduced. The method is based on measurement generating points on the surface of the target, which are modeled by an RFS. Finally, an efficient and novel Bayesian method is proposed for approximating the tire radii of a vehicle based on particle filters and the marginalization concept. This is done under the assumption that a change in the tire radius is caused by a change in tire pressure, thus obtaining an indirect tire pressure monitoring system. The approaches presented in this thesis have all been evaluated on real data from both freeways and rural roads in Sweden.
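The PHD filter itself is too involved for a short example, but the Kalman predict/update cycle that underlies trackers of this kind can be sketched. The following is a generic one-dimensional constant-velocity Kalman filter, not code from the thesis; the noise parameters are illustrative assumptions.

```python
def kf_cv_track(zs, dt=0.1, q=0.1, r=0.5):
    """Kalman filter for a 1-D constant-velocity state [position, velocity],
    observing position only. Returns the filtered position estimates."""
    x, v = zs[0], 0.0                      # state initialised at first measurement
    p00, p01, p11 = 1.0, 0.0, 1.0          # symmetric 2x2 covariance entries
    R = r * r
    out = []
    for z in zs:
        # Predict: x' = F x with F = [[1, dt], [0, 1]], P' = F P F^T + Q.
        x = x + dt * v
        p00 = p00 + 2.0 * dt * p01 + dt * dt * p11 + q * dt ** 3 / 3.0
        p01 = p01 + dt * p11 + q * dt ** 2 / 2.0
        p11 = p11 + q * dt
        # Update with the position measurement z (gain K = P H^T / S).
        s = p00 + R
        k0, k1 = p00 / s, p01 / s
        y = z - x                           # innovation
        x, v = x + k0 * y, v + k1 * y
        p00, p01, p11 = (1 - k0) * p00, (1 - k0) * p01, p11 - k1 * p01
        out.append(x)
    return out
```

In a PHD filter, many such predict/update components are maintained in parallel, weighted, merged and pruned.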
5

Romine, Jay Brent. "Fusion of radar and imaging sensor data for target tracking". Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/13324.

6

Moody, Leigh. "Sensors, measurement fusion and missile trajectory optimisation". Thesis, Cranfield University; College of Defence Technology; Department of Aerospace, Power and Sensors, 2003. http://hdl.handle.net/1826/778.

Abstract:
When considering advances in “smart” weapons it is clear that air-launched systems have adopted an integrated approach to meet rigorous requirements, whereas air-defence systems have not. The demands on sensors, state observation, missile guidance, and simulation for air-defence are the subject of this research. Historical reviews for each topic and justification of favoured techniques and algorithms are provided, using a nomenclature developed to unify these disciplines. Sensors selected for their enduring impact on future systems are described and simulation models provided. Complex internal systems are reduced to simpler models capable of replicating dominant features, particularly those that adversely affect state observers. Of the state observer architectures considered, a distributed system comprising ground based target and own-missile tracking, data up-link, and on-board missile measurement and track fusion is the natural choice for air-defence. An IMM is used to process radar measurements, combining the estimates from filters with different target dynamics. The remote missile state observer combines up-linked target tracks and missile plots with IMU and seeker data to provide optimal guidance information. The performance of traditional PN and CLOS missile guidance is the basis against which on-line trajectory optimisation is judged. Enhanced guidance laws are presented that demand more from the state observers, stressing the importance of time-to-go and transport delays in strap-down systems employing staring array technology. Algorithms for solving the guidance two-point boundary value problems created from the missile state observer output using gradient projection in function space are presented. A simulation integrating these aspects was developed whose infrastructure, capable of supporting any dynamical model, is described in the air-defence context.
MBDA have extended this work creating the Aircraft and Missile Integration Simulation (AMIS) for integrating different launchers and missiles. The maturity of the AMIS makes it a tool for developing pre-launch algorithms for modern air-launched missiles from modern military aircraft.
7

Andersson, Naesseth Christian. "Vision and Radar Sensor Fusion for Advanced Driver Assistance Systems". Thesis, Linköpings universitet, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94222.

Abstract:
The World Health Organization predicts that by the year 2030, road traffic injuries will be one of the top five leading causes of death. Many of these deaths and injuries can be prevented by driving cars properly equipped with state-of-the-art safety and driver assistance systems. Some examples are auto-brake and auto-collision avoidance, which are becoming more and more popular on the market today. A recent study by a Swedish insurance company has shown that on roads with speeds up to 50 km/h an auto-brake system can reduce personal injuries by up to 64 percent. In fact, in an estimated 40 percent of crashes, the auto-brake reduced the effects to the degree that no personal injury was sustained. For these so-called Advanced Driver Assistance Systems to be really effective, it is imperative that they have good situational awareness. It is important that they have adequate information of the vehicle’s immediate surroundings. Where are other cars, pedestrians or motorcycles relative to our own vehicle? How fast are they driving and in which lane? How is our own vehicle driving? Are there objects in the way of our own vehicle’s intended path? These and many more questions can be answered by a properly designed system for situational awareness. In this thesis we design and evaluate, both quantitatively and qualitatively, sensor fusion algorithms for multi-target tracking. We use a combination of camera and radar information to perform fusion and find relevant objects in a cluttered environment. The combination of these two sensors is very interesting because of their complementary attributes. The radar system has high range resolution but poor bearing resolution. The camera system, on the other hand, has a very high bearing resolution. This is very promising, with the potential to substantially increase the accuracy of the tracking system compared to just using one of the two.
We have also designed algorithms for path prediction and a first threat awareness logic, both of which are qualitatively evaluated.
8

Attalla, Daniela, and Alexandra Tang. "Drones in Arctic Environments: Snow Change Tracking Aid using Sensor Fusion". Thesis, KTH, Mekatronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235928.

Abstract:
The Arctic is subject to rapid climate changes that can be difficult to track. This thesis aims to provide a use case in which researchers in the Arctic benefit from the incorporation of drones in their snow ablation research. It presents a way to measure ablation stakes with the help of a sensor fusion system mounted on a drone. Ablation stakes are stakes placed on a grid over glaciers in the Arctic, below the snow and ice surface, during the winter and then measured during the summer to keep track of the amount of snow that has melted throughout the mass balance year. Each measurement is taken by physically going to these stakes. The proposed solution is based on estimating the heights of the ablation stakes using a forward-facing LiDAR on a servo motor and a downward-facing ultrasonic sensor. The stake height is interpreted as the highest ultrasonic distance recorded while the forward-facing sensor system is detecting an object within 3 m. The results indicate that stake height estimation using the proposed concept is a potential solution for the researchers if the roll and pitch angles of the sensor system are compensated for.
The Arctic is an area exposed to major climate changes, which can be difficult to track. The aim of this work is to propose, develop and evaluate a concept in which researchers in Arctic regions benefit from using drone and sensor technology in their work on snow ablation. The thesis presents an alternative way of measuring reference stakes using an integrated sensor system mounted on a drone. These reference stakes are drilled down, below the snow and ice surface, over a grid on the glaciers of the Arctic during the winter, and are then measured during the summer in order to study the amount of snow that melts over the year. Each measurement is thus made by physically walking to each individual stake. The proposed concept estimates the height of the reference stakes using a forward-facing LiDAR mounted on a servo motor and a downward-facing ultrasonic sensor. The height is read as the largest ultrasonic distance recorded while the forward-facing sensor system detects an object within 3 m. The results indicate that the proposed concept's stake height estimation is a potential solution within the problem area, provided the roll and pitch angles of the system are compensated for.
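The gating rule described in the abstract (take the largest ultrasonic reading recorded while the forward-facing sensor detects an object within 3 m) can be sketched as follows; the sample format is an assumption made for illustration.

```python
def estimate_stake_height(samples, gate=3.0):
    """Estimate a stake's height above the snow surface.

    samples: sequence of (lidar_range_m, ultrasonic_height_m) pairs taken
    during a fly-by. The height is taken as the largest ultrasonic reading
    recorded while the forward-facing sensor detects an object within the
    gate distance (3 m in the thesis). Returns None if the stake was never
    inside the gate.
    """
    gated = [height for rng, height in samples if rng <= gate]
    return max(gated) if gated else None
```

A real implementation would additionally compensate the readings for the roll and pitch of the sensor platform, as the abstract notes.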
9

Andersson, Anton. "Offline Sensor Fusion for Multitarget Tracking using Radar and Camera Detection". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208344.

Abstract:
Autonomous driving systems are rapidly improving and may have the ability to change society in the coming decade. One important part of these systems is the interpretation of sensor information into trajectories of objects. In this master’s thesis, we study an energy minimisation method with radar and camera measurements as inputs. An energy is associated with the trajectories; this takes the measurements, the objects’ dynamics and more factors into consideration. The trajectories are chosen to minimise this energy, using a gradient descent method. The lower the energy, the better the trajectories are expected to match the real world. The processing is performed offline, as opposed to in real time. Offline tracking can be used in the evaluation of the sensors’ and the real time tracker’s performance. Offline processing allows for the use of more computer power. It also gives the possibility to use data that was collected after the considered point in time. A study of the parameters of the used energy minimisation method is presented, along with variations of the initial method. The results of the method are an improvement over the individual inputs, as well as over the real time processing currently used in the cars. In the parameter study it is shown which components of the energy function improve the results.
Considerable resources are being devoted to the development of self-driving car systems, which may change society over the coming decade. An important part of these systems is the processing and interpretation of sensor data and the creation of trajectories for objects in the surroundings. In this master's thesis, an energy minimisation method is studied together with radar and camera measurements. An energy is computed for the trajectories, taking the measurements, the object's dynamics and further factors into account. The trajectories are chosen to minimise this energy using the gradient method. The lower the energy, the better the trajectories are expected to match reality. Processing is done offline, as opposed to in real time; offline processing can be used when evaluating the performance of the sensors and of the real-time tracker. It also permits the use of more computing power and makes it possible to use data collected after the point in time under consideration. A study of the parameters of the energy minimisation method is presented, together with adjustments of the original method. The method gives an improved result compared with the individual sensor measurements, and also compared with the real-time method currently used in the cars. The parameter study shows which components of the energy function improve the method's performance.
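The kind of offline energy minimisation described above can be illustrated with a toy version: a data-fidelity term plus an acceleration (dynamics) penalty, minimised by gradient descent. This is a simplified stand-in for the thesis's energy function; the weight, step size and iteration count are illustrative assumptions.

```python
def smooth_track(detections, lam=2.0, steps=2000, lr=0.01):
    """Offline 1-D trajectory estimate by gradient descent on the energy

        E(x) = sum_k (x_k - z_k)^2 + lam * sum_k (x_{k+1} - 2 x_k + x_{k-1})^2

    i.e. a data term tying the track to the detections z plus a penalty on
    acceleration, standing in for an object-dynamics model.
    """
    z = [float(d) for d in detections]
    x = z[:]                               # initialise at the raw detections
    n = len(x)
    for _ in range(steps):
        g = [2.0 * (x[k] - z[k]) for k in range(n)]   # data-term gradient
        for k in range(1, n - 1):
            a = x[k + 1] - 2.0 * x[k] + x[k - 1]      # second difference
            g[k + 1] += 2.0 * lam * a
            g[k] += -4.0 * lam * a
            g[k - 1] += 2.0 * lam * a
        x = [x[k] - lr * g[k] for k in range(n)]      # descent step
    return x
```

Because all future and past detections enter the energy, each smoothed point uses data collected after it, which is exactly what offline processing permits.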
10

Manyika, James. "An information-theoretic approach to data fusion and sensor management". Thesis, University of Oxford, 1993. http://ora.ox.ac.uk/objects/uuid:6e6dd2a8-1ec0-4d39-8f8b-083289756a70.

Abstract:
The use of multi-sensor systems entails a Data Fusion and Sensor Management requirement in order to optimize the use of resources and allow the synergistic operation of sensors. To date, data fusion and sensor management have largely been dealt with separately and primarily for centralized and hierarchical systems. Although work has recently been done in distributed and decentralized data fusion, very little of it has addressed sensor management. In decentralized systems, a consistent and coherent approach is essential and the ad hoc methods used in other systems become unsatisfactory. This thesis concerns the development of a unified approach to data fusion and sensor management in multi-sensor systems in general and decentralized systems in particular, within a single consistent information-theoretic framework. Our approach is based on considering information and its gain as the main goal of multi-sensor systems. We develop a probabilistic information update paradigm from which we derive directly architectures and algorithms for decentralized data fusion and, most importantly, address sensor management. Presented with several alternatives, the question of how to make decisions leading to the best sensing configuration or actions, defines the management problem. We discuss the issues in decentralized decision making and present a normative method for decentralized sensor management based on information as expected utility. We discuss several ways of realizing the solution culminating in an iterative method akin to bargaining for a general decentralized system. Underlying this is the need for a good sensor model detailing a sensor's physical operation and the phenomenological nature of measurements vis-a-vis the probabilistic information the sensor provides. Also, implicit in a sensor management problem is the existence of several sensing alternatives such as those provided by agile or multi-mode sensors. 
With our application in mind, we detail such a sensor model for a novel Tracking Sonar with precisely these capabilities making it ideal for managed data fusion. As an application, we consider vehicle navigation, specifically localization and map-building. Implementation is on the OxNav vehicle (JTR) which we are currently developing. The results show, firstly, how with managed data fusion, localization is greatly speeded up compared to previous published work and secondly, how synergistic operation such as sensor-feature assignments, hand-off and cueing can be realised decentrally. This implementation provides new ways of addressing vehicle navigation, while the theoretical results are applicable to a variety of multi-sensing problems.
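The idea of information as expected utility can be made concrete for a scalar Gaussian state, where the expected entropy reduction of a candidate measurement has a closed form. This is a minimal illustration of the selection principle only, not the thesis's decentralized bargaining method; the variances are illustrative.

```python
import math

def posterior_var(prior_var, meas_var):
    """Variance of a Gaussian state after fusing one Gaussian measurement
    (information form: precisions add)."""
    return 1.0 / (1.0 / prior_var + 1.0 / meas_var)

def info_gain(prior_var, meas_var):
    """Expected entropy reduction (in nats) from using the sensor."""
    return 0.5 * math.log(prior_var / posterior_var(prior_var, meas_var))

def select_sensor(prior_var, sensor_vars):
    """Choose the sensing action with the highest expected information gain,
    i.e. information as the expected utility of the decision."""
    return max(range(len(sensor_vars)),
               key=lambda i: info_gain(prior_var, sensor_vars[i]))
```

In a decentralized system, each node would evaluate such gains locally and negotiate over the resulting utilities rather than report raw measurements to a central manager.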
11

Li, Lingjie. "Data fusion and filtering for target tracking and identification". McMaster University, 2003.

12

Gallagher, Jonathan G. "Likelihood as a Method of Multi Sensor Data Fusion for Target Tracking". The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1244041862.

13

Fallah, Haghmohammadi Hamidreza. "Fever Detection for Dynamic Human Environment Using Sensor Fusion". Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37332.

Abstract:
The objective of this thesis is to present an algorithm for processing infrared images and accomplishing automatic detection and path tracking of moving subjects with fever. The detection is based on two main features: the distinction between the geometry of a human face and other objects in the field of view of the camera, and the temperature of the radiating object. These features are used for tracking the identified person with fever. The position of the camera with respect to the direction of motion of the walkers appeared to be critical in this process. Infrared thermography is a remote sensing technique used to measure temperatures based on emitted infrared radiation. This application may be used for fever screening in major public places such as airports and hospitals. For this study, we first look at human bodies and objects in a line of view with different temperatures that would be higher than the normal human body temperature (37.8 °C in the morning and 38.3 °C in the evening). As a part of the experimental study, automatic fever detection was applied to two humans with different body temperatures walking a path, and the detected human with fever was tracked. The algorithm consists of image processing to threshold objects based on temperature, and template matching used for fever detection in a dynamic human environment.
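The thresholding stage of such an algorithm can be sketched on a toy temperature grid. The frame format and the pixel-count rule below are illustrative assumptions, and the face-geometry and template-matching steps of the thesis are omitted.

```python
def fever_pixels(frame, threshold=37.8):
    """Return (row, col) positions whose temperature exceeds the threshold.

    frame: 2-D list of per-pixel temperatures in degrees Celsius, a stand-in
    for a radiometric infrared image.
    """
    return [(i, j) for i, row in enumerate(frame)
            for j, t in enumerate(row) if t > threshold]

def has_fever(frame, threshold=37.8, min_pixels=4):
    """Flag a subject when enough pixels exceed the fever threshold; the
    minimum pixel count is a crude guard against single-pixel noise."""
    return len(fever_pixels(frame, threshold)) >= min_pixels
```

In the full pipeline, a detection like this would be confirmed by matching the hot region against a face template before the subject is tracked.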
14

Johansson, Ronnie. "Information Acquisition in Data Fusion Systems". Licentiate thesis, KTH, Numerical Analysis and Computer Science, NADA, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1673.

Abstract:

By purposefully utilising sensors, for instance by a data fusion system, the state of some system-relevant environment might be adequately assessed to support decision-making. The ever increasing access to sensors offers great opportunities, but also incurs grave challenges. As a result of managing multiple sensors one can, e.g., expect to achieve a more comprehensive, resolved, certain and more frequently updated assessment of the environment than would be possible otherwise. Challenges include data association, treatment of conflicting information and strategies for sensor coordination.

We use the term information acquisition to denote the skill of a data fusion system to actively acquire information. The aim of this thesis is to instructively situate that skill in a general context, explore and classify related research, and highlight key issues and possible future work. It is our hope that this thesis will facilitate communication, understanding and future efforts for information acquisition.

The previously mentioned trend towards utilisation of large sets of sensors makes us especially interested in large-scale information acquisition, i.e., acquisition using many and possibly spatially distributed and heterogeneous sensors.

Information acquisition is a general concept that emerges in many different fields of research. In this thesis, we survey literature from, e.g., agent theory, robotics and sensor management. We, furthermore, suggest a taxonomy of the literature that highlights relevant aspects of information acquisition.

We describe a function, perception management (akin to sensor management), which realizes information acquisition in the data fusion process, and pertinent properties of its external stimuli, sensing resources, and system environment.

An example of perception management is also presented. The task is that of managing a set of mobile sensors that jointly track some mobile targets. The game theoretic algorithm suggested for distributing the targets among the sensors proves to be more robust to sensor failure than a measurement accuracy optimal reference algorithm.

Keywords: information acquisition, sensor management, resource management, information fusion, data fusion, perception management, game theory, target tracking

15

Palaniappan, Ravishankar. "A SELF-ORGANIZING HYBRID SENSOR SYSTEM WITH DISTRIBUTED DATA FUSION FOR INTRUDER TRACKING AND SURVEILLANCE". Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2407.

Abstract:
A wireless sensor network is a network of distributed nodes, each equipped with its own sensors, computational resources and transceivers. These sensors are designed to be able to sense specific phenomena over a large geographic area and communicate this information to the user. Most sensor networks are designed to be stand-alone systems that can operate without user intervention for long periods of time. While the use of wireless sensor networks has been demonstrated in various military and commercial applications, their full potential has not been realized, primarily due to the lack of efficient methods to self-organize and cover the entire area of interest. Techniques currently available focus solely on homogeneous wireless sensor networks, either static or mobile, and suffer from device-specific inadequacies such as lack of coverage, power and fault tolerance. Failing nodes result in coverage loss and breakage in communication connectivity, and hence there is a pressing need for a fault tolerant system that allows replacement of the failed nodes. In this dissertation, a unique hybrid sensor network is demonstrated that includes a host of mobile sensor platforms. It is shown that the coverage area of the static sensor network can be improved by self-organizing the mobile sensor platforms to allow interaction with the static sensor nodes and thereby increase the coverage area. The performance of the hybrid sensor network is analyzed for a set of N mobile sensors to determine and optimize parameters such as the position of the mobile nodes for maximum coverage of the sensing area without loss of signal between the mobile sensors, static nodes and the central control station. A novel approach to tracking dynamic targets is also presented.
Unlike other tracking methods that are based on computationally complex methods, the strategy adopted in this work is based on a computationally simple but effective technique of received signal strength indicator measurements. The algorithms developed in this dissertation are based on a number of reasonable assumptions that are easily verified in a densely distributed sensor network and require simple computations that efficiently track the target in the sensor field. False alarm rate, probability of detection and latency are computed and compared with other published techniques. The performance analysis of the tracking system is done on an experimental testbed and also through simulation, and the improvement in accuracy over other methods is demonstrated.
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Modeling and Simulation PhD
16

Fredriksson, Alfred, and Joakim Wallin. "Mapping an Auditory Scene Using Eye Tracking Glasses". Thesis, Linköpings universitet, Reglerteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170849.

Abstract:
The cocktail party problem, introduced in 1953, describes the ability to focus auditory attention in a noisy environment epitomised by a cocktail party. An individual with normal hearing uses several cues to unmask talkers of interest; such cues are often lacking for people with hearing loss. This thesis explores the possibility of using a pair of glasses equipped with an inertial measurement unit (IMU), monocular camera and eye tracker to estimate an auditory scene and estimate the attention of the person wearing the glasses. Three main areas of interest have been investigated: estimating the head orientation of the user; tracking faces in the scene; and determining the talker of interest using gaze. Implemented in a hearing aid, this solution could be used to artificially unmask talkers in a noisy environment. The head orientation of the user has been estimated with an extended Kalman filter (EKF) algorithm, with a constant velocity model and different sets of measurements: accelerometer; gyroscope; monocular visual odometry (MVO); gaze estimated bias (GEB). An intrinsic property of IMU sensors is a drift in yaw. A method using eye data and gyroscope measurements to estimate the gyroscope bias, called GEB, has been investigated. The MVO methods investigated use either optical flow to track features in succeeding frames or a key frame approach to match features over multiple frames. Using the estimated head orientation and face detection software, faces have been tracked, since they can be assumed to be regions of interest in a cocktail party environment. A constant position EKF with a nearest neighbour approach has been used for tracking. Further, eye data retrieved from the glasses has been analyzed to investigate the relation between gaze direction and the current talker during conversations.
17

Beale, Gregory Thomas. "Radar and LiDAR Fusion for Scaled Vehicle Sensing". Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/102932.

Abstract:
Scaled test-beds (STBs) are popular tools to develop and physically test algorithms for advanced driving systems, but often lack automotive-grade radars in their sensor suites. To overcome resolution issues when using a radar at small scale, a high-level sensor fusion approach between the radar and automotive-grade LiDAR was proposed. The sensor fusion approach was expected to leverage the higher spatial resolution of the LiDAR effectively. First, multi object radar tracking software (RTS) was developed to track a maneuvering full-scale vehicle using an extended Kalman filter (EKF) and the joint probabilistic data association (JPDA). Second, a 1/5th scaled vehicle performed the same vehicle maneuvers but scaled to approximately 1/5th the distance and speed. When taking the scaling factor into consideration, the RTS' positional error at small scale was, on average, over 5 times higher than in the full-scale trials. Third, LiDAR object sensor tracks were generated for the small-scale trials using a Velodyne PUCK LiDAR, a simplified point cloud clustering algorithm, and a second EKF implementation. Lastly, the radar sensor tracks and LiDAR sensor tracks served as inputs to a high-level track-to-track fuser for the small-scale trials. The fusion software used a third EKF implementation to track fused objects between both sensors and demonstrated a 30% increase in positional accuracy for a majority of the small-scale trials when compared to using just the radar or just the LiDAR to track the vehicle. The proposed track fuser could be used to increase the accuracy of RTS algorithms when operating in small scale and allow STBs to better incorporate automotive radars into their sensor suites.
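The track-to-track fusion step can be illustrated for the simplified case of two independent estimates with diagonal covariances, where the fused track is a precision-weighted average. This is a textbook simplification for illustration, not the thesis's EKF-based fuser, and it ignores the cross-correlation a real track-to-track fuser must handle.

```python
def fuse_tracks(x1, var1, x2, var2):
    """Fuse two track estimates component-wise, assuming independent errors.

    x1, x2: state estimates (e.g. [x_position, y_position]) from two sensors.
    var1, var2: per-component variances of those estimates.
    Returns the fused state and its variances (information form: the fused
    precision is the sum of the sensor precisions).
    """
    fused_x, fused_var = [], []
    for a, va, b, vb in zip(x1, var1, x2, var2):
        v = 1.0 / (1.0 / va + 1.0 / vb)      # fused variance
        fused_var.append(v)
        fused_x.append(v * (a / va + b / vb))  # precision-weighted mean
    return fused_x, fused_var
```

With equal variances the fused position is the midpoint of the radar and LiDAR tracks; as one sensor's variance shrinks, the fused track moves toward it, which is how the LiDAR's finer spatial resolution can dominate at small scale.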
Master of Science
Research and development platforms, often supported by robust prototypes, are essential for the development, testing, and validation of automated driving functions. Thousands of hours of safety and performance benchmarks must be met before any advanced driver assistance system (ADAS) is considered production-ready. However, full-scale testbeds are expensive to build, labor-intensive to design, and present inherent safety risks while testing. Scaled prototypes, developed to model system design and vehicle behavior in targeted driving scenarios, can minimize these risks and expenses. Scaled testbeds, more specifically, can improve the ease of safety testing of future ADAS systems and help visualize test results and system limitations to audiences with varying technical backgrounds better than software simulations can. However, these testbeds are not without limitations. Although small-scale vehicles may accommodate on-board systems similar to those of their full-scale counterparts, as the vehicle scales down the resolution from perception sensors decreases, especially from on-board radars. With many automated driving functions relying on radar object detection, the scaled vehicle must host radar sensors that function appropriately at scale to support accurate vehicle and system behavior. However, traditional radar technology is known to have limitations when operating in small-scale environments. Sensor fusion, which is the process of merging data from multiple sensors, may offer a potential solution to this issue. Consequently, a sensor fusion approach is presented that augments the angular resolution of radar data in a scaled environment with a commercially available Light Detection and Ranging (LiDAR) system. With this approach, object tracking software designed to operate in full-scale vehicles with radars can operate more accurately when used in a scaled environment.
Using this improvement, small-scale system tests could confidently and quickly be used to identify safety concerns in ADAS functions, leading to a faster and safer product development cycle.
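The track-to-track fusion step described above can be sketched with a simple information-weighted combination of two track estimates. This is a hedged simplification: it assumes independent track errors and diagonal covariances, whereas the thesis uses a third EKF to fuse radar and LiDAR tracks; the numbers below are invented:

```python
import numpy as np

def fuse_tracks(x1, P1, x2, P2):
    """Information-weighted track-to-track fusion (independent-error assumption)."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P_f = np.linalg.inv(I1 + I2)           # fused covariance
    x_f = P_f @ (I1 @ x1 + I2 @ x2)        # fused state
    return x_f, P_f

# Radar track: coarse position but good range-rate (velocity) estimate
x_radar = np.array([10.2, 1.0])
P_radar = np.diag([0.5**2, 0.1**2])
# LiDAR track: fine position but noisier velocity estimate
x_lidar = np.array([10.0, 1.3])
P_lidar = np.diag([0.05**2, 0.4**2])

x_f, P_f = fuse_tracks(x_radar, P_radar, x_lidar, P_lidar)
```

The fused position is pulled toward the LiDAR track and the fused velocity toward the radar track, which mirrors the complementary strengths the abstract exploits.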
APA, Harvard, Vancouver, ISO und andere Zitierweisen
18

Larsson, Olof. „Visual-inertial tracking using Optical Flow measurements“. Thesis, Linköping University, Automatic Control, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-59970.

Der volle Inhalt der Quelle
Annotation:

 

Visual-inertial tracking is a well-known technique for tracking a combination of a camera and an inertial measurement unit (IMU). An issue with the straightforward approach is the need for known 3D points. To bypass this, 2D information can be used without recovering depth to estimate the position and orientation (pose) of the camera. This Master's thesis investigates the feasibility of using Optical Flow (OF) measurements and indicates the benefits of this approach.

The 2D information is added using OF measurements. OF describes the visual flow of interest points in the image plane. Without the necessity of estimating the depth of these points, the computational complexity is reduced. With the increased amount of 2D information, the amount of 3D information required for the pose estimate decreases.

The usage of 2D points for the pose estimation has been verified with experimental data gathered by a real camera/IMU system. Several data sequences containing different trajectories are used to estimate the pose. It is shown that OF measurements can be used to improve visual-inertial tracking with a reduced need for 3D-point registration.
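One property mentioned above, that 2D flow can be exploited without recovering depth, can be illustrated by estimating the focus of expansion directly from flow vectors: for a purely translating camera every flow vector lies on a line through the FOE. A minimal least-squares sketch on synthetic data (not the thesis's algorithm):

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares focus of expansion from 2D optical-flow vectors.

    Each flow vector at point p_i constrains the FOE f via
    n_i . (f - p_i) = 0, where n_i is perpendicular to the flow.
    """
    n = np.stack([flows[:, 1], -flows[:, 0]], axis=1)  # perpendiculars
    b = np.einsum('ij,ij->i', n, points)               # n_i . p_i
    foe, *_ = np.linalg.lstsq(n, b, rcond=None)
    return foe

# Synthetic expanding flow field radiating from pixel (320, 240)
rng = np.random.default_rng(1)
true_foe = np.array([320.0, 240.0])
pts = rng.uniform(0, 640, size=(100, 2))
flow = 0.05 * (pts - true_foe) + rng.normal(0, 0.01, size=(100, 2))
foe = focus_of_expansion(pts, flow)
```

No depth enters the computation; the FOE alone already constrains the camera's translation direction.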

APA, Harvard, Vancouver, ISO und andere Zitierweisen
19

Chavez, Garcia Ricardo Omar. „Multiple sensor fusion for detection, classification and tracking of moving objects in driving environments“. Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM034/document.

Der volle Inhalt der Quelle
Annotation:
Les systèmes avancés d'assistance au conducteur (ADAS) aident les conducteurs à effectuer des tâches de conduite complexes et à éviter ou atténuer les situations dangereuses. Le véhicule perçoit le monde extérieur au moyen de capteurs, puis construit et met à jour un modèle interne de la configuration de l'environnement. La perception du véhicule consiste à établir des relations spatiales et temporelles entre le véhicule et les obstacles statiques et mobiles de l'environnement. Cette perception se compose de deux tâches principales : la localisation et cartographie simultanées (SLAM) traite de la modélisation des parties statiques ; la détection et le suivi des objets en mouvement (DATMO) est responsable de la modélisation des parties mobiles de l'environnement. Afin de réaliser un bon raisonnement et contrôle, le système doit modéliser correctement l'environnement. La détection précise et la classification des objets en mouvement sont un aspect essentiel d'un système de suivi d'objets. La classification des objets en mouvement est nécessaire pour déterminer le comportement possible des objets entourant le véhicule, et elle est généralement réalisée au niveau du suivi des objets. La connaissance de la classe des objets en mouvement au niveau de la détection peut aider à améliorer leur suivi. La plupart des solutions de perception actuelles ne considèrent les informations de classification que comme une information additionnelle pour la sortie finale de la perception. Aussi, la gestion de l'information incomplète est une exigence importante pour les systèmes de perception. Une information incomplète peut provenir de raisons liées à la détection, telles que les problèmes de calibrage et les dysfonctionnements des capteurs, ou de perturbations de la scène, comme des occlusions, des problèmes météorologiques et le déplacement des objets. Les principales contributions de cette thèse se concentrent sur l'étape DATMO.
Précisément, nous pensons que l'inclusion de la classe de l'objet comme élément clé de la représentation de l'objet et la gestion de l'incertitude des détections de plusieurs capteurs peuvent améliorer les résultats de la tâche de perception. Par conséquent, nous abordons les problèmes de l'association de données, de la fusion de capteurs, de la classification et du suivi à différents niveaux au sein de la phase DATMO. Même si nous nous concentrons sur un ensemble de trois capteurs principaux (radar, lidar et caméra), nous proposons une architecture modifiable pour inclure d'autres types ou nombres de capteurs. Premièrement, nous définissons une représentation composite de l'objet pour inclure l'information de classe et l'état de l'objet depuis le début de la tâche de perception. Deuxièmement, nous proposons, mettons en œuvre et comparons deux architectures de perception afin de résoudre le problème DATMO selon le niveau où l'association des objets, la fusion et la classification des informations sont incluses et appliquées. Nos méthodes de fusion de données sont basées sur la théorie de l'évidence, qui est utilisée pour gérer et inclure l'incertitude de la détection des capteurs et de la classification des objets. Troisièmement, nous proposons une approche d'association de données basée sur la théorie de l'évidence pour établir une relation entre deux listes de détections d'objets. Quatrièmement, nous intégrons nos approches de fusion dans le cadre d'une application véhicule en temps réel. Cette intégration a été réalisée dans un démonstrateur véhicule réel du projet européen interactIVe. Finalement, nous avons analysé et évalué expérimentalement les performances des méthodes proposées. Nous avons comparé nos approches de fusion entre elles et à une méthode de l'état de l'art en utilisant des données réelles de scénarios de conduite différents.
Ces comparaisons se sont concentrées sur la détection, la classification et le suivi de différents objets en mouvement : piétons, vélos, voitures et camions.
Advanced driver assistance systems (ADAS) help drivers to perform complex driving tasks and to avoid or mitigate dangerous situations. The vehicle senses the external world using sensors and then builds and updates an internal model of the environment configuration. Vehicle perception consists of establishing the spatial and temporal relationships between the vehicle and the static and moving obstacles in the environment. Vehicle perception is composed of two main tasks: simultaneous localization and mapping (SLAM) deals with modelling the static parts, while detection and tracking of moving objects (DATMO) is responsible for modelling the moving parts of the environment. In order to perform good reasoning and control, the system has to correctly model the surrounding environment. The accurate detection and classification of moving objects is a critical aspect of a moving object tracking system; therefore, many sensors are part of a common intelligent vehicle system. Classification of moving objects is needed to determine the possible behaviour of the objects surrounding the vehicle, and it is usually performed at the tracking level. Knowledge about the class of moving objects at the detection level can help improve their tracking. Most of the current perception solutions consider classification information only as aggregate information for the final perception output. Also, management of incomplete information is an important requirement for perception systems. Incomplete information can originate from sensor-related causes, such as calibration issues and hardware malfunctions, or from scene perturbations, like occlusions, weather issues and object shifting. It is important to manage these situations by taking them into account in the perception process. The main contributions of this dissertation focus on the DATMO stage of the perception problem.
Precisely, we believe that by including the object's class as a key element of the object's representation and by managing the uncertainty from multiple sensor detections, we can improve the results of the perception task, i.e., deliver a more reliable list of moving objects of interest represented by their dynamic state and appearance information. Therefore, we address the problems of sensor data association, and of sensor fusion for object detection, classification, and tracking at different levels within the DATMO stage. Although we focus on a set of three main sensors (radar, lidar, and camera), we propose a modifiable architecture to include other types or numbers of sensors. First, we define a composite object representation to include class information as a part of the object state from the early stages to the final output of the perception task. Second, we propose, implement, and compare two different perception architectures to solve the DATMO problem according to the level at which object association, fusion, and classification information is included and performed. Our data fusion approaches are based on the evidential framework, which is used to manage and include the uncertainty from sensor detections and object classifications. Third, we propose an evidential data association approach to establish a relationship between two sources of evidence from object detections. We observe how the class information improves the final result of the DATMO component. Fourth, we integrate the proposed fusion approaches as part of a real-time vehicle application. This integration has been performed in a real vehicle demonstrator from the interactIVe European project. Finally, we analysed and experimentally evaluated the performance of the proposed methods. We compared our evidential fusion approaches against each other and against a state-of-the-art method using real data from different driving scenarios.
These comparisons focused on the detection, classification and tracking of different moving objects: pedestrians, bikes, cars and trucks.
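The evidential (Dempster-Shafer) combination underlying the fusion approaches above can be sketched in a few lines. The frame of discernment and the mass values below are invented for illustration:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Masses are dicts mapping frozenset hypotheses to belief mass;
    conflicting mass is redistributed by normalisation.
    """
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

CAR, PED = frozenset({"car"}), frozenset({"pedestrian"})
THETA = CAR | PED                     # ignorance: "car or pedestrian"
# Two sensors lean toward "car" with different confidence (invented masses)
m_lidar = {CAR: 0.6, PED: 0.1, THETA: 0.3}
m_camera = {CAR: 0.7, PED: 0.2, THETA: 0.1}
m_fused = dempster_combine(m_lidar, m_camera)
```

Mass assigned to the full frame THETA lets a sensor express ignorance rather than a forced choice, which is the property the dissertation exploits for incomplete information.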
APA, Harvard, Vancouver, ISO und andere Zitierweisen
20

Högger, Andreas. „Dempster Shafer Sensor Fusion for Autonomously Driving Vehicles : Association Free Tracking of Dynamic Objects“. Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187814.

Der volle Inhalt der Quelle
Annotation:
Autonomous driving vehicles introduce challenging research areas combining different disciplines. One challenge is the detection of obstacles with different sensors and the combination of this information to generate a comprehensive representation of the environment, which can be used for path planning and decision making. The sensor fusion is demonstrated using two Velodyne multi-beam laser scanners, but it is possible to extend the proposed sensor fusion framework to different sensor types. Sensor fusion methods are highly dependent on an accurate pose estimate, which cannot be guaranteed in every case. A fault-tolerant sensor fusion based on the Dempster-Shafer theory, which takes the uncertainty of the pose estimate into account, is discussed and compared using an example, although it is not implemented on the test vehicle. Based on the fused occupancy grid map, dynamic obstacles are tracked to give a velocity estimate without the need for any object or track association methods. Experiments are carried out on real-world data as well as on simulated measurements, for which a ground truth reference is provided. The occupancy grid mapping algorithm runs on central and graphical processing units, which allows a comparison between the two approaches and shows which approach is preferable depending on the application.
Självkörande bilar har lett till flera intressanta forskningsområden som kombinerar många olika discipliner. En utmaning är att ge fordonet en sorts ögon. Genom att använda ytterligare sensorer och kombinera data från samtliga så kan man detektera hinder i fordonets väg. Detta kan naturligtvis användas för att förbättra fordonets planerade rutt och därmed också minska klimatpåverkan. Här används två sammankopplade Velodyne laserstrålsensorer för att undersöka detta närmare, men det går också att utöka antalet sensorer ytterligare. Sammanlänkningen av sensorer är mycket känslig och kräver därför exakta koordinater, vilket inte alltid kan garanteras. Därför utreds istället om en sensorfusion baserad på Dempster-Shafer-teorin kan användas för att hantera fel och osäkerheter. Denna används dock inte i testfordonet. Baserat på en sammanvägd kartbild över upptagna och fria områden (occupancy grid mapping) kan objekt och hinder i rörelse följas för att uppskatta deras hastighet utan att metoder för objekt- eller banidentifiering behöver användas. Experiment har utförts på verklig data. Dessutom används simulerade mätningar där en sann grundreferens används. Algoritmen som används för occupancy-kartan använder sig av central- och grafikprocessorenheter, vilket ger oss möjlighet att jämföra två metoder och finna den bäst fungerande metoden för olika applikationer.
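The fused occupancy grid map mentioned above is typically maintained with Bayesian log-odds updates per cell. A minimal sketch (the inverse-sensor-model probabilities below are invented; the thesis's CPU/GPU implementation is far more elaborate):

```python
import numpy as np

def logit(p):
    """Probability to log-odds."""
    return np.log(p / (1.0 - p))

def update_cell(l, p_hit):
    """Bayesian log-odds update of one grid cell with an
    inverse-sensor-model probability p_hit."""
    return l + logit(p_hit)

def prob(l):
    """Log-odds back to occupancy probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(l))

l = 0.0                      # prior P(occupied) = 0.5 in log-odds
for _ in range(5):
    l = update_cell(l, 0.7)  # five consistent "occupied" returns
p_occ = prob(l)
```

Because updates are additive in log-odds, contradictory evidence simply cancels: five subsequent updates with p_hit = 0.3 would return the cell exactly to its 0.5 prior.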
APA, Harvard, Vancouver, ISO und andere Zitierweisen
21

Wilson, Dean A. „Analysis of tracking and identification characteristics of diverse systems and data sources for sensor fusion“. Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA392099.

Der volle Inhalt der Quelle
Annotation:
Thesis (M.S. in Aeronautical Engineering) Naval Postgraduate School, June 2001.
Thesis advisor(s): Duren, Russ; Hutchins, Gary. "June 2001". Includes bibliographical references (p. 115-117). Also available online.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Rutkowski, Adam J. „A BIOLOGICALLY-INSPIRED SENSOR FUSION APPROACH TO TRACKING A WIND-BORNE ODOR IN THREE DIMENSIONS“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=case1196447143.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Marron, Monteserin Juan Jose. „Multi Sensor System for Pedestrian Tracking and Activity Recognition in Indoor Environments“. Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5068.

Der volle Inhalt der Quelle
Annotation:
The widespread use of mobile devices and the rise of Global Navigation Satellite Systems (GNSS) have allowed mobile tracking applications to become very popular and valuable in outdoor environments. However, tracking pedestrians in indoor environments with Global Positioning System (GPS)-based schemes is still very challenging, given the lack of enough signals to locate the user. Along with indoor tracking, the ability to recognize pedestrian behavior and activities can lead to considerable growth in location-based applications, including pervasive healthcare, leisure and guide services (such as museums, airports, and stores), and emergency services, among the most important ones. This thesis presents a system for pedestrian tracking and activity recognition in indoor environments using exclusively common off-the-shelf sensors embedded in smartphones (accelerometer, gyroscope, magnetometer and barometer). The proposed system combines the knowledge found in biomechanical patterns of the human body while accomplishing basic activities, such as walking or climbing stairs up and down, along with identifiable signatures that certain indoor locations (such as turns or elevators) introduce on sensing data. The system was implemented and tested on Android-based mobile phones with a fixed phone position. The system provides accurate step detection and count, with an error of 3% in flat-floor motion traces and 3.33% on stairs. The detection of user changes of direction and altitude is performed with 98.88% and 96.66% accuracy, respectively. In addition, the activity recognition module has an accuracy of 95%. The combination of modules leads to a total tracking accuracy of 90.81% in common indoor human motion displacements.
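The step-detection component described above can be approximated by peak counting on the accelerometer magnitude. The sketch below uses synthetic data and invented thresholds; it is not the thesis's biomechanics-informed detector:

```python
import numpy as np

def count_steps(acc_mag, fs, threshold=10.8, min_interval=0.3):
    """Count steps as above-threshold peaks in the accelerometer
    magnitude, enforcing a refractory interval between steps."""
    steps, last = 0, -np.inf
    for i in range(1, len(acc_mag) - 1):
        is_peak = acc_mag[i - 1] < acc_mag[i] >= acc_mag[i + 1]
        if is_peak and acc_mag[i] > threshold and (i - last) / fs >= min_interval:
            steps += 1
            last = i
    return steps

# Synthetic walk: 2 Hz step cadence on top of gravity, sampled at 50 Hz
fs, dur = 50, 10.0
t = np.arange(0, dur, 1.0 / fs)
acc = 9.81 + 2.0 * np.maximum(0, np.sin(2 * np.pi * 2.0 * t))
n = count_steps(acc, fs)
```

The refractory interval rejects double counts from a single heel strike; real detectors additionally low-pass filter the signal and adapt the threshold per user.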
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Homelius, Marcus. „Tracking of Ground Vehicles : Evaluation of Tracking Performance Using Different Sensors and Filtering Techniques“. Thesis, Linköpings universitet, Reglerteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148432.

Der volle Inhalt der Quelle
Annotation:
It is crucial to find a good balance between positioning accuracy and cost when developing navigation systems for ground vehicles. In open sky or even in a semi-urban environment, a single global navigation satellite system (GNSS) constellation performs sufficiently well. However, the positioning accuracy decreases drastically in urban environments. Because of the limitation in tracking performance for standalone GNSS, particularly in cities, many solutions are now moving toward integrated systems that combine complementary sensors. In this master thesis the improvement of tracking performance for a low-cost ground vehicle navigation system is evaluated when complementary sensors are added and different filtering techniques are used. How the GNSS aided inertial navigation system (INS) is used to track ground vehicles is explained in this thesis. This has shown to be a very effective way of tracking a vehicle through GNSS outages. Measurements from an accelerometer and a gyroscope are used as inputs to inertial navigation equations. GNSS measurements are then used to correct the tracking solution and to estimate the biases in the inertial sensors. When velocity constraints on the vehicle’s motion in the y- and z-axis are included, the GNSS aided INS has shown very good performance, even during long GNSS outages. Two versions of the Rauch-Tung-Striebel (RTS) smoother and a particle filter (PF) version of the GNSS aided INS have also been implemented and evaluated. The PF has shown to be computationally demanding in comparison with the other approaches and a real-time implementation on the considered embedded system is not doable. The RTS smoother has shown to give a smoother trajectory but a lot of extra information needs to be stored and the position accuracy is not significantly improved. Moreover, map matching has been combined with GNSS measurements and estimates from the GNSS aided INS. 
The Viterbi algorithm is used to output the road segment identification numbers of the most likely path, and the estimates are then matched to the closest positions on these roads. A suggested solution to achieve reliable tracking with high accuracy in all environments is to run the GNSS aided INS in real time in the vehicle and simultaneously send the horizontal position coordinates to a back office where map information is kept and map matching is performed.
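The Viterbi-based map matching described above can be sketched as standard HMM decoding over road-segment states. The toy network, transition and observation probabilities below are invented for illustration:

```python
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """Most likely state sequence of an HMM.

    log_pi: (S,) initial log-probs; log_A: (S, S) transition log-probs;
    log_B: (T, S) per-step observation log-likelihoods.
    """
    T, S = log_B.shape
    delta = log_pi + log_B[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # (from-state, to-state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy network: 3 road segments; the vehicle can stay or move to a neighbour
log_pi = np.log([1 / 3, 1 / 3, 1 / 3])
A = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
log_A = np.log(np.where(A > 0, A, 1e-12))
# Per-step segment likelihoods, e.g. from GNSS-to-segment distances (invented)
B = np.array([[0.9, 0.1, 0.0],
              [0.6, 0.4, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.3, 0.7],
              [0.0, 0.1, 0.9]])
log_B = np.log(np.where(B > 0, B, 1e-12))
path = viterbi(log_pi, log_A, log_B)
```

The transition matrix encodes road connectivity, so the decoded path can never jump between unconnected segments even when a single noisy fix favours one.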
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Gale, Nicholas C. „FUSION OF VIDEO AND MULTI-WAVEFORM FMCW RADAR FOR TRAFFIC SURVEILLANCE“. Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1315857639.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Vincent, David E. „PORTABLE INDOOR MULTI-USER POSITION TRACKING SYSTEM FOR IMMERSIVE VIRTUAL ENVIRONMENTS USING SENSOR FUSION WITH MULTIDIMENSIONAL SCALING“. Miami University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=miami1335634621.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Chen, Yangsheng. „Ground Target Tracking with Multi-Lane Constraint“. ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/925.

Der volle Inhalt der Quelle
Annotation:
Knowledge of the lane that a target is located in is of particular interest in on-road surveillance and target tracking systems. We formulate the problem and propose two approaches for on-road target estimation with lane tracking. The first approach for lane tracking is lane identification based on a Hidden Markov Model (HMM) framework. Two identifiers are developed according to different optimality goals of identification, i.e., optimality for the whole lane sequence and optimality of the current lane of the target given the whole observation sequence. The second approach is on-road target tracking with lane estimation. We propose a 2D road representation which additionally allows the lateral motion of the target to be modelled. For fusion of the radar- and image-sensor-based measurement data we develop three IMM-based estimators that use different fusion schemes: centralized, distributed, and sequential. Simulation results show that the two proposed methods provide new capabilities and achieve improved estimation accuracy for on-road target tracking.
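The current-lane optimality goal above corresponds to HMM forward filtering, i.e. the posterior over the present lane given the observations so far. A minimal sketch with an invented three-lane model (not the thesis's identifiers or IMM estimators):

```python
import numpy as np

def forward_filter(pi, A, B):
    """HMM forward recursion: P(current lane | observations so far)."""
    alpha = pi * B[0]
    alpha /= alpha.sum()
    for b_t in B[1:]:
        alpha = (alpha @ A) * b_t      # predict through transitions, then weight
        alpha /= alpha.sum()           # normalise to a probability vector
    return alpha

pi = np.full(3, 1 / 3)                       # three lanes, uniform prior
A = np.array([[0.9, 0.1, 0.0],               # lane-change transition model
              [0.05, 0.9, 0.05],
              [0.0, 0.1, 0.9]])
B = np.array([[0.5, 0.4, 0.1],               # per-step lane likelihoods,
              [0.3, 0.6, 0.1],               # e.g. from lateral position
              [0.1, 0.7, 0.2],               # (invented values)
              [0.1, 0.8, 0.1]])
posterior = forward_filter(pi, A, B)
```

Whereas Viterbi commits to one whole-sequence hypothesis, the forward filter keeps a full distribution, which is the natural output when only the current lane matters.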
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Hucks, John A. „Fusion of ground-based sensors for optimal tracking of military targets“. Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27067.

Der volle Inhalt der Quelle
Annotation:
Extended Kalman filtering is applied as an extension of the Position Location Reporting System (PLRS) to track a moving target in the XY plane. The application uses four sets of observables which correspond to inputs from a fused-sensor array where the sensors employed are acoustic, seismic, or radar. The nonlinearities in the Kalman filter occur through the measured observables, which are: bearings to the target only, ranges to the target only, bearings and ranges to the target, and a Doppler-shifted frequency accompanied by the bearing to that frequency. The observables are nonlinear in their relationships to the Cartesian coordinate states of the filter. Filter error covariances are portrayed as error ellipsoids about the latest target estimate made by the filter. Rotation of the ellipsoids is performed to remove the cross-correlation of the coordinates. The ellipsoids employed are one standard deviation in the rotated coordinate system and correspond to a constant probability of target location about the latest Kalman target estimate. Filtering techniques are evaluated for both stationary and moving observers with arbitrarily moving targets. The objective of creating a user-friendly, personal-computer-based tracking algorithm is also discussed.
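The rotated one-standard-deviation error ellipsoids described above come from an eigendecomposition of the position covariance, which removes the cross-correlation between the coordinates. A 2D sketch with an invented covariance:

```python
import numpy as np

def error_ellipse(P):
    """1-sigma error ellipse of a 2x2 covariance: semi-axes and rotation.

    The eigendecomposition rotates into a frame where the coordinates
    are uncorrelated; the semi-axis lengths are the square roots of the
    eigenvalues.
    """
    vals, vecs = np.linalg.eigh(P)               # ascending eigenvalues
    order = vals.argsort()[::-1]                 # major axis first
    vals, vecs = vals[order], vecs[:, order]
    angle = np.arctan2(vecs[1, 0], vecs[0, 0]) % np.pi  # major-axis heading
    return np.sqrt(vals), angle

P = np.array([[4.0, 1.5],
              [1.5, 2.0]])                       # correlated XY covariance
axes, angle = error_ellipse(P)
```

The products and sums of the squared semi-axes must reproduce the determinant and trace of P, a quick consistency check on any ellipse-plotting code.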
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Mangette, Clayton John. „Perception and Planning of Connected and Automated Vehicles“. Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98812.

Der volle Inhalt der Quelle
Annotation:
Connected and Automated Vehicles (CAVs) represent a growing area of study in robotics and automotive research. Their potential benefits of increased traffic flow, reduced on-road accidents, and improved fuel economy make them an attractive option. While some autonomous features such as Adaptive Cruise Control and Lane Keep Assist are already integrated into consumer vehicles, they are limited in scope and require innovation to realize fully autonomous vehicles. This work addresses the design problems of perception and planning in CAVs. A decentralized sensor fusion system is designed using multi-target tracking to identify targets within a vehicle's field of view, label each target with the lane it occupies, and highlight the most important object (MIO) for Adaptive Cruise Control. Its performance is tested using the Optimal Sub-Pattern Assignment (OSPA) metric and the correct assignment rate of the MIO. The system has an average MIO assignment accuracy of 98%. The rest of this work considers the coordination of multiple CAVs from a multi-agent motion planning perspective. A centralized planning algorithm is applied to a space similar to a traffic intersection and is demonstrated empirically to be twice as fast as existing multi-agent planners, making it suitable for real-time planning environments.
Master of Science
Connected and Automated Vehicles are an emerging area of research that involve integrating computational components to enable autonomous driving. This work considers two of the major challenges in this area of research. The first half of this thesis considers how to design a perception system in the vehicle that can correctly track other vehicles and assess their relative importance in the environment. A sensor fusion system is designed which incorporates information from different sensor types to form a list of relevant target objects. The rest of this work considers the high-level problem of coordination between autonomous vehicles. A planning algorithm which plans the paths of multiple autonomous vehicles that is guaranteed to prevent collisions and is empirically faster than existing planning methods is demonstrated.
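The OSPA metric used above to score the tracker can be computed exactly for small sets by brute-force assignment. A sketch with invented track/truth positions (the cut-off c and order p are the metric's usual parameters):

```python
import numpy as np
from itertools import permutations

def ospa(X, Y, c=2.0, p=2):
    """Optimal Sub-Pattern Assignment distance between two point sets.

    Brute-force over assignments, so only suitable for small sets;
    unmatched objects are penalised at the cut-off c.
    """
    if len(X) > len(Y):
        X, Y = Y, X
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0
    best = np.inf
    for perm in permutations(range(n), m):
        cost = sum(min(np.linalg.norm(X[i] - Y[j]), c) ** p
                   for i, j in zip(range(m), perm))
        best = min(best, cost)
    return ((best + c ** p * (n - m)) / n) ** (1 / p)

tracks = np.array([[0.0, 0.0], [5.0, 5.0]])
truth = np.array([[0.1, 0.0], [5.0, 5.2], [9.0, 9.0]])  # one missed object
d = ospa(tracks, truth)
```

The cardinality penalty for the missed object dominates here, which is exactly why OSPA is preferred over plain positional RMSE for multi-target trackers.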
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Denman, Simon Paul. „Improved detection and tracking of objects in surveillance video“. Queensland University of Technology, 2009. http://eprints.qut.edu.au/29328/.

Der volle Inhalt der Quelle
Annotation:
Surveillance networks are typically monitored by a few people, viewing several monitors displaying the camera feeds. It is then very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data, to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies. To recognize an event, people and objects must be tracked. Tracking also enhances the performance of tasks such as crowd analysis or human identification. Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or due to the detection routines being unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, and make it unnecessary for the task of detection to be carried out separately, except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point each frame. Such systems however do not easily allow for multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects. This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter.
A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and maintenance of identity is handled by the underlying tracking system, and the tracking system is able to benefit from the improved performance in uncertain conditions arising from occlusion and noise provided by a particle filter. The system is evaluated using the ETISEO database. The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results and demonstrates a significant improvement in performance when compared to a system using either mode individually. Findings from the thesis contribute to improved performance of semi-automated video processing and therefore improved security in areas under surveillance.
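The condensation-style particle filtering discussed above can be sketched as a bootstrap filter with systematic resampling. This 1D toy (invented noise levels, not the SCF) shows the predict-weight-resample cycle:

```python
import numpy as np

rng = np.random.default_rng(42)

def pf_step(particles, weights, z, motion_std=0.5, meas_std=1.0):
    """One condensation cycle: diffuse, weight by likelihood, resample."""
    # Predict: random-walk diffusion of the particle cloud
    particles = particles + rng.normal(0, motion_std, size=particles.shape)
    # Update: Gaussian measurement likelihood around observation z
    weights = weights * np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # Systematic resampling keeps the particle count constant
    positions = (np.arange(len(weights)) + rng.random()) / len(weights)
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions),
                     len(weights) - 1)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

n = 500
particles = rng.uniform(-10, 10, n)     # uninitialised: spread over the scene
weights = np.full(n, 1.0 / n)
true_pos = 3.0
for _ in range(10):
    z = true_pos + rng.normal(0, 1.0)
    particles, weights = pf_step(particles, weights, z)
estimate = particles.mean()
```

The multi-modal cloud is what lets such filters ride out occlusions; as the abstract notes, identity management across multiple objects must be handled by an outer tracking layer.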
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Malik, Zohaib Mansoor. „Design and implementation of temporal filtering and other data fusion algorithms to enhance the accuracy of a real time radio location tracking system“. Thesis, Högskolan i Gävle, Avdelningen för elektronik, matematik och naturvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-13261.

Der volle Inhalt der Quelle
Annotation:
A general automotive navigation system is a satellite navigation system designed for use in automobiles. It typically uses GPS to acquire position data to locate the user on a road in the unit's map database. However, recent improvements in the performance of small and lightweight micro-machined electromechanical systems (MEMS) inertial sensors have made the application of inertial techniques to such problems possible. This has resulted in an increased interest in the topic of inertial navigation. In location tracking systems, sensors are used either individually or in conjunction, as in data fusion. However, their measurements remain noisy, so there is a need to gather as much data as possible and then build an efficient system that can remove the noise from the data and provide a better estimate. The task of this thesis work was to take data from two sensors and use an estimation technique to provide an accurate estimate of the true location. The proposed sensors were an accelerometer and a GPS device. This thesis, however, deals with the accelerometer sensor and an estimation scheme, the Kalman filter. The thesis report presents an insight into both the proposed sensors and different estimation techniques. Within the scope of the work, the task was performed using the simulation software Matlab. The Kalman filter's efficiency was examined using different noise levels.
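The accelerometer-plus-GPS estimation scheme discussed above is classically realized as a Kalman filter with the accelerometer as a control input and GPS as the position measurement. A minimal 1D sketch with invented noise levels and rates (the thesis performs its evaluation in Matlab):

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])               # state: [position, velocity]
G = np.array([0.5 * dt**2, dt])          # accelerometer enters as control input
H = np.array([[1.0, 0.0]])               # GPS observes position only
Q = np.diag([1e-4, 1e-3])                # process noise (covers accel noise)
R = np.array([[1.0]])                    # GPS noise, ~1 m std

rng = np.random.default_rng(7)
x, P = np.zeros(2), np.eye(2) * 10.0
true_x, acc_true = np.zeros(2), 0.5
for k in range(300):
    true_x = F @ true_x + G * acc_true   # simulate the true trajectory
    acc_meas = acc_true + rng.normal(0.0, 0.05)
    x = F @ x + G * acc_meas             # predict with the accelerometer
    P = F @ P @ F.T + Q
    if k % 10 == 0:                      # 1 Hz GPS position fix
        z = true_x[0] + rng.normal(0.0, 1.0)
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
pos_err = abs(x[0] - true_x[0])
```

The accelerometer carries the estimate between the slow GPS fixes, while each fix reins in the drift accumulated by dead reckoning, which is the complementary behaviour the thesis examines under different noise levels.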
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Johansson, Ronnie. „Large-Scale Information Acquisition for Data and Information Fusion“. Doctoral thesis, Stockholm, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3890.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Lee, Yeongseon. „Bayesian 3D multiple people tracking using multiple indoor cameras and microphones“. Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29668.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph.D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Russell M. Mersereau; Committee Member: Biing Hwang (Fred) Juang; Committee Member: Christopher E. Heil; Committee Member: Georgia Vachtsevanos; Committee Member: James H. McClellan. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Buaes, Alexandre Greff. „A low cost one-camera optical tracking system for indoor wide-area augmented and virtual reality environments“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2006. http://hdl.handle.net/10183/7138.

Der volle Inhalt der Quelle
Annotation:
In the last few years, the number of industrial applications for Augmented Reality (AR) and Virtual Reality (VR) environments has significantly increased. Optical tracking systems are an important component of AR/VR environments. In this work, a low-cost optical tracking system with adequate attributes for professional use is proposed. The system works in the infrared spectral region to reduce optical noise. A high-speed camera, equipped with a daylight-blocking filter and infrared flash strobes, transfers uncompressed grayscale images to a regular PC, where image pre-processing software and the PTrack tracking algorithm recognize a set of retro-reflective markers and extract their 3D position and orientation. Included in this work is comprehensive research on image pre-processing and tracking algorithms. A testbed was built to perform accuracy and precision tests. Results show that the system reaches accuracy and precision levels slightly worse than, but still comparable to, professional systems. Due to its modularity, the system can be expanded by using several one-camera tracking modules linked by a sensor fusion algorithm, in order to obtain a larger working range. A setup with two modules was built and tested, resulting in performance similar to the stand-alone configuration.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Wijk, Olle. „Triangulation Based Fusion of Sonar Data with Application in Mobile Robot Mapping and Localization“. Doctoral thesis, Stockholm : Tekniska högsk, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3124.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

O-larnnithipong, Nonnarit. „Hand Motion Tracking System using Inertial Measurement Units and Infrared Cameras“. FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3905.

Der volle Inhalt der Quelle
Annotation:
This dissertation presents a novel approach to develop a system for real-time tracking of the position and orientation of the human hand in three-dimensional space, using MEMS inertial measurement units (IMUs) and infrared cameras. This research focuses on the study and implementation of an algorithm to correct the gyroscope drift, which is a major problem in orientation tracking using commercial-grade IMUs. An algorithm to improve the orientation estimation is proposed. It consists of: 1.) Prediction of the bias offset error while the sensor is static, 2.) Estimation of a quaternion orientation from the unbiased angular velocity, 3.) Correction of the orientation quaternion utilizing the gravity vector and the magnetic North vector, and 4.) Adaptive quaternion interpolation, which determines the final quaternion estimate based upon the current conditions of the sensor. The results verified that the implementation of the orientation correction algorithm using the gravity vector and the magnetic North vector is able to reduce the amount of drift in orientation tracking and is compatible with position tracking using infrared cameras for real-time human hand motion tracking. Thirty human subjects participated in an experiment to validate the performance of the hand motion tracking system. The statistical analysis shows that the error of position tracking is, on average, 1.7 cm in the x-axis, 1.0 cm in the y-axis, and 3.5 cm in the z-axis. The Kruskal-Wallis tests show that the orientation correction algorithm using gravity vector and magnetic North vector can significantly reduce the errors in orientation tracking in comparison to fixed offset compensation. 
Statistical analyses show that the orientation correction algorithm using gravity vector and magnetic North vector and the on-board Kalman-based orientation filtering produced orientation errors that were not significantly different in the Euler angles, Phi, Theta and Psi, with the p-values of 0.632, 0.262 and 0.728, respectively. The proposed orientation correction algorithm represents a contribution to the emerging approaches to obtain reliable orientation estimates from MEMS IMUs. The development of a hand motion tracking system using IMUs and infrared cameras in this dissertation enables future improvements in natural human-computer interactions within a 3D virtual environment.
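Steps 1 and 2 of the correction algorithm above (static bias prediction, then orientation estimation from the unbiased angular velocity) can be illustrated with a hypothetical single-axis sketch; a real IMU pipeline integrates full 3D quaternion kinematics, so this reduction to rotation about one axis is only for intuition:

```python
import math

def estimate_bias(static_samples):
    """Step 1: predict the gyro bias offset while the sensor is static,
    here simply as the mean of the static readings."""
    return sum(static_samples) / len(static_samples)

def integrate_z_rotation(rates, bias, dt):
    """Step 2 (single-axis sketch): integrate the unbiased angular rate
    about z and express the result as a quaternion (w, x, y, z)."""
    angle = 0.0
    for w in rates:
        angle += (w - bias) * dt          # remove the bias before integrating
    half = angle / 2.0
    return (math.cos(half), 0.0, 0.0, math.sin(half))
```

Without the bias subtraction the integrated angle would drift linearly with time, which is exactly the gyroscope drift the dissertation's correction algorithm targets.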
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Shaban, Heba Ahmed. „A Novel Highly Accurate Wireless Wearable Human Locomotion Tracking and Gait Analysis System via UWB Radios“. Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/27562.

Der volle Inhalt der Quelle
Annotation:
Gait analysis is the systematic study of human walking. Clinical gait analysis is the process by which quantitative information is collected for the assessment and decision-making of any gait disorder. Although observational gait analysis is the therapist's primary clinical tool for describing the quality of a patient's walking pattern, it can be very unreliable. Modern gait analysis is facilitated through the use of specialized equipment. Currently, accurate gait analysis requires dedicated laboratories with complex settings and highly skilled operators. Wearable locomotion tracking systems are available, but they are not sufficiently accurate for clinical gait analysis. At the same time, wireless healthcare is evolving. Particularly, ultra wideband (UWB) is a promising technology that has the potential for accurate ranging and positioning in dense multi-path environments. Moreover, impulse-radio UWB (IR-UWB) is suitable for low-power and low-cost implementation, which makes it an attractive candidate for wearable, low-cost, and battery-powered health monitoring systems. The goal of this research is to propose and investigate a full-body wireless wearable human locomotion tracking system using UWB radios. Ultimately, the proposed system should be capable of distinguishing between normal and abnormal gait, making it suitable for accurate clinical gait analysis.
Ph. D.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Nicolini, Andrea. „Multipath tracking techniques for millimeter wave communications“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17690/.

Der volle Inhalt der Quelle
Annotation:
The goal of this work is to study the problem of efficient and continuous tracking of the angle of arrival of the dominant multipath components in a millimeter-wave radio channel. In particular, a reference scenario is considered in which the direct path from a base station and two paths reflected by obstacles must be tracked under different operating conditions and movements of the mobile user. It is assumed that the mobile user can take noisy measurements of the angle of arrival of the three paths, one in line of sight and the other two not in line of sight, and possibly of the distances between itself and the three "sources" (for example, derived from received-power measurements). Using a state-space model, two different approaches were investigated: the first applies Kalman filtering directly to the angle-of-arrival measurements, while the second adopts a two-step method in which the state is represented by the positions of the base station and of the two obstacles, from which the angle-of-arrival estimates are evaluated. In both cases, the impact on the estimate of fusing the data from the inertial sensors integrated in the device, namely the angular velocity and acceleration of the mobile, with the angle-of-arrival measurements was investigated. After a mathematical modelling phase for the two approaches, they were implemented and tested in MATLAB, developing a simulator in which the user can choose the values of various parameters according to the desired scenario. The analyses carried out showed the robustness of the proposed strategies under different operating conditions.
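The first approach, Kalman filtering applied directly to the angle-of-arrival measurements, can be sketched as a scalar random-walk filter; wrapping the innovation, as below, is a common trick for angular states (a hypothetical sketch with made-up noise parameters, not the thesis implementation):

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def track_aoa(measurements, q=1e-3, r=0.09):
    """Scalar random-walk Kalman filter on noisy AoA measurements.
    Wrapping the innovation lets the estimate cross +/-pi cleanly."""
    x, P = measurements[0], r
    out = [x]
    for z in measurements[1:]:
        P += q                        # predict (random-walk model)
        K = P / (P + r)               # scalar Kalman gain
        x = wrap(x + K * wrap(z - x)) # wrapped innovation update
        P *= (1 - K)
        out.append(x)
    return out
```

The ratio `q / r` sets how quickly the estimate follows the measured angle of arrival versus how much measurement noise it smooths out.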
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Stein, Sebastian. „Multi-modal recognition of manipulation activities through visual accelerometer tracking, relational histograms, and user-adaptation“. Thesis, University of Dundee, 2014. https://discovery.dundee.ac.uk/en/studentTheses/61c22b7e-5f02-4f21-a948-bf9e7b497120.

Der volle Inhalt der Quelle
Annotation:
Activity recognition research in computer vision and pervasive computing has made a remarkable trajectory from distinguishing full-body motion patterns to recognizing complex activities. Manipulation activities as occurring in food preparation are particularly challenging to recognize, as they involve many different objects, non-unique task orders and are subject to personal idiosyncrasies. Video data and data from embedded accelerometers provide complementary information, which motivates an investigation of effective methods for fusing these sensor modalities. This thesis proposes a method for multi-modal recognition of manipulation activities that combines accelerometer data and video at multiple stages of the recognition pipeline. A method for accelerometer tracking is introduced that provides for each accelerometer-equipped object a location estimate in the camera view by identifying a point trajectory that matches well the accelerometer data. It is argued that associating accelerometer data with locations in the video provides a key link for modelling interactions between accelerometer-equipped objects and other visual entities in the scene. Estimates of accelerometer locations and their visual displacements are used to extract two new types of features: (i) Reference Tracklet Statistics characterizes statistical properties of an accelerometer's visual trajectory, and (ii) RETLETS, a feature representation that encodes relative motion, uses an accelerometer's visual trajectory as a reference frame for dense tracklets. In comparison to a traditional sensor fusion approach where features are extracted from each sensor-type independently and concatenated for classification, it is shown that combining RETLETS and Reference Tracklet Statistics with those sensor-specific features performs considerably better. 
Specifically addressing scenarios in which a recognition system would be primarily used by a single person (e.g., cognitive situational support), this thesis investigates three methods for adapting activity models to a target user based on user-specific training data. Via randomized control trials it is shown that these methods indeed learn user idiosyncrasies. All proposed methods are evaluated on two new challenging datasets of food preparation activities that have been made publicly available. Both datasets feature a novel combination of video and accelerometers attached to objects. The Accelerometer Localization dataset is the first publicly available dataset that enables quantitative evaluation of accelerometer tracking algorithms. The 50 Salads dataset contains 50 sequences of people preparing mixed salads with detailed activity annotations.
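The core idea of the accelerometer tracking method above, identifying the point trajectory that best matches the accelerometer data, can be illustrated by correlating each trajectory's second differences (its frame-to-frame acceleration) with the measured accelerometer magnitudes. This is a hypothetical sketch for intuition; the thesis's actual matching procedure may differ:

```python
def match_accelerometer(accel, trajectories):
    """Return the index of the 1D point trajectory whose second differences
    (a proxy for acceleration) correlate best with the accelerometer series."""
    def second_diff(p):
        return [p[i + 1] - 2 * p[i] + p[i - 1] for i in range(1, len(p) - 1)]

    def corr(a, b):
        # Pearson correlation over the common prefix; 0 for flat series.
        n = min(len(a), len(b))
        a, b = a[:n], b[:n]
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (va * vb) if va and vb else 0.0

    scores = [corr(accel, [abs(d) for d in second_diff(t)]) for t in trajectories]
    return max(range(len(scores)), key=scores.__getitem__)
```

Associating the accelerometer with the best-scoring trajectory is what provides the "key link" between the accelerometer-equipped object and the visual scene described above.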
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Lamard, Laetitia. „Approche modulaire pour le suivi temps réel de cibles multi-capteurs pour les applications routières“. Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22477/document.

Der volle Inhalt der Quelle
Annotation:
This PhD work, carried out in collaboration with Institut Pascal and Renault, is in the field of Advanced Driving Assistance Systems, most of which aim to improve passenger security. Sensor fusion makes the system's decisions more reliable. The goal of this PhD work was to develop a fusion system between a radar and a smart camera, improving obstacle detection in front of the vehicle. Our approach proposes a real-time, flexible fusion architecture using asynchronous data from the sensors without any prior knowledge about the application. Our fusion system is based on a multi-target tracking method. Probabilistic multi-target tracking was considered, and a method based on random finite sets (modelling the targets) was selected and tested in real-time computation. The filter, named CPHD (Cardinalized Probability Hypothesis Density), succeeds in taking into account and correcting all sensor defects (missed detections, false alarms, and imprecision in the position and speed estimated by the sensors) as well as uncertainty about the environment (an unknown number of targets). This system was improved by introducing management of the target type: pedestrian, car, truck, and bicycle. A new system was proposed, explicitly solving camera occlusion issues by a probabilistic method that takes this sensor's imprecision into account. The use of smart sensors induces data correlation (due to pre-processed data); this issue was solved by correcting the estimation of sensor detection performance. A new tool was set up to complete the fusion system: it allows the estimation of all sensor parameters used by the fusion filter. Our system was tested in real situations in several experiments, and every contribution was qualitatively and quantitatively validated.
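The CPHD filter used in the thesis avoids explicit measurement-to-track association. For intuition only, here is the association problem it sidesteps, in a deliberately naive greedy nearest-neighbour form with a gate (a hypothetical 1D sketch; unmatched detections play the role of false alarms or new targets, unmatched tracks that of missed detections):

```python
def associate(tracks, detections, gate=2.0):
    """Greedily pair 1D track positions with detections inside a gate.
    Returns (pairs, unmatched_track_indices, unmatched_detection_indices).
    Note: the result depends on track order, one of the fragilities that
    random-finite-set filters such as the CPHD avoid."""
    pairs = []
    used = set()
    for ti, t in enumerate(tracks):
        best, best_d = None, gate
        for di, d in enumerate(detections):
            if di in used:
                continue
            dist = abs(t - d)
            if dist < best_d:
                best, best_d = di, dist
        if best is not None:
            used.add(best)
            pairs.append((ti, best))
    matched_t = {p[0] for p in pairs}
    unmatched_t = [i for i in range(len(tracks)) if i not in matched_t]
    unmatched_d = [i for i in range(len(detections)) if i not in used]
    return pairs, unmatched_t, unmatched_d
```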
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Mekonnen, Alhayat Ali. „Cooperative people detection and tracking strategies with a mobile robot and wall mounted cameras“. Phd thesis, Université Paul Sabatier - Toulouse III, 2014. http://tel.archives-ouvertes.fr/tel-01068355.

Der volle Inhalt der Quelle
Annotation:
There is currently a growing demand for the deployment of mobile robots in public places. Fueling this demand, several researchers have deployed prototype robotic systems in public places such as hospitals, supermarkets, museums, and office environments. As robots leave their isolated industrial settings and begin to interact with humans in a shared workspace, a principal concern that must not be neglected is safe interaction. For a mobile robot to behave in a safe and acceptable interactive manner, it needs to know the presence, location, and movements of people in order to better understand and anticipate their intentions and actions. This thesis aims to contribute in this direction by focusing on perception modalities for detecting and tracking people in the vicinity of a mobile robot. As a first contribution, this thesis presents an optimized automated visual people detector that explicitly takes into account the computational demand expected on the robot. Various comparative experiments are conducted to clearly highlight the improvements this detector brings, including its effects on the robot's reactivity during online missions. As a second contribution, the thesis proposes and validates a cooperative framework for fusing information from ambient wall-mounted cameras and sensors mounted on the mobile robot in order to better track people in the vicinity. The same framework is also validated by fusing data from the different sensors on the mobile robot in the absence of external perception.
Finally, we demonstrate the improvements brought by the developed perceptual modalities by deploying them on our robotic platform, illustrating the robot's ability to perceive people in public places and to respect their personal space during navigation.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Karlsson, Johannes. „Wireless video sensor network and its applications in digital zoo“. Doctoral thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-38032.

Der volle Inhalt der Quelle
Annotation:
Most computing and communicating devices have been personal computers connected to the Internet through a fixed network connection. It is believed that future communication devices will not be of this type; instead, the intelligence and communication capability will move into the various objects that surround us. This is often referred to as the "Internet of Things" or the "Wireless Embedded Internet". This thesis deals with video processing and communication in these types of systems. One application scenario dealt with in this thesis is real-time video transmission over wireless ad-hoc networks. Here, a set of devices automatically forms a network and starts to communicate without the need for any previous infrastructure. These devices act as both hosts and routers and can build up large networks in which they forward information for each other. We have identified two major problems when sending real-time video over wireless ad-hoc networks. One is the reactive design used by most ad-hoc routing protocols. When nodes move, some links used in the communication path between the sender and the receiver may disappear. The reactive routing protocols wait until some link on the path breaks and then start to search for a new path. This leads to long interruptions in packet delivery and does not work well for real-time video transmission. Instead, we propose an approach in which we identify when a route is about to break and start to search for new routes before this happens. This is called a proactive approach. Another problem is that video codecs are very sensitive to packet losses, while the wireless ad-hoc network is very error-prone. The most common way to handle lost packets in video codecs is to periodically insert frames that are not predictively coded. This method periodically corrects errors regardless of whether an error has occurred or not.
The method we propose is to insert frames that are not predictively coded directly after a packet has been lost, and only if a packet has been lost. Another area dealt with in this thesis is video sensor networks. These are small devices with communication and computational capacity, equipped with an image sensor so that they can capture video. Since these devices in general have very limited resources in terms of energy, computation, communication, and memory, they demand a lot of the video compression algorithms used. In standard video compression algorithms, the complexity is high for the encoder, while the decoder has low complexity and is just passively controlled by the encoder. We propose video compression algorithms for wireless video sensor networks in which complexity is reduced in the encoder by moving some of the image analysis to the decoder side. We have implemented our approach on actual low-power sensor nodes to test the developed algorithms. Finally, we have built a "Digital Zoo": a complete system including a large-scale outdoor video sensor network. The goal is to use the data collected from the video sensor network to create new experiences for physical visitors in the zoo, or "cyber" visitors from home. Here, several topics related to practical deployments of sensor networks are addressed.
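The loss-triggered refresh described above (code a non-predicted frame only right after a reported loss, instead of periodically) can be captured in a toy scheduling function. This is a hypothetical sketch of the decision logic only, not the author's encoder:

```python
def schedule_frames(n_frames, loss_feedback):
    """Loss-adaptive refresh: frame i is intra-coded ('I') only if it is the
    first frame or the previous frame's packet was reported lost; all other
    frames are predictively coded ('P')."""
    kinds = []
    for i in range(n_frames):
        if i == 0 or (i - 1) in loss_feedback:
            kinds.append('I')
        else:
            kinds.append('P')
    return kinds
```

Compared with a fixed intra period, no refresh bits are spent while the channel is clean, and recovery starts on the first frame after a loss is signalled back to the encoder.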
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Wiklund, Åsa. „Multiple Platform Bias Error Estimation“. Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2126.

Der volle Inhalt der Quelle
Annotation:

Sensor fusion has long been recognized as a means to improve target tracking. Sensor fusion deals with the merging of several signals into one to get a better and more reliable result. To get an improved and more reliable result, you have to trust the incoming data to be correct and free of unknown systematic errors. This thesis tries to find and estimate the size of the systematic errors that appear in a multi-platform environment where data is shared among the units. To be more precise, the error estimated within the scope of this thesis appears when platforms cannot determine their positions correctly and share target tracking data with their own corrupted position as a basis for determining the target's position. The algorithms developed in this thesis use Kalman filter theory, including the extended Kalman filter and the information filter, to estimate the platform location bias error. Three algorithms are developed with satisfying results. Depending on time constraints and computational demands, any one of the algorithms could be preferred.
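Since the platform location bias enters the shared measurements additively, it can be estimated by augmenting the filter state with a bias component. A hypothetical 1D sketch (one unbiased reference sensor and one biased platform sensor observing the same target; names and noise values are made up, and the thesis's own three algorithms may differ):

```python
def estimate_platform_bias(z_ref, z_plat, r1=1.0, r2=1.0, q=0.5):
    """Joint Kalman estimate of a target position and a constant platform
    bias: z_ref = target + noise, z_plat = target + bias + noise.
    State: [target, bias]; sequential scalar measurement updates."""
    x = [z_ref[0], 0.0]
    P = [[r1, 0.0], [0.0, 100.0]]         # vague prior on the bias
    for z1, z2 in zip(z_ref, z_plat):
        P[0][0] += q                      # target follows a random walk
        for z, H, r in ((z1, (1.0, 0.0), r1), (z2, (1.0, 1.0), r2)):
            Px = [P[0][0] * H[0] + P[0][1] * H[1],
                  P[1][0] * H[0] + P[1][1] * H[1]]   # P H^T
            S = H[0] * Px[0] + H[1] * Px[1] + r      # innovation covariance
            K = [Px[0] / S, Px[1] / S]               # Kalman gain
            y = z - (H[0] * x[0] + H[1] * x[1])
            x = [x[0] + K[0] * y, x[1] + K[1] * y]
            P = [[P[0][0] - K[0] * Px[0], P[0][1] - K[0] * Px[1]],
                 [P[1][0] - K[1] * Px[0], P[1][1] - K[1] * Px[1]]]
    return x[1]                           # bias estimate
```

The bias is observable here because the two sensors see the same target: the difference of their measurements exposes it.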

APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Bebek, Ozkan. „ROBOTIC-ASSISTED BEATING HEART SURGERY“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=case1201289393.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Hachour, Samir. „Suivi et classification d'objets multiples : contributions avec la théorie des fonctions de croyance“. Thesis, Artois, 2015. http://www.theses.fr/2015ARTO0206/document.

Der volle Inhalt der Quelle
Annotation:
This thesis deals with the multi-object tracking and classification problem. It is shown that belief functions allow the results of classical Bayesian methods to be improved. In particular, a recent approach dedicated to single-object classification is extended to the multi-object framework. It is shown that the assignment of detected observations to known objects is a fundamental issue in multi-object tracking and classification solutions. New assignment solutions based on belief functions are proposed in this thesis and are shown to be more robust than the other credal solutions from the recent literature. Finally, the issue of multi-sensor classification, which requires a second assignment phase, is addressed. In the latter case, two different multi-sensor architectures are proposed, a so-called centralized one and a distributed one. Many comparisons illustrate the value of this work, in both situations of constant and time-varying object classes.
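The belief-function machinery underlying such credal methods rests on combining mass functions from different sources. A minimal sketch of Dempster's rule of combination for two sources classifying an object (hypothetical masses; the thesis's assignment solutions build on, but are not reducible to, this operation):

```python
def dempster(m1, m2):
    """Dempster's rule of combination for mass functions whose focal
    elements are frozensets; mass assigned to the empty intersection
    (the conflict) is renormalized away."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict
    return {a: v / k for a, v in combined.items()}
```

Mass left on non-singleton sets (e.g. {car, truck}) expresses the ignorance that distinguishes belief functions from a plain Bayesian posterior.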
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Magnusson, Daniel. „A network based algorithm for aided navigation“. Thesis, Linköpings universitet, Institutionen för systemteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-75235.

Der volle Inhalt der Quelle
Annotation:
This thesis concerns the development of a navigation algorithm, primarily for the SAAB JAS 39 Gripen fighter aircraft, operating in swarms with other units. The algorithm uses information from conventional navigation systems together with additional aiding information from a radio data link: relative range measurements. As GPS can be jammed, this group tracking solution can provide increased navigation performance under such conditions. For simplicity, simplified characteristics are used in the simulations, where simple generated trajectories and measurements are used. This measurement information can then be fused using filter theory applied from the sensor fusion area with statistical approaches. By using the radio data link and the external information sources, i.e. other aircraft and different types of landmarks with often good performance, navigation is aided when GPS is not usable, e.g. under hostile GPS conditions. A number of scenarios with an operative sense of reality were simulated to verify and study these conditions and to give results with conclusions.
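Range measurements to units with well-known positions can aid navigation through a position fix computed from the relative ranges. Below is a hypothetical Gauss-Newton trilateration sketch in 2D (made-up names; the thesis fuses such information with conventional navigation through filtering rather than a standalone fix):

```python
import math

def trilaterate(anchors, ranges, iters=25):
    """Gauss-Newton position fix from ranges to known 2D landmarks."""
    # Start from the centroid of the anchors (a simple initial guess).
    x = sum(a[0] for a in anchors) / len(anchors)
    y = sum(a[1] for a in anchors) / len(anchors)
    for _ in range(iters):
        gx = gy = gxx = gxy = gyy = 0.0   # J^T e and J^T J accumulators
        for (ax, ay), d in zip(anchors, ranges):
            px, py = x - ax, y - ay
            dist = math.hypot(px, py) or 1e-9
            e = dist - d                  # range residual
            jx, jy = px / dist, py / dist # Jacobian row of the range
            gx += jx * e
            gy += jy * e
            gxx += jx * jx
            gxy += jx * jy
            gyy += jy * jy
        det = gxx * gyy - gxy * gxy
        if abs(det) < 1e-12:
            break
        # Solve the 2x2 normal equations (J^T J) [dx, dy] = -J^T e.
        dx = (-gx * gyy + gy * gxy) / det
        dy = (-gy * gxx + gx * gxy) / det
        x += dx
        y += dy
    return x, y
```

With noisy ranges the same least-squares fix becomes a measurement that a navigation filter can weigh against the inertial solution.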
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Patel, Shwetak Naran. „Infrastructure mediated sensing“. Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24829.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2009.
Committee Chair: Abowd, Gregory; Committee Member: Edwards, Keith; Committee Member: Grinter, Rebecca; Committee Member: LaMarca, Anthony; Committee Member: Starner, Thad.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Neumann, Markus. „Automatic multimodal real-time tracking for image plane alignment in interventional Magnetic Resonance Imaging“. Phd thesis, Université de Strasbourg, 2014. http://tel.archives-ouvertes.fr/tel-01038023.

Der volle Inhalt der Quelle
Annotation:
Interventional magnetic resonance imaging (MRI) aims at performing minimally invasive percutaneous interventions, such as tumor ablations and biopsies, under MRI guidance. During such interventions, the acquired MR image planes are typically aligned to the surgical instrument (needle) axis and to surrounding anatomical structures of interest in order to efficiently monitor the advancement in real-time of the instrument inside the patient's body. Object tracking inside the MRI is expected to facilitate and accelerate MR-guided interventions by allowing to automatically align the image planes to the surgical instrument. In this PhD thesis, an image-based workflow is proposed and refined for automatic image plane alignment. An automatic tracking workflow was developed, performing detection and tracking of a passive marker directly in clinical real-time images. This tracking workflow is designed for fully automated image plane alignment, with minimization of tracking-dedicated time. Its main drawback is its inherent dependence on the slow clinical MRI update rate. First, the addition of motion estimation and prediction with a Kalman filter was investigated and improved the workflow tracking performance. Second, a complementary optical sensor was used for multi-sensor tracking in order to decouple the tracking update rate from the MR image acquisition rate. Performance of the workflow was evaluated with both computer simulations and experiments using an MR compatible testbed. Results show a high robustness of the multi-sensor tracking approach for dynamic image plane alignment, due to the combination of the individual strengths of each sensor.
APA, Harvard, Vancouver, ISO, and other citation styles
49

Cook, Brandon M. „An Intelligent System for Small Unmanned Aerial Vehicle Traffic Management“. University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1617106257481515.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
50

Karimi, Majid. „Master's Programme in Information Technology: Using multiple Leap Motion sensors in Assembly workplace in Smart Factory“. Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-32392.

Full text of the source
Annotation:
The new industrial revolution is profoundly transforming manufacturing methods. Embedded intelligence and communication technologies enable the realization of the smart factory and support extensive customization of products. The assembly system is a critical segment of the smart factory. However, the complexity of production planning and the variety of products being manufactured drive factories to adopt methods that guide workers through unfamiliar tasks in the assembly section. Motion tracking is the process of capturing the movement of the human body or of objects, and it has been used in various industrial systems; it can be integrated into a wide range of applications, such as human-computer interaction, games and entertainment, and industry. Motion tracking can also be integrated into assembly systems and has the potential to improve this industry, but its integration into industrial processes is still not widespread. This thesis work provides a fully automatic tracking solution for future systems in the manufacturing industry and other fields. In general, a configurable, flexible, and scalable motion tracking system was created in this thesis to improve the tracking process. Different motion tracking methods and technologies, including the Kinect and the Leap Motion sensor, were compared for our environment, and the Leap Motion sensor was selected as the most appropriate method because it fulfills the demands of this project. Multiple Leap Motion sensors are used in this work to cover areas of different sizes. Data fusion between multiple Leap Motion sensors can be considered another novel contribution of this thesis: data from multiple sensors are combined to overcome the limited accuracy of a single sensor and to enable a practical industrial application. By fusing several sensors to achieve accuracies that allow implementation in practice, a motion tracking system with higher accuracy is created.
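The abstract above describes combining data from multiple Leap Motion sensors to improve accuracy. One standard way to fuse redundant position estimates from overlapping sensors is inverse-variance weighting; the following is a hypothetical sketch of that idea (the function name and variance values are illustrative assumptions, not the thesis's actual code):

```python
# Illustrative sketch: fuse two independent position estimates of the
# same point, e.g. one from each Leap Motion sensor, by weighting each
# with the inverse of its variance (the more certain sensor dominates).

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Inverse-variance weighted fusion of two independent estimates."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # always smaller than either input variance
    return fused, fused_var
```

For equally uncertain sensors reporting 1.0 and 2.0 with variance 0.5 each, the fused estimate is their mean, 1.5, with a reduced variance of 0.25, which matches the abstract's point that fusion yields higher accuracy than any single sensor.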
APA, Harvard, Vancouver, ISO, and other citation styles
