
Dissertations / Theses on the topic 'Interactive Multiple Model algorithm'


Consult the top 50 dissertations / theses for your research on the topic 'Interactive Multiple Model algorithm.'


1

Munir, Arshed. "Manoeuvring target tracking using different forms of the interacting multiple model algorithm." Thesis, University of Sussex, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240430.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Alat, Gokcen. "A Variable Structure - Autonomous - Interacting Multiple Model Ground Target Tracking Algorithm In Dense Clutter." Phd thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615512/index.pdf.

Full text
Abstract:
Tracking of a single ground target using GMTI radar detections is considered. A Variable Structure Autonomous Interacting Multiple Model (VS-A-IMM) structure is developed to address the challenges of ground target tracking while maintaining an acceptable level of computational complexity. The following approach is used in this thesis: use simple tracker structures; incorporate a priori information such as topographic constraints and road maps as much as possible; use enhanced gating techniques to minimize the effect of clutter; develop methods against stop-move motion and hide motion of the target; tackle on-road/off-road transitions and junction crossings; and establish measures against non-detections caused by the environment. The tracker structure is derived using a composite state estimation set-up that incorporates multiple models and MAP and MMSE estimation. The root mean square position and velocity error performance of the VS-A-IMM algorithm is compared with that of the baseline IMM and the VS-IMM methods found in the literature. It is observed that the newly developed VS-A-IMM algorithm performs better than the baseline methods in realistic conditions such as on-road/off-road transitions, tunnels, stops, junction crossings, and non-detections.
3

Benko, Matej. "Hledaní modelů pohybu a jejich parametrů pro identifikaci trajektorie cílů." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-445467.

Full text
Abstract:
This thesis deals with removing the noise that arises in so-called multilateration measurements of airborne targets. Bayesian estimation theory is primarily used for this purpose. The posterior density of the true (exact) position of the aircraft is derived. Along with the position (and also the velocity) of the aircraft, the geometry of the trajectory the aircraft is currently following is estimated, together with the so-called process noise, which characterizes how much the true trajectory may deviate from it. The estimation of this process noise is the most important part of the thesis. A maximum likelihood approach and a Bayesian approach are derived, along with various refinements and modifications of both; these improve the estimate when, for example, the target changes its manoeuvre, or address the initial inaccuracy of the maximum likelihood estimate. Finally, the possibility of combining the approaches, i.e. jointly estimating both the geometry and the process noise, is demonstrated.
4

Ege, Emre. "A Comparative Study Of Tracking Algorithms In Underwater Environment Using Sonar Simulation." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12608866/index.pdf.

Full text
Abstract:
Target tracking is one of the most fundamental elements of a radar system. The aim of target tracking is the reliable estimation of a target's true state based on a time history of noisy sensor observations. In real life, the sensor data may include substantial noise. This noise can render the raw sensor data unsuitable for direct use; instead, we must filter the noise, preferably in an optimal manner. For land, air and surface marine vehicles, very successful filtering methods have been developed. However, because of the significant differences in the underwater propagation environment and the associated differences in the corresponding sensors, the successful use of similar principles and techniques in an underwater scenario is still an active topic of research. A comparative study of the effects of the underwater environment on a number of tracking algorithms is the focus of the present thesis. The tracking algorithms inspected are the Kalman Filter (KF), the Extended Kalman Filter (EKF) and the Particle Filter. We also investigate in particular the IMM extension of the KF and EKF filters. These algorithms are tested under several underwater environment scenarios.
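For readers unfamiliar with the filters being compared, a minimal scalar Kalman filter sketch shows the predict/update cycle that the EKF, particle filter and IMM variants build on. All coefficients and the test signal here are illustrative assumptions, not the thesis's sonar model:

```python
import numpy as np

def kf_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : prior state estimate and its variance
    z    : new noisy measurement
    F, Q : state-transition coefficient and process-noise variance
    H, R : measurement coefficient and measurement-noise variance
    """
    # Predict: propagate the estimate and its uncertainty forward.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Filtering a noisy constant signal pulls the estimate toward the truth
# while shrinking its variance.
rng = np.random.default_rng(0)
x, P = 0.0, 1.0
for z in 5.0 + 0.3 * rng.standard_normal(200):
    x, P = kf_step(x, P, z)
```

The EKF replaces F and H with local linearizations of nonlinear dynamics, and the particle filter replaces the Gaussian posterior with a weighted sample set.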
5

Rastgoufard, Rastin. "The Interacting Multiple Models Algorithm with State-Dependent Value Assignment." ScholarWorks@UNO, 2012. http://scholarworks.uno.edu/td/1477.

Full text
Abstract:
The value of a state is a measure of its worth, so that, for example, waypoints have high value and regions inside of obstacles have very small value. We propose two methods of incorporating world information as state-dependent modifications to the interacting multiple models (IMM) algorithm, and then we use a game's player-controlled trajectories as ground truths to compare the normal IMM algorithm to versions with our proposed modifications. The two methods involve modifying the model probabilities in the update step and modifying the transition probability matrix in the mixing step based on the assigned values of different target states. The state-dependent value assignment modifications are shown experimentally to perform better than the normal IMM algorithm in both estimating the target's current state and predicting the target's next state.
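The abstract does not give the exact update equations, but its first modification, reweighting the IMM model probabilities by the value assigned to each model's predicted state, can be sketched roughly as follows. The function name and the numeric values are hypothetical:

```python
import numpy as np

def value_weighted_mode_probs(mode_probs, state_values):
    """Reweight IMM mode probabilities by the value assigned to each
    model's predicted state, then renormalize (illustrative sketch).

    mode_probs   : prior probability of each motion model
    state_values : value of the state each model predicts, e.g. high
                   near waypoints, near zero inside obstacles
    """
    w = np.asarray(mode_probs) * np.asarray(state_values)
    return w / w.sum()

# A model predicting the target inside an obstacle (value 0.05) is
# suppressed relative to one heading toward a waypoint (value 0.9).
probs = value_weighted_mode_probs([0.5, 0.5], [0.9, 0.05])
```

The second modification described, adjusting the transition probability matrix in the mixing step, would apply the same idea to the matrix rows rather than the mode probabilities.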
6

Sahin, Mehmet Alper. "Performance Optimization Of Monopulse Tracking Radar." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605364/index.pdf.

Full text
Abstract:
An analysis and simulation tool is developed for optimizing the system parameters of a monopulse target tracking radar and observing their effects on system performance over different scenarios. A monopulse tracking radar is modeled to measure the performance of the radar with given parameters. The radar model simulates the operation of a Class IA type monopulse automatic tracking radar, which uses a planar phased array. The interacting multiple model (IMM) estimator with the Probabilistic Data Association (PDA) technique is used as the tracking filter. In addition to the tracking radar model, an optimization tool is developed to optimize the system parameters of this model. The optimization tool implements a Genetic Algorithm (GA) from the GA Toolbox distributed by the Department of Automatic Control and Systems Engineering at the University of Sheffield. The thesis presents optimization results over several given scenarios and concludes on the effects of the tracking filter parameters, beamwidth and dwell interval on the confirmed track.
7

Aslan, Murat Samil. "Tracker-aware Detection: A Theoretical And An Experimental Study." Phd thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610474/index.pdf.

Full text
Abstract:
A promising line of research attempts to bridge the gap between detector and tracker by means of considering jointly optimal parameter settings for both of these subsystems. Along this fruitful path, this thesis study focuses on the problem of detection threshold optimization in a tracker-aware manner so that a feedback from the tracker to the detector is established to maximize the overall system performance. Special emphasis is given to the optimization schemes based on two non-simulation performance prediction (NSPP) methodologies for the probabilistic data association filter (PDAF), namely, the modified Riccati equation (MRE) and the hybrid conditional averaging (HYCA) algorithm. The possible improvements are presented in two domains: Non-maneuvering and maneuvering target tracking. In the first domain, a number of algorithmic and experimental evaluation gaps are identified and newly proposed methods are compared with the existing ones in a unified theoretical and experimental framework. Furthermore, for the MRE based dynamic threshold optimization problem, a closed-form solution is proposed. This solution brings a theoretical lower bound on the operating signal-to-noise ratio (SNR) concerning when the tracking system should be switched to the track before detect (TBD) mode. As the improvements of the second domain, some of the ideas used in the first domain are extended to the maneuvering target tracking case. The primary contribution is made by extending the dynamic optimization schemes applicable to the PDAF to the interacting multiple model probabilistic data association filter (IMM-PDAF). Resulting in an online feedback from the filter to the detector, this extension makes the tracking system robust against track losses under low SNR values.
8

Canolla, Adriano. "Interactive Multiple Model Estimation for Unmanned Aircraft Systems Detect and Avoid." Thesis, Illinois Institute of Technology, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13419136.

Full text
Abstract:

This research presents new methods to apply safety standards to Detect and Avoid (DAA) functions for Unmanned Aircraft Systems (UAS), using maneuvering target tracking and encounter models.

Previous DAA research methods focused on predefined, linear encounter generation. The new estimation and prediction methods in this research are based on the target tracking of maneuvering intruders using Multiple Model Adaptive Estimation (MMAE) and realistic random encounter generation based on an established encounter model.

When tracking maneuvering intruders there is limited knowledge of changes in intruder behavior beyond the current measurement. The standard Kalman filter (KF) with a single motion model is limited in performance for such problems due to ineffective responses as the target maneuvers. In these cases, state estimation can be improved using MMAE. It is assumed that the current active dynamic model is one of a discrete set of models, each of which is the basis for a separate filter. These filters run in parallel to estimate the states of targets with changing dynamics.

In practical applications of multiple model systems, one of the most popular algorithms for the MMAE is the Interacting Multiple Model (IMM) estimator. In the IMM, the regime switching is modeled by a finite state homogeneous Markov Chain. This is represented by a transition probability matrix characterizing the mode transitions. A Markov Chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the previous event.
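The regime-switching Markov chain can be made concrete with a small sketch. The two regimes and the matrix entries below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

# Transition probability matrix for two motion regimes (say, constant
# velocity and coordinated turn): rows sum to one, and the dominant
# diagonal expresses that mode switches are rare.
TPM = np.array([[0.95, 0.05],
                [0.10, 0.90]])

def predict_mode_probs(mu, tpm=TPM):
    """Chapman-Kolmogorov step used in the IMM mixing stage: propagate
    the mode probabilities one step through the Markov chain."""
    return mu @ tpm

mu = np.array([1.0, 0.0])        # start certain of constant velocity
for _ in range(50):              # with no measurements, the chain
    mu = predict_mode_probs(mu)  # relaxes toward its stationary law
```

For this matrix the stationary distribution is (2/3, 1/3), reflecting the assumed switching rates; in the full IMM these predicted probabilities are then corrected by each filter's measurement likelihood.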

This research uses the hazard state estimates (which are derived from DAA standards) to analyze the IMM performance, and then presents a new method to predict the hazard states. To reduce the prediction error, this new method accounts for maneuvering intruders. The new prediction method uses the prediction phase of the IMM algorithm to predict the future intruder aircraft states based on the current and past sensor measurements.

The estimation and prediction methods described in this thesis can help ensure safe encounters between UAS and manned aircraft in the National Airspace System through improvement of the trajectory estimation used to inform the DAA sensor certification process.

9

Vince, Robert Johnston. "An electromagnetic radome model using an interactive micro-computer finite element algorithm." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/25893.

Full text
10

Caglar, Musa. "Multiple Criteria Project Selection Problems." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610945/index.pdf.

Full text
Abstract:
In this study, we propose two biobjective mathematical models based on PROMETHEE V method for project selection problems. We develop an interactive approach (ib-PROMETHEE V) including data mining techniques to solve the first proposed mathematical model. For the second model, we propose NSGA-II with constraint handling method. We also develop a Preference Based Interactive Multiobjective Genetic Algorithm (IMGA) to solve the second proposed mathematical model. We test the performance of NSGA-II with constraint handling method and IMGA on randomly generated test problems.
11

Oyeniran, Oluyemi. "Estimating the Proportion of True Null Hypotheses in Multiple Testing Problems." Bowling Green State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1466358483.

Full text
12

Jia, Jia. "Interactive Imaging via Hand Gesture Recognition." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4259.

Full text
Abstract:
With the growth of computing power, digital image processing plays an increasingly important role in the modern world, including industry, medicine, communications and spaceflight technology. As a sub-field, interactive image processing emphasizes the communication between machine and human. The basic flow is definition of the object, analysis and training, then recognition and feedback. Generally speaking, the core issue is how to define the objects of interest and track them accurately enough for the interaction process to complete successfully. This thesis proposes a novel dynamic simulation scheme for interactive image processing. The work consists of two main parts: hand motion detection and hand gesture recognition. Within hand motion detection, movement of the hand is identified and extracted. In a given detection period, the current image is compared with the previous image to generate the difference between them; if this difference exceeds a predefined alarm threshold, a hand motion is detected. Furthermore, in some situations, changes of hand gesture must also be detected and classified. This task requires feature extraction and feature comparison across gesture types. The essential features of a hand gesture include low-level features such as colour and shape; another important feature is the orientation histogram, in which each type of hand gesture has its own particular representation.
Because the Gaussian Mixture Model (GMM) represents an object well through its essential feature elements, and Expectation-Maximization (EM) is an efficient procedure for computing the maximum likelihood match between test images and the predefined standard samples of each gesture, the similarity between a test image and the samples of each gesture type is estimated by the EM algorithm in a GMM. Experiments show that the proposed method works well and accurately.
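The GMM/EM combination described above can be sketched for the one-dimensional case. This is a generic textbook EM on made-up data, not the thesis's orientation-histogram features:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture by EM (sketch).
    Returns component means, standard deviations and weights."""
    mu = np.array([x.min(), x.max()])      # crude initialisation
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        # (the 1/sqrt(2*pi) constant cancels in the normalisation).
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood update given responsibilities.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi

# Two well-separated clusters are recovered from the mixed sample.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(6.0, 1.0, 300)])
mu, sigma, pi = em_gmm_1d(data)
```

In the thesis's setting each gesture class would have its own fitted mixture, and a test image would be assigned to the class whose mixture gives it the highest likelihood.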
13

Conklin, Nathan James. "A web-based, run-time extensible architecture for interactive visualization and exploration of diverse data." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/35998.

Full text
Abstract:
Information visualizations must often be custom programmed to support complex user tasks and database schemas. This is an expensive and time consuming effort, even when general-purpose visualizations are utilized within the solution. This research introduces the Snap visualization server and system architecture that addresses limitations of previous Snap-Together Visualization research and satisfies the need for flexibility in information visualizations. An enhanced visualization model is presented that formalizes multiple-view visualization in terms of the relational data model. An extensible architecture is introduced that enables flexible construction and component integration. It allows the integration of diverse data, letting users spend less time massaging the data prior to visualization. The web-based server enables universal access, easy distribution, and the ability to intermix and exploit existing components. This web-based software architecture provides a strong foundation for future multiple-view visualization development.
Master of Science
14

Furuhashi, Takeshi, Tomohiro Yoshikawa, and Masafumi Yamamoto. "A Study on Effects of Migration in MOGA with Island Model by Visualization." 日本知能情報ファジィ学会, 2008. http://hdl.handle.net/2237/20680.

Full text
Abstract:
Session ID: SA-G4-2
Joint 4th International Conference on Soft Computing and Intelligent Systems and 9th International Symposium on advanced Intelligent Systems, September 17-21, 2008, Nagoya University, Nagoya, Japan
15

Niu, Yue S., Ning Hao, and Heping Zhang. "Multiple Change-Point Detection: A Selective Overview." INST MATHEMATICAL STATISTICS, 2016. http://hdl.handle.net/10150/622820.

Full text
Abstract:
Very long and noisy sequence data arise in fields ranging from the biological sciences to the social sciences, including high-throughput data in genomics and stock prices in econometrics. Often such data are collected in order to identify and understand shifts in trends, for example from a bull market to a bear market in finance, or from a normal number of chromosome copies to an excessive number in genetics. Thus, identifying multiple change points in a long, possibly very long, sequence is an important problem. In this article, we review both classical and new multiple change-point detection strategies. Considering the long history and the extensive literature on change-point detection, we provide an in-depth discussion of the normal mean change-point model from the perspectives of regression analysis, hypothesis testing, consistency and inference. In particular, we present a strategy to gather and aggregate local information for change-point detection that has become the cornerstone of several emerging methods because of its attractive computational and theoretical properties.
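As a toy illustration of the normal mean change-point model mentioned above, the single best split of a sequence can be found with a CUSUM-type statistic. The data, jump size and function name are made up for the sketch:

```python
import numpy as np

def best_split(x):
    """Best single change point under a normal mean model: the split
    maximising the scaled absolute difference of segment means."""
    n = len(x)
    best_k, best_stat = None, 0.0
    for k in range(1, n):
        left, right = x[:k], x[k:]
        stat = abs(left.mean() - right.mean()) * np.sqrt(k * (n - k) / n)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# A single shift in mean at index 100 is recovered from noisy data.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 80)])
k, stat = best_split(x)
```

Binary segmentation, one of the classical strategies the article reviews, applies this single-split search recursively to the two halves to find multiple change points.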
16

Wang, Xiaoru. "Multi-Core Implementation of F-16 Flight Surface Control System Using GA Based Multiple Model Reference Adaptive Control Algorithm." University of Toledo / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1302130339.

Full text
17

Sokrut, Nikolay. "The Integrated Distributed Hydrological Model, ECOFLOW- a Tool for Catchment Management." Doctoral thesis, Stockholm, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-237.

Full text
18

Dukyil, Abdulsalam Saleh. "Artificial intelligence and multiple criteria decision making approach for a cost-effective RFID-enabled tracking management system." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/17128.

Full text
Abstract:
The implementation of RFID technology, one of the most advanced technologies for the traceability of items, has become ever more popular. Implementing such a technology increases the visibility management of products. Notwithstanding this, RFID communication performance is potentially greatly affected by interference between the RFID devices, and the auxiliary investment costs must also be considered. Hence, seeking a cost-effective design with the desired communication performance for RFID-enabled systems has become a key factor in remaining competitive in today's markets. This study introduces a cost- and performance-effective design for a proposed RFID-enabled passport tracking system through the development of a multi-objective model that takes into account economic, operational and social criteria. The developed model is aimed at solving the design problem by (i) allocating the optimal number of related facilities that should be established and (ii) obtaining trade-offs among three objectives: minimising implementation and operational costs; minimising RFID reader interference; and maximising the social impact measured in the number of created jobs. To come closer to the actual design in terms of its uncertain parameters, a fuzzy multi-objective model was developed. To solve the multi-objective optimization model, two solution methods (epsilon-constraint and linear programming) were used to select the best Pareto solution, and a decision-making method was developed to select the final trade-off solution. Moreover, this research provides a user-friendly decision-making tool for selecting the best vendor from a group which submitted tenders for implementing the proposed RFID-based passport tracking system. In addition, a real case study was used to examine the applicability of the developed model and the proposed solution methods.
The research findings indicate that the developed model is capable of producing a design for an RFID-enabled passport tracking system, and that the developed decision-making tool can easily be used to solve similar vendor selection problems. The findings also demonstrate that the proposed RFID-enabled monitoring system for passport tracking is economically feasible. The study concludes that the developed mathematical models and optimization approaches can be a useful decision aid for tackling a number of design and optimization problems for RFID systems using artificial-intelligence-based mathematical algorithms.
19

Kartal, Koc Elcin. "An Algorithm For The Forward Step Of Adaptive Regression Splines Via Mapping Approach." Phd thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12615012/index.pdf.

Full text
Abstract:
In high-dimensional data modeling, Multivariate Adaptive Regression Splines (MARS) is a well-known nonparametric regression technique for approximating the nonlinear relationship between a response variable and the predictors with the help of splines. MARS uses piecewise linear basis functions, separated from each other at breaking points (knots), for function estimation. The model is generated in a two-step procedure: forward selection and backward elimination. In the first step, a general model including too many basis functions, and hence knot points, is generated; in the second, the basis functions contributing least to the overall fit are eliminated. In the conventional adaptive spline procedure, knots are selected from the set of distinct data points, which makes the forward selection procedure computationally expensive and leads to high local variance. To avoid these drawbacks, it is possible to select the knot points from a subset of the data points, which leads to data reduction. In this study, a new method (called S-FMARS) is proposed to select the knot points using a self-organizing-map-based approach which transforms the original data points to a lower-dimensional space. Thus, fewer knot points need to be evaluated for model building in the forward selection step of the MARS algorithm. The results obtained from simulated datasets and six real-world datasets show that the proposed method is time-efficient in model construction without degrading model accuracy or prediction performance. The proposed approach is implemented in the MARS and CMARS methods as an alternative to their forward step, improving them by decreasing their computing time.
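The piecewise linear basis functions with knots that MARS builds on can be illustrated in a few lines. This is a generic sketch of the hinge functions themselves, not the S-FMARS knot selection:

```python
def hinge(x, knot, sign=+1):
    """Piecewise-linear MARS basis function: max(0, sign * (x - knot)).
    The positive and negative signs give a mirrored pair at the knot."""
    return max(0.0, sign * (x - knot))

# A mirrored pair of hinges at knot t = 2 partitions the axis, so a
# linear combination of the pair can bend at the knot.
pair = [hinge(5.0, 2.0, +1), hinge(5.0, 2.0, -1)]
```

Forward selection repeatedly adds the mirrored pair (variable, knot) that most reduces the residual error, which is why evaluating every distinct data point as a candidate knot is the expensive step the thesis targets.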
20

Heine, Christian P. "Simulated Response of Degrading Hysteretic Joints With Slack Behavior." Diss., Virginia Tech, 2001. http://hdl.handle.net/10919/28576.

Full text
Abstract:
A novel, general, numerical model is described that is capable of predicting the load-displacement relationship up to and at failure of multiple-bolt joints in timber of various configurations. The model is not tied to a single input function and bolt holes are permitted to be drilled oversize resulting in a slack system. The model consists of five parts. A new mathematical hysteresis model describes the stiffness of the individual bolt at each time step increment and accounts for non-linear and slack behavior; a mechanically-based structural stiffness model explains the interaction of one bolt with another bolt within a joint; an analytically-based failure model computes the stresses at each time step and initiates failure if crack length equals fastener spacing; a stochastic routine accounts for material property variation; and a heuristic optimization routine estimates the parameters needed. The core model is a modified array of differential equations whose solution describes accurate hysteresis shapes for slack systems. Hysteresis parameter identification is carried out by a genetic algorithm routine that searches for the best-fit parameters following evolutionary principles (survival of the fittest). The structural model is a linear spring model. Failure is predicted based on a newly developed 'Displaced-Volume-Method' in conjunction with beam on elastic foundation theory, elastic theory, and a modified Tsai-Wu Failure criterion. The devised computer model enhances the understanding of the mechanics of multiple-bolt joints in timber, and yields valid predictions of joint response of two-member multiple-bolt joints. This research represents a significant step towards the simulation of structural wood components.
Ph. D.
21

Pennada, Venkata Sai Teja. "Solving Multiple Objective Optimization Problem using Multi-Agent Systems: A case in Logistics Management." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20745.

Full text
Abstract:
Background: Multiple Objective Optimization Problems (MOOPs) are common and evident in every field, and container port terminals are one of the fields in which they occur. In this research, we take a case in logistics management and model multi-agent systems to solve the MOOP using the Non-dominated Sorting Genetic Algorithm II (NSGA-II). Objectives: The purpose of this study is to build AI-based models for solving a multiple objective optimization problem arising in port terminals. First, we develop a port agent with an objective function of maximizing throughput and a customer agent with an objective function of maximizing business profit. We then solve the problem with a single-objective optimization model and a multi-objective optimization model, and compare the results of both to assess their performance. Methods: A literature review is conducted to choose the best algorithm among those previously used to solve other multiple objective optimization problems. An experiment is conducted to measure how well the models solve the problem so that all participants benefit simultaneously. Results: The results show that all three participants, the port, customer one and customer two, gained profit under the multi-objective optimization model, whereas in the single-objective optimization model a single participant achieved earnings at a time, leaving the rest of the participants either in loss or with minimal profit. Conclusion: We conclude that the multi-objective optimization model performed better than the single-objective optimization model because of the impartial results among the participants.
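The non-dominated sorting at the heart of NSGA-II can be sketched with a naive Pareto-front filter. The (throughput, profit) pairs below are made-up objective values, not the thesis's results:

```python
def dominates(a, b):
    """a dominates b (maximising) if a is no worse in every objective
    and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """First non-dominated front: the core ranking step of NSGA-II."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# The balanced trade-offs survive; the dominated point (3, 4) does not,
# because (6, 6) beats it in both objectives.
front = pareto_front([(10, 2), (6, 6), (2, 9), (3, 4)])
```

NSGA-II repeats this ranking on successive fronts and uses crowding distance within each front, so the surviving population spreads across the trade-off surface rather than collapsing onto a single objective.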
22

Miyawaki, Shinjiro. "Automatic construction and meshing of multiscale image-based human airway models for simulations of aerosol delivery." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/1990.

Full text
Abstract:
The author developed a computational framework for the study of the correlation between airway morphology and aerosol deposition based on a population of human subjects. The major improvement on the previous framework, which consists of a geometric airway model, a computational fluid dynamics (CFD) model, and a particle tracking algorithm, lies in automatic geometry construction and mesh generation of airways, which is essential for a population-based study. The new geometric model overcomes the shortcomings of both centerline (CL)-based cylindrical models, which are based on the skeleton and average branch diameters of airways called one-dimensional (1-D) trees, and computed tomography (CT)-based models. CL-based models are efficient in terms of pre- and post-processing, but fail to represent trifurcations and local morphology. In contrast, in spite of the accuracy of CT-based models, it is time-consuming to build these models manually, and non-trivial to match 1-D trees and three-dimensional (3-D) geometry. The new model, also known as a hybrid CL-CT-based model, is able to construct a physiologically-consistent laryngeal geometry, represent trifurcations, fit cylindrical branches to CT data, and create the optimal CFD mesh in an automatic fashion. The hybrid airway geometries constructed for 8 healthy and 16 severe asthmatic (SA) subjects agreed well with their CT-based counterparts. Furthermore, the prediction of aerosol deposition in a healthy subject by the hybrid model agreed well with that by the CT-based model. To demonstrate the potential application of the hybrid model to investigating the correlation between skeleton structure and aerosol deposition, the author applied the large eddy simulation (LES)-based CFD model that accounts for the turbulent laryngeal jet to three hybrid models of SA subjects. The correlation between diseased branch and aerosol deposition was significant in one of the three SA subjects. However, whether skeleton structure contributes to airway abnormality requires further investigation.
23

Bahrehmand, Arash. "A Computational model for generating and analysing architectural layouts in virtual environments." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/395174.

Full text
Abstract:
Designing interior layouts is one of the most common elements of the immersive computer graphics projects (e.g., games, virtual reality and special effects). The design of 3D virtual environments aims to provide not only a visually engaging experience, but also a plausible understanding of the virtual entities through which the user can easily associate real world objects with corresponding 3D digital models. Nowadays, digital artists apply Computer Aid Design (CAD) techniques to draft floor layouts. The planning process, as a complex human activity, becomes more prone to error when the designer is faced with different levels of uncertainty in a multidimensional problem such as this one. Despite the growth of computerized computational methods to generate and simulate floor plans, mostly based on Artificial Intelligence techniques, such methods are not satisfactory enough among designers yet, due to the weak problem formulation of the designers practices and that current methods do not incorporate subjective aspects of the design in the optimization processes. This thesis contributes in two main fields of floorplan layout computation: computational quality metrics and procedural generation of 3D layouts. We introduce novel metrics based on formulating architectural standards, to measure the quality of privacy (as complementarity of visibility) and of circulation. We introduce two different approaches to generate floorplans, one based on subdivision, and focused on enhancing circulation, and another one based on hybrid optimization methods, supporting a wide variety of quality measures, and subjective input. The hybrid optimization algorithm takes advantage of an evolutionary strategy to generate a set of optimal solutions. 
In order to reach higher-quality offspring at each generation and faster convergence towards optimal solutions, a parent selection method is proposed that attempts to find the most appropriate sub-layouts in the recombination process (i.e., sub-layouts that more likely generate higher-quality children after recombination). In addition, the subjective and contextual aspects of the design are addressed by incorporating user opinion in the fitness function of the optimization algorithm.
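The evolutionary loop described above (elitist survival, parent selection, recombination, and a fitness function that mixes objective metrics with user opinion) can be sketched generically. Everything below is a hypothetical stand-in: the bit-vector "layouts", the operators, and the 0.3 user weight are illustrative placeholders, not the thesis's actual layout representation or fitness.

```python
import random

def evolve(population, fitness, select_parents, crossover, mutate,
           generations=50, elite=2):
    """Generic evolutionary loop: keep the best individuals, breed the rest."""
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        next_gen = scored[:elite]                    # elitism: keep the best layouts
        while len(next_gen) < len(population):
            p1, p2 = select_parents(scored)
            next_gen.append(mutate(crossover(p1, p2)))
        population = next_gen
    return max(population, key=fitness)

# Toy stand-in: a "layout" is a bit vector; the fitness mixes an objective
# term (fraction of ones) with a hypothetical user-preference term.
def fitness(layout, user_weight=0.3):
    objective = sum(layout) / len(layout)
    preference = layout[0]            # pretend the user prefers layouts with bit 0 set
    return (1 - user_weight) * objective + user_weight * preference

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(12)] for _ in range(20)]
best = evolve(
    pop, fitness,
    select_parents=lambda s: (s[0], random.choice(s[:5])),  # greedy parent pick
    crossover=lambda a, b: a[:6] + b[6:],                   # one-point crossover
    mutate=lambda c: [bit ^ (random.random() < 0.05) for bit in c],
)
print(fitness(best))
```

Because the elite individuals survive unchanged, the best fitness is non-decreasing across generations, which mirrors why the thesis's parent selection can focus on recombining promising sub-layouts.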
APA, Harvard, Vancouver, ISO, and other styles
24

Du, Toit Jan Valentine. "Automated construction of generalized additive neural networks for predictive data mining / Jan Valentine du Toit." Thesis, North-West University, 2006. http://hdl.handle.net/10394/128.

Full text
Abstract:
In this thesis Generalized Additive Neural Networks (GANNs) are studied in the context of predictive Data Mining. A GANN is a novel neural network implementation of a Generalized Additive Model. Originally GANNs were constructed interactively by considering partial residual plots. This methodology involves subjective human judgment, is time consuming, and can result in suboptimal results. The newly developed automated construction algorithm solves these difficulties by performing model selection based on an objective model selection criterion. Partial residual plots are only utilized after the best model is found to gain insight into the relationships between inputs and the target. Models are organized in a search tree with a greedy search procedure that identifies good models in a relatively short time. The automated construction algorithm, implemented in the powerful SAS® language, is nontrivial, effective, and comparable to other model selection methodologies found in the literature. This implementation, which is called AutoGANN, has a simple, intuitive, and user-friendly interface. The AutoGANN system is further extended with an approximation to Bayesian Model Averaging. This technique accounts for uncertainty about the variables that must be included in the model and uncertainty about the model structure. Model averaging utilizes in-sample model selection criteria and creates a combined model with better predictive ability than using any single model. In the field of Credit Scoring, the standard theory of scorecard building is not tampered with, but a pre-processing step is introduced to arrive at a more accurate scorecard that discriminates better between good and bad applicants. The pre-processing step exploits GANN models to achieve significant reductions in marginal and cumulative bad rates. The time it takes to develop a scorecard may be reduced by utilizing the automated construction algorithm.
Thesis (Ph.D. (Computer Science))--North-West University, Potchefstroom Campus, 2006.
APA, Harvard, Vancouver, ISO, and other styles
25

Zhao, Mengyao. "Genomic variation detection using dynamic programming methods." Thesis, Boston College, 2014. http://hdl.handle.net/2345/bc-ir:104357.

Full text
Abstract:
Thesis advisor: Gabor T. Marth
Background: Due to the rapid development and application of next generation sequencing (NGS) techniques, large amounts of NGS data have become available for genome-related biological research, such as population genetics, evolutionary research, and genome-wide association studies. A crucial step of these genome-related studies is the detection of genomic variation between different species and individuals. Current approaches for the detection of genomic variation can be classified into alignment-based variation detection and assembly-based variation detection. Due to the limitation of current NGS read length, alignment-based variation detection remains the mainstream approach. The Smith-Waterman algorithm, which produces the optimal pairwise alignment between two sequences, is frequently used as a key component of fast heuristic read mapping and variation detection tools for next-generation sequencing data. Though various fast Smith-Waterman implementations have been developed, they are either designed as monolithic protein database searching tools, which do not return detailed alignments, or they are embedded into other tools. These issues make reusing these efficient Smith-Waterman implementations impractical. After the alignment step in the traditional variation detection pipeline, the subsequent variation detection using pileup data and Bayesian models also faces great challenges, especially in low-complexity genomic regions. Sequencing errors and misalignment problems still strongly influence variation detection (especially INDEL detection). The accuracy of genomic variation detection still needs to be improved, especially for low-complexity genomic regions and low-quality sequencing data.
Results: To facilitate easy integration of the fast Single-Instruction-Multiple-Data Smith-Waterman algorithm into third-party software, we wrote a C/C++ library, which extends Farrar's Striped Smith-Waterman (SSW) to return alignment information in addition to the optimal Smith-Waterman score. In this library we developed a new method to generate the full optimal alignment results and a suboptimal score in linear space at little cost in efficiency. This improvement makes the fast Single-Instruction-Multiple-Data Smith-Waterman genuinely useful in genomic applications. SSW is available both as a C/C++ software library and as a stand-alone alignment tool at: https://github.com/mengyao/Complete-Striped-Smith-Waterman-Library. The SSW library has been used in the primary read mapping tool MOSAIK, the split-read mapping program SCISSORS, the MEI detector TANGRAM, and the read-overlap graph generation program RZMBLR. The speed of each of these tools is improved significantly by replacing its ordinary Smith-Waterman or banded Smith-Waterman module with the SSW library. To improve the accuracy of genomic variation detection, especially in low-complexity genomic regions and on low-quality sequencing data, we developed PHV, a genomic variation detection tool based on the profile hidden Markov model (PHMM). PHV also demonstrates a novel PHMM application in the genomic research field. The banded PHMM algorithms used in PHV make it a very fast whole-genome variation detection tool based on the HMM method. The comparison of PHV to GATK, Samtools and Freebayes for detecting variation from both simulated and real data shows that PHV has good potential for dealing with sequencing errors and misalignments. PHV also successfully detects a 49 bp long deletion that is totally misaligned by the mapping tool and neglected by GATK and Samtools.
Conclusion: The efforts made in this thesis are meaningful for methodology development in studies of genomic variation detection. The two novel algorithms presented here will also inspire future work in NGS data analysis.
Thesis (PhD) — Boston College, 2014
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: Biology
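For reference, the optimal local-alignment score that SSW accelerates is defined by the classic Smith-Waterman recurrence. A plain (non-SIMD, non-striped) sketch with a linear gap penalty and a simple traceback, using the standard textbook example sequences rather than anything from the thesis:

```python
def smith_waterman(a, b, match=3, mismatch=-3, gap=-2):
    """O(len(a)*len(b)) Smith-Waterman local alignment; SSW computes the same
    optimal score with SIMD striping, plus traceback in linear space."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best, best_pos = 0, (0, 0)
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)
    # Traceback from the best-scoring cell until a zero score is reached.
    i, j = best_pos
    aln_a, aln_b = [], []
    while i > 0 and j > 0 and H[i][j] > 0:
        if H[i][j] == H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch):
            aln_a.append(a[i-1]); aln_b.append(b[j-1]); i, j = i - 1, j - 1
        elif H[i][j] == H[i-1][j] + gap:
            aln_a.append(a[i-1]); aln_b.append('-'); i -= 1
        else:
            aln_a.append('-'); aln_b.append(b[j-1]); j -= 1
    return best, ''.join(reversed(aln_a)), ''.join(reversed(aln_b))

score, x, y = smith_waterman("TGTTACGG", "GGTTGACTA")
print(score, x, y)
```

With match +3, mismatch -3 and gap -2 the optimal local score for these two strings is 13 (alignment GTT-AC against GTTGAC).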
APA, Harvard, Vancouver, ISO, and other styles
26

Ledet, Jeffrey H. "Simulation and Performance Evaluation of Algorithms for Unmanned Aircraft Conflict Detection and Resolution." ScholarWorks@UNO, 2016. http://scholarworks.uno.edu/td/2168.

Full text
Abstract:
The problem of aircraft conflict detection and resolution (CDR) under uncertainty is addressed in this thesis. The main goal in CDR is to provide safety for the aircraft while minimizing their fuel consumption and flight delays. In reality, a high degree of uncertainty can exist in certain aircraft-aircraft encounters, especially in cases where aircraft do not have the capabilities to communicate with each other. Through the use of a probabilistic approach and a multiple model (MM) trajectory information processing framework, this uncertainty can be effectively handled. For conflict detection, a randomized Monte Carlo (MC) algorithm is used to accurately detect conflicts, and, if a conflict is detected, a conflict resolution algorithm is run that utilizes a sequential list Viterbi algorithm. This thesis presents the MM CDR method and a comprehensive MC simulation and performance evaluation study that demonstrates its capabilities and efficiency.
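The Monte Carlo idea behind probabilistic conflict detection can be sketched in a toy 2D setting: sample many noisy trajectory realizations for both aircraft and count the fraction in which separation drops below a threshold. The dynamics, noise level and separation minimum below are illustrative assumptions, not the thesis's models.

```python
import math, random

def conflict_probability(own, intruder, sigma, min_sep, horizon, dt=1.0,
                         samples=2000):
    """Monte Carlo conflict detection: propagate both aircraft under a
    constant-velocity model with white position noise, and count the runs
    in which separation drops below min_sep within the horizon."""
    hits = 0
    steps = int(horizon / dt)
    for _ in range(samples):
        (x1, y1), (vx1, vy1) = own
        (x2, y2), (vx2, vy2) = intruder
        for _ in range(steps):
            x1 += vx1 * dt + random.gauss(0, sigma)
            y1 += vy1 * dt + random.gauss(0, sigma)
            x2 += vx2 * dt + random.gauss(0, sigma)
            y2 += vy2 * dt + random.gauss(0, sigma)
            if math.hypot(x1 - x2, y1 - y2) < min_sep:
                hits += 1
                break
    return hits / samples

random.seed(0)
# Head-on geometry: the aircraft close along the x axis and meet near t = 10.
p_conflict = conflict_probability(own=((0, 0), (10, 0)),
                                  intruder=((200, 2), (-10, 0)),
                                  sigma=0.5, min_sep=15.0, horizon=20)
# Parallel geometry: 200 units of lateral separation throughout.
p_clear = conflict_probability(own=((0, 0), (10, 0)),
                               intruder=((0, 200), (10, 0)),
                               sigma=0.5, min_sep=15.0, horizon=20)
print(p_conflict, p_clear)
```

A resolution step would then search over candidate manoeuvres (here is where a Viterbi-style search over manoeuvre sequences fits) for one that drives this estimated probability below an acceptable level.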
APA, Harvard, Vancouver, ISO, and other styles
27

Al-Hasani, Firas Ali Jawad. "Multiple Constant Multiplication Optimization Using Common Subexpression Elimination and Redundant Numbers." Thesis, University of Canterbury. Electrical and Computer Engineering, 2014. http://hdl.handle.net/10092/9054.

Full text
Abstract:
The multiple constant multiplication (MCM) operation is a fundamental operation in digital signal processing (DSP) and digital image processing (DIP). Examples of the MCM are in finite impulse response (FIR) and infinite impulse response (IIR) filters, matrix multiplication, and transforms. The aim of this work is to minimize the complexity of the MCM operation using the common subexpression elimination (CSE) technique and redundant number representations. The CSE technique searches for and eliminates common digit patterns (subexpressions) among MCM coefficients. More common subexpressions can be found by representing the MCM coefficients using redundant number representations. A CSE algorithm is proposed that works on a type of redundant numbers called the zero-dominant set (ZDS). The ZDS is an extension of the representations with the minimum number of non-zero digits, called minimum Hamming weight (MHW) representations. Using the ZDS improves CSE algorithms' performance as compared with using the MHW representations. The disadvantage of using the ZDS is that it increases the possibility of overlapping patterns (digit collisions). In this case, one or more digits are shared between a number of patterns, and eliminating one pattern results in losing the others because the common digits are removed. A pattern preservation algorithm (PPA) is developed to resolve the overlapping patterns in the representations. Tree and graph encoders are proposed to generate a larger space of number representations. The encoders generate redundant representations of a value for a given digit set, radix, and wordlength. The tree encoder is modified to search for common subexpressions simultaneously with the generation of the representation tree. A complexity measure is proposed to compare the subexpressions at each node. The algorithm stops generating the rest of the representation tree when it finds subexpressions with maximum sharing.
This reduces the search space while minimizing the hardware complexity. A combinatoric model of the MCM problem is proposed in this work. The model is obtained by enumerating all possible solutions of the MCM in a graph called the demand graph. Arc routing on this graph gives the solutions of the MCM problem. Similar arc routing is found in capacitated arc routing problems such as the winter salting problem. An ant colony optimization (ACO) meta-heuristic is proposed to traverse the demand graph. The ACO is simulated on a PC using the Python programming language, to verify the correctness of the model and the operation of the ACO. A parallel simulation of the ACO is carried out on a multi-core supercomputer using the C++ Boost graph library.
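The minimum-Hamming-weight signed-digit representations that the ZDS generalizes can be illustrated with the classic canonical signed digit (CSD, also known as non-adjacent form) recoding: rewriting runs of ones (e.g. 0111 as 100-1) cuts the number of nonzero digits, and hence the number of adders per constant, before any subexpression sharing is even attempted. This is a standard textbook recoding, not the thesis's ZDS algorithm.

```python
def csd(n):
    """Canonical signed digit (non-adjacent form) of n > 0, LSB first,
    with digits in {-1, 0, 1} and no two adjacent nonzero digits."""
    digits = []
    while n != 0:
        if n & 1:
            d = 2 - (n & 3)   # 1 if n % 4 == 1, else -1 (when n % 4 == 3)
            n -= d            # clears the low bits, creating a run of zeros
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

def hamming_weight(digits):
    """Number of nonzero digits, i.e. adders needed for the constant."""
    return sum(1 for d in digits if d)

# 7 = 0b111 needs three nonzero digits in binary but only two in CSD: 8 - 1.
print(csd(7), hamming_weight(csd(7)))
```

A CSE pass would then look for shared digit patterns across the CSD (or ZDS) forms of all the MCM coefficients.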
APA, Harvard, Vancouver, ISO, and other styles
28

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need of extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single object scenarios where ground truth is available and in three multi object scenarios without ground truth. Results from the two single object scenarios show that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle.
For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
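The filtering-versus-smoothing gap the thesis exploits is easy to demonstrate on a scalar toy problem: a causal Kalman filter processes measurements online, while a Rauch-Tung-Striebel (RTS) backward pass over the recorded sequence uses future measurements as well. The random-walk model and noise levels below are illustrative, not the thesis's vehicle models.

```python
import random

def kalman_filter(zs, q, r, x0=0.0, p0=1.0):
    """Causal (online) estimates for a scalar random-walk state (F = H = 1)."""
    xs, ps, xps, pps = [], [], [], []     # filtered and predicted moments
    x, p = x0, p0
    for z in zs:
        xp, pp = x, p + q                 # predict
        k = pp / (pp + r)                 # Kalman gain
        x, p = xp + k * (z - xp), (1 - k) * pp
        xs.append(x); ps.append(p); xps.append(xp); pps.append(pp)
    return xs, ps, xps, pps

def rts_smoother(xs, ps, xps, pps):
    """Non-causal Rauch-Tung-Striebel pass: runs backwards over the record."""
    sx = xs[:]
    for t in range(len(xs) - 2, -1, -1):
        g = ps[t] / pps[t + 1]            # smoother gain (F = 1)
        sx[t] = xs[t] + g * (sx[t + 1] - xps[t + 1])
    return sx

random.seed(3)
truth, zs, x = [], [], 0.0
for _ in range(200):
    x += random.gauss(0, 0.1)            # random-walk truth, q = 0.01
    truth.append(x)
    zs.append(x + random.gauss(0, 1.0))  # noisy sensor, r = 1.0
xs, ps, xps, pps = kalman_filter(zs, q=0.01, r=1.0)
sx = rts_smoother(xs, ps, xps, pps)
err_f = sum((a - b) ** 2 for a, b in zip(xs, truth)) / len(truth)
err_s = sum((a - b) ** 2 for a, b in zip(sx, truth)) / len(truth)
print(err_f, err_s)
```

The smoothed error is consistently lower, which is why the smoothed record can serve as pseudo-ground-truth for validating the online tracker.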
APA, Harvard, Vancouver, ISO, and other styles
29

Hunter, Brandon. "Channel Probing for an Indoor Wireless Communications Channel." BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/64.

Full text
Abstract:
The statistics of the amplitude, time and angle of arrival of multipaths in an indoor environment are all necessary components of multipath models used to simulate the performance of spatial diversity in receive antenna configurations. The model presented by Saleh and Valenzuela, later extended by Spencer et al., includes all three of these parameters for a 7 GHz channel. A system was built to measure these multipath parameters at 2.4 GHz for multiple locations in an indoor environment. Another system was built to measure the angle of transmission for a 6 GHz channel. The addition of this parameter allows spatial diversity at the transmitter along with the receiver to be simulated. The process of going from raw measurement data to discrete arrivals and then to clustered arrivals is analyzed. Many possible errors associated with discrete arrival processing are discussed along with possible solutions. Four clustering methods are compared and their relative strengths and weaknesses are pointed out. The effects that errors in the clustering process have on parameter estimation and model performance are also simulated.
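The Saleh-Valenzuela model referenced above generates arrivals as a doubly stochastic process: cluster start times follow one Poisson process, rays within each cluster follow another, and mean ray power decays exponentially with both cluster delay and intra-cluster delay. A minimal sketch (the specific rates and decay constants below are illustrative, not the measured 2.4 GHz parameters):

```python
import math, random

def saleh_valenzuela(T_span, Lam, lam, Gamma, gamma):
    """Draw (time, power) multipath arrivals from the Saleh-Valenzuela model:
    Poisson cluster starts (rate Lam), Poisson rays per cluster (rate lam),
    mean ray power decaying exponentially over clusters (Gamma) and rays (gamma)."""
    arrivals = []
    T = 0.0
    while T < T_span:                       # cluster arrival times
        t = 0.0
        while T + t < T_span:               # ray times within the cluster
            mean_p = math.exp(-T / Gamma) * math.exp(-t / gamma)
            power = random.expovariate(1.0 / mean_p)   # Rayleigh amplitude
            arrivals.append((T + t, power))            # => exponential power
            t += random.expovariate(lam)
        T += random.expovariate(Lam)
    return sorted(arrivals)

random.seed(7)
paths = saleh_valenzuela(T_span=200.0, Lam=1 / 60, lam=1 / 5,
                         Gamma=60.0, gamma=20.0)
print(len(paths), paths[0])
```

Fitting measured, clustered arrivals back to these four parameters is exactly the estimation step the thesis says is sensitive to clustering errors.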
APA, Harvard, Vancouver, ISO, and other styles
30

Hitchcock, Yvonne Roslyn. "Elliptic Curve Cryptography for Lightweight Applications." Queensland University of Technology, 2003. http://eprints.qut.edu.au/15838/.

Full text
Abstract:
Elliptic curves were first proposed as a basis for public key cryptography in the mid-1980s. They provide public key cryptosystems based on the difficulty of the elliptic curve discrete logarithm problem (ECDLP), which is so called because of its similarity to the discrete logarithm problem (DLP) over the integers modulo a large prime. One benefit of elliptic curve cryptosystems (ECCs) is that they can use a much shorter key length than other public key cryptosystems to provide an equivalent level of security. For example, 160 bit ECCs are believed to provide about the same level of security as 1024 bit RSA. Also, the level of security provided by an ECC increases faster with key size than for integer-based discrete logarithm (DL) or RSA cryptosystems. ECCs can also provide a faster implementation than RSA or DL systems, and use less bandwidth and power. These issues can be crucial in lightweight applications such as smart cards. In the last few years, ECCs have been included or proposed for inclusion in internationally recognized standards. Thus elliptic curve cryptography is set to become an integral part of lightweight applications in the immediate future. This thesis presents an analysis of several important issues for ECCs on lightweight devices. It begins with an introduction to elliptic curves and the algorithms required to implement an ECC. It then gives an analysis of the speed, code size and memory usage of various possible implementation options. Enough details are presented to enable an implementer to choose for implementation those algorithms which give the greatest speed whilst conforming to the code size and RAM restrictions of a particular lightweight device. Recommendations are made for new functions to be included on coprocessors for lightweight devices to support ECC implementations. Another issue of concern for implementers is the side-channel attacks that have recently been proposed.
They obtain information about the cryptosystem by measuring side-channel information such as power consumption and processing time, and the information is then used to break implementations that have not incorporated appropriate defences. A new method of defence to protect an implementation from the simple power analysis (SPA) method of attack is presented in this thesis. It requires 44% fewer additions and 11% more doublings than the commonly recommended defence of performing a point addition in every loop of the binary scalar multiplication algorithm. The algorithm forms a contribution to the current range of possible SPA defences, offering good speed with low memory usage. Another topic of paramount importance to ECCs for lightweight applications is whether the security of fixed curves is equivalent to that of random curves. Because of the inability of lightweight devices to generate secure random curves, fixed curves are used in such devices. These curves provide the additional advantage of requiring less bandwidth, code size and processing time. However, it is intuitively obvious that a large precomputation to aid in the breaking of the elliptic curve discrete logarithm problem (ECDLP) can be made for a fixed curve which would be unavailable for a random curve. Therefore, it would appear that fixed curves are less secure than random curves, but quantifying the loss of security is much more difficult. The thesis performs an examination of fixed curve security taking this observation into account, and includes a definition of equivalent security and an analysis of a variation of Pollard's rho method where computations from solutions of previous ECDLPs can be used to solve subsequent ECDLPs on the same curve. A lower bound on the expected time to solve such ECDLPs using this method is presented, as well as an approximation of the expected time remaining to solve an ECDLP when a given size of precomputation is available.
It is concluded that adding a total of 11 bits to the size of a fixed curve provides an equivalent level of security compared to random curves. The final part of the thesis deals with proofs of security of key exchange protocols in the Canetti-Krawczyk proof model. This model has been used since it offers the advantage of a modular proof with reusable components. Firstly a password-based authentication mechanism and its security proof are discussed, followed by an analysis of the use of the authentication mechanism in key exchange protocols. The Canetti-Krawczyk model is then used to examine secure tripartite (three party) key exchange protocols. Tripartite key exchange protocols are particularly suited to ECCs because of the availability of bilinear mappings on elliptic curves, which allow more efficient tripartite key exchange protocols.
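The "commonly recommended defence" that the thesis improves on, performing a point addition in every loop of the binary scalar multiplication, is the double-and-add-always method: the power trace shows one doubling and one addition per key bit regardless of the bit values. A sketch on a deliberately tiny toy curve (y² = x³ + 2x + 3 over F₉₇, obviously not a cryptographic curve, and not the thesis's faster defence):

```python
# Toy curve y^2 = x^3 + A*x + B over F_97; the point at infinity is None.
P_MOD, A, B = 97, 2, 3

def ec_add(P, Q):
    """Affine group law (handles doubling, inverses, and infinity)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (m * m - x1 - x2) % P_MOD
    return (x3, (m * (x1 - x3) - y1) % P_MOD)

def scalar_mult_always(k, P):
    """Double-and-add-always: one doubling and one addition per key bit,
    so the operation sequence does not depend on the bit values (SPA defence)."""
    R = None
    for bit in bin(k)[2:]:
        R = ec_add(R, R)                  # always double
        S = ec_add(R, P)                  # always add ...
        R = S if bit == '1' else R        # ... but keep the sum only if bit is 1
    return R

G = (0, 10)                               # on the curve: 10^2 = 3 = 0^3 + 2*0 + 3 (mod 97)
print(scalar_mult_always(5, G))
```

The cost of this baseline is the dummy addition on every zero bit, which is exactly the overhead the thesis's defence reduces.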
APA, Harvard, Vancouver, ISO, and other styles
31

Ben, Zid Maha. "Emploi de techniques de traitement de signal MIMO pour des applications dédiées réseaux de capteurs sans fil." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENT017/document.

Full text
Abstract:
The aim of this work is to study, from a signal processing point of view, the use of MIMO (Multiple Input Multiple Output) communication systems for algorithms dedicated to wireless sensor networks. We investigate energy-constrained wireless sensor networks and focus on the cluster topology of the network. This topology permits the use of a MIMO communication system model. First, we review different aspects that characterize the wireless sensor network. Then, we introduce the existing strategies for energy conservation in the network. The basic concepts of MIMO systems are presented in the second chapter, and numerical results are provided for evaluating the performance of MIMO techniques. Of particular interest, polarization diversity over rich scattering environments is studied. Thereafter, a beamforming approach is proposed for the development of an original localization algorithm in wireless sensor networks. The novel algorithm is described and its performance is evaluated by simulation. In the fourth chapter, we determine the optimal system configuration between a pair of clusters that yields the highest capacity-to-energy ratio. The final chapter is devoted to sensor node selection in wireless sensor networks. The aim of using such a technique is to conserve energy in the network.
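A beamforming-based localization step like the one proposed here typically scans steering angles and picks the direction of maximum beam output power. A minimal delay-and-sum sketch for a narrowband uniform linear array (the 8-element array, half-wavelength spacing, and noise level are illustrative assumptions, not the thesis's configuration):

```python
import cmath, math, random

def steering(theta, n, d=0.5):
    """Array response of an n-element uniform linear array (d in wavelengths)."""
    return [cmath.exp(-2j * math.pi * d * k * math.sin(theta)) for k in range(n)]

def doa_estimate(snapshots, n, grid=181):
    """Delay-and-sum beamformer: steer across a grid of angles and return
    (in degrees) the angle whose beam collects the most power."""
    best, best_theta = -1.0, 0.0
    for g in range(grid):
        theta = math.radians(g - 90)
        w = steering(theta, n)
        power = sum(abs(sum(wk.conjugate() * xk for wk, xk in zip(w, x))) ** 2
                    for x in snapshots)
        if power > best:
            best, best_theta = power, theta
    return math.degrees(best_theta)

# Simulate 50 snapshots from a single source at +20 degrees, with weak noise.
random.seed(2)
n = 8
a = steering(math.radians(20), n)
snaps = []
for _ in range(50):
    s = cmath.exp(2j * math.pi * random.random())      # unit-power source symbol
    snaps.append([s * ak + 0.1 * complex(random.gauss(0, 1), random.gauss(0, 1))
                  for ak in a])
theta_hat = doa_estimate(snaps, n)
print(theta_hat)
```

Combining bearing estimates of this kind from several cluster heads then yields a position fix, which is the localization idea sketched in the abstract.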
APA, Harvard, Vancouver, ISO, and other styles
32

Chiang, Hsing-kuo, and 江興國. "Interacting Multiple Model Algorithm for NLOS Mitigation in Wireless Location." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/17914550076557737594.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Electrical Engineering
97
In the thesis, we propose a non-line-of-sight (NLOS) mitigation approach based on the interacting multiple model (IMM) algorithm. The IMM-based structure, composed of a biased Kalman filter (BKF) and a Kalman filter with an NLOS-discarding process (KF-D), is capable of mitigating the ranging error caused by NLOS effects, and therefore improves the performance and accuracy of wireless location systems. The NLOS effect on signal transmission is one of the major factors that affect the accuracy of time-based location systems. Effective NLOS identification and mitigation usually count on pre-determined statistical distributions and hypothesis assumptions about the signals. Because the variance of the NLOS error is much larger than that of the measurement noise, hypothesis testing on the LOS/NLOS status can be formulated. The BKF combines a sliding window and decides the status by hypothesis testing. The calculated variance and the detection result are used in switching between the biased and unbiased modes of the Kalman filter. In contrast, the KF-D scheme identifies the NLOS status and tries to eliminate the NLOS effects by directly using the estimated results from the LOS stage. The KF-D scheme can achieve reasonably good NLOS mitigation if the estimates in the LOS status are obtained. Due to the discarding process, changes of the state vector within the NLOS stage are possibly ignored, which will cause larger errors in the state estimates. The BKF and KF-D can make up for each other by formulating the filters in an IMM structure, which tunes the probabilities of the BKF and KF-D. In our approach, the measured data are smoothed by a sliding window and a BKF. The variance of the data and the hypothesis test result are passed to the two filters. The BKF switches between the biased/unbiased modes by using this result. The KF-D may receive the estimated value from the BKF based on the results.
The probability computation unit changes the weights to obtain the estimated TOA values. With simulations of ultra-wideband (UWB) signals, it can be seen that the proposed IMM-based approach can effectively mitigate the NLOS effects and increase the accuracy of wireless positioning.
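The BKF side of this structure can be sketched in scalar form: a sliding-window variance test flags NLOS (the NLOS error variance dwarfs the LOS measurement noise), and the Kalman filter inflates its measurement variance while the flag is raised so that biased NLOS ranges barely move the estimate. All numbers below (window length, threshold factor, NLOS burst statistics) are illustrative assumptions, not the thesis's tuned values.

```python
import random, statistics

def detect_nlos(window, sigma_los, alpha=3.0):
    """Flag NLOS when the sample variance of recent range measurements
    greatly exceeds the known LOS variance (simple hypothesis test)."""
    return statistics.pvariance(window) > alpha * sigma_los ** 2

def biased_kf(zs, q, r_los, sigma_los, win=10, r_nlos_scale=100.0):
    """Scalar sketch of a 'biased Kalman filter': the usual predict/update
    loop, but the measurement variance is inflated while the test says NLOS."""
    x, p, out = zs[0], 1.0, []
    for t, z in enumerate(zs):
        lo = max(0, t - win + 1)
        nlos = t >= win - 1 and detect_nlos(zs[lo:t + 1], sigma_los)
        r = r_los * (r_nlos_scale if nlos else 1.0)
        pp = p + q                        # predict (range modelled as random walk)
        k = pp / (pp + r)                 # small gain while NLOS is flagged
        x, p = x + k * (z - x), (1 - k) * pp
        out.append(x)
    return out

random.seed(5)
true_range = 100.0
zs = []
for t in range(200):
    nlos_err = random.expovariate(1 / 20.0) if 80 <= t < 120 else 0.0  # NLOS burst
    zs.append(true_range + random.gauss(0, 1.0) + nlos_err)
est = biased_kf(zs, q=0.01, r_los=1.0, sigma_los=1.0)
err = sum(abs(e - true_range) for e in est[130:]) / len(est[130:])
print(err)
```

The KF-D branch would instead discard flagged measurements outright, and the IMM layer weights the two behaviours by their likelihoods.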
APA, Harvard, Vancouver, ISO, and other styles
33

Shin-MingTang and 唐世銘. "Interacting Multiple Model Positioning Algorithm and its Application in Vehicle Navigation." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/07591632781261932674.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering
100
Nowadays, the Global Positioning System (GPS) has been widely used for vehicle navigation. However, GPS cannot provide an uninterrupted positioning solution when the vehicle drives through areas such as urban canyons or tunnels, because the system suffers from signal blockage and multipath effects. In order to deal with these problems, the GPS/Inertial Navigation System (INS) integrated navigation technique has become the main direction for facilitating a continuous positioning solution. A GPS/INS integrated navigation system typically utilizes an Extended Kalman Filter (EKF) based navigation algorithm to estimate vehicle position, velocity, and attitude based on GPS and INS measurements. However, as the dynamic state of vehicles is highly variable and complex over time, a single EKF model is not sufficient to capture the movement of vehicles. Therefore, this thesis develops an Interacting Multiple Model (IMM) positioning algorithm. The IMM approach assumes that the system follows one of a finite number of different models, and the state estimates are combined according to adaptively adjusted model probabilities to ensure positioning accuracy.
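The IMM cycle referred to here, mixing the model-conditioned estimates, running one filter per model, then updating the model probabilities from the measurement likelihoods, can be sketched for two scalar models (a low-process-noise "cruise" model and a high-process-noise "manoeuvre" model). The dynamics, noise values and transition matrix below are illustrative stand-ins, not the thesis's vehicle models.

```python
import math

def imm_step(x, p, mu, z, qs, r, PI):
    """One cycle of a two-model scalar IMM (random-walk dynamics, F = H = 1)."""
    m = len(qs)
    # 1) Mixing: predicted mode probabilities and mixed initial conditions.
    c = [sum(PI[i][j] * mu[i] for i in range(m)) for j in range(m)]
    w = [[PI[i][j] * mu[i] / c[j] for i in range(m)] for j in range(m)]
    x0 = [sum(w[j][i] * x[i] for i in range(m)) for j in range(m)]
    p0 = [sum(w[j][i] * (p[i] + (x[i] - x0[j]) ** 2) for i in range(m))
          for j in range(m)]
    # 2) Mode-matched Kalman filters, one per process-noise hypothesis.
    xn, pn, lik = [], [], []
    for j in range(m):
        pp = p0[j] + qs[j]
        s = pp + r                              # innovation variance
        k = pp / s
        innov = z - x0[j]
        xn.append(x0[j] + k * innov)
        pn.append((1 - k) * pp)
        lik.append(math.exp(-innov ** 2 / (2 * s)) / math.sqrt(2 * math.pi * s))
    # 3) Mode probability update from the Gaussian likelihoods.
    mu_new = [lik[j] * c[j] for j in range(m)]
    tot = sum(mu_new)
    return xn, pn, [v / tot for v in mu_new]

# Two modes: nearly static (q = 0.001) vs manoeuvring (q = 1.0).
PI = [[0.95, 0.05], [0.05, 0.95]]
x, p, mu = [0.0, 0.0], [1.0, 1.0], [0.5, 0.5]
for z in [0.1, -0.1, 0.0, 5.0, 10.0, 15.0]:    # the jump mimics a manoeuvre
    x, p, mu = imm_step(x, p, mu, z, qs=[0.001, 1.0], r=0.5, PI=PI)
print(mu)
```

After the jump, the probability mass shifts to the high-noise model, which is exactly the "ratio adjustment between models" described in the abstract.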
APA, Harvard, Vancouver, ISO, and other styles
34

Kang, Jin-Sian, and 康晉賢. "Analysis of Mobile Target Tracking with Cooperative-Interacting Multiple Model Algorithm." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/92503479090756147187.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Hu, Chia-Wei, and 胡家維. "The Ultra-Tight GPS/INS Navigation Algorithm Using Interacting Multiple Model Nonlinear Filtering." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/82546030858145814901.

Full text
Abstract:
Master's thesis
National Taiwan Ocean University
Department of Communications and Navigation Engineering
98
In this thesis, application of an ultra-tight integration navigation algorithm using interacting multiple model (IMM) nonlinear filtering is studied for GPS/INS. Ultra-tight integration is also known as deep integration; it increases the receiver tracking bandwidth and suppresses noise, so as to improve GPS receiver performance. When the GPS signal is lost, the aiding INS can assist the receiver's acquisition and re-acquisition process by feeding position and velocity to the delay lock loop (DLL) and phase lock loop (PLL), improving the receiver's tracking loop performance. Using the ultra-tight structure in the receiver has many advantages, such as disturbance rejection and multipath rejection, improved high-dynamic performance, tracking of weak signals, improved accuracy, urban or indoor positioning capability, shortened acquisition time, improved phase locked loop bandwidth, and more accurate measurement of the Doppler frequency shift and phase. The unscented Kalman filter (UKF) employs a set of sigma points obtained through deterministic sampling, such that the linearization process is not necessary, and therefore the error caused by linearization in the traditional extended Kalman filter (EKF) can be avoided; there is no need to evaluate the Jacobian matrix. The use of the IMM, which describes a set of switching models, provides a suitable value of the process noise covariance. Consequently, the resulting sensor fusion strategy can efficiently deal with the nonlinear problem in vehicle navigation. The proposed IMM-UKF algorithm shows significant improvement in navigation estimation accuracy as compared to the UKF approach.
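The advantage of the sigma-point idea over EKF linearization is visible even in one dimension: for y = x² with x ~ N(m, var), first-order EKF propagation returns f(m) = m² and misses the variance-induced shift, while the unscented transform's deterministic sigma points recover the exact mean m² + var. This is a textbook scalar illustration of the UT, not the thesis's navigation filter.

```python
import math

def unscented_transform(m, var, f, kappa=2.0):
    """Scalar unscented transform: propagate deterministic sigma points
    through f instead of linearizing f around the mean (as the EKF does)."""
    n = 1
    s = math.sqrt((n + kappa) * var)
    pts = [m, m + s, m - s]
    w = [kappa / (n + kappa)] + [1 / (2 * (n + kappa))] * 2
    ys = [f(xi) for xi in pts]
    mean = sum(wi * yi for wi, yi in zip(w, ys))
    cov = sum(wi * (yi - mean) ** 2 for wi, yi in zip(w, ys))
    return mean, cov

m, var = 3.0, 0.5
f = lambda x: x * x
ut_mean, ut_cov = unscented_transform(m, var, f)
ekf_mean = f(m)                      # first-order EKF propagation: just f(m)
true_mean = m * m + var              # exact for a quadratic of a Gaussian
print(ut_mean, ekf_mean, true_mean)
```

For this quadratic case the UT with kappa = 2 also reproduces the exact output variance 4m²σ² + 2σ⁴ = 18.5, while the EKF mean is off by exactly var.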
APA, Harvard, Vancouver, ISO, and other styles
36

Yang, Chun-Chieh, and 楊竣傑. "A robust EKF-based interacting multiple-model algorithm for improvements of localization in random NLOS environments." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/98475524000751909189.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Electrical Engineering
Academic year 105
In this thesis, a robust EKF-based interacting multiple-model (IMM-REKF) algorithm is proposed to significantly reduce positioning errors caused by NLOS signal propagation in random NLOS environments. Specifically, we consider positioning environments with different NLOS occurrence probabilities and assume the NLOS probability distributions are unknown. We propose a validated-measurement structure based on information from LOS base stations to cut off unreasonable received measurement signals. Furthermore, we employ a fuzzy inference system to tune the process noise covariance in the IMM-REKF, yielding a FIMM-REKF algorithm. In simulations, our proposed methods remarkably outperform the R-IMM in the literature, reducing positioning errors by mitigating NLOS effects. Even in environments with a high NLOS occurrence probability, the performance of our proposed methods still meets the FCC requirements. The IMM-REKF and the FIMM-REKF perform similarly; the IMM-REKF is the better choice because the FIMM-REKF requires more computation.
APA, Harvard, Vancouver, ISO, and other styles
37

Hsieh, Chin-Ying, and 謝晉穎. "An Algorithm for Hierarchical Model generation in Interactive Virtual Environments." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/70644577519450046339.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Computer and Information Science
Academic year 83
For complex virtual environments, rendering at different levels of detail is an appropriate approach to achieving interactive frame rates. In this thesis, we propose an algorithm that automatically generates models of a virtual object at multiple levels of detail based on the concept of "retiling". With its preservation of sharp features, the algorithm is suited to a wide variety of polygonal objects. Besides, the model at each level is constructed as close as possible to the original one. Finally, we apply the algorithm to a walkthrough application.
APA, Harvard, Vancouver, ISO, and other styles
38

LIU, WAN-CHUN, and 劉婉君. "The Research of Radar Tracking System Using Multiple Model Algorithm." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/56343258954511653329.

Full text
Abstract:
PhD dissertation
Da-Yeh University
Department of Electrical Engineering
Academic year 98
The multiple-target tracking algorithm plays an important role in a radar system. The key techniques for improving tracking accuracy include data association and maneuvering estimation algorithms. In this dissertation, a data association method, denoted the one-step conditional maximum likelihood algorithm, is proposed to improve the tracking capability. Moreover, an improved algorithm, denoted the adaptive multiple model estimator, is developed. When target maneuvering occurs, maneuver detection and acceleration estimation algorithms are applied to modify the parameters of the tracking filter. In this algorithm, a bank of Kalman filters is applied to improve the tracking accuracy of radar surveillance. Computer simulations confirm that this approach obtains better tracking results.
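The abstract mentions maneuver detection as the trigger for re-tuning the tracking filter. One common way to implement such a detector is a chi-square test on the normalized innovation squared (NIS); the sketch below shows that test with an illustrative two-dimensional measurement and threshold, not the dissertation's actual detector.

```python
import numpy as np

def maneuver_detected(innovation, S, threshold=9.21):
    """Chi-square test on the normalized innovation squared (NIS).

    innovation -- measurement residual z - H @ x_pred, shape (m,)
    S          -- innovation covariance H @ P @ H.T + R, shape (m, m)
    threshold  -- chi-square gate (9.21 ~ 99th percentile for 2 DOF)
    """
    nis = float(innovation @ np.linalg.solve(S, innovation))
    return nis > threshold

S = np.diag([4.0, 4.0])                              # innovation covariance
quiet = maneuver_detected(np.array([1.0, 1.0]), S)   # small residual
alarm = maneuver_detected(np.array([10.0, 10.0]), S) # large residual suggests a maneuver
```

When the detector fires, a tracker of this kind would typically inflate the process noise covariance or switch to a higher-order motion model.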
APA, Harvard, Vancouver, ISO, and other styles
39

Chang, Chi-Hsian, and 張吉賢. "An Improved Algorithm of Multiple Model Estimation for Radar Systems." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/53647977322585702536.

Full text
Abstract:
Master's thesis
Da-Yeh University
Department of Electrical Engineering
Academic year 95
An improved algorithm for tracking multiple maneuvering targets using a new approach has been developed in this thesis. This algorithm is implemented with an adaptive filter consisting of a data association technique, denoted 1-step conditional maximum likelihood, together with a bank of Kalman filters as an adaptive maneuvering compensator. Via this approach, both the data association and target maneuvering problems can be solved simultaneously. Moreover, to verify that the tracking system is really improved, detailed simulations of multi-target tracking using several tracking algorithms were carried out for many situations. Computer simulation results indicate that this approach successfully tracks multiple targets and achieves better performance.
APA, Harvard, Vancouver, ISO, and other styles
40

Jin, Xing. "Multiple ARX Model Based Identification for Switching/Nonlinear Systems with EM Algorithm." Master's thesis, 2010. http://hdl.handle.net/10048/1056.

Full text
Abstract:
Two different types of switching mechanism are considered in this thesis; one is featured with abrupt/sudden switching while the other shows gradually changing behavior in its dynamics. It is shown, through a comparison of the identification results from the proposed method and a benchmark method, that the proposed robust identification method can achieve better performance when dealing with data sets mixed with outliers. To model switched systems exhibiting gradual or smooth transition among different local models, in addition to estimating the local sub-system parameters, a smooth validity function (an exponential function) is introduced to combine all the local models, so that throughout the working range of the gradually switched system the dynamics of the nonlinear process can be appropriately approximated. Verification results on a simulated numerical example and a CSTR process confirm the effectiveness of the proposed Linear Parameter Varying (LPV) identification algorithm.
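At the heart of EM-based identification of switching models is the E-step, which assigns each data point a responsibility for every local model. The toy sketch below shows that step for two linear sub-models under Gaussian noise; the data, predictions, and noise levels are invented, and the thesis's robust and LPV refinements are omitted.

```python
import numpy as np

def responsibilities(y, preds, priors, sigmas):
    """Posterior P(model k | y_t) for each sample under Gaussian noise.

    y      -- observed outputs, shape (N,)
    preds  -- each model's prediction per sample, shape (N, K)
    priors -- prior model probabilities, shape (K,)
    sigmas -- per-model noise standard deviations, shape (K,)
    """
    y = np.asarray(y)[:, None]                                        # (N, 1)
    dens = np.exp(-0.5 * ((y - preds) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    w = priors * dens                                                 # (N, K)
    return w / w.sum(axis=1, keepdims=True)

y = np.array([0.1, 2.0, -0.1, 1.9])
# Model 1 predicts 0 everywhere, model 2 predicts 2 everywhere
preds = np.column_stack([np.zeros(4), 2 * np.ones(4)])
R = responsibilities(y, preds, priors=np.array([0.5, 0.5]), sigmas=np.array([0.5, 0.5]))
```

The M-step would then re-fit each ARX model by weighted least squares using these responsibilities as sample weights.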
Process Control
APA, Harvard, Vancouver, ISO, and other styles
41

Yen, Huai-Hsien, and 顏懷先. "Development of Wafer Bin Map Multiple Pattern Recognition Model-Using Genetic Algorithm." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/7yybu9.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Information Management
Academic year 103
The complexity of semiconductor manufacturing has increased greatly in recent years; the process consists of more than 400 steps. To ensure product quality, equipment parameters and wafer measurement results are collected and saved into the databases of automation systems during production, and many tests need to be performed afterwards. Engineers need to check all related process parameters when a test result is abnormal and then find the root cause according to their expertise and experience. However, the process data are too voluminous for engineers to analyze effectively. Therefore, how to analyze huge amounts of data effectively and convert them into valuable information is a very important topic for the semiconductor industry. Fortunately, different abnormal excursions may form different patterns on a wafer bin map, so engineers often use wafer bin maps to find the root cause of an abnormal excursion. Currently most semiconductor fabs still analyze wafer bin maps manually; a few fabs use pattern recognition systems, but these systems assume that a wafer bin map contains only one pattern. In reality, a wafer bin map may consist of several independent patterns in different regions of the map. This research uses supervised machine learning to classify bin map patterns: features are extracted from each pattern in pre-processed training data, the weight of each feature is calculated via a genetic algorithm, and a weighted Euclidean distance is then used to calculate the similarity between testing data and each pattern. This research uses real fab data to build a recognition system and validate its effectiveness. The experimental results demonstrate that the proposed methodology can not only recognize bin map patterns by extracting features, but also effectively identify multiple bin map patterns on a wafer, with a recognition rate of 90%.
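The matching step described in the abstract, GA-tuned feature weights feeding a weighted Euclidean distance that picks the most similar trained pattern, can be sketched as below. The features, weight values, and pattern names are hypothetical; the GA that evolves the weights is not shown.

```python
import math

def weighted_distance(x, proto, w):
    """Weighted Euclidean distance between a feature vector and a prototype."""
    return math.sqrt(sum(wi * (xi - pi) ** 2 for wi, xi, pi in zip(w, x, proto)))

def classify(x, prototypes, w):
    """Return the name of the closest pattern prototype under the weighted distance."""
    return min(prototypes, key=lambda name: weighted_distance(x, prototypes[name], w))

# Hypothetical per-pattern feature prototypes (e.g. radial density, center density, ...)
prototypes = {
    "edge-ring": [0.9, 0.1, 0.2],
    "center":    [0.1, 0.8, 0.3],
}
w = [2.0, 1.0, 0.5]   # feature weights as a GA might evolve them
label = classify([0.85, 0.15, 0.25], prototypes, w)
```

In the thesis's setting the GA would search over `w` to maximize classification accuracy on labeled training maps.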
APA, Harvard, Vancouver, ISO, and other styles
42

Yu-Chun, Chiu. "A Study on Using Genetic Algorithm to Solve a Vehicle Routing Model with Multiple Vehicle Types and Multiple Time Windows." 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0016-1303200709263928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Chiu, Yu-Chun, and 邱玉純. "A Study on Using Genetic Algorithm to Solve a Vehicle Routing Model with Multiple Vehicle Types and Multiple Time Windows." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/33621393092337184747.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Industrial Engineering and Engineering Management
Academic year 94
In 1959, Dantzig and Ramser proposed the Vehicle Routing Problem (VRP). Since then, it has been developed and extended into a variety of problems to reflect contemporary requirements. Distributing goods is part of our daily life: whether it is an overseas letter sent via FedEx or a local product delivered by TAKKYUBIN, both belong to the distribution domain. Commodity distribution therefore occurs anytime and everywhere, among countries, companies, and individuals. This study focuses on vehicle routing and scheduling problems, with additional features considered for more realistic applications: multiple vehicle types for companies and multiple time windows for customers. In addition, to assist a logistics company in distributing goods effectively, the distribution costs to be minimized include vehicle dispatching cost, traveling cost, and time-window combination cost. In practice, the complexity of the problem makes it necessary to develop a structural model for facilitating general analysis and applications; such a model has been developed and illustrated by two examples. Because the considered vehicle routing and scheduling problem with multiple vehicle types and multiple time windows (VRSP-MVMT) is NP-hard, we have developed a genetic algorithm (GA) for efficient solution. When the problem is small-scale, the developed GA has been shown to obtain the optimal solution; when the scale of the problem is large, the GA can obtain a near-optimal solution. Finally, we draw conclusions regarding our study and future research.
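A GA for routing problems of this kind typically encodes a solution as a permutation of customer visits and evolves a population toward lower cost. The toy sketch below does only that, minimizing travel distance over invented coordinates with elitist selection and swap mutation; the thesis's vehicle types, time windows, dispatch costs, and crossover operator are all omitted.

```python
import random

random.seed(0)
stops = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 5)]   # depot is stops[0]; coordinates invented

def route_len(order):
    """Total length of the tour depot -> order... -> depot."""
    tour = [0] + list(order) + [0]
    return sum(((stops[a][0] - stops[b][0]) ** 2 + (stops[a][1] - stops[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, tour[1:]))

def mutate(order):
    """Swap two visit positions (keeps the chromosome a valid permutation)."""
    i, j = random.sample(range(len(order)), 2)
    child = list(order)
    child[i], child[j] = child[j], child[i]
    return child

# Elitist GA loop: keep the best 10, refill with mutants of the elite
pop = [random.sample(range(1, len(stops)), len(stops) - 1) for _ in range(20)]
for _ in range(100):
    pop.sort(key=route_len)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
best = min(pop, key=route_len)
```

A full VRSP-MVMT solver would extend the chromosome with vehicle-type assignments and penalize time-window violations in the fitness function.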
APA, Harvard, Vancouver, ISO, and other styles
44

LIN, WEI-JHONG, and 林維中. "A genetic algorithm based multiple objective decision making model to explore the sustainable city bus." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/49f88z.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Department of Industrial Engineering and Management
Academic year 102
The concept of sustainable development has been introduced into the transport sector, and sustainable transport is the main development strategy for transport departments in various countries. Sustainable transportation, which covers environment, economy, and society, is a multi-objective programming problem. Such a problem yields many Pareto solutions as feasible solutions through a variety of algorithms for decision makers to select from, but it is difficult for decision makers to directly select the ideal solution based on their own experience and preferences when many feasible solutions exist. Therefore, this research uses a hybrid decision-making model that combines a multiple objective genetic algorithm and multiple attribute decision making. Multi-objective genetic algorithms can handle complex multi-objective optimization problems to obtain Pareto solutions; multi-attribute decision making can capture the preferences of different groups of decision makers and rank the Pareto solutions, which helps a decision maker select a preferred solution to carry out. To verify the validity of the model, an example of Taoyuan city buses is used to discuss the optimization problem of sustainable city buses. Through expert questionnaires that consider the views of government and academic institutions, we compare the results of the hybrid decision-making model and the fuzzy multi-objective programming method. The results show that city bus capacity is the most important evaluation criterion for the experts and that the two models' best compromise solutions are similar, demonstrating the effectiveness of the proposed model.
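The Pareto step underlying the hybrid model above keeps only solutions that no other solution dominates. A minimal sketch, assuming all objectives are to be minimized and using invented (cost, emissions) pairs for candidate bus plans:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (cost, emissions) pairs for candidate city-bus plans
candidates = [(10, 5), (8, 7), (12, 4), (9, 9), (11, 6)]
front = pareto_front(candidates)
```

The multi-attribute decision-making stage would then rank the members of `front` according to the experts' criterion weights.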
APA, Harvard, Vancouver, ISO, and other styles
45

Moolman, A. J. (Alwyn Jakobus). "Design of a selective parallel heuristic algorithm for the vehicle routing problem on an adaptive object model." Thesis, 2010. http://hdl.handle.net/2263/29598.

Full text
Abstract:
The Vehicle Routing Problem (VRP) has been around for more than 50 years and has been of major interest to the operations research community. The VRP poses a complex problem with major benefits for industry: in every supply chain, transportation occurs between customers and suppliers. In this thesis, we analyze the use of multiple pheromone trails in Ant Systems to solve the VRP. The goal is to find a reasonable solution for data environments of derivatives of the basic VRP. An adaptive object model approach is followed to allow for additional constraints and customizable cost functions. A parallel method is used to improve speed and traverse the solution space. The Ant System is applied to the local search operations as well as the data objects. The Tabu Search method is used in the local search part of the solution. The study succeeds in addressing all of the key performance indicators for an IT system, i.e. efficiency, effectiveness, alignment, agility, and integration, where traditional research on VRP algorithms focuses only on the first two.
Thesis (PhD)--University of Pretoria, 2010.
Industrial and Systems Engineering
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
46

Tsai, Yu-Hsia, and 蔡玉霞. "Effectiveness of Multiple Interactive and Informative Technology-assisted Health Education Program on Atrial Fibrillation Patients Receiving Oral Anticoagulants:Through Health Belief Model." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/286za9.

Full text
Abstract:
PhD dissertation
National Taiwan University
Graduate Institute of Nursing
Academic year 106
Background: Patients with atrial fibrillation (AFib) are often prescribed oral anticoagulants (OACs) to reduce the risk of stroke. However, considering the general lack of medical knowledge among patients taking OACs, adequate medication instruction is crucial. Purpose: This study examined patients taking OACs for AFib to determine the effectiveness of a multiple interactive health education program, which was developed based on the Health Belief Model (HBM) and incorporated information technologies. The program's effectiveness was evaluated according to outcome indicators: the patients' knowledge regarding OACs, health beliefs, satisfaction with the anticoagulant taken, quality of life (QoL), and health status. Factors that influenced these indicators were also examined. Methodology: A randomized controlled study was conducted on the cardiology outpatients of two medical centers in northern Taiwan. The patients were recruited through purposive sampling. They had been diagnosed with AFib and were receiving OACs. The patients were divided according to the blocks of clinic hours and assigned randomly to the experimental group or control group. The control variables involved demographic characteristics and medical history. The dependent variables and their corresponding research instruments were medication knowledge and health beliefs (questionnaires designed by the research team), medication satisfaction (Duke Anticoagulation Satisfaction Scale, DASS), QoL (Short Form-12, SF-12), and health status. Other than the medication knowledge questionnaire, which was assessed monthly, all the measurement instruments were applied twice: first in a pretest, and again in a posttest administered at the third month. The interventions administered to the experimental group were one-on-one instruction with HBM-driven strategies, a health information technology system, monthly telephone follow-ups, and medication cards.
Patients in the control group received only a brochure and medication cards. The data were analyzed using descriptive statistics and inferential statistics (t-test, ANOVA, Chi-square test, McNemar test and Pearson correlation). The effectiveness of the interventions was analyzed using Generalized Estimating Equations (GEE) and effect sizes. Predictors of the effectiveness were analyzed using multiple linear regression. Results: In total, 164 participants were recruited, and their average age was 65.71 ± 9.84 years. The majority of the patients were men and had an educational level of elementary school. Other than cancer history, the two groups exhibited no difference in the pretest. Regarding the posttests, 159 participants were involved, of whom 79 belonged to the experimental group and 80 to the control group. For "knowledge of anticoagulants", the experimental group's posttest scores were higher than those of the control group for all three posttests; this indicated that the instruction program had a large effect size. Among the topics, the complicated safety issues of diet and medication interactions exhibited the greatest improvement. Regarding the total score of health beliefs, the experimental group's improvement was higher than that of the control group, and the experimental group's posttest score was significantly higher than the control group's. In terms of effectiveness, the related interventions showed a moderate effect size and were most effective in the aspect of "self-efficacy". With regard to "cues to action", experimental group patients who studied the medical instruction slideshows and who used mobile applications and Facebook showed a higher total score in knowledge of anticoagulants than those who did not. Those who used medication cards showed higher total scores in both knowledge and health beliefs.
For "medication satisfaction", the posttest score of the experimental group revealed an increase, but the related interventions were of low effectiveness. Regarding "QoL", both groups exhibited little change over the three months, and the difference between the two groups was nonsignificant. For "health status", no difference was observed between the two groups in the number of members who experienced bleeding or in the international normalized ratio; none of the participants experienced a stroke during the study period. The analysis of factors influencing the effectiveness indicators revealed that the improvements in the total scores for knowledge of anticoagulants and health beliefs were positively correlated. Predictors of high improvement in knowledge included membership in the experimental group (B = 5.87), taking non-vitamin K antagonist OACs (B = 3.37), lower perceived severity (B = -0.08), lower self-efficacy (B = -0.06), and higher medication satisfaction (B = -0.05; a lower total DASS score) in the pretest; the total variance explained was 58.4%. The predictors of high improvement in health beliefs were a lower medication knowledge pretest score (B = -0.68) and membership in the experimental group (B = 6.63); the total variance explained was 14.3%. Conclusion: This study determined that the multiple interactive health education program, developed on the theoretical basis of the Health Belief Model, was significantly more effective than the mere provision of a brochure in improving patients' knowledge of anticoagulants and health beliefs. However, the program's effectiveness was low in terms of patients' medication satisfaction and QoL. Providing health education based on theory and multiple methods is imperative for improving medication knowledge and health beliefs, and is especially suitable for the complicated issues surrounding anticoagulants.
When designing an educational intervention, health providers should pay more attention to the different needs of patients taking different anticoagulants, to increasing patients' awareness of taking anticoagulants and self-efficacy in performing precautions, and to decreasing the impacts and burdens of taking OACs; these are essential factors for advancement in medication knowledge. The improvements in knowledge of anticoagulants and health beliefs were correlated, so promoting patients' medication knowledge will also improve their health beliefs.
APA, Harvard, Vancouver, ISO, and other styles
47

Wu, Po-Fu, and 吳柏富. "3D Digital Map Data Fusion Enabled Real-time Precision Positioning For the Self-driving System Using the Unscented Kalman Filter and Interactive Multiple Model Based Vehicle Motion Detection Techniques." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/94351788298999523598.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Mechanical Engineering
Academic year 105
This research proposes an approach that can locate the vehicle position with lane-level precision using low-cost multi-sensor fusion, including a commercial GNSS receiver, an IMU, and digital maps. The approach is based on interactive multiple model (IMM), data fusion, and unscented Kalman filter techniques. The unscented Kalman filter (UKF) technique is used to design the estimator of the vehicle position, as well as to execute the data fusion that integrates the multiple sensor data. The sigma points around the position center are calculated by the unscented transform, representing the probability distribution of the vehicle position. In this research, the probability of the vehicle motion, including longitudinal motion, lateral motion, and slope motion, is also estimated from the motion sensors through the IMM. For the estimation result, digital maps are used to increase the precision of the vehicle position by providing road information and attributes. By applying constraints such as road boundaries in the UKF, the sigma point positions can be realigned according to the position reference, increasing the precision of the vehicle position. The algorithm proposed in this research uses road and vehicle information obtained from the vehicle dynamics simulation software CarSim to validate the positioning precision under different vehicle velocities and motions. Compared with general cases, the results demonstrate significant enhancement of the vehicle positioning, with the proposed algorithm able to gather more road and vehicle motion related data for the driver. Finally, the proposed system has been validated using an experimental vehicle driven around the NTU campus and the Shue-Yuan expressway, with results showing consistent lane-level positioning precision.
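The unscented transform mentioned above propagates a small set of deterministically chosen sigma points through a nonlinear map and recovers the mean and covariance from them. A minimal sketch follows; the state, covariance, scaling parameter, and the (deliberately linear, so the result is checkable) map are illustrative, not the thesis's models.

```python
import numpy as np

def sigma_points(x, P, kappa=1.0):
    """Generate 2n+1 sigma points and their weights for state x ~ N(x, P)."""
    n = len(x)
    L = np.linalg.cholesky((n + kappa) * P)          # matrix square root, column-wise spread
    pts = [x] + [x + L[:, i] for i in range(n)] + [x - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def unscented_transform(f, x, P, kappa=1.0):
    """Propagate N(x, P) through f and recover the transformed mean/covariance."""
    pts, w = sigma_points(x, P, kappa)
    y = np.array([f(p) for p in pts])
    mean = w @ y
    cov = sum(wi * np.outer(yi - mean, yi - mean) for wi, yi in zip(w, y))
    return mean, cov

x = np.array([1.0, 2.0])       # hypothetical position estimate (east, north)
P = np.diag([0.04, 0.09])      # hypothetical position covariance
# A linear map is used here so the exact answer (2x, 4P) is known
mean, cov = unscented_transform(lambda p: 2 * p, x, P)
```

In the map-matching step described in the abstract, the individual sigma points would additionally be realigned against road-boundary constraints before the mean and covariance are recomputed.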
APA, Harvard, Vancouver, ISO, and other styles
48

Wu, Mingqi. "Population SAMC, ChIP-chip Data Analysis and Beyond." 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8752.

Full text
Abstract:
This dissertation research consists of two topics: population stochastic approximation Monte Carlo (Pop-SAMC) for Bayesian model selection problems, and ChIP-chip data analysis. The following two paragraphs give a brief introduction to each topic. Although reversible jump MCMC (RJMCMC) has the ability to traverse the space of possible models in Bayesian model selection problems, it is prone to becoming trapped in local modes when the model space is complex. SAMC, proposed by Liang, Liu and Carroll, essentially overcomes the difficulty in dimension-jumping moves by introducing a self-adjusting mechanism. However, this learning mechanism has not yet reached its maximum efficiency. In this dissertation, we propose a Pop-SAMC algorithm; it works on population chains of SAMC, which provide a more efficient self-adjusting mechanism and make use of the crossover operator from genetic algorithms to further increase efficiency. Under mild conditions, the convergence of this algorithm is proved. The effectiveness of Pop-SAMC in Bayesian model selection problems is examined through a change-point identification example and a large-p linear regression variable selection example. The numerical results indicate that Pop-SAMC outperforms both the single-chain SAMC and RJMCMC significantly. In the ChIP-chip data analysis study, we developed two methodologies to identify transcription factor binding sites: a Bayesian latent model and a population-based test. The former models the neighboring dependence of probes by introducing a latent indicator vector; the latter provides a nonparametric method for evaluating test scores in a multiple hypothesis test by making use of population information of samples. Both methods are applied to real and simulated datasets. The numerical results indicate that the Bayesian latent model can outperform the existing methods, especially when the data contain outliers, and that the use of population information can significantly improve the power of multiple hypothesis tests.
APA, Harvard, Vancouver, ISO, and other styles
49

Palmer, Andrew J. "A novel empirical model of the k-factor for radiowave propagation in Southern Africa for communication planning applications." Thesis, 2003. http://hdl.handle.net/2263/28102.

Full text
Abstract:
The objective of this study was to provide an adequate model of the k-factor for scientific radio planning of terrestrial propagation in South Africa. An extensive literature survey played an essential role in the research and provided verification and confirmation of the novelty of the research on historical grounds. The approach of the research was initially structured around theoretical analysis of existing data, which resulted from the work of J. W. Nel. The search for analytical models was extended further to empirical studies of primary data obtained from the South African Weather Service. The methodology of the research was based on software technology, which provided new tools and opportunities to process data effectively and to visualise the results in an innovative manner by means of digital terrain maps (DTMs) and spreadsheet graphics. MINITAB
Thesis (PhD)--University of Pretoria, 2005.
Electrical, Electronic and Computer Engineering
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
50

Saroj, Kumar G. "An Integrated Estimation-Guidance Approach for Seeker-less Interceptors." Thesis, 2015. http://etd.iisc.ernet.in/2005/3828.

Full text
Abstract:
In this thesis, the problem of intercepting highly manoeuvrable threats using seeker-less interceptors that operate in the command guidance mode is addressed. These systems are more prone to estimation errors than standard seeker-based systems. Several nonlinear and optimal estimation and guidance concepts are presented in this thesis for the interception of randomly maneuvering targets by seeker-less interceptors. The key contributions of the thesis can be broadly categorized into six groups, namely (i) an optimal selection of the bank of filters in the interacting multiple model (IMM) scheme to cater to the various maneuvers expected during the end-game, (ii) an innovative algorithm to reduce the chattering phenomenon and formulate an effective guidance algorithm based on the differential game guidance law (modified DGL), (iii) an IMM/DGL and IMM/modified DGL based integrated estimation/guidance (IEG) strategy, (iv) sensitivity and robustness analysis of Kalman filters and fine tuning of the filters in the filter bank using the innovation covariance, (v) performance of tuned IMM/PN, tuned IMM/DGL and tuned IMM/modified DGL against various target maneuvers, and (vi) performance comparison with a realistic missile model. An innovative generalized state estimation formulation has been proposed in this thesis for accurately estimating the states of incoming high-speed randomly maneuvering targets. The IMM scheme, and an optimal selection of filters to cater to the various maneuvers expected during the end-game, is described in detail. The key advantage of this formulation is that it is generic and can capture evasive target maneuvers as well as straight-moving targets in a unified framework without any change of target model or tuning parameters. In this thesis, a game optimal guidance law is described in detail for 2D and 3D engagements.
The performance of the differential game based guidance law (DGL) is compared with the conventional Proportional Navigation (PN) guidance law, especially for 3D interception scenarios. An innovative chatter removal algorithm is introduced by modifying the differential game based guidance law (modified DGL). In this algorithm, chattering is reduced to the maximum extent possible by introducing a boundary layer around the switching surface and using a continuous control within the boundary layer. The thesis presents the performance of the modified DGL algorithm against PN and DGL through a comparison of miss distances and achieved accelerations. Simulation results are also presented for varying flight path angle errors. Apart from the guidance logic, two novel ideas have been presented following the evolving "integrated estimation and guidance" philosophy. In the first approach, an integrated estimation/guidance (IEG) algorithm that integrates the IMM estimator with the DGL law (IMM/DGL) is proposed for seeker-less interception. In this interception scenario, the target performs an evasive bang-bang maneuver, while the sensor has noisy measurements and the interceptor is subject to an acceleration bound. The guidance parameters (i.e., the lateral acceleration commands) are computed with the help of the zero-effort miss distance. The thesis presents the performance of the IEG algorithm against the combined IMM with PN (IMM/PN) through a comparison of miss distances. In the second approach, a novel modified IEG algorithm composed of the IMM estimator and the modified DGL guidance law is introduced to eliminate the chattering phenomenon. Results from both of these integrated approaches are quite promising. Monte Carlo simulation results reveal that the modified IEG algorithm achieves better homing performance even if the target maneuver model is unknown to the estimator. These results and their analysis offer an insight into the interception process and the proposed algorithms.
The selection of filter tuning parameters puts forward a major challenge for scientists and engineers. Two recently developed metrics, based on the innovation covariance, are incorporated for determining the filter tuning parameters. For predicting the proper combination of the filter tuning parameters, the metrics are evaluated for a 3D interception problem. A detailed sensitivity and robustness analysis is carried out for each type of Kalman filter. Optimal and tuned Kalman filters are selected in the IMM configuration to cater to the various maneuvers expected during the end-game. In the interception scenario examined in this thesis, the target performs various types of maneuvers, while the sensor has noisy measurements and the interceptor is subject to an acceleration bound. The tuned IMM serves as a basis for the synthesis of efficient filters for tracking maneuvering targets and reducing estimation errors. A numerical study is provided which demonstrates the performance and viability of the tuned IMM/modified DGL based modified IEG strategy. In this thesis, a comparison is also performed between tuned IMM/PN, tuned IMM/DGL and tuned IMM/modified DGL in the integrated estimation/guidance scheme. The results are illustrated by an extensive Monte Carlo simulation study in the presence of estimation errors. Simulation results are also presented for end-game maneuvers and varying flight path angle errors. Numerical simulations to study the aerodynamic effects on the integrated estimation/guidance structure and their effect on the performance of the guidance laws are presented. A detailed comparison is also performed between tuned IMM/PN, tuned IMM/DGL and tuned IMM/modified DGL in the integrated estimation/guidance scheme with a realistically modelled missile against various target maneuvers. Though the time taken to intercept is higher when a realistic model is considered, the integrated estimation/guidance law still performs better.
The miss distance is observed to be similar to the one obtained by considering simplified kinematic models.
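Two ideas from this abstract, computing the command from the zero-effort miss (ZEM) and smoothing the switching with a boundary layer to suppress chattering, can be sketched together in a planar toy example. The gains, bounds, and kinematics below are illustrative, not the thesis's guidance law.

```python
def guidance_accel(y, v, t_go, a_max=30.0, boundary=5.0):
    """Saturated lateral acceleration command from the zero-effort miss.

    y, v     -- relative lateral position and velocity
    t_go     -- time to go until intercept
    a_max    -- interceptor acceleration bound
    boundary -- boundary-layer width: sign(zem) switching is smoothed
                into a linear ramp for |zem| < boundary
    """
    zem = y + v * t_go                       # miss if neither side accelerates further
    u = max(-1.0, min(1.0, zem / boundary))  # continuous control inside the layer
    return a_max * u

a_large = guidance_accel(y=100.0, v=-5.0, t_go=4.0)  # large ZEM: saturated command
a_small = guidance_accel(y=1.0, v=0.0, t_go=4.0)     # small ZEM: proportional command
```

Outside the boundary layer this behaves like the hard-switching law; inside it the command fades linearly to zero, which is what removes the chattering near the switching surface.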
APA, Harvard, Vancouver, ISO, and other styles
