Dissertations / Theses on the topic 'Mapping Algorithm'


Consult the top 50 dissertations / theses for your research on the topic 'Mapping Algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Morovič, Ján. "To develop a universal gamut mapping algorithm." Thesis, University of Derby, 1998. http://hdl.handle.net/10545/200029.

2

Pomerleau, François. "Registration algorithm optimized for simultaneous localization and mapping." Mémoire, Université de Sherbrooke, 2008. http://savoirs.usherbrooke.ca/handle/11143/1465.

Abstract:
Building maps within an unknown environment while keeping track of the current position is a major step toward safe and autonomous robot navigation. Within the last 20 years, Simultaneous Localization And Mapping (SLAM) has become a topic of great interest in robotics. The basic idea of this technique is to combine proprioceptive robot motion information with external environmental information to minimize global positioning errors. Because the robot is moving in its environment, exteroceptive data come from different points of view and must be expressed in the same coordinate system to be combined. The latter process is called registration. Iterative Closest Point (ICP) is a registration algorithm with very good performance in several 3D model reconstruction applications, and it was recently applied to SLAM. However, SLAM has specific real-time and robustness requirements compared with 3D model reconstruction, leaving room for optimizations specialized for robot mapping. After reviewing existing SLAM approaches, this thesis introduces a new registration variant called Kd-ICP. This registration technique iteratively decreases the error between misaligned point clouds without extracting specific environmental features. Results demonstrate that the new rejection technique used to achieve map registration is more robust to large initial positioning errors. Experiments with simulated and real environments suggest that Kd-ICP is more robust than other ICP variants. Moreover, Kd-ICP is fast enough for real-time applications and is able to deal with sensor occlusions and partially overlapping maps. Realizing fast and robust local map registrations opens the door to new opportunities in SLAM: it becomes feasible to minimize the accumulation of robot positioning errors, to fuse local environmental information, and to reduce memory usage when the robot revisits the same location. It also becomes possible to evaluate the network constraints needed to minimize global mapping errors.
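The core matching-and-alignment loop that Kd-ICP specialises is compact enough to sketch. Below is a minimal point-to-point ICP iteration in Python/NumPy; it is an illustration, not the thesis's Kd-ICP: brute-force nearest neighbours stand in for the kd-tree search, and a fixed distance gate stands in for the thesis's rejection technique (both assumptions).

```python
import numpy as np

def icp_step(src, dst, reject_dist=1.0):
    """One point-to-point ICP iteration: match, reject, align (2D)."""
    # Brute-force nearest-neighbour matching (a kd-tree would replace this).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)
    keep = np.sqrt(d2[np.arange(len(src)), nn]) < reject_dist  # outlier gate
    p, q = src[keep], dst[nn[keep]]
    # Closed-form rigid alignment via SVD of the cross-covariance (Kabsch).
    pc, qc = p - p.mean(0), q - q.mean(0)
    U, _, Vt = np.linalg.svd(pc.T @ qc)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = q.mean(0) - R @ p.mean(0)
    return src @ R.T + t, R, t

# Toy usage: recover a small rotation + translation.
rng = np.random.default_rng(0)
dst = rng.uniform(0, 5, (100, 2))
a = 0.1
R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
src = (dst - 0.2) @ R_true.T
for _ in range(20):
    src, R, t = icp_step(src, dst)
print(np.abs(src - dst).max())          # should shrink toward 0
```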
3

Dunkelberg, John S., Jr. "FEM Mesh Mapping to a SIMD Machine Using Genetic Algorithms." Digital WPI, 2001. https://digitalcommons.wpi.edu/etd-theses/1154.

Abstract:
The Finite Element Method is a computationally expensive method used to perform engineering analyses. By performing such computations on a parallel machine using a SIMD paradigm, these analyses' run time can be drastically reduced. However, the mapping of the FEM mesh elements to the SIMD machine processing elements is an NP-complete problem. This thesis examines the use of Genetic Algorithms as a search technique to find quality solutions to the mapping problem. A hill climbing algorithm is compared to a traditional genetic algorithm, as well as a "messy" genetic algorithm. The results and comparative advantages of these approaches are discussed.
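As a rough illustration of the genetic-algorithm approach to this mapping problem, the sketch below evolves an assignment of mesh elements to processors so that adjacent elements share a processor. The chain mesh, the cut-edge cost, and the truncation-selection operators are illustrative assumptions, far simpler than the thesis's traditional and "messy" GAs.

```python
import random

# Toy problem: map N mesh elements onto P processors so that adjacent
# elements land on the same processor (minimising communication cost).
N, P = 24, 4
edges = [(i, i + 1) for i in range(N - 1)]          # assumed chain mesh

def cost(mapping):
    # One unit of communication per edge whose endpoints are split.
    return sum(mapping[a] != mapping[b] for a, b in edges)

def crossover(m1, m2):
    cut = random.randrange(N)
    return m1[:cut] + m2[cut:]

def mutate(m, rate=0.05):
    return [random.randrange(P) if random.random() < rate else g for g in m]

random.seed(1)
pop = [[random.randrange(P) for _ in range(N)] for _ in range(60)]
for gen in range(200):
    pop.sort(key=cost)
    elite = pop[:20]                                # truncation selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(40)]
print(cost(min(pop, key=cost)))        # ideally P - 1 == 3 split edges
```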
4

Liu, Zhiyong Michael. "Mapping physical topology with logical topology using genetic algorithm." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ62245.pdf.

5

Curotto Molina, Franco Andreas. "Graphslam algorithm implementation for solving simultaneous localization and mapping." Tesis, Universidad de Chile, 2016. http://repositorio.uchile.cl/handle/2250/139093.

Abstract:
Degree of Electrical Engineer (Ingeniero Civil Eléctrico)
SLAM (Simultaneous Localization and Mapping) is the problem of estimating the position of a robot (or other agent) while simultaneously generating a map of its environment. It is considered a key concept in mobile robotics and fundamental to achieving truly autonomous systems. Among the many solutions that have been proposed to solve SLAM, graph-based methods have attracted great interest from researchers in recent years. These solutions present several advantages, such as the ability to handle large amounts of data and to obtain the complete trajectory of the robot instead of only its last position. A particular implementation of this method is the GraphSLAM algorithm, first presented by Thrun and Montemerlo in 2006. In this thesis, the GraphSLAM algorithm is implemented to solve the SLAM problem in the two-dimensional case. The main objective of this thesis is to provide a widely accepted SLAM solution for carrying out comparative tests with new SLAM algorithms. The implementation uses the g2o framework as the tool for nonlinear least-squares optimization. The GraphSLAM implementation is capable of solving SLAM with both known and unknown data association. This means that even when the robot has no knowledge of the origin of the measurements, it can associate the measurements with the corresponding states through probabilistic estimation. The algorithm also uses a kernel-based method for estimation that is robust to outliers. To improve the computation time of the algorithm, several strategies were designed to verify the associations and execute the algorithm efficiently. The final implementation was tested with simulated and real data, for both known and unknown data association. The algorithm succeeded in all tests, being able to estimate the robot's trajectory and the map of the environment with small error. The main advantages of the algorithm are its high accuracy and its high degree of configurability through parameter selection. The main disadvantages are its computation time when the amount of data is large and its inability to eliminate false positives. Finally, modifications are suggested as future work to increase the convergence speed and to eliminate false positives.
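To make the graph-based formulation concrete, here is a toy one-dimensional pose graph solved by linear least squares — an assumed simplification for illustration only. GraphSLAM and g2o solve the general nonlinear problem over 2D/3D poses, but the structure (odometry constraints, a loop closure, and a gauge anchor) is the same.

```python
import numpy as np

# Toy 1D pose graph: 5 poses, odometry says each step is ~1.0 forward,
# and a loop closure says pose 4 re-observes pose 0 at distance ~0.2.
# Each constraint contributes a weighted row  x_j - x_i = z.
constraints = [(i, i + 1, 1.0, 1.0) for i in range(4)]   # (i, j, z, weight)
constraints.append((0, 4, 0.2, 10.0))                    # loop closure

n = 5
A = np.zeros((len(constraints) + 1, n))
b = np.zeros(len(constraints) + 1)
for r, (i, j, z, w) in enumerate(constraints):
    A[r, i], A[r, j], b[r] = -w, w, w * z
A[-1, 0] = 1.0                 # gauge constraint: anchor x_0 = 0

# Solve the (sparse, in real systems) least-squares problem.
x = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.round(x, 3))          # poses pulled back to satisfy the loop
```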
6

Wang, Qing. "Development, improvement and assessment of image classification and probability mapping algorithms." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1622.

Abstract:
Remotely sensed imagery is one of the most important data sources for large-scale and multi-temporal agricultural, forestry, soil, environmental, social and economic applications. In order to accurately extract useful thematic information of the earth surface from images, various techniques and methods have been developed. The methods can be divided into parametric and non-parametric based on the requirement of data distribution, into global and local based on the characteristics of modeling global trends and local variability, into unsupervised and supervised based on whether training data are required, and into design-based and model-based in terms of the theory on which the estimators are developed. Each has its own disadvantages that impede the improvement of estimation accuracy. Thus, developing novel methods and improving the existing methods are both needed. This dissertation focused on the development of a feature-space indicator simulation (FSIS), the improvement of geographically weighted sigmoidal simulation (GWSS) and k-nearest neighbors (kNN), and their assessment for land use and land cover (LULC) classification and probability (fraction) mapping of percentage vegetation cover (PVC) in Duolun County, Xilingol League, Inner Mongolia, China. FSIS employs an indicator simulation in a high-dimensional feature space and extends the derivation of indicator variograms from geographic space to feature space, leading to feature-space indicator variograms (FSIVs), to circumvent the issues of traditional indicator simulation in geostatistics. GWSS is a stochastic probability mapping method that accounts for spatially nonstationary sample data and the local variation of the variable of interest. The improved kNN, called optimal k-nearest neighbors (OkNN), searches for an optimal number of nearest neighbors at each location based on local variability, and can be used for both classification and probability mapping. The three methods were validated and compared with several widely used approaches for LULC classification and PVC mapping in the study area. The datasets used in the study included a Landsat 8 image and a total of 920 field plots. The results showed that 1) compared with maximum likelihood classification (ML), support vector machines (SVM) and random forests (RF), the proposed FSIS classifier led to statistically significantly higher classification accuracy for six LULC types (water, agricultural land, grassland, bare soil, built-up area, and forested area); 2) compared with linear regression (LR), polynomial regression (PR), sigmoidal regression (SR), geographically weighted regression (GWR), and geographically weighted polynomial regression (GWPR), GWSS not only resulted in more accurate estimates of PVC, but also greatly reduced the underestimation of small values and the overestimation of large values; 3) most of the vegetation indices derived from the red and near-infrared bands contributed significantly to improving the accuracy of PVC mapping; 4) OkNN resulted in spatially variable, optimized k values and higher prediction accuracy of PVC than the global methods; and 5) the range parameter of the FSIVs was the major factor that spatially affected the classification accuracy of the LULC types, but the FSIVs were less sensitive to the number of training samples. Thus, the results answered all six research questions proposed.
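The idea behind OkNN — letting the data pick k locally rather than globally — can be sketched in a few lines. The local leave-one-out criterion below is an assumed stand-in for the dissertation's actual selection rule and is meant only to illustrate the mechanism; the data are synthetic.

```python
import numpy as np

def oknn_predict(X_train, y_train, x0, k_candidates=(3, 5, 7, 9, 11)):
    """Predict y at x0, picking k locally by leave-one-out error among
    the nearest neighbours (an assumed stand-in for the OkNN criterion)."""
    d = np.linalg.norm(X_train - x0, axis=1)
    order = np.argsort(d)
    best_k, best_err = None, np.inf
    for k in k_candidates:
        idx = order[:k]
        # Leave-one-out squared error within the local neighbourhood.
        errs = [(y_train[i] - y_train[np.delete(idx, j)].mean()) ** 2
                for j, i in enumerate(idx)]
        if np.mean(errs) < best_err:
            best_k, best_err = k, np.mean(errs)
    return y_train[order[:best_k]].mean(), best_k

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 2))
y = np.sin(4 * X[:, 0]) + 0.1 * rng.standard_normal(200)  # toy PVC-like field
pred, k = oknn_predict(X, y, np.array([0.5, 0.5]))
print(pred, k)
```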
7

Dash, Padmanava. "SeaWiFS Algorithm for Mapping Phycocyanin in Incipient Freshwater Cyanobacterial Blooms." Bowling Green State University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1120594611.

8

Jiang, Dayou. "An exploration of BMSF algorithm in genome-wide association mapping." Kansas State University, 2013. http://hdl.handle.net/2097/15505.

Abstract:
Master of Science
Department of Statistics
Haiyan Wang
Motivation: Genome-wide association studies (GWAS) provide an important avenue for investigating many common genetic variants in different individuals to see if any variant is associated with a trait. GWAS is a great tool for identifying genetic factors that influence health and disease. However, the high dimensionality of gene expression datasets makes GWAS challenging. Although many promising machine learning methods, such as the Support Vector Machine (SVM), have been investigated in GWAS, the question of how to improve the accuracy of the results has drawn the increasing attention of researchers. Many studies did not apply feature selection to select a parsimonious set of relevant genes, and those that performed gene selection often failed to consider possible interactions among genes. Here we modify BMSF, a gene selection algorithm originally developed by Zhang et al. (2012) for improving the accuracy of cancer classification with binary responses. A continuous-response version of the BMSF algorithm is provided in this report so that it can be applied to gene selection for continuous gene expression datasets. The algorithm dramatically reduces the dimension of the gene markers under concern, thus increasing the efficiency and accuracy of GWAS. Results: We applied the continuous-response version of BMSF to a wheat phenotype dataset to predict two quantitative traits based on genotype marker data. This wheat dataset was previously studied in Long et al. (2009) for the same purpose, but only through direct application of SVM regression. By applying our gene selection method, we filtered out a large portion of less relevant genes and achieved better prediction on the test data by building an SVM regression model on the training data using only the selected genes. We also applied our algorithm to simulated datasets generated following the setting of an example in Fan et al. (2011). The continuous-response version of BMSF showed good ability to identify active variables hidden among high-dimensional irrelevant variables. In comparison to the smoothing-based methods in Fan et al. (2011), our method has the advantage of avoiding the ambiguity caused by different choices of the smoothing parameter.
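A drastically simplified version of this screen-then-fit pipeline is sketched below with scikit-learn: a univariate correlation filter followed by SVM regression on the surviving markers. This is an illustrative assumption, not BMSF itself (which iteratively re-screens each gene against the currently selected subset), and the data are synthetic.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# Toy stand-in for marker data: 300 samples x 2000 markers, with the
# trait driven by 10 of them.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2000))
beta = np.zeros(2000)
beta[:10] = 1.0
y = X @ beta + 0.5 * rng.standard_normal(300)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Screen: keep the markers most correlated with the trait.
corr = np.abs([np.corrcoef(Xtr[:, j], ytr)[0, 1] for j in range(X.shape[1])])
keep = np.argsort(corr)[-50:]

# Fit SVM regression on the reduced marker set only.
model = SVR(kernel="rbf", C=10.0).fit(Xtr[:, keep], ytr)
print("R^2 on held-out data:", round(model.score(Xte[:, keep], yte), 3))
```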
9

Baichbal, Shashidhar. "MAPPING ALGORITHM FOR AUTONOMOUS NAVIGATION OF LAWN MOWER USING SICK LASER." Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1334587886.

10

Phinjaroenphan, Panu. "An Efficient, Practical, Portable Mapping Technique on Computational Grids." RMIT University. Computer Science and Information Technology, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080516.145808.

Abstract:
Grid computing provides a powerful, virtual parallel system known as a computational Grid on which users can run parallel applications to solve problems quickly. However, users must be careful to allocate tasks to nodes properly, because the improper allocation of only one task could result in lengthy executions of applications or, even worse, application crashes. This allocation problem is called the mapping problem, and an entity that tackles it is called a mapper. In this thesis, we aim to develop an efficient, practical, portable mapper. To study the mapping problem, researchers often make unrealistic assumptions, such as that the nodes of Grids are always reliable, that the execution times of tasks assigned to nodes are known a priori, or that detailed information about parallel applications is always known. As a result, the practicality and portability of mappers developed under such conditions are uncertain. Our review of related work suggested that a more efficient tool is required to study this problem; therefore, we developed GMap, a simulator researchers/developers can use to develop practical, portable mappers. The fact that nodes are not always reliable leads to the development of an algorithm for predicting the reliability of nodes and a predictor for identifying the reliable nodes of Grids. Experimental results showed that the predictor reduced the chance of failures in executions of applications by half. The facts that the execution times of tasks assigned to nodes are not known a priori and that detailed information about parallel applications is not always known lead to the evaluation of five nearest-neighbour (nn) execution time estimators: k-nn smoothing, k-nn, adaptive k-nn, one-nn, and adaptive one-nn. Experimental results showed that adaptive k-nn was the most efficient. We also implemented the predictor and the estimator in GMap. Using GMap, we could reliably compare the efficiency of six mapping algorithms: Min-min, Max-min, Genetic Algorithms, Simulated Annealing, Tabu Search, and Quick-quality Map, with none of the preceding unrealistic assumptions. Experimental results showed that Quick-quality Map was the most efficient. As a result of these findings, we achieved our goal of developing an efficient, practical, portable mapper.
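Of the six mapping algorithms compared, Min-min is the simplest to sketch. The version below assumes a known matrix of estimated task execution times — precisely the quantity the thesis argues must be predicted in practice (e.g., by adaptive k-nn) — so the ETC values here are illustrative random numbers.

```python
import numpy as np

def min_min(etc):
    """Min-min mapping heuristic.

    etc[t, n] = estimated time to compute task t on node n.
    Repeatedly take the task whose best-case completion time is smallest
    and commit it to that node."""
    n_tasks, n_nodes = etc.shape
    ready = np.zeros(n_nodes)               # when each node becomes free
    unmapped, mapping = set(range(n_tasks)), {}
    while unmapped:
        # Completion time of every unmapped task on every node.
        _, t, n = min((ready[n] + etc[t, n], t, n)
                      for t in unmapped for n in range(n_nodes))
        mapping[t] = n
        ready[n] += etc[t, n]
        unmapped.remove(t)
    return mapping, ready.max()             # mapping and makespan

rng = np.random.default_rng(0)
etc = rng.uniform(1, 10, (8, 3))
mapping, makespan = min_min(etc)
print(mapping, round(makespan, 2))
```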
11

Caudill, Thomas Robert. "Accuracy of the Total Ozone Mapping Spectrometer algorithm at polar latitudes." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186897.

Abstract:
It has been noted that for large solar zenith angles (θ₀ > 75°), there is some uncertainty in the retrieval scheme for determining the total column ozone amount using the Total Ozone Mapping Spectrometer (TOMS) instruments. This uncertainty arises because the current look-up table radiances were calculated by a radiative transfer algorithm using an approximate pseudo-spherical atmosphere. The pseudo-spherical code calculates the primary scatter using spherical geometry but higher order scattering is computed for a plane-parallel atmosphere. To test the accuracy of the pseudo-spherical approximation, a new method for numerically solving the equation of radiative transfer in a spherical shell atmosphere including polarization has been developed. This technique uses a Gauss-Seidel iteration scheme to calculate a steady state solution including all significant orders of scattering. For the TOMS instrument which was on Nimbus-7, large solar zenith angles corresponded primarily to high latitudes due to its sun-synchronous noon crossing orbit. Therefore, the accuracy of the algorithm at large solar zenith angles becomes a critical issue particularly for the precise measurement of ozone over polar regions. Comparisons between the pseudo-spherical and the spherical Gauss-Seidel codes show that for solar zenith angles greater than 80° the error introduced by not properly accounting for the sphericity can be significant. Large intensity and ozone differences (5-10%) are possible in the forward (φ = 0°) and backscatter (φ = 180°) directions which are caused by the incorrect attenuation of the solar beam. Because of its particular viewing geometry (along φ = 90°), the error in the Nimbus-7/TOMS ozone amount is generally less than 1%. However, when θ₀ = 88° with a high surface reflectivity, ozone amounts reported for Nimbus-7/TOMS may be overestimated by up to 8%. The TOMS instrument currently on Meteor-3, because of its less inclined orbit, has a much wider range of viewing geometries. Under extreme conditions, it appears that errors on the order of 10 to 30% may be possible.
12

Aguilar-Gonzalez, Abiel. "Monocular-SLAM dense mapping algorithm and hardware architecture for FPGA acceleration." Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC055.

Abstract:
Simultaneous Localization and Mapping (SLAM) is the problem of constructing a 3D map while simultaneously keeping track of an agent's location within the map. In recent years, work has focused on systems that use a single moving camera as the only sensing mechanism (monocular SLAM). This choice was motivated by the fact that inexpensive commercial cameras, smaller and lighter than other sensors previously used, are now easy to find, and they provide visual environmental information that can be exploited to create complex 3D maps while camera poses are simultaneously estimated. Unfortunately, previous monocular SLAM systems are based on optimization techniques that limit performance for real-time embedded applications. To solve this problem, in this work we propose a new monocular SLAM formulation based on the hypothesis that high efficiency for embedded applications can be reached by increasing the density of the point cloud map (and therefore the 3D map density and the overall positioning and mapping) and by reformulating the feature-tracking/feature-matching process to achieve high performance on embedded hardware architectures, such as FPGA or CUDA. In order to increase the point cloud density, we propose new feature-tracking/feature-matching and depth-from-motion algorithms that are extensions of the stereo matching problem. Two different hardware architectures (based on FPGA and CUDA, respectively), fully compliant with real-time embedded constraints, are then presented. Experimental results show that accurate camera pose estimates can be obtained. Compared with previous monocular systems, we rank 5th in the KITTI benchmark suite, with a higher processing speed (we are the fastest algorithm in the benchmark) and a point cloud more than ten times denser than previous approaches.
13

Gonzalez Cadenillas, Clayder Alejandro. "An improved feature extractor for the lidar odometry and mapping algorithm." Tesis, Universidad de Chile, 2019. http://repositorio.uchile.cl/handle/2250/171499.

Abstract:
Thesis submitted for the degree of Master of Engineering Sciences, Electrical Engineering
Feature extraction is a critical task in feature-based Simultaneous Localization and Mapping (SLAM), which is one of the most important problems in the robotics community. An algorithm that solves SLAM using LiDAR-based features is the LiDAR Odometry and Mapping (LOAM) algorithm, currently considered the best-performing SLAM algorithm on the KITTI benchmark. The LOAM algorithm solves SLAM through a feature-matching approach, and its feature extraction algorithm classifies the points of a point cloud as planar or sharp. This classification results from an equation that defines a smoothness level for each point. However, this equation does not consider the range noise of the sensor. Therefore, if the LiDAR's range noise is high, LOAM's feature extractor can confuse planar and sharp points, causing the feature-matching task to fail. This thesis proposes replacing the feature extraction algorithm of the original LOAM with the Curvature Scale Space (CSS) algorithm, chosen after studying several feature extractors in the literature. The CSS algorithm can potentially improve the feature extraction task in noisy environments thanks to its various levels of Gaussian smoothing. The replacement of LOAM's original feature extractor with the CSS algorithm was accomplished by adapting the CSS algorithm to the Velodyne VLP-16 3D LiDAR. The LOAM feature extractor and the CSS feature extractor were tested and compared on simulated and real data, including the KITTI dataset, using the Optimal Sub-Pattern Assignment (OSPA) and Absolute Trajectory Error (ATE) metrics. For all these datasets, the feature extraction performance of CSS was better than that of the LOAM algorithm in terms of both the OSPA and ATE metrics.
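LOAM's smoothness criterion, the point of contention here, is essentially one formula: the norm of the summed difference vectors between a point and its scan-line neighbours, normalised by neighbourhood size and range. The sketch below computes it for a toy 2D scan containing a single corner; the window size is an assumption, and real LOAM adds thresholds and per-sector feature selection on top.

```python
import numpy as np

def loam_smoothness(scan, half_window=5):
    """Per-point smoothness c, in the spirit of LOAM's criterion:
    c_i = || sum_{j in S} (p_j - p_i) || / (|S| * ||p_i||).
    Low c -> planar point, high c -> sharp (edge) point."""
    n = len(scan)
    c = np.full(n, np.nan)
    for i in range(half_window, n - half_window):
        nbrs = np.r_[scan[i - half_window:i], scan[i + 1:i + 1 + half_window]]
        diff = (nbrs - scan[i]).sum(axis=0)
        c[i] = np.linalg.norm(diff) / (2 * half_window * np.linalg.norm(scan[i]))
    return c

# Toy scan ring: a flat wall that changes slope (a corner) at theta = 0.
theta = np.linspace(-0.5, 0.5, 200)
r = np.where(theta < 0, 4.0, 4.0 + 8.0 * theta)
scan = np.c_[r * np.cos(theta), r * np.sin(theta)]
c = loam_smoothness(scan)
print("sharpest point index:", np.nanargmax(c))   # near the corner (~100)
```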
14

Moss, Andrew. "Temporally adjusted complex ambiguity function mapping algorithm for geolocating radio frequency signals." Thesis, Monterey, California: Naval Postgraduate School, 2014. http://hdl.handle.net/10945/44625.

Abstract:
Approved for public release; distribution is unlimited
The Complex Ambiguity Function (CAF) allows simultaneous estimates of the Time Difference of Arrival (TDOA) and Frequency Difference of Arrival (FDOA) for two received signals. The Complex Ambiguity Function Geo-Mapping (CAFMAP) algorithm then directly maps the CAF to geographic coordinates to provide a direct estimate of the emitter's position. The CAFMAP can only use short-duration CAFs, however, because collector motion changes the system geometry over time. In an attempt to mitigate this shortfall and improve geolocation accuracy, the CAFMAP takes multiple CAF snapshots and sums their amplitudes. Unfortunately, this method does not provide the expected accuracy improvement, and a new method is sought. This thesis reformulates the equations used in computing the CAF in order to account for the collector's motion, and uses the results to derive a new CAFMAP algorithm. This new algorithm is implemented in MATLAB, and its results and characteristics are analyzed. The conclusions are as follows: the new algorithm functions as intended, removes the accuracy limitations of the original method, and merits further investigation. Immediate future work should focus on ways to reduce its computation time and on modifying the algorithm to account for three-dimensional geometry, non-linear collector motion, and emitter motion.
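The static-geometry CAF that the thesis sets out to correct can be computed directly on a delay × Doppler grid: for each candidate delay, one FFT of the conjugate product of the two signals scans every Doppler bin at once. The sketch below uses synthetic signals and circular shifts (assumptions for brevity) and recovers a known TDOA/FDOA pair.

```python
import numpy as np

def caf(s1, s2, max_lag):
    """Complex Ambiguity Function magnitude on a delay x Doppler grid.
    For each integer delay tau, the FFT of s1[n] * conj(s2[n + tau])
    scans all Doppler bins at once. (Static-geometry CAF -- the thesis's
    contribution is precisely to correct this for collector motion.)"""
    rows = []
    for tau in range(-max_lag, max_lag + 1):
        prod = s1 * np.conj(np.roll(s2, -tau))   # circular shift for brevity
        rows.append(np.fft.fft(prod))
    return np.abs(np.array(rows))                # shape (2*max_lag+1, N)

# Toy test: s2 is s1 delayed by 3 samples and Doppler-shifted by bin 5.
rng = np.random.default_rng(0)
n = 256
s1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
s2 = np.roll(s1, 3) * np.exp(-2j * np.pi * 5 * np.arange(n) / n)
surface = caf(s1, s2, max_lag=8)
lag_idx, dopp_idx = np.unravel_index(surface.argmax(), surface.shape)
print("TDOA bin:", lag_idx - 8, "FDOA bin:", dopp_idx)   # expect 3 and 5
```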
15

Guo, Yunbo. "Multi-population genetic algorithm for the mapping of landscape of complex function." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?PHYS%202009%20GUO.

16

Kartal Koc, Elcin. "An Algorithm For The Forward Step Of Adaptive Regression Splines Via Mapping Approach." PhD thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12615012/index.pdf.

Abstract:
In high-dimensional data modeling, Multivariate Adaptive Regression Splines (MARS) is a well-known nonparametric regression technique for approximating the nonlinear relationship between a response variable and the predictors with the help of splines. MARS uses piecewise linear basis functions, separated from each other at breaking points (knots), for function estimation. The model-estimating function is generated in a two-step procedure: forward selection and backward elimination. In the first step, a general model including many basis functions (and hence knot points) is generated; in the second, the basis functions contributing least to the overall fit are eliminated. In the conventional adaptive spline procedure, knots are selected from the set of distinct data points, which makes the forward selection procedure computationally expensive and leads to high local variance. To avoid these drawbacks, it is possible to select the knot points from a subset of the data points, which leads to data reduction. In this study, a new method (called S-FMARS) is proposed to select the knot points using a self-organizing-map-based approach that transforms the original data points to a lower-dimensional space. Thus, fewer knot points need to be evaluated for model building in the forward selection step of the MARS algorithm. Results obtained from simulated datasets and six real-world datasets show that the proposed method is time-efficient in model construction without degrading model accuracy or prediction performance. In this study, the proposed approach is implemented in the MARS and CMARS methods as an alternative to their forward step, improving them by decreasing their computing time.
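The forward step being accelerated works roughly as follows: repeatedly add the mirrored hinge pair max(0, x − k) and max(0, k − x) whose knot k most reduces the residual sum of squares. In the single-predictor sketch below, quantiles stand in for the thesis's SOM-derived knot subset (an assumption for illustration):

```python
import numpy as np

def hinge(x, knot):
    return np.maximum(0.0, x - knot), np.maximum(0.0, knot - x)

def mars_forward(x, y, knots, max_terms=2):
    """Greedy MARS-style forward selection on one predictor.
    Candidate knots come from a reduced set (the thesis builds it with a
    self-organizing map; here quantiles stand in)."""
    B = np.ones((len(x), 1))                    # basis: start with intercept
    for _ in range(max_terms):
        best = None
        for k in knots:
            h1, h2 = hinge(x, k)
            Bk = np.c_[B, h1, h2]
            coef, *_ = np.linalg.lstsq(Bk, y, rcond=None)
            sse = ((y - Bk @ coef) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, Bk)
        B = best[1]                             # commit the best hinge pair
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return B, coef

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 300)
y = np.abs(x) + 0.1 * rng.standard_normal(300)    # kink at 0 suits hinges
knots = np.quantile(x, np.linspace(0.1, 0.9, 9))  # reduced candidate set
B, coef = mars_forward(x, y, knots)
print("basis functions fitted:", B.shape[1])
```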
17

Gennari, Rosella. "Mapping Inferences: Constraint Propagation and Diamond Satisfaction." Diss., Universiteit van Amsterdam, 2002. http://hdl.handle.net/10919/71553.

Abstract:
The main theme shared by the two main parts of this thesis is EFFICIENT AUTOMATED REASONING. Part I is focussed on a general theory underpinning a number of efficient approximate algorithms for Constraint Satisfaction Problems (CSPs), the constraint propagation algorithms. In Chapter 3, we propose a Structured Generic Algorithm schema (SGI) for these algorithms. This iterates functions according to a certain strategy, i.e. by searching for a common fixpoint of the functions. A simple theory for SGI is developed by studying properties of functions and the ways these influence the basic strategy. One of the primary objectives of our theorisation is thus the following: using SGI or some of its variations for DESCRIBING and ANALYSING HOW the "pruning" and "propagation" process is carried through by constraint propagation algorithms. Hence, in Chapter 4, different domains of functions (e.g., domain orderings) are related to different classes of constraint propagation algorithms (e.g., arc consistency algorithms); thus each class of constraint propagation algorithms is associated with a "type" of function domain, and so separated from the others. We then analyse each such class: we distinguish functions on the same domains by their different ways of performing pruning (point-based or set-based), and consequently differentiate between algorithms of the same class (e.g., AC-1 and AC-3 versus AC-4 or AC-5). We also show how properties of functions (e.g., commutativity or stationarity) are related to different propagation strategies in constraint algorithms of the same class (see, for instance, AC-1 versus AC-3). In Chapter 5 we apply the SGI schema to the case of soft CSPs (a generalisation of CSPs with a sort of preferences), thereby clarifying some of the similarities and differences between the "classical" and soft constraint propagation algorithms. Finally, in Chapter 6, we summarise and characterise all the functions used for constraint propagation; in fact, the other goal of our theorisation is abstracting WHICH functions, iterated as in SGI or its variations, perform the task of "pruning" or "propagating" inconsistencies in constraint propagation algorithms. We focus on relations and relational structures in Part II of the thesis. More specifically, modal languages allow us to talk about various relational structures and their properties. Once the latter are formulated in a modal language, they can be passed to automated theorem provers and tested for satisfiability with respect to certain modal logics. Our task in this part can be described as follows: determining the satisfiability of modal formulas in an efficient manner. In Chapter 8, we focus on one way of doing this: we refine the standard translation as the layered translation, and use existing theorem provers for first-order logic on the output of this refined translation. We provide ample experimental evidence of the improvements in performance obtained by means of the refinement. The refinement of the standard translation is based on the tree model property. This property is also used in the basic algorithm schema in Chapter 9 (the original schema is due to [seb97]). The proposed algorithm proceeds layer by layer in the modal formula and in its candidate models, applying constraint propagation and satisfaction algorithms for finite CSPs at each layer.
With Chapter 9, we wish to draw the attention of constraint programmers to modal logics, and of modal logicians to CSPs. Modal logics themselves express interesting problems in terms of relations and unary predicates, like temporal reasoning tasks. On the other hand, constraint algorithms manipulate relations in the form of constraints, and unary predicates in the form of domains or unary constraints (see Chapter 6). Thus the question of how efficiently those algorithms can be applied to modal reasoning problems seems quite natural and challenging.
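Arc consistency is the canonical instance of the fixpoint iteration that the SGI schema describes: each arc contributes a domain-revision function, and the functions are iterated until a common fixpoint is reached. A minimal AC-3 sketch (the toy CSP at the end is an invented example):

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3 arc consistency: iterate arc-revision functions to a common
    fixpoint, pruning domain values with no support. `constraints` maps
    ordered pairs (x, y) to a predicate on (value_x, value_y)."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # Revise x's domain against y: keep values with some support in y.
        revised = {vx for vx in domains[x]
                   if any(pred(vx, vy) for vy in domains[y])}
        if revised != domains[x]:
            domains[x] = revised
            # Re-enqueue arcs that revise other variables against x.
            queue.extend(arc for arc in constraints if arc[1] == x)
    return domains

# Toy CSP: x < y < z over {1, 2, 3}.
doms = {v: {1, 2, 3} for v in "xyz"}
cons = {("x", "y"): lambda a, b: a < b, ("y", "x"): lambda a, b: a > b,
        ("y", "z"): lambda a, b: a < b, ("z", "y"): lambda a, b: a > b}
print(ac3(doms, cons))   # x: {1}, y: {2}, z: {3}
```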
18

Andreasson, Erik, and Amanda Axelsson. "Comparing technologies and algorithms behind mapping and routing APIs for Electric Vehicles." Thesis, Tekniska Högskolan, Jönköping University, JTH, Datateknik och informatik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-50019.

Abstract:
The fast-developing electric vehicle industry is growing, and so is the driver community, which puts pressure on the electric charging grid. The purpose of this thesis is to make it easier for drivers of electric cars to charge their cars during trips. The research questions investigated are: "How do the technologies and algorithms behind navigation APIs differ from each other?" and "What information is provided by the charging station APIs, and how do they collect data about new stations?". Information for the thesis was collected by reading and analyzing both documentation and previous work, as well as by conducting experiments. The study was limited to purely electric vehicles. We created an application, which we call ChargeX, to conduct experiments on the API combination of Mapbox and Open Charge Map. We compare TomTom, Tesla, Plugshare, Google Maps, and ChargeX. The most common shortest-path algorithms are Dijkstra's, A*, and Bidirectional A*; they provide reasonable solutions to the shortest-path problem. The algorithms can be improved by considering traffic flow, travel time, and the distance between origin and destination, and applying these as weights on the edges. What has the largest impact on the final route is the choice of charging stations. The algorithm for picking charging stations can be optimized in several ways, for example by considering real-time availability information for the charging stations, prioritizing highways, accounting for the impact of temperature and altitude on the battery, or prioritizing faster chargers such as superchargers for Tesla.
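For reference, the baseline all of these routing engines share is a shortest-path search over a weighted road graph. The sketch below is plain Dijkstra; in the EV setting the edge weights would blend distance, travel time, and traffic, and charging stations would enter as forced intermediate stops. The graph is an invented toy.

```python
import heapq

def dijkstra(graph, src, dst):
    """Plain Dijkstra shortest path. graph[u] = [(v, w), ...]."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

graph = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)],
         "C": [("D", 1)], "D": []}
print(dijkstra(graph, "A", "D"))   # (['A', 'B', 'C', 'D'], 4.0)
```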
19

Cabrerizo, Mercedes. "A new algorithm for electroencephalogram functional brain mapping based on an auditory-comprehension process." FIU Digital Commons, 2003. http://digitalcommons.fiu.edu/etd/1959.

Abstract:
The main objective of this thesis is to gain insight into the dynamics of the human brain through electroencephalography (EEG) analysis, with emphasis placed on characterizing the effects of an auditory/comprehension task. A thorough examination of the EEG recordings was accomplished through the use of the most common brain waves (alpha, beta, delta, and theta). Conceivably, as the EEG recordings based on these tasks become better understood, their use as a helpful tool in mapping the different functions of the brain will become more effective. The EEG data were collected from 15 patients at Miami Children's Hospital using the ESI-256 system. A final evaluation of spectral arrays is performed based on comprehensive color topographic maps of the various induced brain activities. This representation allows us to bring out how different patients react under different circumstances and, consequently, to detect neurological disorders such as Attention Deficit Disorder (ADD).
20

Zamir, Syed Waqas. "Perceptually-inspired gamut mapping for display and projection technologies." Doctoral thesis, Universitat Pompeu Fabra, 2017. http://hdl.handle.net/10803/404677.

Abstract:
The cinema and television industries are continuously working on the development of image features that can provide a better visual experience to viewers; these image attributes include large spatial resolution, high temporal resolution (frame rate), greater contrast and, recently, with emerging display technologies, a much wider color gamut. The gamut of a device is the set of colors that the device is capable of reproducing. Gamut Mapping Algorithms (GMAs) transform the colors of the original content to the color palette of the display device, with the simultaneous goals of (a) reproducing content accurately while preserving the artistic intent of the original content's creator and (b) exploiting the full color-rendering potential of the target display device. There are two types of gamut mapping: Gamut Reduction (GR) and Gamut Extension (GE). GR involves the transformation of colors from a larger source gamut to a smaller destination gamut, whereas in GE colors are mapped from a smaller source gamut to a larger destination gamut. In this thesis we propose three spatial Gamut Reduction Algorithms (GRAs) and four spatial Gamut Extension Algorithms (GEAs). These methods comply with some basic global and local perceptual properties of human vision, producing state-of-the-art results that appear natural and are perceptually faithful to the original material. Moreover, we present a psychophysical evaluation of GEAs specifically for cinema, using a digital cinema projector under cinematic (low ambient light) conditions; to the best of our knowledge this is the first evaluation of this kind reported in the literature. We also show how currently available image quality metrics, when applied to the gamut extension problem, provide results that do not correlate well with users' choices.
21

FIGUEIREDO, AURELIO MORAES. "MAPPING HORIZONS AND SEISMIC FAULTS FROM 3D SEISMIC DATA USING THE GROWING NEURAL GAS ALGORITHM." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=11341@1.

Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
GRUPO DE TECNOLOGIA DE COMPUTAÇÃO GRÁFICA - PUC-RIO
In this work we present a clustering-based method to map seismic horizons and faults from 3D seismic data. We describe a method for quantizing an input seismic volume using a trained instance of the Growing Neural Gas (GNG) algorithm. To accomplish this task we create a training set in which each sample corresponds to a voxel of the input volume, retaining its vertical neighborhood information. After the training procedure, the resulting graph is used to create a quantized version of the original volume, in which possible ambiguities and imperfections present in the input volume tend to be minimized and both horizons and faults become more evident in the data. We then present a method that uses the quantized volume to map seismic horizons, even in the presence of complex geological structures — for example, horizons whose portions are completely disconnected by one or several seismic faults. We also present another method that uses the quantized volume to map the seismic faults themselves. The horizon mapping procedure, tested on different volumes, yields quite promising results. The preliminary results for the fault mapping procedure are also encouraging, suggesting the technique may become a good alternative for that task, but they need further testing.
22

Ayar, Yusuf Yavuz. "Design And Simulation Of A Flash Translation Layer Algorithm." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611995/index.pdf.

Abstract:
Flash memories have been widely used as storage media in electronic devices such as USB flash drives, mobile phones and cameras. Flash memory offers a portable and non-volatile design, which can be carried everywhere without data loss, and it is durable against temperature and humidity. With all these advantages, flash memory gets more popular day by day. However, flash memory also has some disadvantages, such as the erase-before-write restriction and the erase limitation of each individual block. The erase-before-write restriction requires every single writable unit to be erased before an update operation, and every block can be erased only up to a fixed number of times. The Flash Translation Layer (FTL) is the solution to these disadvantages. The Flash Translation Layer is a software module inside the flash memory working between the operating system and the memory. The FTL tries to reduce these disadvantages of flash memory by implementing a garbage collector, an address mapping scheme, error correction and many others. Various Flash Translation Layer implementations exist; some of them are reviewed here in terms of their advantages and disadvantages. The study aims at designing, implementing and simulating a NAND-type FTL algorithm.
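The address-mapping idea at the heart of any FTL can be shown with a toy page-level mapping table: an update is written out of place to an already-erased page and the logical-to-physical map is redirected, so the erase-before-write restriction is never violated on the write path. A minimal sketch (garbage collection, wear levelling and error correction omitted):

```python
class PageMapFTL:
    """Toy page-level FTL: out-of-place updates via a logical->physical map.
    Real FTLs add garbage collection, wear levelling and ECC on top."""

    def __init__(self, n_pages):
        self.flash = [None] * n_pages       # physical pages
        self.l2p = {}                       # logical -> physical map
        self.free = list(range(n_pages))    # erased pages ready to program
        self.invalid = set()                # stale pages awaiting erase

    def write(self, lpn, data):
        ppn = self.free.pop(0)              # erase-before-write: only program
        self.flash[ppn] = data              # pages that are already erased
        if lpn in self.l2p:
            self.invalid.add(self.l2p[lpn])  # old copy becomes stale
        self.l2p[lpn] = ppn

    def read(self, lpn):
        return self.flash[self.l2p[lpn]]

ftl = PageMapFTL(8)
ftl.write(0, "v1")
ftl.write(0, "v2")                          # update goes to a new page
print(ftl.read(0), "stale pages:", ftl.invalid)   # v2, {0}
```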
23

Almqvist, Saga, and Lana Nore. "Where to Stack the Chocolate? : Mapping and Optimisation of the Storage Locations with Associated Transportation Cost at Marabou." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-135912.

Abstract:
Today, inventory management at Marabou is organised in such a way that articles are stored based on the production line they belong to and are sent to storage locations close to that line. However, some storage locations are not optimised, insofar as articles are stored out of pure habit and according to what is considered most convenient. This means that the storage locations are not based on any fixed instructions or standard. In this report, we propose optimal storage locations with respect to transportation cost by modelling the problem mathematically as a minimal-cost matching problem, which we solve using the so-called Hungarian algorithm. To be able to implement the Hungarian algorithm, we collected data on the stock levels of articles in the factory throughout 2016 and adjusted the collected data by converting the articles into units of pallets. We considered three different implementations of the Hungarian algorithm, and the results from the different approaches are presented together with several suggestions regarding pallet optimisation. In addition to the theoretical background, our work is based on an empirical study through participant observations as well as qualitative interviews with factory employees. We thus also offer several further suggestions for efficiency savings and improvements at the factory, as well as for further work building on this report.
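The pallet-to-location assignment described above is a linear sum assignment problem, and SciPy ships a solver based on the Hungarian method. A minimal sketch with an invented cost matrix (not the factory's data):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy cost matrix: transport cost of serving production line i from
# storage location j (numbers are illustrative only).
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.5, 5.0],
                 [3.0, 2.0, 2.0]])

rows, cols = linear_sum_assignment(cost)   # Hungarian-method solver
for line, loc in zip(rows, cols):
    print(f"production line {line} -> storage location {loc}")
print("total transport cost:", cost[rows, cols].sum())
```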
24

Kharbouch, Alaa Amin. "A bacterial algorithm for surface mapping using a Markov modulated Markov chain model of bacterial chemotaxis." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/36186.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (p. 83-85).
Bacterial chemotaxis is the locomotory response of bacteria to chemical stimuli. E. coli movement can be described as a biased random walk, and it is known that the general biological or evolutionary function is to increase exposure to some substances and reduce exposure to others. In this thesis we introduce an algorithm for surface mapping, which tracks the motion of a bacteria-like software agent (based on a low-level model of the biochemical network responsible for chemotaxis) on a surface or objective function. Towards that end, a discrete Markov modulated Markov chains model of the chemotaxis pathway is described and used. Results from simulations using one- and two-dimensional test surfaces show that the software agents, referred to as bacterial agents, and the surface mapping algorithm can produce an estimate which shares some broad characteristics with the surface and uncovers some features of it. We also demonstrate that the bacterial agent, when given the ability to reduce the value of the surface at locations it visits (analogous to consuming a substance on a concentration surface), is more effective in reducing the surface integral within a certain period of time when compared to a bacterial agent lacking the ability to sense surface information or respond to it.
by Alaa Amin Kharbouch.
S.M.
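The locomotion policy underlying the bacterial agent can be caricatured in a few lines: run straight, and tumble (re-orient at random) with a probability that depends on whether the sensed value just improved — the biased random walk described above. The sketch below is only this caricature, on an invented objective; the thesis's Markov modulated Markov chain model of the chemotaxis pathway is far richer.

```python
import numpy as np

def surface(p):
    return float((p ** 2).sum())          # toy objective; minimum at origin

rng = np.random.default_rng(0)
pos = np.array([3.0, -2.0])
heading = rng.uniform(0, 2 * np.pi)
last = surface(pos)
for _ in range(2000):
    pos += 0.02 * np.array([np.cos(heading), np.sin(heading)])  # "run"
    val = surface(pos)
    # Biased random walk: tumble (pick a new heading) with high probability
    # when things got worse, low probability when they improved.
    p_tumble = 0.5 if val > last else 0.05
    if rng.random() < p_tumble:
        heading = rng.uniform(0, 2 * np.pi)                     # "tumble"
    last = val
print(np.round(pos, 2))   # should have wandered toward the minimum at (0, 0)
```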
25

Bhave, Sampada Vasant. "Novel dictionary learning algorithm for accelerating multi-dimensional MRI applications." Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/2182.

Abstract:
The clinical utility of multi-dimensional MRI applications like multi-parameter mapping and 3D dynamic lung imaging is limited by long acquisition times. Quantification of multiple tissue MRI parameters has been shown to be useful for early detection and diagnosis of various neurological diseases and psychiatric disorders; it also provides useful information about disease progression and treatment efficacy. Dynamic lung imaging enables the diagnosis of abnormalities in respiratory mechanics in dyspnea and of regional lung function in pulmonary diseases like chronic obstructive pulmonary disease (COPD) and asthma. However, the need to acquire multiple contrast-weighted images, as in multi-parameter mapping, or multiple time points, as in pulmonary imaging, makes these applications less practical in the clinical setting, as it increases the scan time considerably. To achieve reasonable scan times, there are often tradeoffs between SNR and resolution. Since most MRI images are sparse in a known transform domain, they can be recovered from fewer samples. Several compressed sensing schemes have been proposed which exploit the sparsity of the signal in pre-determined transform domains (e.g., the Fourier domain) or exploit the low-rank characteristic of the data. However, these methods perform sub-optimally in the presence of inter-frame motion, since the pre-determined dictionary does not account for the motion and the rank of the data is considerably higher. They rely on a two-step approach: first estimate the dictionary from low-resolution data, then use these basis functions to estimate the coefficients by fitting the measured data to the signal model. The main focus of this thesis is accelerating multi-parameter mapping and 3D dynamic lung imaging to achieve the desired volume coverage and spatio-temporal resolution. We propose a novel dictionary learning framework called blind compressed sensing (BCS) to recover the underlying data from undersampled measurements, in which the underlying signal is represented as a sparse linear combination of basis functions from a learned dictionary. We also provide an efficient implementation using a variable splitting technique to reduce the computational complexity by up to 15-fold. In both multi-parameter mapping and 3D dynamic lung imaging, comparisons of the BCS scheme with other schemes indicate superior performance, as it provides a richer representation of the data. The reconstructions from the BCS scheme result in high-accuracy parameter maps in parameter imaging and diagnostically relevant image series for characterizing respiratory mechanics in pulmonary imaging.
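The "blind" part of BCS is that neither the dictionary nor its sparse coefficients are known in advance: both are estimated by alternating minimization. The sketch below alternates a few ISTA soft-thresholding steps for the codes with a least-squares (MOD-style) dictionary update on synthetic data — an assumed simplification of the thesis's formulation and of its variable-splitting implementation.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
# Synthetic data: 64-dim signals that are sparse in a hidden dictionary.
D_true = rng.standard_normal((64, 20))
X_true = soft(rng.standard_normal((20, 500)), 1.0)       # sparse codes
Y = D_true @ X_true + 0.01 * rng.standard_normal((64, 500))

# Blind alternation: neither D nor X is known in advance.
D = rng.standard_normal((64, 20))
D /= np.linalg.norm(D, axis=0)
X = np.zeros((20, 500))
lam = 0.1
for _ in range(50):
    # Sparse-coding step: a few ISTA iterations at fixed D.
    L = np.linalg.norm(D, 2) ** 2                        # Lipschitz constant
    for _ in range(10):
        X = soft(X - (D.T @ (D @ X - Y)) / L, lam / L)
    # Dictionary step (MOD-style): least-squares fit at fixed X.
    D = Y @ np.linalg.pinv(X)
    D /= np.linalg.norm(D, axis=0) + 1e-12               # renormalise atoms
print("relative residual:",
      round(np.linalg.norm(Y - D @ X) / np.linalg.norm(Y), 3))
```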
26

Kim, Kyung Cheol. "Calibration and validation of high frequency radar for ocean surface current mapping." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Jun%5FKim.pdf.

27

Casella, Stacey E. "Gamut extension algorithm development and evaluation for the mapping of standard image content to wide-gamut displays." Online version of thesis, 2008. http://hdl.handle.net/1850/8416.

28

Li, Anthony. "Mapping the site of origin of ventricular arrhythmias : the development and testing of a novel pacemapping algorithm." Thesis, St George's, University of London, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.703122.

Abstract:
Background: Catheter ablation is a successful tool for the treatment of ventricular arrhythmias (VA), but procedures are long and complex. Pacemapping is used to locate the site of origin (SO) of VA, but there is currently no guide as to where the catheter should be placed in order to locate the SO. Aims: 1. To identify variables related to intracardiac catheter location that may alter the paced QRS morphology. 2. To investigate the relationship between change in surface ECG morphology and distance within the ventricles. 3. To test a novel software algorithm that uses the above relationship in individual patients to prospectively locate the SO of VA. Methods: Patients undergoing ablation of VAs were enrolled. Data from pacemapping within tissue with preserved myocardial voltage were collected, and measurements were made on QRS morphology and intracardiac electrograms. Data on catheter position and chamber geometry were extracted from a 3D mapping system into custom software to construct linear regression models of distance against morphology difference. A novel software algorithm to automatically locate the SO of VAs was prospectively tested. Results: 935 pacemaps were collected in 68 patients over 74 procedures. QRS width was associated with pacing within dense scar tissue. 6219 pacemap pairs were used in distance-similarity regression models. Distance was significantly and positively associated with change in ECG morphology between patients and across ventricles, despite the presence of structural heart disease. The software algorithm was tested on 46 clinical VAs in 35 separate procedures and correctly identified the exit site of 45/46 VAs. Conclusions: There is a robust relationship between distance and difference in surface ECG morphology when pacemapping in myocardium with preserved voltage. This relationship can be constructed in individual patients using a software algorithm and used to identify the SO of VAs in an automated manner.
29

Bourisly, Ali Khaled. "Neuronal Correlates of Diacritics and an Optimization Algorithm for Brain Mapping and Detecting Brain Function by way of Functional Magnetic Resonance Imaging." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-dissertations/113.

Full text
Abstract:
The purpose of this thesis is threefold: 1) a behavioral examination of the role of diacritics in Arabic, 2) a functional magnetic resonance imaging (fMRI) investigation of diacritics in Arabic, and 3) an optimization algorithm for brain mapping and detecting brain function. Firstly, the role of diacritics in Arabic was examined behaviorally. The stimulus was a lexical decision task (LDT) that consisted of low-, mid-, and high-frequency words and nonwords, with and without diacritics. Results showed that the presence of vowel diacritics slowed reaction time but did not affect word recognition accuracy. The longer reaction times for words with diacritics versus without diacritics suggest that the diacritics may contribute to differences in word recognition strategies. Secondly, an event-related fMRI experiment of lexical decisions associated with real words with versus without diacritics in Arabic readers was conducted. Real words with no diacritics yielded shorter response times and stronger activation than real words with diacritics in the hippocampus and middle temporal gyrus, possibly reflecting a search from among multiple meanings associated with these words in a semantic store. In contrast, real words with diacritics had longer response times than real words without diacritics and activated the insula and frontal areas, suggestive of phonological and semantic mediation in lexical retrieval. Both the behavioral and fMRI results in this study appear to support a role for diacritics in reading Arabic. The third research work in this thesis is an optimization algorithm for fMRI data analysis. Current data-driven approaches for fMRI data analysis, such as independent component analysis (ICA), rely on algorithms that may have low computational expense but are much more prone to suboptimal results. In this work, a genetic algorithm (GA) based on a clustering technique was designed, developed, and implemented for fMRI ICA data analysis. Results for the algorithm, GAICA, showed that although it may be computationally expensive, it provides global optimum convergence and results. Therefore, GAICA can be used as a complementary or supplementary technique for brain mapping and detecting brain function by way of fMRI.
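For readers unfamiliar with the genetic-algorithm loop that GAICA builds on, here is a minimal generic sketch with a toy one-dimensional fitness function; it is not the clustering-based GA of the thesis, and all parameters are illustrative.

```python
# Minimal GA skeleton: keep a population of candidate solutions,
# select the fittest, recombine, and mutate over many generations.
import random

random.seed(5)

def fitness(x):
    return -(x - 3.0) ** 2                      # toy objective, peak at x = 3

pop = [random.uniform(-10, 10) for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                          # truncation selection
    children = [
        (random.choice(parents) + random.choice(parents)) / 2  # crossover
        + random.gauss(0, 0.1)                                  # mutation
        for _ in range(10)
    ]
    pop = parents + children
print(round(max(pop, key=fitness), 2))          # converges near 3.0
```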
APA, Harvard, Vancouver, ISO, and other styles
30

Yenket, Renoo. "Understanding methods for internal and external preference mapping and clustering in sensory analysis." Diss., Kansas State University, 2011. http://hdl.handle.net/2097/8770.

Full text
Abstract:
Doctor of Philosophy
Department of Human Nutrition
Edgar Chambers IV
Preference mapping is a method that provides product development direction by letting developers see a whole picture of products, liking, and relevant descriptors in a target market. Many statistical methods and commercial statistical software programs offering preference mapping analyses are available to researchers. Because of the numerous available options, there are two questions addressed in this research that most scientists must answer before choosing a method of analysis: 1) are the different methods providing the same interpretation, co-ordinate values, and object orientation; and 2) which method and program should be used with the data provided? This research used data from paint, milk, and fragrance studies, representing complexity from lesser to higher. The techniques used are principal component analysis, multidimensional preference mapping (MDPREF), modified preference mapping (PREFMAP), canonical variate analysis, generalized procrustes analysis, and partial least squares regression, utilizing the statistical software programs SAS, Unscrambler, Senstools, and XLSTAT. Moreover, the homogeneity of the consumer data was investigated through hierarchical cluster analysis (McQuitty’s similarity analysis, median, single linkage, complete linkage, average linkage, and Ward’s method), a partitional algorithm (the k-means method), and a nonparametric method, versus four manual clustering groups (strict, strict-liking-only, loose, and loose-liking-only segments). The manual clusters were extracted according to the products most frequently rated highest for best liked and least liked on hedonic ratings. Furthermore, the impact of plotting preference maps for individual clusters was explored with and without the use of an overall mean liking vector. Results illustrated that the various statistical software programs did not produce the same orientations and co-ordinate values, even when using the same preference method. Also, if data were not highly homogeneous, interpretation could differ. Most computer cluster analyses did not segment consumers according to their preferences and did not yield clusters as homogeneous as manual clustering. The interpretation of preference maps created from the most homogeneous clusters improved little when applied to complicated data. Researchers should look at key findings from univariate data in descriptive sensory studies to obtain accurate interpretations and suggestions from the maps, especially for external preference mapping. When researchers make recommendations based on an external map alone for complicated data, preference maps may be overused.
APA, Harvard, Vancouver, ISO, and other styles
31

Botterill, Tom. "Visual navigation for mobile robots using the Bag-of-Words algorithm." Thesis, University of Canterbury. Computer Science and Software Engineering, 2011. http://hdl.handle.net/10092/5511.

Full text
Abstract:
Robust long-term positioning for autonomous mobile robots is essential for many applications. In many environments this task is challenging, as errors accumulate in the robot’s position estimate over time. The robot must also build a map so that these errors can be corrected when mapped regions are re-visited; this is known as Simultaneous Localisation and Mapping, or SLAM. Successful SLAM schemes have been demonstrated which accurately map tracks of tens of kilometres; however, these schemes rely on expensive sensors such as laser scanners and inertial measurement units. A more attractive, low-cost sensor is a digital camera, which captures images that can be used to recognise where the robot is, and to incrementally position the robot as it moves. SLAM using a single camera is challenging however, and many contemporary schemes suffer complete failure in dynamic or featureless environments, or during erratic camera motion. An additional problem, known as scale drift, is that cameras do not directly measure the scale of the environment, and errors in relative scale accumulate over time, introducing errors into the robot’s speed and position estimates. Key to a successful visual SLAM system is the ability to continue operation despite these difficulties, and to recover from positioning failure when it occurs. This thesis describes the development of such a scheme, which is known as BoWSLAM. BoWSLAM enables a robot to reliably navigate and map previously unknown environments, in real-time, using only a single camera. In order to position a camera in visually challenging environments, BoWSLAM combines contemporary visual SLAM techniques with four new components. Firstly, a new Bag-of-Words (BoW) scheme is developed, which allows a robot to recognise places it has visited previously, without any prior knowledge of its environment. This BoW scheme is also used to select the best set of frames to reconstruct positions from, and to find efficient wide-baseline correspondences between many pairs of frames. Secondly, BaySAC, a new outlier-robust relative pose estimation scheme based on the popular RANSAC framework, is developed. BaySAC allows the efficient computation of multiple position hypotheses for each frame. Thirdly, a graph-based representation of these position hypotheses is proposed, which enables the selection of only reliable position estimates in the presence of gross outliers. Fourthly, as the robot explores, objects in the world are recognised and measured. These measurements enable scale drift to be corrected. BoWSLAM is demonstrated mapping a 25 minute 2.5km trajectory through a challenging and dynamic outdoor environment in real-time, and without any other sensor input; considerably further than previous single camera SLAM schemes.
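The Bag-of-Words matching step can be illustrated with a short sketch: frames are reduced to histograms over a visual vocabulary and compared by cosine similarity. The vocabulary and descriptors below are random stand-ins, not features from real images.

```python
# Minimal BoW place-recognition sketch: quantize descriptors against a
# vocabulary of visual words, histogram the word counts, compare frames.
import numpy as np

rng = np.random.default_rng(2)
vocab = rng.standard_normal((50, 16))            # 50 visual word centres (toy)

def bow_histogram(descriptors):
    # Assign each descriptor to its nearest visual word, count occurrences.
    d = np.linalg.norm(descriptors[:, None, :] - vocab[None, :, :], axis=2)
    words = d.argmin(axis=1)
    return np.bincount(words, minlength=len(vocab)).astype(float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

frame_a = rng.standard_normal((120, 16))                 # descriptors, frame A
frame_b = frame_a + rng.normal(0, 0.05, frame_a.shape)   # revisit of same place
frame_c = rng.standard_normal((120, 16))                 # a different place

ha, hb, hc = map(bow_histogram, (frame_a, frame_b, frame_c))
print("same place:", cosine(ha, hb))             # high score expected
print("new place :", cosine(ha, hc))             # lower score expected
```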
APA, Harvard, Vancouver, ISO, and other styles
32

McCloy, K. R. "Development and evaluation of a remote sensing algorithm suitable for mapping environments containing significant spatial variability : with particular reference to pastures /." Title page and table of contents only, 1987. http://web4.library.adelaide.edu.au/theses/09PH/09phm127.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Tolman, Matthew A. "A Detailed Look at the Omega-k Algorithm for Processing Synthetic Aperture Radar Data." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2634.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Makhubela, J. K. "Visual simultaneous localization and mapping in a noisy static environment." Thesis, Vaal University of Technology, 2019. http://hdl.handle.net/10352/462.

Full text
Abstract:
M. Tech. (Department of Information and Communication Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology
Simultaneous Localization and Mapping (SLAM) has seen tremendous interest amongst the research community in recent years due to its ability to make the robot truly independent in navigation. Visual Simultaneous Localization and Mapping (VSLAM) is when an autonomous mobile robot is embedded with a vision sensor, such as a monocular, stereo vision, omnidirectional, or Red Green Blue Depth (RGBD) camera, to localize and map an unknown environment. The purpose of this research is to address the problem of environmental noise, such as light intensity in a static environment, an issue that makes a VSLAM system ineffective. In this study, we have introduced a Light Filtering Algorithm into the VSLAM method to reduce the amount of noise and improve the robustness of the system in a static environment, together with the Extended Kalman Filter (EKF) algorithm for localization and mapping and the A* algorithm for navigation. Simulations were used to evaluate performance. Experimental results show detection of 60% of the total landmarks or land features within a simulated environment and a root mean square error (RMSE) of 0.13 m, which is small when compared with other SLAM systems from the literature. The inclusion of the Light Filtering Algorithm has enabled the VSLAM system to navigate in a visually obscured environment.
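The RMSE figure quoted above is the standard trajectory error metric; a minimal sketch of its computation on synthetic poses follows.

```python
# Minimal sketch of trajectory RMSE: root mean square of the Euclidean
# error between estimated and ground-truth positions (synthetic values).
import numpy as np

truth = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.5], [3.0, 1.0]])
estimate = truth + np.array([[0.1, -0.05], [0.08, 0.1], [-0.12, 0.06], [0.09, -0.11]])

rmse = np.sqrt(np.mean(np.sum((truth - estimate) ** 2, axis=1)))
print(f"trajectory RMSE: {rmse:.3f} m")
```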
APA, Harvard, Vancouver, ISO, and other styles
35

Hall, Bryan, University of Western Sydney, Faculty of Science and Technology, and School of Science. "A review of the environmental resource mapping system and a proof that it is impossible to write a general algorithm for analysing interactions between organisms distributed at locations described by a locationally linked database and physical properties recorded within the database." THESIS_FST_SS_Hall_B.xml, 1994. http://handle.uws.edu.au:8081/1959.7/750.

Full text
Abstract:
The Environmental Resource Mapping System (E-RMS) is a geographic information system (GIS) that is used by the National Parks and Wildlife Service to assist in the management of national parks. The package is available commercially from the Service and is used by other government departments for environmental management. E-RMS has also been present in Australian universities and used for academic work for a number of years. This thesis demonstrates that existing procedures for product quality and performance have not been followed in the production of the package, and that the package, and therefore much of the work undertaken with the package, is fundamentally flawed. The E-RMS software contains and produces a number of serious mistakes. Several problems are identified and discussed in this thesis. As a result of the shortcomings, the author recommends that an enquiry be conducted to investigate: (1) the technical feasibility of each project for which the E-RMS package has been used; (2) the full extent and consequences of the failings inherent in the package; and (3) the suitability of the E-RMS GIS package for the purposes for which it is sold. Australian Standard 3898 requires that the purpose, functions, and limitations of consumer software shall be described. To comply with this standard, users of the E-RMS package would have to be informed of several factors related to it, which are discussed in the research. Failure to consider the usefulness and extractable nature of information in any GIS database will inevitably lead to problems that may endanger the phenomena that the GIS is designed to protect.
Master of Applied Science (Environmental Science)
APA, Harvard, Vancouver, ISO, and other styles
36

Peltonen, Joanna. "Development of effective algorithm for coupled thermal-hydraulics : neutron-kinetics analysis of reactivity transient." Licentiate thesis, Stockholm : Skolan för teknikvetenskap, Kungliga Tekniska högskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-11033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Ozgur, Ayhan. "A Novel Mobile Robot Navigation Method Based On Combined Feature Based Scan Matching And Fastslam Algorithm." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612431/index.pdf.

Full text
Abstract:
The main focus of the study is the implementation of a practical indoor localization and mapping algorithm for large-scale, structured indoor environments. Building an incrementally consistent map while also using it for localization is a partially unsolved problem and of prime importance for mobile robot navigation. Within this framework, a combined method consisting of feature-based scan matching and the FastSLAM algorithm, using LADAR and odometer sensors, is presented. In this method, improved data association and localization accuracy are achieved by feeding the SLAM module with better incremental pose information from scan matching instead of raw odometer output. This thesis presents the following contributions to indoor localization and mapping. Firstly, a method combining feature-based scan matching and FastSLAM is achieved. Secondly, improved geometrical relations are used for scan matching, and a novel method based on vector transformation is used for the calculation of the pose difference. These are carefully studied and tuned based on localization and mapping performance failures encountered in different realistic LADAR datasets. Thirdly, in addition to position, the use of orientation information in line segment and corner oriented data association is presented as an extension in the FastSLAM module. The method is tested with LADAR and odometer data taken from real robot platforms operated in different indoor environments. In addition to using datasets from the literature, our own datasets were collected on a Pioneer 3AT experimental robot platform. As a result, a real-time localization algorithm which is quite successful in large-scale, structured environments is achieved.
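The incremental pose that scan matching feeds to the SLAM module can be illustrated by the standard planar pose-difference computation below; this is a generic sketch, not the thesis's novel vector-transformation method.

```python
# Minimal sketch: express the difference between two planar robot poses
# (x, y, theta) in the frame of the first pose -- the incremental motion
# that scan matching supplies to the SLAM filter.
import numpy as np

def pose_difference(p0, p1):
    """Return p1 expressed relative to p0, both given as (x, y, theta)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    c, s = np.cos(-p0[2]), np.sin(-p0[2])
    return np.array([c * dx - s * dy, s * dx + c * dy, p1[2] - p0[2]])

prev = np.array([1.0, 2.0, np.pi / 4])
curr = np.array([1.5, 2.5, np.pi / 3])
print("incremental pose:", pose_difference(prev, curr))
```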
APA, Harvard, Vancouver, ISO, and other styles
38

Frese, Udo. "An O(log n) algorithm for simultaneous localization and mapping of mobile robots in indoor environments Ein O(log n)-Algorithmus für gleichzeitige Lokalisierung und Kartierung mobiler Roboter in Innenräumen /." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=972029516.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Tran, Quoc Huy Martin, and Carl Ronström. "Mapping and Visualisation of the Patient Flow from the Emergency Department to the Gastroenterology Department at Södersjukhuset." Thesis, KTH, Medicinteknik och hälsosystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279605.

Full text
Abstract:
The Emergency department at Södersjukhuset currently suffers from very long waiting times. This is partly due to problems with the visualisation and mapping of patient data and other information that is fundamental to the handling of patients at the Emergency department. This led to a need to create improvement suggestions for the visualisation of the patient flow between the Emergency department and the Gastroenterology department at Södersjukhuset. During the project, a simulated graphical user interface was created with the purpose of mimicking Södersjukhuset's current patient flow. This simulated user interface visualises the patient flow between the Emergency department and the Gastroenterology department. Additionally, a patient symptom estimation algorithm was implemented to estimate the likelihood of a patient being admitted to a department. The result shows that there are many possible improvements to Södersjukhuset's current hospital information system, TakeCare, that would facilitate the care coordinators' work and in turn lower the waiting times at the Emergency department.
APA, Harvard, Vancouver, ISO, and other styles
40

Werner, Sebastian. "Variabilitätsmodellierung in Kartographierungs- und Lokalisierungsverfahren." Thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-172457.

Full text
Abstract:
Automation plays an ever more important role today; in robotics in particular, new fields of application keep emerging in which humans are replaced by autonomous vehicles. Most of the robots deployed orient themselves by route markings installed in their operating environments. Such systems, however, require considerable installation effort, which drives the development of robot systems that orient themselves using their on-board sensors. Although a multitude of robots exist that could be used for this, the development of the control software is still a subject of research. The control requires a map of the environment that the robot can use to orient itself. SLAM methods, which perform simultaneous localization and mapping, are particularly suitable for this: as the robot moves through space, it builds a map of the environment from its sensor data and localizes itself against it in order to determine its exact position on the map. In the course of this work, more than 30 different SLAM implementations or realizations that solve the SLAM problem were found. For the most part, however, these are stand-alone implementations adapted to specific system or environment requirements. There is no publicly accessible overview that compares all, or most, of the methods with respect to, for example, their mode of operation, system requirements (sensors, robot platform), environment requirements (indoor, outdoor, ...), accuracy, or speed. Many of these SLAMs have implementations and documentation describing their fields of application, test requirements, or improvements relative to other SLAM methods, but given the large number of publications this does not make finding a suitable SLAM method any easier. With such a quantity of SLAM methods and implementations, the following questions arise from a software-engineering point of view: 1. Is it possible to reuse individual parts of a SLAM? 2. Is it possible to exchange individual parts of a SLAM dynamically? The goal of this work is to answer these two questions. To this end, an overview of all the SLAMs found is first compiled in order to distinguish them by their basic properties. From the multitude of methods, the grid-based methods that use laser scanners or depth cameras as sensors are selected as the set to be examined. This subset of SLAM methods is examined more closely with regard to its non-functional properties, and an attempt is made to divide the methods into components that can be reused in several different implementations. From the extracted components, a feature tree is to be built that gives the user an overview and the ability to compose SLAM methods according to specific criteria (system requirements, environments, ...) or to adapt them at runtime. For this purpose, the available SLAM implementations and the associated documentation must be analysed with regard to their commonalities and differences.
APA, Harvard, Vancouver, ISO, and other styles
41

SOUZA, Viviane Lucy Santos de. "Uma metodologia para síntese de circuitos digitais em FPGAs baseada em otimização multiobjetivo." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/17339.

Full text
Abstract:
Nowadays, the evolution of FPGAs (Field Programmable Gate Arrays) allows them to be employed in applications from rapid prototyping of digital circuits to coprocessors for high-performance computing. However, the efficient use of these architectures is heavily dependent, among other factors, on the synthesis tool employed. The synthesis tools' challenge is to convert the designer's logic into circuits that use the chip area effectively, do not degrade the operating frequency, and, especially, are efficient in reducing power consumption. In this sense, researchers and major FPGA manufacturers are often developing new tools to achieve these goals, which are characterized by being conflicting. The synthesis flow of FPGA-based projects comprises the steps of logic optimization, mapping, packing, placement, and routing. These steps are dependent, such that optimizations in the early stages bring positive results in later steps. As part of this doctoral work, we propose a methodology for optimizing the synthesis flow, specifically the mapping and packing steps. Classically, the mapping step is performed by heuristics which determine a solution to the problem but do not allow the search for optimal solutions, or which benefit one goal at the expense of others. Thus, we propose the use of a multi-objective approach based on a genetic algorithm and a multi-objective approach based on an artificial bee colony that, combined with problem-specific heuristics, allow solutions of better quality to be obtained, yielding circuits with reduced area, operating frequency gains, and lower dynamic power consumption. In addition, we propose a new multi-objective clustering approach that differs from the state of the art by using a prediction technique and by considering dynamic characteristics of the problem, producing more efficient circuits that facilitate the tasks of the placement and routing steps. The proposed methodology was integrated into the VTR (Verilog to Routing) academic flow, an open-source and collaborative project with multiple research groups conducting work in the areas of FPGA architecture development and new synthesis tools. Furthermore, we used as a benchmark a set of the 20 largest MCNC (Microelectronics Center of North Carolina) circuits, which are often used in research in the area. The results of the integrated use of tools based on the proposed methodology allow the reduction of important post-routing metrics. Compared to the state of the art, reductions of, on average, 19% in circuit area and 10% in critical path are achieved, together with an 18% decrease in the total estimated dynamic power. The experiments also reveal that the proposed mapping methods are computationally more expensive than methods in the state of the art, and may be up to 4.7x slower. However, the packing methodology presented little or no overhead compared to the method in VTR. Despite the mapping overhead, the proposed methods, when integrated into the complete flow, can reduce the running time of the synthesis by approximately 40%, the result of producing simpler circuits which, consequently, ease the placement and routing steps.
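The selection step shared by the multi-objective genetic and bee-colony approaches rests on Pareto dominance; a minimal sketch with invented (area, delay, power) triples, all objectives minimized, follows.

```python
# Minimal Pareto-dominance sketch: a candidate dominates another if it is
# no worse on every objective and strictly better on at least one.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

solutions = [(100, 9.0, 1.2), (95, 9.5, 1.1), (110, 8.5, 1.3)]  # (area, delay, power)
front = [s for s in solutions if not any(dominates(t, s) for t in solutions if t != s)]
print(front)  # here all three trade off against each other, so all survive
```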
APA, Harvard, Vancouver, ISO, and other styles
42

Lee, Po-Cheng, and 李柏成. "A Tone Mapping Algorithm with Detail Enhancement Based on Retinex Algorithm." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/70759937164346636036.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
100
Because of recent progress in digital camera technology, we can obtain HDRI (High Dynamic Range Images) directly from the camera. Nevertheless, limited by displays, we still must transfer the HDRI to displays that can only show LDRI (Low Dynamic Range Images). This technique is known as tone mapping. The goal of tone mapping is to compress the luminance dynamic range into a low dynamic range while decreasing distortion and preserving detail. We first use a logarithm to compress the high dynamic range based on background luminance. Retinex local contrast enhancement is then performed to enhance the image in dark regions. Our method preserves most of the detail without contrast distortion, especially in dark areas.
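A minimal sketch of the two stages described above, logarithmic range compression followed by a retinex-style local-contrast boost, assuming NumPy and SciPy are available; the blur radius and gain are illustrative, not the thesis's tuned values.

```python
# Minimal tone-mapping sketch: compress luminance with a logarithm, then
# boost local contrast via the ratio of each pixel to its local mean.
import numpy as np
from scipy.ndimage import uniform_filter

def tone_map(luminance, gain=0.6, radius=15):
    compressed = np.log1p(luminance) / np.log1p(luminance.max())
    local_mean = uniform_filter(compressed, size=radius)
    detail = compressed / (local_mean + 1e-6)   # retinex-style ratio
    out = compressed * detail ** gain           # emphasise local detail
    return np.clip(out, 0.0, 1.0)

hdr = np.exp(np.random.default_rng(3).uniform(0, 10, (64, 64)))  # toy HDR scene
ldr = tone_map(hdr)
print(ldr.min(), ldr.max())
```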
APA, Harvard, Vancouver, ISO, and other styles
43

Hung, Pei-Hsiu, and 洪培修. "The Design of Virtual Network Mapping Algorithm." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/48033132484358320626.

Full text
Abstract:
Master's
Ming Chuan University
Master's Program, Department of Computer Science and Information Engineering
99
Cloud computing has been a very popular topic recently, and one of its applications is virtualization. Network virtualization has emerged as a powerful way to allow multiple virtual networks, each customized to a particular application, to run on a common substrate network. Our research consists of two parts: the first is a node mapping algorithm and a link mapping algorithm; the second is a path migration algorithm. The first part focuses on how to use the proposed node mapping and link mapping algorithms to map virtual networks onto a substrate network. The second part focuses on how to use the proposed path migration algorithm to migrate virtual links to different substrate paths, which can improve the substrate's ability to accept more virtual networks. The first part concentrates on mapping problems, which consist of two subproblems. The first is the node mapping algorithm, which focuses on how to map virtual nodes to substrate nodes; a greedy algorithm is proposed to assign virtual nodes to substrate nodes. The second is the link mapping algorithm, which itself involves two problems. One is mapping a virtual link to a single substrate path; the widest path algorithm and the cut-shortest path algorithm are proposed for this problem. The other is mapping a virtual link to multiple substrate paths; in this case, path diversity is enabled in the substrate network, and the cut-shortest path algorithm is proposed for this problem. The second part of the study focuses on path migration in the substrate network. When a new virtual network requests to be mapped onto a substrate network, it is possible that no resource in the substrate network can meet the requirement of the new virtual network. In this situation, path migration must be enabled to re-arrange all the virtual networks that have already been mapped onto the substrate network. The proposed path migration algorithm consists of three steps. First, the virtual nodes in the new virtual network are mapped to substrate nodes using the node mapping algorithm. Second, the algorithm selects one existing virtual network to migrate to other substrate links. Third, if the resources in the substrate can meet the requirement of the new virtual network, the path migration algorithm stops; otherwise, the algorithm continues to migrate the next existing virtual network. The cut-shortest path algorithm is proposed for path migration.
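A minimal sketch of the greedy node mapping step described above; the CPU-only capacity model, the one-virtual-node-per-substrate-node rule, and all names are simplifying assumptions, and link mapping and path migration are omitted.

```python
# Minimal greedy node mapping sketch: place each virtual node, in
# decreasing order of demand, on the feasible substrate node with the
# most remaining capacity; reject the request if no node fits.
def greedy_node_mapping(virtual_cpu, substrate_cpu):
    """virtual_cpu: {vnode: demand}; substrate_cpu: {snode: capacity}.
    Each virtual node goes to a distinct substrate node (a common VN model)."""
    remaining = dict(substrate_cpu)
    mapping = {}
    for vnode, demand in sorted(virtual_cpu.items(), key=lambda kv: -kv[1]):
        candidates = {s: c for s, c in remaining.items() if c >= demand}
        if not candidates:
            return None                                 # request rejected
        snode = max(candidates, key=candidates.get)     # most residual capacity
        mapping[vnode] = snode
        del remaining[snode]
    return mapping

print(greedy_node_mapping({"a": 4, "b": 2, "c": 3}, {"s1": 6, "s2": 5, "s3": 3}))
```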
APA, Harvard, Vancouver, ISO, and other styles
44

Kidar, Lin, and 林奇達. "A New Architecture Specific Technology Mapping Algorithm." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/52142680070757393540.

Full text
Abstract:
Master's
Chung Yuan Christian University
Graduate Institute of Information and Computer Engineering
86
In this thesis, we propose a new technology mapping algorithm, called ArchMap, for LUT-based FPGAs with the hard-wired connection architecture in PLBs, aiming to minimize the delay in the mapped network. ArchMap is divided into two steps as follows: 1. Mapping the initial circuit to a LUT network: instead of mapping the initial circuit to a K-LUT network for a fixed K, we try to map the initial circuit to a LUT network more suitable to the desired architecture using the multiple K-feasible cut technique. 2. Mapping the LUT network to a PLB network: an architecture-specific labeling procedure is designed to map the LUT network to a PLB network. In our experiments, we use MCNC benchmark circuits as test circuits and choose Xilinx XC4000 series FPGAs as the target architecture. Experimental results show that ArchMap reduces the depth of the CLB network by 39.74% and the number of CLBs by 27.61% compared with the results obtained by MIS-pga-delay plus match_4k. On the other hand, ArchMap obtained a 7.69% improvement in the depth of the CLB network with only a 1.86% drawback in the number of CLBs compared with the FlowMap script of RASP.
APA, Harvard, Vancouver, ISO, and other styles
45

Ma, Hsin-kai, and 馬欣愷. "Multiple Images Fusing and Tone Mapping Algorithm." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/69041306406117555168.

Full text
Abstract:
Master's
Shih Hsin University
Graduate Institute of Information Management (including the in-service master's program)
99
A multi-image fusion model, similar to the method suggested by Kao et al. (2008), was first applied in this research to create high-dynamic-range (HDR) images. It was applied in the fusion of three differently exposed images captured by a typical digital still camera, and then in the creation of HDR images for further applications in cross-media color reproduction. The images created by the derived fusion method had a higher dynamic range than images obtained by the conventional imaging process. Moreover, they also preserve much more detail and information in both shadow and highlight areas, compared to images taken by the same digital cameras using the conventional exposure method (i.e., only one normal exposure). Additionally, since the dynamic range of most displays is limited, it would be impossible to obtain satisfactory representations of the original HDR scenes using such conventional soft-proofing displays. Therefore, a series of algorithms was optimally derived, integrated, and tested in this thesis. These algorithms included local white balancing to handle multi-illuminant scene conditions, tone mapping, gamut mapping, and the CIECAM02 color appearance model. Furthermore, a Gaussian pyramid method, based on a multi-scale model of adaptation and spatial vision, was also derived here. It was used to further enhance the detail-rendition performance of the combined HDR imaging mechanism derived in this research. Finally, the experimental results obtained from the imaging process of these integrated algorithms showed that the resulting cross-media images, when shown on general display devices having a lower dynamic range than the fused HDR images, can give pleasing and satisfactory color appearances.
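A minimal sketch of the per-pixel weighting idea behind multi-exposure fusion, assuming a simple "well-exposedness" weight centred on mid-grey; the real pipeline described above adds white balancing, tone and gamut mapping, CIECAM02, and a Gaussian pyramid.

```python
# Minimal exposure-fusion sketch: weight each bracketed shot per pixel by
# its closeness to mid-grey, normalise the weights, and blend.
import numpy as np

def fuse(exposures, sigma=0.2):
    stack = np.stack(exposures)                          # (n, H, W), in [0, 1]
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    return (weights * stack).sum(axis=0)

rng = np.random.default_rng(4)
scene = rng.uniform(0, 4, (32, 32))                      # toy HDR radiance
shots = [np.clip(scene * t, 0, 1) for t in (0.25, 1.0, 3.0)]  # bracketed shots
print(fuse(shots).shape)
```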
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Shin-Liang, and 陳世梁. "A Technology Mapping Algorithm for CPLD Architectures." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/38725139604636936234.

Full text
Abstract:
Master's
National Tsing Hua University
Department of Computer Science
89
In this thesis, we propose a technology mapping algorithm for CPLD architectures. Our algorithm proceeds in two phases: mapping for single-output PLAs and packing for multiple-output PLAs. In the mapping phase, based on the results in [4], we propose a Look-Up-Table (LUT) based mapping algorithm. We take advantage of existing LUT mapping algorithms for area and depth minimization. We also study, for a given (i, p, o)-PLA block structure, the problem of selecting the values of the input and product-term constraints for single-output PLA mapping. Benchmark results show that our algorithm produces better results in terms of area and depth compared to those by TEMPLA.
APA, Harvard, Vancouver, ISO, and other styles
47

Deng, Ren-Fu, and 鄧人輔. "A Heuristic Memory Mapping Algorithm for Interface Synthesis." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/e2vqrj.

Full text
Abstract:
Master's
National Taipei University of Technology
Graduate Institute of Computer and Communication
94
In hardware-software codesign, variables and data must be communicated between the partitioned hardware and software parts. Interface synthesis methodology addresses this variable communication problem, and one solution for interface synthesis is the memory mapping method. In this thesis, we propose a heuristic memory mapping algorithm to solve the variable mapping issue, including the choice of appropriate memory ports and their number. Experimental results show that, under varying clock cycles and numbers of variables, our proposed algorithm can reduce hardware cost by 7.6%, the number of multi-port memory ports used by 9.2%, and the number of multi-port memory instances used by 8%. Under varying numbers of the most-accessed variables during a clock cycle, our proposed algorithm can reduce hardware cost by 7.8%, the number of multi-port memory ports used by 7.9%, and the number of multi-port memory instances used by 8.9%.
APA, Harvard, Vancouver, ISO, and other styles
48

李杰翰. "Images Dependent Gamut Mapping Algorithm by Linear Programming." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/66469927587971622325.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Huang, Hsin-Hsiung, and 黃信雄. "A Functional Decomposition Algorithm for Low Power Technology Mapping." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/36689470360199656087.

Full text
Abstract:
Master's
Chung Yuan Christian University
Department of Information and Computer Engineering
88
With the fast growth of portable electronic systems such as notebooks, PDAs, and communication devices, low power has become an attractive issue that plays an important role in the future of very large scale integrated circuits. Most existing technology mapping algorithms, such as HeuDecomp [45], consider only the structure of a given circuit and do not consider the functionality of the circuit. In contrast to the conventional approaches, we propose a new algorithm that considers both the structure and the functionality to further reduce power dissipation. Our approach consists of five steps: (1) decomposing the k-bounded circuit into a tree of 2-bounded nodes; (2) minimizing the number of inverters according to DeMorgan's theorem; (3) merging gates while considering both the structure and the functionality; (4) decomposing the k-bounded circuit into 2-bounded form by the HeuDecomp algorithm; and (5) maximizing the number of inverters according to DeMorgan's theorem. Our method provides circuits with up to 12.7% lower average power consumption than the HeuDecomp algorithm. No matter what the signal probabilities of the primary inputs are, our algorithm is stable rather than sensitive. We give some examples to demonstrate the superiority of our approach.
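Steps (2) and (5) rely on DeMorgan's theorem to push inverters through gates; a minimal sketch on a tiny expression tree follows, where the tuple representation is an illustrative stand-in for a real netlist.

```python
# Minimal DeMorgan sketch: push an inverter through a gate, turning
# NOT(AND(a, b)) into OR(NOT a, NOT b) on a tiny expression tree.
def push_inverters(node, inverted=False):
    """node is ('and'|'or', left, right) or a literal string."""
    if isinstance(node, str):
        return ('not', node) if inverted else node
    op, left, right = node
    if inverted:
        op = 'or' if op == 'and' else 'and'      # DeMorgan's theorem
    return (op, push_inverters(left, inverted), push_inverters(right, inverted))

expr = ('and', 'a', ('or', 'b', 'c'))
# NOT(a AND (b OR c))  ->  (NOT a) OR ((NOT b) AND (NOT c))
print(push_inverters(expr, inverted=True))
```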
APA, Harvard, Vancouver, ISO, and other styles
50

MA, YI-ZHENG, and 馬譯政. "Systolic array mapping of sequential algorithm for VLSI architecture." Thesis, 1986. http://ndltd.ncl.edu.tw/handle/89888402865063130518.

Full text
APA, Harvard, Vancouver, ISO, and other styles