Dissertations / Theses on the topic 'Mapping Algorithm'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Mapping Algorithm.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Morovič, Ján. "To develop a universal gamut mapping algorithm." Thesis, University of Derby, 1998. http://hdl.handle.net/10545/200029.
Pomerleau, François. "Registration algorithm optimized for simultaneous localization and mapping." Mémoire, Université de Sherbrooke, 2008. http://savoirs.usherbrooke.ca/handle/11143/1465.
Dunkelberg, Jr John S. "FEM Mesh Mapping to a SIMD Machine Using Genetic Algorithms." Digital WPI, 2001. https://digitalcommons.wpi.edu/etd-theses/1154.
Liu, Zhiyong Michael. "Mapping physical topology with logical topology using genetic algorithm." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ62245.pdf.
Curotto, Molina Franco Andreas. "Graphslam algorithm implementation for solving simultaneous localization and mapping." Tesis, Universidad de Chile, 2016. http://repositorio.uchile.cl/handle/2250/139093.
SLAM (Simultaneous Localization and Mapping) is the problem of estimating the position of a robot (or another agent) while simultaneously building a map of its environment. It is considered a key concept in mobile robotics and fundamental to achieving truly autonomous systems. Among the many solutions proposed for SLAM, graph-based methods have attracted great interest from researchers in recent years. These solutions offer several advantages, such as the ability to handle large amounts of data and to recover the robot's full trajectory rather than only its last position. A particular implementation of this approach is the GraphSLAM algorithm, first presented by Thrun and Montemerlo in 2006. In this thesis, the GraphSLAM algorithm is implemented to solve the SLAM problem in the two-dimensional case. The main objective is to provide a widely accepted SLAM solution for benchmarking new SLAM algorithms. The implementation uses the g2o framework as the nonlinear least-squares optimization tool. It can solve SLAM with both known and unknown data association: even when the robot does not know the origin of its measurements, it can associate them with the corresponding states through probabilistic estimation. The algorithm also uses a kernel-based method for robustness against outliers. To improve computation time, several strategies were designed to verify the associations and run the algorithm efficiently. The final implementation was tested on simulated and real data, with both known and unknown data association.
The algorithm succeeded in all tests, estimating the robot's trajectory and the map of the environment with small error. Its main advantages are its high accuracy and its high degree of configurability through parameter selection. Its main disadvantages are its computation time when the amount of data is large and its inability to eliminate false positives. Finally, modifications to increase convergence speed and to eliminate false positives are suggested as future work.
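The pose-graph least-squares step at the core of GraphSLAM can be illustrated in miniature. The sketch below is a toy one-dimensional linear pose graph solved with NumPy, not the thesis's 2-D g2o-based implementation: three poses, two odometry edges, one slightly inconsistent loop-closure edge, and a prior anchoring the first pose. The optimizer spreads the loop-closure error over the whole trajectory.

```python
import numpy as np

# Toy 1-D pose graph: each edge contributes a row x_j - x_i = measurement.
A = np.array([
    [-1.0, 1.0, 0.0],   # x1 - x0 = 1.0 (odometry)
    [0.0, -1.0, 1.0],   # x2 - x1 = 1.0 (odometry)
    [-1.0, 0.0, 1.0],   # x2 - x0 = 2.2 (loop closure, slightly inconsistent)
    [1.0, 0.0, 0.0],    # prior: x0 = 0 removes the gauge freedom
])
b = np.array([1.0, 1.0, 2.2, 0.0])

# Linear least squares; for nonlinear 2-D constraints, Gauss-Newton
# would iterate this linearized solve.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)
```

For real 2-D constraints the residuals are nonlinear in the pose angles, so frameworks such as g2o iterate this linearized solve inside Gauss-Newton or Levenberg-Marquardt.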
Wang, Qing. "Development, improvement and assessment of image classification and probability mapping algorithms." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1622.
Dash, Padmanava. "SeaWiFS Algorithm for Mapping Phycocyanin in Incipient Freshwater Cyanobacterial Blooms." Bowling Green State University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1120594611.
Jiang, Dayou. "An exploration of BMSF algorithm in genome-wide association mapping." Kansas State University, 2013. http://hdl.handle.net/2097/15505.
Department of Statistics
Haiyan Wang
Motivation: Genome-wide association studies (GWAS) provide an important avenue for investigating many common genetic variants in different individuals to see if any variant is associated with a trait. GWAS is a great tool for identifying genetic factors that influence health and disease. However, the high dimensionality of gene expression datasets makes GWAS challenging. Although many promising machine learning methods, such as the Support Vector Machine (SVM), have been investigated for GWAS, the question of how to improve the accuracy of the results has drawn the attention of many researchers. Many studies did not apply feature selection to select a parsimonious set of relevant genes, and those that performed gene selection often failed to consider possible interactions among genes. Here we modify the gene selection algorithm BMSF, originally developed by Zhang et al. (2012) for improving the accuracy of cancer classification with binary responses. A continuous-response version of the BMSF algorithm is provided in this report so that it can be applied to gene selection for continuous gene expression datasets. The algorithm dramatically reduces the number of gene markers under consideration, thus increasing the efficiency and accuracy of GWAS. Results: We applied the continuous-response version of BMSF to a wheat phenotype dataset to predict two quantitative traits from genotype marker data. This wheat dataset was previously studied in Long et al. (2009) for the same purpose, but using only direct application of SVM regression methods. By applying our gene selection method, we filtered out a large portion of less relevant genes and achieved a better prediction result on the test data by building an SVM regression model using only the selected genes on the training data. We also applied our algorithm to simulated datasets generated following the setting of an example in Fan et al. (2011).
The continuous-response version of BMSF showed good ability to identify active variables hidden among high-dimensional irrelevant variables. In comparison to the smoothing-based methods in Fan et al. (2011), our method has the advantage of no ambiguity due to different choices of the smoothing parameter.
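As a minimal sketch of the filtering idea (this is not the BMSF algorithm, which additionally accounts for interactions among genes), one can score each feature by its marginal correlation with the continuous trait and keep only a top-scoring subset before fitting a regression model. All data below are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)

# 60 samples, 200 features; only features 3 and 17 drive the trait.
n, p = 60, 200
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 3] - 1.5 * X[:, 17] + 0.1 * rng.standard_normal(n)

# Score features by absolute correlation with y, keep the 5 best.
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
selected = np.argsort(scores)[-5:]

# The truly active features (3 and 17) should survive the filter.
print(sorted(selected))
```

A regression model (e.g. SVM regression, as in the study) would then be trained on the selected columns only, which is what makes the downstream fit cheaper and often more accurate.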
Baichbal, Shashidhar. "MAPPING ALGORITHM FOR AUTONOMOUS NAVIGATION OF LAWN MOWER USING SICK LASER." Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1334587886.
Phinjaroenphan, Panu, and s2118294@student.rmit.edu.au. "An Efficient, Practical, Portable Mapping Technique on Computational Grids." RMIT University. Computer Science and Information Technology, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080516.145808.
Caudill, Thomas Robert. "Accuracy of the Total Ozone Mapping Spectrometer algorithm at polar latitudes." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186897.
Aguilar-Gonzalez, Abiel. "Monocular-SLAM dense mapping algorithm and hardware architecture for FPGA acceleration." Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC055.
Simultaneous Localization and Mapping (SLAM) is the problem of constructing a 3D map while simultaneously keeping track of an agent's location within the map. In recent years, work has focused on systems that use a single moving camera as the only sensing mechanism (monocular SLAM). This choice is motivated by the fact that inexpensive commercial cameras are now available, smaller and lighter than the sensors previously used, and they provide visual environmental information that can be exploited to create complex 3D maps while camera poses are simultaneously estimated. Unfortunately, previous monocular SLAM systems are based on optimization techniques that limit their performance in real-time embedded applications. To solve this problem, we propose a new monocular SLAM formulation based on the hypothesis that it is possible to reach high efficiency for embedded applications, increasing the density of the point cloud map (and therefore the 3D map density and the overall positioning and mapping), by reformulating the feature-tracking/feature-matching process to achieve high performance on embedded hardware architectures such as FPGA or CUDA. In order to increase the point cloud map density, we propose new feature-tracking/feature-matching and depth-from-motion algorithms that are extensions of the stereo matching problem. Then, two different hardware architectures (based on FPGA and CUDA, respectively), fully compliant with real-time embedded applications, are presented. Experimental results show that accurate camera pose estimations can be obtained. Compared to previous monocular systems, our method ranks 5th in the KITTI benchmark suite, with a higher processing speed (the fastest algorithm in the benchmark) and more than 10× the point cloud density of previous approaches.
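The reformulation of feature matching as an extension of stereo matching can be sketched with plain block matching: for a patch in one image, search the other image for the column offset that minimizes the sum of squared differences (SSD). The images and the disparity below are synthetic, and this is only an illustration of the matching principle, not the thesis's algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
left = rng.standard_normal((8, 32))          # synthetic "left" image
disparity = 4
right = np.roll(left, disparity, axis=1)     # synthetic shifted "right" view

# SSD between a left-image patch and every candidate column in the right image.
patch = left[2:6, 10:14]
ssd = [np.sum((right[2:6, c:c + 4] - patch) ** 2) for c in range(0, 25)]

best_c = int(np.argmin(ssd))                 # column where the patch reappears
print(best_c)
```

Because the right view is the left view shifted by 4 columns, the patch taken at column 10 is found again at column 14; real systems add sub-pixel refinement and robust cost functions on top of this core search.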
Gonzalez, Cadenillas Clayder Alejandro. "An improved feature extractor for the lidar odometry and mapping algorithm." Tesis, Universidad de Chile, 2019. http://repositorio.uchile.cl/handle/2250/171499.
Feature extraction is a critical task in feature-based Simultaneous Localization and Mapping (SLAM), which is one of the most important problems in the robotics community. An algorithm that solves SLAM using LiDAR-based features is the LiDAR Odometry and Mapping (LOAM) algorithm, currently considered the best SLAM algorithm according to the KITTI benchmark. LOAM solves the SLAM problem through a feature-matching approach; its feature extractor classifies the points of a point cloud as planar or sharp, based on an equation that defines a smoothness level for each point. However, this equation does not consider the sensor's range noise. Therefore, if the LiDAR's range noise is high, LOAM's feature extractor can confuse planar and sharp points, causing the feature-matching task to fail. This thesis proposes replacing the feature extraction algorithm of the original LOAM with the Curvature Scale Space (CSS) algorithm. This algorithm was chosen after studying several feature extractors in the literature. The CSS algorithm can potentially improve feature extraction in noisy environments thanks to its multiple levels of Gaussian smoothing. The replacement of LOAM's original feature extractor with the CSS algorithm was achieved by adapting CSS to the Velodyne VLP-16 3D LiDAR. The LOAM and CSS feature extractors were tested and compared on real and simulated data, including the KITTI dataset, using the Optimal Sub-Pattern Assignment (OSPA) and Absolute Trajectory Error (ATE) metrics.
For all these datasets, the feature extraction performance of CSS was better than that of the LOAM algorithm in terms of the OSPA and ATE metrics.
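The LOAM-style smoothness score that CSS replaces can be sketched as follows: for each point, sum the differences to its neighbours along the scan line and normalise by the neighbourhood size and range. The score is small on planar regions and large at corners. This is a toy 2-D illustration of the published c-metric on an invented scan, not the thesis code:

```python
import numpy as np

def smoothness(points, i, k=2):
    """LOAM-style c-value: small on planar regions, large at sharp corners."""
    nbrs = [j for j in range(i - k, i + k + 1) if j != i]
    diff = sum(points[i] - points[j] for j in nbrs)
    return np.linalg.norm(diff) / (2 * k * np.linalg.norm(points[i]))

# A straight wall with a single corner point sticking out at index 4.
scan = np.array([[x, 1.0] for x in np.linspace(-1, 1, 9)])
scan[4] = [0.0, 2.0]

scores = [smoothness(scan, i) for i in range(2, 7)]
print(np.argmax(scores) + 2)   # index of the sharpest point
```

A fixed threshold on this score is what range noise can defeat: noisy planar points can exceed the threshold and be labelled sharp, which is the failure mode the CSS replacement targets.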
Moss, Andrew. "Temporally adjusted complex ambiguity function mapping algorithm for geolocating radio frequency signals." Thesis, Monterey, California: Naval Postgraduate School, 2014. http://hdl.handle.net/10945/44625.
The Complex Ambiguity Function (CAF) allows simultaneous estimation of the Time Difference of Arrival (TDOA) and Frequency Difference of Arrival (FDOA) of two received signals. The Complex Ambiguity Function Geo-Mapping (CAFMAP) algorithm then directly maps the CAF to geographic coordinates to provide a direct estimate of the emitter's position. The CAFMAP can only use short-duration CAFs, however, because collector motion changes the system geometry over time. In an attempt to mitigate this shortfall and improve geolocation accuracy, the CAFMAP takes multiple CAF snapshots and sums their amplitudes. Unfortunately, this method does not provide the expected accuracy improvement, and a new method is sought. This thesis reformulates the equations used in computing the CAF to account for the collector's motion, and uses the results to derive a new CAFMAP algorithm. The new algorithm is implemented in MATLAB, and its results and characteristics are analyzed. The conclusions are as follows: the new algorithm functions as intended, removes the accuracy limitations of the original method, and merits further investigation. Immediate future work should focus on reducing its computation time and on modifying the algorithm to account for three-dimensional geometry, non-linear collector motion, and motion of the emitter.
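A toy version of the CAF surface itself (not the thesis's motion-compensated reformulation) can be computed by correlating one signal against delayed and frequency-shifted copies of the other and locating the peak, which jointly estimates the TDOA/FDOA pair. The signal, delay, and Doppler shift below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 1000.0, 1024
t = np.arange(n) / fs
s1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Second receiver sees a delayed, Doppler-shifted copy of the same emission.
true_delay, true_doppler = 5, 20.0       # samples, Hz
s2 = np.roll(s1, true_delay) * np.exp(2j * np.pi * true_doppler * t)

# Evaluate |CAF| on a small grid of candidate delays and Doppler shifts.
delays = range(0, 10)
dopplers = np.arange(0.0, 40.0, 5.0)
caf = np.array([[abs(np.vdot(np.roll(s1, d) * np.exp(2j * np.pi * f * t), s2))
                 for f in dopplers] for d in delays])

d_hat, f_hat = np.unravel_index(np.argmax(caf), caf.shape)
print(delays[d_hat], dopplers[f_hat])    # recovered (TDOA, FDOA)
```

Mapping each (TDOA, FDOA) cell of such a surface to geographic coordinates is the step the CAFMAP algorithm performs; the thesis's contribution is making the correlation itself valid over long integration times despite collector motion.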
Guo, Yunbo. "Multi-population genetic algorithm for the mapping of landscape of complex function /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?PHYS%202009%20GUO.
Full textKartal, Koc Elcin. "An Algorithm For The Forward Step Of Adaptive Regression Splines Via Mapping Approach." Phd thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12615012/index.pdf.
The adaptive regression splines procedure builds a model in two steps: in the first, basis functions are added through a forward selection procedure, and in the second, the basis functions contributing least to the overall fit are eliminated. In the conventional adaptive spline procedure, knots are selected from the set of distinct data points, which makes the forward selection procedure computationally expensive and leads to high local variance. To avoid these drawbacks, it is possible to select the knot points from a subset of the data points, which leads to data reduction. In this study, a new method (called S-FMARS) is proposed to select the knot points using a self-organizing-map-based approach that transforms the original data points to a lower-dimensional space. Thus, fewer knot points need to be evaluated for model building in the forward selection of the MARS algorithm. The results obtained from simulated datasets and from six real-world datasets show that the proposed method is time-efficient in model construction without degrading model accuracy or prediction performance. In this study, the proposed approach is applied to the MARS and CMARS methods as an alternative to their forward step, improving them by decreasing their computing time.
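The data-reduction idea can be sketched in a few lines: instead of treating every distinct data point as a candidate knot, the points are mapped onto a small set of prototypes, and forward selection searches only those. Here a uniform grid stands in for the self-organizing map, and the data are simulated:

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0.0, 10.0, 500))   # 500 distinct candidate knots

# Quantize onto 16 prototypes (a uniform grid standing in for the SOM).
prototypes = np.linspace(x.min(), x.max(), 16)
nearest = np.abs(x[:, None] - prototypes[None, :]).argmin(axis=1)

# Forward selection now evaluates 16 knot candidates instead of 500.
print(len(x), len(prototypes), len(np.unique(nearest)))
```

Because the forward step of MARS evaluates every candidate knot for every basis-function pair, shrinking the candidate set from the number of data points to the number of prototypes is what produces the reported speed-up.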
Gennari, Rosella. "Mapping Inferences: Constraint Propagation and Diamond Satisfaction." Diss., Universiteit van Amsterdam, 2002. http://hdl.handle.net/10919/71553.
Andreasson, Erik, and Amanda Axelsson. "Comparing technologies and algorithms behind mapping and routing APIs for Electric Vehicles." Thesis, Tekniska Högskolan, Jönköping University, JTH, Datateknik och informatik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-50019.
Cabrerizo, Mercedes. "A new algorithm for electroencephalogram functional brain mapping based on an auditory-comprehension process." FIU Digital Commons, 2003. http://digitalcommons.fiu.edu/etd/1959.
Full textZamir, Syed Waqas. "Perceptually-inspired gamut mapping for display and projection tecnologies." Doctoral thesis, Universitat Pompeu Fabra, 2017. http://hdl.handle.net/10803/404677.
The film and television industries are continually developing image characteristics that can provide a better visual experience for viewers; these attributes include spatial resolution, temporal resolution (frames per second), higher contrast and, recently, with emerging display technologies, a much wider color gamut. The gamut of a device is the set of colors that the device is able to reproduce. Gamut mapping algorithms (GMAs) transform the colors of the original content to the color palette of the display device with the goals of (a) reproducing the content accurately while preserving the artistic intent of the original content's creator and (b) using the full color gamut of the display device. There are two types of gamut mapping algorithms: gamut reduction (GR) and gamut extension (GE). GR transforms colors from a larger source gamut to a smaller destination gamut, whereas in GE colors are mapped from a small source gamut to a larger destination gamut. This thesis proposes three gamut reduction algorithms (GRAs) and four gamut extension algorithms (GEAs). These methods comply with basic global and local perceptual properties of human vision, producing state-of-the-art results that are natural and perceptually faithful to the original material. Furthermore, a psychophysical evaluation of the gamut extension problem specifically designed for cinema is presented, using a digital cinema projector under cinematic conditions (low ambient light); we believe this study is the first of its kind in the literature.
We also show that currently available image quality metrics produce results that do not correlate well with users' choices when applied to the gamut extension problem.
FIGUEIREDO, AURELIO MORAES. "MAPPING HORIZONS AND SEISMIC FAULTS FROM 3D SEISMIC DATA USING THE GROWING NEURAL GAS ALGORITHM." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=11341@1.
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
GRUPO DE TECNOLOGIA DE COMPUTAÇÃO GRÁFICA - PUC-RIO
In this work we present a clustering-based algorithm for the automatic mapping of seismic horizons and faults from 3D seismic data. We describe a technique to quantize the input seismic volume using the neurons of the graph that results from training an instance of the Growing Neural Gas (GNG) algorithm. In the set of input samples used by GNG, each sample represents a voxel of the input volume and retains information about that voxel's vertical neighborhood. After the training stage, a new quantized volume is generated from the graph produced by GNG; in this volume, possible ambiguities and imperfections present in the input data tend to be minimized, and both horizons and faults become more evident. Using the quantized volume, we describe a new horizon extraction technique, developed so that horizons can be mapped in the presence of complex geological structures, for example horizons whose portions are completely disconnected by one or even several seismic faults. We also begin the development of an approach for mapping seismic faults using the information present in the quantized volume. The horizon mapping procedure, tested on different volumes, yielded very promising results. The preliminary results of the fault extraction procedure also suggest that the technique can become a good alternative for the task, but it needs further testing.
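The quantization step can be sketched in isolation as nearest-node assignment: once the GNG graph is trained, each input sample is replaced by the index of its closest node. The toy 2-D codebook below stands in for a trained graph; in the thesis, each sample is a voxel's vertical neighborhood vector:

```python
import numpy as np

# Three "GNG nodes" acting as a codebook (invented coordinates).
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])

def quantize(samples, nodes):
    # Index of the closest node for every sample (Euclidean distance).
    d = np.linalg.norm(samples[:, None, :] - nodes[None, :, :], axis=2)
    return d.argmin(axis=1)

samples = np.array([[0.1, -0.1], [0.9, 1.2], [1.9, 0.1], [1.1, 0.8]])
labels = quantize(samples, codebook)
print(labels)
```

Replacing every voxel by its node label is what smooths out ambiguities in the raw volume: voxels with similar vertical signatures collapse onto the same node, making horizons easier to follow across faults.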
Ayar, Yusuf Yavuz. "Design And Simulation Of A Flash Translation Layer Algorithm." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611995/index.pdf.
Almqvist, Saga, and Lana Nore. "Where to Stack the Chocolate? : Mapping and Optimisation of the Storage Locations with Associated Transportation Cost at Marabou." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-135912.
Today, warehouse management at the Marabou factory is arranged so that articles are stored according to the production line they belong to, in a storage area close to that line. However, the storage locations are not optimized: articles are stored out of habit and according to what is considered easiest, so the locations do not follow any standard. In this thesis we therefore propose the most suitable storage locations with respect to total transportation cost. The problem can be modeled as a matching problem solvable by the Hungarian algorithm, which yields the optimal matching between the production lines' needs and the storage locations in the factory, with the associated cost. To apply the Hungarian algorithm, we collected data on the total quantity of articles in the factory for 2016, retrieved from the SAP system used by Marabou. We then adjusted the data by dividing the articles into numbers of pallets and by production line. This information was complemented with empirical investigations through our own observations and qualitative interviews with factory employees. In the method we use three different implementations of the Hungarian algorithm. The results from the different approaches are presented together with several pallet optimization proposals. The report concludes with several improvement suggestions and ideas for further development.
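What the Hungarian algorithm computes can be shown by brute force on a tiny cost matrix (rows: production lines, columns: storage locations; the costs are invented for illustration, not Marabou's real data). The Hungarian algorithm finds the same minimum-cost assignment, but in polynomial time instead of enumerating all permutations:

```python
from itertools import permutations

# Transport cost of assigning production line i to storage location j.
cost = [
    [4, 2, 8],
    [4, 3, 7],
    [3, 1, 6],
]

# Brute-force the optimal one-to-one assignment over all permutations.
best = min(permutations(range(3)),
           key=lambda p: sum(cost[i][p[i]] for i in range(3)))
total = sum(cost[i][best[i]] for i in range(3))
print(best, total)
```

For realistic problem sizes one would use a polynomial-time implementation such as SciPy's `linear_sum_assignment` rather than enumerating the factorially many permutations.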
Kharbouch, Alaa Amin. "A bacterial algorithm for surface mapping using a Markov modulated Markov chain model of bacterial chemotaxis." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/36186.
Includes bibliographical references (p. 83-85).
Bacterial chemotaxis is the locomotory response of bacteria to chemical stimuli. E. coli movement can be described as a biased random walk, and its general biological or evolutionary function is to increase exposure to some substances and reduce exposure to others. In this thesis we introduce an algorithm for surface mapping which tracks the motion of a bacteria-like software agent (based on a low-level model of the biochemical network responsible for chemotaxis) on a surface or objective function. Towards that end, a discrete Markov-modulated Markov chain model of the chemotaxis pathway is described and used. Results from simulations using one- and two-dimensional test surfaces show that the software agents, referred to as bacterial agents, and the surface mapping algorithm can produce an estimate that shares some broad characteristics with the surface and uncovers some of its features. We also demonstrate that a bacterial agent given the ability to reduce the value of the surface at locations it visits (analogous to consuming a substance on a concentration surface) is more effective at reducing the surface integral within a given period of time than a bacterial agent lacking the ability to sense surface information or respond to it.
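The biased random walk underlying chemotaxis can be sketched with a far simpler agent than the thesis's Markov-modulated Markov chain model: the cell tumbles rarely when the attractant concentration is rising along its heading and often when it is falling, which produces a net drift toward the source. The one-dimensional setup, source position, and tumble probabilities below are invented for illustration:

```python
import random

random.seed(42)

def run(steps=2000, source=50.0):
    """Biased random walk: tumble seldom when moving up-gradient."""
    x, direction = 0.0, 1
    for _ in range(steps):
        uphill = abs(x + direction - source) < abs(x - source)
        p_tumble = 0.1 if uphill else 0.6     # run longer when improving
        if random.random() < p_tumble:
            direction = random.choice([-1, 1])  # tumble: random new heading
        x += direction
    return x

final = run()
print(final)   # the agent ends up near the attractant source
```

The asymmetry between the two tumble probabilities is the entire mechanism: each individual step is random, yet the agent spends more time heading up-gradient, which is also why the trace of its positions carries information about the underlying surface.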
by Alaa Amin Kharbouch.
S.M.
Bhave, Sampada Vasant. "Novel dictionary learning algorithm for accelerating multi-dimensional MRI applications." Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/2182.
Kim, Kyung Cheol. "Calibration and validation of high frequency radar for ocean surface current mapping." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Jun%5FKim.pdf.
Casella, Stacey E. "Gamut extension algorithm development and evaluation for the mapping of standard image content to wide-gamut displays /." Online version of thesis, 2008. http://hdl.handle.net/1850/8416.
Li, Anthony. "Mapping the site of origin of ventricular arrhythmias : the development and testing of a novel pacemapping algorithm." Thesis, St George's, University of London, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.703122.
Bourisly, Ali Khaled. "Neuronal Correlates of Diacritics and an Optimization Algorithm for Brain Mapping and Detecting Brain Function by way of Functional Magnetic Resonance Imaging." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-dissertations/113.
Yenket, Renoo. "Understanding methods for internal and external preference mapping and clustering in sensory analysis." Diss., Kansas State University, 2011. http://hdl.handle.net/2097/8770.
Department of Human Nutrition
Edgar Chambers IV
Preference mapping is a method that gives product developers a whole picture of the products, liking, and relevant descriptors in a target market. Many statistical methods and commercial statistical software programs offering preference mapping analyses are available to researchers. Because of the numerous available options, there are two questions, addressed in this research, that most scientists must answer before choosing a method of analysis: 1) do the different methods provide the same interpretation, co-ordinate values and object orientation; and 2) which method and program should be used with the data provided? This research used data from paint, milk and fragrance studies, representing increasing complexity. The techniques used are principal component analysis, multidimensional preference mapping (MDPREF), modified preference mapping (PREFMAP), canonical variate analysis, generalized Procrustes analysis and partial least squares regression, using the statistical software programs SAS, Unscrambler, Senstools and XLSTAT. Moreover, the homogeneity of the consumer data was investigated through hierarchical cluster analysis (McQuitty's similarity analysis, median, single linkage, complete linkage, average linkage, and Ward's method), a partitional algorithm (the k-means method), and a nonparametric method, compared against four manual clustering groups (strict, strict-liking-only, loose, and loose-liking-only segments). The manual clusters were extracted according to the products most frequently rated highest for best liked and least liked on hedonic ratings. Furthermore, the impact of plotting preference maps for individual clusters was explored with and without the use of an overall mean liking vector. Results showed that the statistical software programs differed in orientation and co-ordinate values even when using the same preference mapping method, and that if the data were not highly homogeneous, interpretations could differ.
Most computer cluster analyses did not segment consumers according to their preferences and did not yield clusters as homogeneous as manual clustering. The interpretation of preference maps created from the most homogeneous clusters improved little when applied to complicated data. Researchers should look at key findings from univariate data in descriptive sensory studies to obtain accurate interpretations and suggestions from the maps, especially for external preference mapping. When researchers make recommendations for complicated data based on an external map alone, preference maps may be overused.
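The "manual" segmentation idea can be shown in miniature: consumers are grouped by the product to which they gave their highest hedonic rating. The ratings below are invented for illustration:

```python
import numpy as np

# Rows: consumers; columns: products A, B, C (9-point hedonic scores).
ratings = np.array([
    [9, 4, 5],
    [8, 6, 3],
    [2, 9, 7],
    [3, 8, 8],
    [4, 3, 9],
])

# Each consumer's segment is the index of their best-liked product.
segments = ratings.argmax(axis=1)
print(segments)
```

Unlike k-means or hierarchical clustering on the full rating vectors, this segmentation is defined directly by preference, which is why the study found it yielded more homogeneous, more interpretable clusters.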
Botterill, Tom. "Visual navigation for mobile robots using the Bag-of-Words algorithm." Thesis, University of Canterbury. Computer Science and Software Engineering, 2011. http://hdl.handle.net/10092/5511.
McCloy, K. R. "Development and evaluation of a remote sensing algorithm suitable for mapping environments containing significant spatial variability : with particular reference to pastures /." Title page and table of contents only, 1987. http://web4.library.adelaide.edu.au/theses/09PH/09phm127.pdf.
Full textTolman, Matthew A. "A Detailed Look at the Omega-k Algorithm for Processing Synthetic Aperture Radar Data." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2634.pdf.
Full textMakhubela, J. K. "Visual simultaneous localization and mapping in a noisy static environment." Thesis, Vaal University of Technology, 2019. http://hdl.handle.net/10352/462.
Simultaneous Localization and Mapping (SLAM) has seen tremendous interest in the research community in recent years due to its ability to make a robot truly independent in navigation. Visual Simultaneous Localization and Mapping (VSLAM) is when an autonomous mobile robot is equipped with a vision sensor, such as a monocular, stereo vision, omnidirectional or Red Green Blue Depth (RGBD) camera, to localize and map an unknown environment. The purpose of this research is to address the problem of environmental noise, such as light intensity, in a static environment, which has made VSLAM systems ineffective. In this study, we introduce a light filtering algorithm into the VSLAM method to reduce the amount of noise and improve the robustness of the system in a static environment, together with the Extended Kalman Filter (EKF) algorithm for localization and mapping and the A* algorithm for navigation. Simulation is used to carry out the experiments. Experimental results show detection of 60% of the landmarks or land features within a simulated environment and a root mean square error (RMSE) of 0.13 m, which is small compared with other SLAM systems in the literature. The inclusion of the light filtering algorithm has enabled the VSLAM system to navigate in an obscured environment.
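A trajectory RMSE such as the reported 0.13 m is the root mean square of the point-wise distances between estimated and ground-truth poses. The two short trajectories below are made up for illustration:

```python
import numpy as np

# Ground-truth and estimated 2-D positions (invented values).
truth = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
est   = np.array([[0.0, 0.1], [1.1, 0.0], [2.0, -0.1], [2.9, 0.0]])

# Point-wise Euclidean errors, then the root mean square.
errors = np.linalg.norm(est - truth, axis=1)
rmse = np.sqrt(np.mean(errors ** 2))
print(round(rmse, 3))   # each error is 0.1 m, so the RMSE is 0.1
```

Because the errors are squared before averaging, a few large localization failures dominate the metric, which is why RMSE is a common yardstick for comparing SLAM systems.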
Hall, Bryan, University of Western Sydney, Faculty of Science and Technology, and School of Science. "A review of the environmental resource mapping system and a proof that it is impossible to write a general algorithm for analysing interactions between organisms distributed at locations described by a locationally linked database and physical properties recorded within the database." THESIS_FST_SS_Hall_B.xml, 1994. http://handle.uws.edu.au:8081/1959.7/750.
Master of Applied Science (Environmental Science)
Peltonen, Joanna. "Development of effective algorithm for coupled thermal-hydraulics : neutron-kinetics analysis of reactivity transient." Licentiate thesis, Stockholm : Skolan för teknikvetenskap, Kungliga Tekniska högskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-11033.
Ozgur, Ayhan. "A Novel Mobile Robot Navigation Method Based On Combined Feature Based Scan Matching And Fastslam Algorithm." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612431/index.pdf.
Full textFrese, Udo. "An O(log n) algorithm for simultaneous localization and mapping of mobile robots in indoor environments Ein O(log n)-Algorithmus für gleichzeitige Lokalisierung und Kartierung mobiler Roboter in Innenräumen /." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=972029516.
Full textTran, Quoc Huy Martin, and Carl Ronström. "Mapping and Visualisation of the Patient Flow from the Emergency Department to the Gastroenterology Department at Södersjukhuset." Thesis, KTH, Medicinteknik och hälsosystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279605.
Full textAkutmottagningen på Södersjukhuset har i dagsläget väldigt långa väntetider. Detta beror till viss del utav problem inom visualiseringen och kartläggning av patient data och annan fundamental information för att hantera patienter på akutmottagningen. Detta ledde till att det finns ett behov att skapa förbättringsförslag på visualiseringen av patientflödet mellan akutmottagningen och gastroenterologiavdelningen på Södersjukhuset. Under projektets gång skapades ett simulerat användargränssnitt med syfte att efterlikna Södersjukhusets nuvarande patientflöde. Denna lösning visualiserar patientflödet mellan akutmottagningen och gastroenterologiavdelningen. Dessutom implementerades en enkel sorteringsalgoritm som kan bedöma sannolikheten om en patient skall bli inlagd på en avdelning. Resultatet visar att det finns flera möjliga förbättringar i Södersjukhusets nuvarande elektroniska journalsystemet, TakeCare, som skulle underlätta vårdkoordinatorernas arbete och därmed sänka väntetiderna på akutmottagningen.
Werner, Sebastian. "Variabilitätsmodellierung in Kartographierungs- und Lokalisierungsverfahren." Thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-172457.
Full textSOUZA, Viviane Lucy Santos de. "Uma metodologia para síntese de circuitos digitais em FPGAs baseada em otimização multiobjetivo." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/17339.
Full text
Nowadays, the evolution of FPGAs (Field Programmable Gate Arrays) allows them to be employed in applications ranging from rapid prototyping of digital circuits to coprocessors for high-performance computing. However, the efficient use of these architectures depends heavily, among other factors, on the synthesis tool employed. The challenge for synthesis tools is to convert the designer's logic into circuits that use the chip area effectively, do not degrade the operating frequency, and, above all, reduce power consumption. Accordingly, researchers and major FPGA manufacturers frequently develop new tools to achieve these goals, which are inherently conflicting. The synthesis flow for FPGA-based designs comprises the steps of logic optimization, mapping, packing, placement, and routing. These steps are interdependent, such that optimizations in the early stages bring positive results in later steps. As part of this doctoral work, we propose a methodology for optimizing the synthesis flow, specifically the mapping and packing steps. Classically, the mapping step is performed by heuristics that determine a solution to the problem but do not allow a search for optimal solutions, or that benefit one goal at the expense of others. Thus, we propose a multi-objective approach based on a genetic algorithm and a multi-objective approach based on an artificial bee colony which, combined with problem-specific heuristics, allow better-quality solutions to be obtained, yielding circuits with reduced area, operating-frequency gains, and lower dynamic power consumption. In addition, we propose a new multi-objective packing approach that differs from the state-of-the-art by using a prediction technique and by considering dynamic characteristics of the problem, producing more efficient circuits that ease the placement and routing steps.
The proposed methodology was integrated into the VTR (Verilog to Routing) academic flow, an open-source, collaborative project involving multiple research groups working on FPGA architecture development and new synthesis tools. Furthermore, we used as a benchmark a set of the 20 largest MCNC (Microelectronics Center of North Carolina) circuits, which are often used in research in this area. The integrated use of the tools produced by the proposed methodology reduces important post-routing metrics: compared to the state-of-the-art, we achieve, on average, up to a 19% reduction in circuit area, up to a 10% reduction in the critical path, and up to an 18% decrease in total estimated dynamic power. The experiments also reveal that the proposed mapping methods are computationally more expensive than state-of-the-art methods, up to 4.7x slower, whereas the packing methodology presented little or no overhead compared to the method in VTR. Despite the mapping overhead, the proposed methods, when integrated into the complete flow, can reduce the synthesis running time by approximately 40%, the result of producing simpler circuits that, consequently, ease the placement and routing steps.
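The multi-objective mappers described above rest on Pareto dominance over objectives such as area, delay, and power. A purely illustrative sketch (not the thesis's implementation) of extracting the non-dominated set under minimization:

```python
def dominates(q, p):
    """q dominates p if q is no worse in every objective and differs in at least one."""
    return all(qi <= pi for qi, pi in zip(q, p)) and q != p

def pareto_front(points):
    """Return the non-dominated subset of objective tuples (all objectives minimized)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Multi-objective genetic and bee-colony algorithms maintain and refine such a front instead of collapsing the objectives into a single weighted score.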
Lee, Po-Cheng, and 李柏成. "A Tone Mapping Algorithm with Detail Enhancement Based on Retinex Algorithm." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/70759937164346636036.
Full textNational Taiwan University
Graduate Institute of Computer Science and Information Engineering
100
Because of recent progress in digital camera technology, we can obtain HDRIs (High Dynamic Range Images) directly from a camera. Nevertheless, limited by the display, we must still map the HDRI to a display that can only show LDRIs (Low Dynamic Range Images). This technique is known as tone mapping. The goal of tone mapping is to compress the luminance dynamic range into a low dynamic range while reducing distortion and preserving detail. We first apply a logarithm to compress the high dynamic range based on the background luminance. Retinex local contrast enhancement is then performed to enhance the image in dark regions. Our method preserves most detail without contrast distortion, especially in dark areas.
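As a rough illustration of the two stages the abstract describes (the thesis's exact operators are not given here), global log compression followed by a single-scale retinex-style local step can be sketched as:

```python
import math

def log_compress(lum):
    """Global log compression: map an HDR luminance map (2-D list) into [0, 1]."""
    scale = math.log1p(max(max(row) for row in lum))
    return [[math.log1p(v) / scale for v in row] for row in lum]

def single_scale_retinex(lum, k=3):
    """Single-scale retinex: log(center) - log(k x k box-filtered surround).
    Edge pixels are handled by clamping the window to the image bounds."""
    rows, cols = len(lum), len(lum[0])
    out = []
    for r in range(rows):
        out_row = []
        for c in range(cols):
            win = [lum[i][j] + 1e-6
                   for i in range(max(0, r - k // 2), min(rows, r + k // 2 + 1))
                   for j in range(max(0, c - k // 2), min(cols, c + k // 2 + 1))]
            surround = sum(win) / len(win)  # local mean approximates background luminance
            out_row.append(math.log(lum[r][c] + 1e-6) - math.log(surround))
        out.append(out_row)
    return out
```

The retinex term is large where a pixel is brighter or darker than its surround, which is what lifts detail out of dark regions after compression.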
Hung, Pei-Hsiu, and 洪培修. "The Design of Virtual Network Mapping Algorithm." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/48033132484358320626.
Full textMing Chuan University
Master's Program, Department of Computer Science and Information Engineering
99
Cloud computing has been a very popular topic recently, and one of its key applications is virtualization. Network virtualization has emerged as a powerful way to allow multiple virtual networks, each customized to a particular application, to run on a common substrate network. Our research consists of two parts: first, a node mapping algorithm and a link mapping algorithm; second, a path migration algorithm. The first part focuses on using the proposed node mapping and link mapping algorithms to map virtual networks onto a substrate network. The second part focuses on using the proposed path migration algorithm to migrate virtual links to different substrate paths, which improves the substrate's ability to accept more virtual networks. The mapping problems in the first part have two aspects. The node mapping algorithm addresses how to map virtual nodes to substrate nodes; a greedy algorithm is proposed for this assignment. The link mapping algorithm addresses two cases. In the first, a virtual link is mapped to a single substrate path; the widest path algorithm and the cut-shortest path algorithm are proposed for this case. In the second, a virtual link is mapped to multiple substrate paths, enabling path diversity in the substrate network; the cut-shortest path algorithm is proposed for this case as well. The second part of the study focuses on path migration in the substrate network. When a new virtual network requests to be mapped onto the substrate network, it is possible that no resources in the substrate can meet its requirements. In this situation, path migration must be enabled to re-arrange all the virtual networks already mapped to the substrate. The proposed path migration algorithm consists of three steps. First, the virtual nodes in the new virtual network are mapped to substrate nodes using the node mapping algorithm. Second, the algorithm selects one existing virtual network and migrates it to other substrate links. Third, if the substrate resources can now meet the requirements of the new virtual network, the path migration algorithm stops; otherwise, it continues by migrating the next existing virtual network. The cut-shortest path algorithm is used for path migration.
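Of the link mapping strategies named above, the widest-path computation is the most self-contained. A sketch using a Dijkstra-like max-bottleneck search (the adjacency format and node names are our own illustration, not the thesis's code):

```python
import heapq

def widest_path(adj, src, dst):
    """Widest (maximum-bottleneck-bandwidth) path in an undirected graph.
    adj: {node: [(neighbor, bandwidth), ...]}. Returns the best bottleneck
    bandwidth from src to dst, or 0.0 if dst is unreachable."""
    best = {src: float("inf")}          # best bottleneck known per node
    heap = [(-float("inf"), src)]       # max-heap via negated bandwidths
    while heap:
        neg_bw, u = heapq.heappop(heap)
        bw = -neg_bw
        if u == dst:
            return bw
        if bw < best.get(u, 0.0):       # stale heap entry
            continue
        for v, cap in adj.get(u, []):
            cand = min(bw, cap)         # bottleneck through edge (u, v)
            if cand > best.get(v, 0.0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v))
    return 0.0
```

For virtual-link mapping, a substrate path returned this way maximizes the residual bandwidth available to the virtual link.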
Kidar, Lin, and 林奇達. "A New Architecture Specific Technology Mapping Algorithm." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/52142680070757393540.
Full textChung Yuan Christian University
Graduate Institute of Information Engineering
86
In this thesis, we propose a new technology mapping algorithm, called ArchMap, for LUT-based FPGAs with hard-wired connection architectures in PLBs, to minimize the delay in the mapped network. ArchMap is divided into two steps: 1. Mapping the initial circuit to a LUT network: instead of mapping the initial circuit to a K-LUT network for a fixed K, we try to map the initial circuit to a LUT network more suitable for the desired architecture using the multiple K-feasible cut technique. 2. Mapping the LUT network to a PLB network: an architecture-specific labeling procedure is designed to map the LUT network to a PLB network. In our experiments, we use the MCNC benchmark circuits as test circuits and choose Xilinx XC4000 series FPGAs as the target architecture. Experimental results show that ArchMap reduces the depth of the CLB network by 39.74% and the number of CLBs by 27.61% compared with the results obtained by MIS-pga-delay plus match_4k. On the other hand, ArchMap obtains a 7.69% improvement in the depth of the CLB network with only a 1.86% penalty in the number of CLBs compared with the FlowMap script of RASP.
Ma, Hsin-kai, and 馬欣愷. "Multiple Images Fusing and Tone Mapping Algorithm." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/69041306406117555168.
Full textShih Hsin University
Graduate Institute of Information Management (including in-service master's program)
99
A multi-image fusion model, similar to the method suggested by Kao et al. (2008), was first applied in this research to create high-dynamic-range (HDR) images. It was used to fuse three differently exposed images captured by a typical digital still camera, and then to create HDR images for further applications in cross-media color reproduction. The images created by the derived fusion method had a higher dynamic range than images obtained by a conventional imaging process, and they also preserved many more details in both shadow and highlight areas compared to images taken by the same cameras using the conventional exposure method (i.e., a single normal exposure). Additionally, since the dynamic range of most displays is limited, it is impossible to obtain satisfactory representations of the original HDR scenes using conventional soft-proofing displays. Therefore, a series of algorithms was optimally derived, integrated, and tested in this thesis, including local white balancing for multi-illuminant scene conditions, tone mapping, gamut mapping, and the CIECAM02 color appearance model. Furthermore, a Gaussian pyramid method, based on a multi-scale model of adaptation and spatial vision, was derived to further enhance the detail-rendition performance of the combined HDR imaging mechanism. Finally, experimental results obtained from the imaging process of these integrated algorithms showed that the resulting cross-media images, when shown on general display devices with a lower dynamic range than the fused HDR images, give pleasing and satisfactory color appearance.
Chen, Shin-Liang, and 陳世梁. "A Technology Mapping Algorithm for CPLD Architectures." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/38725139604636936234.
Full textNational Tsing Hua University
Department of Computer Science
89
In this thesis, we propose a technology mapping algorithm for CPLD architectures. Our algorithm proceeds in two phases: mapping for single-output PLAs and packing for multiple-output PLAs. In the mapping phase, based on the results in [4], we propose a Look-Up-Table (LUT) based mapping algorithm that takes advantage of existing LUT mapping algorithms for area and depth minimization. We also study, for a given (i, p, o)-PLA block structure, the problem of selecting the values of the input and product-term constraints for single-output PLA mapping. Benchmark results show that our algorithm produces better results in terms of area and depth compared to those produced by TEMPLA.
Deng, Ren-Fu, and 鄧人輔. "A Heuristic Memory Mapping Algorithm for Interface Synthesis." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/e2vqrj.
Full textNational Taipei University of Technology
Graduate Institute of Computer and Communication
94
In hardware-software codesign, variables and data must be communicated between the partitioned hardware and software parts. Interface synthesis methodology addresses this communication problem, and one solution for interface synthesis is the memory mapping method. In this thesis, we propose a heuristic memory mapping algorithm that solves the variable mapping problem, including the choice of appropriate memory ports and memory counts. Experimental results show that, under varying clock cycles and numbers of variables, our algorithm reduces hardware cost by 7.6%, the number of multi-port-memory ports used by 9.2%, and the number of multi-port-memory instances used by 8%. Under a varying number of most-accessed variables during a clock cycle, it reduces hardware cost by 7.8%, the number of multi-port-memory ports used by 7.9%, and the number of multi-port-memory instances used by 8.9%.
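The thesis's heuristic is not spelled out in the abstract; a hypothetical first-fit heuristic for the same problem shape (assigning variables to multi-port memories under a per-cycle port limit) could look like:

```python
def greedy_memory_map(access_schedule, variables, ports):
    """First-fit heuristic (illustrative only): place each variable into the
    first memory whose per-cycle port demand stays within `ports`; otherwise
    open a new memory. access_schedule: {cycle: set of variables accessed}."""
    memories = []  # each memory is the list of variables mapped to it
    for var in variables:
        for mem in memories:
            fits = all(
                sum(v in accessed for v in mem) + (var in accessed) <= ports
                for accessed in access_schedule.values()
            )
            if fits:
                mem.append(var)
                break
        else:
            memories.append([var])  # no existing memory fits; allocate a new one
    return memories
```

Variables that are accessed in the same cycle end up in different memories whenever they would exceed the port budget, which is the core constraint any memory mapping for interface synthesis must respect.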
李杰翰. "Images Dependent Gamut Mapping Algorithm by Linear Programming." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/66469927587971622325.
Full textHuang, Hsin-Hsiung, and 黃信雄. "A Functional Decomposition Algorithm for Low Power Technology Mapping." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/36689470360199656087.
Full text中原大學
資訊工程學系
88
With the fast growth of portable electronic systems such as notebooks, PDAs, and communication devices, low power has become an attractive issue that plays an important role in the future of very large scale integrated circuits. Most existing technology mapping algorithms, such as HeuDecomp [45], consider only the structure of a given circuit and not its functionality. In contrast to conventional approaches, we propose a new algorithm that considers both the structure and the functionality to further reduce power dissipation. Our approach consists of five steps: (1) decompose the k-bounded circuit into a 2-bounded tree; (2) minimize the number of inverters using De Morgan's theorem; (3) merge gates considering both structure and functionality; (4) decompose the k-bounded circuit into a 2-bounded one with the HeuDecomp algorithm; (5) maximize the number of inverters using De Morgan's theorem. Our method produces circuits with up to 12.7% lower average power consumption than the HeuDecomp algorithm. Regardless of the signal probabilities of the primary inputs, our algorithm is stable rather than sensitive. We give examples to demonstrate the superiority of our approach.
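Steps (2) and (5) above rely on pushing inverters through the network with De Morgan's laws. On a simple expression-tree representation (our own encoding, not the thesis's data structure) the transformation is:

```python
def push_inverters(node):
    """Push NOT gates toward the leaves via De Morgan's laws.
    A node is a literal (str), ('not', child), or ('and'|'or', left, right)."""
    if isinstance(node, str):
        return node
    if node[0] == "not":
        child = node[1]
        if isinstance(child, str):
            return ("not", child)          # inverter already at a leaf
        if child[0] == "not":
            return push_inverters(child[1])  # double negation cancels
        flipped = "or" if child[0] == "and" else "and"  # De Morgan: swap AND/OR
        return (flipped,
                push_inverters(("not", child[1])),
                push_inverters(("not", child[2])))
    return (node[0], push_inverters(node[1]), push_inverters(node[2]))
```

For example, NOT(a AND b) rewrites to (NOT a) OR (NOT b), moving inverters to the inputs where they can be merged or absorbed.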
MA, YI-ZHENG, and 馬譯政. "Systolic array mapping of sequential algorithm for VLSI architecture." Thesis, 1986. http://ndltd.ncl.edu.tw/handle/89888402865063130518.
Full text