Dissertations on the topic "Processing algorithm"

To view other types of publications on this topic, follow the link: Processing algorithm.

Format your citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the top 50 dissertations for your research on the topic "Processing algorithm".

Next to every entry in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, whenever these are available in the source metadata.

Browse dissertations across a wide range of disciplines and compile your bibliography correctly.

1

Berry, Thomas. "Algorithm engineering : string processing." Thesis, Liverpool John Moores University, 2002. http://researchonline.ljmu.ac.uk/4973/.

Abstract:
The string matching problem has attracted a lot of interest throughout the history of computer science, and is crucial to the computing industry. The theoretical community in computer science has developed a rich literature on the design and analysis of string matching algorithms. To date, most of this work has been based on the asymptotic analysis of the algorithms. This analysis rarely tells us how an algorithm will perform in practice, and considerable experimentation and fine-tuning is typically required to get the most out of a theoretical idea. In this thesis, promising string matching algorithms discovered by the theoretical community are implemented, tested and refined to the point where they can be usefully applied in practice. In the course of this work we have presented the following new algorithms, proved that their time complexity is linear in the average case, and compared them with existing algorithms by experimentation.
- We implemented the existing one-dimensional string matching algorithms for English texts. From the experimental results we identified the best two algorithms, combined them, and introduced a new algorithm.
- We developed a new two-dimensional string matching algorithm. This algorithm uses the structure of the pattern to reduce the number of comparisons required to search for the pattern.
- We described a method for efficiently storing text. Although this reduces the size of the storage space, it is not a compression method as in the literature; our aim is to improve both the space and the time taken by a string matching algorithm. Our new algorithm searches for patterns in the efficiently stored text without decompressing it.
- We illustrated that by pre-processing the text we can improve the speed of the string matching algorithm when searching for a large number of patterns in a given text.
- We proposed a hardware solution for searching in an efficiently stored DNA text.
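The thesis's own hybrid matcher is not reproduced in the abstract, but the flavour of the field it builds on, skip-table pattern search with linear average-case behaviour, can be sketched with the classical Boyer-Moore-Horspool algorithm (a textbook method, not the author's; a minimal illustration):

```python
def horspool_search(text: str, pattern: str) -> int:
    """Boyer-Moore-Horspool: return the index of the first match, or -1.

    The skip table lets the search jump past alignments that cannot
    match, which is where the good average-case behaviour comes from.
    """
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1
    # Shift distance keyed by the text character under the pattern's last cell.
    skip = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    pos = 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            return pos
        pos += skip.get(text[pos + m - 1], m)
    return -1

print(horspool_search("the quick brown fox", "brown"))  # 10
```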
2

D'Souza, Sammy Raymond. "Parallelizing a nondeterministic optimization algorithm." CSUSB ScholarWorks, 2007. https://scholarworks.lib.csusb.edu/etd-project/3084.

Abstract:
This research explores the idea that for certain optimization problems there is a way to parallelize the algorithm such that the parallel efficiency can exceed one hundred percent. Specifically, a parallel compiler, PC, is used to apply shortcutting techniques to a metaheuristic, Ant Colony Optimization (ACO), to solve the well-known Traveling Salesman Problem (TSP) on a cluster running the Message Passing Interface (MPI). The results of serial and parallel execution are compared using test datasets from TSPLIB.
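For context, "parallel efficiency exceeding one hundred percent" is simply superlinear speedup, and the metric itself is one line. A minimal sketch (the timing numbers are invented purely for illustration):

```python
def parallel_efficiency(serial_s: float, parallel_s: float, workers: int) -> float:
    """Efficiency = speedup / workers. Values above 1.0 (100%) indicate
    superlinear speedup, e.g. when shortcutting lets concurrent ACO
    searches prune each other's remaining work."""
    return (serial_s / parallel_s) / workers

print(parallel_efficiency(serial_s=120.0, parallel_s=10.0, workers=8))  # 1.5
```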
3

Samara, Rafat. "Top-k and Skyline Query Processing over Relational Database." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH. Forskningsmiljö Informationsteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-20108.

Abstract:
Top-k and skyline queries are a long-studied topic in the database and information retrieval communities, and they are two popular operations for preference retrieval. A top-k query returns a subset of the most relevant answers instead of all answers; efficient top-k processing retrieves the k objects that have the highest overall score. In this thesis, algorithms used for efficient top-k processing in different scenarios are presented, and a framework based on existing algorithms, with cost-based optimization, is proposed for these scenarios. This framework is used when the user can determine the ranking function, and a real-life scenario is applied to it step by step. A skyline query returns the set of points that are not dominated by other points in the given dataset, where a record x dominates another record y if x is as good as y in all attributes and strictly better in at least one attribute. Algorithms used for evaluating skyline queries are introduced, and one of the problems of the skyline query, the curse of dimensionality, is presented. A new strategy based on existing skyline algorithms, skyline frequency, and a binary-tree strategy, which gives a good solution to this problem, is presented; it is used when the user cannot determine the ranking function, and a real-life scenario applies it step by step. Finally, the advantages of the top-k query are applied to the skyline query in order to retrieve results quickly and efficiently.
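The dominance relation quoted in the abstract translates directly into code. A naive O(n^2) skyline sketch (illustrative only; the algorithms the thesis surveys replace the full pairwise scan with far more efficient evaluation strategies):

```python
from typing import Sequence

def dominates(x: Sequence[float], y: Sequence[float]) -> bool:
    """x dominates y: at least as good everywhere, strictly better once
    (here 'better' means smaller)."""
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

def skyline(points: list) -> list:
    """Keep exactly the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hotels as (price, distance_to_beach), both minimized:
print(skyline([(100, 4), (80, 5), (120, 1), (80, 3), (150, 2)]))
# -> [(120, 1), (80, 3)]
```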
4

Nosa, Ogbewi. "Signal processing and pattern recognition algorithm for monitoring Parkinson's disease." Thesis, Högskolan Dalarna, Datateknik, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:du-2376.

Abstract:
This master's thesis describes the development of signal processing and pattern recognition for monitoring Parkinson's disease. It involves the development of a signal processing algorithm whose output is passed to a pattern recognition algorithm. These algorithms are used to determine, predict and draw conclusions in the study of Parkinson's disease, and to better understand how the disease manifests in humans.
5

Brown, Derek William. "Systolic algorithm design for control and signal processing." Thesis, Queen's University Belfast, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.337644.

6

Harp, J. G. "Algorithms and architectures for image processing." Thesis, University of Surrey, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.379431.

7

Keränen, V. (Vesa). "Cryptographic algorithm benchmarking in mobile devices." Master's thesis, University of Oulu, 2014. http://urn.fi/URN:NBN:fi:oulu-201401141005.

Abstract:
The aim of this thesis was to determine the execution times of different cryptographic algorithms on one of the hardware platforms used in the Asha family, and to compare the execution times of HW-accelerated implementations, OpenSSL, and a company-proprietary cryptographic library. The motivation was to find out whether the HW-accelerated cryptographic functions should be used when available, judged by execution time, and whether a transition to OpenSSL is preferable, in terms of execution times, over the company-proprietary cryptographic library. In order to give a wider perspective, the thesis introduces a brief history of cryptography. The following cryptographic functions are discussed: hash functions, message authentication, and symmetric and asymmetric cryptographic algorithms. The cryptographic algorithms are also examined from the embedded-systems point of view, and the security of embedded systems is discussed briefly in order to introduce the context in which cryptographic functions are used. The other aim, and the author's personal motivation, was to learn and deepen knowledge of cryptographic algorithms, their usage, and the issues to take into consideration when working with embedded devices. The research methods were a literature review and empirical research. The results support the hypothesis that it is not always beneficial to use HW-accelerated algorithms. The results were partly surprising, because the expectation was that HW-accelerated execution would be much faster; further investigation is needed to find the root cause. Most of the algorithms were faster with OpenSSL than with the company-proprietary library and some of the HW-accelerated algorithms, and performance is better in most crypto operations with OpenSSL than with the company-proprietary crypto library. The thesis contributes the observation that no one should assume HW-accelerated crypto operations are always faster. In many cases the difference is not significant, but where performance is critical, execution times should be measured rather than assumed.
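As a flavour of the measurement methodology, here is a hedged timing-harness sketch in Python; the thesis benchmarked native libraries on Asha-class hardware, so hashlib serves only as a stand-in (in CPython it is itself backed by OpenSSL):

```python
import hashlib
import time

def bench(func, payload: bytes, repeats: int = 1000) -> float:
    """Mean execution time in microseconds over `repeats` runs."""
    start = time.perf_counter()
    for _ in range(repeats):
        func(payload)
    return (time.perf_counter() - start) / repeats * 1e6

payload = b"\x00" * 4096  # fixed-size message; vary to profile block sizes
for name in ("md5", "sha1", "sha256", "sha512"):
    algo = getattr(hashlib, name)
    print(f"{name:8s} {bench(lambda d, a=algo: a(d).digest(), payload):8.2f} us")
```

Measuring rather than assuming, as the thesis concludes, is the point: the ranking can change per platform and per message size.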
8

Manfredsson, Johan. "Evaluation Tool for a Road Surface Algorithm." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138936.

Abstract:
Modern cars are often equipped with sensors like radar, infrared cameras and stereo cameras that collect information about the surroundings. By using a stereo camera, it is possible to receive information about the distance to points in front of the car. This information can be used to estimate the height profile of the predicted path of the car. An application which does this is the stereo-based Road Surface Preview (RSP) algorithm. By using the output from the RSP algorithm it is possible to use active suspension control, which controls the vertical movement of the wheels relative to the chassis. This primarily makes the driving experience more comfortable, but also extends the durability of the vehicle. The idea behind this Master's thesis is to create an evaluation tool for the RSP algorithm which can be used on arbitrary roads. The thesis describes the proposed evaluation tool, where the focus has been on an accurate comparison of camera data received from the RSP algorithm with laser data used as ground truth. Since the tool is to be used at the company proposing this thesis, focus has also been on making the tool user-friendly. The report discusses the proposed methods, possible sources of error, and improvements. The evaluation tool shows good results for the available test data, which made it possible to include an investigation of a possible improvement of the RSP algorithm.
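The core comparison step, matching an RSP height profile against laser ground truth sampled at different positions, can be sketched as resampling both profiles onto a common distance grid and taking an RMSE. This is an assumed simplification for illustration, not the tool's actual implementation:

```python
import numpy as np

def profile_rmse(cam_x, cam_h, laser_x, laser_h, samples: int = 500) -> float:
    """RMSE between two road-height profiles given as (position, height)
    samples; both inputs must have increasing positions for np.interp."""
    lo = max(np.min(cam_x), np.min(laser_x))
    hi = min(np.max(cam_x), np.max(laser_x))
    grid = np.linspace(lo, hi, samples)      # overlapping section only
    cam = np.interp(grid, cam_x, cam_h)
    ref = np.interp(grid, laser_x, laser_h)
    return float(np.sqrt(np.mean((cam - ref) ** 2)))
```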
9

Hadimli, Kerem. "Processing Turkish Radiology Reports." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613280/index.pdf.

Abstract:
Radiology departments utilize various techniques for visualizing patients' bodies, and narrative free-text reports describing the findings in these visualizations are written by medical doctors. The information within these narrative reports needs to be extracted for medical information systems. Turkish is a highly agglutinative language, and this poses problems for information retrieval and extraction from Turkish free texts. In this thesis, one rule-based and one data-driven alternative method for information retrieval and structured information extraction from Turkish radiology reports are presented. Contrary to previous studies in medical NLP systems, neither of these methods utilizes any medical lexicon or ontology. Information extraction is performed at the level of extracting medically related phrases from the sentence. The aim is to measure the baseline performance the Turkish language can provide for medical information extraction and retrieval, in isolation from other factors.
10

Rullmann, Markus, Rainer Schaffer, Sebastian Siegel, and Renate Merker. "SYMPAD - A Class Library for Processing Parallel Algorithm Specifications." Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200700971.

Abstract:
In this paper we introduce a new class library for modelling transformations of parallel algorithms. SYMPAD serves as a basis for developing automated tools and methods to generate efficient implementations of such algorithms. The paper gives an overview of the general structure and the features of the library. We further describe the fundamental design process that is controlled by the methods we have developed.
11

Pawar, Madhurima. "Routing algorithm for distributed Web documents." [Gainesville, Fla.] : University of Florida, 2001. http://etd.fcla.edu/etd/uf/2001/anp1304/MASTER1.pdf.

Abstract:
Thesis (M.S.)--University of Florida, 2001.
Title from first page of PDF file. Document formatted into pages; contains x, 52 p.; also contains graphics. Vita. Includes bibliographical references (p. 49-51).
12

Abdoel-Gawad, Farag Saleh. "Efficient hardware implementation of the CORDIC algorithm." Thesis, Liverpool John Moores University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299066.

13

Corman, Etienne. "Functional representation of deformable surfaces for geometry processing." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX075/document.

Abstract:
Creating and understanding deformations of surfaces is a recurring theme in geometry processing. As smooth surfaces can be represented in many ways, from point clouds to triangle meshes, one of the challenges is being able to compare or deform discrete shapes consistently, independently of their representation. A possible answer is choosing a flexible representation of deformable surfaces that can easily be transported from one structure to another. Toward this goal, the functional map framework proposes to represent maps between surfaces and, to a further extent, deformations of surfaces as operators acting on functions. This approach has been recently introduced in geometry processing but has been extensively used in other fields such as differential geometry, operator theory and dynamical systems, to name just a few. The major advantage of such a point of view is to deflect challenging problems, such as shape matching and deformation transfer, toward functional analysis, whose discretization has been well studied in various cases. This thesis investigates further analysis and novel applications in this framework. Two aspects of the functional representation framework are discussed. First, given two surfaces, we analyze the underlying deformation. One way to do so is by finding correspondences that minimize the global distortion. To complete the analysis we identify the least and most reliable parts of the mapping by a learning procedure. Once spotted, the flaws in the map can be repaired in a smooth way using a consistent representation of tangent vector fields. The second development concerns the reverse problem: given a deformation represented as an operator, how to deform a surface accordingly? In a first approach, we analyse a coordinate-free encoding of the intrinsic and extrinsic structure of a surface as a functional operator. In this framework a deformed shape can be recovered up to rigid motion by solving a set of convex optimization problems. Second, we consider a linearized version of the previous method, enabling us to understand deformation fields as acting on the underlying metric. This allows challenging problems such as deformation transfer to be solved using simple linear systems of equations.
14

Skjei, Thomas. "Real-Time Fundamental Frequency Estimation Algorithm for Disconnected Speech." VCU Scholars Compass, 2011. http://scholarscompass.vcu.edu/etd/191.

Abstract:
A new algorithm is presented for real-time fundamental frequency estimation of speech signals. This method extends and alters the YIN algorithm, which uses the autocorrelation-based difference function, by adding features to reduce latency, correct predictable errors, and make it structurally appropriate for real-time processing scenarios. The algorithm is shown to reduce the error rate of its predecessor while demonstrating latencies sufficient for real-time processing. The results indicate that the algorithm can be realized as a real-time estimator of spoken pitch and pitch variation, which has applications including diagnosis and biofeedback-based therapy of many speech disorders.
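For readers unfamiliar with YIN, the difference function and its cumulative-mean normalization (the starting point this thesis extends) look roughly as below; the sketch uses a simple first-threshold-crossing rather than YIN's full local-minimum search and parabolic interpolation:

```python
import numpy as np

def yin_cmndf(frame: np.ndarray, tau_max: int) -> np.ndarray:
    """Cumulative-mean-normalized difference function d'(tau)."""
    d = np.zeros(tau_max)
    for tau in range(1, tau_max):
        diff = frame[:-tau] - frame[tau:]
        d[tau] = np.dot(diff, diff)                 # difference function d(tau)
    cmndf = np.ones(tau_max)                        # d'(0) := 1 by convention
    running = np.cumsum(d[1:])                      # sum of d(1..tau)
    cmndf[1:] = d[1:] * np.arange(1, tau_max) / np.where(running == 0, 1, running)
    return cmndf

def estimate_f0(frame: np.ndarray, fs: float, threshold: float = 0.1) -> float:
    """First lag whose normalized difference dips below the threshold."""
    cmndf = yin_cmndf(frame, tau_max=len(frame) // 2)
    below = np.where(cmndf[1:] < threshold)[0]
    return fs / (below[0] + 1) if below.size else 0.0

fs = 8000.0
t = np.arange(1024) / fs
print(estimate_f0(np.sin(2 * np.pi * 200 * t), fs))  # close to 200 Hz
```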
15

Danks, Jacob R. "Algorithm Optimizations in Genomic Analysis Using Entropic Dissection." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804921/.

Abstract:
In recent years, the collection of genomic data has skyrocketed, and databases of genomic data are growing at a faster rate than ever before. Although many computational methods have been developed to interpret these data, they tend to struggle to process the ever-increasing file sizes being produced and fail to take advantage of the advances in multi-core processors by using parallel processing. In some instances, loss of accuracy has been a necessary trade-off to allow faster computation of the data. This thesis discusses one such algorithm and how changes were made to allow larger input file sizes and reduce the time required to achieve a result without sacrificing accuracy. An information-entropy-based algorithm was used as a basis to demonstrate these techniques. The algorithm dissects the distinctive patterns underlying genomic data efficiently, requiring no a priori knowledge, and thus is applicable in a variety of biological research applications. This research describes how parallel processing and object-oriented programming techniques were used to process larger files in less time and achieve a more accurate result from the algorithm. Through object-oriented techniques, the maximum allowable input file size was significantly increased from 200 MB to 2000 MB. Using parallel processing techniques allowed the program to finish processing data in less than half the time of the sequential version. The accuracy of the algorithm was improved by reducing data loss throughout the algorithm. Finally, adding user-friendly options enabled the program to serve user requests more effectively and further customize the logic used within the algorithm.
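The entropic-dissection algorithm itself is not given in the abstract; the sketch below only illustrates the general pattern described, splitting genomic input into chunks and scoring them (here with plain Shannon entropy) in parallel worker processes:

```python
import math
from collections import Counter
from concurrent.futures import ProcessPoolExecutor

def shannon_entropy(seq: str) -> float:
    """Shannon entropy in bits per symbol of one chunk."""
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in Counter(seq).values())

def chunked_entropy(genome: str, chunk: int = 1_000_000) -> list:
    """Score fixed-size chunks in parallel worker processes."""
    pieces = [genome[i:i + chunk] for i in range(0, len(genome), chunk)]
    with ProcessPoolExecutor() as pool:
        return list(pool.map(shannon_entropy, pieces))

if __name__ == "__main__":  # required on platforms that spawn workers
    print(chunked_entropy("ACGT" * 1000, chunk=400))
```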
16

Chandra, Manik. "Analytical study of a control algorithm based on emotional processing." Thesis, Texas A&M University, 2005. http://hdl.handle.net/1969.1/4914.

Abstract:
This work presents a control algorithm developed from the mammalian emotional processing network. Emotions are processed by the limbic system in the mammalian brain. This system consists of several components that carry out different tasks. The system-level understanding of the limbic system has previously been captured in a discrete-event computational model. This computational model was modified suitably to be used as a feedback mechanism to regulate the output of a continuous-time first-order plant. An extension to a class of nonlinear plants is also discussed. The combined system of the modified model and the linear plant is represented as a set of bilinear differential equations valid in a half space of 3-dimensional real space. The bounding plane of this half space is the zero level of the square of the plant output. This system of equations possesses a continuous set of equilibrium points which lies on the bounding plane of the half space. The occurrence of a connected equilibrium set is uncommon in control engineering, and to prove stability in such cases one needs an extended Lyapunov-like theorem, namely LaSalle's Invariance Principle. In the process of using this principle, it is shown that the system of equations possesses a first integral as well. A first integral is identified using the compatibility method, and this first integral is utilized to prove asymptotic stability for a region of the connected equilibrium set.
17

Gorrell, Genevieve. "Generalized Hebbian Algorithm for Dimensionality Reduction in Natural Language Processing." Doctoral thesis, Linköping : Department of Computer and Information Science, Linköpings universitet, 2006. http://www.bibl.liu.se/liupubl/disp/disp2006/tek1045s.pdf.

18

He, Jia Ying. "Detecting earthquake survivors by digital signal processing and beat algorithm." Thesis, University of Macau, 2015. http://umaclib3.umac.mo/record=b3335706.

19

Mahmoudi, Ramzi. "Real time image processing : algorithm parallelization on multicore multithread architecture." PhD thesis, Université Paris-Est, 2011. http://pastel.archives-ouvertes.fr/pastel-00680735.

Abstract:
Topological features of an object are fundamental in image processing. In many applications, including medical imaging, it is important to maintain or control the topology of the image. However, the design of transformations that preserve both the topology and the geometric characteristics of the input image is a complex task, especially in the case of parallel processing. Parallel processing is applied to accelerate computation by sharing the workload among multiple processors. In terms of algorithm design, parallel computing strategies profit from the natural parallelism (also called the partial order of algorithms) present in the algorithm, which provides two main sources of parallelism: data parallelism and functional parallelism. Concerning architectural design, it is essential to link the spectacular evolution of parallel architectures to parallel processing: if parallelization strategies have become necessary, it is thanks to considerable improvements in multiprocessing systems and the rise of multi-core processors. All these reasons make multiprocessing very practical. In the case of SMP machines, immediate sharing of data provides more flexibility in designing such strategies and in exploiting data and functional parallelism, notably with the evolution of the interconnection systems between processors. In this perspective, we propose a new parallelization strategy, called the SD&M (Split, Distribute and Merge) strategy, that covers a large class of topological operators. SD&M has been developed in order to provide parallel processing for many topological transformations. Based on this strategy, we propose a series of parallel topological algorithms (new or adapted versions). Our main contributions are: (1) A new approach to compute the watershed transform based on the MSF transform, which is parallel, preserves the topology, does not need prior minima extraction, and is suited to SMP machines. The proposed algorithm makes use of Jean Cousty's streaming approach and does not require any sorting step or the use of any hierarchical queue. This contribution follows an intensive study of all existing watershed transforms in the discrete case. (2) A similar study on the thinning transform was conducted. It concerns sixteen parallel thinning algorithms that preserve topology. In addition to performance criteria, we introduce two qualitative criteria to compare and classify them, based on the relationship between the medial axis and the obtained homotopic skeleton. After this classification, we tried to obtain better results through a new adapted version of Couprie's filtered thinning algorithm, obtained by applying our strategy. (3) An enhanced computation method for topological smoothing that combines parallel computation of the Euclidean distance transform, using Meijster's algorithm, with parallel thinning-thickening processes using the adapted version of Couprie's algorithm already mentioned.
20

Wright, Stephen. "An algorithm to improve ATM cell processing in SDH multiplexers." Thesis, University of Ulster, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250777.

21

Greenaway, Richard Scott. "Image processing and data analysis algorithm for application in haemocytometry." Thesis, University of Hertfordshire, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263063.

22

Flores, Daniel. "Evaluation of an image processing algorithm for scene change detection." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2008. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

23

Mahmoudi, Ramzi. "Real time image processing : algorithm parallelization on multicore multithread architecture." Thesis, Paris Est, 2011. http://www.theses.fr/2011PEST1033/document.

Abstract:
Topological features of an object are fundamental in image processing. In many applications, including medical imaging, it is important to maintain or control the topology of the image. However, the design of transformations that preserve both the topology and the geometric characteristics of the input image is a complex task, especially in the case of parallel processing. Parallel processing is applied to accelerate computation by sharing the workload among multiple processors. In terms of algorithm design, parallel computing strategies profit from the natural parallelism (also called the partial order of algorithms) present in the algorithm, which provides two main sources of parallelism: data parallelism and functional parallelism. Concerning architectural design, it is essential to link the spectacular evolution of parallel architectures to parallel processing: if parallelization strategies have become necessary, it is thanks to considerable improvements in multiprocessing systems and the rise of multi-core processors. All these reasons make multiprocessing very practical. In the case of SMP machines, immediate sharing of data provides more flexibility in designing such strategies and in exploiting data and functional parallelism, notably with the evolution of the interconnection systems between processors. In this perspective, we propose a new parallelization strategy, called the SD&M (Split, Distribute and Merge) strategy, that covers a large class of topological operators. SD&M has been developed in order to provide parallel processing for many topological transformations. Based on this strategy, we propose a series of parallel topological algorithms (new or adapted versions). Our main contributions are: (1) A new approach to compute the watershed transform based on the MSF transform, which is parallel, preserves the topology, does not need prior minima extraction, and is suited to SMP machines. The proposed algorithm makes use of Jean Cousty's streaming approach and does not require any sorting step or the use of any hierarchical queue. This contribution follows an intensive study of all existing watershed transforms in the discrete case. (2) A similar study on the thinning transform was conducted. It concerns sixteen parallel thinning algorithms that preserve topology. In addition to performance criteria, we introduce two qualitative criteria to compare and classify them, based on the relationship between the medial axis and the obtained homotopic skeleton. After this classification, we tried to obtain better results through a new adapted version of Couprie's filtered thinning algorithm, obtained by applying our strategy. (3) An enhanced computation method for topological smoothing that combines parallel computation of the Euclidean distance transform, using Meijster's algorithm, with parallel thinning-thickening processes using the adapted version of Couprie's algorithm already mentioned.
24

Halverson, Ranette Hudson. "Efficient Linked List Ranking Algorithms and Parentheses Matching as a New Strategy for Parallel Algorithm Design." Thesis, University of North Texas, 1993. https://digital.library.unt.edu/ark:/67531/metadc278153/.

Abstract:
The goal of a parallel algorithm is to solve a single problem using multiple processors working together and to do so in an efficient manner. In this regard, there is a need to categorize strategies in order to solve broad classes of problems with similar structures and requirements. In this dissertation, two parallel algorithm design strategies are considered: linked list ranking and parentheses matching.
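Linked list ranking is the classic vehicle for this kind of strategy. A sequential simulation of Wyllie's pointer jumping, where a PRAM would execute the inner loop in parallel so that O(log n) rounds suffice, looks like this:

```python
def list_rank(next_node: list) -> list:
    """Distance of every node to the tail of a linked list.
    next_node[i] is the successor index, or -1 at the tail."""
    n = len(next_node)
    rank = [0 if nxt == -1 else 1 for nxt in next_node]
    nxt = next_node[:]
    for _ in range(max(1, n.bit_length())):   # about log2(n) rounds
        new_rank, new_nxt = rank[:], nxt[:]
        for i in range(n):                    # one "parallel" round
            if nxt[i] != -1:
                new_rank[i] = rank[i] + rank[nxt[i]]
                new_nxt[i] = nxt[nxt[i]]
        rank, nxt = new_rank, new_nxt
    return rank

print(list_rank([1, 2, 3, -1]))  # 0 -> 1 -> 2 -> 3: [3, 2, 1, 0]
```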
25

Appelgren, Filip, and Måns Ekelund. "Performance Evaluation of a Signal Processing Algorithm with General-Purpose Computing on a Graphics Processing Unit." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-253816.

Abstract:
Graphics Processing Units (GPUs) are increasingly being used for general-purpose programming, instead of their traditional graphical tasks. This is because of their raw computational power, which in some cases gives them an advantage over the traditionally used Central Processing Unit (CPU). This thesis therefore sets out to identify the performance of a GPU in a correlation algorithm, and what parameters have the greatest effect on GPU performance. The method used for determining performance was quantitative, utilizing a clock library in C++ to measure the performance of the algorithm as the problem size increased. The initial problem size was set to 2^8 and increased exponentially to 2^21. The results show that smaller sample sizes perform better on the serial CPU implementation, but that the parallel GPU implementations start outperforming the CPU between problem sizes of 2^9 and 2^10. It became apparent that GPUs benefit from larger problem sizes, mainly because of the memory overhead costs involved with allocating and transferring data. Further, the algorithm under evaluation is not suited to a parallelized implementation due to a high amount of branching: logic can lead to warp divergence, which can drastically lower performance. Keeping logic to a minimum and minimizing the number of memory transfers are vital in order to reach high performance with a GPU.
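The measurement methodology, timing one fixed problem size repeatedly and then doubling it, is easy to reproduce. The thesis used a C++ clock library around CPU and GPU implementations; the sketch below only mirrors the shape of the harness on the CPU with NumPy:

```python
import time
import numpy as np

def time_correlation(n: int, repeats: int = 5) -> float:
    """Best-of-`repeats` wall time for a full cross-correlation of size n."""
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.correlate(x, y, mode="full")
        best = min(best, time.perf_counter() - t0)
    return best

for exp in range(8, 15):   # 2^8 .. 2^14 here; the thesis ran up to 2^21
    print(f"2^{exp:2d}: {time_correlation(2 ** exp):.6f} s")
```

On a GPU the same loop would additionally time (or deliberately exclude) allocation and host-device transfers, which is exactly the overhead the thesis identifies as dominant at small sizes.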
26

Hirotsu, Kenichi. "Neural network hardware with random weight change learning algorithm." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/15765.

27

Wall, Helene. "Context-Based Algorithm for Face Detection." Thesis, Linköping University, Department of Science and Technology, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4171.

Abstract:

Face detection has been a research area for more than ten years. It is a complex problem due to the high variability in faces and amongst faces; therefore it is not possible to extract a general pattern to be used for detection. This is what makes the face detection problem a challenge.

This thesis gives the reader a background to the face detection problem, where the two main approaches to the problem are described. A face detection algorithm is implemented using a context-based method in combination with an evolving neural network. The algorithm consists of two major steps: detect possible face areas, then detect faces within these areas. This method makes it possible to reduce the search space.

The performance of the algorithm is evaluated and analysed. There are several parameters that affect the performance; the feature extraction method, the classifier and the images used.

The analysis of the problems that occurred has provided a deeper understanding of the complexity of the face detection problem.

28

Yao, Yu. "Model-based Algorithm Development with Focus on Biosignal Processing." Wuppertal: Universitätsbibliothek Wuppertal, 2015. http://d-nb.info/1076916813/34.

29

Satirapod, Chalermchon. "Improving the GPS Data Processing Algorithm for Precise Static Relative Positioning." Awarded by: University of New South Wales, School of Surveying and Spatial Information Systems, 2002. http://handle.unsw.edu.au/1959.4/18244.

Abstract:
Since its introduction in the early 1980s, the Global Positioning System (GPS) has become an important tool for high-precision surveying and geodetic applications. Carrier phase measurements are the key to achieving high accuracy positioning results. This research addresses one of the most challenging aspects in the GPS data processing algorithm, especially for precise GPS static positioning, namely the definition of a realistic stochastic model. Major contributions of this research are: (a) A comparison of the two data quality indicators, which are widely used to assist in the definition of the stochastic model for GPS observations, has been carried out. Based on the results obtained from a series of tests, both the satellite elevation angle and the signal-to-noise ratio information do not always reflect the reality. (b) A simplified MINQUE procedure for the estimation of the variance-covariance components of GPS observations has been proposed. The proposed procedure has been shown to produce similar results to those from the standard MINQUE procedure. However, the computational load and time are significantly reduced, and in addition the effect of a changing number of satellites on the computations is effectively dealt with. (c) An iterative stochastic modelling procedure has been developed in which all error features in the GPS observations are taken into account. Experimental results show that by applying the proposed procedure, both the certainty and the accuracy of the positioning results are improved. In addition, the quality of ambiguity resolution can be more realistically evaluated. (d) A segmented stochastic modelling procedure has been developed to effectively deal with long observation period data sets, and to reduce the computational load. This procedure will also take into account the temporal correlations in the GPS measurements. Test results obtained from both simulated and real data sets indicate that the proposed procedure can improve the accuracy of the positioning results to the millimetre level. (e) A novel approach to GPS analysis based on a combination of the wavelet decomposition technique and the simplified MINQUE procedure has been proposed. With this new approach, the certainty of ambiguity resolution and the accuracy of the positioning results are improved.
30

Chen, Chen. "An evaluation of a 2-way semijoin distributed query processing algorithm." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ62199.pdf.

31

Bilhanan, Anuleka. "High level synthesis of an image processing algorithm for cancer detection." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000303.

32

Han, Seungju. "A family of minimum Renyi's error entropy algorithm for information processing." [Gainesville, Fla.] : University of Florida, 2007. http://purl.fcla.edu/fcla/etd/UFE0021428.

33

Silva, Jesús, Hugo Hernández Palma, William Niebles Núñez, David Ovallos-Gazabon, and Noel Varela. "Parallel Algorithm for Reduction of Data Processing Time in Big Data." Institute of Physics Publishing, 2020. http://hdl.handle.net/10757/652134.

Abstract:
Technological advances have allowed large volumes of data to be collected and stored over the years. It is also important that today's applications achieve high performance and can analyze these large datasets effectively. It remains a challenge for data mining to make its algorithms and applications equally efficient as data size and dimensionality increase [1]. To achieve this goal, many applications rely on parallelism, which reduces the cost associated with execution time by taking advantage of the characteristics of current computer architectures to run several processes concurrently [2]. This paper proposes a parallel version of the FuzzyPred algorithm based on the amount of data that can be processed within each processing thread, synchronously and independently.
34

Savov, Emil. "On the LMS algorithm for high-speed adaptive digital signal processing." Thesis, University of Ottawa (Canada), 1986. http://hdl.handle.net/10393/5217.

35

Mansouri, Abdol-Reza. "An algorithm for detecting line segments in digital pictures." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66185.

36

Tuvesson, Markus. "Implementation of the Weighted Filtered Backprojection Algorithm in the Dual-Energy Iterative Algorithm DIRA-3D." Thesis, Linköpings universitet, Institutionen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-173583.

Abstract:
DIRA-3D is an iterative model-based reconstruction method for dual-energy helical CT whose goal is to determine the material composition of the patient from accurate linear attenuation coefficients (LACs). Possible applications include aiding calculations of radiation transport and dose in brachytherapy with low-energy photons, and in proton therapy. There was a need to replace the current image reconstruction method, the PI-method, with weighted filtered backprojection (wFBP), since wFBP is used for image reconstruction in Siemens CT scanners. The new DIRA-3D algorithm used the program take for cone-beam projection generation and the FreeCT wFBP algorithm for image reconstruction. Experiments showed that the accuracy of the resulting LACs for the DIRA-3D algorithm using wFBP for image reconstruction was comparable to that using the PI-method. The relative LAC errors reached values below 0.2% after 10 iterations.
37

Meng, Bojun. "Efficient intra prediction algorithm in H.264 /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202003%20MENG.

Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 66-68). Also available in electronic version. Access restricted to campus users.
38

Mohammad, Maruf H. "Blind Acquisition of Short Burst with Per-Survivor Processing (PSP)." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/46193.

Abstract:
This thesis investigates the use of Maximum Likelihood Sequence Estimation (MLSE) in the presence of unknown channel parameters. MLSE is a fundamental problem that is closely related to many modern research areas like Space-Time Coding, Overloaded Array Processing and Multi-User Detection. Per-Survivor Processing (PSP) is a technique for approximating MLSE for unknown channels by embedding channel estimation into the structure of the Viterbi Algorithm (VA). In the case of successful acquisition, the convergence rate of PSP is comparable to that of the pilot-aided RLS algorithm. However, the performance of PSP degrades when certain sequences are transmitted. In this thesis, the blind acquisition characteristics of PSP are discussed. The problematic sequences for any joint ML data and channel estimator are discussed from an analytic perspective. Based on the theory of indistinguishable sequences, modifications to conventional PSP are suggested that improve its acquisition performance significantly. The effect of tree search and list-based algorithms on PSP is also discussed. Proposed improvement techniques are compared for different channels. For higher order channels, complexity issues dominate the choice of algorithms, so PSP with state reduction techniques is considered. Typical misacquisition conditions, transients, and initialization issues are reported.
Master of Science
39

Srinivasa, Manjunath Cheekur. "Nonlinear Structure Identification of Single Degree of Freedom System Using NARMAX Algorithm." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1504872945338834.

40

Harter, Nathan M. "Development of a Single-Channel Direction Finding Algorithm." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/31973.

Abstract:
A radio direction finding (DF) system uses a multiple-element antenna array coupled with one or more receivers to estimate the direction-of-arrival (DOA) of a targeted emitter using characteristics of the signal received at each of the antennas in the array. In general, DF systems can be classified both by the number of receivers employed and by which characteristics of the received signal are used to produce the DOA estimate, such as the signal's amplitude, phase, or time of arrival. This work centers on the development and implementation of a novel single-channel direction finding system based on the differential phase of the target signal received by a uniform circular antenna array with a commutative switch. The algorithm is called the PLL DF Method and differs from older single-channel DF techniques in that it is a digital algorithm intended for implementation on a software-defined radio (SDR) platform with a custom-designed antenna array and RF switching network. It uses a bank of parallel software PLLs to estimate the phase of the signal received at each element of the multi-antenna array. These estimated phase values are then fed to a specialized signal processing block that estimates the DOA of the received signal. This thesis presents the details of the initial version of the PLL algorithm, which was used to produce a proof-of-concept system with an eight-element circular array. It then discusses various technical challenges uncovered in the initial implementation and presents numerous enhancements to the algorithm to overcome them, such as a modification to the PLL model to offer increased estimator robustness in the presence of a frequency offset between the transmitter and receiver, revisions of the software implementation to reduce the algorithm's processing requirements, and the adaptation of the DF algorithm for use with a 16-element circular array. The performance of the algorithm with these modifications is simulated under various conditions to investigate their impact on the DOA estimation process, and the results of their implementation on an SDR are considered.
Master of Science
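The PLL-based phase estimator is specific to the thesis, but the downstream geometric step, turning per-element phases from a uniform circular array into a DOA estimate, can be sketched as a grid search against the plane-wave phase model. The array model and every parameter below are assumptions for illustration, not the thesis's signal processing block:

```python
import numpy as np

def doa_from_phases(phases: np.ndarray, radius_wl: float) -> float:
    """Azimuth estimate (radians) for a uniform circular array.
    phases[k] is the measured carrier phase at element k; radius_wl is
    the array radius in wavelengths. A plane wave from azimuth theta
    gives model phases 2*pi*radius_wl*cos(theta - 2*pi*k/N); comparing
    unit phasors sidesteps phase wrapping."""
    n = len(phases)
    elem = 2 * np.pi * np.arange(n) / n
    thetas = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
    model = np.exp(1j * 2 * np.pi * radius_wl *
                   np.cos(thetas[:, None] - elem[None, :]))
    score = np.abs((np.exp(1j * phases)[None, :] * model.conj()).sum(axis=1))
    return float(thetas[np.argmax(score)])

# Sanity check: synthesize an 8-element array hit from 70 degrees.
k = 2 * np.pi * np.arange(8) / 8
ph = 2 * np.pi * 0.4 * np.cos(np.deg2rad(70) - k)
print(np.rad2deg(doa_from_phases(ph, radius_wl=0.4)))  # ~70.0
```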
41

He, Zhijun. "System And Algorithm Design For Varve Image Analysis System." Diss., The University of Arizona, 2007. http://hdl.handle.net/10150/196015.

Abstract:
This dissertation describes the design and implementation of a computer-vision-based varve image analysis system. The primary issues covered are software engineering design, varve image calibration, varve image enhancement, varve Dynamic Spatial Warping (DSW) profile generation, varve core image registration, varve identification, boundary identification and varve thickness measurement. A varve DSW matching algorithm is described to generate DSW profiles and register two core images. Wavelet Multiple Resolution Analysis (MRA) is also used for core image registration. By allowing an analyst to concentrate on other research work while the VARVES software analyzes a sample, much of the tedious varve analysis work is reduced, potentially increasing productivity. Additionally, by using new computer vision techniques, the VARVES system is able to perform varve analyses that were impossible to handle manually.
42

Tsang, Kai Fung. "Motion estimation and compensation using fast mesh based algorithm /." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202004%20TSANG.

Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 58-61). Also available in electronic version. Access restricted to campus users.
43

Groder, Seth. "Modeling and synthesis of the HD photo compression algorithm /." Online version of thesis, 2008. http://hdl.handle.net/1850/7118.

44

Ordóñez, Carlos. "Mining complex databases using the EM algorithm." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/8232.

45

Wang, He. "Advanced Electromyogram Signal Processing with an Emphasis on Simplified, Near-Optimal Whitening." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-theses/1338.

Abstract:
Estimates of the time-varying standard deviation of the surface EMG signal (EMGσ) are extensively used in the field of EMG-torque estimation. The use of a whitening filter can substantially improve the accuracy of EMGσ estimation by removing the signal correlation and increasing the statistical bandwidth. However, a subject-specific whitening filter, calibrated to each subject, is quite complex and inconvenient. To solve this problem, we first calibrated a 60th-order "universal" FIR whitening filter using the ensemble mean of the inverse of the square root of the power spectral density (PSD) of the noise-free EMG signal. Pre-existing data from elbow contractions of 64 subjects, providing 512 recording trials, were used. On an EMG-torque task, the universal FIR whitening filter had a mean error of 4.80% maximum voluntary contraction (MVC) with a standard deviation of 2.03% MVC, while the subject-specific whitening filter had a performance of 4.84 ± 1.98% MVC (both with a whitening band limit at 600 Hz). These two methods had no statistically significant difference. Furthermore, a 2nd-order IIR whitening filter was designed from the magnitude response of the universal FIR whitening filter via the differential evolution algorithm. The performance of this IIR whitening filter was very similar to that of the FIR filter, at 4.81 ± 2.12% MVC; a statistical test showed no significant difference between these two methods either. Additionally, a complete theory of EMG contraction modeling in additive measured noise is described. Results show that subtracting the variance of the whitened noise by computing the root difference of squares (RDS) is the correct way to remove noise from the EMG signal.
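A hedged sketch of the processing chain the abstract describes, FIR whitening, moving-window power, then RDS noise subtraction, might look as follows. Here whiten_b stands in for filter coefficients such as the 60th-order universal filter, which are not reproduced in the abstract:

```python
import numpy as np

def emg_sigma(emg: np.ndarray, whiten_b: np.ndarray, noise_std: float,
              win: int = 512) -> np.ndarray:
    """EMG standard-deviation (EMGsigma) track with noise correction:
    whiten with an FIR filter, smooth the power over a moving window,
    then remove the whitened noise floor via the root difference of
    squares, sqrt(max(s^2 - n^2, 0))."""
    w = np.convolve(emg, whiten_b)[:len(emg)]            # causal FIR whitening
    power = np.convolve(w ** 2, np.ones(win) / win, mode="same")
    return np.sqrt(np.maximum(power - noise_std ** 2, 0.0))
```

noise_std here would be the standard deviation of the whitened noise measured during rest, matching the abstract's prescription of subtracting the noise variance rather than the noise amplitude.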
46

Gum, Teren. "A simple linear algorithm for computing edge-to-edge visibility in a polygon /." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65508.

47

Amaro, Ricardo Jorge Pina. "AAL SAFE: signal processing algorithm." Master's thesis, 2009. http://hdl.handle.net/10316/11931.

Abstract:
Since the ageing of the population is one of the demographic problems currently raising the greatest concerns, a European programme called Ambient Assisted Living was created with the goal of developing solutions for a safer and more welcoming environment for elderly people, so that they can live more autonomously and without loss of privacy. The challenge of this project was to take advantage of a technology popular on the market, the Wii console remote, and make it useful within the Ambient Assisted Living concept. The system developed is called AAL Safe, and its functions are to detect falls, monitor activities of daily living, and estimate energy expenditure. The system consists of a Wii remote that communicates via Bluetooth with a computer, where the data are acquired and processed. This thesis provides a description of the processing modes and of the algorithms developed for fall detection, identification of activities of daily living, and calculation of energy expenditure, as well as how they were implemented.
48

"An algorithm design environment for signal processing." Research Laboratory of Electronics, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/4203.

Abstract:
Michele Mae Covell.
Also issued as Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1989.
Includes bibliographical references (p. 253-256).
Supported in part by the Defense Advanced Research Projects Agency and monitored by the Office of Naval Research. N00014-89-J-1489 Supported in part by the National Science Foundation. MIP 87-14969 Supported in part by Sanders Associates, Incorporated.
49

Huang, Yao-Tien (黃耀田). "Application of EM algorithm to image processing." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/80210703879458063194.

Abstract:
Master's thesis, National Sun Yat-sen University, Institute of Applied Mathematics, ROC academic year 84 (1995-96).
The EM (expectation-maximization) algorithm is a broadly applicable method for calculating maximum likelihood estimates given incomplete data. The EM algorithm has received considerable attention due to its computational feasibility in tomographic image reconstruction, symbol detection and parameter estimation. However, it is less recognized that the EM algorithm is equally applicable to image processing. No past technique surveyed can incorporate the potentially complex nature of various image formation processes into a probability density function as the EM procedure does. In this paper, we apply the EM algorithm to image resolution enhancement, that is: given the limited spatial resolution of sensor arrays, how can we best compensate for the physical configuration of the underlying imaging digitization system and render the desired resolution of image cells? The EM algorithm can incorporate the potentially complex nature of various image formation processes (e.g., the photon emission process and sensor sensitivity) into a probability density array; hence, we use the EM algorithm to enhance image resolution. The EM algorithm can be used not only in resolution enhancement but also in image scaling. In the past, methods used for changing the scale of an image were simple duplication or linear interpolation. The paper also compares and discusses simulation results for these three scaling methods.
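The E/M alternation at the heart of the method can be illustrated with the textbook one-dimensional Gaussian mixture (a generic example, not the imaging model of the thesis):

```python
import numpy as np

def em_gmm_1d(x: np.ndarray, k: int = 2, iters: int = 100, seed: int = 0):
    """Maximum likelihood fit of a k-component 1-D Gaussian mixture.
    E-step: posterior responsibilities; M-step: re-estimate weights,
    means and variances from the weighted data."""
    rng = np.random.default_rng(seed)
    mu, var, pi = rng.choice(x, k), np.full(k, x.var()), np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: r[i, j] = P(component j | x_i)
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
                  / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: maximize the expected complete-data log-likelihood
        nk = r.sum(axis=0)
        pi, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

x = np.concatenate([np.random.normal(0, 1, 500), np.random.normal(5, 1, 500)])
print(em_gmm_1d(x))  # weights ~[0.5, 0.5], means ~[0, 5]
```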
50

Wang, Hong-Ming (王泓銘). "Object Removal Algorithm/Hardware in Image Processing." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/07673837842783838507.

Abstract:
Master's thesis, National Cheng Kung University, Department of Electrical Engineering, ROC academic year 92 (2003-04).
We propose a novel algorithm for object removal in image processing. The proposed algorithm can filter out an unwanted object in a digital photograph and compensate for the removed region. To handle different background textures, we present an object removal algorithm that integrates weighted interpolation and sub-patch texture synthesis to fill the lost region. The sub-patch texture synthesis algorithm pastes one line at a time, and the weighted interpolation technique is combined with a block-smoothing method. Thanks to their regularity, the proposed algorithms are suitable for integration into a smart digital camera. A comparison with previous algorithms is also provided in this thesis. We present a first algorithm for constrained texture synthesis with a regular synthesizing order. The proposed algorithm achieves better quality, faster computation, and a regular architecture for VLSI design.