Dissertations / Theses on the topic 'Projection extraction'

Consult the top 21 dissertations / theses for your research on the topic 'Projection extraction.'

1

Martinez, Eduardo Rodriguez. "Evolutionary induction of projection maps for feature extraction." Thesis, University of Liverpool, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.569587.

Abstract:
This thesis proposes an evolutionary scheme for the automatic design of feature extraction methods, tailored to a given classification problem. The main advantage of the proposed scheme is its capacity to formulate new models when the existing ones do not fit the problem at hand. The learning phase is expressed as a model selection problem, where the best performing model is selected among the genetic pool, assessed by an estimation of out-of-sample generalization error. Each individual in the genetic pool represents a potential model encoded in a hybrid genotype, specifically designed to hold a tree structure and a scalar array representing the feature-extraction and classification stages respectively. The role of the inducer is to automatically design a mapping function to be used as the core of the feature-extraction stage, as well as to fine-tune the corresponding hyper-parameters for the feature-extraction/classification pair. Two paradigms are explored to express the feature-extraction stage, namely projection pursuit and spectral embedding methods. Both paradigms can express several feature extraction algorithms under a common template. In the case of projection pursuit, this template consists of the optimisation of a cost function, also known as a projection index, which can be specifically designed to highlight certain properties of the extracted features. For spectral embedding methods, a suitable set of similarity metrics is needed to construct a weight matrix, which encodes the links between any two samples on the vertices of a graph. The eigendecomposition of this weight matrix represents the solution to an optimisation problem looking for a low-dimensional space that retains the characteristics described by the original distance metric. The proposed inducer evolves an optimal projection index or a desired distance metric for the corresponding feature-extraction paradigm. Additionally, projection pursuit was extended to the nonlinear case by means of the kernel trick: the determination of a nonlinear residual subspace for sequential projection pursuit is reduced to the computation of an updated kernel matrix.
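
A minimal sketch of the projection-pursuit template mentioned above (optimise a projection index over candidate directions, then deflate the data and repeat). This is an illustration under assumptions, not the author's evolutionary inducer; the kurtosis index and all names are made up for the example:
```python
import numpy as np
from scipy.optimize import minimize

def kurtosis_index(z: np.ndarray) -> float:
    """Example projection index: absolute excess kurtosis of the projected,
    standardized data. High values flag interesting (non-Gaussian) directions."""
    z = (z - z.mean()) / (z.std() + 1e-12)
    return abs(np.mean(z**4) - 3.0)

def sequential_projection_pursuit(X: np.ndarray, n_components: int = 2):
    """Greedy projection pursuit: maximize the index over unit directions,
    deflate the data onto the orthogonal complement, and repeat."""
    X = X - X.mean(axis=0)
    rng = np.random.default_rng(0)
    directions = []
    for _ in range(n_components):
        w0 = rng.standard_normal(X.shape[1])
        res = minimize(
            lambda w: -kurtosis_index(X @ (w / (np.linalg.norm(w) + 1e-12))), w0)
        w = res.x / np.linalg.norm(res.x)
        directions.append(w)
        X = X - np.outer(X @ w, w)   # deflation: remove the found direction
    return np.array(directions)
```
In the thesis' scheme the index itself is evolved rather than fixed; the sketch only shows the inner optimise-and-deflate loop that any such index would plug into.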
2

Weingessel, Andreas, Martin Natter, and Kurt Hornik. "Using independent component analysis for feature extraction and multivariate data projection." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 1998. http://epub.wu.ac.at/1424/1/document.pdf.

Abstract:
Deriving low-dimensional perceptual spaces from data consisting of many variables is of crucial interest in strategic market planning. A frequently used method in this context is Principal Components Analysis, which finds uncorrelated directions in the data. This methodology, which supports the identification of competitive structures, can gainfully be utilized for product (re)positioning or optimal product (re)design. In our paper, we investigate the usefulness of a novel technique, Independent Component Analysis, to discover market structures. Independent Component Analysis is an extension of Principal Components Analysis in the sense that it looks for directions in the data that are not only uncorrelated but also independent. Comparing the two approaches on the basis of an empirical data set, we find that Independent Component Analysis leads to clearer and sharper structures than Principal Components Analysis. Furthermore, the results of Independent Component Analysis have a reasonable marketing interpretation.
Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
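
To make the PCA/ICA contrast in this abstract concrete, a minimal scikit-learn sketch (generic, not the paper's code; the data matrix here is a random stand-in for the survey data):
```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# Stand-in for a respondents-by-attributes survey matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))

pca_map = PCA(n_components=2).fit_transform(X)                      # uncorrelated axes
ica_map = FastICA(n_components=2, random_state=0).fit_transform(X)  # independent axes

# pca_map and ica_map are the two 2-D "perceptual maps" one would
# compare when looking for market structure.
```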
3

Herrmann, Carmen. "Projection techniques for complexity reduction and information extraction in correlated quantum systems." Zürich: ETH, 2006. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=16952.

4

Rutledge, Glen A. "Dictionary projection pursuit, a wavelet packet technique for acoustic spectral feature extraction." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0008/NQ52770.pdf.

5

Onak, Onder Nazim. "Comparison of OCR Algorithms Using Fourier and Wavelet Based Feature Extraction." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12612928/index.pdf.

Abstract:
A lot of research has been carried out in the field of optical character recognition. Selection of a feature extraction scheme is probably the most important factor in achieving high recognition performance. Fourier and wavelet transforms are among the popular feature extraction techniques allowing rotation invariant recognition. The performance of a particular feature extraction technique depends on the dataset used and on the classifier. Different feature types may need different types of classifiers. In this thesis, Fourier and wavelet based features are compared in terms of classification accuracy. The influence of noise with different intensities is also analyzed. The character recognition system is implemented in Matlab. An isolated gray scale character image is first transformed into a one-dimensional function; a set of features is then extracted and fed to a classifier. Two types of classifier were used, Nearest Neighbor and Linear Discriminant Function. The performance of each feature extraction and classification method was tested on various rotated and scaled character images.
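
A minimal sketch of the Fourier branch of such a pipeline, assuming the character has already been reduced to a 1-D profile (illustrative only; the thesis' exact transform and classifier settings are not reproduced, and all names are assumptions):
```python
import numpy as np

def fourier_features(profile: np.ndarray, n_coeffs: int = 16) -> np.ndarray:
    """Low-order Fourier magnitudes of a 1-D character profile. Dropping the
    phase makes the features invariant to the starting point of a closed
    contour, which is one route to rotation-tolerant recognition."""
    spectrum = np.fft.fft(profile - profile.mean())
    mags = np.abs(spectrum[1:n_coeffs + 1])
    return mags / (mags[0] + 1e-12)   # normalize scale by the first harmonic

def classify_1nn(query: np.ndarray, train_X: np.ndarray, train_y: np.ndarray):
    """Nearest-neighbour classification over feature vectors."""
    return train_y[np.argmin(np.linalg.norm(train_X - query, axis=1))]
```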
6

Nilsson, Jim, and Peter Valtersson. "Machine Vision Inspection of the Lapping Process in the Production of Mass Impregnated High Voltage Cables." Thesis, Blekinge Tekniska Högskola, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16707.

Abstract:
Background. Mass impregnated high voltage cables are used in, for example, submarine electric power transmission. One of the production steps of such cables is the lapping process, in which several hundred layers of special purpose paper are wrapped around the conductor of the cable. Correct application of the paper is important for the mechanical and electrical properties of the finished cable; however, there currently exists no reliable way of continuously ensuring that the paper is applied correctly. Objective. The objective of this thesis is to develop a prototype of a cost-effective machine vision system which monitors the lapping process and detects and records any errors that may occur during the process, with an accuracy of at least one tenth of a millimetre. Methods. The requirements of the system are specified and suitable hardware is identified. The errors are measured using a method in which the images are projected down to one axis, together with other signal processing methods. Experiments are performed in which the accuracy and performance of the system are tested in a controlled environment. Results. The results show that the system is able to detect and measure errors accurately down to one tenth of a millimetre while operating at a frame rate of 40 frames per second. The hardware cost of the system is less than €200. Conclusions. A cost-effective machine vision system capable of performing measurements accurate down to one tenth of a millimetre can be implemented using the inexpensive Raspberry Pi 3 and Raspberry Pi Camera Module V2.
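
A minimal sketch of the one-axis projection idea described under Methods (an illustration under assumptions, not the authors' implementation; frame and mm_per_pixel are hypothetical names):
```python
import numpy as np

def edge_position_mm(frame: np.ndarray, mm_per_pixel: float) -> float:
    """Project the image onto one axis and locate the strongest intensity
    step (e.g. a paper-layer edge) with sub-pixel precision."""
    profile = frame.mean(axis=0)        # collapse all rows onto the x-axis
    g = np.abs(np.gradient(profile))    # edges show up as gradient peaks
    i = int(np.argmax(g))
    offset = 0.0
    if 0 < i < len(g) - 1:              # parabolic peak interpolation
        denom = g[i - 1] - 2.0 * g[i] + g[i + 1]
        if denom != 0.0:
            offset = 0.5 * (g[i - 1] - g[i + 1]) / denom
    return (i + offset) * mm_per_pixel
```
Collapsing the frame to a 1-D profile before measuring is what makes sub-pixel accuracy at 40 fps plausible on hardware as modest as a Raspberry Pi.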
7

Alencar, Aretha Barbosa. "Visualização da evolução temporal de coleções de artigos científicos." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-11042013-155653/.

Abstract:
Scientific articles are the major mechanism used by researchers to report their scientific results, and a collection of articles in a research area can reveal a lot about its evolution over time, such as the emergence of new topics and changes in topic vocabulary. However, given a broad collection of articles it is usually very difficult to extract important information that can help readers to globally interpret, navigate and then eventually focus on subjects relevant to their task. Document maps based on content are visual representations created to convey the similarity between documents, and have proven to be useful in helping users conduct exploratory tasks in this scenario. Documents are represented by graphical markers projected onto a two-dimensional space so that documents similar in content remain close. Although these maps allow visual identification of groups of related documents and boundaries between these groups, they do not explicitly convey the temporal evolution of a collection. In this thesis, we propose and validate an interactive dynamic document map for collections of scientific articles capable of showing temporal behavior to support analysis tasks, while simultaneously preserving the local accuracy of the map and the user's global context. Changes in the similarity relationships, evidenced over time in this map, support the detection of the temporal evolution of topics. This evolution is characterized by transition events between groups, such as the emergence of new groups and topics at specific moments and the specialization of a group, as well as by detecting changes in the vocabulary of topics, using techniques that extract the most relevant terms (topics) in each group at different times.
8

Piffet, Loïc. "Décomposition d’image par modèles variationnels : débruitage et extraction de texture." Thesis, Orléans, 2010. http://www.theses.fr/2010ORLE2053/document.

Abstract:
This thesis is devoted, in a first part, to the development of a second order variational model for image denoising, using the space BV² of functions with bounded hessian. We take our cue from the well known Rudin, Osher and Fatemi (ROF) model, replacing the minimization of the total variation of the function with the minimization of its second order total variation, that is to say, the total variation of its partial derivatives. The goal is to obtain a model as effective as the ROF model that, in addition, avoids the staircasing effect the ROF model generates. The model we study seems efficient, but introduces a slight blur. In order to reduce this effect, we introduce a mixed model that yields solutions which are both not piecewise constant and free of blur on details. In a second part, we address the texture extraction problem. A model recognized as one of the most effective is the TV-L1 model, which simply replaces the L2 norm of the data fidelity term in the ROF model with the L1 norm. We propose an original method for solving this problem using augmented Lagrangian methods. For the same reasons as in the denoising case, we also introduce the TV²-L1 model, once again replacing the total variation by the second order total variation. A mixed texture extraction model is finally briefly introduced. The manuscript ends with a large chapter devoted to numerical tests.
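
For reference, the models named in this abstract are commonly written in the following standard forms (a notational aid based on the standard literature, not an excerpt from the thesis; f is the observed image and λ > 0 a fidelity weight):
```latex
% Standard formulations (f: observed image, \lambda > 0 a fidelity weight)
\min_{u}\ \mathrm{TV}(u)   + \tfrac{\lambda}{2}\,\|u-f\|_{L^2}^2   % ROF denoising
\min_{u}\ \mathrm{TV}(u)   + \lambda\,\|u-f\|_{L^1}                % TV-L1 texture extraction
\min_{u}\ \mathrm{TV}^2(u) + \tfrac{\lambda}{2}\,\|u-f\|_{L^2}^2   % second order (BV^2) denoising
% with \mathrm{TV}^2(u) the total variation of \nabla u, finite for u \in BV^2.
```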
9

Breutel, Stephan Werner. "Analysing the behaviour of neural networks." Queensland University of Technology, 2004. http://eprints.qut.edu.au/15943/.

Abstract:
A new method is developed to determine a set of informative and refined interface assertions satisfied by functions that are represented by feed-forward neural networks. Neural networks have often been criticized for their low degree of comprehensibility. It is difficult to have confidence in software components if they have no clear and valid interface description. Precise and understandable interface assertions for a neural network based software component are required for safety critical applications and for the integration into larger software systems. The interface assertions we are considering are of the form "if the input x of the neural network is in a region α of the input space then the output f(x) of the neural network will be in the region β of the output space", and vice versa. We are interested in computing refined interface assertions, which can be viewed as the computation of the strongest pre- and postconditions a feed-forward neural network fulfills. Unions of polyhedra (polyhedra are the generalization of convex polygons in higher dimensional spaces) are well suited for describing arbitrary regions of higher dimensional vector spaces. Additionally, polyhedra are closed under affine transformations. Given a feed-forward neural network, our method produces an annotated neural network, where each layer is annotated with a set of valid linear inequality predicates. The main challenges in the computation of these assertions are the solution of a non-linear optimization problem and the projection of a polyhedron onto a lower-dimensional subspace.
10

NEDELJKOVIC, SONJA R. "PARAMETER EXTRACTION AND DEVICE PHYSICS PROJECTIONS ON LATERAL LOW VOLTAGE POWER MOSFET CONFIGURATIONS." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1005163403.

11

Dobiáš, Roman. "Holografická injekce." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445541.

Abstract:
This thesis deals with the design and implementation of a tool that makes it possible to run conventional 3D OpenGL applications on so-called autostereoscopic displays, making full use of their depth capabilities with minimal intervention from the user. The tool is a conversion layer that transparently runs OpenGL applications while internally extending them to render from multiple viewpoints in a format suitable for a 3D display. The motivation for this master's thesis is the potential wider adoption of autostereoscopic displays, which currently depends on the price and on the availability of specialized applications for these displays. The text covers the design of such a layer in terms of the API calls that have to be rewritten correctly so that applications built against the individual versions of the OpenGL standard work properly, and it describes the problems that arise from the use of various rendering techniques, which motivate the more complex behaviour of the tool. The thesis concludes with examples of converted programs, the impact on performance, and an identification of the conversion layer's shortcomings together with proposed solutions for further development.
12

Jaf, Sardar. "The application of constraint rules to data-driven parsing." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/the-application-of-constraint-rules-to-datadriven-parsing(fe7b983d-e5ec-4e86-8f97-05066c1455b1).html.

Abstract:
The process of determining the structural relationships between words in both natural and machine languages is known as parsing. Parsers are used as core components in a number of Natural Language Processing (NLP) applications such as online tutoring applications, dialogue-based systems and textual entailment systems. They have been used widely in the development of machine languages. In order to understand the way parsers work, we investigate and describe a number of widely used parsing algorithms. These algorithms have been utilised in a range of different contexts such as dependency frameworks and phrase structure frameworks. We investigate and describe some of the fundamental aspects of each of these frameworks, which can function in various ways, including grammar-driven and data-driven approaches. Grammar-driven approaches use a set of grammatical rules for determining the syntactic structures of sentences during parsing. Data-driven approaches use a set of parsed data to generate a parse model which is used for guiding the parser during the processing of new sentences. A number of state-of-the-art parsers have been developed that use such frameworks and approaches; we briefly highlight some of these in this thesis. Three features are particularly important to integrate into the development of parsers: efficiency, accuracy, and robustness. Efficiency is concerned with using as little time and as few computing resources as possible when processing natural language text. Accuracy involves maximising the correctness of the analyses that a parser produces. Robustness is a measure of a parser's ability to cope with grammatically complex sentences and produce analyses of a large proportion of a set of sentences. In this thesis, we present a parser that can efficiently, accurately, and robustly parse a set of natural language sentences. Additionally, the implementation of the parser presented here allows for some trade-off between different aspects of parsing performance. For example, some NLP applications may emphasise efficiency/robustness over accuracy, while others may require a greater focus on accuracy. In dialogue-based systems, it may be preferable to produce a correct grammatical analysis of a question rather than to analyse its structure incorrectly or to quickly produce a grammatically incorrect answer. Alternatively, it may be desirable for document translation systems to translate a document quickly but less accurately, rather than slowly but highly accurately, because users may be able to correct grammatically incorrect sentences manually if necessary. The parser presented here is based on data-driven approaches, but we allow for the application of constraint rules to it in order to improve its performance.
13

Almehio, Yasser. "A Cumulative Framework for Image Registration using Level-line Primitives." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112155.

Abstract:
In this thesis, we propose a new image registration method that relies on level-line primitives. Level lines are robust to contrast changes, and the proposed primitives inherit this robustness. Moreover, their abundance in the image is well suited to a cumulative matching process based on a multi-stage primitive election procedure. We propose a simple and efficient recursive tracking algorithm that extracts level lines as straight sets called "segments". Segments are then grouped under proximity constraints to construct primitives (Z, Y and W shapes), which are classified into categories according to their reliability. Primitive shapes are defined according to the transformation model. The cumulative process is based on a preliminary step of preference-list construction inspired by the stable marriage matching algorithm. Primitives vote in a given voting stage according to their reliability; each stage provides a coarse estimate of the transformation that the next stage refines. This multi-stage process gradually eliminates the matching ambiguities caused by repetitive patterns in the images. An additional contribution is the validation of the method over geometric transformations of increasing complexity, following the path "similarity, affine, projective". We show in this thesis how the choice of level-line primitives in conjunction with a cumulative decision process yields a complete, generic and robust registration approach, providing different levels of accuracy and applicable in different contexts, tested and evaluated on several real image sequences involving different types of transformations.
14

Lin, Yi-An (林奕安). "Amplitude extraction method for 2D pattern projection profilometry." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/d4a739.

Abstract:
Master's thesis, National Kaohsiung University of Applied Sciences, Institute of Electro-Optical and Communication Engineering, academic year 106.
In this paper, an amplitude extraction method is proposed. Images are projected and captured along a common optical axis through a shallow depth-of-field lens, and the object is sampled at a series of separation distances. From the filtered background image and the fringe phase, a curve of fringe amplitude against the depth of field of the lens is obtained. A smoothing filter is then applied to the image to smooth this curve and obtain a noise-free amplitude; using the amplitudes of all sampled images of the object under test, the position of maximum amplitude is located by fitting a quadratic function, and this position is converted back to the original height to restore the original image position. This method avoids the problems of the fast-Fourier-transform approach, in which spectral leakage causes amplitude spreading once a band-pass filter with a narrow limited bandwidth is applied, and edges are misjudged as high-frequency fringes that then have to be restored. With the new approach, the amplitude of the image can be captured more quickly, and the surface topography of the object can be reconstructed from the full set of amplitudes.
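
The peak-location step described above, fitting a quadratic through each pixel's amplitude samples and taking the vertex as the best-focus height, might be sketched as follows (illustrative numpy code, not the thesis' implementation; array names and shapes are assumptions):
```python
import numpy as np

def best_focus_height(amplitudes: np.ndarray, heights: np.ndarray) -> np.ndarray:
    """For each pixel, fit a quadratic to amplitude-vs-height samples and
    return the vertex position -b/(2a), i.e. the sub-sample height at which
    the fringe amplitude peaks. amplitudes: (n, H, W); heights: (n,)."""
    n, h, w = amplitudes.shape
    flat = amplitudes.reshape(n, -1)             # one column per pixel
    a, b, _ = np.polyfit(heights, flat, deg=2)   # vectorized least squares
    z = -b / (2.0 * a + 1e-12)                   # parabola vertex per pixel
    return np.clip(z, heights.min(), heights.max()).reshape(h, w)
```
In practice one would also mask pixels whose fitted parabola opens upward (a >= 0), since the vertex is then a minimum rather than an amplitude peak.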
15

Huang, Sung-Jing (黃淞靖). "Character Extraction of Engineering Drawing Images Based on Projection Schemes." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/09194957944961138772.

Abstract:
Master's thesis, Da-Yeh University, Master's Program of the Department of Computer Science and Information Engineering, academic year 100.
Character extraction is common in everyday life: roadside billboards, signs and magazines are all printed with text, and text is usually mixed with pictures and graphic designs on these items. Identifying and extracting these objects is an important issue in image processing. This thesis presents a study of character extraction from engineering drawing images based on projection schemes. First, horizontal lines longer than a threshold are located by recording the sums of the horizontal projection pixel values, and a threshold is set to filter out unwanted horizontal lines. Second, vertical lines longer than a threshold are located by recording the sums of the vertical projection pixel values, and a threshold is set to filter out unwanted vertical lines. After filtering, the detected horizontal and vertical lines are combined with a logical OR operation; a median filter is then applied to remove residual noise, which completes the character extraction result. A logical XOR operation with the original image then completes the line extraction. This method can locate both the characters and the lines of engineering drawing images.
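
The projection scheme described in this abstract maps naturally onto array operations; a minimal sketch (assuming a pre-binarized drawing with ink = 1, and with the threshold fraction line_frac as a made-up parameter) could look like this:
```python
import numpy as np
from scipy.ndimage import median_filter

def separate_lines_and_text(binary: np.ndarray, line_frac: float = 0.5):
    """Split a binarized drawing (1 = ink) into a line mask and a text image
    using row/column projections. A row or column whose projection exceeds
    line_frac of the image width/height is treated as a ruling line."""
    h, w = binary.shape
    row_sum = binary.sum(axis=1)              # horizontal projection
    col_sum = binary.sum(axis=0)              # vertical projection

    h_mask = np.zeros_like(binary)
    h_mask[row_sum > line_frac * w, :] = 1    # long horizontal lines
    v_mask = np.zeros_like(binary)
    v_mask[:, col_sum > line_frac * h] = 1    # long vertical lines

    lines = (h_mask | v_mask) & binary        # OR-combine, keep only ink
    lines = median_filter(lines, size=3)      # suppress isolated noise
    text = binary ^ (lines & binary)          # XOR removes lines, keeps text
    return lines, text
```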
16

翁育達. "Off-line signature verification using stroke extraction and projection weighting." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/9x5edy.

17

"Using independent component analysis for feature extraction and multivariate data projection." SFB Adaptive Information Systems and Modelling in Economics and Management Science, 1998. http://epub.wu-wien.ac.at/dyn/dl/wp/epub-wu-01_1c7.

18

Rutledge, Glen Alfred. "Dictionary projection pursuit : a wavelet packet technique for acoustic spectral feature extraction." Thesis, 2000. https://dspace.library.uvic.ca//handle/1828/9104.

Abstract:
This thesis uses the powerful mathematics of wavelet packet signal processing to efficiently extract features from sampled acoustic spectra for the purpose of discriminating between different classes of sounds. An algorithm called dictionary projection pursuit (DPP) is developed, which is a fast approximate version of the projection pursuit (PP) algorithm [P.J. Huber, Projection Pursuit, Annals of Statistics, 13(2), 435–525, 1985]. When used with a wavelet packet or cosine packet dictionary, this algorithm is significantly faster than the PP algorithm with relatively little degradation in performance, provided that the multivariate vectors are samples of an underlying continuous waveform or image. The DPP algorithm is applied to the problem of approximating the Karhunen-Loève transform (KLT) in high dimensional spaces, and simulations are performed to compare this algorithm to Wickerhauser's approximate KLT algorithm [M.V. Wickerhauser, Adapted Wavelet Analysis from Theory to Software, A.K. Peters Ltd, 1994]. Both algorithms perform very well relative to the eigenanalysis form of the KLT algorithm at a small fraction of the computational cost. The DPP algorithm is then applied to the problem of finding discriminant features in acoustic spectra for sound recognition tasks; extensive simulations are performed to compare this algorithm to previously developed dictionary methods for discrimination such as Saito and Coifman's local discriminant bases [N. Saito and R. Coifman, Local Discriminant Bases and their Applications, Journal of Mathematical Imaging and Vision, 5(4), 337–358, 1995] and Buckheit and Donoho's discriminant pursuit [J. Buckheit and D. Donoho, Improved Linear Discrimination Using Time-Frequency Dictionaries, Proceedings of SPIE Wavelet Applications in Signal and Image Processing III, Vol. 2569, 540–551, July 1995]. It is found that each feature extraction algorithm performs well under different conditions, but the DPP algorithm is the most flexible and consistent performer.
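
The core trick of DPP, replacing the continuous optimization of projection pursuit by a search over a fixed dictionary of atoms, can be sketched in a few lines (an illustrative reading of the idea, not the thesis' algorithm; all names are assumptions):
```python
import numpy as np

def dictionary_projection_pursuit(X, atoms, index, k=3):
    """Score every dictionary atom by the projection index of X projected
    onto it, and keep the k best. Searching a fixed (e.g. wavelet packet)
    dictionary instead of optimizing over all directions is what makes the
    method fast. atoms: (n_atoms, n_features), rows assumed unit norm."""
    scores = np.array([index(X @ a) for a in atoms])
    best = np.argsort(scores)[::-1][:k]
    return atoms[best], scores[best]
```
Here index is any callable scoring a 1-D projection, for example a non-Gaussianity measure for exploratory analysis or a class-separation score for discrimination.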
19

Chen, Szu-Pei (陳詩沛). "Semantic Search on the World Wide Web: The Semantic Extraction, Reasoning, and Projection Framework." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/02032531032746187193.

Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Computer Science and Information Engineering, academic year 91.
The power of keyword-based search engines is limited by their approach, and many people have been working on developing better search methods for the Web. One major effort is the W3C Semantic Web. The Semantic Web proposes an architecture of modular layers in which meaningful data are encoded in RDF. Beyond the RDF layer there are further layers, such as the Ontology, Rules, Logic Framework, and Proof layers. In this paper we present another framework for semantic search on the web: the Semantic Extraction, Reasoning, and Projection Framework. Our framework tries to solve this problem by providing a simple architecture in which the only layer is logic. For this, we develop a new logic language, the Path Inference Language, to extract meaningful data from XML documents, and use logic reasoning procedures to perform search in the extracted data. This approach differs from the one the Semantic Web provides in several aspects, which we also discuss in this paper.
20

Yeh, Tien-Der (葉天德). "Extraction and Recognition of License Plate Characters Using Scale-Space Binarization and Accumulated Gradient Projection Methods." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/15909755037843994112.

Abstract:
Doctoral dissertation, National Chiao Tung University, Institute of Electrical and Control Engineering, academic year 99.
A system consisting of three methods for license plate character recognition is proposed in this dissertation. The first method, scale-space binarization, is suitable for extracting characters from gray-level images. It combines the robust Difference-of-Gaussian function with a dynamic thresholding technique to extract the license plate characters directly; optimization methods are also presented to reduce the computation time and speed up the extraction process. The second method, the voting boundary method, is suitable for correcting characters distorted by geometric deformation introduced during the capture process. It hypothesizes many straight-line candidates and detects, by voting, the one passing through the most edge pixels; the resulting boundary lines can be used to correct the deformation and thereby improve the recognition rate. The third method, the accumulated gradient projection method, recognizes isolated characters by accumulating the gradient projections of the characters and converting them into a feature vector for comparison. This feature vector, called the accumulated gradient projection vector, is shown in experiments to be robust to noise and illumination change.
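
One plausible reading of the accumulated gradient projection vector, sketched for illustration (this is not the dissertation's exact feature; the names and the resampling step are assumptions):
```python
import numpy as np

def accumulated_gradient_projection(glyph: np.ndarray, bins: int = 32) -> np.ndarray:
    """Accumulate gradient magnitude along each image axis, resample both
    profiles to a fixed length, concatenate, and normalize."""
    gy, gx = np.gradient(glyph.astype(float))
    mag = np.hypot(gx, gy)
    px, py = mag.sum(axis=0), mag.sum(axis=1)      # project onto x and y axes
    fx = np.interp(np.linspace(0, px.size - 1, bins), np.arange(px.size), px)
    fy = np.interp(np.linspace(0, py.size - 1, bins), np.arange(py.size), py)
    v = np.concatenate([fx, fy])
    return v / (np.linalg.norm(v) + 1e-12)         # tolerate illumination change
```
Working on gradients rather than raw intensities, and normalizing the final vector, is what gives such a feature its robustness to illumination changes.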
21

Hennig, Paul. "Adaptive Isogeometric Analysis of Phase-Field Models." 2020. https://tud.qucosa.de/id/qucosa%3A73811.

Abstract:
In this thesis, a robust, reliable and efficient isogeometric analysis framework is presented that allows for an adaptive spatial discretization of non-linear and time-dependent multi-field problems. In detail, Bézier extraction of truncated hierarchical B-splines is proposed, which allows for a strict element viewpoint and, in this way, for the application of standard finite element procedures. Furthermore, local mesh refinement and coarsening strategies are introduced to generate graded meshes that meet given minimum quality requirements. The different strategies are classified into two groups and compared in the adaptive isogeometric analysis of two- and three-dimensional, singular and non-singular problems of elasticity and the Poisson equation. Since a large class of boundary value problems is non-linear or time-dependent in nature and requires incremental solution schemes, projection and transfer operators are needed to transfer all state variables to the new locally refined or coarsened mesh. For field variables, two novel projection methods are proposed and compared to existing global and semi-local versions. For internal variables, two different transfer operators are discussed and compared in numerical examples. The developed analysis framework is then combined with the phase-field method. Numerous phase-field models are discussed, including the simulation of structural evolution processes, to verify the stability and efficiency of the whole adaptive framework and to compare the projection and transfer operators for the state variables. Furthermore, the phase-field method is used to develop a unified modelling approach for weak and strong discontinuities in solid mechanics as they arise in the numerical analysis of heterogeneous materials, due to rapidly changing mechanical properties at material interfaces or due to the propagation of cracks once a specific failure load is exceeded. To avoid time-consuming mesh generation, a diffuse representation of the material interface is proposed by introducing a static phase-field. The material in the resulting transition region is recomputed by a homogenization of the adjacent material parameters. Extending this approach with a phase-field model for crack propagation that also accounts for interface failure allows for the computation of brittle fracture in heterogeneous materials using non-conforming meshes.