Doctoral dissertations on the topic "Feature extraction"

Follow this link to see other types of publications on this topic: Feature extraction.

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles.


Consult the 50 best doctoral dissertations on the topic "Feature extraction".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically create the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication in ".pdf" format and read its abstract online, whenever such details are available in the metadata.

Browse doctoral dissertations from many different areas of study and create a correct bibliography.

1

Goodman, Steve. "Feature extraction and classification". Thesis, University of Sunderland, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301872.

Full text of the source
2

Liu, Raymond. "Feature extraction in classification". Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/23634.

Full text of the source
Abstract:
Feature extraction, or dimensionality reduction, is an essential part of many machine learning applications. The necessity for feature extraction stems from the curse of dimensionality and the high computational cost of manipulating high-dimensional data. In this thesis we focus on feature extraction for classification. There are several approaches, and we concentrate on two: the increasingly popular information-theoretic approach, and the classical distance-based, or variance-based, approach. Current algorithms for information-theoretic feature extraction are usually iterative. In contrast, PCA and LDA are popular examples of feature extraction techniques that can be solved by eigendecomposition and do not require an iterative procedure. We study the behaviour of an example of an iterative algorithm that maximises Kapur's quadratic mutual information by gradient ascent, and propose a new estimate of mutual information that can be maximised by closed-form eigendecomposition. This new technique is more computationally efficient than iterative algorithms, and its behaviour is more reliable and predictable than gradient ascent. Using a general framework of eigendecomposition-based feature extraction, we show a connection between information-theoretic and distance-based feature extraction. Using the distance-based approach, we study the effects of high input dimensionality and over-fitting on feature extraction, and propose a family of eigendecomposition-based algorithms that can solve this problem. We investigate the relationship between class discrimination and over-fitting, and show why the advantages of information-theoretic feature extraction become less relevant in high-dimensional spaces.
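For intuition, the closed-form eigendecomposition route this thesis favours can be illustrated with plain PCA, a minimal numpy sketch (PCA stands in here for the proposed mutual-information technique, whose details the abstract does not give):

```python
import numpy as np

def pca_features(X, k):
    """Project data onto the top-k eigenvectors of the sample covariance:
    feature extraction by a single eigendecomposition, no iterations."""
    Xc = X - X.mean(axis=0)                          # centre the data
    cov = (Xc.T @ Xc) / (len(Xc) - 1)                # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # top-k directions
    return Xc @ top                                  # (n_samples, k) features

X = np.random.randn(200, 50)   # placeholder data
Z = pca_features(X, 5)         # closed form: no gradient ascent needed
```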
3

Dhanjal, Charanpal. "Sparse Kernel feature extraction". Thesis, University of Southampton, 2008. https://eprints.soton.ac.uk/64875/.

Full text of the source
Abstract:
The presence of irrelevant features in training data is a significant obstacle for many machine learning tasks, since it can decrease accuracy, make it harder to understand the learned model, and increase computational and memory requirements. One approach to this problem is to extract appropriate features. General approaches such as Principal Components Analysis (PCA) are successful for a variety of applications; however, they can be improved upon by targeting feature extraction towards more specific problems. More recent work considers sparser formulations, which potentially have improved generalisation. However, sparsity is not always efficiently implemented and frequently requires complex optimisation routines, and one often does not have direct control over the sparsity of the solution. In this thesis, we address some of these problems, first by proposing a general framework for feature extraction which possesses a number of useful properties. The framework is based on Partial Least Squares (PLS), and one can choose a user-defined criterion to compute projection directions. It draws together a number of existing results and provides additional insights into several popular feature extraction methods. More specific feature extraction is considered for three objectives: matrix approximation, supervised feature extraction, and learning the semantics of two-view data. Computational and memory efficiency is prioritised, as well as directly controllable sparsity and simple implementations. For the matrix approximation case, an analysis of different orthogonalisation methods is presented in terms of the optimal choice of projection direction. The analysis results in a new derivation for Kernel Feature Analysis (KFA) and the formation of two novel matrix approximation methods based on PLS. In the supervised case, we apply the general feature extraction framework to derive two new methods based on maximising covariance and alignment, respectively. Finally, we outline a novel sparse variant of Kernel Canonical Correlation Analysis (KCCA) which approximates a cardinality-constrained optimisation. This method, as well as a variant which performs feature selection in one view, is applied to an enzyme function prediction case study.
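As a rough illustration of the PLS machinery underlying the framework (this is generic PLS1, not the thesis's sparse kernel variants), each projection direction maximises covariance with the target, and the data matrix is deflated between directions:

```python
import numpy as np

def pls_directions(X, y, k):
    """Extract k projection directions by maximising covariance with y,
    deflating X after each direction (NIPALS-style PLS1 sketch)."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W = []
    for _ in range(k):
        w = X.T @ y                                # maximises cov(Xw, y)
        w /= np.linalg.norm(w)
        t = X @ w                                  # scores along the direction
        X = X - np.outer(t, (X.T @ t) / (t @ t))   # deflate X
        W.append(w)
    return np.column_stack(W)                      # (n_dims, k) projections
```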
4

Drangel, Andreas. "Feature extraction from images with augmented feature inputs". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-219073.

Full text of the source
Abstract:
Machine learning models for visual recognition tasks such as image recognition have lately been a common research area. However, little research has addressed the case where multiple features are to be extracted from the same input. This thesis investigates whether and how knowledge about one feature influences the performance of a model classifying another feature, as well as how the similarity and generality of the feature data distributions influence model performance. Incorporating augmentation inputs, in the form of extra feature information, in image models was found to yield different results depending on feature data distribution similarity and level of generality. Care must be taken when augmenting with features so that the feature is neither completely redundant nor completely takes over the learning process. Selecting reasonable augmentation inputs may yield desired synergy effects which influence model performance for the better.
5

Kerr, Dermot. "Autonomous Scale Invariant Feature Extraction". Thesis, University of Ulster, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.502896.

Full text of the source
6

Alathari, Thamer. "Feature extraction in volumetric images". Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/379936/.

Full text of the source
Abstract:
The increased interest in volumetric images in recent years requires new feature extraction methods for 3D image interpretation. The aim of this study is to provide algorithms that aid the process of detecting and segmenting geometrical objects in volumetric images. Due to high computational expense, such methods have yet to be established in the volumetric space. Only a few studies have tackled this problem, using shape descriptors and key-points of a specific shape; those techniques can detect complex shapes rather than simple geometric shapes because of their well-defined key-points. Simplifying the data in the volumetric image using a surface detector and surface curvature estimation preserves the important information about the shapes while at the same time reducing the computational expense. Whilst the literature describes only the template of the three-dimensional Sobel operator and not its basis, we present an extended version of the Sobel operator which considers the gradients of all directions to extract an object's surface, with a clear basis that allows for the development of larger operators. Surface curvature descriptors are usually based on geometrical properties of a segmented object rather than on the change in image intensity. In this work, a new approach is described to estimate the surface curvature of objects using local changes of image intensity. The new methods have shown reliable results on both synthetic and real volumetric images. The curvature and edge data are then processed in two new evidence-gathering techniques to extract a geometrical shape's main axis or centre point. The accumulated data are taken directly from the voxels' geometrical locations rather than from the surface normals as proposed in the literature. The new approaches have been applied to detect a cylinder's axis and spherical shapes. A new 3D line detection based on origin shifting has also been introduced. Accumulating, at every voxel, the angles resulting from a coordinate transform from a Cartesian to a spherical system successfully indicates the existence of a 3D line in the volumetric image. A novel method based on an analogy to pressure is introduced to allow analysis/visualisation of objects as though they have been separated, when they were actually touching in the original volumetric images. The approach provides a new domain highlighting the connected areas between multiple touching objects. A mask is formed to detach the interconnected objects, and remarkable results are achieved. This is applied successfully to isolate coins within an image of a Roman hoard of coins, and other objects. The approach can fail to isolate objects when the space between them appears to be of similar density to the objects themselves. This motivated the development of an operator extended by high-pass filtering and morphological operations, which led to more accurate extraction of coins within the Roman hoard, and to the successful isolation of femurs in a database of scanned body images, enabling better isolation of hip components in replacement therapy.
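For intuition, a standard 3D Sobel gradient-magnitude surface detector can be sketched with scipy's separable filters (the thesis's extended operator generalises this; the threshold below is illustrative):

```python
import numpy as np
from scipy import ndimage

def surface_magnitude(volume):
    """Gradient magnitude of a 3D volume from Sobel responses along each
    axis -- a common way to highlight voxels of strong intensity change,
    i.e. object surfaces."""
    g = [ndimage.sobel(volume.astype(float), axis=a) for a in range(3)]
    return np.sqrt(g[0] ** 2 + g[1] ** 2 + g[2] ** 2)

vol = np.zeros((64, 64, 64))
vol[20:40, 20:40, 20:40] = 1.0          # a synthetic cube
surface = surface_magnitude(vol) > 0.5  # boolean surface mask
```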
7

Serce, Hakan. "Facial Feature Extraction Using Deformable Templates". Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1224674/index.pdf.

Full text of the source
Abstract:
The purpose of this study is to develop an automatic facial feature extraction system which is able to identify the detailed shape of the eyes, eyebrows and mouth in facial images. The developed system not only extracts the location information of the features, but also estimates the parameters pertaining to the contours and parts of the features using a parametric deformable templates approach. In order to extract facial features, deformable models for each of the eye, eyebrow, and mouth are developed. The development steps of the geometry, the imaging model and matching algorithms, and the energy functions for each of these templates are presented in detail, along with the important implementation issues. In addition, an eigenfaces-based multi-scale face detection algorithm which incorporates standard facial proportions is implemented, so that when a face is detected the rough search regions for the facial features are readily available. The developed system is tested on the JAFFE (Japanese Female Facial Expression), Yale Faces, and ORL (Olivetti Research Laboratory) face image databases. The performance of each deformable template, and of the face detection algorithm, is discussed separately.
8

Sherrah, Jamie. "Automatic feature extraction for pattern recognition /". Title page, contents and abstract only, 1998. http://web4.library.adelaide.edu.au/theses/09PH/09phs553.pdf.

Full text of the source
Abstract:
Thesis (Ph. D.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 1999.
CD-ROM in back pocket comprises experimental results and executables. Includes bibliographical references (p. 251-261).
9

Ljumić, Elvis. "Image feature extraction using fuzzy morphology". Diss., Online access via UMI:, 2007.

Find the full text of the source
Abstract:
Thesis (Ph. D.)--State University of New York at Binghamton, Department of Systems Science and Industrial Engineering, Thomas J. Watson School of Engineering and Applied Science, 2007.
Includes bibliographical references.
10

Daniušis, Povilas. "Feature extraction via dependence structure optimization". Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20121001_093645-66010.

Full text of the source
Abstract:
In many important real-world applications the initial representation of the data is inconvenient, or even prohibitive, for further analysis. For example, in image analysis, text analysis and computational genetics, high-dimensional, massive, structural, incomplete and noisy data sets are common. Therefore feature extraction, or the revelation of informative features from raw data, is one of the fundamental machine learning problems. Efficient feature extraction helps to understand the data and the process that generates it, and reduces the cost of future measurements and data analysis. Representing structured data as a compact set of informative numeric features allows well-studied machine learning techniques to be applied instead of developing new ones. The dissertation focuses on supervised and semi-supervised feature extraction methods that optimise the dependence structure of features. Dependence is measured using the kernel estimator of the Hilbert-Schmidt norm of the covariance operator (the HSIC measure). Two dependence structures are investigated: in the first case we seek features which maximise the dependence on the dependent variable, and in the second we additionally minimise the mutual dependence of the features. Linear and kernel formulations of HBFE and HSCA are provided, and semi-supervised variants of both are constructed using the Laplacian regularisation framework. Experiments on conventional and multi-label classification data show that the suggested algorithms improve classification performance compared with PCA or LDA.
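The HSIC measure the dissertation builds on has a simple biased empirical estimator, HSIC = (n-1)^-2 tr(KHLH). A minimal numpy sketch, assuming RBF kernels (the bandwidth is illustrative):

```python
import numpy as np

def rbf_kernel(X, sigma):
    """Gaussian (RBF) kernel matrix for rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC: trace(K H L H) / (n - 1)^2."""
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n   # centring matrix
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

X = np.random.randn(100, 5)               # placeholder features
Y = np.random.randn(100, 3)               # placeholder dependent variable
print(hsic(X, Y))                          # larger value = stronger dependence
```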
11

Elliott, Rodney Bruce. "Feature extraction techniques for grasp classification". Thesis, University of Canterbury. Mechanical Engineering, 1998. http://hdl.handle.net/10092/3447.

Full text of the source
Abstract:
This thesis examines the ability of four signal parameterisation techniques to provide discriminatory information between six different classes of signal. This was done with a view to assessing the suitability of the four techniques for inclusion in the real-time control scheme of a next-generation robotic prosthesis. Each class of signal correlates to a particular type of grasp that the robotic prosthesis is able to form. Discrimination between the six classes of signal was done on the basis of parameters extracted from four channels of electromyographic (EMG) data recorded from muscles in the forearm. Human skeletal muscle tissue produces EMG signals whenever it contracts. Therefore, provided that the EMG signals of the muscles controlling the movements of the hand vary sufficiently when forming the different grasp types, discrimination between the grasps is possible. While it is envisioned that the chosen command discrimination system will be used by mid-forearm amputees to control a robotic prosthesis, the viability of the different parameterisation techniques was tested on data gathered from able-bodied volunteers in order to establish an upper limit of performance. The muscles from which signals were recorded are: the extensor pollicis brevis and extensor pollicis longus pair (responsible for moving the thumb); the extensor communis digitorum (responsible for moving the middle and index fingers); and the extensor carpi ulnaris (responsible for moving the little finger). The four signal parameterisation techniques that were evaluated are:
1. Envelope Maxima. This method parameterises each EMG signal by the maximum value of a smoothed fitted signal envelope. A tenth-order polynomial is fitted to the rectified EMG signal peaks, and the maximum value of the polynomial is used to parameterise the signal.
2. Orthogonal Decomposition. This method uses a set of orthogonal functions to decompose the EMG signal into a finite set of orthogonal components. Each burst is then parameterised by the coefficients of the set of orthogonal functions. Two sets of orthogonal functions were tested: the Legendre polynomials, and the wavelet packets associated with the scaling functions of the Haar wavelet (referred to as the Haar wavelet for brevity).
3. Global Dynamical Model. This method uses a discretised set of nonlinear ordinary differential equations to model the dynamical processes that produced the recorded EMG signals. The coefficients of this model are then used to parameterise the EMG signal.
4. EMG Histogram. This method formulates a histogram detailing the frequency with which the EMG signal enters particular voltage bins, and uses these frequency measurements to parameterise the signal.
Ten sets of EMG data were gathered and processed to extract the desired parameters. Each data set consisted of 600 grasps: 100 grasp records of four channels of EMG data for each of the six grasp classes. From this data a hit-rate statistic was formed for each feature extraction technique. The mean hit rates obtained from the four signal parameterisation techniques are summarised in Table 1.

  Parameterisation Technique    Hit Rate (%)
  Envelope Maxima               75
  Legendre Polynomials          77
  Haar Wavelets                 79
  Global Dynamical Model        75
  EMG Histogram                 81

  Table 1: Hit Rate Summary.

The EMG histogram provided the best mean hit rate of all the signal parameterisation techniques, at 81%. However, like all of the signal parameterisations that were tested, there was considerable variance in hit rates between the ten sets of data. This has been attributed to the manner in which the electrodes used to record the EMG signals were positioned. By locating the muscles of interest more accurately, consistent hit rates of 95% are well within reach. The fact that the EMG histogram produces the best mean hit rates is surprising given its relative simplicity. However, this simplicity makes the EMG histogram feature ideal for inclusion in a real-time control scheme.
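Given its simplicity, the EMG histogram feature is easy to sketch; the bin count and voltage range below are illustrative assumptions, not the thesis's settings:

```python
import numpy as np

def emg_histogram(channels, n_bins=9, v_range=(-5.0, 5.0)):
    """Parameterise a multi-channel EMG burst by the frequency with which
    each channel's voltage falls into fixed bins."""
    feats = []
    for ch in channels:                                  # one array per channel
        counts, _ = np.histogram(ch, bins=n_bins, range=v_range)
        feats.append(counts / counts.sum())              # normalised frequencies
    return np.concatenate(feats)                         # one feature vector

burst = [np.random.randn(1000) for _ in range(4)]        # 4 channels of EMG
x = emg_histogram(burst)                                 # length 4 * n_bins
```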
12

Guo, Da 1976. "Automated feature extraction in oceanographic visualization". Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/33438.

Full text of the source
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Ocean Engineering; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (leaves 141-147).
The ocean is characterized by a multitude of powerful, sporadic biophysical dynamical events; scientific research has reached the stage at which their interpretation and prediction is now becoming possible. Ocean prediction, analogous to atmospheric weather prediction but combining biological, chemical and physical features, is able to help us understand the complex coupled physics, biology and acoustics of the ocean. Applications of prediction of the ocean environment include exploitation and management of marine resources, pollution control, and the planning of maritime and naval operations. Given the vastness of the ocean, it is essential for effective ocean prediction to employ adaptive sampling to best utilize the available sensor resources in order to minimize the forecast error. It is important to concentrate measurements in the regions where one can witness features of physical or biological significance in progress. Thus automated feature extraction in oceanographic visualization can facilitate adaptive sampling by presenting the physically relevant features directly to the operation planners; moreover, it could be used to help automate adaptive sampling. Vortices (eddies and gyres) and upwelling, two typical and important features of the ocean, are studied. A variety of feature extraction methods are presented, and those more pertinent to this study are implemented, including derived field generation and attribute set extraction. Detection results are evaluated in terms of accuracy, computational efficiency, clarity and usability. Vortices, a very important flow feature, are the primary focus of this study. Several point-based and set-based vortex detection methods are reviewed. A set-based vortex core detection method based on geometric properties of vortices is applied to both classical vortex models and real ocean models. The direction-spanning property, a geometric property, guides the detection of all the vortex core candidates, and the conjugate-pair eigenvalue method is responsible for filtering out the false positives from the candidate set. Results show the new method to be analytically accurate and practically feasible, and superior to traditional point-based vortex detection methods. Methods for detecting streamlines are also discussed. Using the novel cross method or the winding angle method, closed streamlines around vortex cores can be detected. Therefore the whole vortex area, i.e. the combination of vortex core and surrounding streamlines, is detected. Accuracy and feasibility are achieved through automated vortex detection requiring no human inspection. The detection of another ocean feature, upwelling, is also discussed.
13

Filho, Carlos André Braile Przewodowski. "Feature extraction from 3D point clouds". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-30072018-111718/.

Full text of the source
Abstract:
Computer vision is a research field in which images are the main object of study. One of its categories of problems is shape description. Object classification is one important example of an application using shape descriptors. Classically, these processes were performed on 2D images. With the large-scale development of new technologies and the affordable price of equipment that generates 3D images, computer vision has adapted to this new scenario, expanding the classic 2D methods to 3D. However, it is important to highlight that 2D methods are mostly dependent on the variation of illumination and color, while 3D sensors provide depth, structure/3D shape and topological information beyond color. Thus, different methods of shape description and robust attribute extraction were studied, from which new attribute extraction methods based on 3D data have been proposed and described. The results obtained on well-known public datasets have demonstrated their efficiency and that they compete with other state-of-the-art methods in this area: the RPHSD (a method proposed in this dissertation) achieved 85.4% accuracy on the University of Washington RGB-D dataset, the second best accuracy on this dataset; the COMSD (another proposed method) achieved 82.3% accuracy, standing at the seventh position in the rank; and the CNSD (another proposed method) at the ninth position. Also, the RPHSD and COMSD methods have relatively small processing complexity, so they achieve high accuracy with low computing time.
14

Nash, Jason Mark. "Evidence gathering for dynamic feature extraction". Thesis, University of Southampton, 1999. https://eprints.soton.ac.uk/253031/.

Full text of the source
15

Direkoglu, Cem. "Feature extraction via heat flow analogy". Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/66595/.

Full text of the source
Abstract:
Feature extraction is an important field of image processing and computer vision. Features can be classified as low-level and high-level. Low-level features do not give shape information about the objects; popular low-level feature extraction techniques are edge detection, corner detection, thresholding as a point operation, and optical flow estimation. On the other hand, high-level features give shape information; popular techniques are active contours, region growing, template matching and the Hough transform. In this thesis, we investigate the heat flow analogy, a physics-based analogy, for both low-level and high-level feature extraction. Three different contributions to feature extraction, based on the heat conduction analogy, are presented in this thesis. The solution of the heat conduction equation depends on the properties of the material and the heat source, as well as on specified initial and boundary conditions. In our contributions, we consider and represent particular heat conduction problems, in the image and video domains, for feature extraction. The first contribution is moving-edge detection for motion analysis, which is low-level feature extraction. The second contribution is shape extraction from images, which is high-level feature extraction. Finally, the third contribution is silhouette object feature extraction for recognition purposes, which can be considered a combination of low-level and high-level feature extraction. Our evaluations and experimental results show that the heat analogy can be applied successfully both for low-level and for high-level feature extraction purposes in image processing and computer vision.
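For intuition, the core of the analogy is evolving image intensity under the heat (diffusion) equation dI/dt = laplacian(I). A minimal explicit finite-difference sketch (periodic boundaries via np.roll; the step count is illustrative, and the thesis's specific formulations use particular sources and boundary conditions):

```python
import numpy as np

def heat_flow(image, n_steps=50, dt=0.2):
    """Evolve an image under the isotropic heat equation with an explicit
    finite-difference scheme (dt <= 0.25 for stability in 2D)."""
    I = image.astype(float).copy()
    for _ in range(n_steps):
        lap = (np.roll(I, 1, 0) + np.roll(I, -1, 0) +
               np.roll(I, 1, 1) + np.roll(I, -1, 1) - 4 * I)  # 5-point Laplacian
        I += dt * lap
    return I
```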
16

Wärngård, Fredrik. "Feature Extraction from an SMT Problem". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-452233.

Full text of the source
Abstract:
One way of solving complex satisfiability problems is the method of SMT (Satisfiability Modulo Theories). For satisfiability problems that are harder than trivial ones, we could save ourselves a lot of time if we chose the most suitable solver immediately: these problems tend to take some time to solve, in accordance with their complexity, and different solvers specialise in different types of satisfiability problems. SMT is similar to the SAT modeling language but sits at a higher level, allowing further abstraction by including theories. In comparison to SAT problems, SMT problems are more information-dense. In addition, SMT solvers often come with complementary performance profiles. In this paper we entertain the idea that we could find similarity between problems by collecting metadata from them. To this end, a feature extractor that counts a manually defined set of features over SMT-LIB problems has been implemented. The idea is that if we have a problem, then these characteristics, if chosen carefully, could help us recognise problems with similar feature counts that have already been successfully solved within a portfolio solver system. The most successful solver for those problems may then be the most suitable solver for our problem as well, or at least we can use similar solving methods. Examples of these features are the number of uses of certain functions, or the total number of symbols. In addition to collecting the metadata, the test runs have shown some variation between the benchmark groups, whose results are presented and analysed, grouped by logic, in the results section of this paper.
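A feature extractor of this kind, counting uses of certain functions and total symbols, might look as follows (the tokenisation and tracked symbol set are illustrative assumptions, not the thesis's exact feature list):

```python
import re
from collections import Counter

def smtlib_features(path, tracked=("and", "or", "not", "=", "+", "*", "<=")):
    """Count occurrences of selected operators/symbols in an SMT-LIB file,
    plus the total symbol count, as a simple feature vector."""
    text = open(path).read()
    tokens = re.findall(r"[^\s()]+", text)       # crude SMT-LIB tokenisation
    counts = Counter(tokens)
    features = {f: counts.get(f, 0) for f in tracked}
    features["total_symbols"] = len(tokens)
    return features
```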
17

Rampally, Deepthi. "Iris recognition based on feature extraction". Manhattan, Kan. : Kansas State University, 2010. http://hdl.handle.net/2097/3647.

Full text of the source
18

Porter, Reid. "Evolution on FPGAs for feature extraction". Thesis, Queensland University of Technology, 2001.

Find the full text of the source
19

Cagli, Eleonora. "Feature Extraction for Side-Channel Attacks". Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS295.

Full text of the source
Abstract:
Cryptographic integrated circuits may be vulnerable to attacks based on the observation of information leakages during the cryptographic algorithms' executions, the so-called Side-Channel Attacks. Nowadays the presence of several countermeasures may lead to the acquisition of signals which are at the same time highly noisy, forcing an attacker or a security evaluator to exploit statistical models, and highly multi-dimensional, making the estimation of such models difficult. In this thesis we study preprocessing techniques aimed at reducing the dimension of the measured data, and the more general issue of information extraction from highly multi-dimensional signals. The first works concern the application of classical linear feature extractors, such as Principal Component Analysis and Linear Discriminant Analysis. We then analyse a non-linear generalisation of the latter extractor, obtained through the application of a "kernel trick", in order to keep such preprocessing effective in the presence of masking countermeasures. Finally, further generalising the extraction models, we explore the deep learning methodology, in order to reduce signal preprocessing and automatically extract sensitive information from the raw signal. In particular, the application of Convolutional Neural Networks allows us to perform attacks that remain effective in the presence of signal desynchronisation.
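As a sketch of the classical linear preprocessing step (placeholder data; the thesis also studies kernel and deep variants), scikit-learn's LDA can compress traces into a few discriminant components:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# traces: (n_traces, n_samples) power measurements;
# labels: an intermediate sensitive value per trace (here a 4-bit value).
traces = np.random.randn(1000, 5000)              # placeholder acquisition
labels = np.random.randint(0, 16, size=1000)      # placeholder labels

lda = LinearDiscriminantAnalysis(n_components=10) # keep 10 discriminant axes
reduced = lda.fit_transform(traces, labels)       # (1000, 10) feature space
```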
20

DESHPANDE, SUSHILENDRA ARUN. "FEATURE EXTRACTION AND INTRA-FEATURE DESIGN ADVISOR FOR SHEET METAL PARTS". University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1070392705.

Full text of the source
21

Al-Khafaji, Suhad. "Spectral-Spatial Feature Extraction for Hyperspectral Image Matching and Boundary Detection". Thesis, Griffith University, 2020. http://hdl.handle.net/10072/401445.

Full text of the source
Abstract:
A hyperspectral image contains a huge amount of information compared to grayscale and RGB images thanks to its high spectral resolution and wide sensing spectrum. This facilitates the analysis and interpretation of properties and features of specific materials in the image. Exploiting both spectral and spatial information can provide more comprehensive and discriminative characteristics of objects of interest than traditional methods. Recently, hyperspectral imaging has been used in many applications such as medicine, agriculture, environment and astronomy. Furthermore, due to the availability and cost reduction of imaging devices, hyperspectral imaging has also been introduced into computer vision and can be adopted in many pattern recognition tasks, for example object detection, recognition and segmentation. Therefore, the demand for developing new methods that explore all the information in hyperspectral images has increased. Accordingly, the main target of this thesis is to extract effective and robust spectral-spatial features from close-range hyperspectral images to facilitate pattern recognition and image processing tasks. In this thesis, we propose three novel spectral-spatial feature extraction methods for image matching and boundary detection. First, we exploit both spectral and spatial dimensions simultaneously by extracting 3D features from close-range hyperspectral images. These features are spectral and geometric transformation invariants that can be used to register hyperspectral images with different spectral conditions and different geometric projections. This method is named Spectral-Spatial Scale Invariant Feature Transform (SS-SIFT). Similar to the classic SIFT algorithm, SS-SIFT consists of keypoint detection and descriptor construction steps. Our main contribution to this method is extracting 3D keypoints from spectral-spatial scale space by detecting them from extrema after a 3D difference of Gaussians is applied to the data cube. Furthermore, for each keypoint, two descriptors are proposed by exploring the distribution of spectral-spatial gradient magnitudes in its local 3D neighbourhood. SS-SIFT features are used effectively to match hyperspectral images captured under different light conditions, from different viewing angles, and using two different hyperspectral cameras. The second work of this thesis is extracting spectral-spatial features for boundary detection in close-range hyperspectral images. Boundary detection is a fundamental task in computer vision, and numerous studies have addressed it in RGB images. However, there is a dearth of research on boundary detection in hyperspectral images due to the high data dimensionality and the complexity of the information distributed over the spectral bands. Thus, we propose a spectral-spatial feature-based statistical co-occurrence method for this task. We extract simple and significant features from both spectral and spatial dimensions at the same time to describe both structure and material properties. Each pixel vector is converted to a feature space based on its surrounding neighbours. We then adopt the probability density function to estimate the co-occurrence of features at neighbouring pixel pairs, and a spectral-spatial affinity matrix is constructed from the probability density function values. After that, a spectral clustering algorithm is applied to the affinity matrix to solve the eigenproblem and calculate the eigenvectors corresponding to the smallest eigenvalues. Finally, the boundary map is constructed from the most informative eigenvectors. Our proposed algorithm is very effective in exploring object boundaries in close-range hyperspectral images, considering both object material and structure. Lastly, we propose a novel method for effective and efficient boundary detection in close-range hyperspectral images. Considering that the reflectance of materials and their abundances provide significant details of the data, our algorithm investigates the potential of integrating hyperspectral unmixing results with the spectral responses to provide effective spectral-spatial features. We use a nonnegative matrix factorization method to estimate the material abundances and then fuse them linearly with material reflectance to provide spectral-spatial features. We subsequently construct a spectral-spatial affinity matrix by calculating efficient spectral similarity measurements between the feature vectors of neighbouring pixels within a local neighbourhood. After that, the eigenproblem of the affinity matrix is solved and the eigenvectors of the smallest eigenvalues are calculated. Finally, we construct the boundary map from the most informative eigenvectors. Regardless of the efficiency of the model, both proposed boundary detection methods are effective in predicting the boundaries of objects with similar colour but different materials, and can cope with several scenarios that methods based on colour images cannot handle.
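The eigenvector step common to both proposed boundary detectors can be sketched as follows, assuming feature vectors have already been extracted per pixel (dense affinities, so only feasible for small patches; sigma is illustrative):

```python
import numpy as np

def smallest_eigenvectors(features, k=4, sigma=1.0):
    """Build a dense affinity matrix from per-pixel feature vectors and
    return the eigenvectors of the normalised Laplacian with the smallest
    eigenvalues, from which a boundary map can be read off."""
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))                  # affinity matrix
    d = W.sum(axis=1)
    L = np.eye(len(W)) - W / np.sqrt(np.outer(d, d))    # normalised Laplacian
    vals, vecs = np.linalg.eigh(L)                      # ascending eigenvalues
    return vecs[:, :k]                                  # most informative vectors
```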
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Info & Comm Tech
Science, Environment, Engineering and Technology
Full Text
22

Cetin, Melih. "Extraction Of Buildings In Satellite Images". Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611944/index.pdf.

Full text of the source
Abstract:
In this study, an automated building extraction system, capable of detecting buildings in satellite images using only the RGB color bands, is implemented. The approach used in this work has four main steps: local feature extraction, feature selection, classification and post-processing. There are many studies in the literature that deal with the same problem; the main issue is to find the most suitable features to distinguish a building. This work presents a feature selection scheme that is connected with the classification framework of Adaboost. As well as Adaboost, four SVM kernels are used for classification. A detailed analysis regarding window type and size, feature type, feature selection, feature count and training set is done to determine the optimal parameters for the classifiers. A detailed comparison of SVM and Adaboost is made based on pixel and object performances, and the results obtained are presented both numerically and visually. It is observed that SVM performs better with a quadratic kernel than with linear, RBF or polynomial kernels. SVM performance is better if features are selected either by Adaboost or by considering the errors obtained on histograms of features. The performance obtained by quadratic-kernel SVM operated on Adaboost-selected features is found to be 38% in terms of the pixel-based performance criterion (quality percentage) and 48% in terms of the object-based performance criterion (correct detection) with a building detection threshold of 0.4. Adaboost performed better than SVM, resulting in a 43% quality percentage and 67% correct detection with the same threshold.
23

Atar, Neriman. "Video Segmentation Based On Audio Feature Extraction". Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610397/index.pdf.

Full text of the source
Abstract:
In this study, an automatic video segmentation and classification system based on audio features is presented. Video sequences are classified as videos with "speech", "music", "crowd" and "silence". The segments that do not belong to these regions are left as "unclassified". For silence segment detection, a simple threshold comparison method is applied to the short-time energy feature of the embedded audio sequence. For "speech", "music" and "crowd" segment detection a multiclass classification scheme is applied. For this purpose, three audio feature sets have been formed: one is purely MPEG-7 audio features, another is the audio features used in [31], and the last is the combination of these two feature sets. For choosing the best features a histogram comparison method has been used. The audio segmentation system was trained and tested with these feature sets. The evaluation results show that Feature Set 3, the combination of the other two feature sets, gives better performance for the audio classification system. The output of the classification system is an XML file which contains MPEG-7 audio segment descriptors for the video sequence. An application scenario is given by combining the audio segmentation results with visual analysis results to obtain audio-visual video segments.
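The silence detector described above reduces to thresholding short-time energy; a minimal sketch (frame length and threshold are illustrative assumptions):

```python
import numpy as np

def silence_frames(audio, sr, frame_ms=20, threshold=1e-4):
    """Label frames as silence when their short-time energy falls below
    a fixed threshold."""
    frame = int(sr * frame_ms / 1000)
    n = len(audio) // frame
    frames = audio[:n * frame].reshape(n, frame)
    energy = np.mean(frames ** 2, axis=1)   # short-time energy per frame
    return energy < threshold               # boolean silence mask
```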
24

SUNDHOLM, JOEL. "Feature Extraction for Anomaly Detection in Maritime Trajectories". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-155898.

Full text of the source
Abstract:
The operators of a maritime surveillance system are hard pressed to make complete use of the near real-time information flow available today. To assist them in this matter there has been an increasing amount of interest in automated systems for the detection of anomalous trajectories. Specifically, it has been proposed that the framework of conformal anomaly detection can be used, as it provides the key property of a well-tuned alarm rate. However, in order to get an acceptable precision there is a need to carefully tailor the nonconformity measure used to determine if a trajectory is anomalous. This also applies to the features that are used by the measure. To contribute to a better understanding of what features are feasible, and how the choice of feature space relates to the types of anomalies that can be found, we have evaluated a number of features on real maritime trajectory data with simulated anomalies. It is found that none of the tested feature spaces was best for detecting all anomaly types in the test set. While one feature space might be best for detecting one kind of anomaly, another feature space might be better for other anomalies. There are indications that the best possible nonconformity measure should capture both absolute anomalies, such as an anomalous position, as well as relative anomalies, such as strange turns or stops.
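The well-tuned alarm rate comes from the conformal p-value construction; a minimal sketch with an arbitrary nonconformity score (the choice of score is exactly what the thesis evaluates):

```python
import numpy as np

def conformal_anomaly(train_scores, new_score, epsilon=0.05):
    """Conformal anomaly detection: flag the new example if the fraction of
    training nonconformity scores at least as large as its own score is
    below epsilon -- epsilon directly tunes the alarm rate."""
    p_value = (np.sum(train_scores >= new_score) + 1) / (len(train_scores) + 1)
    return p_value < epsilon

# nonconformity = e.g. distance to the k nearest training trajectories
scores = np.random.rand(500)                      # placeholder scores
print(conformal_anomaly(scores, new_score=0.999)) # likely an alarm
```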
25

Lim, Suryani. "Feature extraction, browsing and retrieval of images". Monash University, School of Computing and Information Technology, 2005. http://arrow.monash.edu.au/hdl/1959.1/9677.

Full text of the source
26

Mashhadi-Farahani, Bahman. "Feature extraction and selection for speech recognition". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq38255.pdf.

Full text of the source
27

Chilo, José. "Feature extraction for low-frequency signal classification /". Stockholm : Fysik, Physics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4661.

Full text of the source
28

Tarcin, Serkan. "Fast Feature Extraction From 3d Point Cloud". Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615659/index.pdf.

Full text of the source
Abstract:
To teleoperate an unmanned vehicle, a rich set of information should be gathered from the surroundings. These systems use sensors which send high volumes of data, and processing the data in CPUs can be time consuming. Similarly, the algorithms that use the data may work slowly because of the amount of data. The solution is preprocessing the data taken from the sensors on the vehicle and transmitting only the necessary parts or the results of the preprocessing. In this thesis, a 180-degree laser scanner at the front end of an unmanned ground vehicle (UGV) is tilted up and down on a horizontal axis and point clouds are constructed from the surroundings. Instead of transmitting this data directly to the path planning or obstacle avoidance algorithms, a preprocessing stage is run. In this preprocessing stage, first the points belonging to the ground plane are detected and a simplified version of the ground is constructed, then the obstacles are detected. Finally, a simplified ground plane as ground and simple primitive geometric shapes as obstacles are sent to the path planning algorithms instead of the whole point cloud.
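Ground-plane detection in such a preprocessing stage is often done with a RANSAC-style plane fit; a minimal sketch (the thesis's actual method may differ; iteration count and tolerance are illustrative):

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, tol=0.05):
    """Fit a plane to a point cloud by RANSAC: repeatedly sample 3 points,
    fit a plane, and keep the plane with the most inliers."""
    best_inliers, best_plane = 0, None
    for _ in range(n_iters):
        p0, p1, p2 = points[np.random.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(normal) < 1e-9:
            continue                              # degenerate (collinear) sample
        normal /= np.linalg.norm(normal)
        dist = np.abs((points - p0) @ normal)     # point-to-plane distances
        inliers = np.sum(dist < tol)
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, p0)
    return best_plane                             # (unit normal, point on plane)

cloud = np.random.rand(500, 3) * np.array([10.0, 10.0, 0.05])  # flat-ish cloud
plane = ransac_ground_plane(cloud)
```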
29

Fahimi, Farshad. "A feature extraction system for human surveillance". Thesis, University of Portsmouth, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.507157.

Full text of the source
30

Jastram, Michael Oliver. "Inspection and feature extraction of marine propellers". Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/42632.

Full text of the source
31

Jaggi, S. (Seema). "Multiscale geometric feature extraction and object recognition". Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10764.

Full text of the source
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (leaves 138-142).
32

Jolly, Alistair Duncan. "Feature extraction from millimetre wave radar images". Thesis, University of Central Lancashire, 1992. http://clok.uclan.ac.uk/19034/.

Full text of the source
Abstract:
This thesis describes research into the segmentation and classification of features in images of ground terrain generated by an airborne millimetre wave radar. The principles of operation of the radar are established and it is shown how an image is produced from this particular radar. Parameters such as wavelength, antenna size and pulse length are related to the images, and a mathematical description of the radar data is given. The effectiveness of established image processing techniques when applied to millimetre wave radar images is reviewed, and a statistical classification technique is seen to yield encouraging results. This method of segmentation and classification is then extended to make optimal use of the available information from the radar. An orthogonal expansion of the Poincaré sphere representation of polarised radiation is established, and it is shown how different terrain types cluster in the eigenspace of these spherical harmonics. Segmentation then follows from the clustering properties of pixels within this multidimensional eigenspace, and classification from the locations of the clusters.
33

Tang, Yu. "Feature Extraction for the Cardiovascular Disease Diagnosis". Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-33742.

Full text of the source
Abstract:
Cardiovascular disease is a serious life-threatening disease. It can occur suddenly and progress rapidly. Finding the right disease features at an early stage is important to decrease the number of deaths and to make sure that the patient can fully recover. Though there are several methods of examination, describing heart activity in signal form is the most cost-effective way. In this case, the ECG is the best choice, because it records heart activity in signal form and is safer, faster and more convenient than other methods of examination. However, there are still problems with the ECG: not all ECG features are clear and easily understood, and frequency features are not present in the traditional ECG. To solve these problems, the project uses an optimized CWT algorithm to transform data from the time domain into the time-frequency domain. The result is evaluated with three data mining algorithms based on different mechanisms. The evaluation proves that the features in the ECG are successfully extracted and important diagnostic information in the ECG is preserved. A user interface is designed to increase efficiency and facilitate the implementation.
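A minimal sketch of moving an ECG record into the time-frequency domain with a CWT, here using PyWavelets with a Mexican-hat wavelet (the thesis's optimised CWT and wavelet choice may differ; the data below are placeholders):

```python
import numpy as np
import pywt

fs = 360                                  # assumed sampling rate (Hz)
ecg = np.random.randn(10 * fs)            # placeholder for a real ECG record

scales = np.arange(1, 64)
coefs, freqs = pywt.cwt(ecg, scales, "mexh", sampling_period=1 / fs)
# coefs has shape (len(scales), len(ecg)): a time-frequency map whose
# rows can serve as inputs to downstream classifiers
```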
34

Hurley, David J. "Force field feature extraction for ear biometrics". Thesis, University of Southampton, 2001. https://eprints.soton.ac.uk/256792/.

Full text of the source
Abstract:
The overall objective in defining feature space is to reduce the dimensionality of the original pattern space whilst maintaining discriminatory power for classification. To meet this objective in the context of ear biometrics, a novel force field transformation is introduced in which the image is treated as an array of mutually attracting particles that act as the source of a Gaussian force field. In a similar way to Newton's Law of Universal Gravitation, pixels are imagined to attract each other according to the product of their intensities and inversely to the square of the distance between them. Underlying the force field there is a scalar potential energy field, which in the case of an ear takes the form of a smooth surface that resembles a small mountain with a number of peaks joined by ridges. The peaks correspond to potential energy wells and, to extend the analogy, the ridges correspond to potential energy channels. The directional properties of the force field are exploited to automatically locate these wells and channels, which then form the basis of a set of characteristic ear features. The new features are robust, especially in the presence of noise, and have the advantage that the ear does not need to be explicitly extracted from its background. The directional properties of the ensuing force field lead to two equivalent extraction techniques: one is algorithmic and based on field lines, while the other is analytical and based on the divergence of force direction. The technique is validated by performing recognition on a database of ears selected from the XM2VTS face database. This confirms not only that ears do indeed appear to have potential as a biometric, but also that the new approach is well suited to their description.
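A direct (brute-force) sketch of the underlying potential energy surface, where each pixel contributes its intensity divided by distance; wells then show up as local maxima (O(N^2), so for small images only; this follows the described analogy, not the thesis's exact formulation):

```python
import numpy as np

def potential_field(image):
    """Scalar potential energy at each pixel: the sum over all other
    pixels of intensity divided by distance."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    vals = image.ravel().astype(float)
    E = np.zeros(h * w)
    for i, c in enumerate(coords):
        d = np.linalg.norm(coords - c, axis=1)
        d[i] = np.inf                              # no self-attraction
        E[i] = np.sum(vals / d)
    return E.reshape(h, w)

field = potential_field(np.random.rand(32, 32))    # peaks = energy wells
```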
35

Raffoul, Joseph Naim. "Blob Feature Extraction for Event Detection Cameras". University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1590165017029087.

Full text of the source
36

Akyildiz, Yeliz. "Feature extraction from synthetic aperture radar imagery". Connect to resource, 2000. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1258651629.

Full text of the source
37

Aktaruzzaman, M. "FEATURE EXTRACTION AND CLASSIFICATION THROUGH ENTROPY MEASURES". Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/277947.

Full text of the source
Abstract:
Entropy is a universal concept that represents the uncertainty of a series of random events. The notion of "entropy" is understood differently in different disciplines: in physics, it represents a thermodynamical state variable; in statistics it measures the degree of disorder; in computer science, it is used as a powerful tool for measuring the regularity (or complexity) of signals or time series. In this work, we have studied entropy-based features in the context of signal processing. The purpose of feature extraction is to select the relevant features from an entity; the type of features depends on the signal characteristics and the classification purpose. Many real-world signals are nonlinear and nonstationary and contain information that cannot be described by time- and frequency-domain parameters, but might be described well by entropy. However, in practice, the estimation of entropy suffers from some limitations and is highly dependent on series length. To reduce this dependence, we have proposed parametric estimation of various entropy indices and have derived analytical expressions (when possible) as well. We have then studied the feasibility of parametric estimation of entropy measures on both synthetic and real signals. The entropy-based features have finally been employed for classification problems related to clinical applications, activity recognition, and handwritten character recognition. Thus, from a methodological point of view, our study deals with feature extraction, machine learning, and classification methods. Different versions of entropy measures are found in the literature for signal analysis. Among them, approximate entropy (ApEn) and sample entropy (SampEn), followed by corrected conditional entropy (CcEn), are mostly used for physiological signal analysis; recently, entropy features have also been used for image segmentation. A related measure is Lempel-Ziv complexity (LZC), which measures the complexity of a time series, signal, or sequence, and whose estimation also relies on the series length. In particular, in this study, analytical expressions have been derived for the ApEn, SampEn, and CcEn of auto-regressive (AR) models. It should be mentioned that AR models have been employed for maximum entropy spectral estimation for many years. The feasibility of parametric estimates of these entropy measures has been studied on both synthetic series and real data. In the feasibility study, the agreement between numerical estimates of entropy and estimates obtained through a number of realizations of the AR model using Monte Carlo simulations has been observed. This agreement or disagreement provides information about the nonlinearity, nonstationarity, or non-Gaussianity present in the series. In some classification problems, the probability of agreement or disagreement has been proved to be one of the most relevant features. After the feasibility study of the parametric entropy estimates, the entropy and related measures have been applied to heart rate and arterial blood pressure variability analysis. The use of entropy and related features has proved relevant in developing sleep classification, handwritten character recognition, and physical activity recognition systems. The novel feature extraction methods researched in this thesis give good classification or recognition accuracy, in many cases superior to the features reported in the literature of the concerned application domains, even with lower computational costs.
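For concreteness, sample entropy, one of the measures whose parametric estimation the thesis studies, has a short direct (non-parametric) implementation; m and r below are the usual defaults, given as an illustration:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A / B), where B counts template pairs of length m
    within tolerance r * std (Chebyshev distance, self-matches excluded)
    and A does the same for templates of length m + 1."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def n_matches(mm):
        # one template per start index 0 .. n-m-1, each of length mm
        templ = np.array([x[i:i + mm] for i in range(n - m)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=-1)
        return (np.sum(d <= tol) - len(templ)) / 2.0  # unordered pairs, no self

    B, A = n_matches(m), n_matches(m + 1)
    return -np.log(A / B)                             # inf if no m+1 matches

print(sample_entropy(np.random.randn(300)))
```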
Style APA, Harvard, Vancouver, ISO itp.
38

Alaei, Fahimeh. "Texture Feature-based Document Image Retrieval". Thesis, Griffith University, 2019. http://hdl.handle.net/10072/385939.

Pełny tekst źródła
Streszczenie:
Storing and manipulating documents in digital form, contributing to a paperless society, has been the tendency of emerging technology. There has been notable growth in the variety and quantity of digitised documents, which have often been scanned or photographed and archived as images without any labelling or sufficient index information, and the growth of such document images will undoubtedly continue with new technology. Many techniques have been implemented in the literature to provide an effective way of retrieving and organizing these document images; however, designing automated systems that accurately retrieve document images from archives remains a challenging problem, and finding discriminative and effective features is the fundamental task in developing an efficient retrieval system. An overview of the literature reveals that document image retrieval (DIR) using texture-based features has not yet been broadly investigated. Texture features are suitable for large volumes of data and are generally fast to compute. In this study, the effectiveness of more than 50 different texture-based feature extraction methods from four categories of texture features - statistical, transform-based, model-based, and structural approaches - is investigated in order to propose a more accurate method for document image retrieval. Moreover, the influence of resolution and of similarity metrics on document image retrieval is examined. The MTDB, ITESOFT, and CLEF_IP datasets, which are heterogeneous datasets providing a great variety of page layouts and contents, are considered for experimentation, and the results are computed in terms of retrieval precision, recall, and F-score. By considering the performance, time complexity, and memory usage of the different texture features on the three datasets, the best category of texture features for obtaining the best retrieval results is discussed; the transform-based category proved more effective than the other categories in achieving higher retrieval results. Many new feature extraction and document image retrieval methods are proposed in this research. For fast document image retrieval, the number of extracted features and the time complexity play a significant role in the retrieval process. Thus, a fast and non-parametric texture feature extraction method based on summarising the local grey-level structure of the image is further proposed in this research. The proposed fast local binary pattern provided promising results, with lower computing time and smaller memory consumption than other variations of local binary pattern-based methods. A challenge arises in DIR systems when the document images in queries have a different resolution from the document images used to train the system; in addition, only a small number of document image samples with a particular resolution may be available for training. To investigate these two issues, an under-sampling concept is used to generate under-sampled images and improve the retrieval results. In order to use more than one characteristic of document images for retrieval, two different texture-based features are extracted - the fast local binary pattern method as a statistical approach and a wavelet analysis technique as a transform-based approach - yielding two feature vectors for every document image.
A classifier fusion method using weighted-average fusion of the distance measures obtained for each feature vector is then proposed to improve the document image retrieval results. To extract features closer to human visual perception, an appearance-based feature extraction method for document images is also proposed, in which the Gist operator is employed on the sub-images obtained from the wavelet transform; a set of global features is thereby extracted from the original image as well as the sub-images, with wavelet-based features as the second feature set. The classifier fusion technique is finally employed to find the similarity distances between the features extracted, using Gist and the wavelet transform, from a given query and the knowledge base. This proposed system obtained higher document image retrieval results than the other systems in the literature. The other appearance-based document image retrieval system proposed in this research is based on a saliency map derived from human visual attention: the saliency map obtained from the input document image is used to form a weighted document image, and features are then extracted from the weighted document images using the Gist operator. This retrieval system provided the best document image retrieval results compared with those reported for other systems. Further research could combine the properties of other approaches to improve the retrieval results. Since the experiments conducted here used no a priori knowledge of document image layout and content, prior knowledge about the document classes could also be integrated into the feature set to further improve retrieval performance.
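To make the texture-feature step concrete, here is a minimal NumPy sketch of the classic 8-neighbour local binary pattern histogram (not the thesis's fast LBP variant) together with a weighted-average fusion of two per-feature distances in the spirit of the fusion scheme described above; the weight w is an assumed free parameter.

```python
import numpy as np

def lbp_histogram(img):
    """256-bin histogram of basic 8-neighbour local binary patterns.
    img: 2-D grayscale array. This is the classic LBP, not the
    thesis's fast variant."""
    img = np.asarray(img)
    c = img[1:-1, 1:-1]
    # Neighbours clockwise from top-left; each contributes one bit.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalise so images of any size compare

def fused_distance(q_lbp, d_lbp, q_wav, d_wav, w=0.5):
    """Weighted-average fusion of two feature-wise distances,
    in the spirit of the classifier-fusion step described above."""
    d1 = np.linalg.norm(q_lbp - d_lbp)  # LBP-feature distance
    d2 = np.linalg.norm(q_wav - d_wav)  # wavelet-feature distance
    return w * d1 + (1 - w) * d2
```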
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Info & Comm Tech
Science, Environment, Engineering and Technology
Full Text
Style APA, Harvard, Vancouver, ISO itp.
39

Masip, David. "Feature extraction in face recognition on the use of internal and external features". Saarbrücken VDM Verlag Dr. Müller, 2005. http://d-nb.info/989265706/04.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
40

Jackson, Julie Ann. "Three-Dimensional Feature Models for Synthetic Aperture Radar and Experiments in Feature Extraction". The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1250608768.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
41

Jakeš, Jan. "Visipedia - Embedding-driven Visual Feature Extraction and Learning". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236120.

Pełny tekst źródła
Streszczenie:
Multidimensional indexing is an effective tool for capturing similarities between objects without the need for their explicit categorization. In recent years this method has been widely used for object annotation and has formed a significant part of the publications connected with the Visipedia project. This thesis analyzes the possibilities of machine learning from multidimensionally indexed images based on their visual features and presents methods for predicting the multidimensional coordinates of previously unseen images. It studies the relevant feature extraction algorithms, analyzes applicable machine learning methods, and describes the entire process of developing such a system. The resulting system is then tested on two different datasets, and the experiments performed present the first results for a task of this kind.
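By way of illustration, a minimal scikit-learn sketch of the coordinate-prediction step is given below: given precomputed image features and embedding coordinates, a regressor learns to map features to coordinates for unseen images. The file names are placeholders, and ridge regression is merely one plausible stand-in for the learners evaluated in the thesis.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical data: image feature vectors (e.g. CNN activations) and
# the 2-D embedding coordinates of the indexed images.
features = np.load("image_features.npy")  # shape (n_images, n_dims) -- assumed file
coords = np.load("embedding_coords.npy")  # shape (n_images, 2)      -- assumed file

# Fit on all but the last 100 images, predict coordinates for the rest;
# the thesis compares several such regressors, this is just one choice.
model = Ridge(alpha=1.0).fit(features[:-100], coords[:-100])
predicted = model.predict(features[-100:])  # coordinates for held-out images
```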
Style APA, Harvard, Vancouver, ISO itp.
42

Graf, Arnulf B. A. "Classification and feature extraction in man and machine". [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=972533508.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
43

Westin, Carl-Fredrik. "Feature extraction based on a tensor image description". Licentiate thesis, Linköping University, Computer Vision, 1991. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54888.

Pełny tekst źródła
Streszczenie:

Feature extraction from a tensor-based local image representation introduced by Knutsson in [37] is discussed. The tensor representation keeps statements of structure, certainty of statement, and energy separate. Further processing that preserves these three entities in new features is achieved through a new concept, tensor field filtering. Tensor filters for smoothing and for extracting circular symmetries are presented and discussed in particular. These methods are used for corner detection and for extracting more global features such as lines in images. A novel method for grouping local orientation estimates into global line parameters is introduced. The method is based on a new parameter space, the Möbius strip parameter space, which has similarities to the Hough transform. A local centroid clustering algorithm is used for classification in this space. The procedure automatically divides curves into line segments of appropriate length depending on the curvature. A linked-list structure is built up for storing the data efficiently.
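As a rough stand-in for the representation discussed above (Knutsson's tensors are built from quadrature filter outputs rather than plain gradients), the sketch below computes a gradient-based structure tensor, smooths its components as the simplest form of tensor field filtering, and derives a local orientation plus a certainty per pixel.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def orientation_tensor(img, sigma=2.0):
    """Gradient-based structure tensor -- a simpler stand-in for the
    quadrature-filter tensors of the thesis. Returns a local
    orientation angle and a certainty measure per pixel."""
    img = np.asarray(img, dtype=float)
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    # Componentwise Gaussian smoothing of the outer-product tensor:
    # tensor field filtering in its most basic form.
    txx = gaussian_filter(gx * gx, sigma)
    txy = gaussian_filter(gx * gy, sigma)
    tyy = gaussian_filter(gy * gy, sigma)
    # Orientation of the dominant eigenvector (double-angle formula).
    angle = 0.5 * np.arctan2(2 * txy, txx - tyy)
    # Certainty: eigenvalue spread relative to total local energy.
    spread = np.sqrt((txx - tyy) ** 2 + 4 * txy ** 2)
    certainty = spread / (txx + tyy + 1e-12)
    return angle, certainty
```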


Invalid number / other version: the ISBN in publication no. 290 is 91-7870-815-X.
Style APA, Harvard, Vancouver, ISO itp.
44

Tellioglu, Zafer Hasim. "Real Time 3d Surface Feature Extraction On Fpga". Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612200/index.pdf.

Pełny tekst źródła
Streszczenie:
Three-dimensional (3D) surface feature extraction based on mean (H) and Gaussian (K) curvature analysis of range maps, also known as depth maps, is an important tool for machine vision applications such as object detection, registration, and recognition. Mean and Gaussian curvature calculation algorithms have previously been implemented and examined in software. In this thesis, hardware-based digital curvature processors are designed: two types of real-time surface feature extraction and classification hardware are developed that perform mean and Gaussian curvature analysis at different scale levels, using different gradient approximations. A fast square root algorithm combining a look-up table (LUT) with linear fitting is developed to calculate the H and K values of the surface described by a 3D range map of fixed-point numbers. The proposed methods are simulated in MATLAB and implemented on different FPGAs in the VHDL hardware description language. The calculation times, outputs, and power consumption of these techniques are compared with CPU-based calculations on 64-bit floating-point data.
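As a floating-point reference for what such a curvature processor computes, the sketch below derives H and K from a range map using central-difference gradient approximations and the standard graph-surface formulas; the fixed-point arithmetic, LUT square root, and multi-scale analysis of the thesis are not modelled.

```python
import numpy as np

def surface_curvatures(z):
    """Mean (H) and Gaussian (K) curvature of a range map z[y, x]
    from central-difference gradients -- a floating-point reference
    for the fixed-point FPGA arithmetic described above."""
    zy, zx = np.gradient(np.asarray(z, dtype=float))  # first derivatives
    zxy, zxx = np.gradient(zx)                        # second derivatives
    zyy, _ = np.gradient(zy)
    denom = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / denom**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy
         + (1 + zy**2) * zxx) / (2 * denom**1.5)
    return H, K

# Example: a patch of a unit sphere has (near-)constant positive K.
y, x = np.mgrid[-0.5:0.5:64j, -0.5:0.5:64j]
H, K = surface_curvatures(np.sqrt(1.0 - x**2 - y**2))
```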
Style APA, Harvard, Vancouver, ISO itp.
45

Lorentzon, Matilda. "Feature Extraction for Image Selection Using Machine Learning". Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-142095.

Pełny tekst źródła
Streszczenie:
During flights with manned or unmanned aircraft, continuous recording can result in a very high number of images to analyze and evaluate. To simplify image analysis and to minimize data link usage, appropriate images should be suggested for transfer and further analysis. This thesis investigates features used for the selection of images worthy of further analysis using machine learning. The selection is based on the criteria of having good quality, salient content, and being unique compared to the other selected images. The investigation is approached by implementing two binary classifications, one regarding content and one regarding quality, using support vector machines. For each of the classifications, three feature extraction methods are performed and the results are compared against each other. The feature extraction methods used are histograms of oriented gradients, features from the discrete cosine transform domain, and features extracted from a pre-trained convolutional neural network. The images classified as both good and salient are then clustered based on similarity measures retrieved using color coherence vectors. One image from each cluster is retrieved, and those are the resulting images of the image selection. The performance of the selection is evaluated using the measures precision, recall, and accuracy. The investigation showed that features extracted from the discrete cosine transform provided the best results for the quality classification, while for the content classification, features extracted from a convolutional neural network provided the best results. The similarity retrieval proved to be the weakest part, and the entire system together provides an average accuracy of 83.99%.
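To illustrate one of the three compared pipelines, here is a minimal sketch of the HOG-plus-SVM quality classification using scikit-image and scikit-learn; the data files, split sizes, and kernel choice are assumptions, not the thesis's exact configuration.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

# Hypothetical arrays standing in for annotated flight imagery:
# grayscale images and binary quality labels (assumed files).
images = np.load("images.npy")  # shape (n, H, W), n >= 1000 assumed
labels = np.load("labels.npy")  # 1 = good quality, 0 = bad

# Histograms of oriented gradients, one of the three feature
# extraction methods compared in the thesis.
X = np.array([hog(im, pixels_per_cell=(16, 16)) for im in images])

clf = SVC(kernel="rbf").fit(X[:800], labels[:800])  # train split
print(clf.score(X[800:], labels[800:]))             # held-out accuracy
```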
Style APA, Harvard, Vancouver, ISO itp.
46

Dai, Guang. "Feature extraction VIA kernel weighted discriminant analysis methods /". View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20DAI.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
47

Pan, Wendy. "A simulated shape recognition system using feature extraction /". Online version of thesis, 1989. http://hdl.handle.net/1850/10496.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
48

Yeu, Yeon. "Linear feature extraction". 2003. http://catalog.hathitrust.org/api/volumes/oclc/54114716.html.

Pełny tekst źródła
Streszczenie:
Thesis (M.S.)--University of Wisconsin--Madison, 2003.
Typescript. Includes bibliographical references (leaves 58-60).
Style APA, Harvard, Vancouver, ISO itp.
49

Wang, Wen-Hung, i 王文宏. "Research and Applications on Feature Extraction and Feature Matching". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/19133507765828544532.

Pełny tekst źródła
Streszczenie:
Master's thesis
Huafan University
Master's Program, Department of Information Management
95
In computer vision there are many difficult problems, such as object recognition, 3D reconstruction, and object tracking, and solving them relies on image matching. In this work, we propose to modify the descriptors of SIFT (Scale Invariant Feature Transform) so that the resulting invariant feature points produce better image matching results. The SIFT algorithm can successfully extract the most descriptive feature points in input images taken from different viewpoints. This work not only modifies the generation of the descriptors but also proposes to employ the Earth Mover's Distance (EMD) as the measure of dissimilarity between two descriptors. Since the feature matching process is easily disturbed by noise, lighting conditions, and other interference, we conduct experiments to test whether the proposed method is sensitive to such interference by examining the feature matching accuracy when computing the fundamental matrix.
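For orientation, a standard OpenCV SIFT matching pipeline of the kind modified in the thesis is sketched below. It uses the usual Euclidean descriptor distance with Lowe's ratio test (OpenCV's matcher does not provide EMD-based descriptor matching out of the box), and the image file names are placeholders.

```python
import cv2
import numpy as np

# Two views of the same scene; file names are placeholders.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute their descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test filters out ambiguous matches.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# Matching accuracy can be assessed via the fundamental matrix,
# estimated robustly with RANSAC as in the experiments above.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
```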
Style APA, Harvard, Vancouver, ISO itp.
50

Hsu, Yu-Cheng, i 許育誠. "Automatic Tongue Feature Extraction". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/28453769138666871838.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
98
In recent years, Chinese medicine has triggered a new wave of interest in the Western medical profession. Chinese medicine diagnoses the patient through four examinations: inspection, listening and smelling, inquiry, and palpation; tongue diagnosis belongs to inspection, the first of the four. The outcome of tongue diagnosis is based on the features of the tongue observed by the doctor. Observation of the tongue focuses on the tongue presentation, which is made up of the shape of the tongue, the substance of the tongue, and the coating of the tongue. Pathology of the tongue shape includes medium, fat, lean, crooked, and so on. In the pathology of the tongue substance, tongue colors include pale, close to pale, reddish, red, dark red, and dark purple, along with features such as ecchymosis, broken lines, tooth marks, and red dots. Pathology of the tongue coating includes white, yellow, black, greasy, thick, thin, peeling, absent, and so on. Clinically, doctors mostly rely on their own knowledge and experience when identifying major lesions in a patient by observing the coloration, overall modalities, and volume of saliva on different parts of the tongue. As a result, the diagnosis tends to be limited by knowledge, experience, train of thought, and diagnostic technique, and the subjective determination is likely to be affected by the doctor's color sensitivity and interpretation. Different doctors may come to drastically different judgments on the same tongue presentation, with little overlap. It is therefore important to develop scientific methods that help doctors diagnose through standardized differentiation procedures and render reliable diagnoses, in order to enhance the clinical application value of Chinese medicine. This thesis presents the computerized automatic capture of the characteristics shown in images of the tongue surface. First, the captured image undergoes brightness and color correction through brightness calibration and color calibration. The original tongue images then go through HSI color space conversion, detection of control points inside and outside the tongue contour, curve smoothness modification, and an active contour model to extract the tongue region. After that, the tongue shape, tongue fur, tongue body, and body fluid are extracted from the image of the tongue.
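As a rough sketch of the segmentation step, the snippet below converts a tongue photograph to HSV (standing in for the HSI space used in the thesis) and shrinks an active contour onto the tongue with scikit-image; the file name, initial circle, and snake parameters are illustrative placeholders rather than the thesis's calibrated pipeline.

```python
import numpy as np
from skimage import io
from skimage.color import rgb2hsv
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Placeholder input; the thesis additionally performs brightness and
# color calibration before this step.
img = io.imread("tongue.png") / 255.0  # RGB, shape (H, W, 3)
hsv = rgb2hsv(img)                     # HSV as a stand-in for HSI

# Initial contour: a circle around the image centre, a placeholder
# for the detected control points inside and outside the tongue.
theta = np.linspace(0, 2 * np.pi, 200)
rows, cols = img.shape[:2]
init = np.column_stack([rows / 2 + 0.4 * rows * np.sin(theta),
                        cols / 2 + 0.4 * cols * np.cos(theta)])

# The snake settles on strong edges of the smoothed saturation channel.
snake = active_contour(gaussian(hsv[..., 1], sigma=3), init,
                       alpha=0.015, beta=10, gamma=0.001)
```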
Style APA, Harvard, Vancouver, ISO itp.