Dissertations / Theses on the topic 'Feature processing'

To see the other types of publications on this topic, follow the link: Feature processing.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Feature processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Porter, Nicholas David. "Facial feature processing using artificial neural networks." Thesis, University of Warwick, 1998. http://wrap.warwick.ac.uk/59539/.

Full text
Abstract:
Describing a human face is a natural ability used in everyday life. To the police, a witness's description of a suspect is key evidence in identifying the suspect. However, the process of examining "mug shots" to find a match to the description is tedious and often unfruitful. If a description could be stored with each photograph and used as a searchable index, this would provide a much more effective means of using "mug shots" for identification purposes. A set of descriptive measures has been defined by Shepherd [73] which seeks to describe faces in a manner that may be used for just this purpose. This work investigates methods of automatically determining these descriptive measures from digitised images. Analysis is performed on the images to establish the potential for distinguishing between different categories in these descriptions. This reveals that while some of the classifications are relatively linear, others are highly non-linear. Artificial neural networks (ANNs), which are often used as non-linear classifiers, are considered as a means of automatically classifying the images. As a comparison, simple linear classifiers are also applied to the same problems.
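The gap between linear and non-linear classification that this abstract describes can be illustrated with a minimal sketch (invented toy data, not the thesis' method): a perceptron, the simplest linear classifier, learns a linearly separable labelling such as AND but cannot fit a non-linear one such as XOR, which is where ANNs come in.

```python
# Illustrative sketch: a perceptron (linear classifier) succeeds on a
# linearly separable problem (AND) but cannot reach full accuracy on a
# non-linear one (XOR) -- the motivation for non-linear classifiers.

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Train the rule: predict 1 if w.x + b > 0, else 0."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def accuracy(w, b, samples, labels):
    correct = 0
    for x, y in zip(samples, labels):
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        correct += (pred == y)
    return correct / len(samples)

# AND is linearly separable; XOR is not.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
and_labels = [0, 0, 0, 1]
xor_labels = [0, 1, 1, 0]

w, b = train_perceptron(pts, and_labels)
print(accuracy(w, b, pts, and_labels))   # 1.0

w, b = train_perceptron(pts, xor_labels)
print(accuracy(w, b, pts, xor_labels))   # stays below 1.0
```

No linear decision boundary separates the XOR labels, so the perceptron never converges on it, however long it trains.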
APA, Harvard, Vancouver, ISO, and other styles
2

Hosie, Judith A. "Feature and configural factors in face processing." Thesis, Cardiff University, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dyson, Benjamin J. "Processing and representation in auditory cognition." Thesis, University of York, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.270043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Pohl, Carsten [Verfasser], and Andrea [Akademischer Betreuer] Kiesel. "Feature processing and feature integration in unconscious processing : A Study with chess novices and experts / Carsten Pohl. Betreuer: Andrea Kiesel." Würzburg : Universitätsbibliothek der Universität Würzburg, 2012. http://d-nb.info/1019487135/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chen, Xiao Yu. "Feature matching of deformable models /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?MECH%202008%20CHENX.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Youn, Eun Seog. "Feature selection in support vector machines." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1000171.

Full text
Abstract:
Thesis (M.S.)--University of Florida, 2002.
Title from title page of source document. Document formatted into pages; contains x, 50 p.; also contains graphics. Includes vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
7

Smith, Stephen Mark. "Feature based image sequence understanding." Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316951.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sommerville, M. G. L. "Viewer-centred geometric feature recognition." Thesis, Heriot-Watt University, 1996. http://hdl.handle.net/10399/691.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hocking, Julia. "The anatomical substrates of feature integration during object processing." Thesis, University College London (University of London), 2008. http://discovery.ucl.ac.uk/1444274/.

Full text
Abstract:
Objects can be identified from a number of perceptual attributes, including visual, auditory and tactile sensory input. The integration of these perceptual attributes constitutes our semantic knowledge of an object representation. This research uses functional neuroimaging to investigate the brain areas that integrate perceptual features into an object representation, and how these regions are modulated by stimulus- and task-specific features. A series of experiments are reported that utilise different types of perceptual integration, both within and across sensory modalities. These include 1) the integration of visual form with colour, 2) the integration of visual and auditory object features, and 3) the integration of visual and tactile abstract shapes. Across these experiments I have also manipulated additional factors, including the meaning of the perceptual information (meaningful objects versus meaningless shapes), the verbal or non-verbal nature of the perceptual inputs (e.g. spoken words versus environmental sounds) and the congruency of crossmodal inputs. These experiments have identified a network of brain regions both common to, and selective for, different types of object feature integration. For instance, I have identified a common bilateral network involved in the integration and association of crossmodal audiovisual objects and intra-modal auditory or visual object pairs. However, I have also determined that activation in response to the same concepts can be modulated by the type of stimulus input (verbal versus nonverbal), the timing of those inputs (simultaneous versus sequential presentation), and the congruency of stimulus pairs (congruent versus incongruent). Taken together, the results from these experiments demonstrate modulations of neuronal activation by different object attributes at multiple different levels of the object processing hierarchy, from early sensory processing through to stored object representations. 
Critically, these differential effects have even been observed with the same conceptual stimuli. Together these findings highlight the need for a model of object feature processing that can account for the functional demands that elicit these anatomical differences.
APA, Harvard, Vancouver, ISO, and other styles
10

Zhu, Wenyao. "Time-Series Feature Extraction in Embedded Sensor Processing System." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281820.

Full text
Abstract:
Embedded sensor-based systems mounted with tens or hundreds of sensors can collect enormous amounts of time-series data, while the analysis of those time series is commonly conducted on the remote server side. With the development of microprocessors, there have been increasing demands to move the analysis process to the local embedded systems. The objective of this thesis is to investigate time-series feature extraction methods suitable for embedded sensor processing systems. Starting from this objective, we have explored traditional statistical methods and machine learning approaches to time-series data mining. To narrow down the research scope, the thesis focuses on similarity search methods together with clustering algorithms, from the perspective of time-series feature extraction. In the project, we have chosen and implemented two clustering algorithms, K-means and the Self-Organizing Map (SOM), combined with two similarity search methods, the Euclidean distance and Dynamic Time Warping (DTW). The evaluation setup uses four labelled public datasets, with the Rand index (RI) scoring the accuracy. We have tested the accuracy and time consumption of the four combinations of the chosen algorithms on the embedded platform. The results show that the SOM with DTW can generally achieve better accuracy, with a relatively longer inference time, than the other evaluated methods. Quantitatively, the SOM with DTW can cluster one time-series sample of 300 data points into twelve classes in 40 ms on the ESP32 embedded microprocessor, with a 4-percentage-point accuracy advantage in RI score over the fastest method, K-means with Euclidean distance. We conclude that the SOM with DTW algorithm can be used to handle time-series clustering tasks on embedded sensor processing systems if the timing requirements are not too stringent.
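The similarity measure behind the best-performing combination in this abstract, Dynamic Time Warping, can be sketched via the classic dynamic-programming recurrence (an illustrative pure-Python version with invented toy series, not the thesis' ESP32 implementation):

```python
# Hedged sketch of Dynamic Time Warping (DTW) between two 1-D series,
# using the standard O(n*m) dynamic-programming table.

def dtw_distance(a, b):
    """Return the minimal DTW alignment cost between sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # repeat a-sample
                                 cost[i][j - 1],      # repeat b-sample
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

# A time-shifted copy of a pattern: Euclidean distance is large,
# while DTW warps the time axis and absorbs the shift entirely.
x = [0, 0, 1, 2, 1, 0, 0, 0]
y = [0, 0, 0, 1, 2, 1, 0, 0]
euclid = sum((xi - yi) ** 2 for xi, yi in zip(x, y)) ** 0.5
print(dtw_distance(x, y))  # 0.0
print(euclid)              # 2.0
```

This shift-invariance is exactly why DTW tends to cluster time series more accurately than Euclidean distance, at the price of the quadratic cost that shows up as the longer inference time reported above.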
APA, Harvard, Vancouver, ISO, and other styles
11

Friedel, Paul. "Sensory information processing : detection, feature extraction, & multimodal integration." kostenfrei, 2008. http://mediatum2.ub.tum.de/doc/651333/651333.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Suh, Hyejean. "The role of local feature processing in object perception /." May be available electronically:, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

NOZZA, DEBORA. "Deep Learning for Feature Representation in Natural Language Processing." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2018. http://hdl.handle.net/10281/241185.

Full text
Abstract:
The huge amount of textual user-generated content on the Web has grown incredibly in the last decade, creating relevant new opportunities for different real-world applications and domains. To overcome the difficulties of dealing with this large volume of unstructured data, the research field of Natural Language Processing has provided efficient solutions, developing computational models able to understand and interpret human natural language without any (or almost any) human intervention. This field has gained further computational efficiency and performance from the advent of the recent machine learning research lines concerned with Deep Learning. In particular, this thesis focuses on a specific class of Deep Learning models devoted to learning high-level and meaningful representations of input data in unsupervised settings, by computing multiple non-linear transformations of increasing complexity and abstraction. Indeed, learning expressive representations from the data is a crucial step in Natural Language Processing, because it involves the transformation from discrete symbols (e.g. characters) to a machine-readable representation as real-valued vectors, which should encode the semantic and syntactic meanings of the language units. The first research direction of this thesis is aimed at giving evidence that enhancing Natural Language Processing models with representations obtained by unsupervised Deep Learning models can significantly improve the computational abilities of making sense of large volumes of user-generated text. In particular, this thesis addresses tasks that are considered crucial for understanding what the text is talking about, by extracting and disambiguating the named entities (Named Entity Recognition and Linking), and which opinion the user is expressing, dealing also with irony (Sentiment Analysis and Irony Detection). 
For each task, this thesis proposes a novel Natural Language Processing model enhanced by the data representation obtained by Deep Learning. As a second research direction, this thesis investigates the development of a novel Deep Learning model for learning a meaningful textual representation that takes into account the relational structure underlying user-generated content. The inferred representation comprises both textual and relational information. Once the data representation is obtained, it can be exploited by off-the-shelf machine learning algorithms in order to perform different Natural Language Processing tasks. In conclusion, the experimental investigations reveal that models able to incorporate high-level features obtained by Deep Learning show significantly better performance and improved generalization abilities. Further improvements can also be achieved by models able to take into account the relational information in addition to the textual content.
APA, Harvard, Vancouver, ISO, and other styles
14

Sze, Wui-fung. "Robust feature-point based image matching." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B37153262.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Sze, Wui-fung, and 施會豐. "Robust feature-point based image matching." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B37153262.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Lim, Suryani. "Feature extraction, browsing and retrieval of images." Monash University, School of Computing and Information Technology, 2005. http://arrow.monash.edu.au/hdl/1959.1/9677.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Adekunle, Carl Bunmi. "A technique for detecting feature interaction." Thesis, Royal Holloway, University of London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.249335.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Ng, Ee Sin. "Image feature matching using pairwise spatial constraints." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610418.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Nilsson, Niklas. "Feature detection for geospatial referencing." Thesis, Umeå universitet, Institutionen för fysik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-159809.

Full text
Abstract:
With the drone industry's recent explosive advancement, aerial photography is becoming increasingly important for an array of applications ranging from construction to agriculture. A drone flyover can give a better overview of regions that are difficult to navigate, and is often significantly faster, cheaper and more accurate than hand-drawn sketches and other alternatives. With this increased use comes a growing need for image processing methods to help in analyzing captured photographs. This thesis presents a method for automatic location detection in aerial photographs using databases of aerial photographs and satellite images. The proposed pipeline is based on an initial round of tests, performed by using existing feature detection, description and matching algorithms on aerial photographs with a high degree of similarity, after which further modifications and improvements were implemented to make the method functional also for aerial photographs with a high level of inherent differences, e.g. viewpoint changes, different camera and lens parameters, temporary objects and weather effects. The method is shown to yield highly accurate results in geographical regions containing features with a low level of ambiguity, and where factors like viewpoint difference are not too extreme. In particular, the method has been most successful in cities and some types of farmland, producing very good results compared to methods based on camera parameters and GPS location, which have previously been common in automatic location detection. Knowledge of these parameters is not necessary when applying the method, making it applicable more generally and independently of the precision of the instruments used to determine said parameters. Furthermore, the approach is extended for automatic processing of video streams. With a lack of available ground-truth data, no definite conclusions about the absolute accuracy of the method can be drawn for this use case. 
It is nevertheless clear that processing speeds can be greatly improved by making use of the fact that subsequent video snapshots have a large graphical overlap, and that, for the tested video stream, a form of extrapolation can greatly reduce the risk of graphical noise making location detection impossible for any given snapshot.
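A standard building block for the kind of feature matching this abstract relies on is nearest-neighbour descriptor matching with Lowe's ratio test, which discards matches whose best candidate is not clearly better than the runner-up. The sketch below is illustrative only (invented 2-D descriptors, not the thesis' pipeline):

```python
# Hedged sketch: brute-force descriptor matching with Lowe's ratio test.
# Real systems use high-dimensional descriptors (e.g. SIFT); tiny 2-D
# vectors are used here purely for illustration.

def match_descriptors(query, reference, ratio=0.8):
    """Return (query_idx, ref_idx) pairs passing the ratio test."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    matches = []
    for qi, q in enumerate(query):
        ranked = sorted((dist2(q, r), ri) for ri, r in enumerate(reference))
        best, second = ranked[0], ranked[1]
        # Accept only if the best match clearly beats the second best
        # (ratio is squared because dist2 is a squared distance).
        if best[0] < (ratio ** 2) * second[0]:
            matches.append((qi, best[1]))
    return matches

q = [(1.0, 0.0), (0.0, 1.0), (0.475, 0.525)]
ref = [(0.9, 0.1), (0.1, 0.9), (0.45, 0.55), (0.5, 0.5)]
print(match_descriptors(q, ref))  # the ambiguous third query is rejected
```

The third query descriptor sits almost exactly between two reference descriptors, so its nearest and second-nearest distances are nearly equal and the match is dropped; this is the mechanism that suppresses ambiguous features in repetitive terrain.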
APA, Harvard, Vancouver, ISO, and other styles
20

Jia, Xiaoguang. "Extending the feature set for automatic face recognition." Thesis, University of Southampton, 1993. https://eprints.soton.ac.uk/250161/.

Full text
Abstract:
Automatic face recognition has long been studied because it has wide potential for application. Several systems have been developed to identify faces from small face populations via detailed face feature analysis, by using neural nets, or through model-based approaches. This study has aimed to provide satisfactory recognition within large populations of human faces, and has concentrated on improving feature definition and extraction to establish an extended feature set leading to a fully structured recognition system based on a single frontal view. An overall review of the development and the techniques of automatic face recognition is included, and the performance of earlier systems is discussed. A novel profile description has been derived from a frontal view of a face and is represented by a Walsh power spectrum, which was selected from seven different descriptions due to its ability to distinguish the differences between profiles of different faces. A further feature concerns the face contour, which is extracted by iterative curve fitting and described by normalized Fourier descriptors. To accompany an extended set of geometric measurements, the eye region feature is described statistically by eye-centred moments. Hair texture has also been studied for the purpose of segmenting it from other parts of the face and to investigate the possibility of using it as a set of features. These new features combine to form an extended feature vector to describe a face. The algorithms for feature extraction have been implemented on face images from different subjects and on multiple views of the same person, without the face being normal to the camera or under constant illumination. Features have subsequently been assessed on each feature set separately and on the composite feature vector. 
The results have continued to emphasize that, though each description can be used to recognise a face, there is a clear need for an extended feature set to cope with the requirements of recognizing faces within large populations.
APA, Harvard, Vancouver, ISO, and other styles
21

Nilsson, Mikael. "On feature extraction and classification in speech and image processing /." Karlskrona : Department of Signal Processing, School of Engineering, Blekinge Institute of Technology, 2007. http://www.bth.se/fou/forskinfo.nsf/allfirst2/fcbe16e84a9ba028c12573920048bce9?OpenDocument.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Moffat, Robert. "Are temporal processing deficits a central feature of language impairment?" Thesis, University of Strathclyde, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.410208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Li, Haopeng. "Feature-Based Image Processing for Rendering, Compression, and Visual Search." Doctoral thesis, KTH, Kommunikationsteori, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177994.

Full text
Abstract:
Visual communication, vivid, meaningful, and creative, permits a way to express information visually. The communication media, by images, graphs and videos, pass informative color and shape to human perception sensors. But when we look closely, we wonder: are we merely passive receivers? Or can we actively select what we would like? Can our eyes only sense the visual images? Or can we enjoy a comprehensive immersive experience of the real world? To discover these wonders, we have to explore the essentials and inner workings of visual communication. The work described in this dissertation develops the techniques of visual communication, including rendering, compression and visual search. We leave conventional pixel-by-pixel image processing behind to explore the opportunities of sparse feature-based image processing. Thus, in this dissertation, a new objective is proposed: to seek a methodology to improve the performance of visual communication by using geometric information carried by the image features. To motivate it, we investigate two systems of visual communication, namely free viewpoint coding and rendering, and mobile visual search. The first system is based on the delivery and presentation of multi-view videos. We demonstrate how to use the image features for efficient video coding and high quality virtual view rendering. To further boost the importance of image features, we discuss the second system, the mobile visual search system, which is based only on the transmission of image features. We illustrate how to achieve reliable identification by using sparse image features. The system of free-viewpoint coding and rendering encodes and delivers the video content to the end-user and allows interactively choosing and rendering a virtual viewpoint in real time. We propose a content-adaptive coding and rendering method to separate the dynamic and static video content items, and apply content-adaptive coding and rendering to each of them. 
The content-adaptive scheme comprises the extraction of static and dynamic content, the video coding engines, and a synthesis unit for virtual view rendering. We address the problem of using the image features for rate-distortion optimal video coding and high quality geometry model-based rendering. For the video coding engine, we study a feature-based motion compensation scheme and an optimal rate allocation model. For the component of free viewpoint rendering, we study a hypothesis-driven free viewpoint rendering approach based on 3D model hypotheses. For the second system of mobile visual search, we propose a geometry-based search, namely mobile 3D visual search. The end-to-end scheme uses a client-server model for visual communication. The client extracts and encodes the features of the query. The server holds the feature database derived from the multi-view imagery, as well as the feature matching engine. We address the problem of rate-constrained identification by using multi-view image features. For the client, we propose a rate-constrained feature coding method to efficiently encode the query features. For the server side, we propose a double hierarchy to structure the database for indexing the database features. Moreover, we develop an algorithm that accomplishes 3D geometry-based matching and ranking by utilizing 3D geometric information and 2D texture information jointly.
APA, Harvard, Vancouver, ISO, and other styles
24

Mugtussids, Iossif B. "Flight Data Processing Techniques to Identify Unusual Events." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/28095.

Full text
Abstract:
Modern aircraft are capable of recording hundreds of parameters during flight. This fact not only facilitates the investigation of an accident or a serious incident, but also provides the opportunity to use the recorded data to predict future aircraft behavior. It is believed that, by analyzing the recorded data, one can identify precursors to hazardous behavior and develop procedures to mitigate the problems before they actually occur. Because of the enormous amount of data collected during each flight, it becomes necessary to identify the segments of data that contain useful information. The objective is to distinguish between typical data points, which are present in the majority of flights, and unusual data points, which can only be found in a few flights. The distinction between typical and unusual data points is achieved by using classification procedures. In this dissertation, the application of classification procedures to flight data is investigated. It is proposed to use a Bayesian classifier that tries to identify the flight from which a particular data point came. If the flight from which the data point came is identified with a high level of confidence, then the conclusion can be made that the data point is unusual within the investigated flights. The Bayesian classifier uses the overall and conditional probability density functions together with a priori probabilities to make a decision. Estimating probability density functions is a difficult task in multiple dimensions. Because many of the recorded signals (features) are redundant or highly correlated or are very similar in every flight, feature selection techniques are applied to identify those signals that contain the most discriminatory power. In the limited amount of data available to this research, twenty-five features were identified as the set exhibiting the best discriminatory power. Additionally, the number of signals is reduced by applying feature generation techniques to similar signals. 
To make the approach applicable in practice, when many flights are considered, a very efficient and fast sequential data clustering algorithm is proposed. The order in which the samples are presented to the algorithm is fixed according to the probability density function value. Accuracy and reduction level are controlled using two scalar parameters: a distance threshold value and a maximum compactness factor.
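The unusual-point idea above can be sketched in miniature: fit a density per flight, compute the posterior over flights for a sample, and flag samples whose posterior mass concentrates on a single flight. This is a hedged, one-dimensional toy; the Gaussian densities, seed, and all numbers are illustrative stand-ins for the thesis's multivariate estimates.

```python
import numpy as np

def fit_flight_models(flights):
    """Fit a 1-D Gaussian to each flight's samples (a toy stand-in for the
    thesis's multivariate density estimates)."""
    return [(np.mean(f), np.std(f) + 1e-9) for f in flights]

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def unusualness(x, models, priors):
    """Posterior probability of the most likely flight for sample x: a value
    near 1 means x is explained by a single flight only, i.e. it is unusual
    within the set of investigated flights."""
    likelihoods = np.array([gaussian_pdf(x, mu, s) for mu, s in models])
    posterior = likelihoods * priors
    posterior = posterior / posterior.sum()
    return float(posterior.max())

# two ordinary flights plus one with an outlying regime
rng = np.random.default_rng(0)
flights = [rng.normal(0.0, 1, 500), rng.normal(0.1, 1, 500), rng.normal(8.0, 1, 500)]
models = fit_flight_models(flights)
priors = np.ones(3) / 3

print(unusualness(0.05, models, priors))  # typical: shared by several flights
print(unusualness(8.0, models, priors))   # unusual: pinned to one flight
```

A typical value yields a posterior near 1/2 (split between the two similar flights), while the outlying value is assigned to its single flight with near-certainty.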
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
25

Ljumić, Elvis. "Image feature extraction using fuzzy morphology." Diss., Online access via UMI:, 2007.

Find full text
Abstract:
Thesis (Ph. D.)--State University of New York at Binghamton, Department of Systems Science and Industrial Engineering, Thomas J. Watson School of Engineering and Applied Science, 2007.
Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
26

Robins, Michael John. "Local energy feature tracing in digital images and volumes." University of Western Australia. Dept. of Computer Science, 1999. http://theses.library.uwa.edu.au/adt-WU2003.0010.

Full text
Abstract:
Digital image feature detectors often comprise two stages of processing: an initial filtering phase and a secondary search stage. The initial filtering is designed to accentuate specific feature characteristics or suppress spurious components of the image signal. The second stage of processing involves searching the results for various criteria that will identify the locations of the image features. The local energy feature detection scheme combines the squares of the signal convolved with a pair of filters that are in quadrature with each other. The resulting local energy value is proportional to phase congruency, which is a measure of the local alignment of the phases of the signal's constituent Fourier components. Points of local maximum phase alignment have been shown to correspond to visual features in the image. The local energy calculation accentuates the location of many types of image features, such as lines, edges and ramps, and estimates of local energy can be calculated in multidimensional image data by rotating the quadrature filters to several orientations. The second-stage search criterion for local energy is to locate the points that lie along the ridges in the energy map that connect the points of local maxima. In three-dimensional data the relatively higher energy values will form films between connecting filaments and tendrils. This thesis examines the use of recursive spatial domain filtering to calculate local energy. A quadrature pair of filters, based on the first derivative of the Gaussian function and its Hilbert transform, is rotated in space using a kernel of basis functions to obtain various orientations of the filters. The kernel is designed to be separable and each term is implemented using a recursive digital filter. Once local energy has been calculated, the ridges and surfaces of high energy values are determined using a flooding technique. 
Starting from the points of local minima we perform an ablative skeletonisation of the higher energy values. The topology of the original set is maintained by examining and preserving the topology of the neighbourhood of each point when considering it for removal. This combination of homotopic skeletonisation and sequential processing of each level of energy values, results in a well located, thinned and connected tracing of the ridges. The thesis contains examples of the local energy calculation using steerable recursive filters and the ridge tracing algorithm applied to two and three dimensional images. Details of the algorithms are contained in the text and details of their computer implementation are provided in the appendices.
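The filtering stage can be illustrated in one dimension: an odd derivative-of-Gaussian filter, its even Hilbert-transform partner, and the sum of squared responses as local energy, which peaks at a step edge. The filter length, sigma, and the FFT-based Hilbert construction below are illustrative choices, not the thesis's recursive steerable implementation.

```python
import numpy as np

def gaussian_derivative(n=21, sigma=3.0):
    """Odd-symmetric filter: first derivative of a Gaussian."""
    x = np.arange(n) - n // 2
    return -x * np.exp(-x**2 / (2 * sigma**2)) / sigma**2

def hilbert_partner(f):
    """Even-symmetric quadrature partner of f, built in the frequency domain
    by multiplying the spectrum by -i*sign(frequency)."""
    n = len(f)
    F = np.fft.fft(f)
    m = np.zeros(n, dtype=complex)
    m[1:(n + 1) // 2] = -1j   # positive frequencies
    m[(n + 1) // 2:] = 1j     # negative frequencies
    return np.real(np.fft.ifft(F * m))

def local_energy(signal, f_odd, f_even):
    """Local energy: sum of squared responses of the quadrature filter pair."""
    e1 = np.convolve(signal, f_odd, mode='same')
    e2 = np.convolve(signal, f_even, mode='same')
    return e1**2 + e2**2

step = np.zeros(200)
step[100:] = 1.0                       # a step edge at sample 100
f_odd = gaussian_derivative()
f_even = hilbert_partner(f_odd)
energy = local_energy(step, f_odd, f_even)
print(np.argmax(energy[15:185]) + 15)  # the energy peak lies at the edge
```

Squaring and summing the quadrature responses is what makes the detector respond to edges, lines and ramps alike, rather than to one polarity of feature.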
APA, Harvard, Vancouver, ISO, and other styles
27

Pahalawatta, Kapila. "Plant species biometric using feature hierarchies." Thesis, University of Canterbury. Computer Science and Software Engineering, 2008. http://hdl.handle.net/10092/1235.

Full text
Abstract:
Biometric identification is a pattern recognition based classification system that recognizes an individual by determining its authenticity using a specific physiological or behavioural characteristic (biometric). In contrast to the many commercially available biometric systems for human recognition on the market today, there is no such biometric system for plant recognition, even though plants have many characteristics that are uniquely identifiable at a species level. The goal of the study was to develop a plant species biometric using both global and local features of leaf images. In recent years, various approaches have been proposed for characterizing leaf images. Most of them were based on a global representation of the leaf periphery with Fourier descriptors, polygonal approximations and the centroid-contour distance curve. Global representation of leaf shapes does not provide enough information to characterise species uniquely since different species of plants have similar leaf shapes. Others were based on leaf vein extraction using intensity histograms and trained artificial neural network classifiers. Leaf venation extraction is not always possible since it is not always visible in photographic images. This study proposed a novel approach to leaf identification based on feature hierarchies. First, leaves were sorted by their overall shape using shape signatures. Then this sorted list was pruned based on global and local shape descriptors. The consequent biometric was tested using a corpus of 200 leaves from 40 common New Zealand broadleaf plant species which encompass all categories of local information of leaf peripheries. Two novel shape signatures (full-width to length ratio distribution and half-width to length ratio distribution) were proposed and biometric vectors were constructed using both novel shape signatures, complex-coordinates and centroid-distance for comparison. 
Retrievals were compared and the biometric vector based on the full-width to length ratio distribution was found to be the best classifier. Three types of local information of the leaf periphery (leaf margin coarseness, stem length to blade length ratio and leaf tip curvature) and the global shape descriptor, leaf compactness, were used to prune the list further. The proposed biometric was able to successfully identify the correct species for 37 test images (out of 40). The proposed biometric identified all the test images (100%) correctly if two species were returned, compared to the low recall rates of Wang et al. (2003) (30%, if 10 images were returned) and Ye et al. (2004) (71.4%, if the top 5 images were returned). The biometric can be strengthened by adding reference images of new species to the database, or by adding more reference images of existing species when the reference images are not enough to cover the leaf shapes.
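A width-to-length ratio signature of the kind proposed can be approximated on a binary leaf mask as follows. This is a simplified sketch: the mask is assumed pre-aligned with the leaf's main axis, and the bin count and the ellipse "leaves" are arbitrary illustrative choices.

```python
import numpy as np

def width_signature(mask, bins=32):
    """Full-width to length ratio distribution of a binary leaf mask
    (rows assumed aligned with the leaf's main axis)."""
    rows = np.where(mask.any(axis=1))[0]
    length = rows[-1] - rows[0] + 1
    widths = mask[rows[0]:rows[-1] + 1].sum(axis=1) / length
    # resample to a fixed number of bins so leaves of any size are comparable
    idx = np.linspace(0, len(widths) - 1, bins).astype(int)
    return widths[idx]

def signature_distance(a, b):
    return float(np.linalg.norm(a - b))

# two hypothetical "leaves": a broad ellipse and a narrower one
yy, xx = np.mgrid[0:100, 0:100]
leaf_a = ((yy - 50) / 45.0) ** 2 + ((xx - 50) / 30.0) ** 2 <= 1
leaf_b = ((yy - 50) / 45.0) ** 2 + ((xx - 50) / 15.0) ** 2 <= 1

sig_a = width_signature(leaf_a)
sig_b = width_signature(leaf_b)
print(signature_distance(sig_a, sig_a))  # 0.0: identical shapes
print(signature_distance(sig_a, sig_b))  # larger: different aspect ratios
```

Because the widths are normalised by the leaf length, the signature is scale-invariant, which is what lets a single distance threshold sort leaves of different sizes.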
APA, Harvard, Vancouver, ISO, and other styles
28

SAIBENE, AURORA. "A Flexible Pipeline for Electroencephalographic Signal Processing and Management." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2022. http://hdl.handle.net/10281/360550.

Full text
Abstract:
L'elettroencefalogramma (EEG) fornisce registrazioni non-invasive delle attività e delle funzioni cerebrali sotto forma di serie temporali, a loro volta caratterizzate da una risoluzione temporale e spaziale (dipendente dai sensori), e da bande di frequenza specifiche per alcuni tipi di condizioni cerebrali. Tuttavia, i segnali EEG risultanti sono non-stazionari, cambiano nel tempo e sono eterogenei, essendo prodotti da differenti soggetti e venendo influenzati da specifici paradigmi sperimentali, condizioni ambientali e dispositivi. Inoltre, questi segnali sono facilmente soggetti a rumore e possono venire acquisiti per un tempo limitato, fornendo un numero ristretto di condizioni cerebrali sulle quali poter lavorare. Pertanto, in questa tesi viene proposta una pipeline flessibile per l'elaborazione e la gestione dei segnali EEG, affinché possano essere più facilmente comprensibili e quindi più facilmente sfruttabili in diversi tipi di applicazioni. Inoltre, la pipeline flessibile proposta è divisa in quattro moduli riguardanti la pre-elaborazione del segnale, la sua normalizzazione, l'estrazione e la gestione di feature e la classificazione dei dati EEG. La pre-elaborazione del segnale EEG sfrutta la multivariate empirical mode decomposition (MEMD) per scomporre il segnale nelle sue modalità oscillatorie, chiamate intrinsic mode function (IMF), ed usa un criterio basato sull'entropia per selezionare le IMF più rilevanti. Queste IMF dovrebbero mantenere le naturali dinamiche cerebrali e rimuovere componenti non-informative. Le risultanti IMF rilevanti sono in seguito sfruttate per sostituire il segnale o aumentare la numerosità dei dati. Nonostante MEMD sia adatto alla non-stazionarietà del segnale EEG, ulteriori passi computazionali dovrebbero essere svolti per mitigare la caratteristica eterogeneità di questi dati. 
Pertanto, un passo di normalizzazione viene introdotto per ottenere dati comparabili per uno stesso soggetto o più soggetti e tra differenti condizioni sperimentali, quindi permettendo di estrarre feature nel dominio temporale, frequenziale e tempo-frequenziale per meglio caratterizzare il segnale EEG. Nonostante l'uso di un insieme di feature differenti fornisca la possibilità di trovare nuovi pattern nei dati, può altresì presentare alcune ridondanze ed incrementare il rischio di incorrere nella curse of dimensionality o nell'overfitting durante la classificazione. Pertanto, viene proposta una selezione delle feature basata sugli algoritmi evolutivi con un approccio completamente guidato dai dati. Inoltre, viene proposto l'utilizzo di modelli di apprendimento non o supervisionati e di nuovi criteri di stop per un algoritmo genetico modificato. Oltretutto, l'uso di diversi modelli di apprendimento automatico può influenzare il riconoscimento di differenti condizioni cerebrali. L'introduzione di modelli di deep learning potrebbe fornire una strategia in grado di apprendere informazioni direttamente dai dati disponibili, senza ulteriori elaborazioni. Fornendo una formulazione dell'input appropriata, le informazioni temporali, frequenziali e spaziali caratterizzanti i dati EEG potrebbero essere mantenute, evitando l'introduzione di architetture troppo complesse. Pertanto, l'utilizzo di differenti processi ed approcci di elaborazione potrebbe fornire strategie più generiche o più legate a specifici esperimenti per gestire il segnale EEG, mantenendone le naturali caratteristiche.
The electroencephalogram (EEG) provides the non-invasive recording of brain activities and functions as time series, characterized by a temporal and spatial (sensor-dependent) resolution, and by brain condition-bounded frequency bands. Moreover, it presents some cost-effective device solutions. However, the resulting EEG signals are non-stationary, time-varying, and heterogeneous, being recorded from different subjects and being influenced by specific experimental paradigms, environmental conditions, and devices. Moreover, they are easily affected by noise and they can be recorded for a limited time, thus they provide a restricted number of brain conditions to work with. Therefore, in this thesis a flexible pipeline for signal processing and management is proposed to gain a better understanding of the EEG signals and exploit them for a variety of applications. The proposed flexible pipeline is divided into four modules concerning signal pre-processing, normalization, feature computation and management, and EEG data classification. The EEG signal pre-processing exploits the multivariate empirical mode decomposition (MEMD) to decompose the signal into oscillatory modes, called intrinsic mode functions (IMFs), and uses an entropy criterion to select the most relevant IMFs that should maintain the natural brain dynamics, while discarding uninformative components. The resulting relevant IMFs are then exploited for signal substitution and data augmentation. Even though MEMD is suited to the EEG signal's non-stationarity, further processing steps should be undertaken to mitigate the heterogeneity of these data. Therefore, a normalization step is introduced to obtain comparable data inter- and intra-subject and between different experimental conditions, allowing the extraction of general features in the time, frequency, and time-frequency domain for EEG signal characterization. 
Even though the use of a variety of feature types may reveal new data patterns, it may also introduce redundancies and increase the risk of classification problems such as the curse of dimensionality and overfitting. Therefore, a feature selection based on evolutionary algorithms is proposed as a completely data-driven approach, exploiting both supervised and unsupervised learning models, and suggesting new stopping criteria for a modified genetic algorithm implementation. Moreover, the choice of learning model may affect the discrimination of different brain conditions. The introduction of deep learning models may provide a strategy to learn directly from the available data. With a proper input formulation, the EEG data's time, frequency, and spatial information can be maintained while avoiding overly complex architectures. Therefore, using different processing steps and approaches may provide general or experiment-specific strategies to manage the EEG signal while maintaining its natural characteristics.
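The entropy criterion for keeping informative components can be illustrated without a full MEMD implementation: score each candidate component by the normalised entropy of its power spectrum and keep the low-entropy (structured) ones. The spectral-entropy score and the 0.6 threshold are illustrative assumptions, not the thesis's exact criterion.

```python
import numpy as np

def spectral_entropy(x):
    """Normalised Shannon entropy of the power spectrum: close to 1 for
    broadband noise, close to 0 for a narrow-band (structured) component."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    psd = psd / psd.sum()
    psd = psd[psd > 0]
    if psd.size < 2:
        return 0.0
    return float(-(psd * np.log(psd)).sum() / np.log(psd.size))

def select_components(components, threshold=0.6):
    """Keep low-entropy components; a simple stand-in for the entropy-based
    IMF selection of the pre-processing module (threshold is arbitrary)."""
    return [c for c in components if spectral_entropy(c) < threshold]

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024, endpoint=False)
oscillation = np.sin(2 * np.pi * 10 * t)   # structured, IMF-like component
noise = rng.normal(0, 1, t.size)           # broadband, uninformative component
kept = select_components([oscillation, noise])
print(len(kept))  # only the structured oscillation survives
```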
APA, Harvard, Vancouver, ISO, and other styles
29

Yin, Li. "Adaptive Background Modeling with Temporal Feature Update for Dynamic Foreground Object Removal." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/5040.

Full text
Abstract:
In the study of computer vision, background modeling is a fundamental and critical task in many conventional applications. This thesis presents an introduction to background modeling and various computer vision techniques for estimating the background model to achieve the goal of removing dynamic objects in a video sequence. The process of estimating the background model with temporal changes in the absence of foreground moving objects is called adaptive background modeling. In this thesis, three adaptive background modeling approaches are presented for the purpose of developing "teacher removal" algorithms. First, an adaptive background modeling algorithm based on linear adaptive prediction is presented. Second, an adaptive background modeling algorithm based on statistical dispersion is presented. Third, a novel adaptive background modeling algorithm based on low rank and sparsity constraints is presented. The design and implementation of these algorithms are discussed in detail, and the experimental results produced by each algorithm are presented. Lastly, the results of this research are generalized and potential future research is discussed.
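None of the three thesis algorithms is reproduced here, but the common baseline they improve upon, a running-average background with foreground masking, can be sketched as follows (the blend rate, threshold and synthetic scene are illustrative):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05, thresh=30):
    """One step of a simple adaptive background model: pixels close to the
    current background are blended in; pixels flagged as foreground are
    frozen so the moving object never contaminates the model."""
    fg = np.abs(frame.astype(float) - bg) > thresh
    bg = np.where(fg, bg, (1 - alpha) * bg + alpha * frame)
    return bg, fg

rng = np.random.default_rng(2)
bg = np.full((40, 40), 100.0)
for _ in range(50):
    frame = 100.0 + rng.normal(0, 2, (40, 40))   # static scene + sensor noise
    frame[10:20, 10:20] = 220.0                  # dynamic object to remove
    bg, fg = update_background(bg, frame)

print(abs(bg[15, 15] - 100) < 1)  # object never enters the background: True
print(bool(fg[15, 15]))           # object pixel still flagged: True
```

The low-rank-plus-sparse formulation mentioned in the abstract generalises this idea: the background is the low-rank part of the frame stack and the moving objects are the sparse residual.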
APA, Harvard, Vancouver, ISO, and other styles
30

Freitas, Paul Michael. "Feature-oriented specification of hardware bus protocols." Worcester, Mass. : Worcester Polytechnic Institute, 2008. https://www.wpi.edu/ETD-db/ETD-catalog/view_etd?URN=etd-042908-140922.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Price, Stanton Robert. "Advanced feature learning and representation in image processing for anomaly detection." Thesis, Mississippi State University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1586997.

Full text
Abstract:

Techniques for improving the information quality present in imagery for feature extraction are proposed in this thesis. Specifically, two methods are presented: soft feature extraction and improved Evolution-COnstructed (iECO) features. Soft features comprise the extraction of image-space knowledge by performing a per-pixel weighting based on an importance map. Through soft features, one is able to extract features relevant to identifying a given object versus its background. Next, the iECO features framework is presented. The iECO features framework uses evolutionary computation algorithms to learn an optimal series of image transforms, specific to a given feature descriptor, to best extract discriminative information. That is, a composition of image transforms are learned from training data to present a given feature descriptor with the best opportunity to extract its information for the application at hand. The proposed techniques are applied to an automatic explosive hazard detection application and significant results are achieved.
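Soft feature extraction, i.e. per-pixel weighting by an importance map, can be illustrated with a weighted histogram. The descriptor, the image and the map below are toy stand-ins, not the thesis's configuration.

```python
import numpy as np

def soft_histogram(image, importance, bins=8):
    """Importance-weighted intensity histogram: every pixel contributes in
    proportion to its weight in the importance map, so background pixels
    with zero weight vanish from the feature."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256), weights=importance)
    return hist / hist.sum()

image = np.zeros((20, 20))
image[8:12, 8:12] = 200.0          # bright target on a dark background
importance = np.zeros((20, 20))
importance[8:12, 8:12] = 1.0       # importance map highlighting the target

hard = np.histogram(image, bins=8, range=(0, 256))[0] / image.size
soft = soft_histogram(image, importance)
print(hard[0], soft[0])  # background bin dominates hard, vanishes in soft
```

This is the sense in which soft features separate the object from its background before any descriptor is computed.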

APA, Harvard, Vancouver, ISO, and other styles
32

Gale, Alan Ian. "Signal processing and modelling of coritcal evoked potentials for feature extraction." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/42593.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Yuanxun. "Radar signature prediction and feature extraction using advanced signal processing techniques /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Sreevalson, Nair Jaya. "Modular processing of two-dimensional significance map for efficient feature extraction." Thesis, Mississippi State : Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-07012002-111746.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Befus, Chad R., and University of Lethbridge Faculty of Arts and Science. "Design and evaluation of dynamic feature-based segmentation on music." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Mathematics and Computer Science, c2010, 2010. http://hdl.handle.net/10133/2531.

Full text
Abstract:
Segmentation is an indispensable step in the field of Music Information Retrieval (MIR). Segmentation refers to the splitting of a music piece into significant sections. Classically there has been a great deal of attention focused on various issues of segmentation, such as: perceptual segmentation vs. computational segmentation, segmentation evaluations, segmentation algorithms, etc. In this thesis, we conduct a series of perceptual experiments which challenge several of the traditional assumptions with respect to segmentation. Identifying some deficiencies in the current segmentation evaluation methods, we present a novel standardized evaluation approach which considers segmentation as a supportive step towards feature extraction in the MIR process. Furthermore, we propose a simple but effective segmentation algorithm and evaluate it utilizing our evaluation approach.
viii, 94 leaves : ill. ; 29 cm
APA, Harvard, Vancouver, ISO, and other styles
36

Lee, Kai-wah. "Mesh denoising and feature extraction from point cloud data." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B42664330.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Cheng, Xin. "Feature-based motion estimation and motion segmentation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0016/MQ55493.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Hua, Jianping. "Topics in genomic image processing." Texas A&M University, 2004. http://hdl.handle.net/1969.1/3244.

Full text
Abstract:
The image processing methodologies that have been actively studied and developed now play a very significant role in flourishing biotechnology research. This work studies, develops and implements several image processing techniques for M-FISH and cDNA microarray images. In particular, we focus on three important areas: M-FISH image compression, microarray image processing and expression-based classification. Two schemes, embedded M-FISH image coding (EMIC) and Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis, have been introduced for M-FISH image compression and microarray image processing, respectively. In the expression-based classification area, we investigate the relationship between the optimal number of features and sample size, either analytically or through simulation, for various classifiers.
APA, Harvard, Vancouver, ISO, and other styles
39

Marples, David John. "Detection and resolution of feature interactions in telecommunications systems during runtime." Thesis, University of Strathclyde, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366893.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Qin, Jianzhao, and 覃剑钊. "Scene categorization based on multiple-feature reinforced contextual visual words." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B46969779.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Wagener, Dirk Wolfram. "Feature tracking and pattern registration." Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53424.

Full text
Abstract:
Thesis (MScEng) -- Stellenbosch University, 2003.
ENGLISH ABSTRACT: The video-based computer vision patient positioning system that is being developed at iThemba Laboratories relies on the accurate, robust location, identification and tracking of a number of markers on the patient's mask. The precision requirements are demanding - a small error in the location of the markers leads to an inaccurate positioning of the patient, which could have fatal consequences. In this thesis we discuss the construction of suitable markers, their identification with subpixel accuracy, as well as a robust tracking algorithm. The algorithms were implemented and tested on real data. We also note and give examples of other applications, most notably 2D human face tracking and the 3D tracking of a moving person.
AFRIKAANSE OPSOMMING: Die video-gebaseerde rekenaarvisie pasiënt posisionerings stelsel wat by iThemba Laboratoriums ontwikkel word, maak staat op die akkurate opsporing, identifikasie en volging van 'n stel merkers op die pasiënt se masker. Die akkuraatheids voorwaardes is besonders streng - selfs 'n klein fout in die lokasie van die merkers sal lei tot die onakkurate posisionering van die pasiënt, wat dodelike gevolge kan hê. In hierdie tesis bespreek ons die konstruksie van geskikte merkers, die identifikasie van die merkers tot op subbeeldingselement vlak en ook die akkurate volging van die merkers. Die algoritmes is op regte data getoets. Ander toepassings soos 2D en 3D menslike gesigs-volging word ook kortliks bespreek.
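A common way to obtain the sub-pixel marker locations the abstract refers to is an intensity-weighted centroid; the sketch below is a generic illustration on a synthetic blob, not necessarily the estimator used in the thesis.

```python
import numpy as np

def subpixel_centroid(patch):
    """Intensity-weighted centroid of a marker blob: the location estimate is
    a weighted mean of pixel coordinates, hence sub-pixel accurate."""
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = patch.sum()
    return (ys * patch).sum() / total, (xs * patch).sum() / total

# synthetic Gaussian marker centred at row 10.3, column 12.7
yy, xx = np.mgrid[0:21, 0:25].astype(float)
patch = np.exp(-((yy - 10.3) ** 2 + (xx - 12.7) ** 2) / 4.0)
cy, cx = subpixel_centroid(patch)
print(round(cy, 1), round(cx, 1))  # recovers 10.3 12.7
```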
APA, Harvard, Vancouver, ISO, and other styles
42

Smith, Paul Devon. "An Analog Architecture for Auditory Feature Extraction and Recognition." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4839.

Full text
Abstract:
Speech recognition systems have been implemented using a wide range of signal processing techniques, including neuromorphic/biologically inspired and digital signal processing techniques. Neuromorphic/biologically inspired techniques, such as silicon cochlea models, are based on fairly simple yet highly parallel computation and/or computational units, while the area of digital signal processing (DSP) is based on block transforms and statistical or error-minimization methods. Essential to each of these techniques is the first stage of extracting meaningful information from the speech signal, which is known as feature extraction. This can be done using biologically inspired techniques such as silicon cochlea models, or techniques beginning with a model of speech production and then trying to separate the vocal tract response from an excitation signal. Even within each of these approaches, there are multiple techniques, including cepstrum filtering, which sits under the class of homomorphic signal processing, or techniques using FFT-based predictive approaches. The underlying reality is that there are multiple techniques that have attacked the problem in speech recognition, but the problem is still far from being solved. The techniques that have been shown to have the best recognition rates involve cepstrum coefficients for the feature extraction and hidden Markov models to perform the pattern recognition. The presented research develops an analog system based on programmable analog array technology that can perform the initial stages of auditory feature extraction and recognition before passing information to a digital signal processor. The goal is a low-power system that can be fully contained on one or more integrated circuit chips. Results show that it is possible to realize advanced filtering techniques such as cepstrum filtering and vector quantization in analog circuitry. 
Prior to this work, previous applications of analog signal processing have focused on vision, cochlea models, anti-aliasing filters and other single component uses. Furthermore, classic designs have looked heavily at utilizing op-amps as a basic core building block for these designs. This research also shows a novel design for a Hidden Markov Model (HMM) decoder utilizing circuits that take advantage of the inherent properties of subthreshold transistors and floating-gate technology to create low-power computational blocks.
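The thesis realises cepstrum filtering in analog circuitry, but the underlying operation is easy to sketch digitally: the real cepstrum turns periodic spectral structure (an echo, or the pitch excitation) into a peak at the corresponding quefrency. The echo setup below is purely illustrative.

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log-magnitude spectrum. Low
    quefrencies carry the spectral envelope; periodic spectral ripple
    becomes a peak at the corresponding quefrency."""
    return np.real(np.fft.ifft(np.log(np.abs(np.fft.fft(x)) + 1e-12)))

rng = np.random.default_rng(4)
s = rng.normal(size=4096)
delay = 100
x = s + 0.6 * np.roll(s, delay)        # broadband signal plus an echo
ceps = real_cepstrum(x)
peak = 20 + np.argmax(ceps[20:2000])   # skip the low-quefrency envelope part
print(peak)  # the 100-sample echo delay appears as the cepstral peak
```

Truncating the cepstrum to its low-quefrency coefficients is what yields the compact feature vectors that the recogniser (here, an HMM) consumes.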
APA, Harvard, Vancouver, ISO, and other styles
43

Sandrock, Trudie. "Multi-label feature selection with application to musical instrument recognition." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019/11071.

Full text
Abstract:
Thesis (PhD)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: An area of data mining and statistics that is currently receiving considerable attention is the field of multi-label learning. Problems in this field are concerned with scenarios where each data case can be associated with a set of labels instead of only one. In this thesis, we review the field of multi-label learning and discuss the lack of suitable benchmark data available for evaluating multi-label algorithms. We propose a technique for simulating multi-label data, which allows good control over different data characteristics and which could be useful for conducting comparative studies in the multi-label field. We also discuss the explosion in data in recent years, and highlight the need for some form of dimension reduction in order to alleviate some of the challenges presented by working with large datasets. Feature (or variable) selection is one way of achieving dimension reduction, and after a brief discussion of different feature selection techniques, we propose a new technique for feature selection in a multi-label context, based on the concept of independent probes. This technique is empirically evaluated by using simulated multi-label data and it is shown to achieve classification accuracy with a reduced set of features similar to that achieved with a full set of features. The proposed technique for feature selection is then also applied to the field of music information retrieval (MIR), specifically the problem of musical instrument recognition. An overview of the field of MIR is given, with particular emphasis on the instrument recognition problem. The particular goal of (polyphonic) musical instrument recognition is to automatically identify the instruments playing simultaneously in an audio clip, which is not a simple task. We specifically consider the case of duets – in other words, where two instruments are playing simultaneously – and approach the problem as a multi-label classification one. 
In our empirical study, we illustrate the complexity of musical instrument data and again show that our proposed feature selection technique is effective in identifying relevant features and thereby reducing the complexity of the dataset without negatively impacting on performance.
AFRIKAANSE OPSOMMING: ‘n Area van dataontginning en statistiek wat tans baie aandag ontvang, is die veld van multi-etiket leerteorie. Probleme in hierdie veld beskou scenarios waar elke datageval met ‘n stel etikette geassosieer kan word, instede van slegs een. In hierdie skripsie gee ons ‘n oorsig oor die veld van multi-etiket leerteorie en bespreek die gebrek aan geskikte standaard datastelle beskikbaar vir die evaluering van multi-etiket algoritmes. Ons stel ‘n tegniek vir die simulasie van multi-etiket data voor, wat goeie kontrole oor verskillende data eienskappe bied en wat nuttig kan wees om vergelykende studies in die multi-etiket veld uit te voer. Ons bespreek ook die onlangse ontploffing in data, en beklemtoon die behoefte aan ‘n vorm van dimensie reduksie om sommige van die uitdagings wat deur sulke groot datastelle gestel word die hoof te bied. Veranderlike seleksie is een manier van dimensie reduksie, en na ‘n vlugtige bespreking van verskillende veranderlike seleksie tegnieke, stel ons ‘n nuwe tegniek vir veranderlike seleksie in ‘n multi-etiket konteks voor, gebaseer op die konsep van onafhanklike soek-veranderlikes. Hierdie tegniek word empiries ge-evalueer deur die gebruik van gesimuleerde multi-etiket data en daar word gewys dat dieselfde klassifikasie akkuraatheid behaal kan word met ‘n verminderde stel veranderlikes as met die volle stel veranderlikes. Die voorgestelde tegniek vir veranderlike seleksie word ook toegepas in die veld van musiek dataontginning, spesifiek die probleem van die herkenning van musiekinstrumente. ‘n Oorsig van die musiek dataontginning veld word gegee, met spesifieke klem op die herkenning van musiekinstrumente. Die spesifieke doel van (polifoniese) musiekinstrument-herkenning is om instrumente te identifiseer wat saam in ‘n oudiosnit speel. Ons oorweeg spesifiek die geval van duette – met ander woorde, waar twee instrumente saam speel – en hanteer die probleem as ‘n multi-etiket klassifikasie een. 
In ons empiriese studie illustreer ons die kompleksiteit van musiekinstrumentdata en wys weereens dat ons voorgestelde veranderlike seleksie tegniek effektief daarin slaag om relevante veranderlikes te identifiseer en sodoende die kompleksiteit van die datastel te verminder sonder ‘n negatiewe impak op klassifikasie akkuraatheid.
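The independent-probe idea described in the abstract can be sketched as follows: append random probe features, score everything with a cheap relevance measure, and keep only the real features that beat the best probe. Both the correlation score and the probe count below are illustrative simplifications of the thesis's technique.

```python
import numpy as np

def abs_corr(features, y):
    """Absolute Pearson correlation of each column of `features` with y."""
    fz = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-12)
    yz = (y - y.mean()) / (y.std() + 1e-12)
    return np.abs(fz.T @ yz) / len(y)

def probe_select(X, y, n_probes=200, rng=None):
    """Independent-probe selection (sketch): keep the features whose
    relevance score beats that of every random probe."""
    rng = rng if rng is not None else np.random.default_rng(0)
    probes = rng.normal(size=(X.shape[0], n_probes))
    threshold = abs_corr(probes, y).max()
    return np.where(abs_corr(X, y) > threshold)[0]

rng = np.random.default_rng(3)
n = 400
y = rng.integers(0, 2, n).astype(float)
informative = y + 0.5 * rng.normal(size=n)     # truly label-related feature
junk = rng.normal(size=(n, 20))                # irrelevant features
X = np.column_stack([informative, junk])
print(probe_select(X, y))  # index 0 (the informative feature) is kept
```

Because the probes are irrelevant by construction, the strongest probe score serves as a data-driven significance threshold, with no tuning parameter to choose.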
APA, Harvard, Vancouver, ISO, and other styles
44

Gurbuz, Ali Cafer. "Feature detection algorithms in computed images." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24718.

Full text
Abstract:
Thesis (Ph.D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: McClellan, James H.; Committee Member: Romberg, Justin K.; Committee Member: Scott, Waymond R. Jr.; Committee Member: Vela, Patricio A.; Committee Member: Vidakovic, Brani
APA, Harvard, Vancouver, ISO, and other styles
45

Demirel, Hasan. "Training set analysis for image-based facial feature detection." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264934.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Lee, Kai-wah, and 李啟華. "Mesh denoising and feature extraction from point cloud data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42664330.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Rees, Stephen John. "Feature extraction and object recognition using conditional morphological operators." Thesis, University of South Wales, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265731.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Cunado, David. "Automatic gait recognition via model-based moving feature analysis." Thesis, University of Southampton, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297628.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lorentzon, Matilda. "Feature Extraction for Image Selection Using Machine Learning." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-142095.

Full text
Abstract:
During flights with manned or unmanned aircraft, continuous recording can result in a very high number of images to analyze and evaluate. To simplify image analysis and to minimize data link usage, appropriate images should be suggested for transfer and further analysis. This thesis investigates features used for selection of images worthy of further analysis using machine learning. The selection is done based on the criteria of having good quality, salient content and being unique compared to the other selected images. The investigation is approached by implementing two binary classifications, one regarding content and one regarding quality. The classifications are made using support vector machines. For each of the classifications three feature extraction methods are performed and the results are compared against each other. The feature extraction methods used are histograms of oriented gradients, features from the discrete cosine transform domain and features extracted from a pre-trained convolutional neural network. The images classified as both good and salient are then clustered based on similarity measures retrieved using color coherence vectors. One image from each cluster is retrieved and those are the resulting images from the image selection. The performance of the selection is evaluated using the measures precision, recall and accuracy. The investigation showed that using features extracted from the discrete cosine transform provided the best results for the quality classification. For the content classification, features extracted from a convolutional neural network provided the best results. The similarity retrieval proved to be the weakest part, and the entire system together provides an average accuracy of 83.99%.
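One of the three compared feature extractors, histograms of oriented gradients, can be sketched in a few lines. The cell size, bin count and per-cell normalisation below are illustrative choices, not the thesis's configuration.

```python
import numpy as np

def tiny_hog(image, cell=8, n_bins=9):
    """Minimal histogram-of-oriented-gradients descriptor: per-cell
    magnitude-weighted histograms of unsigned gradient orientation,
    each L2-normalised and concatenated."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180
    h, w = image.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            m = mag[y:y + cell, x:x + cell].ravel()
            a = ang[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-9))
    return np.concatenate(feats)

# vertical and horizontal ramps produce orthogonal descriptors
img_v = np.tile(np.arange(32.0), (32, 1))   # gradient purely along x
img_h = img_v.T                              # gradient purely along y
d1, d2 = tiny_hog(img_v), tiny_hog(img_h)
print(d1.shape, float(d1 @ d2))  # (144,) and a zero dot product
```

Descriptors like this one feed the SVM classifiers directly; the DCT and CNN alternatives compared in the thesis simply replace this extraction step.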
APA, Harvard, Vancouver, ISO, and other styles
50

Shi, Qiquan. "Low rank tensor decomposition for feature extraction and tensor recovery." HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/549.

Full text
Abstract:
Feature extraction and tensor recovery problems are important yet challenging, particularly for multi-dimensional data with missing values and/or noise. Low-rank tensor decomposition approaches are widely used for solving these problems. This thesis focuses on three common tensor decompositions (CP, Tucker and t-SVD) and develops a set of decomposition-based approaches. The proposed methods aim to extract low-dimensional features from complete/incomplete data and recover tensors given partial and/or grossly corrupted observations.

Based on CP decomposition, semi-orthogonal multilinear principal component analysis (SO-MPCA) seeks a tensor-to-vector projection that maximizes the captured variance with the orthogonality constraint imposed in only one mode, and it further integrates the relaxed start strategy (SO-MPCA-RS) to achieve better feature extraction performance. To directly obtain the features from incomplete data, low-rank CP and Tucker decomposition with feature variance maximization (TDVM-CP and TDVM-Tucker) are proposed. TDVM methods explore the relationship among tensor samples via feature variance maximization, while estimating the missing entries via low-rank CP and Tucker approximation, leading to informative features extracted directly from partial observations. TDVM-CP extracts low-dimensional vector features, viewing the weight vectors as features, and TDVM-Tucker learns low-dimensional tensor features, viewing the core tensors as features. TDVM methods can be generalized to other variants based on other tensor decompositions. On the other hand, this thesis solves the missing data problem by introducing low-rank matrix/tensor completion methods, and also contributes to automatic rank estimation. Rank-one matrix decomposition coupled with L1-norm regularization (L1MC) addresses the matrix rank estimation problem. With the correct estimated rank, L1MC refines its model without L1-norm regularization (L1MC-RF) and achieves optimal recovery results given enough observations. In addition, CP-based nuclear norm regularized orthogonal CP decomposition (TREL1) solves the challenging CP- and Tucker-rank estimation problems. The estimated rank can improve the tensor completion accuracy of existing decomposition-based methods. Furthermore, tensor singular value decomposition (t-SVD) combined with tensor nuclear norm (TNN) regularization (ARE_TNN) provides automatic tubal-rank estimation. With the accurate tubal-rank determination, ARE_TNN relaxes its model without the TNN constraint (TC-ARE) and results in optimal tensor completion under mild conditions. In addition, ARE_TNN refines its model by explicitly utilizing its determined tubal-rank a priori and then successfully recovers low-rank tensors based on incomplete and/or grossly corrupted observations (RTC-ARE: robust tensor completion / RTPCA-ARE: robust tensor principal component analysis).

Experiments and evaluations are presented and analyzed using synthetic data and real-world images/videos in machine learning, computer vision, and data mining applications. For feature extraction, the experimental results of face and gait recognition show that SO-MPCA-RS achieves the best overall performance compared with competing algorithms, and its relaxed start strategy is also effective for other CP-based PCA methods. In the applications of face recognition, object/action classification, and face/gait clustering, TDVM methods not only stably yield similar good results under various multi-block missing settings and different parameters in general, but also outperform the competing methods with significant improvements. For matrix/tensor rank estimation and recovery, L1MC-RF efficiently estimates the true rank and exactly recovers the incomplete images/videos under mild conditions, and outperforms the state-of-the-art algorithms on the whole. Furthermore, the empirical evaluations show that TREL1 correctly determines the CP-/Tucker-ranks well, given sufficient observed entries, which consistently improves the recovery performance of existing decomposition-based tensor completion. The t-SVD recovery methods TC-ARE, RTPCA-ARE, and RTC-ARE not only inherit the ability of ARE_TNN to achieve accurate rank estimation, but also achieve good performance in the tasks of (robust) image/video completion, video denoising, and background modeling, outperforming the state-of-the-art methods with significant improvements in all cases tried so far.
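The CP decomposition that several of the proposed methods build on can be sketched with a minimal alternating-least-squares (ALS) fit for a fully observed 3-way tensor (not the thesis's algorithms, which add constraints such as orthogonality and feature variance maximization and handle missing entries; all function names here are chosen for illustration):

```python
import numpy as np

def unfold(T, mode):
    # Mode-m matricization: rows index mode m, remaining axes flattened C-order.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J x R).
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=300, seed=0):
    # Plain ALS for a rank-R CP model T[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r]:
    # cycle through the modes, solving a linear least-squares problem for one
    # factor matrix while the other two are held fixed.
    rng = np.random.default_rng(seed)
    F = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for m in range(3):
            others = [F[i] for i in range(3) if i != m]
            M = khatri_rao(others[0], others[1])        # design matrix for mode m
            F[m] = unfold(T, m) @ np.linalg.pinv(M).T   # least-squares update
    return F

# Recover the factors of a synthetic rank-2 tensor
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((s, 2)) for s in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
Ah, Bh, Ch = cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

On an exactly low-rank tensor like this, ALS drives the relative reconstruction error to near zero; the thesis's rank-estimation methods (L1MC, TREL1, ARE_TNN) address the harder setting where the rank is unknown and the observations are incomplete or corrupted.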
APA, Harvard, Vancouver, ISO, and other styles
