To see the other types of publications on this topic, follow the link: Machine processing.

Dissertations / Theses on the topic 'Machine processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Machine processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Morris, Todd D. (Todd Douglas). "VLSI triangulation processing for machine vision." Dissertation (Electrical Engineering), Carleton University, Ottawa, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bowman, C. C. "High speed image processing for machine vision." Thesis, Cardiff University, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383161.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Park, Yongwon (advisor: Sanjeev Baskiyar). "Dynamic task scheduling onto heterogeneous machines using Support Vector Machine." Auburn, Ala., 2008. http://repo.lib.auburn.edu/EtdRoot/2008/SPRING/Computer_Science_and_Software_Engineering/Thesis/Park_Yong_50.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Stymne, Sara. "Compound Processing for Phrase-Based Statistical Machine Translation." Licentiate thesis, Linköping : Department of Computer and Information Science, Linköpings universitet, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-51416.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Hang. "Distributed Support Vector Machine With Graphics Processing Units." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/991.

Full text
Abstract:
Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. Sequential Minimal Optimization (SMO) is a decomposition-based algorithm which breaks this large QP problem into a series of smallest-possible QP problems. However, it still costs O(n²) computation time. In our SVM implementation, we can train on huge data sets in a distributed manner: the dataset is broken into chunks, the Message Passing Interface (MPI) is used to distribute each chunk to a different machine, and SVM training is carried out within each chunk. In addition, we moved the kernel calculation part of SVM classification to a graphics processing unit (GPU), which has zero scheduling overhead for creating concurrent threads. In this thesis, we take advantage of this GPU architecture to improve the classification performance of SVM.
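As a rough illustration of the chunked-training idea in this abstract, here is a minimal Python sketch that splits a dataset into chunks, trains one SVM per chunk with scikit-learn, and computes an explicit RBF kernel block with NumPy (the part the thesis offloads to a GPU). The MPI distribution, the majority-vote combination, and the data are illustrative assumptions, not the thesis's actual implementation.

```python
# Sketch (assumption): chunk-wise SVM training and an explicit RBF kernel matrix,
# standing in for the MPI-distributed training and GPU kernel evaluation described above.
import numpy as np
from sklearn.svm import SVC

def rbf_kernel_matrix(X, Z, gamma=0.5):
    """Dense RBF kernel K[i, j] = exp(-gamma * ||x_i - z_j||^2).
    This is the part the thesis offloads to a GPU; here it is plain NumPy."""
    sq = (X ** 2).sum(1)[:, None] + (Z ** 2).sum(1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-gamma * sq)

def train_on_chunks(X, y, n_chunks=4):
    """Split the dataset into chunks and train one SVM per chunk
    (each chunk would be sent to a different MPI worker in the distributed setting)."""
    models = []
    for Xc, yc in zip(np.array_split(X, n_chunks), np.array_split(y, n_chunks)):
        models.append(SVC(kernel="rbf", gamma=0.5).fit(Xc, yc))
    return models

def predict_vote(models, X):
    """Majority vote over the per-chunk models (one simple way to combine them)."""
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) > 0.5).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    models = train_on_chunks(X, y)
    print("accuracy:", (predict_vote(models, X) == y).mean())
    print("kernel block shape:", rbf_kernel_matrix(X[:10], X[:5]).shape)
```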
APA, Harvard, Vancouver, ISO, and other styles
6

Alzubi, Omar A. "Designing machine learning ensembles : a game coalition approach." Thesis, Swansea University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.678293.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Grundström, Tobias. "Automated Measurements of Liver Fat Using Machine Learning." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-151286.

Full text
Abstract:
The purpose of the thesis was to investigate the possibility of using machine learning to automate liver fat measurements in fat-water magnetic resonance imaging (MRI). The thesis presents methods for texture-based liver classification and Proton Density Fat Fraction (PDFF) regression using multi-layer perceptrons utilizing 2D and 3D textural image features. The first proposed method was a data classification method with the goal of distinguishing between suitable and unsuitable regions in which to measure PDFF. The second proposed method was a combined classification and regression method, where the classification distinguishes between liver and non-liver tissue. The goal of the regression model was to predict the difference d = PDFF_mean − PDFF_ROI between the manual ground-truth mean and the fat fraction of the active Region of Interest (ROI). Tests were performed on varying sizes of Image Feature Regions (fROI) and combinations of image features for both of the proposed methods. The tests showed that 3D measurements using image features from discrete wavelet transforms produced measurements similar to the manual fat measurements. The first method resulted in lower relative errors, while the second method had a higher method agreement compared to manual measurements.
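A minimal sketch of the regression formulation described above, assuming synthetic stand-in texture features and scikit-learn's MLPRegressor; the feature set, network size, and data are illustrative, not those used in the thesis.

```python
# Sketch (assumption): regressing the offset d = PDFF_mean - PDFF_ROI from texture
# features with a multi-layer perceptron, as in the second method described above.
# The features and data here are synthetic placeholders, not MRI-derived.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
features = rng.normal(size=(500, 16))          # e.g. wavelet texture features per ROI
pdff_roi = rng.uniform(0, 30, size=500)        # fat fraction measured in the active ROI
d_true = features[:, :4].sum(axis=1) * 0.5     # toy ground-truth offset
pdff_mean = pdff_roi + d_true                  # manual ground-truth mean PDFF

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(features, pdff_mean - pdff_roi)      # learn d from the texture features

# A corrected measurement adds the predicted offset back to the ROI value.
pdff_corrected = pdff_roi + model.predict(features)
print("mean abs error:", np.abs(pdff_corrected - pdff_mean).mean())
```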
APA, Harvard, Vancouver, ISO, and other styles
8

Howlett, Robert J. "A distributed neural network for machine vision." Thesis, University of Brighton, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260943.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Muscedere, Roberto. "A multiple in-camera processing system for machine vision." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0023/MQ62258.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lai, Bing-Chang. "Combining generic programming with vector processing for machine vision." Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20060221.095043/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Browne, R. G. "Transputer multi-processing and other topics in machine vision." Thesis, University of Canterbury. Electrical and Electronic Engineering, 1989. http://hdl.handle.net/10092/6532.

Full text
Abstract:
This thesis is concerned with machine vision including its application to the task of detecting surface blemishes and shape defects in kiwifruit at a rate of four fruit per second. Existing machine vision technology is subject to the twin constraints of large data volumes and restricted processing time. Two approaches to this problem are explored: the use of large processing power, and the reduction of the data volume. The provision of large processing power is achieved through the use of networks containing large numbers of micro-processors. The establishment of a Transputer Image Processing System (TIPS) has provided a test facility for the development of algorithms on a multitransputer system. In particular, distributed versions of the convex hull algorithm and of algorithms for image translation and rotation have been developed. The establishment of TIPS required the development of a shell to provide protection against deadlock and to provide a satisfactory environment for software development. The network topology is a significant factor in the system performance, and a particular network, the degree four chordal ring network, is proposed as a suitable network for transputer-based systems. The manner in which image processing operations map onto a multi-processor is also investigated. The alternative approach to a practical machine vision system is to decrease the volume of image data. This can be achieved using pyramidal vision, and the approach explored in this thesis uses a rank-based technique for the formation of each subsequent layer of the pyramid. In particular, a rank of one or two results in darker blemishes being emphasized relative to their surroundings. As a consequence, the volume of image data required to preserve blemish information is very much reduced. Another aspect of machine vision is lighting, and the problem of determining the optimum form of lighting for blemish detection on kiwifruit is explored. A machine vision system based on a combination of pyramidal vision and multi-transputer networks is proposed.
APA, Harvard, Vancouver, ISO, and other styles
12

Saxena, Vishal 1979. "Support vector machine and its applications in information processing." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/29404.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2004.
Includes bibliographical references (leaves 59-61).
With increasing amounts of data being generated by businesses and researchers, there is a need for fast, accurate and robust algorithms for data analysis. Improvements in database technology, computing performance and artificial intelligence have contributed to the development of intelligent data analysis. The primary aim of data mining is to discover patterns in the data that lead to a better understanding of the data-generating process and to useful predictions. One recent technique that has been developed to handle the ever-increasing complexity of hidden patterns is the support vector machine. The support vector machine has been developed as a robust tool for classification and regression in noisy, complex domains. The present thesis explores the area of support vector machines and their applications in data analysis, especially from the point of view of information processing.
APA, Harvard, Vancouver, ISO, and other styles
13

Vieira, Fábio Henrique Antunes [UNESP]. "Image processing through machine learning for wood quality classification." Universidade Estadual Paulista (UNESP), 2016. http://hdl.handle.net/11449/142813.

Full text
Abstract:
The quality classification of wood is prescribed throughout the wood chain industry, particularly in the processing and manufacturing fields. Those organizations have invested energy and time trying to increase the value of basic items, with the purpose of accomplishing better results in line with the market. The objective of this work was to compare a Convolutional Neural Network, a deep learning method, for wood quality classification with other traditional machine learning techniques, namely Support Vector Machine (SVM), Decision Trees (DT), K-Nearest Neighbors (KNN), and Neural Networks (NN) associated with texture descriptors. This was done by assessing the predictive performance of experiments with the different techniques, deep learning and texture descriptors, for processing images of this material. A camera was used to capture the 374 image samples adopted in the experiment, and the database is available for consultation. After acquisition, the images went through several processing stages: pre-processing, segmentation, feature analysis, and classification. Classification was carried out through deep learning, more specifically Convolutional Neural Networks (CNN), and using texture descriptors with Support Vector Machine, Decision Trees, K-Nearest Neighbors, and Neural Networks. Empirical results for the image dataset showed that the texture descriptor approach, regardless of the strategy employed, was very competitive when compared with the CNN for all performed experiments, and even outperformed it for this application.
APA, Harvard, Vancouver, ISO, and other styles
14

Vieira, Fábio Henrique Antunes. "Image processing through machine learning for wood quality classification /." Guaratinguetá, 2016. http://hdl.handle.net/11449/142813.

Full text
Abstract:
Advisor: Manoel Cléber de Sampaio Alves
Committee member: Fábio Minoru Yamaji
Committee member: Ana Lúcia Piedade Sodero Martins Pincelli
Committee member: André Luís Debiaso Rossi
Committee member: Carlos de Oliveira Affonso
Abstract: The quality classification of wood is prescribed throughout the wood chain industry, particularly those from the processing and manufacturing fields. Those organizations have invested energy and time trying to increase value of basic items, with the purpose of accomplishing better results, in agreement to the market. The objective of this work was to compare Convolutional Neural Network, a deep learning method, for wood quality classification to other traditional Machine Learning techniques, namely Support Vector Machine (SVM), Decision Trees (DT), K-Nearest Neighbors (KNN), and Neural Networks (NN) associated with Texture Descriptors. Some of the possible options were to assess the predictive performance through the experiments with different techniques, Deep Learning and Texture Descriptors, for processing images of this material type. A camera was used to capture the 374 image samples adopted on the experiment, and their database is available for consultation. The images had some stages of processing after they have been acquired, as pre-processing, segmentation, feature analysis, and classification. The classification methods occurred through Deep Learning, more specifically Convolutional Neural Networks - CNN, and using Texture Descriptors with Support Vector Machine, Decision Trees, K-nearest Neighbors and Neural Network. Empirical results for the image dataset showed that the approach using texture descriptor method, regardless of the strategy employed, is very competi... (Complete abstract click electronic access below)
Doctorate
APA, Harvard, Vancouver, ISO, and other styles
15

Soriano Pinter, Jaume. "Machine learning-based image processing for human-robot collaboration." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278899.

Full text
Abstract:
Human-robot collaboration, as a new paradigm in manufacturing, has become a hot topic in manufacturing science, production research, intelligent robotics, and computer science. Owing to the boost in deep learning technologies over the last ten years, advanced information processing brings new possibilities to human-robot collaboration. Meanwhile, machine learning-based image processing, such as convolutional neural networks, has become a powerful tool for problems like target recognition and localisation. These technologies show potential for robotic manufacturing and human-robot collaboration. A challenge is to implement well-designed deep neural networks linked to a robotic system that can carry out collaborative work with a human; accuracy and robustness also need to be considered during development. This thesis addresses that challenge: it implements a solution based on machine learning methods for image detection that, using a low-cost imaging setup (a single RGB camera), detects and localises manufacturing components so that an industrial robot can pick them and complete an assembly, assisting the human co-workers while also simplifying the IT tasks needed to run the system.
APA, Harvard, Vancouver, ISO, and other styles
16

Goraine, Habib. "Machine recognition of Arabic text." Thesis, University of Reading, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.278135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

King, Tony Richard. "Parallel image manipulation machine architecture." Thesis, University of Cambridge, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.257001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Wang, Wei. "Automatic Chinese calligraphic font generation with machine learning technology." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3950605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Becirovic, Ema. "On Massive MIMO for Massive Machine-Type Communications." Licentiate thesis, Linköpings universitet, Kommunikationssystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162586.

Full text
Abstract:
To cover all the needs and requirements of mobile networks in the future, the predicted usage of the mobile networks has been split into three use-cases: enhanced mobile broadband, ultra-reliable low-latency communication, and massive machine-type communication. In this thesis we focus on the massive machine-type communication use-case which is intended to facilitate the ever increasing number of smart devices and sensors. In the massive machine-type communication use-case, the main challenges are to accommodate a huge number of devices while keeping the battery lives of the devices long, and allowing them to be placed in far-away locations. However, these devices are not concerned about other features such as latency, high data rate, or mobility. In this thesis we study the application of massive MIMO (multiple-input multiple-output) technology for the massive machine-type communication use-case. Massive MIMO has been on the radar as an enabler for future communication networks in the last decade and is now firmly rooted in both academia and industry. The main idea of massive MIMO is to utilize a base station with a massive number of antennas which gives the ability to spatially direct signals and serve multiple devices in the same time- and frequency resource. More specifically, in this thesis we study A) a scenario where the base station takes advantage of a device's low mobility to improve its channel estimate, B) a random access scheme for massive machine-type communication which can accommodate a huge number of devices, and C) a case study where the benefits of massive MIMO for long range devices are quantified. The results are that the base station can significantly improve the channel estimates for a low mobility user such that it can tolerate lower SNR while still achieving the same rate. Additionally, the properties of massive MIMO greatly helps to detect users in random access scenarios and increase link-budgets compared to single-antenna base stations.
APA, Harvard, Vancouver, ISO, and other styles
20

Walters, Thomas C. "Auditory-based processing of communication sounds." Thesis, University of Cambridge, 2011. https://www.repository.cam.ac.uk/handle/1810/240577.

Full text
Abstract:
This thesis examines the possible benefits of adapting a biologically-inspired model of human auditory processing as part of a machine-hearing system. Features were generated by an auditory model, and used as input to machine learning systems to determine the content of the sound. Features were generated using the auditory image model (AIM) and were used for speech recognition and audio search. AIM comprises processing to simulate the human cochlea, and a 'strobed temporal integration' process which generates a stabilised auditory image (SAI) from the input sound. The communication sounds which are produced by humans, other animals, and many musical instruments take the form of a pulse-resonance signal: pulses excite resonances in the body, and the resonance following each pulse contains information both about the type of object producing the sound and its size. In the case of humans, vocal tract length (VTL) determines the size properties of the resonance. In the speech recognition experiments, an auditory filterbank was combined with a Gaussian fitting procedure to produce features which are invariant to changes in speaker VTL. These features were compared against standard mel-frequency cepstral coefficients (MFCCs) in a size-invariant syllable recognition task. The VTL-invariant representation was found to produce better results than MFCCs when the system was trained on syllables from simulated talkers of one range of VTLs and tested on those from simulated talkers with a different range of VTLs. The image stabilisation process of strobed temporal integration was analysed. Based on the properties of the auditory filterbank being used, theoretical constraints were placed on the properties of the dynamic thresholding function used to perform strobe detection. These constraints were used to specify a simple, yet robust, strobe detection algorithm. The syllable recognition system described above was then extended to produce features from profiles of the SAI and tested with the same syllable database as before. For clean speech, performance of the features was comparable to that of those generated from the filterbank output. However when pink noise was added to the stimuli, performance dropped more slowly as a function of signal-to-noise ratio when using the SAI-based AIM features, than when using either the filterbank-based features or the MFCCs, demonstrating the noise-robustness properties of the SAI representation. The properties of the auditory filterbank in AIM were also analysed. Three models of the cochlea were considered: the static gammatone filterbank, dynamic compressive gammachirp (dcGC) and the pole-zero filter cascade (PZFC). The dcGC and gammatone are standard filterbank models, whereas the PZFC is a filter cascade, which more accurately models signal propagation in the cochlea. However, while the architecture of the filterbanks is different, they have all been successfully fitted to psychophysical masking data from humans. The abilities of the filterbanks to measure pitch strength were assessed, using stimuli which evoke a weak pitch percept in humans, in order to ascertain whether there is any benefit in the use of the more computationally efficient PZFC.Finally, a complete sound effects search system using auditory features was constructed in collaboration with Google research. Features were computed from the SAI by sampling the SAI space with boxes of different scales. Vector quantization (VQ) was used to convert this multi-scale representation to a sparse code. 
The 'passive-aggressive model for image retrieval' (PAMIR) was used to learn the relationships between dictionary words and these auditory codewords. These auditory sparse codes were compared against sparse codes generated from MFCCs, and the best performance was found when using the auditory features.
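As a loose illustration of the final step described above (vector quantization of multi-scale SAI features into a sparse code), here is a minimal Python sketch using a k-means codebook; the feature vectors are random stand-ins, and the box-sampling of the stabilised auditory image is omitted.

```python
# Sketch (assumption): vector quantization of multi-scale feature vectors into a
# sparse bag-of-codewords, loosely mirroring the SAI box-sampling + VQ step above.
# The "frames" here are random vectors, not real stabilised auditory images.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train_frames = rng.normal(size=(2000, 48))   # feature vectors sampled from SAI boxes
codebook = KMeans(n_clusters=64, n_init=10, random_state=0).fit(train_frames)

def sparse_code(frames, codebook):
    """Histogram of codeword assignments for one sound: a sparse, document-like code."""
    counts = np.bincount(codebook.predict(frames), minlength=codebook.n_clusters)
    return counts / max(counts.sum(), 1)

doc = sparse_code(rng.normal(size=(300, 48)), codebook)
print("non-zero codewords:", int((doc > 0).sum()), "of", doc.size)
```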
APA, Harvard, Vancouver, ISO, and other styles
21

Chow, K. W. "Multi-processor architecture for machine vision." Thesis, Cardiff University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358531.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Tan, Tele Seng Chu. "Colour texture analysis in machine vision." Thesis, University of Surrey, 1993. http://epubs.surrey.ac.uk/844403/.

Full text
Abstract:
Texture is an important cue in vision and has been analysed in its own right for the last three decades by researchers in psychophysics as well as computer vision and image processing. The three important early vision roles that texture analysis can play include texture classification, texture description and texture segmentation, all of which are pre-requisites for higher levels of analysis, namely image interpretation and understanding. It is well known that colour can aid the human vision system in the analysis of many visual phenomena like shape, motion and texture. This notion, coupled with the recent advent of fast computing hardware and the widespread availability of good-quality colour cameras, digitizers, and monitors, has created a new pathway for improving the performance of traditional grey level texture analysis schemes by incorporating colour information. In this thesis, the problem of statistical colour texture analysis is addressed. As a pre-requisite to analysing colour textures, a review of the main texture analysis techniques available in the open literature is presented. The local linear transform technique is singled out as the main texture analysis scheme to be used throughout the course of the work. This technique boasts several advantages: compactness of texture measurement, implementation simplicity, and suitability for stochastic or random texture representation. It is found that the structural property of the local linear transform for texture measurement resembles that of the energy measures based on Gabor functions. This has resulted in the possibility of emulating the latter texture extraction process by a set of quadrature filters, as in the case of Gabor filtering. The motivation here is the speed improvement in the computation of the texture representation, as the filtering process can be accelerated by the Fast Fourier Transform. But unfortunately the number of quadrature filters needed to successfully emulate the local linear transform measures has been found to be unexpectedly large, making the FFT implementation very uneconomical to realise. Two colour texture analysis schemes are developed. The first method advocates the dual transformation of the colour input image, which requires the initial transformation of the tristimulus values into several colour co-ordinate systems and then extracting texture attributes from these transformed component images. The performance of these features is measured as the percentage of correct classification. Feature behaviour under illumination intensity variation is investigated. The second approach harnesses the texture and colour information separately in an attempt to eliminate redundant or highly correlated features that are usually associated with the first approach. The colour histogram is used as an image model from which a colour representation scheme of this method can be derived. An efficient and fast way of coding the colour histogram by approximate principal component analysis is developed here. This reduces both the memory requirement for histogram storage and the computation time for colour features by a factor of Ng²/9, where Ng is the total number of grey levels in each channel. It is shown that features derived from the latter approach perform better in experiments involving colour granite classification. These colour features are shown to be more robust to illumination intensity changes than colour texture features computed from the individual transformed channels.
Further to this, the overall dimension of the colour texture feature set of the second approach is considerably lower than that of the first approach. The encouraging results gathered here indicate the usefulness of a hybrid form of multivariate feature measurement for colour texture, using colour and texture attributes separately.
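A minimal sketch of the second approach's colour representation as described in the abstract: a joint colour histogram compressed with (exact, rather than approximate) principal component analysis. The bin count, patch data, and number of components are illustrative assumptions.

```python
# Sketch (assumption): a joint RGB colour histogram compressed by principal component
# analysis, in the spirit of the second approach above. Images here are random arrays.
import numpy as np
from sklearn.decomposition import PCA

BINS = 8  # 8 levels per channel -> 512-dimensional histogram

def colour_histogram(img):
    """Flattened, normalised 3-D colour histogram of an RGB image in [0, 255]."""
    q = (img // (256 // BINS)).reshape(-1, 3)
    idx = q[:, 0] * BINS * BINS + q[:, 1] * BINS + q[:, 2]
    h = np.bincount(idx, minlength=BINS ** 3).astype(float)
    return h / h.sum()

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64, 3))     # stand-in texture patches
H = np.stack([colour_histogram(im) for im in images])   # 40 x 512 histogram matrix

pca = PCA(n_components=10).fit(H)          # compact colour features per patch
features = pca.transform(H)
print("histogram dim:", H.shape[1], "-> feature dim:", features.shape[1])
```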
APA, Harvard, Vancouver, ISO, and other styles
23

Fothergill, John Simon. "The coaching-machine learning interface : indoor rowing." Thesis, University of Cambridge, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Mairal, Julien. "Sparse coding for machine learning, image processing and computer vision." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2010. http://tel.archives-ouvertes.fr/tel-00595312.

Full text
Abstract:
We study in this thesis a particular machine learning approach to representing signals, which consists of modelling data as linear combinations of a few elements from a learned dictionary. It can be viewed as an extension of the classical wavelet framework, whose goal is to design such dictionaries (often orthonormal bases) adapted to natural signals. An important success of dictionary learning methods has been their ability to model natural image patches and the performance of the image denoising algorithms it has yielded. We address several open questions related to this framework: How can the dictionary be optimized efficiently? How can the model be enriched by adding structure to the dictionary? Can current image processing tools based on this method be further improved? How should one learn the dictionary when it is used for a task other than signal reconstruction? How can it be used for solving computer vision problems? We answer these questions with a multidisciplinary approach, using tools from statistical machine learning, convex and stochastic optimization, image and signal processing, and computer vision, but also optimization on graphs.
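As a small, hedged illustration of the dictionary-learning framework discussed above, the following sketch learns an overcomplete dictionary and sparse codes for synthetic image patches with scikit-learn; it stands in for, rather than reproduces, the optimization methods developed in the thesis, and the parameters are arbitrary.

```python
# Sketch (assumption): learning a dictionary and sparse codes for image patches with
# scikit-learn, as a small stand-in for the dictionary-learning framework described above.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.normal(size=(1000, 64))   # 8x8 image patches, flattened (synthetic here)

dico = MiniBatchDictionaryLearning(
    n_components=128,        # number of dictionary atoms (overcomplete: 128 > 64)
    alpha=1.0,               # weight of the l1 sparsity penalty
    random_state=0,
)
codes = dico.fit(patches).transform(patches)   # sparse coefficients per patch

reconstruction = codes @ dico.components_      # patches ~ codes * dictionary
err = np.linalg.norm(patches - reconstruction) / np.linalg.norm(patches)
print("atoms:", dico.components_.shape, "relative error:", round(float(err), 3))
```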
APA, Harvard, Vancouver, ISO, and other styles
25

Marshall, Simon. "The generation of machine tool cutter paths utilising parallel processing." Thesis, University of Hull, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287912.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Diethe, T. R. "Sparse machine learning methods with applications in multivariate signal processing." Thesis, University College London (University of London), 2010. http://discovery.ucl.ac.uk/20450/.

Full text
Abstract:
This thesis details theoretical and empirical work that draws from two main subject areas: Machine Learning (ML) and Digital Signal Processing (DSP). A unified general framework is given for the application of sparse machine learning methods to multivariate signal processing. In particular, methods that enforce sparsity will be employed for reasons of computational efficiency, regularisation, and compressibility. The methods presented can be seen as modular building blocks that can be applied to a variety of applications. Application specific prior knowledge can be used in various ways, resulting in a flexible and powerful set of tools. The motivation for the methods is to be able to learn and generalise from a set of multivariate signals. In addition to testing on benchmark datasets, a series of empirical evaluations on real world datasets were carried out. These included: the classification of musical genre from polyphonic audio files; a study of how the sampling rate in a digital radar can be reduced through the use of Compressed Sensing (CS); analysis of human perception of different modulations of musical key from Electroencephalography (EEG) recordings; classification of genre of musical pieces to which a listener is attending from Magnetoencephalography (MEG) brain recordings. These applications demonstrate the efficacy of the framework and highlight interesting directions of future research.
APA, Harvard, Vancouver, ISO, and other styles
27

Tang, Qiao. "Knowledge management using machine learning, natural language processing and ontology." Thesis, Cardiff University, 2006. http://orca.cf.ac.uk/56067/.

Full text
Abstract:
This research developed a concept indexing framework which systematically integrates machine learning, natural language processing and ontology technologies to facilitate knowledge acquisition, extraction and organisation. The research reported in this thesis focuses first on the conceptual model of concept indexing, which represents knowledge as entities and concepts. Then the thesis outlines its benefits and the system architecture using this conceptual model. Next, the thesis presents a knowledge acquisition framework using machine learning in focused crawling Web content to enable automatic knowledge acquisition. Then, the thesis presents two language resources developed to enable ontology tagging, which are: an ontology dictionary and an ontologically tagged corpus. The ontologically tagged corpus is created using a heuristic algorithm developed in the thesis. Next, the ontology tagging algorithm is developed with the ontology dictionary and the ontologically tagged corpus to enable ontology tagging. Finally, the thesis presents the conceptual model, the system architecture, and the prototype system using concept indexing developed to facilitate knowledge acquisition, extraction and organisation. The solutions proposed in the thesis are illustrated with examples based on a prototype system developed in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
28

Jijie, Zhu. "Finite state machine with applications to digital signal processing systems." Thesis, University of Cambridge, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Ding, Sihao. "Multi-Perspective Image and Video Processing for Human-Machine Interaction." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488462115943949.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Rustogi, Kabir. "Machine scheduling with changing processing times and rate-modifying activities." Thesis, University of Greenwich, 2013. http://gala.gre.ac.uk/11992/.

Full text
Abstract:
In classical scheduling models, it is normally assumed that the processing times of jobs are fixed. However, in the recent years, there has been a growing interest in models with variable processing times. Some of the common rationales provided for considering such models, is as follows: the machine conditions may deteriorate as more jobs are processed, resulting in higher than normal processing times, or conversely, the machine’s operator may gain more experience as more jobs are processed, so he/she can process the jobs faster. Another direction of improving the practical relevance of models is by introducing certain rate-modifying activities, such as maintenance periods, in the schedule. In this thesis, we mainly focus on the study of integrated models which allow changing processing times and rate-modifying activities. When this project was started, it was felt that there was a major scope of improvement in the area, both in terms of creating more general, practically relevant models and developing faster algorithms that are capable of handling a wide variety of problems. In this thesis, we address both these issues. We introduce several enhanced, practically relevant models for scheduling problems with changing times that allow various types of rate-modifying activities, various effects or a combination of effects on the processing times. To handle these generalised models, we developed a unified framework of algorithms that use similar general principles, through which, the effects of rate-modifying activities can be systematically studied for many different scenarios.
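To make the notion of positionally changing processing times concrete, here is a small worked sketch of one classical building block that such models generalize: minimizing total completion time on a single machine when the job in position r takes p·g(r) time, which reduces to pairing the largest positional weights with the smallest processing times. This is a textbook special case shown for illustration, not the thesis's integrated models with rate-modifying activities.

```python
# Sketch (assumption): a classical single-machine building block for models with
# positionally changing processing times. With positional factors g(r), the job in
# position r takes p * g(r) time, and the total completion time equals
# sum_r (n - r + 1) * g(r) * p_[r], so it is minimized by pairing the largest
# positional weights with the smallest processing times (a matching argument).
def min_total_completion_time(p, g):
    n = len(p)
    weights = [(n - r) * g[r] for r in range(n)]         # w(r) = (n - r + 1) g(r), 0-based
    order = sorted(range(n), key=lambda r: -weights[r])  # positions, largest weight first
    jobs = sorted(p)                                     # smallest jobs to largest weights
    schedule = [None] * n
    for job, pos in zip(jobs, order):
        schedule[pos] = job
    total = sum(weights[r] * schedule[r] for r in range(n))
    return schedule, total

# Example: a learning effect g(r) = r^-0.3 (later positions are processed faster).
g = [(r + 1) ** -0.3 for r in range(4)]
print(min_total_completion_time([5, 2, 8, 3], g))
```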
APA, Harvard, Vancouver, ISO, and other styles
31

Perumalla, Calvin A. "Machine Learning and Adaptive Signal Processing Methods for Electrocardiography Applications." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6926.

Full text
Abstract:
This dissertation is directed towards improving the state of art cardiac monitoring methods and automatic diagnosis of cardiac anomalies through modern engineering approaches such as adaptive signal processing, and machine learning methods. The dissertation will describe the invention and associated methods of a cardiac rhythm monitor dubbed the Integrated Vectorcardiogram (iVCG). In addition, novel machine learning approaches are discussed to improve diagnoses and prediction accuracy of cardiac diseases. It is estimated that around 17 million people in the world die from cardiac related events each year. It has also been shown that many of such deaths can be averted with long-term continuous monitoring and actuation. Hence, there is a growing need for better cardiac monitoring solutions. Leveraging the improvements in computational power, communication bandwidth, energy efficiency and electronic chip size in recent years, the Integrated Vectorcardiogram (iVCG) was invented as an answer to this problem. The iVCG is a miniaturized, integrated version of the Vectorcardiogram that was invented in the 1930s. The Vectorcardiogram provides full diagnostic quality cardiac information equivalent to that of the gold standard, 12-lead ECG, which is restricted to in-office use due to its bulky, obtrusive form. With the iVCG, it is possible to provide continuous, long-term, full diagnostic quality information, while being portable and unobtrusive to the patient. Moreover, it is possible to leverage this ‘Big Data’ and create machine learning algorithms to deliver better patient outcomes in the form of patient specific machine diagnosis and timely alerts. First, we present a proof-of-concept investigation for a miniaturized vectorcardiogram, the iVCG system for ambulatory on-body applications that continuously monitors the electrical activity of the heart in three dimensions. We investigate the minimum distance between a pair of leads in the X, Y and Z axes such that the signals are distinguishable from the noise. The target dimensions for our prototype iVCG are 3x3x2 cm and based on our experimental results we show that it is possible to achieve these dimensions. Following this, we present a solution to the problem of transforming the three VCG component signals to the familiar 12-lead ECG for the convenience of cardiologists. The least squares (LS) method is employed on the VCG signals and the reference (training) 12-lead ECG to obtain a 12x3 transformation matrix to generate the real-time ECG signals from the VCG signals. The iVCG is portable and worn on the chest of the patient and although a physician or trained technician will initially install it in the appropriate position, it is prone to subsequent rotation and displacement errors introduced by the patient placement of the device. We characterize these errors and present a software solution to correct the effect of the errors on the iVCG signals. We also describe the design of machine learning methods to improve automatic diagnosis and prediction of various heart conditions. Methods very similar to the ones described in this dissertation can be used on the long term, full diagnostic quality ‘Big Data’ such that the iVCG will be able to provide further insights into the health of patients. The iVCG system is potentially breakthrough and disruptive technology allowing long term and continuous remote monitoring of patient’s electrical heart activity. 
The implications are profound and include 1) providing a less expensive device compared to the 12-lead ECG system (the “gold standard”); 2) providing continuous, remote tele-monitoring of patients; 3) the replacement of the current Holter short-term monitoring system; 4) improved and economical ICU cardiac monitoring; and 5) the ability for patients to be sent home earlier from a hospital, since physicians will have continuous remote monitoring of the patients.
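The VCG-to-ECG mapping described above (a 12x3 matrix fitted by least squares) can be sketched in a few lines; the synthetic signals and noise level here are assumptions for illustration only.

```python
# Sketch (assumption): estimating the 12x3 transformation matrix A that maps the three
# VCG components to the 12 ECG leads by least squares, as outlined above. The signals
# here are synthetic; in practice V and E would be time-aligned recordings.
import numpy as np

rng = np.random.default_rng(0)
T = 5000
V = rng.normal(size=(3, T))                 # VCG: 3 components x T samples (training)
A_true = rng.normal(size=(12, 3))           # unknown "physiological" mapping
E = A_true @ V + 0.01 * rng.normal(size=(12, T))   # reference 12-lead ECG (training)

# Solve E ~ A @ V in the least-squares sense: A^T is the lstsq solution of V^T x = E^T.
A = np.linalg.lstsq(V.T, E.T, rcond=None)[0].T      # 12 x 3

V_new = rng.normal(size=(3, 1000))          # new iVCG recording
E_est = A @ V_new                           # reconstructed 12-lead ECG
print("A shape:", A.shape, "fit error:", round(float(np.linalg.norm(A - A_true)), 4))
```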
APA, Harvard, Vancouver, ISO, and other styles
32

Mobus, George E. (George Edward). "A Multi-Time Scale Learning Mechanism for Neuromimic Processing." Thesis, University of North Texas, 1994. https://digital.library.unt.edu/ark:/67531/metadc278467/.

Full text
Abstract:
Learning and representing and reasoning about temporal relations, particularly causal relations, is a deep problem in artificial intelligence (AI). Learning such representations in the real world is complicated by the fact that phenomena are subject to multiple time scale influences and may operate with a strange attractor dynamic. This dissertation proposes a new computational learning mechanism, the adaptrode, which, used in a neuromimic processing architecture may help to solve some of these problems. The adaptrode is shown to emulate the dynamics of real biological synapses and represents a significant departure from the classical weighted input scheme of conventional artificial neural networks. Indeed the adaptrode is shown, by analysis of the deep structure of real synapses, to have a strong structural correspondence with the latter in terms of multi-time scale biophysical processes. Simulations of an adaptrode-based neuron and a small network of neurons are shown to have the same learning capabilities as invertebrate animals in classical conditioning. Classical conditioning is considered a fundamental learning task in animals. Furthermore, it is subject to temporal ordering constraints that fulfill the criteria of causal relations in natural systems. It may offer clues to the learning of causal relations and mechanisms for causal reasoning. The adaptrode is shown to solve an advanced problem in classical conditioning that addresses the problem of real world dynamics. A network is able to learn multiple, contrary associations that separate in time domains, that is a long-term memory can co-exist with a short-term contrary memory without destroying the former. This solves the problem of how to deal with meaningful transients while maintaining long-term memories. Possible applications of adaptrode-based neural networks are explored and suggestions for future research are made.
APA, Harvard, Vancouver, ISO, and other styles
33

Zhang, Xiaowei. "Pedestrian flow measurement using image processing techniques." Thesis, Northumbria University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.367418.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

King, Stephen. "A machine vision system for texture segmentation." Thesis, Brunel University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310081.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Alorwu, A. (Andy). "Android-based customizable media crowdsourcing toolkit for machine vision research." Master's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201812063247.

Full text
Abstract:
Smart devices have become more complex and powerful, increasing in both computational power, storage capacities, and battery longevity. Currently available online facial recognition databases do not offer training datasets with enough contextually descriptive metadata for novel scenarios such as using machine vision to detect if people in a video like each other based on their facial expressions. The aim of this research is to design and implement a software tool to enable researchers to collect videos from a large pool of people through crowdsourcing means for machine vision analysis. We are particularly interested in the tagging of the videos with the demographic data of study participants as well as data from custom post hoc survey. This study has demonstrated that smart devices and their embedded technologies can be utilized to collect videos as well as self-evaluated metadata through crowdsourcing means. The application makes use of sensors embedded within smart devices such as the camera and GPS sensors to collect videos, survey data, and geographical data. User engagement is encouraged using periodic push notifications. The collected videos and metadata using the application will be used in the future for machine vision analysis of various phenomena such as investigating if machine vision could be used to detect people’s fondness for each other based on their facial expressions and self-evaluated post-task survey data.
APA, Harvard, Vancouver, ISO, and other styles
36

Adams, Andrew. "Tools and techniques for machine-assisted meta-theory." Thesis, University of St Andrews, 1997. http://hdl.handle.net/10023/13382.

Full text
Abstract:
Machine-assisted formal proofs are becoming commonplace in certain fields of mathematics and theoretical computer science. New formal systems and variations on old ones are constantly invented. The meta-theory of such systems, i.e. proofs about the system as opposed to proofs within the system, are mostly done informally with a pen and paper. Yet the meta-theory of deductive systems is an area which would obviously benefit from machine support for formal proof. Is the software currently available sufficiently powerful yet easy enough to use to make machine assistance for formal meta-theory a viable proposition? This thesis presents work done by the author on formalizing proof theory from [DP97a] in various formal systems: SEQUEL [Tar93, Tar97], Isabelle [Pau94] and Coq [BB+96]. SEQUEL and Isabelle were found to be difficult to use for this type of work. In particular, the lack of automated production of induction principles in SEQUEL and Isabelle undermined confidence in the resulting formal proofs. Coq was found to be suitable for the formalisation methodology first chosen: the use of nameless dummy variables (de Bruijn indices) as pioneered in [dB72]. A second approach (inspired by the work of McKinna and Pollack [vBJMR94, MP97]) formalising named variables was also the subject of some initial work, and a comparison of these two approaches is presented. The formalisation was restricted to the implicational fragment of propositional logic. The informal theory has been extended to cover full propositional logic by Dyckhoff and Pinto, and extension of the formalisation using de Bruijn indices would appear to present few difficulties. An overview of other work in this area, in terms of both the tools and formalisation methods, is also presented. The theory formalised differs from other such work in that other formalisations have involved only one calculus. [DP97a] involves the relationships between three different calculi. There is consequently a much greater requirement for equality reasoning in the formalisation. It is concluded that a formalisation of any significance is still difficult, particularly one involving multiple calculi. No tools currently exist that allow for the easy representation of even quite simple systems in a way that fits human intuitions while still allowing for automatic derivation of induction principles. New work on integrating higher order abstract syntax and induction may be the way forward, although such work is still in the early stages.
APA, Harvard, Vancouver, ISO, and other styles
37

Hu, Ji, Dirk Cordel, and Christoph Meinel. "A virtual machine architecture for creating IT-security laboratories." Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2009/3307/.

Full text
Abstract:
E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulties in delivering hands-on experience because of the lack of proximity. Laboratory environments and practical exercises are indispensable instruction tools for IT security education, but security education in conventional computer laboratories poses the problem of immobility as well as high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. This report introduces the Tele-Lab IT-Security architecture that allows students not only to learn IT security principles, but also to gain hands-on security experience through exercises in an online laboratory environment. In this architecture, virtual machines are used to provide safe user work environments instead of real computers. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases accessibility to laboratory resources and greatly reduces investment and maintenance costs. Under the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionality, reliability, security, and performance. Virtual machines with appropriate resource allocation, software installation, and system configurations are used to build lightweight security laboratories on a hosting computer. Reliability and availability of laboratory platforms are covered by the virtual machine management framework. This management framework provides the necessary monitoring and administration services to detect and recover from critical failures of virtual machines at run time. Considering the risk that virtual machines can be misused for compromising production networks, we present security management solutions to prevent misuse of laboratory resources through security isolation at the system and network levels. This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not intended to substitute for conventional teaching in laboratories but to add practical features to e-learning. This report demonstrates the possibility of implementing hands-on security laboratories on the Internet reliably, securely, and economically.
APA, Harvard, Vancouver, ISO, and other styles
38

Lyons, Laura Christine. "An investigation of systematic errors in machine vision hardware." Thesis, Georgia Institute of Technology, 1989. http://hdl.handle.net/1853/16759.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Döring, Kersten, and Stefan Günther (academic supervisor). "Processing information about biomolecules with text mining and machine learning approaches." Freiburg : Universität, 2016. http://d-nb.info/111945297X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Liaghat, Zeinab. "Quality-efficiency trade-offs in machine learning applied to text processing." Doctoral thesis, Universitat Pompeu Fabra, 2017. http://hdl.handle.net/10803/402575.

Full text
Abstract:
Nowadays, the amount of available digital documents is rapidly growing, expanding at a considerable rate and coming from a variety of sources. Sources of unstructured and semi-structured information include the World Wide Web, news articles, biological databases, electronic mail, digital libraries, governmental digital repositories, chat rooms, online forums, blogs, and social media such as Facebook, Instagram, LinkedIn, Pinterest, Twitter, and YouTube, plus many others. Extracting information from these resources and finding useful information in such collections has become a challenge, which makes organizing massive amounts of data a necessity. Data mining, machine learning, and natural language processing are powerful techniques that can be used together to deal with this big challenge. Depending on the task or problem at hand, there are many different approaches that can be used. The methods being implemented are continuously being optimized, but not all of them have been tested and compared for quality after training supervised machine learning algorithms on large corpora. The question is what happens to the quality of methods if we increase the data size from, say, 100 MB to over 1 GB. Moreover, are quality gains worth it when the rate of data processing diminishes? Can we trade quality for time efficiency and recover the quality loss by just being able to process more data? This thesis is a first attempt to answer these questions in a general way for text processing tasks, as not enough research has been done to compare those methods considering the trade-offs of data size, quality, and processing time. Hence, we propose a trade-off analysis framework and apply it to three important text processing problems: Named Entity Recognition, Sentiment Analysis, and Document Classification. These problems were also chosen because they have different levels of object granularity: words, passages, and documents. For each problem, we select several machine learning algorithms and we evaluate the trade-offs of these different methods on large publicly available datasets (news, reviews, patents). We use data subsets of increasing size, ranging from 50 MB to a few GB, to explore these trade-offs. We conclude, as hypothesized, that just because a method has good performance on small data, it does not necessarily have the same performance on big data. For the two last problems, we consider similar algorithms and also consider two different data sets and two different evaluation techniques, to study the impact of the data and the evaluation technique on the resulting trade-offs. We find that the results do not change significantly.
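A minimal sketch of the kind of quality-versus-time measurement the proposed framework performs, assuming an illustrative setup (20 Newsgroups data, TF-IDF features, logistic regression) rather than the tasks, datasets, and algorithms actually studied in the thesis.

```python
# Sketch (assumption): measuring the quality/time trade-off of a text classifier as the
# training subset grows, in the spirit of the framework described above. The data and
# model here (20 Newsgroups, TF-IDF + logistic regression) are illustrative choices.
import time
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")
vec = TfidfVectorizer(max_features=50000)
X_train, X_test = vec.fit_transform(train.data), vec.transform(test.data)

for n in (1000, 4000, len(train.data)):        # increasing training-data sizes
    t0 = time.time()
    clf = LogisticRegression(max_iter=1000).fit(X_train[:n], train.target[:n])
    quality = accuracy_score(test.target, clf.predict(X_test))
    print(f"n={n:6d}  accuracy={quality:.3f}  train_time={time.time() - t0:.1f}s")
```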
APA, Harvard, Vancouver, ISO, and other styles
41

Styś, Małgorzata Elżbieta. "A processing model of information structure in machine translation." Thesis, University of Cambridge, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.624970.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

吳建雄 and Jianxiong Wu. "A parallel distributed processing system for machine recognition of speech signals." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1991. http://hub.hku.hk/bib/B31232887.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Wu, Jianxiong. "A parallel distributed processing system for machine recognition of speech signals /." [Hong Kong : University of Hong Kong], 1991. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13068568.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Forsyth, Alexander William. "Improving clinical decision making with natural language processing and machine learning." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112847.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 49-53).
This thesis focused on two tasks of applying natural language processing (NLP) and machine learning to electronic health records (EHRs) to improve clinical decision making. The first task was to predict cardiac resynchronization therapy (CRT) outcomes with better precision than the current physician guidelines for recommending the procedure. We combined NLP features from free-text physician notes with structured data to train a supervised classifier to predict CRT outcomes. While our results gave a slight improvement over the current baseline, we were not able to predict CRT outcome with both high precision and high recall. These results limit the clinical applicability of our model, and reinforce previous work, which also could not find accurate predictors of CRT response. The second task in this thesis was to extract breast cancer patient symptoms during chemotherapy from free-text physician notes. We manually annotated about 10,000 sentences, and trained a conditional random field (CRF) model to predict whether a word indicated a symptom (positive label), specifically indicated the absence of a symptom (negative label), or was neutral. Our final model achieved 0.66, 1.00, and 0.77 F1 scores for predicting positive, neutral, and negative labels respectively. While the F1 scores for positive and negative labels are not extremely high, with the current performance, our model could be applied, for example, to gather better statistics about what symptoms breast cancer patients experience during chemotherapy and at what time points during treatment they experience these symptoms.
by Alexander William Forsyth.
M. Eng.
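The symptom-extraction task described in the abstract above, labelling each word as a symptom, a negated symptom, or neutral, can be sketched with a linear-chain CRF. The example below assumes the sklearn-crfsuite package; the feature function, the toy sentences, and the POS/NEG/NEUTRAL label names are illustrative only, not the features or data used in the thesis.

    import sklearn_crfsuite
    from sklearn_crfsuite import metrics

    def word_features(sentence, i):
        """Very small feature set for the token at position i."""
        word = sentence[i]
        return {
            'lower': word.lower(),
            'is_title': word.istitle(),
            'prev': sentence[i - 1].lower() if i > 0 else 'BOS',
            'next': sentence[i + 1].lower() if i < len(sentence) - 1 else 'EOS',
        }

    # Toy data: POS = symptom mention, NEG = negated symptom, NEUTRAL = neither.
    sentences = [['Patient', 'reports', 'nausea'], ['Denies', 'fever', 'today']]
    labels    = [['NEUTRAL', 'NEUTRAL', 'POS'], ['NEUTRAL', 'NEG', 'NEUTRAL']]

    X = [[word_features(s, i) for i in range(len(s))] for s in sentences]
    crf = sklearn_crfsuite.CRF(algorithm='lbfgs', c1=0.1, c2=0.1, max_iterations=100)
    crf.fit(X, labels)
    pred = crf.predict(X)
    print(metrics.flat_f1_score(labels, pred, average=None,
                                labels=['POS', 'NEUTRAL', 'NEG']))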
APA, Harvard, Vancouver, ISO, and other styles
45

Trachi, Youness. "On induction machine faults detection using advanced parametric signal processing techniques." Thesis, Brest, 2017. http://www.theses.fr/2017BRES0103/document.

Full text
Abstract:
This Ph.D. thesis aims to develop reliable and cost-effective condition monitoring and fault detection architectures for induction machines. These architectures are mainly based on advanced parametric signal processing techniques. To analyze and detect faults, a parametric stator current model under stationary conditions has been considered. The current is assumed to consist of multiple sinusoids with unknown parameters in noise. This model has been estimated using parametric techniques such as subspace spectral estimators (MUSIC and ESPRIT) and the maximum likelihood estimator. A fault severity criterion based on the estimated amplitudes of the stator current frequency components has also been proposed to determine the induction machine failure level. A novel fault detector based on hypothesis testing has also been proposed; it is mainly based on the generalized likelihood ratio test with unknown signal and noise parameters. The proposed parametric techniques have been evaluated using experimental stator current signals from induction machines under two fault conditions: bearing faults and broken rotor bars. Experimental results show the effectiveness and the detection ability of the proposed parametric techniques.
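As a rough illustration of the subspace spectral estimation mentioned above, the sketch below applies a MUSIC-style pseudospectrum to a synthetic signal of two sinusoids in noise, standing in for a stator current with a small fault-related component. The sampling rate, covariance dimension, and model order are illustrative assumptions, not values from the thesis.

    import numpy as np

    fs, n = 1000.0, 2000                               # sampling rate (Hz), number of samples
    t = np.arange(n) / fs
    # Synthetic "stator current": a 50 Hz fundamental plus a small 92 Hz component in noise.
    x = np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 92 * t) \
        + 0.05 * np.random.randn(n)

    m, p = 100, 4                                      # covariance size, signal-subspace dim (2 per real sinusoid)
    segments = np.array([x[i:i + m] for i in range(n - m)])
    R = segments.T @ segments / segments.shape[0]      # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)               # eigenvalues in ascending order
    noise_subspace = eigvecs[:, :m - p]                # eigenvectors spanning the noise subspace

    freqs = np.linspace(0.0, fs / 2, 2000)
    steering = np.exp(-2j * np.pi * np.outer(np.arange(m), freqs) / fs)
    pseudo = 1.0 / np.sum(np.abs(noise_subspace.conj().T @ steering) ** 2, axis=0)
    print(freqs[np.argsort(pseudo)[-6:]])              # grid points clustering near 50 and 92 Hz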
APA, Harvard, Vancouver, ISO, and other styles
46

Hyberg, Martin. "Software Issue Time Estimation With Natural Language Processing and Machine Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-295202.

Full text
Abstract:
Time estimation for software issues is crucial to planning projects. Developers and experts have for many decades tried to estimate time requirements for issues as accurately as possible. The methods that are used today are often time-consuming and complex. This thesis investigates whether the time estimation process can be done with natural language processing and machine learning. Three different word embeddings were used to represent the free-text description: bag-of-words with tf-idf weighting, word2Vec, and fastText. The different word embeddings were then fed into two types of machine learning approaches, classification and regression. The classification was binary and can be formulated as "will the issue take more than three hours?". The goal of the regression problem was to predict an actual value for the time that the issue would take to complete. The classification models' performance was measured with an F1-score, and the regression model was measured with an R2-score. The best F1-score for classification was 0.748 and was achieved with the word2Vec word embedding and an SVM classifier. The best score for the regression analysis was achieved with the bag-of-words word embedding, which achieved an R2-score of 0.380. Further evaluation of the results and a comparison to actual estimates made by the company show that humans perform only slightly better than the models on the binary classification defined above. The F1-score of the employees was 0.792, a difference of just 0.044 from the best F1-score achieved by the models. This thesis concludes that the models are not good enough to use in a professional setting. An F1-score of 0.748 could be useful in other settings, but the classification question in this problem is too broad to be used for a real project. The results for the regression are also too low to be of any valuable use.
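One of the pipelines described above, bag-of-words with tf-idf weighting fed to an SVM for the binary "more than three hours?" question, can be sketched in a few lines with scikit-learn. The example issues and labels below are invented for illustration; the thesis used its own company data and also evaluated word2Vec and fastText variants.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import f1_score

    issues = ["Fix crash when saving large files",
              "Update copyright year in footer",
              "Rewrite authentication module to support SSO",
              "Correct typo in settings dialog"]
    takes_over_3h = [1, 0, 1, 0]                       # 1 = estimated to take more than three hours

    model = make_pipeline(TfidfVectorizer(), SVC(kernel='linear'))
    model.fit(issues, takes_over_3h)
    print(f1_score(takes_over_3h, model.predict(issues)))   # on real data, score a held-out test set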
APA, Harvard, Vancouver, ISO, and other styles
47

Hedberg, Niclas. "Automated invoice processing with machine learning : Benefits, risks and technical feasibility." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279617.

Full text
Abstract:
When an organization receives invoices, accountants specify accounts and cost centers related to the purchases. This thesis investigated automated decision support with machine learning that suggests to the accountant which accounts and cost centers can be used for an invoice. The purpose was to identify the benefits and risks of using machine learning automation for invoice processing and to evaluate the performance of this technology. It was found that machine-learning-based decision support for invoice processing is perceived to be beneficial by saving time, reducing mental effort, creating more coherent bookkeeping, detecting errors, and enabling higher levels of automation. However, there are also risks related to implementing automation with machine learning. There is high variability in how accounts and cost centers are used across organizations, and uneven performance can be expected because some invoices are more complex to process than others. Machine learning experiments were conducted which indicated that the accuracy of suggesting the correct account was 73–76%. For cost centers, the accuracy was 50–62%. A method for filtering the machine learning output was developed with the aim of raising the accuracy of the automated suggestions. With this method, the limited set of suggestions that passed the filter achieved an accuracy of up to 100%.
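The filtering method mentioned above is not specified in detail in the abstract, but the general idea, surfacing a suggestion only when the classifier is sufficiently confident, can be sketched as follows. The choice of a logistic regression, the toy invoice features, the account codes, and the probability threshold are assumptions for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def suggest_accounts(clf, X_new, threshold=0.9):
        """Return (row index, suggested account) only where the model is confident enough."""
        probs = clf.predict_proba(X_new)               # requires a probabilistic classifier
        best = probs.argmax(axis=1)
        confident = probs.max(axis=1) >= threshold     # the filter: drop low-confidence suggestions
        return [(i, clf.classes_[best[i]]) for i in np.where(confident)[0]]

    # Toy usage: two numeric invoice features, account codes as string labels.
    X = np.array([[120.0, 1], [80.0, 0], [2400.0, 1], [15.0, 0]])
    y = np.array(['6110', '6110', '1220', '6110'])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(suggest_accounts(clf, X, threshold=0.8))

Raising the threshold trades coverage for accuracy, which matches the abstract's observation that the small set of suggestions passing the filter reached an accuracy of up to 100%.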
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Yue. "Sparsity in Image Processing and Machine Learning: Modeling, Computation and Theory." Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1523017795312546.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Ji, Soo-Yeon. "COMPUTER-AIDED TRAUMA DECISION MAKING USING MACHINE LEARNING AND SIGNAL PROCESSING." VCU Scholars Compass, 2008. http://scholarscompass.vcu.edu/etd/1628.

Full text
Abstract:
Over the last 20 years, much work has focused on computer-aided clinical decision support systems due to a rapid increase in the need for management and processing of medical knowledge. Among all fields of medicine, trauma care has the highest need for proper information management due to the high prevalence of complex, life-threatening injuries. In particular, hemorrhage, which is encountered in most traumatic injuries, is a dominant factor in determining survival in both civilian and military settings. This complication can be better managed using a more in-depth analysis of patient information. Trauma physicians must make precise and rapid decisions while considering a large number of patient variables and dealing with stressful environments. The ability of a computer-aided decision-making system to rapidly analyze a patient's condition can enable physicians to make more accurate decisions and thereby significantly improve the quality of care provided to patients. The first part of this study is focused on classification of highly complex databases using a hierarchical method which combines two complementary techniques: logistic regression and machine learning. This method, hereafter referred to as Classification Using Significant Features (CUSF), includes a statistical process to select the most significant variables from the correlated database. A machine learning algorithm is then used to assign the data to classes using only the significant variables. As the main application addressed by CUSF, a set of computer-assisted rule-based trauma decision-making systems is designed. A computer-aided decision-making system not only provides vital assistance for physicians in making fast and accurate decisions, with proposed decisions supported by transparent reasoning, but can also confirm a physician's current knowledge and detect complex patterns that may reveal new knowledge not easily visible to the human eye. The second part of this study proposes an algorithm based on a set of novel wavelet features to analyze physiological signals, such as electrocardiograms (ECGs), that can provide invaluable information typically invisible to human eyes. This wavelet-based method, hereafter referred to as Signal Analysis Based on Wavelet-Extracted Features (SABWEF), extracts information that can be used to detect and analyze complex patterns that other methods, such as Fourier analysis, cannot deal with. For instance, SABWEF can evaluate the severity of hemorrhagic shock (HS) from the ECG, while the traditional technique of applying power spectral density (PSD) and fractal dimension (FD) cannot distinguish between the ECG patterns of patients with HS (i.e., blood loss) and those of subjects undergoing physical activity. In this study, as the main application of SABWEF, the ECG is analyzed to distinguish between HS and physical activity, and we show that SABWEF can be used in both civilian and military settings to detect HS and its extent. This is the first reported use of an ECG analysis method to classify blood volume loss. SABWEF can rapidly determine the degree of volume loss from hemorrhage, providing the opportunity for more rapid remote triage and decision making.
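The two-stage CUSF idea described above, statistical selection of significant variables followed by a machine learning classifier on only those variables, can be sketched as follows. The use of statsmodels and a random forest, the 0.05 significance cut-off, and the function name are assumptions for illustration, not the exact procedure of the thesis.

    import pandas as pd
    import statsmodels.api as sm
    from sklearn.ensemble import RandomForestClassifier

    def cusf_style_fit(X: pd.DataFrame, y: pd.Series, alpha=0.05):
        """Stage 1: keep variables significant in a logistic regression; stage 2: classify on them."""
        logit = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
        pvalues = logit.pvalues.drop('const')
        significant = pvalues[pvalues < alpha].index.tolist()   # statistically significant variables
        # Assumes at least one variable passes the cut-off.
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X[significant], y)
        return significant, clf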
APA, Harvard, Vancouver, ISO, and other styles
50

Enshaeifar, Shirin. "Eigen-based machine learning techniques for complex and hyper-complex processing." Thesis, University of Surrey, 2016. http://epubs.surrey.ac.uk/811040/.

Full text
Abstract:
One of the earlier works on eigen-based techniques for the hyper-complex domain of quaternions was on “quaternion principal component analysis of colour images”. The results of this work are still instructive in many aspects. First, it showed how naturally the quaternion domain accounts for the coupling between the dimensions of red, blue and green of an image, hence its suitability for multichannel processing. Second, it was clear that there was a lack of eigen-based techniques for such a domain, which explains the non-trivial gap in the literature. Third, the lack of such eigen-based quaternion tools meant that the scope and the applications of quaternion signal processing were quite limited, especially in the field of biomedicine. And fourth, quaternion principal component analysis made use of complex matrix algebra, which reminds us that the complex domain lays the building blocks of the quaternion domain, and therefore any research endeavour in quaternion signal processing should start with the complex domain. As such, the first contribution of this thesis lies in the proposition of complex singular spectrum analysis. That research provided a deep understanding and an appreciation of the intricacies of the complex domain and its impact on the quaternion domain. As the complex domain offers one degree of freedom over the real domain, the statistics of a complex variable x have to be augmented with its complex conjugate x*, which led to the term augmented statistics. This recent advancement in complex statistics was exploited in the proposed complex singular spectrum analysis. The same statistical notion was used in proposing novel quaternion eigen-based techniques such as the quaternion singular spectrum analysis, the quaternion uncorrelating transform, and the quaternion common spatial patterns. The latter two methods highlighted an important gap in the literature: there were no algebraic methods that solved the simultaneous diagonalisation of quaternion matrices. To address this issue, this thesis also presents new fundamental results on quaternion matrix factorisations and explores the depth of quaternion algebra. To demonstrate the efficacy of these methods, real-world problems mainly in biomedical engineering were considered. First, the proposed complex singular spectrum analysis successfully addressed an examination of schizophrenic data through the estimation of the P300 event-related potential. Second, the automated detection of the different stages of sleep was made possible using the proposed quaternion singular spectrum analysis. Third, the proposed quaternion common spatial patterns facilitated the discrimination of Parkinsonian patients from healthy subjects. To illustrate the breadth of the proposed eigen-based techniques, other areas of application were also presented, such as wind and financial forecasting, and Alamouti-based communication problems. Finally, preliminary work is presented to suggest that the next step from this thesis is to move from static models (eigen-based models) to dynamic models (such as tracking models).
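The complex singular spectrum analysis proposed in the thesis is not specified in code in the abstract above, but basic SSA on a complex-valued series, embedding into a trajectory matrix, taking the SVD, and reconstructing a component by diagonal averaging, can be sketched as below. The window length and the test signal are illustrative assumptions; NumPy's SVD handles complex input directly, so the same code serves real and complex series.

    import numpy as np

    def ssa_leading_component(x, window):
        """Reconstruct the leading SSA component of a (possibly complex) series."""
        n = len(x)
        k = n - window + 1
        traj = np.array([x[i:i + window] for i in range(k)]).T   # window x k trajectory (Hankel) matrix
        u, s, vh = np.linalg.svd(traj, full_matrices=False)      # SVD works for complex input
        rank1 = s[0] * np.outer(u[:, 0], vh[0])                  # leading elementary matrix
        recon = np.zeros(n, dtype=complex)                       # diagonal averaging back to a series
        counts = np.zeros(n)
        for i in range(window):
            for j in range(k):
                recon[i + j] += rank1[i, j]
                counts[i + j] += 1
        return recon / counts

    t = np.arange(200)
    x = np.exp(1j * 2 * np.pi * 0.05 * t) \
        + 0.3 * (np.random.randn(200) + 1j * np.random.randn(200))
    smooth = ssa_leading_component(x, window=40)                  # noise-reduced oscillation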
APA, Harvard, Vancouver, ISO, and other styles