To see the other types of publications on this topic, follow the link: Detector principle.

Dissertations / Theses on the topic 'Detector principle'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Detector principle.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Kandlakunta, Praneeth. "A Proof-of-Principle Investigation for a Neutron-Gamma Discrimination Technique in a Semiconductor Neutron Detector." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1332447196.

2

Фесенко, А. "Металлоискатель" [Metal Detector]. Thesis, Сумский государственный университет, 2014. http://essuir.sumdu.edu.ua/handle/123456789/38883.

3

Santos, André Luiz dos. "Desenvolvimento de sistem biomimético para análise de 3,5,6-Tricloro-2-piridinol, o principal metabólito do clorpirifós" [Development of a biomimetic system for the analysis of 3,5,6-trichloro-2-pyridinol, the main metabolite of chlorpyrifos]. Araraquara, 2012. http://hdl.handle.net/11449/97833.

Abstract:
Advisor: Maria Del Pilar Taboada Sotomayor
Committee: Marcos Roberto de Vasconcelos Lanza
Committee: Rosa Amália Fireman Dutra
This work is based on the development of a biomimetic system for sensitive and selective monitoring of TCP (3,5,6-trichloro-2-pyridinol), the principal metabolite of the pesticide chlorpyrifos, which is more soluble than the pesticide itself and whose occurrence in groundwater and surface water is therefore more likely and more dangerous. A biomimetic sensor was constructed, and square-wave voltammetry was used for the measurements. The electrodes were fabricated from carbon paste modified with the complex chloro-5,10,15,20-tetrakis(pentafluorophenyl)-21H,23H-porphyrin iron(III), which has a chemical structure similar to the active site of the enzyme P450. The sensor gave its best responses in 0.20 mol L-1 phosphate buffer at pH 6.0, using square-wave voltammetry at 50 Hz, an amplitude of 150 mV and a ΔE of 1.5 mV. With the optimized parameters, the sensor showed limits of detection and quantification of 1.9 and 5.2 μmol L-1, respectively. Studies conducted to verify the biomimetic behaviour of the sensor included evaluation of the influence of scan rate in cyclic voltammetry, verification of the hyperbolic profile of the sensor response, and evaluation of selectivity. The sensor was satisfactorily applied to the analysis of different samples of environmental interest. Recovery experiments in samples of soil, surface water and groundwater gave values of 91%, 107% and 96%, respectively, showing that the sensor can be used as an alternative method for the quantification of TCP in different matrices. The sensor was also used to monitor the efficiency of molecularly imprinted polymers (MIP) for TCP. In order to obtain the most efficient biomimetic polymer for this analyte, computational tools were used to select the best monomer (acrylonitrile)... (Complete abstract: click electronic access below)
Master's

4

Santos, André Luiz dos [UNESP]. "Desenvolvimento de sistem biomimético para análise de 3,5,6-Tricloro-2-piridinol, o principal metabólito do clorpirifós" [Development of a biomimetic system for the analysis of 3,5,6-trichloro-2-pyridinol, the main metabolite of chlorpyrifos]. Universidade Estadual Paulista (UNESP), 2012. http://hdl.handle.net/11449/97833.

5

Паржин, Юрій Володимирович. "Моделі і методи побудови архітектури і компонентів детекторних нейроморфних комп'ютерних систем" [Models and methods for constructing the architecture and components of detector neuromorphic computer systems]. Thesis, НТУ "ХПІ", 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/34755.

Abstract:
Dissertation for the degree of Doctor of Technical Sciences in specialty 05.13.05 – Computer systems and components. – National Technical University "Kharkiv Polytechnic Institute", Ministry of Education and Science of Ukraine, Kharkiv, 2018. The thesis is devoted to the problem of increasing the efficiency of building and using neuromorphic computer systems (NCS) by developing models of their components and overall architecture, together with training methods, based on a formalized detector principle. Analysis and classification of NCS architectures and components established that the connectionist paradigm of constructing artificial neural networks underlies all of their neural-network implementations. As an alternative to the connectionist paradigm, the detector principle of constructing the NCS architecture and its components was substantiated and formalized; it rests on the established property of binding between the elements of the input signal vector and the corresponding weighting coefficients of an NCS neuroelement. On the basis of the detector principle, multi-segment threshold information models of the components of a detector NCS (DNCS) were developed: detector blocks, analyzer blocks and a novelty block, in which the developed method of counter training forms concepts that determine the necessary and sufficient conditions for their reactions. The counter-training method reduces the training time of a DNCS on practical image-recognition problems to a single epoch and reduces the size of the training sample. In addition, this method solves the stability-plasticity problem of DNCS memory and the problem of overfitting, through self-organization of the map of detector blocks at the secondary level of information processing under the control of the novelty block. A model of the DNCS network architecture was developed that consists of two layers of neuromorphic components at the primary and secondary levels of information processing and reduces the number of components the system requires. To substantiate the increased efficiency of constructing and using an NCS based on the detector principle, software models of a DNCS were developed for automated monitoring and analysis of the external electromagnetic environment and for recognition of the handwritten digits of the MNIST database. The results of studying these systems confirmed the correctness of the theoretical propositions of the dissertation and the high efficiency of the developed models and methods.

6

Паржин, Юрій Володимирович. "Моделі і методи побудови архітектури і компонентів детекторних нейроморфних комп'ютерних систем" [Models and methods for constructing the architecture and components of detector neuromorphic computer systems]. Thesis, НТУ "ХПІ", 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/34756.

7

Cong, Jie. "Nonlinearity Detection Using Penalization-Based Principle." Thesis, The George Washington University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10927993.

Abstract:

When constructing a statistical model, nonlinearity detection has always been an interesting topic and a difficult problem. To balance the precision of parametric modeling against the robustness of nonparametric modeling, semi-parametric modeling has shown very good performance. A specific example, spline fitting, can estimate nonlinear patterns very well. However, as the number of spline bases grows, the method can generate a large number of parameters to estimate, especially in the multi-dimensional case. It has been proposed in the literature to treat the additional slopes of the spline bases as random terms, so that those slopes can be controlled with a single variance term. The semi-parametric model then becomes a linear mixed-effect problem.

High-dimensional data have become a serious computational burden, especially when it comes to nonlinearity, so a good dimension-reduction technique is needed. Methods such as LASSO-type penalties perform very well in linear regression: the traditional LASSO adds a constraint on the slopes of the model, under which parameters can be shrunk to 0. Here we extend that method to semi-parametric spline fitting, making it possible to reduce the dimensionality of the nonlinearity. The problem of nonlinearity detection is thereby transformed into a model selection problem. The penalty is placed on the variance terms that control the nonlinearity in each dimension; as the penalty limit changes, variance terms can be shrunk to 0, and when a variance term is reduced to 0 the nonlinear part of that dimension is removed from the model. AIC/BIC criteria are used to choose the final model. This problem is very challenging, since formal testing is almost impossible because the null value of a variance term lies on the boundary of the parameter space.

The method is further extended to the generalized additive model. Quasi-likelihood is adopted to simplify the problem, making it similar to the partially linear additive case. LASSO-type penalties are again applied to the variance components of each dimension, making dimension reduction possible for the nonlinear terms. Conditional AIC/BIC is used to select the model.
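
The flavor of this construction can be sketched in a few lines of Python. The sketch below is illustrative only and is not the dissertation's estimator: it applies an ordinary LASSO directly to truncated-line spline coefficients rather than penalizing the variance components of a mixed model, and the knots and tuning values are invented for the example.

# Illustrative sketch (not the dissertation's estimator): fit a linear term
# plus truncated-line spline bases and L1-penalize the spline coefficients.
# If every spline coefficient shrinks to zero, the covariate is effectively
# declared linear; any surviving spline term signals nonlinearity.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 300
x = rng.uniform(0, 1, n)
y = 2 * x + 0.5 * np.sin(4 * np.pi * x) + rng.normal(0, 0.2, n)  # nonlinear truth

knots = np.linspace(0.1, 0.9, 9)
X = np.column_stack([x] + [np.clip(x - k, 0, None) for k in knots])

fit = Lasso(alpha=0.01).fit(X, y)
spline_coefs = fit.coef_[1:]                 # coefficients of the spline bases
print("nonlinearity detected:", np.any(np.abs(spline_coefs) > 1e-6))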

The dissertation consists of five parts.

Chapter 1 is a thorough literature review, introducing previous work on semi-parametric modeling, penalized spline fitting, linear mixed-effect modeling, variable selection methods, and generalized nonparametric modeling.

In Chapter 2, the model construction is explained in detail for the one-dimensional case. It includes the derivation of the iteration procedures, a discussion of computational techniques, simulation studies including a power analysis, and a discussion of other parameter estimation methods.

In Chapter 3, the model is extended to the multi-dimensional case. In addition to the model construction, derivation of the iteration procedures, discussion of computational techniques and simulation studies, we present a real-data example using plasma beta-carotene data from a nutritional study. The results show the advantage of nonlinearity detection.

In Chapter 4, generalized additive modeling is considered, with particular focus on the two most commonly used distributions, the Bernoulli and the Poisson. The model is constructed using quasi-likelihood, and two iteration methods are introduced. Simulation studies are performed for both distributions in the one-dimensional and multi-dimensional cases, and a real-data example uses the Pima Indian diabetes study dataset. The results again show the advantage of nonlinearity detection.

In Chapter 5, some possible future directions are discussed. The topics include more complicated covariance structures for the random terms, dimension reduction for linearity and nonlinearity at the same time, a bootstrap method that takes model selection into account, and higher-degree p-spline setups.

8

Petersen, James Vincent. "Investigation into the fundamental principles of fiber optic evanescent sensors." Diss., Virginia Tech, 1990. http://scholar.lib.vt.edu/theses/available/etd-02052007-081233/.

9

Foster, Marc Douglas. "Liquid chromatographic separation and sensing principles with a water only mobile phase." Thesis, University of Washington, 1996. http://hdl.handle.net/1773/8503.

10

Lu, Wei. "A method for automated landmark constellation detection using evolutionary principal components and statistical shape models." Thesis, University of Iowa, 2010. https://ir.uiowa.edu/etd/851.

Abstract:
Medical imaging technologies such as MRI, CT and PET enable the use of high-resolution 3D digital image data for research and clinical treatment. The new technologies provide improved spatial resolution at the cost of increased data processing time. Manual identification of anatomical landmarks is still common practice in many neuroimaging and other medical imaging applications, but it is labor-intensive, subjective, and suffers from intra-/inter-rater inconsistency. This work explored one way of estimating a landmark constellation automatically, consistently, and efficiently, demonstrating how image processing can be used effectively to tackle clinical challenges. It is shown that spatial localization using linear-model prediction with evolutionary principal components, cooperating with local search estimation using statistical shape models, can effectively extract landmark-detection information from both the morphometric relationships of the landmarks and the consistent intensity distribution of the images. The method is accurate (comparable to the 1.6 mm root-mean-squared error of manual labeling of brain landmarks), consistent, and reliable in predicting many salient midbrain point landmarks such as ac, pc and MPJ in a longitudinal, multi-subject environment, across large datasets with different modalities and image properties such as orientation, spacing, and origin. The framework of linear-model estimation using evolutionary principal components, together with the idea of local search using statistical shape models, generalizes to detecting an arbitrary number of landmarks in other organs, creatures, or any other physical objects, as long as the landmarks show intensity consistency and regularity in their spatial organization.
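
As a rough illustration of the linear-model-on-principal-components idea only (the thesis's evolutionary PCA and shape-model refinement are omitted, and all data and names below are synthetic stand-ins), one can regress a target landmark on the PCA scores of the known landmarks:

# Toy sketch: predict one 3-D landmark from PCA scores of known landmarks.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_subjects, n_known = 100, 5
base = rng.normal(0, 10, (n_known, 3))              # mean known-landmark positions
known = base + rng.normal(0, 1, (n_subjects, n_known, 3))
target = known.mean(axis=1) + rng.normal(0, 0.5, (n_subjects, 3))

scores = PCA(n_components=4).fit_transform(known.reshape(n_subjects, -1))
model = LinearRegression().fit(scores[:-1], target[:-1])  # train on all but one

pred = model.predict(scores[-1:])                   # predict the held-out subject
print("RMS error:", np.sqrt(np.mean((pred - target[-1:]) ** 2)))
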
11

Anderson, William Boyd. "Detection of lubricating film breakdown in mechanical seals." Thesis, Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/20148.

12

Mahmood, Muhammad Tariq. "Face Detection by Image Discriminating." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4352.

Abstract:
Human face recognition systems have gained considerable attention during the last few years, with many applications relating to security, sensitivity and secrecy. Face detection is the first and most important step of a recognition system. The human face is non-rigid and varies greatly with image conditions, size, resolution, pose and rotation, so its accurate and robust detection has been a challenge for researchers. Many methods and techniques have been proposed, but because of the huge number of variations no single technique succeeds on all kinds of faces and images: some methods give good results under certain conditions while others work better with different kinds of images. Image-discriminating techniques are widely used for pattern and image analysis, and the common discriminating methods are discussed.

13

Ledaguenel, Patrick. "Detection des lithiases de la voie biliaire principale : proposition d'un score prédictif" [Detection of stones in the common bile duct: proposal of a predictive score]. Bordeaux 2, 1994. http://www.theses.fr/1994BOR23010.

14

Motloung, Setumo Victor. "Intense pulsed neutron generation based on the principle of Plasma Immersion Ion Implantation (PI3) technique." Thesis, University of the Western Cape, 2006. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_9599_1182748458.

Abstract:

The development of a deuterium-deuterium / deuterium-tritium (D-D/D-T) pulsed neutron generator based on the principle of the Plasma Immersion Ion Implantation (PI3) technique is presented, as an investigation into a compact system that generates ultra-short bursts of mono-energetic neutrons (of order 10^10 per second) within a short period of time (< 20 μs) at repetition rates up to 1 kHz. The system will facilitate neutron detection techniques such as neutron back-scattering, neutron radiography and time-of-flight activation analysis.

Aspects addressed in developing the system include (a) characterizing the neutron spectra generated as a function of the target configuration/design, to ensure a sustained intense neutron flux over long periods of time, and (b) characterizing the system as a function of power-supply operating conditions such as voltage, current, gas pressure and plasma density.

15

Khwambala, Patricia Helen. "The importance of selecting the optimal number of principal components for fault detection using principal component analysis." Master's thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/11930.

Abstract:
Fault detection and isolation are the two fundamental building blocks of process monitoring, and accurate, efficient process monitoring increases plant availability and utilization. Principal component analysis (PCA) is one of the statistical techniques used for fault detection, and the number of PCs retained plays a big role in detecting a fault with the PCA technique. This dissertation focuses on methods of determining the number of PCs to retain for accurate and effective fault detection in a laboratory thermal system. The SNR method of determining the number of PCs, a relatively recent method, is compared to two commonly used methods for the same task, the CPV and scree-test methods.
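
For reference, the cumulative-percent-variance (CPV) rule mentioned here fits in a few lines; the 90% cut-off below is a conventional textbook value, not necessarily the dissertation's choice:

# Choose the number of retained PCs by cumulative percent variance (CPV).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 8))  # correlated process data

pca = PCA().fit(X)
cpv = np.cumsum(pca.explained_variance_ratio_) * 100     # CPV per component
n_pcs = int(np.argmax(cpv >= 90.0)) + 1                  # first component reaching 90%
print("PCs retained at the 90% CPV cut-off:", n_pcs)
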
16

Veeravalli, Murali Srinidhi. "A microfluidic Coulter counting device for metal wear detection in lubrication oil." Akron, OH : University of Akron, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=akron1226866175.

17

Alturki, Abdulrahman S. "Principal Point Determination for Camera Calibration." University of Dayton / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1500326474390507.

18

Lehnert, Simon, and Christian Leibold [academic supervisor]. "Biophysical principles underlying binaural coincidence detection: computational approaches." München: Universitätsbibliothek der Ludwig-Maximilians-Universität, 2015. http://d-nb.info/1082504750/34.

19

Aguirre, Jurado Ricardo. "Resilient Average and Distortion Detection in Sensor Networks." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/962.

Abstract:
In this work a resilient sensor network is built in order to lessen the effect of a small portion of corrupted sensors when an aggregated result, such as the average, needs to be obtained. By examining the variance in sensor readings, a change in the pattern can be spotted and minimized in order to maintain a stable aggregated reading. Offsets in sensor readings are also analyzed and compensated for, to help reduce a bias change in the average. These two analytical techniques are then combined in a Kalman filter to produce a smooth and resilient average from the readings of the individual sensors. In addition, principal component analysis is used to detect variations in the sensor network. Experiments were conducted with real sensors, called MICAz, which gather light measurements in a small area and report the average light level generated in that area.
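
A minimal scalar Kalman filter of the kind described, smoothing successive network averages, might look like the sketch below; the process and measurement variances q and r are hypothetical tuning constants, not values from the thesis.

# Minimal scalar Kalman filter over successive sensor-network averages.
# Process/measurement variances q and r are illustrative tuning constants.
import numpy as np

def kalman_average(readings, q=1e-4, r=0.5):
    x, p = readings[0], 1.0          # initial state estimate and its variance
    out = [x]
    for z in readings[1:]:
        p = p + q                    # predict: variance grows by process noise
        k = p / (p + r)              # Kalman gain: trust in the new measurement
        x = x + k * (z - x)          # update toward the measured average
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(3)
light = 400 + rng.normal(0, 5, 100)
light[40:45] += 80                   # a few corrupted sensor readings
print(kalman_average(light)[38:48].round(1))
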
20

Lee-Davey, Jon. "Application of machine olfaction principles for the detection of high voltage transformer oil degradation." Thesis, Cranfield University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.405216.

21

Buswell, Richard A. "Uncertainty in the first principle model based condition monitoring of HVAC systems." Thesis, Loughborough University, 2001. https://dspace.lboro.ac.uk/2134/7559.

Abstract:
Model-based techniques for automated condition monitoring of HVAC systems have been under development for some years. Results from applying these methods to systems installed in real buildings have highlighted robustness and sensitivity issues, and the generation of false alarms has been identified as a principal factor limiting the usefulness of condition monitoring in HVAC applications. The robustness issue is a direct result of the uncertain measurements and the lack of experimental control that are characteristic of HVAC systems. This thesis investigates the uncertainties associated with implementing a condition monitoring scheme, based on simple first-principles models, in HVAC subsystems installed in real buildings. The uncertainties present in typical HVAC control system measurements are evaluated. A sensor validation methodology is developed and applied to a cooling-coil subsystem installed in a real building. The uncertainty in steady-state analysis based on transient data is investigated. The uncertainties in the simplifications and assumptions associated with deriving simple first-principles models of heat exchangers are established. A subsystem model is developed and calibrated to the test system, and the relationship between the uncertainties in the calibration data and the parameter estimates is investigated. The uncertainties from all sources are then evaluated and used to generate a robust indication of the subsystem condition. The sensitivity and robustness of the scheme are analysed using faults implemented in the test system during summer, winter and spring conditions.
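
At its core, such a scheme compares measurements against first-principles predictions and raises an alarm only when the residual exceeds the combined uncertainty. The sketch below is schematic only; the 2-sigma bound and all numbers are invented for illustration.

# Schematic robust fault indicator: flag a fault only when the residual
# between measurement and first-principles prediction exceeds its combined
# uncertainty bound. The 2-sigma factor and all values are illustrative.
import numpy as np

def fault_flags(measured, predicted, u_meas, u_model, k=2.0):
    residual = measured - predicted
    bound = k * np.sqrt(u_meas ** 2 + u_model ** 2)  # combined uncertainty
    return np.abs(residual) > bound

t_meas = np.array([12.1, 12.3, 14.9, 12.0])  # measured coil outlet temp, degC
t_pred = np.array([12.0, 12.2, 12.1, 12.1])  # model-predicted temp, degC
print(fault_flags(t_meas, t_pred, u_meas=0.3, u_model=0.4))  # only the 14.9 flags
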
22

Möllberg, Andreas. "Investigation of the principle of flame rectification in order to improve detection of the propane flame in absorption refrigerators." Thesis, Linköping University, The Department of Physics, Chemistry and Biology, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-488.

Abstract:

The electrical properties of a propane flame were investigated to improve detection of the flame in absorption refrigerators. The principle of flame rectification, which uses the diode property of the flame, was studied. A DC voltage in the range 0–130 V was applied between the burner and an electrode in the flame, and the current through the flame in the forward and reverse directions was measured. These measurements were performed with the electrode tip in different horizontal and vertical positions. AC voltages at various frequencies were also applied and the average current through the flame was measured.

A linear relation was found between the applied DC voltage and the current through the flame, which means that the resistance, in the investigated voltage range, is independent of the applied voltage. The resistance in the forward direction was almost constant for different electrode positions, but the reverse resistance varied by many hundred MΩ when the electrode was moved vertically away from the burner. The gas flow also influenced the reverse resistance to a large extent.

23

Yano, Daisuke. "Application of a contact potential difference probe to detection of nanometer-scale lubricant on a hard disk." Thesis, Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/17528.

24

Tejada, Gamero Enrique David. "Object detection in videos using principal component pursuit and convolutional neural networks." Master's thesis, Pontificia Universidad Católica del Perú, 2018. http://tesis.pucp.edu.pe/repositorio/handle/123456789/11982.

Abstract:
Object recognition in videos is one of the main challenges in computer vision. Several methods have been proposed to achieve this task, such as background subtraction, temporal differencing, optical flow, and particle filtering, among others. Since the introduction of Convolutional Neural Networks (CNN) for object detection in the ImageNet Large Scale Visual Recognition Competition (ILSVRC), their use for image detection and classification has increased, becoming the state of the art for such tasks, with Faster R-CNN the preferred model in the latest ILSVRC challenges. Moreover, the Faster R-CNN model, with minimal modifications, has been successfully used to detect and classify objects (either static or dynamic) in video sequences; in such a setup, the frames of the video are input "as is", i.e. without any pre-processing. In this thesis we propose to use Robust PCA (RPCA, a.k.a. Principal Component Pursuit, PCP) as a video background modeling pre-processing step before the Faster R-CNN model, in order to improve the overall performance of detection and classification of, specifically, the moving objects. We hypothesize that such a pre-processing step, which segments the moving objects from the background, reduces the number of regions to be analyzed in a given frame and thus (i) improves the classification time and (ii) reduces the classification error for the dynamic objects present in the video. In particular, we use a fully incremental RPCA/PCP algorithm that is suitable for real-time or on-line processing. Furthermore, we present extensive computational results carried out on three different platforms: a high-end server with a Tesla K40m GPU, a desktop with a Tesla K10m GPU, and the embedded system Jetson TK1. Our classification results attain competitive or superior performance in terms of F-measure, achieving an improvement ranging from 3.7% to 97.2%, with a mean improvement of 22%, when the sparse image was used to detect and classify the object with the neural network, while at the same time reducing the classification time on all architectures by a factor ranging between 2% and 25%.
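
For orientation, a minimal batch PCP can be written with alternating singular-value and soft thresholding, as sketched below. This is only a conceptual stand-in: the thesis uses a fully incremental RPCA/PCP algorithm, and the step-size heuristic and iteration count here are common textbook choices, not the thesis's.

# Minimal (batch, non-incremental) Principal Component Pursuit via
# alternating singular-value thresholding and soft thresholding.
import numpy as np

def pcp(M, n_iter=100):
    lam = 1.0 / np.sqrt(max(M.shape))
    mu = 0.25 * M.size / np.abs(M).sum()          # common heuristic step size
    S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        # L-step: singular-value thresholding of (M - S + Y/mu)
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1 / mu, 0)) @ Vt
        # S-step: elementwise soft thresholding isolates the moving objects
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        Y = Y + mu * (M - L - S)                  # dual update
    return L, S

# Each column is one vectorized grayscale frame; S then holds moving objects.
frames = np.random.rand(64 * 48, 30)
low_rank, sparse = pcp(frames)
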
25

Bertils, Joakim. "Implementation of Principal Component Analysis For Use in Anomaly Detection Using CUDA." Thesis, Linköpings universitet, Programvara och system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160475.

Abstract:
As more and more systems are connected, there is great benefit in being able to find and predict problems in a monitored process: by analyzing the data in real time, feedback can be generated to the operators or to the process itself, allowing the process to correct itself. This thesis implements and evaluates three CUDA GPU implementations of principal component analysis, used for dimensionality reduction of multivariate data sets in real time, to explore the trade-offs of the implementations in terms of speed, energy and accuracy. The GPU implementations are compared to reference implementations on the CPU. The study finds that the covariance-based method is the fastest of the implementations for the tested configurations, but the iterative NIPALS implementation has some interesting optimization opportunities that are explored. For large enough data sets, a speedup of around 100 relative to the 8-virtual-core CPU is obtained for the GPU implementations, making them an option to investigate for problems requiring real-time computation of principal components.
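
For readers unfamiliar with the two algorithms compared, the sketch below gives a plain CPU reference version of the iterative NIPALS procedure (the covariance method instead eigendecomposes the covariance matrix); it is a simplified illustration, not the thesis's CUDA code.

# CPU reference sketch of NIPALS: extract the first k principal components
# by alternating projections, deflating the data after each component.
import numpy as np

def nipals(X, k=2, tol=1e-8, max_iter=500):
    X = X - X.mean(axis=0)                 # center the data
    scores, loadings = [], []
    for _ in range(k):
        t = X[:, [0]]                      # initial score vector
        for _ in range(max_iter):
            p = X.T @ t / (t.T @ t)        # project onto current scores
            p /= np.linalg.norm(p)         # normalize the loading vector
            t_new = X @ p
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        X = X - t @ p.T                    # deflate: remove this component
        scores.append(t.ravel()); loadings.append(p.ravel())
    return np.array(scores).T, np.array(loadings).T

T, P = nipals(np.random.default_rng(4).normal(size=(200, 10)), k=3)
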
26

Merrill, Nicholas Swede. "Modified Kernel Principal Component Analysis and Autoencoder Approaches to Unsupervised Anomaly Detection." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98659.

Abstract:
Unsupervised anomaly detection is the task of identifying examples that differ from the normal or expected pattern without the use of labeled training data. Our research addresses shortcomings in two existing anomaly detection algorithms, Kernel Principal Component Analysis (KPCA) and Autoencoders (AE), and proposes novel solutions to improve both of their performances in the unsupervised setting. Anomaly detection has several useful applications, such as intrusion detection, fault monitoring, and vision processing; more specifically, it can be used in autonomous driving to identify obscured signage or to monitor intersections. Kernel techniques are desirable because of their ability to model highly non-linear patterns, but they are limited in the unsupervised setting by their sensitivity to parameter choices and the absence of a validation step. Additionally, conventional KPCA suffers from quadratic time and memory complexity in the construction of the Gram matrix and cubic time complexity in its eigendecomposition. The problem of tuning the Gaussian kernel parameter, σ, is solved using mini-batch stochastic gradient descent (SGD) optimization of a loss function that maximizes the dispersion of the kernel matrix entries. Secondly, the computational time is greatly reduced, while high accuracy is maintained, by using an ensemble of small, skeleton models and combining their scores. The performance of traditional machine learning approaches to anomaly detection plateaus as the volume and complexity of data increase. Deep anomaly detection (DAD) involves the application of multilayer artificial neural networks to identify anomalous examples, and AEs are fundamental to most DAD approaches. Conventional AEs rely on the assumption that a trained network will learn to reconstruct normal examples better than anomalous ones; in practice, however, given sufficient capacity and training time, an AE will generalize to reconstruct even very rare examples. Three methods are introduced to train AEs more reliably for unsupervised anomaly detection: Cumulative Error Scoring (CES) leverages the entire history of training errors to minimize the importance of early stopping; Percentile Loss (PL) training aims to prevent anomalous examples from contributing to parameter updates; and early stopping via knee detection aims to limit the risk of overtraining. Ultimately, the two modified methods proposed in this research, Unsupervised Ensemble KPCA (UE-KPCA) and the modified training and scoring AE (MTS-AE), demonstrate improved detection performance and reliability compared to many baseline algorithms across a number of benchmark datasets.
Master of Science
Anomaly detection is the task of identifying examples that differ from the normal or expected pattern. The challenge of unsupervised anomaly detection is distinguishing normal and anomalous data without labeled examples to demonstrate their differences. This thesis addresses shortcomings in two anomaly detection algorithms, Kernel Principal Component Analysis (KPCA) and Autoencoders (AE), and proposes new solutions to apply them in the unsupervised setting. Ultimately, the two modified methods, Unsupervised Ensemble KPCA (UE-KPCA) and the Modified Training and Scoring AE (MTS-AE), demonstrate improved detection performance and reliability compared to many baseline algorithms across a number of benchmark datasets.
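
The kernel-width selection idea, choosing σ to maximize the dispersion of the Gaussian kernel matrix entries, can be illustrated with a simple grid search; the thesis optimizes this with mini-batch SGD, and the data and candidate grid below are invented.

# Sketch: pick the Gaussian kernel width sigma that maximizes the variance
# (dispersion) of the off-diagonal kernel matrix entries.
import numpy as np
from scipy.spatial.distance import pdist

def kernel_dispersion(X, sigma):
    d2 = pdist(X, "sqeuclidean")           # pairwise squared distances
    k = np.exp(-d2 / (2 * sigma ** 2))     # off-diagonal Gaussian kernel entries
    return k.var()

X = np.random.default_rng(5).normal(size=(200, 6))
sigmas = np.logspace(-1, 1, 25)
best = max(sigmas, key=lambda s: kernel_dispersion(X, s))
print("selected sigma:", round(best, 3))
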
27

Abuasbeh, Mohammad. "Fault Detection and Diagnosis for Brine to Water Heat Pump Systems." Thesis, KTH, Tillämpad termodynamik och kylteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-183595.

Abstract:
The overall objective of this thesis is to develop fault detection and diagnosis methods for ground-source heat pumps that servicemen can use to accurately detect and diagnose faults during heat pump operation. The work focuses on two methods: a sensitivity-ratio method and a data-driven method using principal component analysis. For the sensitivity-ratio model, two semi-empirical models of the heat pump unit were built to simulate fault-free and faulty conditions, and both were cross-validated against fault-free experimental data. The fault-free model is used as a reference. A fault trend analysis is then performed to select a pair of uniquely sensitive and insensitive parameters from which the sensitivity ratio for each fault is calculated. When the sensitivity-ratio value for a certain fault drops below a predefined value, that fault is diagnosed and an alarm message for that fault appears. Simulated fault data were used to test the model, and it successfully detected and diagnosed the tested fault types under different operating conditions. In the second method, principal component analysis is used to derive linear combinations of the original variables and compute the principal components, reducing the dimensionality of the system. A simple clustering technique is then used for operating-condition classification and for the fault detection and diagnosis process. Each fault is represented by four clusters connected by three lines, where each cluster represents a different fault intensity level. Fault detection is performed by measuring the shortest orthogonal distance between the test point and the lines connecting the fault clusters. Simulated fault-free and faulty data were used to train the model; a new set of simulated fault data was then used to test it, and the model successfully detected and diagnosed the fault type and intensity level of every tested fault under different operating conditions. Both models use only seven temperature measurements, two pressure measurements (from which the condensation and evaporation temperatures are calculated) and the electrical power as inputs to the fault detection and diagnosis model, to reduce cost and ease implementation. Finally, a user-friendly graphical user interface was built for each model to facilitate its operation by the serviceman.
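
The diagnosis step of the second method reduces to a nearest-line search in principal-component space, which the sketch below illustrates; the fault names, cluster centers and test point are invented stand-ins for the trained clusters.

# Sketch of the diagnosis step: report the fault whose cluster polyline lies
# closest (in orthogonal distance) to the test point.
import numpy as np

def point_to_segment(p, a, b):
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)   # clamp to the segment
    return np.linalg.norm(p - (a + t * ab))

# Two faults, each with cluster centers at increasing intensity (2-D PC space).
faults = {
    "refrigerant leak": np.array([[0, 0], [1, 0.5], [2, 1.2], [3, 2.0]]),
    "fouled condenser": np.array([[0, 0], [-0.5, 1], [-1.2, 2.1], [-2, 3.2]]),
}
test_point = np.array([1.4, 0.9])
dist = {
    name: min(point_to_segment(test_point, c[i], c[i + 1]) for i in range(len(c) - 1))
    for name, c in faults.items()
}
print("diagnosis:", min(dist, key=dist.get))
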
28

Halligan, Gary. "Fault detection and prediction with application to rotating machinery." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2009. http://scholarsmine.mst.edu/thesis/pdf/Halligan_09007dcc80708356.pdf.

29

Davenport, Timothy M. "Early Forest Fire Detection using Texture Analysis of Principal Components from Multispectral Video." DigitalCommons@CalPoly, 2012. https://digitalcommons.calpoly.edu/theses/795.

Abstract:
The aim of this study is to incorporate the spectral, temporal and spatial attributes of a smoke plume for early forest fire detection. Image processing techniques are used on multispectral (red, green, blue, mid-wave infrared, and long-wave infrared) video to segment and identify the presence of a smoke plume within a scene. The temporal and spectral variance of the plume is captured through Principal Component Analysis (PCA), where multispectral-multitemporal PCA is performed on a sequence of video frames simultaneously. The presence of a plume in one of the higher-order principal components is determined from the texture of its spatial content. The texture is characterized by statistical descriptors derived from the principal component's joint probability density of intensities occurring within a spatial relationship, known as a Gray Level Co-occurrence Matrix (GLCM). Initial analysis is performed on selected frames, considering only a subset of time; once the parameters are chosen from this static analysis, the algorithms are executed on video through time to validate the method. The results show that a smoke plume is readily segmented via PCA. Based on the five spectral bands over 3 seconds sampled at 1 second, the plume appears in the 7th principal component. Within these principal components, the smoke's presence is best identified by the correlation texture descriptor: the smoke is very spatially correlated compared to the scene at large, so a spike in the spatial correlation of the principal components is all that is needed to identify the start of the smoke plume.
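
The GLCM correlation descriptor used here is available in scikit-image, as sketched below on a stand-in principal-component image (the functions are named greycomatrix/greycoprops in older scikit-image releases; the distance, angle and data are illustrative):

# GLCM correlation of a principal-component image, the texture descriptor the
# study uses to flag smoke regions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(6)
pc_image = (rng.random((64, 64)) * 255).astype(np.uint8)  # stand-in PC image

glcm = graycomatrix(pc_image, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
corr = graycoprops(glcm, "correlation")[0, 0]
print("spatial correlation:", round(corr, 3))   # a spike here flags smoke onset
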
30

Ostruška, Jan. "Ochrany při zemních spojeních." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-221169.

Abstract:
The thesis deals with the wattmetric and conductance principles of faulted-feeder detection in medium-voltage compensated distribution networks. These principles are analysed and tested with particular emphasis on high-impedance earth faults. The first part of the testing uses data about the state of a system with an earth fault, determined by a purpose-built static model of the earth fault; the second part uses real records of high-impedance earth faults. In both parts, the wattmetric and conductance protections were exercised on the protective relays ABB REF 615, ABB REM 543 and Protection&Consulting RYo by means of an OMICRON CMC 256plus unit. The major results of the tests are records of the detection of particular earth faults. Based on these records it can be concluded that the functionality of the protections depends substantially on the magnitude of the zero-sequence voltage, and that the wattmetric protections also depend on the fault resistance.
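
For context, the two relay criteria can be written schematically in textbook form (the exact characteristics and thresholds depend on the relay settings, and the symbols below are ours, not the thesis's):

P_0 = U_0 \, I_0 \cos\varphi_0 > P_{\mathrm{set}}, \qquad G_0 = \operatorname{Re}\!\left(\frac{I_0}{U_0}\right) > G_{\mathrm{set}}

where U_0 and I_0 are the zero-sequence voltage and current, φ_0 is the angle between them, and P_set and G_set are the relay pick-up settings; a feeder is declared faulted when its criterion exceeds the setting.
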
31

Koehl, Marie. "La négociation en droit des entreprises en difficulté." Thesis, Paris 10, 2019. http://www.theses.fr/2019PA100016/document.

Abstract:
At first glance, it may seem surprising to focus on negotiation in insolvency law, since this branch of law is marked by the seal of public order. However, the logic of dialogue between the debtor and his creditors is increasingly observed in most of the procedures offered to the debtor to deal with his difficulties. The legislator's perspectives have changed: it is no longer just a question of sanctioning, but more of preventing difficulties and safeguarding companies. This evolution has given rise to the desire to understand the current phenomenon of negotiation in its effects on the law of companies in difficulty. The aim was to determine, in the texts, the reality of the negotiations and, as a counterpoint, the real extent of the judge's power. The promotion of the negotiation process in dealing with business difficulties has, on the one hand, upset the balances within the procedures: negotiation has been strengthened in procedures that were originally judicial and collective and in which unilateralism was prevalent, and conversely, mutual-agreement procedures are more judicial in nature than before. As a result, the dividing line between amicable and judicial proceedings is less clear than in the past. The development of negotiation has, on the other hand, also shifted the balance between the actors: in the search for a solution to the company's difficulties, the debtor and his creditors are placed at the forefront. The changes brought about by integrating negotiation into the law of companies in difficulty also alter the values traditionally attached to the subject, and traditional principles such as the equality of creditors are attenuated. However, these changes above all make the law more balanced and more attractive. If the judge's traditional office seems altered, his power is correspondingly strengthened: the negotiation process requires a strict legal framework and significant judicial control to guarantee the fundamental rights of the parties. Above all, the debtor and his creditors will more easily accept a solution that is under their control. This development shows a law increasingly based on the idea of trust. Thus, because of its many known advantages, the amicable route may yet work its charms on the French legislator.

APA, Harvard, Vancouver, ISO, and other styles
32

Moussa, Georges Fouad. "EARLY FOREST FIRE DETECTION USING TEXTURE, BLOB THRESHOLD, AND MOTION ANALYSIS OF PRINCIPAL COMPONENTS." DigitalCommons@CalPoly, 2012. https://digitalcommons.calpoly.edu/theses/881.

Full text
Abstract:
Forest fires constantly threaten ecological systems, infrastructure, and human lives. The purpose of this study is to minimize the devastating damage caused by forest fires. Since it is impossible to completely avoid their occurrence, a fast and appropriate intervention is essential to limit their destructive consequences. The most traditional method for detecting forest fires is human surveillance from lookout towers; this study presents a more modern technique. It utilizes land-based, real-time multispectral video processing to identify and assess the possibility of fire within the camera's field of view, exploiting the temporal, spectral, and spatial signatures of fire. The methods discussed include: (1) range filtering followed by entropy filtering of infrared (IR) video data, and (2) principal component analysis of visible-spectrum video data followed by motion analysis and adaptive intensity thresholding. The two schemes are tailored to detect the fire core and the smoke plume, respectively. A cooled midwave infrared (IR) camera is used to capture the heat distribution within the field of view. The fire core is then isolated using texture-analysis techniques: range filtering is first applied to two consecutive IR frames, followed by entropy filtering of their absolute difference. Since smoke is the earliest sign of fire, this study also explores multiple techniques for detecting smoke plumes in a given scene. The spatial and temporal variance of the smoke plume is captured using temporal Principal Component Analysis, PCA. The results show that a smoke plume is readily segmented via PCA applied to the visible blue band over 2 seconds, sampled every 0.2 seconds. The smoke plume appears in the 2nd principal component and is finally identified, segmented, and isolated using either motion analysis or an adaptive intensity threshold. Experimental results obtained in this study show that the proposed system can detect smoke effectively at a distance of approximately 832 meters with a low false-alarm rate and short reaction time. Deployed, such a system would achieve early forest fire detection, minimizing fire damage. Keywords: Image Processing, Principal Component Analysis, PCA, Principal Component, PC, Texture Analysis, Motion Analysis, Multispectral, Visible, Cooled Midwave Infrared, Smoke Signature, Gaussian Mixture Model.
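As a rough illustration of the texture-analysis stage described above, the following sketch range-filters two consecutive IR frames and entropy-filters their absolute difference. It is a minimal Python reconstruction, not the thesis's code; the window sizes, bin count, and threshold are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import generic_filter

    def range_filter(img, size=3):
        # Local range: max - min within a sliding window.
        return generic_filter(img, lambda w: w.max() - w.min(), size=size)

    def entropy_filter(img, size=9, bins=32):
        # Shannon entropy of the local intensity histogram (values in [0, 1]).
        def local_entropy(w):
            hist, _ = np.histogram(w, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            return -(p * np.log2(p)).sum()
        return generic_filter(img, local_entropy, size=size)

    def fire_core_mask(ir_prev, ir_curr, thresh=2.0):
        # Range-filter two consecutive IR frames, then entropy-filter
        # the absolute difference; high local entropy flags the fire core.
        diff = np.abs(range_filter(ir_curr) - range_filter(ir_prev))
        diff = diff / (diff.max() + 1e-12)   # normalize for histogramming
        return entropy_filter(diff) > thresh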
APA, Harvard, Vancouver, ISO, and other styles
33

Garges, David Casimir. "Early Forest Fire Detection via Principal Component Analysis of Spectral and Temporal Smoke Signature." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1456.

Full text
Abstract:
The goal of this study is to develop a smoke-detection algorithm using digital image processing techniques on multi-spectral (visible & infrared) video. By utilizing principal component analysis (PCA) followed by spatial filtering of principal component images, the location of smoke can be accurately identified over a period of exposure time at a given frame capture rate. This result can be further analyzed, with consideration of wind factor and fire detection range, to determine whether a fire is present within a scene. Infrared spectral data is shown to contribute little information concerning the smoke signature. Moreover, finalized processing techniques focus on the blue spectral band, as it is furthest from the infrared spectral bands and because it experimentally yields the largest footprint in the processed principal component images in comparison to other spectral bands. A frame rate of 0.5 images/sec (1 image every 2 seconds) is determined to be the maximum such that the temporal variance of smoke can be captured. The study also identifies the eigenvectors corresponding to the principal components that best represent smoke, which are valuable indications of the smoke's temporal signature. Raw video data is taken through rigorous pre-processing schemes to align frames from the respective spectral bands both spatially and temporally. A multi-paradigm numerical computing program, MATLAB, is used to match the field of view across five spectral bands: Red, Green, Blue, Long-Wave Infrared, and Mid-Wave Infrared. Extracted frames are aligned temporally from key frames throughout the data capture. This alignment allows for more accurate digital processing of the smoke signature. Clustering analysis on RGB and HSV value systems reveals that color alone is not sufficient to segment smoke: the feature values of trees and other false positives are too closely related to those of smoke at any single instant in time. A temporal principal component transform on the blue spectral band eliminates static false positives and emphasizes the temporal variance of moving smoke in higher-order images. A threshold adjustment is applied to a blurred blue principal component of non-unity order, and smoke results can be finalized using median filtering. The same processing techniques are applied to difference images, as a simpler and more traditional technique for identifying temporal variance, and the results are compared.
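A minimal sketch of the temporal-PCA smoke segmentation described above, assuming a stack of blue-band frames sampled every 2 seconds. The component order, blur width, and threshold rule are illustrative assumptions rather than the thesis's tuned parameters.

    import numpy as np
    from scipy.ndimage import gaussian_filter, median_filter

    def temporal_pca(frames):
        # frames: (T, H, W) stack of blue-band images sampled every 2 s.
        T, H, W = frames.shape
        X = frames.reshape(T, H * W).astype(float)
        X -= X.mean(axis=0)                  # remove per-pixel temporal mean
        # Rows are time samples; the right singular vectors give one
        # spatial principal component image per temporal component.
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        return Vt.reshape(-1, H, W)

    def smoke_mask(frames, order=1, sigma=2.0, k=3.0):
        # Moving smoke concentrates in a higher-order (non-unity)
        # component; blur it, threshold it, then median-filter the mask.
        pcs = temporal_pca(frames)
        pc = gaussian_filter(np.abs(pcs[order]), sigma=sigma)
        mask = pc > pc.mean() + k * pc.std()
        return median_filter(mask, size=5)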
APA, Harvard, Vancouver, ISO, and other styles
34

Aradhye, Hrishikesh Balkrishna. "Anomaly Detection Using Multiscale Methods." The Ohio State University, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=osu989701610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Londono-Vasquez, Douglas. "APPLICATIONS OF THE HARDY-WEINBERG PRINCIPLE TO DETECTION OF LINKAGE DISEQUILIBRIUM AND GENOTYPING ERRORS IN THE CONTEXT OF ASSOCIATION STUDIES." Case Western Reserve University School of Graduate Studies / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=case1181247657.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

FERAUD, RAPHAEL. "Un modele connexionniste utilisant un principe de longueur de description minimale : application a la detection de visages." Rennes 1, 1997. http://www.theses.fr/1997REN10238.

Full text
Abstract:
Neural networks are statistical models that allow numerical learning from examples. A model for training a neural network is developed and applied to face detection. The approach is based on autoassociative neural networks. We show that using this type of network as an estimator requires very strong assumptions about the data. By studying how these assumptions can be weakened, we introduce our model, the constrained generative model: generative, since the goal of training the network is to evaluate the probability that an input was generated by the model; and constrained, because certain counter-examples are used during training to improve the quality of the estimate. To extend detection to different face orientations, three schemes for combining these networks are proposed. Results that notably improve on the state of the art are presented using a conditional mixture of these estimators. Formal neurons are information-processing units. By stating a new formulation of the minimum description length principle, based on Kolmogorov complexity, we show that learning can be reduced to a data-compression problem. Consider an input sequence x^n, an output sequence y^n, and the joint sequence z^n = (x^n, y^n). The question is: can the joint sequence z^n be compressed more efficiently than the sequences x^n and y^n separately? If so, this compression of z^n captures dependencies between x^n and y^n, which is exactly what learning algorithms seek to do. We show that when nothing is gained by compressing x^n and y^n jointly, the series are statistically independent. A constraint based on these information-theoretic developments, which makes it possible to control the generalization ability of a multilayer perceptron, is proposed.
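Kolmogorov complexity is not computable, but the joint-compression criterion above can be illustrated with an ordinary compressor standing in for it. A toy sketch, not the thesis's network-based formulation:

    import zlib

    def clen(b: bytes) -> int:
        # Compressed length as a rough, computable stand-in for
        # Kolmogorov complexity (which is not computable).
        return len(zlib.compress(b, 9))

    def compression_gain(x: bytes, y: bytes) -> int:
        # If compressing (x, y) jointly beats compressing them apart,
        # the compressor has captured a dependency between x and y.
        return clen(x) + clen(y) - clen(x + y)

    # Dependent sequences compress better jointly than independent ones.
    x = bytes(range(256)) * 20
    print(compression_gain(x, x))               # large gain: y duplicates x
    print(compression_gain(x, b"\x07" * 5120))  # small gain: unrelated content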
APA, Harvard, Vancouver, ISO, and other styles
37

Zuzarte, Ian Jeromino. "A Principal Component Regression Analysis for Detection of the Onset of Nocturnal Hypoglycemia in Type 1 Diabetic Patients." University of Akron / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=akron1226955083.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Radjabi, Ryan F. "WILDFIRE DETECTION SYSTEM BASED ON PRINCIPAL COMPONENT ANALYSIS AND IMAGE PROCESSING OF REMOTE-SENSED VIDEO." DigitalCommons@CalPoly, 2016. https://digitalcommons.calpoly.edu/theses/1621.

Full text
Abstract:
Early detection and mitigation of wildfires can reduce devastating property damage, firefighting costs, pollution, and loss of life. This thesis proposes the method of Principal Component Analysis (PCA) of images in the temporal domain to identify a smoke plume in wildfires. Temporal PCA is an effective motion detector, and spatial filtering of the output principal component images can segment the smoke plume region. The effective use of other image processing techniques to identify smoke plumes and heat plumes is compared, and the best attributes of smoke plume detectors and heat plume detectors are evaluated for combination in an improved wildfire detection system. PCA of visible blue images at an image sampling rate of 2 seconds per image effectively exploits a smoke plume signal. PCA of infrared images is the fundamental technique for exploiting a heat plume signal. A system architecture is proposed for the implementation of these image processing techniques, and the real-world deployment and usability of the system are described.
APA, Harvard, Vancouver, ISO, and other styles
39

Marques, Miguel Alexandre Castanheira. "On-line system for faults detection in induction motors based on PCA." Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/8578.

Full text
Abstract:
Dissertation to obtain the degree of Master in Electrical and Computer Engineering
Nowadays in industry there are many processes where human intervention is replaced by electrical machines, especially induction machines, owing to their robustness, performance, and low cost. Although induction machines are highly reliable devices, they are nevertheless susceptible to faults, so monitoring the state of an induction machine is essential to reduce human and financial costs. Faults in induction machines can be divided mainly into two types: electrical faults and mechanical faults. Electrical faults represent between 40% and 50% of reported faults and can be divided essentially into two types: stator unbalances and broken rotor bars. Given the heavy industrial reliance on induction machines and the massive use of automated processes, diagnostic and monitoring systems for these machines are necessary. This work presents an on-line system for the detection and diagnosis of electrical faults in induction motors based on computer-aided monitoring of the supply currents. The main objective is to detect and identify the presence of broken rotor bars and stator short-circuits in the induction motor. The presence of faults in the machine causes characteristic disturbances in the supply currents. Through a stationary reference frame, such as the αβ transform, it is possible to extract and analyze these fault signatures from the supply currents using eigendecomposition.
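A minimal sketch of the αβ-transform monitoring idea, under the common interpretation that a healthy machine traces a near-circular current locus whose 2×2 covariance has equal eigenvalues; the index below is an illustrative assumption, not the dissertation's exact statistic.

    import numpy as np

    def clarke_transform(ia, ib, ic):
        # Power-invariant Clarke (alpha-beta) transform of the
        # three-phase stator currents.
        i_alpha = np.sqrt(2/3) * (ia - 0.5*ib - 0.5*ic)
        i_beta  = np.sqrt(2/3) * (np.sqrt(3)/2) * (ib - ic)
        return i_alpha, i_beta

    def unbalance_index(ia, ib, ic):
        # Eigendecomposition of the 2x2 covariance of (i_alpha, i_beta).
        # A healthy machine traces a near-circle (equal eigenvalues);
        # stator unbalance flattens it into an ellipse.
        i_alpha, i_beta = clarke_transform(ia, ib, ic)
        cov = np.cov(np.vstack([i_alpha, i_beta]))
        w = np.linalg.eigvalsh(cov)          # ascending eigenvalues
        return 1.0 - w[0] / w[1]             # 0 = balanced, -> 1 = unbalanced

    # Illustrative check on a synthetic balanced 50 Hz supply:
    t = np.linspace(0, 1, 5000)
    ia = np.cos(2*np.pi*50*t)
    ib = np.cos(2*np.pi*50*t - 2*np.pi/3)
    ic = np.cos(2*np.pi*50*t + 2*np.pi/3)
    print(unbalance_index(ia, ib, ic))       # close to 0 for a healthy machine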
APA, Harvard, Vancouver, ISO, and other styles
40

Anderson, James. "A comparison of four change detection techniques for two urban areas in the United States." Morgantown, W. Va. : [West Virginia University Libraries], 2002. http://etd.wvu.edu/templates/showETD.cfm?recnum=2371.

Full text
Abstract:
Thesis (M.A.)--West Virginia University, 2002.
Title from document title page. Document formatted into pages; contains ix, 61 p. : col. ill., col. maps. Includes abstract. Includes bibliographical references (p. 40-42).
APA, Harvard, Vancouver, ISO, and other styles
41

Mill, Robert William. "The application of auditory signal processing principles to the detection, tracking and association of tonal components in sonar." Thesis, University of Sheffield, 2008. http://etheses.whiterose.ac.uk/12827/.

Full text
Abstract:
A steady signal exerts two complementary effects on a noisy acoustic environment: one is to add energy, the other is to create order. The ear has evolved mechanisms to detect both effects and encodes the fine temporal detail of a stimulus in sequences of auditory nerve discharges. Taking inspiration from these ideas, this thesis investigates the use of regular timing for sonar signal detection. Algorithms that operate on the temporal structure of a received signal are developed for the detection of merchant vessels. These ideas are explored by reappraising three areas traditionally associated with power-based detection. First of all, a time-frequency display based on timing instead of power is developed. Rather than inquiring of the display, "How much energy has been measured at this frequency? ", one would ask, "How structured is the signal at this frequency? Is this consistent with a target? " The auditory-motivated zero crossings with peak amplitudes (ZCPA) algorithm forms the starting-point for this study. Next, matters related to quantitative system performance analysis are addressed, such as how often a system will fail to detect a signal in particular conditions, or how much energy is required to guarantee a certain probability of detection. A suite of optimal temporal receivers is designed and is subsequently evaluated using the same kinds of synthetic signal used to assess power-based systems: Gaussian processes and sinusoids. The final area of work considers how discrete components on a sonar signal display, such as tonals and transients, can be identified and organised according to auditory scene analysis principles. Two algorithms are presented and evaluated using synthetic signals: one is designed to track a tonal through transient events, and the other attempts to identify groups of comodulated tonals against a noise background. A demonstration of each algorithm is provided for recorded sonar signals.
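As a toy illustration of detection by regular timing rather than power, the sketch below scores a signal by the regularity of its zero-crossing intervals; this statistic is an illustrative stand-in for the auditory-motivated ZCPA-based receivers developed in the thesis.

    import numpy as np

    def zero_crossing_intervals(x, fs):
        # Times of upward zero crossings, then the intervals between them.
        idx = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
        return np.diff(idx) / fs

    def timing_regularity(x, fs):
        # Coefficient of variation of the crossing intervals:
        # near 0 for a steady tonal, large for broadband noise.
        iv = zero_crossing_intervals(x, fs)
        if len(iv) < 2:
            return np.inf
        return np.std(iv) / np.mean(iv)

    fs = 8000
    t = np.arange(fs) / fs
    rng = np.random.default_rng(0)
    tone_in_noise = np.sin(2*np.pi*200*t) + 0.2*rng.standard_normal(fs)
    noise_only = rng.standard_normal(fs)
    print(timing_regularity(tone_in_noise, fs))  # small: regular timing
    print(timing_regularity(noise_only, fs))     # large: irregular timing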
APA, Harvard, Vancouver, ISO, and other styles
42

Falcoz-Vigne, Vincent. "Définition d'un principe de détection de fluorescence induite applique au diagnostic en cancérologie." Vandoeuvre-les-Nancy, INPL, 1993. http://www.theses.fr/1993INPL065N.

Full text
Abstract:
The diagnosis of certain cancers by detection of krypton-laser-induced fluorescence offers new hope among modern disease-screening techniques. It draws on the latest developments in medical lasers and photosensitizing agents. The selective retention of these agents (notably hematoporphyrin and its derivatives) in tumor areas, and their excitability at a suitable wavelength, form the basis of this aspect of photodynamic therapy. The low level of dye fluorescence and the natural autofluorescence of healthy tissue complicate the delineation of the boundary between healthy tissue and tumor. Moreover, the limited selectivity of photosensitizers in their localization, and sometimes their toxicity, further amplify these difficulties. This led us to design a system that overcomes these difficulties, from image acquisition to display on a monitor. A monochromatic excitation source, a krypton laser, delivers the excitation energy to the observed site through an optical fiber. The image of the site, collected by an endoscope, is sent to an optical system that discriminates autofluorescence from induced fluorescence by filtering. An intensified camera strongly amplifies the signal, and a computer processing system enhances contrast and removes the autofluorescence contribution by subtractive processing. The processed image is finally displayed on a color monitor. Experimental measurement campaigns on phantoms allowed us to validate the detection system and to draw initial conclusions about its effectiveness.
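A minimal sketch of the subtractive correction step, assuming registered autofluorescence and induced-fluorescence images as arrays; the scale factor k and the contrast stretch are illustrative assumptions.

    import numpy as np

    def suppress_autofluorescence(induced, autofluo, k=1.0):
        # Subtractive correction: remove a scaled autofluorescence image
        # from the induced-fluorescence image, clipping at zero.
        corrected = np.clip(induced - k * autofluo, 0.0, None)
        # Simple contrast stretch to use the full display range.
        lo, hi = corrected.min(), corrected.max()
        return (corrected - lo) / (hi - lo + 1e-12)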
APA, Harvard, Vancouver, ISO, and other styles
43

Onder, Murat. "Face Detection And Active Robot Vision." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605290/index.pdf.

Full text
Abstract:
The main task in this thesis is to design a robot vision system with face detection and tracking capability; hence the work has two main parts. First, a face must be detected in an image taken from the camera on the robot. This is a demanding real-time image processing task, and time constraints are therefore critical. A processing rate of 1 frame/second is targeted, so a fast face detection algorithm had to be used. The Eigenface method and the Subspace LDA (Linear Discriminant Analysis) method are implemented, tested, and compared for face detection, and the Eigenface method proposed by Turk and Pentland was chosen. The images are first passed through a number of preprocessing algorithms, such as skin detection and histogram equalization, to obtain better performance. After this filtering, the face candidate regions are put through the face detection algorithm to decide whether a face is present in the image. Some modifications are applied to the eigenface algorithm to detect faces better and faster. Second, the robot must move towards the face in the image; this task involves robot motion. The robot used for this purpose is a Pioneer 2-DX8 Plus, a product of ActivMedia Robotics Inc.; only the interfaces needed to move the robot have been implemented in the thesis software. The robot detects faces at different distances and adjusts its position according to the distance between the human and the robot. Hence a scaling mechanism must be used, either in the training images or in the input image taken from the camera. Because of the timing constraints and low camera resolution, only a limited number of scales is applied in the face detection process; for this reason, faces of people who are very far from or very close to the robot will not be detected. A background-independent face detection system was the aim, although the resulting algorithm remains slightly dependent on the background. There are no other constraints in the system.
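A minimal sketch of the Eigenface decision rule of Turk and Pentland: project a candidate patch onto the learned face space and threshold the reconstruction error ("distance from face space"). The component count and threshold are assumptions, and the thesis's modified algorithm may differ.

    import numpy as np

    class Eigenfaces:
        # Project a candidate patch onto the PCA "face space" learned
        # from aligned training faces and threshold the residual error.
        def __init__(self, n_components=20):
            self.k = n_components

        def fit(self, faces):
            # faces: (N, H*W) matrix of vectorized, equalized training faces.
            self.mean = faces.mean(axis=0)
            _, _, Vt = np.linalg.svd(faces - self.mean, full_matrices=False)
            self.basis = Vt[:self.k]                  # top-k eigenfaces
            return self

        def distance_from_face_space(self, patch):
            x = patch.ravel() - self.mean
            coeffs = self.basis @ x                   # projection coefficients
            recon = self.basis.T @ coeffs             # face-space reconstruction
            return np.linalg.norm(x - recon)          # small => face-like patch

    # Usage: dist = model.distance_from_face_space(candidate);
    # declare a face if dist falls below a tuned threshold tau.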
APA, Harvard, Vancouver, ISO, and other styles
44

Zuzarte, Ian. "A principal component regression analysis for detection of the onset of nocturnal hypoglycemia in Type I diabetic patients." Akron, OH : University of Akron, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=akron1226955083.

Full text
Abstract:
Thesis (M.S.)--University of Akron, Dept. of Biomedical Engineering, 2008.
"December, 2008." Title from electronic thesis title page (viewed 12/12/2009) Advisor, Dale H. Mugler; Committee members, Daniel B. Sheffer, Bruce C. Taylor; Department Chair, Daniel B. Sheffer; Dean of the College, George K. Haritos; Dean of the Graduate School, George R. Newkome. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
45

Shirvany, Réza. "Estimation of the Degree of Polarization in Polarimetric SAR Imagery : Principles and Applications." Thesis, Toulouse, INPT, 2012. http://www.theses.fr/2012INPT0082/document.

Full text
Abstract:
Polarimetric Synthetic Aperture Radar (SAR) systems have become highly fruitful thanks to their wide area coverage and day-and-night all-weather capabilities. Several polarimetric SARs have been flown over the last few decades with a variety of polarimetric imaging modes; traditional ones are linear single- and dual-pol modes, and more sophisticated ones are full-pol modes. Other alternative modes, such as hybrid and compact dual-pol, have also been recently proposed for future SAR missions. Discussion is lively across the remote sensing community about both the utility of such alternative modes and the trade-off between dual and full polarimetry. This thesis contributes to that discussion by analyzing and comparing different polarimetric SAR modes in a variety of geoscience applications, with a particular focus on maritime monitoring and surveillance (ship detection and oil-spill recognition). For these comparisons, we make use of a fundamental, physically meaningful discriminator called the Degree of Polarization (DoP). This scalar parameter has been recognized as one of the most important parameters characterizing a partially polarized electromagnetic wave. Based on a detailed statistical analysis of polarimetric SAR images, we propose efficient estimators of the DoP for both coherent and incoherent SAR systems. We extend the DoP concept to different hybrid and compact SAR modes and compare the achieved performance with different full-pol methods. We perform a detailed study of vessel detection and oil-spill recognition, based on linear and hybrid/compact dual-pol DoP, using recent data from the Deepwater Horizon oil spill acquired by the National Aeronautics and Space Administration (NASA)/Jet Propulsion Laboratory (JPL) Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR). Extensive experiments are also performed over various terrain types, such as urban, vegetation, and ocean, using data acquired by the Canadian RADARSAT-2 and the NASA/JPL Airborne SAR (AirSAR) systems.
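For reference, the degree of polarization of a partially polarized wave is commonly defined from the Stokes vector (S0, S1, S2, S3); this standard definition is background, not one of the estimators developed in the thesis:

    \[
      P \;=\; \frac{\sqrt{S_1^2 + S_2^2 + S_3^2}}{S_0},
      \qquad 0 \le P \le 1,
    \]

where P = 0 corresponds to a fully unpolarized wave and P = 1 to a fully polarized wave.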
APA, Harvard, Vancouver, ISO, and other styles
46

Thai, Trang Thuy. "Design and development of novel radio frequency sensors based on far-field and near-field principles." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50303.

Full text
Abstract:
The objective of this work is to enhance and advance sensing technologies with the design and development of novel radio frequency (RF) sensors based on far-field and near-field principles of electromagnetic (EM) resonances. In the first part of this thesis, the original design and development of a passive RF temperature sensor, a passive RF strain sensor, and a passive RF pressure sensor are presented. The RF temperature sensor, presented in Chapter 3, is based on split ring resonators loaded with bimorph cantilevers. Its operating principles and equivalent circuits are discussed in Chapter 4, where the design concept is shown to be robust, highly adaptable to different sensing ranges and environments, and applicable to sensing modalities beyond temperature. The passive RF strain sensor, based on a patch antenna loaded with a cantilever-integrated open loop, is presented in Chapter 5, where it is demonstrated to have the highest strain sensitivity in its remote and passive class of sensors in the state of the art. Chapter 6 describes the passive RF pressure sensor, which is based on a dual-band stacked-patch antenna that embeds both identification and sensing in its unique dual resonant responses. In the second part of this thesis, an original, first-of-its-kind RF transducer is presented that enables non-touch sensing of human fingers within 3 cm of proximity (based on one unit sensor cell). The RF transducer is based on a slotted microstrip patch coupled to a half-wavelength parallel-coupled microstrip filter operating in the frequency range of 6–8 GHz. The sensing mechanism is based on the EM near-field coupling between the resonator and the human finger. Fundamentally different from electric-field capacitive sensing, this new method, based on near-field interference that produces a myriad of nonlinearities in the sensing response, can introduce new capabilities for electronic display interfaces (the detection is based on pattern recognition). What sets this sensor and its platform apart from previous proximity sensors and microwave sensing platforms is the low-profile planar structure of the system and its compatibility with mobile applications. The thesis provides both breadth and depth in the proposed design and development, and thus presents a complete body of research in its contributions to RF sensing.
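The proximity detection above is said to rest on pattern recognition over the nonlinear near-field response. A minimal sketch of one such scheme, matching a measured frequency sweep against stored templates, where the template set and distance metric are assumptions rather than the thesis's classifier:

    import numpy as np

    def classify_proximity(response, templates):
        # response: measured resonance response over the 6-8 GHz sweep.
        # templates: dict mapping a finger-distance label to a reference sweep.
        # Nearest-template match after removing each sweep's mean level.
        r = response - response.mean()
        best, best_d = None, np.inf
        for label, t in templates.items():
            d = np.linalg.norm(r - (t - t.mean()))
            if d < best_d:
                best, best_d = label, d
        return best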
APA, Harvard, Vancouver, ISO, and other styles
47

Ortiz, Carlos. "First Principles Calculations of Electron Transport and Structural Damage by Intense Irradiation." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-102376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Grbovic, Mihajlo. "Data Mining Algorithms for Decentralized Fault Detection and Diagnostic in Industrial Systems." Diss., Temple University Libraries, 2012. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/177848.

Full text
Abstract:
Computer and Information Science
Ph.D.
Timely Fault Detection and Diagnosis in complex manufacturing systems is critical to ensure safe and effective operation of plant equipment. A process fault is defined as a deviation from normal process behavior, defined within the limits of safe production. The quantifiable objectives of Fault Detection include achieving low detection delay time, low false positive rate, and high detection rate. Once a fault has been detected, pinpointing the type of fault is needed for purposes of fault mitigation and returning to normal process operation. This is known as Fault Diagnosis. Data-driven Fault Detection and Diagnosis methods emerged as an attractive alternative to traditional mathematical model-based methods, especially for complex systems, due to the difficulty of describing the underlying process. A distinct feature of data-driven methods is that no a priori information about the process is necessary. Instead, it is assumed that historical data, containing process features measured in regular time intervals (e.g., power plant sensor measurements), are available for development of a fault detection/diagnosis model through generalization of data. The goal of my research was to address the shortcomings of the existing data-driven methods and contribute to solving open problems, such as: 1) decentralized fault detection and diagnosis; 2) fault detection in the cold start setting; 3) optimizing the detection delay and dealing with noisy data annotations; and 4) developing models that can adapt to concept changes in power plant dynamics. For small-scale sensor networks, it is reasonable to assume that all measurements are available at a central location (sink) where fault predictions are made. This is known as a centralized fault detection approach. For large-scale networks, a decentralized approach is often used, where the network is decomposed into potentially overlapping blocks and each block provides local decisions that are fused at the sink. The appealing properties of the decentralized approach include fault tolerance, scalability, and reusability. When one or more blocks go offline due to maintenance of their sensors, predictions can still be made using the remaining blocks. In addition, when the physical facility is reconfigured, either by changing its components or sensors, it can be easier to modify the part of the decentralized system impacted by the changes than to overhaul the whole centralized system. The scalability comes from reduced costs of system setup, update, communication, and decision making. Main challenges in decentralized monitoring include process decomposition and decision fusion. We proposed a decentralized model where the sensors are partitioned into small, potentially overlapping blocks based on the Sparse Principal Component Analysis (PCA) algorithm, which preserves strong correlations among sensors, followed by training local models at each block, and fusion of decisions based on the proposed Maximum Entropy algorithm. Moreover, we introduced a novel framework for adding constraints to the Sparse PCA problem. The constraints limit the set of possible solutions by imposing additional goals to be reached through optimization along with the existing Sparse PCA goals. The experimental results on benchmark fault detection data show that Sparse PCA can utilize prior knowledge, which is not directly available in data, in order to produce desirable network partitions, with a pre-defined limit on communication cost and/or robustness.
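A minimal sketch of the block-partitioning step using scikit-learn's SparsePCA, plus a simple weighted vote standing in for the proposed Maximum Entropy fusion (which this sketch does not reproduce); the block count and sparsity level are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import SparsePCA

    def sensor_blocks(X, n_blocks=4, alpha=1.0):
        # Partition sensors into potentially overlapping blocks: each sparse
        # component keeps only a few correlated sensors (nonzero loadings).
        spca = SparsePCA(n_components=n_blocks, alpha=alpha, random_state=0)
        spca.fit(X)                               # X: (samples, sensors)
        return [np.flatnonzero(c) for c in spca.components_]

    def fuse_decisions(block_scores, weights=None):
        # Weighted vote over per-block fault probabilities; a simple
        # stand-in for the thesis's Maximum Entropy fusion rule.
        scores = np.asarray(block_scores, dtype=float)
        w = np.ones_like(scores) if weights is None else np.asarray(weights)
        return float((w * scores).sum() / w.sum()) > 0.5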
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
49

Ragni, Caterina. "Low Resource Algorithms for Abnormal Instances Detection in the Internet of Things Framework." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20159/.

Full text
Abstract:
Electrocardiography (ECG) signals are widely used to appraise the health of the human heart. The resulting time-series signal is visually analyzed by a cardiologist to detect any unusual beat, which could reflect a range of abnormal behavior, from a motion artifact to an arrhythmia the patient may have suffered. In this dissertation, a low-complexity, low-resource method for anomaly detection based on principal component analysis is presented. The lightness of the proposed algorithm makes it well suited to implementation on low-resource devices, with a view to edge computing. In particular, the energy of the signal is observed in a properly chosen subspace, and normal processes are discriminated from abnormal ones based on their accordance with certain criteria. The method is first evaluated on synthetic ECG, and its performance is compared with related works in the literature. The efficacy of the algorithm is finally proved through a qualitative evaluation on ECG signals from real patients. Addressing the particular case of motion artifact detection, this dissertation shows how, despite its extremely low computational cost, the method can help reduce diagnosis time by minimizing false alarms.
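A minimal sketch of PCA-based anomaly scoring by residual energy outside a subspace learned from normal beats; the subspace dimension and threshold rule are illustrative assumptions, not the dissertation's exact criteria.

    import numpy as np

    def fit_normal_subspace(windows, k=5):
        # windows: (N, L) matrix of normal ECG segments; learn the k-dim
        # principal subspace that captures normal-beat energy.
        mean = windows.mean(axis=0)
        _, _, Vt = np.linalg.svd(windows - mean, full_matrices=False)
        return mean, Vt[:k]

    def residual_energy(segment, mean, basis):
        # Energy of the segment outside the normal subspace: abnormal
        # beats and motion artifacts leave excess residual energy.
        x = segment - mean
        resid = x - basis.T @ (basis @ x)
        return float(resid @ resid)

    # Usage: flag a segment as abnormal if residual_energy(...) > tau,
    # with tau set from a quantile of residuals on held-out normal beats.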
APA, Harvard, Vancouver, ISO, and other styles
50

Wessman, Filip. "Advanced Algorithms for Classification and Anomaly Detection on Log File Data : Comparative study of different Machine Learning Approaches." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-43175.

Full text
Abstract:
Background: A problematic area in today's large-scale distributed systems is the exponentially growing amount of log data. Finding anomalies by observing and monitoring this data with manual human inspection becomes progressively more challenging, complex, and time-consuming, yet it is vital for keeping these systems available around the clock. Aim: The main objective of this study is to determine which Machine Learning (ML) algorithms are most suitable for log data monitoring and whether they can meet the needs and requirements regarding optimization and efficiency in this area, including which specific steps of the overall problem can be improved by using these algorithms for anomaly detection and classification on different real, provided data logs. Approach: An initial pre-study is conducted; logs are collected and then preprocessed with the log parsing tool Drain and regular expressions. The approach consists of a combination of K-Means + XGBoost and, respectively, Principal Component Analysis (PCA) + K-Means + XGBoost. These were trained, tested, and individually evaluated with different metrics against two datasets, one a server data log and the other an HTTP access log. Results: Both approaches performed very well on both datasets, able to classify, detect, and make predictions on log data events with high accuracy and precision and low calculation time. It was further shown that without dimensionality reduction (PCA) the prediction model's results are slightly better, by a few percent. As for prediction time, there was marginally small to no difference when comparing prediction time with and without PCA. Conclusions: Overall, the differences between the results with and without PCA are very small; in essence, it is better not to use PCA and instead apply the original data to the ML models. The models' performance is highly dependent on the applied data, the initial preprocessing steps, and the data's size and structure, with structure affecting calculation time the most.
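A minimal sketch of the K-Means + XGBoost combination described above, assuming numeric features already extracted from Drain-parsed events; appending the cluster id as an extra feature is one plausible reading of the combination, not necessarily the thesis's exact pipeline.

    import numpy as np
    from sklearn.cluster import KMeans
    from xgboost import XGBClassifier

    def train_log_model(X, y, n_clusters=8):
        # X: numeric features extracted from Drain-parsed log events;
        # y: normal/anomalous labels. The K-Means cluster id is appended
        # as a feature, then XGBoost classifies the events.
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
        Xc = np.column_stack([X, km.labels_])
        clf = XGBClassifier(n_estimators=200, max_depth=6,
                            eval_metric="logloss")
        clf.fit(Xc, y)
        return km, clf

    def predict(km, clf, X_new):
        Xc = np.column_stack([X_new, km.predict(X_new)])
        return clf.predict(Xc)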
APA, Harvard, Vancouver, ISO, and other styles