Ready-made bibliography on the topic "PCA ALGORITHM"

Create accurate citations in APA, MLA, Chicago, Harvard, and many other styles


Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "PCA ALGORITHM".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic citation of the selected work in the style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, when the corresponding details are available in the metadata.

Journal articles on the topic "PCA ALGORITHM"

1

Dinanti, Aldila, and Joko Purwadi. "Analisis Performa Algoritma K-Nearest Neighbor dan Reduksi Dimensi Menggunakan Principal Component Analysis". Jambura Journal of Mathematics 5, no. 1 (February 1, 2023): 155–65. http://dx.doi.org/10.34312/jjom.v5i1.17098.

Abstract:
This paper discusses the performance of the K-Nearest Neighbor algorithm with dimension reduction by Principal Component Analysis (PCA) for diabetes classification. The large number of variables and records in the diabetes dataset requires a relatively long computation time, so dimensionality reduction is needed to speed up the computational process. The dimension reduction method used in this study is PCA. After dimension reduction, classification is performed with the K-Nearest Neighbor algorithm. The results on the diabetes case study show that dimension reduction using PCA produces 3 principal components from the 8 variables in the original data, namely PC1, PC2, and PC3. Classification with K-Nearest Neighbor was then evaluated for three choices of the nearest-neighbor parameter K: K = 3 gave an accuracy of 67.53%, K = 5 gave 72.72%, and K = 7 gave 77.92%. Thus, the best classification accuracy for diabetes was achieved at K = 7, with 77.92%.
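The PCA-then-KNN pipeline this abstract describes can be sketched generically. The snippet below uses synthetic data, a PCA computed by SVD, and a hand-rolled nearest-neighbor vote purely for illustration; it is not the authors' code, dataset, or reported accuracies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 8-variable diabetes table (not the real dataset).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# PCA via SVD of the centered data; keep 3 components, as in the paper.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_pca = Xc @ Vt[:3].T          # scores on PC1..PC3

def knn_predict(train_X, train_y, test_X, k=7):
    """Majority vote among the k nearest training points (Euclidean)."""
    preds = []
    for x in test_X:
        d = np.linalg.norm(train_X - x, axis=1)
        votes = train_y[np.argsort(d)[:k]]
        preds.append(np.bincount(votes).argmax())
    return np.array(preds)

# Simple holdout: first 150 rows train, last 50 test, evaluated at K = 7.
pred = knn_predict(X_pca[:150], y[:150], X_pca[150:], k=7)
accuracy = float((pred == y[150:]).mean())
```

Varying `k` over {3, 5, 7}, as the paper does, only changes the vote in the last step.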
2

Subramaniam, Ashwin, and Byung-Joo Oh. "Mushroom Recognition Using PCA Algorithm". International Journal of Software Engineering and Its Applications 10, no. 1 (January 31, 2016): 43–50. http://dx.doi.org/10.14257/ijseia.2016.10.1.05.

3

Subiyanto, Subiyanto, Dina Priliyana, Moh Eki Riyadani, Nur Iksan, and Hari Wibawanto. "Face recognition system with PCA-GA algorithm for smart home door security using Raspberry Pi". Jurnal Teknologi dan Sistem Komputer 8, no. 3 (May 25, 2020): 210–16. http://dx.doi.org/10.14710/jtsiskom.2020.13590.

Abstract:
A genetic algorithm (GA) can improve the classification stage of face recognition based on principal component analysis (PCA). However, the accuracy of this algorithm in a smart home security system had not been analyzed further. This paper presents the accuracy of face recognition using PCA-GA for a smart home security system on Raspberry Pi. PCA was used as the face recognition algorithm, while GA was used to improve the classification performance of the face image search. The PCA-GA algorithm was implemented on the Raspberry Pi: when an authorized person accesses the door of the house, a relay circuit unlocks it. The accuracy of the system was compared to other face recognition algorithms, namely LBPH-GA and plain PCA. The results show that PCA-GA face recognition has an accuracy of 90%, while PCA and LBPH-GA have 80% and 90%, respectively.
4

Zhang, Kang, Yongdong Huang, and Cheng Zhao. "Remote sensing image fusion via RPCA and adaptive PCNN in NSST domain". International Journal of Wavelets, Multiresolution and Information Processing 16, no. 05 (September 2018): 1850037. http://dx.doi.org/10.1142/s0219691318500376.

Abstract:
In order to improve the fused image quality of multi-spectral (MS) and panchromatic (PAN) images, a new remote sensing image fusion algorithm based on robust principal component analysis (RPCA) and the non-subsampled shearlet transform (NSST) is proposed. First, the first principal component PC1 of the MS image is extracted via principal component analysis (PCA). Then, the component PC1 and the PAN image are each decomposed by NSST into low- and high-frequency subbands. For the low-frequency subband, the sparse matrix obtained from the RPCA decomposition of the PAN image is used to guide the fusion rule; for the high-frequency subbands, the fusion rule is based on an adaptive PCNN model. Finally, the fused image is obtained by the inverse NSST and inverse PCA transforms. The experimental results illustrate that the proposed fusion algorithm outperforms other classical fusion algorithms (PCA, Curvelet, NSCT, NSST and NSCT-PCNN) in terms of both visual quality and objective evaluation, achieving better overall fusion performance.
5

Hadiprakoso, Raden Budiarto, and I. Komang Setia Buana. "Performance Comparison of Feature Extraction and Machine Learning Classification Algorithms for Face Recognition". IJICS (International Journal of Informatics and Computer Science) 5, no. 3 (November 30, 2021): 250. http://dx.doi.org/10.30865/ijics.v5i3.3333.

Abstract:
Face recognition is a highly active research topic in pattern recognition and computer vision, with numerous practical applications. Face recognition can provide the most natural interaction experience, similar to the way humans recognize others. This paper presents a performance comparison of various machine learning approaches and feature extraction algorithms. The feature extraction algorithms used are Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and a PCA-LDA combination. The method is to take a dataset sample and then evaluate and compare machine learning algorithms on their accuracy in recognizing faces. We also use feature extraction on the captured facial images to speed up data processing. The classification algorithms measured are k-nearest neighbor, naive Bayes, support vector machine, random forest, and gradient boosting. The results showed that the random forest classifier was the most accurate face recognition method. On the other hand, the combined PCA-LDA feature extraction algorithm has lower false-negative and false-positive rates than PCA or LDA alone. In addition, the PCA feature extraction algorithm is the fastest at recognizing faces.
6

Dolan, Matthew T., Sung Kim, Yu-Hsuan Shao, and Grace L. Lu-Yao. "Authentication of Algorithm to Detect Metastases in Men with Prostate Cancer Using ICD-9 Codes". Epidemiology Research International 2012 (August 22, 2012): 1–7. http://dx.doi.org/10.1155/2012/970406.

Abstract:
Background. Metastasis is a crucial endpoint for patients with prostate cancer (PCa), but it currently lacks a validated claims-based algorithm for detection. Objective. To develop an algorithm using ICD-9 codes to facilitate accurate reporting of PCa metastases. Methods. Medical records from 300 men hospitalized at Robert Wood Johnson University Hospital for PCa were reviewed. Using the presence of metastatic PCa on chart review as the gold standard, two algorithms for detecting metastases were compared. Algorithm A used ICD-9 codes 198.5 (bone metastases), 197.0 (lung metastases), 197.7 (liver metastases), or 198.3 (brain and spinal cord metastases), while algorithm B used only 198.5. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were determined for both algorithms, and kappa statistics were used to measure agreement between claims data and chart review. Results. Algorithm A demonstrated a sensitivity, specificity, PPV, and NPV of 95%, 100%, 100%, and 98.7%, respectively. The corresponding numbers for algorithm B were 90%, 100%, 100%, and 97.5%. The agreement rate was 96.8% for algorithm A and 93.5% for algorithm B. Conclusions. Using ICD-9 codes 198.5, 197.0, 197.7, or 198.3 to detect the presence of PCa metastases offers high sensitivity, specificity, PPV, and NPV.
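The four screening metrics this abstract reports are simple ratios over the 2x2 confusion table comparing the claims algorithm against chart review. A minimal sketch (the counts below are hypothetical, chosen only to illustrate the formulas, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for illustration: 19 of 20 true metastasis cases
# flagged, no false positives among 280 controls.
sens, spec, ppv, npv = diagnostic_metrics(tp=19, fp=0, tn=280, fn=1)
```

With these counts, sensitivity is 19/20 = 95% and specificity 100%, mirroring the structure (though not the exact tallies) of the results above.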
7

Zhao, Wenjing, Yue Chi, Yatong Zhou, and Cheng Zhang. "Image Denoising Algorithm Combined with SGK Dictionary Learning and Principal Component Analysis Noise Estimation". Mathematical Problems in Engineering 2018 (2018): 1–10. http://dx.doi.org/10.1155/2018/1259703.

Abstract:
The SGK (sequential generalization of K-means) dictionary learning denoising algorithm is characterized by fast denoising speed and excellent denoising performance. However, the noise standard deviation must be known in advance when using the SGK algorithm to process an image. This paper presents a denoising algorithm that combines SGK dictionary learning with principal component analysis (PCA) noise estimation. First, the noise standard deviation of the image is estimated using the PCA noise estimation algorithm; this estimate is then supplied to the SGK dictionary learning algorithm. Experimental results show the following: (1) the SGK algorithm has the best denoising performance compared with three other dictionary learning algorithms; (2) the SGK algorithm combined with PCA is superior to the SGK algorithm combined with other noise estimation algorithms; (3) compared with the original SGK algorithm, the proposed algorithm achieves higher PSNR and better denoising performance.
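The core idea behind PCA noise estimation is that, for smooth image content, the patch signal is low-rank, so the small eigenvalues of the patch covariance matrix are dominated by the noise variance. A simplified sketch on a synthetic image (an illustrative toy, not the paper's exact estimator, which is iterative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Smooth synthetic image plus Gaussian noise of known strength.
sigma_true = 10.0
x = np.arange(64)
clean = np.add.outer(1.5 * x, 0.8 * x)       # linear-gradient image
noisy = clean + rng.normal(scale=sigma_true, size=(64, 64))

# Collect all overlapping 8x8 patches as 64-dimensional vectors.
patches = np.lib.stride_tricks.sliding_window_view(noisy, (8, 8))
P = patches.reshape(-1, 64)

# For smooth content the patch signal occupies only a few directions, so
# the remaining eigenvalues of the patch covariance are close to sigma^2.
evals = np.linalg.eigvalsh(np.cov(P.T))      # ascending order
sigma_est = float(np.sqrt(np.median(evals[:-2])))
```

The estimate `sigma_est` lands near the true value of 10, and it is this kind of estimate that would be handed to the SGK denoiser in place of a known noise level.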
8

Li, Mingfei, Zhengpeng Chen, Jiangbo Dong, Kai Xiong, Chuangting Chen, Mumin Rao, Zhiping Peng, Xi Li, and Jingxuan Peng. "A Data-Driven Fault Diagnosis Method for Solid Oxide Fuel Cell Systems". Energies 15, no. 7 (March 31, 2022): 2556. http://dx.doi.org/10.3390/en15072556.

Abstract:
In this study, a data-driven fault diagnosis method was developed for solid oxide fuel cell (SOFC) systems. First, the complete experimental data was obtained following the design of the SOFC system experiments. Then, principal component analysis (PCA) was performed to reduce the dimensionality of the obtained experimental data. Finally, the fault diagnosis algorithms were designed by support vector machine (SVM) and BP neural network to identify and prevent the reformer carbon deposition and heat exchanger rupture faults, respectively. The research results show that both SVM and BP fault diagnosis algorithms can achieve online fault identification. The PCA + SVM algorithm was compared with the SVM algorithm, BP algorithm, and PCA + BP algorithm, and the results show that the PCA + SVM algorithm is superior in terms of running time and accuracy, the diagnosis accuracy reached more than 99%, and the running time was within 20 s. The corresponding system optimization scheme is also proposed.
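The PCA + SVM pipeline described here can be sketched on synthetic data. The sub-gradient hinge-loss SVM below is a generic stand-in for illustration, not the authors' implementation, and the "sensor" data and labels are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for SOFC sensor data: channel 0 carries the fault
# signal and also the largest variance, so PCA retains it.
X = rng.normal(size=(300, 10))
X[:, 0] *= 3.0
y = np.where(X[:, 0] > 0, 1, -1)            # fault / no-fault labels

# Step 1: PCA, keeping 4 of the 10 components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:4].T

# Step 2: linear SVM fitted by stochastic sub-gradient descent on hinge loss.
w, b, lr, lam = np.zeros(4), 0.0, 0.01, 0.01
for _ in range(100):
    for i in rng.permutation(len(Z)):
        if y[i] * (Z[i] @ w + b) < 1:       # inside the margin: hinge active
            w += lr * (y[i] * Z[i] - lam * w)
            b += lr * y[i]
        else:
            w -= lr * lam * w

accuracy = float((np.sign(Z @ w + b) == y).mean())
```

The point of the two-step structure is the one the abstract makes: the classifier trains on 4 features instead of 10, which is where the running-time advantage of PCA + SVM comes from.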
9

Vu, L. G., Abeer Alsadoon, P. W. C. Prasad, and A. M. S. Rahma. "Improving Accuracy in Face Recognition Proposal to Create a Hybrid Photo Indexing Algorithm, Consisting of Principal Component Analysis and a Triangular Algorithm (PCAaTA)". International Journal of Pattern Recognition and Artificial Intelligence 31, no. 01 (January 2017): 1756001. http://dx.doi.org/10.1142/s0218001417560018.

Abstract:
Accurate face recognition is vital today, principally for reasons of security. Current methods employ algorithms that index (classify) important features of human faces. There are many studies in this field, but most current solutions have significant limitations. Principal Component Analysis (PCA) is one of the best facial recognition algorithms; however, several sources of noise can affect its accuracy. PCA works well with the support of preprocessing steps such as illumination reduction, background removal, and color conversion, and some existing solutions combine PCA with such preprocessing. This paper proposes a hybrid face recognition solution using PCA as the main algorithm, supported by a triangular algorithm for face normalization, in order to enhance indexing accuracy. To evaluate the accuracy of the proposed hybrid indexing algorithm, the PCAaTA is tested and the results are compared with current solutions.
10

Moghaddasi, Zahra, Hamid A. Jalab, Rafidah Md Noor, and Saeed Aghabozorgi. "Improving RLRN Image Splicing Detection with the Use of PCA and Kernel PCA". Scientific World Journal 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/606570.

Abstract:
Digital image forgery is becoming easier to perform because of the rapid development of various manipulation tools, and image splicing is one of the most prevalent techniques. Digital images have lost their trustworthiness, and researchers have exerted considerable effort to regain it, focusing mostly on detection algorithms. However, most of the proposed algorithms are incapable of handling high dimensionality and redundancy in the extracted features, and they are limited by high computational time. This study improves one of the image splicing detection algorithms, the run length run number (RLRN) algorithm, by applying two dimension reduction methods: principal component analysis (PCA) and kernel PCA. A support vector machine is used to distinguish between authentic and spliced images. Results show that kernel PCA, as a nonlinear dimension reduction method, has the best effect on the R, G, B, and Y channels and on gray-scale images.
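Kernel PCA's advantage over plain PCA, as the abstract notes, is its handling of nonlinear structure. A minimal RBF kernel PCA in NumPy, shown on toy ring-shaped data rather than the splicing features used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two concentric rings: not linearly separable, so plain PCA cannot
# disentangle them, but an RBF kernel PCA typically can.
theta = rng.uniform(0, 2 * np.pi, 200)
r = np.repeat([1.0, 3.0], 100)
X = np.c_[r * np.cos(theta), r * np.sin(theta)] \
    + rng.normal(scale=0.05, size=(200, 2))

def kernel_pca(X, n_components=2, gamma=0.5):
    """RBF kernel PCA: eigendecompose the double-centered Gram matrix."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(K)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                            # center in feature space
    vals, vecs = np.linalg.eigh(Kc)           # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

Z = kernel_pca(X, n_components=2, gamma=0.5)
```

In the projected space `Z`, the leading kernel components tend to pull the two rings apart, which is the nonlinear effect a linear PCA projection cannot produce.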

Doctoral dissertations on the topic "PCA ALGORITHM"

1

Petters, Patrik. "Development of a Supervised Multivariate Statistical Algorithm for Enhanced Interpretability of Multiblock Analysis". Thesis, Linköpings universitet, Matematiska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138112.

Abstract:
In modern biological research, OMICs techniques such as genomics, proteomics, or metabolomics are often employed to gain deep insights into metabolic regulation and biochemical perturbations in response to a specific research question. To gain complementary biologically relevant information, multiOMICs, i.e., several different OMICs measurements on the same specimen, is becoming increasingly frequent. To take full advantage of this complementarity, joint analysis of such multiOMICs data is necessary, but this is still an underdeveloped area. In this thesis, a theoretical background is given on general component-based methods for dimensionality reduction: PCA and PLS for single-block analysis, and multiblock PLS for co-analysis of OMICs data. This is followed by a rotation of an unsupervised analysis method, whose aim is to divide dimensionality-reduced data into block-distinct and common variance partitions using the DISCO-SCA approach. Finally, an algorithm for a similar rotation of a supervised (PLS) solution is presented using data available in the literature. To the best of our knowledge, this is the first time that such an approach for rotating a supervised analysis into block-distinct and common partitions has been developed and tested. The newly developed DISCO-PLS algorithm clearly showed increased potential for visualisation and interpretation of data compared to standard PLS, as shown by biplots of observation scores and multiblock variable loadings.
2

Ergin, Emre. "Investigation Of Music Algorithm Based And Wd-pca Method Based Electromagnetic Target Classification Techniques For Their Noise Performances". Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12611218/index.pdf.

Abstract:
Multiple Signal Classification (MUSIC) algorithm based and Wigner Distribution-Principal Component Analysis (WD-PCA) based classification techniques are recently suggested resonance-region approaches for electromagnetic target classification. In this thesis, the performances of these two techniques are compared with respect to their noise robustness and their capacity to handle large numbers of candidate targets. In this context, classifier design simulations are demonstrated for target libraries containing conducting and dielectric spheres and dielectric-coated conducting spheres. Small-scale aircraft targets modeled by thin conducting wires are also used in the classifier design demonstrations.
3

Romualdo, Kamilla Vogas. "Problemas direto e inverso de processos de separação em leito móvel simulado mediante mecanismos cinéticos de adsorção". Universidade do Estado do Rio de Janeiro, 2012. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=6750.

Abstract:
Several relevant industrial applications involve adsorption processes, for example product purification, separation of substances, and pollution and moisture control. The growing interest in processes for the purification of biomolecules is mainly due to the development of biotechnology and the demand of the pharmaceutical and chemical industries for products of high purity. Simulated moving bed (SMB) chromatography is a continuous process that simulates the movement of the adsorbent bed, countercurrent to the movement of the liquid, through the periodic exchange of the positions of the inlet and outlet streams; it is operated continuously without loss of purity of the outlet streams. These are the extract, rich in the more strongly adsorbed component, and the raffinate, rich in the more weakly adsorbed component, which makes the process particularly suitable for binary separations. The aim of this thesis is to study and evaluate different approaches using stochastic optimization methods for the inverse problem of the phenomena involved in the SMB separation process. Discrete models with different mass-transfer approaches were used, with the advantage of a large number of theoretical plates in a column of moderate length; in this process the separation grows as the solutes flow through the bed, i.e., with the number of times the molecules interact between the mobile and stationary phases, thus reaching equilibrium. The modeling and simulation verified in these approaches allowed the assessment and identification of the main characteristics of an SMB separation unit. The application under consideration is the simulation of the separation of Baclofen and Ketamine; these compounds were chosen because they are well characterized in the literature, with adsorption kinetics and equilibrium studies available as experimental results. With these experimental results, the behavior of the direct and inverse problems of an SMB separation unit was evaluated in order to compare the obtained results with the experimental ones, always based on criteria of separation efficiency between the mobile and stationary phases. The methods studied were the GA (Genetic Algorithm) and the PCA (Particle Collision Algorithm), as well as a hybridization of GA and PCA. As a result, this thesis analyzes and compares the optimization methods in different aspects related to the kinetic mechanism of mass transfer by adsorption and desorption between the solid phases of the adsorbent.
4

SINGH, BHUPINDER. "A HYBRID MSVM COVID-19 IMAGE CLASSIFICATION ENHANCED USING PARTICLE SWARM OPTIMIZATION". Thesis, DELHI TECHNOLOGICAL UNIVERSITY, 2021. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18864.

Abstract:
COVID-19 (novel coronavirus disease) is a serious illness that has killed millions of people and affected millions more around the world. As a result, technologies that enable rapid and accurate identification of COVID-19 illness can provide much assistance to healthcare practitioners. A machine learning-based approach is used for the detection of COVID-19. In general, artificial intelligence (AI) approaches have yielded positive outcomes in healthcare visual processing and analysis. Chest X-ray (CXR) imaging plays a vital role in the analysis of Covid-19 disease. Owing to the wide availability of large-scale annotated image databases, considerable success has been achieved using multiclass support vector machines for image classification, and image classification is the main challenge in medical diagnosis. Existing work used a CNN with a transfer learning mechanism, which transfers information from generic object recognition tasks; the DeTrac method has been used to detect the disease in CXR images, achieving an accuracy of 93.1–97 percent. In the proposed work, the hybrid PSO+MSVM method handles irregularities in the CXR image database by studying its group distances using a class mechanism. In the initial phase of the process, a median filter is used for noise reduction in the image. Edge detection is an essential step in COVID-19 detection, and the Canny edge detector is applied to the chest X-ray images. Principal Component Analysis (PCA) is implemented for the feature extraction phase, and the extracted features are optimized with particle swarm optimization (PSO). For the detection of COVID-19 from CXR images, a hybrid multi-class support vector machine is implemented. A comparative analysis of various existing techniques is also presented. The proposed system achieved an accuracy of 97.51 percent, specificity (SP) of 97.49 percent, and sensitivity (SN) of 98.0 percent, outperforming the compared systems DeTrac, GoogleNet, and SqueezeNet.
5

Wang, Xuechuan. "Feature Extraction and Dimensionality Reduction in Pattern Recognition and Their Application in Speech Recognition". Griffith University. School of Microelectronic Engineering, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030619.162803.

Abstract:
Conventional pattern recognition systems have two components: feature analysis and pattern classification. Feature analysis is achieved in two steps: parameter extraction step and feature extraction step. In the parameter extraction step, information relevant for pattern classification is extracted from the input data in the form of parameter vector. In the feature extraction step, the parameter vector is transformed to a feature vector. Feature extraction can be conducted independently or jointly with either parameter extraction or classification. Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) are the two popular independent feature extraction algorithms. Both of them extract features by projecting the parameter vectors into a new feature space through a linear transformation matrix. But they optimize the transformation matrix with different intentions. PCA optimizes the transformation matrix by finding the largest variations in the original feature space. LDA pursues the largest ratio of between-class variation and within-class variation when projecting the original feature space to a subspace. The drawback of independent feature extraction algorithms is that their optimization criteria are different from the classifier’s minimum classification error criterion, which may cause inconsistency between feature extraction and the classification stages of a pattern recognizer and consequently, degrade the performance of classifiers. A direct way to overcome this problem is to conduct feature extraction and classification jointly with a consistent criterion. Minimum classification Error (MCE) training algorithm provides such an integrated framework. MCE algorithm was first proposed for optimizing classifiers. It is a type of discriminative learning algorithm but achieves minimum classification error directly. The flexibility of the framework of MCE algorithm makes it convenient to conduct feature extraction and classification jointly. 
Conventional feature extraction and pattern classification algorithms, LDA, PCA, the MCE training algorithm, the minimum distance classifier, the likelihood classifier and the Bayesian classifier, are linear algorithms. The advantage of linear algorithms is their simplicity and ability to reduce feature dimensionalities. However, they have the limitation that the decision boundaries generated are linear and have little computational flexibility. SVM is a recently developed integrated pattern classification algorithm with a non-linear formulation. It is based on the idea that classifiers expressed in terms of dot-products can be computed efficiently in higher-dimensional feature spaces: classes that are not linearly separable in the original parametric space can be linearly separated in the higher-dimensional feature space. Because of this, SVM has the advantage that it can handle classes with complex nonlinear decision boundaries. However, SVM is a highly integrated and closed pattern classification system. It is very difficult to adopt feature extraction into SVM's framework, so SVM is unable to conduct feature extraction tasks. This thesis investigates LDA and PCA for feature extraction and dimensionality reduction and proposes the application of MCE training algorithms for joint feature extraction and classification tasks. A generalized MCE (GMCE) training algorithm is proposed to mend the shortcomings of the MCE training algorithms in joint feature extraction and classification tasks. SVM, as a non-linear pattern classification system, is also investigated in this thesis, and a reduced-dimensional SVM (RDSVM) is proposed to enable SVM to conduct feature extraction and classification jointly. All of the investigated and proposed algorithms are tested and compared first on a number of small databases, such as the Deterding Vowels database, Fisher's IRIS database and the German GLASS database. They are then tested in a large-scale speech recognition experiment based on the TIMIT database.
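The contrast this abstract draws between PCA (which optimizes for variance) and LDA (which optimizes for class separation) can be made concrete on a toy two-class problem where the two criteria pick different directions. This is an illustrative sketch with invented data, not the thesis code:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two classes separated along x, with large nuisance variance along y:
# PCA (variance-seeking) picks y; Fisher LDA (separation-seeking) picks x.
X0 = rng.normal([0.0, 0.0], [0.5, 5.0], size=(200, 2))
X1 = rng.normal([3.0, 0.0], [0.5, 5.0], size=(200, 2))
X = np.vstack([X0, X1])

# PCA direction: leading right-singular vector of the centered data.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pca_dir = Vt[0]

# Fisher LDA direction: Sw^{-1} (m1 - m0) with Sw the within-class scatter.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
lda_dir = np.linalg.solve(Sw, m1 - m0)
lda_dir /= np.linalg.norm(lda_dir)
```

Here `pca_dir` aligns with the high-variance but uninformative y-axis while `lda_dir` aligns with the discriminative x-axis, which is exactly the inconsistency between the PCA/LDA criteria and the classifier's error criterion that motivates the thesis's MCE-based joint training.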
6

Wang, Xuechuan. "Feature Extraction and Dimensionality Reduction in Pattern Recognition and Their Application in Speech Recognition". Thesis, Griffith University, 2003. http://hdl.handle.net/10072/365680.

Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Microelectronic Engineering
Full Text
7

Rimal, Suraj. "POPULATION STRUCTURE INFERENCE USING PCA AND CLUSTERING ALGORITHMS". OpenSIUC, 2021. https://opensiuc.lib.siu.edu/theses/2860.

Abstract:
Genotype data, consisting of large numbers of markers, are used in demographic and association studies to determine genes related to specific traits or diseases. Handling these datasets for population structure inference is usually time-consuming. We therefore suggest applying PCA to genotype data and then using clustering algorithms to assign individuals to their particular subpopulations. We collected both real and simulated datasets for this study. We applied PCA, selected significant features, and then applied five different clustering techniques to obtain better results. Furthermore, we studied three different methods for predicting the optimal number of subpopulations in a collected dataset. The results on four simulated datasets and two real human genotype datasets show that our approach performs well in the inference of population structure, and that NbClust is the most effective at inferring the number of subpopulations. In this study, we showed that centroid-based clustering, such as k-means and PAM, performs better than model-based, spectral, and hierarchical clustering algorithms. The approach also has the benefit of being fast and flexible in the inference of population structure.
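The pipeline this abstract describes (PCA for dimension reduction, then centroid-based clustering) can be sketched as follows. The tiny Lloyd's k-means and its deterministic initialization are illustrative assumptions, not the author's implementation:

```python
import numpy as np

def pca(X, k):
    """Project centered data onto its top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                            # n x k score matrix

def kmeans(X, k, iters=50):
    """Plain Lloyd's iteration; deterministic init from evenly spaced
    points along the first coordinate (adequate when groups separate on PC1)."""
    order = np.argsort(X[:, 0])
    centers = X[order[np.linspace(0, len(X) - 1, k).astype(int)]]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels
```

Running `kmeans(pca(genotypes, 2), k)` mirrors the abstract's two-stage approach: the markers are compressed to a few components first, so the clustering step no longer pays for the full marker dimensionality.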
8

Katadound, Sachin. "Face Recognition: Study and Comparison of PCA and EBGM Algorithms". TopSCHOLAR®, 2004. http://digitalcommons.wku.edu/theses/241.

Abstract:
Face recognition is a complex and difficult process due to factors such as variability of illumination, occlusion, face-specific characteristics like hair, glasses and beards, and other problems common to computer vision. With a system that offers robust and consistent face recognition results, applications such as identification for law enforcement, secure system access and human-computer interaction can be automated successfully. Different methods exist to solve the face recognition problem. Principal component analysis, independent component analysis and linear discriminant analysis are among the statistical techniques commonly applied to it, while genetic algorithms, elastic bunch graph matching and artificial neural networks are among the techniques that have been proposed and implemented. The objective of this thesis is to provide insight into the different methods available for face recognition and to explore methods that provide an efficient and feasible solution. Factors affecting the result of face recognition, and the preprocessing steps that eliminate such abnormalities, are also discussed briefly. Principal Component Analysis (PCA) has been the most efficient and reliable method known for at least the past eight years. Elastic bunch graph matching (EBGM) is one of the promising techniques studied in this work, and it gave better results than PCA. We therefore recommend a hybrid technique involving the EBGM algorithm. Although the EBGM method took much longer than PCA to train and to generate distance measures for the gallery images, it achieved better cumulative match score (CMS) results than the PCA method.
Other promising techniques that could be explored in separate work include genetic algorithm-based methods, mixtures of principal components, and Gabor wavelet techniques.
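The PCA (eigenfaces) baseline compared in this thesis can be sketched as follows. The synthetic gallery and the nearest-neighbor matching rule are illustrative assumptions, not the thesis code:

```python
import numpy as np

def train_eigenfaces(gallery, k):
    """gallery: n x d matrix of flattened face images.
    Returns the mean face, k eigenfaces, and the gallery's face-space weights."""
    mean = gallery.mean(axis=0)
    U, s, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
    E = Vt[:k]                            # k eigenfaces (principal axes)
    weights = (gallery - mean) @ E.T      # gallery projected into face space
    return mean, E, weights

def recognize(probe, mean, E, weights):
    """Return the index of the gallery face nearest to the probe in face space."""
    w = (probe - mean) @ E.T
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))
```

Recognition reduces to a nearest-neighbor search in the low-dimensional eigenface space, which is what makes the PCA approach fast relative to graph-matching methods such as EBGM.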
9

Perez, Gallardo Jorge Raúl. "Ecodesign of large-scale photovoltaic (PV) systems with multi-objective optimization and Life-Cycle Assessment (LCA)". Phd thesis, Toulouse, INPT, 2013. http://oatao.univ-toulouse.fr/10505/1/perez_gallardo_partie_1_sur_2.pdf.

Abstract:
Because of the increasing worldwide demand for energy and the damage caused by heavy use of fossil sources, the contribution of renewable energies to the global energy mix has been increasing significantly, with the aim of moving towards more sustainable development. In this context, this work develops a general methodology for designing PV systems based on ecodesign principles, taking into account techno-economic and environmental considerations simultaneously. To evaluate the environmental performance of PV systems, an environmental assessment technique based on Life Cycle Assessment (LCA) was used. The environmental model was successfully coupled with the design-stage model of a PV grid-connected system (PVGCS). The PVGCS design model was then developed, involving the estimation of solar radiation received at a specific geographic location, the calculation of the annual energy generated from that radiation, the characteristics of the different components, and the evaluation of techno-economic criteria through Energy PayBack Time (EPBT) and PayBack Time (PBT). The performance model was then embedded in an outer multi-objective genetic algorithm optimization loop based on a variant of NSGA-II. A set of Pareto solutions was generated, representing the optimal trade-offs between the objectives considered in the analysis. A multi-variable statistical method, Principal Component Analysis (PCA), was then applied to detect and omit redundant objectives that could be left out of the analysis without disturbing the main features of the solution space. Finally, a decision-making tool based on M-TOPSIS was used to select the alternative providing the best compromise among all the objective functions investigated.
The results showed that while PV modules based on c-Si perform better in energy generation, their environmental impact places them in the last positions; TF PV modules present the best trade-off in all scenarios under consideration. Special attention was paid to the recycling of PV modules, even though enough information is not yet available for all the technologies evaluated, mainly because of the lifetime of PV modules. Data on the recycling processes for m-Si and CdTe PV technologies were introduced into the ecodesign optimization procedure. By considering energy production and EPBT as criteria in bi-objective optimization cases, the importance of end-of-life management of PV modules was confirmed. An economic study of the recycling strategy should be investigated in order to give a more comprehensive view for decision-making.
10

Lacasse, Alexandre. "Bornes PAC-Bayes et algorithmes d'apprentissage". Thesis, Université Laval, 2010. http://www.theses.ulaval.ca/2010/27635/27635.pdf.

Abstract:
The main purpose of this thesis is the theoretical study and design of learning algorithms that return majority-vote classifiers. In particular, we present a PAC-Bayes theorem that bounds, among other things, the variance of the Gibbs loss (in addition to its expectation). From this theorem we deduce a bound on the risk of the majority vote that is tighter than the famous bound based on the Gibbs risk. We also present a theorem that allows us to bound the risk associated with general loss functions. From this theorem, we design learning algorithms that build weighted majority-vote classifiers by minimizing a bound on the risks associated with the linear, quadratic and exponential loss functions, as well as the loss of the multi-draw Gibbs classifier. Some of these algorithms compare favorably with AdaBoost.
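For context, the "famous bound based on the Gibbs risk" that the abstract refers to is the classical factor-of-two relation between the risk of the Q-weighted majority vote $B_Q$ and the risk of the Gibbs classifier $G_Q$:

```latex
R(B_Q) \;\le\; 2\, R(G_Q)
```

It holds because any example misclassified by the majority vote must be misclassified by at least half of the vote's weight. The variance-based PAC-Bayes theorem developed in the thesis is used to derive a bound tighter than this one.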

Books on the topic "PCA ALGORITHM"

1

Karakatič, Sašo, and Iztok Fister ml. Strojno učenje: S Pythonom do prvega klasifikatorja. University of Maribor Press, 2022. http://dx.doi.org/10.18690/um.feri.1.2022.

Abstract:
The book serves as an introduction to machine learning for anyone with at least basic programming experience. It reviews the key concepts of machine learning (knowledge model, training and test sets, learning algorithm) and presents in more detail the classification technique and ways of evaluating the quality of classification models. The k-nearest neighbors classification algorithm is introduced and its use is presented, both conceptually and in program code. The book gives numerous examples in the Python programming language and the Jupyter Notebooks environment, and offers exercises (both computational and programming) with solutions to consolidate the material.
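The k-nearest-neighbors classifier the book builds up to can be sketched in a few lines of plain Python. This is an illustrative sketch, not the book's code:

```python
import math
from collections import Counter

def knn_classify(train, labels, x, k=3):
    """Classify point x by majority vote among its k nearest training points."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], x))[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

The algorithm stores the training set as-is and defers all work to query time, which is why the book can introduce it before any discussion of model fitting.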
2

Coates, Laura C., Arthur Kavanaugh, and Christopher T. Ritchlin. Treatment algorithm and treat to target. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780198737582.003.0032.

Abstract:
This chapter covers the evidence for treatment algorithms and treatment to target in PsA. Evidence for treatment algorithms including step up vs step down approaches to prescribing and early vs delayed treatment is discussed. EULAR recommendations for treating to target in SpA are outlined with a summary of the level of evidence available at that time. Key outcome measures that could be utilized as targets in PsA are reviewed with discussion of their merits and deficiencies. A detailed description of the first treat to target study in PsA is presented: the TICOPA study. The impact of comorbidities on treatment decisions is discussed, both related SpA conditions such as uveitis and inflammatory bowel disease, and non-SpA comorbidities such as the metabolic syndrome and liver disease. Finally, suggestions for translation into clinical practice are outlined, highlighting the need for multi-speciality collaborative working, full assessment of disease activity and subsequent optimal treatment.
3

Fiedler, Klaus, and Florian Kutzner. Pseudocontingencies. Edited by Michael R. Waldmann. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199399550.013.14.

Abstract:
In research on causal inference and in related paradigms (conditioning, cue learning, attribution), it has been traditionally taken for granted that the statistical contingency between cause and effect drives the cognitive inference process. However, while a contingency model implies a cognitive algorithm based on joint frequencies (i.e., the cell frequencies of a 2 x 2 contingency table), recent research on pseudocontingencies (PCs) suggests a different mental algorithm that is driven by base rates (i.e., the marginal frequencies of a 2 x 2 table). When the base rates of two variables are skewed in the same way, a positive contingency is inferred. In contrast, a negative contingency is inferred when base rates are skewed in opposite directions. The chapter describes PCs as a resilient cognitive illusion, as a proxy for inferring contingencies in the absence of solid information, and as a smart heuristic that affords valid inferences most of the time.
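The distinction between the two mental algorithms described above can be made concrete with a 2 x 2 table whose cells are statistically independent but whose base rates are skewed the same way; the numbers below are illustrative:

```python
import math

def phi_coefficient(a, b, c, d):
    """True contingency computed from the joint (cell) frequencies
    of a 2x2 table: [[a, b], [c, d]]."""
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# Independent data with aligned skewed base rates: P(A) = P(B) = 0.8
a, b, c, d = 64, 16, 16, 4          # AB, A-notB, notA-B, notA-notB

true_contingency = phi_coefficient(a, b, c, d)   # zero: no real association

# A base-rate (marginal) heuristic only sees that A and B are both frequent,
# so it infers a positive contingency where none exists
pA, pB = (a + b) / 100, (a + c) / 100
pc_inference = "positive" if (pA - 0.5) * (pB - 0.5) > 0 else "non-positive"
```

The cell-based algorithm correctly reports zero association, while the marginal-based shortcut infers a positive one, which is exactly the pseudocontingency illusion the chapter analyzes.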
4

Capps, Carlos H. Setup reduction in PCB assembly: A group technology application using Genetic Algorithms. 1997.

5

Fragoulis, George E., and Iain B. McInnes. Small molecules in the treatment of psoriatic arthritis. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780198737582.003.0031.

Abstract:
Psoriatic arthritis (PsA) is a chronic inflammatory arthritis, occurring in about one third of psoriasis patients, and exhibiting very varied clinical manifestations and comorbidities. Although the clinical outcome of the disease has been significantly improved recently, mainly due to utilization of novel agents targeting the IL-23/-17 axis, unmet needs still exist. Emerging insights into the disease’s pathogenesis led to development of new drugs acting against critical molecular targets and their efficacy in psoriasis and/or PsA has been tested in Phase III clinical trials. Some of these therapeutic regimens have been already approved; some look promising but others have been proven inefficacious. Future studies are expected to determine the place of these compounds within the therapeutic algorithm of PsA. In this chapter we describe the rationale and clinical impact of incorporating small molecules in PsA treatment, as well as specific molecules involved in pathogenesis that may serve as therapeutic targets.
6

Pittenger, Arthur O. An Introduction to Quantum Computing Algorithms (Progress in Computer Science and Applied Logic (PCS)). Birkhäuser Boston, 2001.

7

Pazzi, Annalisa. Applicazioni Di Grafi e Algoritmi Alla Fuga Di Pac-Man Dal Ghosts Team: Codice Completo in Linguaggio C. Independently Published, 2020.

8

AlJaroudi, Wael. Myocardial Perfusion Imaging Before and After Cardiac Revascularization. Oxford University Press, 2015. http://dx.doi.org/10.1093/med/9780199392094.003.0015.

Abstract:
Coronary artery disease (CAD) remains the leading cause of morbidity and mortality worldwide. While the burden of the disease remains high, the rate of death attributable to CAD declined by almost a third between 1998 and 2008. In patients with stable ischemic heart disease (SIHD), the data supporting a survival benefit from coronary artery bypass graft surgery (CABG) or percutaneous coronary intervention (PCI) versus no revascularization are outdated given recent advances in medical therapy. Over the years, myocardial perfusion imaging (MPI) has played a significant role in detecting ischemic burden, risk-stratifying patients, and guiding physicians to the best treatment strategy. Contrary to data from other trials, the role of stress MPI has been downplayed in more contemporary randomized clinical trials that failed to show that ischemic burden identifies the ideal candidate for revascularization or carries incremental prognostic value. Hence, there is equipoise on the role of MPI in the management of patients prior to revascularization. The role of stress MPI post-revascularization has also been evaluated in multiple studies, mostly conducted in the last decade or earlier. The guidelines advocate against routine stress MPI in asymptomatic patients (unless 5 or more years post-CABG), but allow it in the presence or recurrence of symptoms. The current chapter reviews the data on survival benefit from revascularization, the complementary role of stress MPI in selecting appropriate candidates for revascularization, the prognostic value of ischemic versus atherosclerotic burden, the role of MPI post-revascularization, updated guidelines, and proposed algorithms to guide treating physicians.
9

Michael, Blair, Walker George, and Willey Stuart, eds. Financial Markets and Exchanges Law. 3rd ed. Oxford University Press, 2021. http://dx.doi.org/10.1093/law/9780198827528.001.0001.

Abstract:
The book provides a comprehensive and authoritative analysis of the regulation of financial markets and market infrastructure. It focuses on stock markets and exchanges, associated trading, clearing and settlement, and on payment systems, set in their historical and current contexts. This new edition addresses a number of major developments that have affected the UK, wider European and international financial markets: within the UK, the PRA, the FCA and the Bank of England have become established financial regulators, each with its distinguishing responsibilities; MiFID has been substantially revised and strengthened through new directly applicable EU regulation; MiFID 2 addresses the challenges posed by fast technology such as high-frequency and algorithmic trading; and new technology is beginning to make an impact on the infrastructure of financial markets. The edition includes updated content on the growing importance of financial technology, with two new chapters on its emerging impact on markets and on the regulation of markets. There is also a new chapter on MiFID 2 and MiFIR, the new securities trading architecture that introduces a new trading venue as well as significant changes to the pre- and post-trade transparency and reporting regime. The introduction of mandatory trading of derivatives on trading venues is addressed, together with the related post-EMIR regime for the mandatory clearing of certain classes of derivatives. Chapters on the role of the European Commission and ESMA have been updated, and consideration is given to the possible implications of Brexit for market location and access.

Book chapters on the topic "PCA ALGORITHM"

1

Tao, Yang, and Yuanzi He. "Improved PCA Face Recognition Algorithm". In Communications in Computer and Information Science, 602–11. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-7981-3_44.
2

Verboven, Sabine, Peter J. Rousseeuw, and Mia Hubert. "An improved algorithm for robust PCA". In COMPSTAT, 481–86. Heidelberg: Physica-Verlag HD, 2000. http://dx.doi.org/10.1007/978-3-642-57678-2_67.
3

Ding, Shifei, Weikuan Jia, Chunyang Su, Xinzheng Xu, and Liwen Zhang. "PCA-Based Elman Neural Network Algorithm". In Advances in Computation and Intelligence, 315–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-92137-0_35.
4

Salgado, Paulo, and Getúlio Igrejas. "A PCA-Fuzzy Clustering Algorithm for Contours Analysis". In Advances in Intelligent and Soft Computing, 305–11. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24001-0_28.
5

Zhang, Haigang, Yixin Yin, Sen Zhang, and Changyin Sun. "An Improved ELM Algorithm Based on PCA Technique". In Proceedings in Adaptation, Learning and Optimization, 95–104. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-14066-7_10.
6

Draganov, Ivo, Roumen Kountchev, and Veska Georgieva. "Medical Images Transform by Multistage PCA-Based Algorithm". In Advances in Intelligent Analysis of Medical Data and Decision Support Systems, 89–100. Heidelberg: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-00029-9_8.
7

Das, Debasis, and Rajiv Misra. "A Parallel AES Encryption Algorithm Based on PCA". In Advances in Parallel Distributed Computing, 238–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24037-9_23.
8

Wen, Shicheng, Shijin Yuan, Bin Mu, and Hongyu Li. "Robust PCA-Based Genetic Algorithm for Solving CNOP". In Intelligent Computing Theories and Methodologies, 597–606. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-22180-9_59.
9

Babu, Mylam Chinnappan, and Sangaralingam Pushpa. "Genetic Algorithm-Based PCA Classification for Imbalanced Dataset". In Intelligent Computing in Engineering, 541–52. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-2780-7_59.
10

Chang, W. H., and M. C. Cheng. "Image Retrieval by Auto Weight Regulation PCA Algorithm". In Intelligent Systems Design and Applications, 383–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-44999-7_37.

Conference papers on the topic "PCA ALGORITHM"

1

Dagher, Issam. "Incremental PCA-LDA algorithm". In 2010 IEEE International Conference on Computational Intelligence for Measurement Systems and Applications (CIMSA). IEEE, 2010. http://dx.doi.org/10.1109/cimsa.2010.5611752.
2

Salgado, Paulo, Lio Gonçalves, Getúlio Igrejas, Theodore E. Simos, George Psihoyios, Ch Tsitouras, and Zacharias Anastassi. "Sliding PCA Fuzzy Clustering Algorithm". In NUMERICAL ANALYSIS AND APPLIED MATHEMATICS ICNAAM 2011: International Conference on Numerical Analysis and Applied Mathematics. AIP, 2011. http://dx.doi.org/10.1063/1.3637005.
3

Feng, Xu, and Wenjian Yu. "A Fast Adaptive Randomized PCA Algorithm". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/411.

Abstract:
It is desirable to adaptively determine the number of dimensions (rank) for PCA according to a given tolerance of low-rank approximation error. In this work, we aim to develop a fast algorithm solving this adaptive PCA problem. We propose to replace the QR factorization in the randQB_EI algorithm with matrix multiplication and inversion of small matrices, and propose a new error indicator to incrementally evaluate the approximation error in the Frobenius norm. Combining the shifted power iteration technique for better accuracy, we finally build an algorithm named farPCA. Experimental results show that farPCA is much faster than the baseline methods (randQB_EI, randUBV and svds) in practical settings of multi-threaded computing, while producing nearly optimal results for adaptive PCA.
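The adaptive idea, growing a randomized basis blockwise while monitoring a Frobenius-norm error indicator, can be sketched as follows. This follows the general randQB-style scheme the abstract builds on; it is an illustrative reconstruction, not the farPCA implementation (no shifted power iteration, and it keeps the QR step that the paper proposes to replace):

```python
import numpy as np

def adaptive_rand_pca(A, tol, block=10):
    """Grow a randomized basis Q until the relative Frobenius error of the
    low-rank approximation A ~= Q (Q^T A) drops below tol."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    Q = np.empty((m, 0))
    norm2 = np.linalg.norm(A, "fro") ** 2
    err2 = norm2
    while err2 > (tol ** 2) * norm2 and Q.shape[1] < min(m, n):
        Y = A @ rng.normal(size=(n, block))   # random sketch of range(A)
        Y -= Q @ (Q.T @ Y)                    # deflate directions already found
        Qi, _ = np.linalg.qr(Y)
        Q = np.hstack([Q, Qi])
        B = Q.T @ A
        # error indicator: ||A - Q Q^T A||_F^2 = ||A||_F^2 - ||B||_F^2
        err2 = norm2 - np.linalg.norm(B, "fro") ** 2
    B = Q.T @ A
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U, s, Vt                       # truncated SVD factors of A
```

If A is column-centered, the rows of Vt are principal directions and s the component scales; the rank is discovered adaptively rather than fixed in advance, which is the problem setting of the paper.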
4

Hasanbelliu, Erion, Luis Sanchez Giraldo, and Jose C. Principe. "A Recursive Online Kernel PCA Algorithm". In 2010 20th International Conference on Pattern Recognition (ICPR). IEEE, 2010. http://dx.doi.org/10.1109/icpr.2010.50.
5

Wang, Qianqian, Quanxue Gao, Xinbo Gao, and Feiping Nie. "Angle Principal Component Analysis". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/409.

Abstract:
Recently, many ℓ1-norm based PCA methods have been developed for dimensionality reduction, but they do not explicitly consider the reconstruction error, nor do they take into account the relationship between reconstruction error and the variance of the projected data, which reduces their robustness. To handle this problem, a novel formulation for PCA, namely angle PCA, is proposed. Angle PCA employs the ℓ2-norm to measure the reconstruction error and the variance of the projected data, and maximizes the summation, over the data, of the ratio between variance and reconstruction error. Angle PCA is not only robust to outliers but also retains PCA's desirable properties, such as rotational invariance. To solve Angle PCA, we propose an iterative algorithm that has a closed-form solution in each iteration. Extensive experiments on several face image databases illustrate that our method is overall superior to other robust PCA algorithms, such as PCA, PCA-L1 greedy, PCA-L1 nongreedy and HQ-PCA.
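One plausible formalization of the stated objective (the per-datum ratio of projected variance to reconstruction error, both measured in the ℓ2-norm, with an orthonormal projection $W$ and centered data $x_i$) is:

```latex
\max_{W^{\top}W = I} \;\sum_{i=1}^{n}
  \frac{\lVert W^{\top}x_i \rVert_2}
       {\lVert x_i - W W^{\top} x_i \rVert_2}
```

Each ratio is the cotangent of the angle between $x_i$ and the subspace spanned by $W$, which is consistent with the method's name; this reading is inferred from the abstract, not quoted from the paper.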
6

Bansal, Abhishek, Kapil Mehta, and Sahil Arora. "Face Recognition Using PCA and LDA Algorithm". In Communication Technologies (ACCT). IEEE, 2012. http://dx.doi.org/10.1109/acct.2012.52.
7

Li, H. L., W. Quan, G. Ji, and Z. H. Qian. "Wireless Indoor Positioning Algorithm Based on PCA". In 2015 International Conference on Artificial Intelligence and Industrial Engineering. Paris, France: Atlantis Press, 2015. http://dx.doi.org/10.2991/aiie-15.2015.3.
8

Melikyan, Vazgen, and Hasmik Osipyan. "Modified fast PCA algorithm on GPU architecture". In 2014 East-West Design & Test Symposium (EWDTS). IEEE, 2014. http://dx.doi.org/10.1109/ewdts.2014.7027099.
9

Fengxiang Jin and Shifei Ding. "An improved PCA algorithm based on WIF". In 2008 IEEE International Joint Conference on Neural Networks (IJCNN 2008 - Hong Kong). IEEE, 2008. http://dx.doi.org/10.1109/ijcnn.2008.4634006.
10

Li, Yuan-long, Jun Zhang, and Wei-neng Chen. "Differential evolution algorithm with PCA-based crossover". In the fourteenth international conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2330784.2331018.

Reports on the topic "PCA ALGORITHM"

1

Zhao, George, Grang Mei, Bulent Ayhan, Chiman Kwan, and Venu Varma. DTRS57-04-C-10053 Wave Electromagnetic Acoustic Transducer for ILI of Pipelines. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), March 2005. http://dx.doi.org/10.55274/r0012049.

Abstract:
In this project, Intelligent Automation, Incorporated (IAI) and Oak Ridge National Lab (ORNL) propose a novel, integrated approach to inspect mechanical dents and metal loss in pipelines. It combines the state-of-the-art SH-wave Electromagnetic Acoustic Transducer (EMAT) technique, through detailed numerical modeling, data collection instrumentation, and advanced signal processing and pattern classification, to detect and characterize mechanical defects in underground pipeline transportation infrastructure. The technique has four components: (1) thorough guided wave modal analysis; (2) a recently developed three-dimensional (3-D) Boundary Element Method (BEM) for selecting the best operational condition and extracting defect features; (3) ultrasonic Shear Horizontal (SH) wave EMAT sensor design and data collection; and (4) advanced signal processing algorithms, such as a nonlinear split-spectrum filter, Principal Component Analysis (PCA) and Discriminant Analysis (DA), for signal-to-noise-ratio enhancement, crack signature extraction and pattern classification. This technology not only addresses the problems with existing methods, detecting mechanical dents and metal loss in pipelines consistently and reliably, but can also determine defect shape and size to a certain extent.
2

Poovendran, Radha, and Brian Matt. Security Analysis and Extensions of the PCB Algorithm for Distributed Key Generation. Fort Belvoir, VA: Defense Technical Information Center, January 2005. http://dx.doi.org/10.21236/ada459087.
3

Searcy, Stephen W., and Kalman Peleg. Adaptive Sorting of Fresh Produce. United States Department of Agriculture, August 1993. http://dx.doi.org/10.32747/1993.7568747.bard.

Abstract:
This project includes two main parts: development of a "Selective Wavelength Imaging Sensor" and of an "Adaptive Classifier System" for adaptive imaging and sorting of agricultural products, respectively. Three technologies were investigated for building a selectable-wavelength imaging sensor: diffraction gratings, tunable filters and linear variable filters. Each technology was analyzed and evaluated as the basis for implementing the adaptive sensor, and acousto-optic tunable filters were found most suitable. Consequently, a selectable-wavelength imaging sensor was constructed and tested using the selected technology, and algorithms for multispectral image acquisition were developed. A high-speed inspection system for fresh-market carrots was built and tested. It was shown that a combination of efficient parallel processing by a DSP and a PC-based host CPU, in conjunction with a hierarchical classification system, yielded an inspection system capable of handling 2 carrots per second with a classification accuracy of more than 90%. The adaptive sorting technique was extensively investigated and conclusively demonstrated to reduce misclassification rates in comparison to conventional non-adaptive sorting. The adaptive classifier algorithm was modeled and reduced to a series of modules that can be added to any existing produce sorting machine. A simulation of the entire process was created in Matlab using a graphical user interface to make the difficult theoretical subjects more accessible. Typical grade classifiers based on k-nearest neighbor techniques and linear discriminants were implemented. The sample histogram, estimating the cumulative distribution function (CDF), was chosen as a characterizing feature of prototype populations, whereby the Kolmogorov-Smirnov statistic was employed as a population classifier.
Simulations were run on artificial data with two dimensions, four populations and three classes. A quantitative analysis of the adaptive classifier's dependence on population separation, training set size and stack length determined optimal values for the parameters involved. The technique was also applied to a real produce sorting problem: an automatic machine for sorting dates by machine vision in an Israeli date packinghouse. Extensive simulations were run on actual sorting data of dates collected over a 4-month period. In all cases, the results showed a clear reduction in classification error when using the adaptive technique versus non-adaptive sorting.
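The population classifier described above, empirical CDFs compared through the Kolmogorov-Smirnov statistic, can be sketched in plain Python; the helper names are illustrative, not from the report:

```python
def ks_statistic(sample1, sample2):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the two empirical CDFs."""
    s1, s2 = sorted(sample1), sorted(sample2)
    n1, n2 = len(s1), len(s2)
    d = 0.0
    for x in sorted(set(s1) | set(s2)):
        cdf1 = sum(v <= x for v in s1) / n1
        cdf2 = sum(v <= x for v in s2) / n2
        d = max(d, abs(cdf1 - cdf2))
    return d

def classify_population(sample, prototypes):
    """Assign the sample to the prototype population whose empirical
    CDF is closest in the KS sense."""
    return min(prototypes, key=lambda name: ks_statistic(sample, prototypes[name]))
```

Because the KS distance compares whole distributions rather than individual points, a batch of measurements can be matched to the nearest prototype population, which is the role the sample histogram plays in the adaptive classifier.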
4

Panek. PR-312-12208-R01 Plume Volume Molar Ratio Method Assumptions and Conservative Model Over-Predictions. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), April 2013. http://dx.doi.org/10.55274/r0010806.

Abstract:
With the introduction of the more stringent short-term 1-hour NO2 air quality standard, the use of redundant, overly conservative assumptions (e.g., permit allowable emissions, an hourly rate based on maximum annual tons per year) is no longer appropriate and potentially misinforms and obfuscates modeling compliance demonstrations. One of the challenges of modeling oxides of nitrogen (NOx) emissions is determining the amount of total NOx that will exist in the form of nitrogen dioxide (NO2) at a receptor. Combustion source emissions usually contain mostly nitric oxide (NO), which is not a regulated criteria pollutant. The Plume Volume Molar Ratio Method (PVMRM) is a higher-fidelity "Tier 3" model option contained within the USEPA AERMOD dispersion model for the purpose of reducing the inherent conservatism associated with predicting the NO2 fraction within a plume. Tier 3 approaches must be approved on a case-by-case basis by the regulatory agency. The American Petroleum Institute (API), the Council of Industrial Boiler Owners (CIBO), the Electric Power Research Institute (EPRI), the American Forest and Paper Association (AF&PA) and other trade associations and stakeholders have undertaken initial investigatory efforts into the PVMRM algorithm and model chemistry formulation. The initial results of this work were presented at the EPA 10th Modeling Conference on March 13 through 15, 2012. Section 320 of the Clean Air Act (CAA) requires such a conference to be held every 3 years; its purpose is to provide an overview of the latest features of the agency's preferred air quality models and a forum for public review and comment on how the agency determines and applies air quality models in the future.
This report provides a technical description of the current state of the Plume Volume Molar Ratio Method (PVMRM) currently used by EPA in the AERMOD model and the Air Quality Modeling Guidance contained in Appendix W. This paper examines the conservative assumptions used by EPA that lead to conservative model over-predictions, work that was recently presented at the 10th Modeling Conference on PVMRM, and gaps and shortcomings for existing reciprocating compressors. This report also summarizes the initial findings from these efforts and identifies potential gaps (e.g. shorter stacks and lower in-stack NO/NO2 ratios) in the analysis as it may relate to existing reciprocating engine drivers.
