A selection of scholarly literature on the topic "FEATURE SELECTION TECHNIQUE"

Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "FEATURE SELECTION TECHNIQUE".

Journal articles on the topic "FEATURE SELECTION TECHNIQUE"

1

Sharaff, Aakanksha, Naresh Kumar Nagwani, and Kunal Swami. "Impact of Feature Selection Technique on Email Classification." International Journal of Knowledge Engineering-IACSIT 1, no. 1 (2015): 59–63. http://dx.doi.org/10.7763/ijke.2015.v1.10.

2

Salama, Mostafa A., and Ghada Hassan. "A Novel Feature Selection Measure Partnership-Gain." International Journal of Online and Biomedical Engineering (iJOE) 15, no. 04 (February 27, 2019): 4. http://dx.doi.org/10.3991/ijoe.v15i04.9831.

Abstract:
Multivariate feature selection techniques search for the optimal feature subset to reduce the dimensionality and hence the complexity of a classification task. Statistical feature selection techniques measure the mutual correlation between features as well as the correlation of each feature to the target feature. However, adding a feature to a feature subset could deteriorate the classification accuracy even though this feature positively correlates to the target class. Although most existing feature ranking/selection techniques consider the interdependency between features, the nature of the interaction between features in relation to the classification problem is still not well investigated. This study proposes a forward feature selection technique that calculates a novel measure, Partnership-Gain, to select a subset of features whose partnership constructively correlates to the target feature classification. Comparative analysis against other well-known techniques shows that the proposed technique has either enhanced or comparable classification accuracy on the datasets studied. We present a visualization of the degree and direction of the proposed measure of features' partnerships for a better understanding of the measure's nature.
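For illustration, here is a minimal scikit-learn sketch of greedy forward selection in which each candidate feature is scored by the cross-validated gain it adds to the current subset. This is a generic stand-in, not the Partnership-Gain measure defined in the paper, and the synthetic data and classifier are assumptions.

```python
# Minimal sketch of greedy forward feature selection, assuming a generic
# "gain when added to the current subset" criterion (NOT the paper's
# Partnership-Gain measure, which is defined in the article itself).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           n_redundant=5, random_state=0)
clf = LogisticRegression(max_iter=1000)

selected, remaining = [], list(range(X.shape[1]))
best_score = 0.0
while remaining:
    # Score every candidate by the CV accuracy of (current subset + candidate).
    gains = {f: cross_val_score(clf, X[:, selected + [f]], y, cv=5).mean()
             for f in remaining}
    f_best, score = max(gains.items(), key=lambda kv: kv[1])
    if score <= best_score:          # stop when no candidate adds gain
        break
    selected.append(f_best)
    remaining.remove(f_best)
    best_score = score

print("selected features:", selected, "CV accuracy: %.3f" % best_score)
```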
3

Sikri, Alisha, N. P. Singh, and Surjeet Dalal. "Analysis of Rank Aggregation Techniques for Rank Based on the Feature Selection Technique." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 3s (March 11, 2023): 95–108. http://dx.doi.org/10.17762/ijritcc.v11i3s.6160.

Abstract:
In order to improve classification accuracy and lower future computation and data collection costs, feature selection is the process of choosing the most crucial features from a group of attributes and removing the less crucial or redundant ones. To narrow down the features that need to be analyzed, a variety of feature selection procedures have been detailed in the published literature. Chi-Square (CS), Information Gain (IG), Relief, Gain Ratio (GR), Symmetrical Uncertainty (SU), and Mutual Information (MI) are the six alternative feature selection methods used in this study. Based on the outcomes of these six methods, the resulting rankings are aggregated using four rank aggregation strategies: "rank aggregation", the "Borda Count (BC) methodology", "score and rank combination", and "unified feature scoring" (UFS). These four procedures by themselves were unable to generate a clear selection rank for the features, so an ensemble of the aggregated ranks is carried out to produce the final feature ranking; for this, the bagging method of majority voting was applied.
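As an illustration of the aggregation step, here is a hedged sketch of Borda-count rank aggregation over a few filter rankings; chi-square, the ANOVA F-ratio and mutual information stand in for the six filters used in the paper, and the synthetic data is an assumption.

```python
# Sketch of Borda-count rank aggregation over a few filter rankings
# (chi-square, ANOVA F-ratio and mutual information here stand in for the
# six filters used in the paper; the aggregation logic is the same idea).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=400, n_features=15, n_informative=4,
                           random_state=1)
X_pos = MinMaxScaler().fit_transform(X)        # chi2 needs non-negative input

scores = [chi2(X_pos, y)[0], f_classif(X, y)[0], mutual_info_classif(X, y)]
n = X.shape[1]

# Borda count: a feature ranked r-th (0 = best) by a filter earns n - 1 - r points.
borda = np.zeros(n)
for s in scores:
    order = np.argsort(-s)                     # best feature first
    for points, feat in enumerate(order[::-1]):
        borda[feat] += points

consensus = np.argsort(-borda)
print("consensus ranking (best first):", consensus)
```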
4

Goswami, Saptarsi, Amit Kumar Das, Amlan Chakrabarti, and Basabi Chakraborty. "A feature cluster taxonomy based feature selection technique." Expert Systems with Applications 79 (August 2017): 76–89. http://dx.doi.org/10.1016/j.eswa.2017.01.044.

5

Jain, Rahi, and Wei Xu. "HDSI: High dimensional selection with interactions algorithm on feature selection and testing." PLOS ONE 16, no. 2 (February 16, 2021): e0246159. http://dx.doi.org/10.1371/journal.pone.0246159.

Abstract:
Feature selection on high-dimensional data along with the interaction effects is a critical challenge for classical statistical learning techniques. Existing feature selection algorithms such as random LASSO leverage LASSO's capability to handle high-dimensional data. However, the technique has two main limitations, namely the inability to consider interaction terms and the lack of a statistical test for determining the significance of selected features. This study proposes the High Dimensional Selection with Interactions (HDSI) algorithm, a new feature selection method which can handle high-dimensional data, incorporate interaction terms, provide statistical inference for selected features and leverage the capability of existing classical statistical techniques. The method allows the application of any statistical technique, like LASSO and subset selection, on multiple bootstrapped samples, each containing randomly selected features. Each bootstrap sample incorporates interaction terms for the randomly sampled features. The selected features from each model are pooled and their statistical significance is determined. The selected statistically significant features are used as the final output of the approach, whose final coefficients are estimated using appropriate statistical techniques. The performance of HDSI is evaluated using both simulated data and real studies. In general, HDSI outperforms commonly used algorithms such as LASSO, subset selection, adaptive LASSO, random LASSO and group LASSO.
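A rough sketch of the bootstrapped-LASSO-with-interactions idea is given below. The published HDSI algorithm additionally samples random feature subsets per bootstrap and applies a formal significance test, so the majority-vote threshold and synthetic data used here are simplifying assumptions.

```python
# Rough sketch of the bootstrapped-LASSO-with-interactions idea behind HDSI.
# The published algorithm also samples random feature subsets per bootstrap
# and tests the pooled coefficients for significance; this only illustrates
# pooling LASSO selections across bootstraps with interaction terms included.
import numpy as np
from collections import Counter
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=200, n_features=10, n_informative=4,
                       noise=5.0, random_state=0)

poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
Xi = poly.fit_transform(X)                      # main effects + pairwise interactions
names = poly.get_feature_names_out()
Xi = StandardScaler().fit_transform(Xi)

counts = Counter()
for b in range(30):                             # bootstrap resamples
    idx = rng.choice(len(y), size=len(y), replace=True)
    lasso = LassoCV(cv=5).fit(Xi[idx], y[idx])
    counts.update(np.flatnonzero(lasso.coef_ != 0))

# Keep terms selected in a majority of bootstraps (a crude stand-in for the
# statistical test used in the paper).
stable = [names[i] for i, c in counts.items() if c >= 15]
print("stable terms:", sorted(stable))
```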
6

Ramineni, Vyshnavi, and Goo-Rak Kwon. "Diagnosis of Alzheimer’s Disease using Wrapper Feature Selection Method." Korean Institute of Smart Media 12, no. 3 (April 30, 2023): 30–37. http://dx.doi.org/10.30693/smj.2023.12.3.30.

Abstract:
Alzheimer's disease (AD) can currently only be slowed rather than cured, so early diagnosis is essential and research is still ongoing. Accordingly, several machine learning classification models based on T1-weighted images have been proposed to identify AD. In this paper, we consider improved feature selection to reduce complexity, using wrapper techniques and a Restricted Boltzmann Machine (RBM). The present work used subcortical and cortical sMRI features of 278 subjects from the ADNI dataset to identify AD. Multi-class classification is used for the experiment, i.e., AD, EMCI, LMCI, and HC. The proposed feature selection consists of forward feature selection, backward feature selection, and a combined PCA & RBM approach. Forward and backward feature selection are iterative methods, starting with no features in forward selection and with all features included in backward selection. PCA is used to reduce the dimensions, and the RBM is used to select the best features without interpreting them. We compared the three models against PCA in the analysis. The experiments show that combined PCA & RBM and backward feature selection give the best accuracy with the random forest (RF) classifier, i.e., 88.65% and 88.56%, respectively.
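For readers who want to try the wrapper part of this pipeline, a minimal sketch using scikit-learn's SequentialFeatureSelector follows; the random forest, synthetic four-class data and subset size are assumptions, and the PCA & RBM branch is not reproduced.

```python
# Minimal sketch of wrapper-style forward and backward feature selection with
# a random forest, using synthetic data in place of the ADNI cortical and
# subcortical features (the RBM/PCA branch of the paper is not reproduced).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=30, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0)

for direction in ("forward", "backward"):
    sfs = SequentialFeatureSelector(rf, n_features_to_select=8,
                                    direction=direction, cv=3, n_jobs=-1)
    X_sel = sfs.fit_transform(X, y)
    acc = cross_val_score(rf, X_sel, y, cv=3).mean()
    print(direction, "selected:", sfs.get_support(indices=True),
          "CV accuracy: %.3f" % acc)
```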
7

Zabidi, A., W. Mansor, and Khuan Y. Lee. "Optimal Feature Selection Technique for Mel Frequency Cepstral Coefficient Feature Extraction in Classifying Infant Cry with Asphyxia." Indonesian Journal of Electrical Engineering and Computer Science 6, no. 3 (June 1, 2017): 646. http://dx.doi.org/10.11591/ijeecs.v6.i3.pp646-655.

Abstract:
Mel Frequency Cepstral Coefficients are an efficient feature representation for human-audible audio signals. However, the representation is large and redundant, so feature selection is required to select the optimal subset of Mel Frequency Cepstral Coefficient features. The performance of two feature selection techniques, Orthogonal Least Squares (OLS) and F-ratio, for selecting Mel Frequency Cepstral Coefficient features of infant cries with asphyxia was examined. OLS selects the feature subset based on its contribution to the reduction of error, while the F-ratio selects features according to their discriminative abilities. The feature selection techniques were combined with a Multilayer Perceptron to distinguish between asphyxiated and normal infant cry signals. The performance of the feature selection methods was assessed by analysing the Multilayer Perceptron classification accuracy resulting from each combination. The results indicate that Orthogonal Least Squares is the most suitable feature selection method for classifying infant cry with asphyxia, since it produces the highest classification accuracy.
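A small sketch of the F-ratio route is shown below, using scikit-learn's ANOVA F-test as the F-ratio criterion. The random feature matrix stands in for MFCCs (which would normally come from an audio front-end such as librosa), and the network size is an assumption.

```python
# Sketch of F-ratio based selection followed by a Multilayer Perceptron.
# The ANOVA F-test (f_classif) plays the role of the F-ratio criterion; random
# data stands in for the MFCC matrix, which would normally be computed from
# audio with a front-end such as librosa.feature.mfcc.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=400, n_features=40, n_informative=8,
                           random_state=0)            # stand-in for MFCC features

pipe = make_pipeline(StandardScaler(),
                     SelectKBest(f_classif, k=12),    # keep the 12 best F-ratios
                     MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                   random_state=0))
print("CV accuracy: %.3f" % cross_val_score(pipe, X, y, cv=5).mean())
```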
8

Miftahushudur, Tajul, Chaeriah Bin Ali Wael, and Teguh Praludi. "Infinite Latent Feature Selection Technique for Hyperspectral Image Classification." Jurnal Elektronika dan Telekomunikasi 19, no. 1 (August 31, 2019): 32. http://dx.doi.org/10.14203/jet.v19.32-37.

Abstract:
Classification is one of the most crucial processes in hyperspectral imaging. One of its limitations when using machine learning techniques is complexity, since a hyperspectral image has thousands of bands that can each be used as a feature for learning. This paper presents a comparison between two probability-based feature selection techniques that can not only tackle this problem but also improve accuracy. Infinite Latent Feature Selection (ILFS) and Relief are applied to a hyperspectral image to select the most important features or bands before classification with a Support Vector Machine (SVM). The results show that ILFS improves classification accuracy more than Relief (92.21% vs. 88.10%). However, Relief reaches its best accuracy with fewer features, needing only 6 compared with 9 for ILFS.
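Since neither ILFS nor Relief ships with scikit-learn, the sketch below uses mutual information purely to illustrate the workflow the paper follows (rank the bands, keep the top k, classify with an SVM); the synthetic data and values of k are assumptions.

```python
# Illustrative band-selection-then-SVM workflow. Mutual information is a
# stand-in ranking criterion: it is NOT ILFS or Relief, which are the methods
# actually compared in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=100, n_informative=10,
                           random_state=0)           # stand-in for band spectra

ranking = np.argsort(-mutual_info_classif(X, y, random_state=0))
for k in (6, 9, 20):
    acc = cross_val_score(SVC(kernel="rbf"), X[:, ranking[:k]], y, cv=5).mean()
    print(f"top {k} bands -> CV accuracy {acc:.3f}")
```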
9

Saifan, Ahmad A., and Lina Abu-wardih. "Software Defect Prediction Based on Feature Subset Selection and Ensemble Classification." ECTI Transactions on Computer and Information Technology (ECTI-CIT) 14, no. 2 (October 9, 2020): 213–28. http://dx.doi.org/10.37936/ecti-cit.2020142.224489.

Abstract:
Two primary issues have emerged in the machine learning and data mining community: how to deal with imbalanced data and how to choose appropriate features. These are of particular concern in the software engineering domain, and more specifically the field of software defect prediction. This research highlights a procedure which includes a feature selection technique to single out relevant attributes, and an ensemble technique to handle the class-imbalance issue. In order to determine the advantages of feature selection and ensemble methods we look at two potential scenarios: (1) Ensemble models constructed from the original datasets, without feature selection; (2) Ensemble models constructed from the reduced datasets after feature selection has been applied. Four feature selection techniques are employed: Principal Component Analysis (PCA), Pearson’s correlation, Greedy Stepwise Forward selection, and Information Gain (IG). The aim of this research is to assess the effectiveness of feature selection techniques using ensemble techniques. Five datasets, obtained from the PROMISE software depository, are analyzed; tentative results indicate that ensemble methods can improve the model's performance without the use of feature selection techniques. PCA feature selection and bagging based on K-NN perform better than both bagging based on SVM and boosting based on K-NN and SVM, and feature selection techniques including Pearson’s correlation, Greedy stepwise, and IG weaken the ensemble models’ performance.
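One of the better-performing combinations reported above (PCA followed by bagged K-NN) can be sketched as follows; the synthetic, imbalanced data stands in for the PROMISE datasets, and the component and estimator counts are assumptions.

```python
# Sketch of PCA feature reduction followed by a bagged K-NN ensemble, evaluated
# with ROC AUC as in the study. Synthetic, imbalanced data replaces the PROMISE
# defect datasets.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=600, n_features=25, n_informative=8,
                           weights=[0.85, 0.15], random_state=0)   # class imbalance

model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),
                      BaggingClassifier(KNeighborsClassifier(n_neighbors=5),
                                        n_estimators=25, random_state=0))
print("CV ROC AUC: %.3f" %
      cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```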
10

Ali, Tariq, Asif Nawaz, and Hafiza Ayesha Sadia. "Genetic Algorithm Based Feature Selection Technique for Electroencephalography Data." Applied Computer Systems 24, no. 2 (December 1, 2019): 119–27. http://dx.doi.org/10.2478/acss-2019-0015.

Abstract:
High dimensionality is a well-known problem in which the data contain a huge number of features, yet not all of them are helpful for a particular data mining task, for example classification or clustering. Therefore, feature selection is frequently used to reduce the dimensionality of a data set. Feature selection is a multi-objective task, which reduces dataset dimensionality, decreases the running time, and also enhances the expected accuracy. In this study, our goal is to reduce the number of features of electroencephalography data for eye-state classification and achieve the same or even better classification accuracy with the smallest number of features. We propose a genetic algorithm-based feature selection technique with the KNN classifier. The accuracy obtained with the feature subset selected by the proposed technique is improved compared to the full feature set. Results show that the classification accuracy of the proposed strategy is enhanced by 3% on average when contrasted with the accuracy without feature selection.
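A compact, illustrative genetic-algorithm wrapper around a K-NN classifier is sketched below; the operators (tournament selection, uniform crossover, bit-flip mutation), parameters and synthetic data are assumptions and need not match the paper's configuration.

```python
# Compact genetic-algorithm feature selection wrapped around a K-NN classifier.
# Chromosomes are feature bit-masks; the operators and parameters here are
# illustrative, not necessarily those used in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=14, n_informative=5,
                           random_state=0)       # stand-in for EEG eye-state data
knn = KNeighborsClassifier(n_neighbors=5)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(knn, X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1])) < 0.5          # random boolean population
for gen in range(15):
    fit = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[fit.argmax()].copy()]          # elitism: keep the best mask
    while len(new_pop) < len(pop):
        # tournament selection of two parents
        a, b = (pop[max(rng.choice(len(pop), 3), key=lambda i: fit[i])]
                for _ in range(2))
        cut = rng.random(X.shape[1]) < 0.5        # uniform crossover
        child = np.where(cut, a, b)
        flip = rng.random(X.shape[1]) < 0.05      # bit-flip mutation
        new_pop.append(child ^ flip)
    pop = np.array(new_pop)

fit = np.array([fitness(ind) for ind in pop])
best = pop[fit.argmax()]
print("selected features:", np.flatnonzero(best), "CV accuracy: %.3f" % fit.max())
```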

Dissertations on the topic "FEATURE SELECTION TECHNIQUE"

1

Tan, Feng. "Improving Feature Selection Techniques for Machine Learning." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/cs_diss/27.

Abstract:
As a commonly used technique in data preprocessing for machine learning, feature selection identifies important features and removes irrelevant, redundant or noisy features to reduce the dimensionality of the feature space. It improves the efficiency, accuracy and comprehensibility of the models built by learning algorithms. Feature selection techniques have been widely employed in a variety of applications, such as genomic analysis, information retrieval, and text categorization. Researchers have introduced many feature selection algorithms with different selection criteria. However, it has been discovered that no single criterion is best for all applications. We propose a hybrid feature selection framework based on genetic algorithms (GAs) that employs a target learning algorithm to evaluate features (a wrapper method); we call it the hybrid genetic feature selection (HGFS) framework. The advantages of this approach include the ability to accommodate multiple feature selection criteria and to find small subsets of features that perform well for the target algorithm. Experiments on genomic data demonstrate that ours is a robust and effective approach that can find subsets of features with higher classification accuracy and/or smaller size compared to each individual feature selection algorithm. A common characteristic of text categorization tasks is multi-label classification with a great number of features, which makes wrapper methods time-consuming and impractical. We therefore propose a simple filter (non-wrapper) approach called the Relation Strength and Frequency Variance (RSFV) measure. The basic idea is that informative features are those that are highly correlated with the class and distributed most differently among all classes. The approach is compared with two well-known feature selection methods in experiments on two standard text corpora. The experiments show that RSFV generates equal or better performance than the others in many cases.
2

Loscalzo, Steven. "Group based techniques for stable feature selection." Diss., Online access via UMI:, 2009.

3

Vege, Sri Harsha. "Ensemble of Feature Selection Techniques for High Dimensional Data." TopSCHOLAR®, 2012. http://digitalcommons.wku.edu/theses/1164.

Abstract:
Data mining involves the use of data analysis tools to discover previously unknown, valid patterns and relationships from large amounts of data stored in databases, data warehouses, or other information repositories. Feature selection is an important preprocessing step of data mining that helps increase the predictive performance of a model. The main aim of feature selection is to choose a subset of features with high predictive information and eliminate irrelevant features with little or no predictive information. Using a single feature selection technique may generate local optima. In this thesis we propose an ensemble approach for feature selection, where multiple feature selection techniques are combined to yield more robust and stable results. The ensemble of multiple feature ranking techniques is performed in two steps. The first step involves creating a set of different feature selectors, each providing its sorted order of features, while the second step aggregates the results of all feature ranking techniques. The ensemble method used in our study is frequency count, which is accompanied by the mean to resolve any frequency-count collisions. Experiments conducted in this work are performed on datasets collected from the Kent Ridge bio-medical data repository. The Lung Cancer dataset and the Lymphoma dataset are selected from the repository to perform experiments. The Lung Cancer dataset consists of 57 attributes and 32 instances, and the Lymphoma dataset consists of 4027 attributes and 96 instances. Experiments are performed on the reduced datasets obtained from feature ranking. These datasets are used to build the classification models. Model performance is evaluated in terms of the AUC (Area Under the Receiver Operating Characteristic Curve) performance metric. ANOVA tests are also performed on the AUC performance metric. Experimental results suggest that an ensemble of multiple feature selection techniques is more effective than an individual feature selection technique.
4

Gustafsson, Robin. "Ordering Classifier Chains using filter model feature selection techniques." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14817.

Abstract:
Context: Multi-label classification concerns classification with multi-dimensional output. The Classifier Chain breaks the multi-label problem into multiple binary classification problems, chaining the classifiers to exploit dependencies between labels. Consequently, its performance is influenced by the chain's order. Approaches to finding advantageous chain orders have been proposed, though they are typically costly. Objectives: This study explored the use of filter model feature selection techniques to order Classifier Chains. It examined how feature selection techniques can be adapted to evaluate label dependence, how such information can be used to select a chain order and how this affects the classifier's performance and execution time. Methods: An experiment was performed to evaluate the proposed approach. The two proposed algorithms, Forward-Oriented Chain Selection (FOCS) and Backward-Oriented Chain Selection (BOCS), were tested with three different feature evaluators. 10-fold cross-validation was performed on ten benchmark datasets. Performance was measured in accuracy, 0/1 subset accuracy and Hamming loss. Execution time was measured during chain selection, classifier training and testing. Results: Both proposed algorithms led to improved accuracy and 0/1 subset accuracy (Friedman & Hochberg, p < 0.05). FOCS also improved the Hamming loss while BOCS did not. Measured effect sizes ranged from 0.20 to 1.85 percentage points. Execution time was increased by less than 3 % in most cases. Conclusions: The results showed that the proposed approach can improve the Classifier Chain's performance at a low cost. The improvements appear similar to comparable techniques in magnitude but at a lower cost. It shows that feature selection techniques can be applied to chain ordering, demonstrates the viability of the approach and establishes FOCS and BOCS as alternatives worthy of further consideration.
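As a simple stand-in for the idea of ordering a Classifier Chain from label-dependence information, the sketch below derives an order from pairwise mutual information between label columns and passes it to scikit-learn's ClassifierChain. FOCS and BOCS themselves use filter-model feature evaluators and a more careful ordering procedure, so this is only an approximation under assumed data and parameters.

```python
# Order a classifier chain by how strongly each label depends on the others
# (average pairwise mutual information), then train scikit-learn's
# ClassifierChain with that order. A rough stand-in for FOCS/BOCS.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import mutual_info_score
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain

X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                      n_classes=5, n_labels=2, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

n_labels = Y.shape[1]
dep = [sum(mutual_info_score(Y_tr[:, i], Y_tr[:, j])
           for j in range(n_labels) if j != i) for i in range(n_labels)]
order = list(np.argsort(dep)[::-1])           # most dependent labels first

chain = ClassifierChain(LogisticRegression(max_iter=1000), order=order,
                        random_state=0)
chain.fit(X_tr, Y_tr)
print("chain order:", order,
      "subset accuracy: %.3f" % chain.score(X_te, Y_te))
```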
5

Zhang, Fu. "Intelligent feature selection for neural regression : techniques and applications." Thesis, University of Warwick, 2012. http://wrap.warwick.ac.uk/49639/.

Abstract:
Feature Selection (FS) and regression are two important technique categories in Data Mining (DM). In general, DM refers to the analysis of observational datasets to extract useful information and to summarise the data so that it can be more understandable and be used more efficiently in terms of storage and processing. FS is the technique of selecting a subset of features that are relevant to the development of learning models. Regression is the process of modelling and identifying the possible relationships between groups of features (variables). Compared with conventional techniques, Intelligent System Techniques (ISTs) are usually favourable due to their flexible capabilities for handling real-life problems and their tolerance of data imprecision, uncertainty, partial truth, etc. This thesis introduces a novel hybrid intelligent technique, namely Sensitive Genetic Neural Optimisation (SGNO), which is capable of reducing the dimensionality of a dataset by identifying the most important group of features. The capability of SGNO is evaluated with four practical applications in three research areas, including plant science, civil engineering and economics. SGNO is constructed using three key techniques, known as the core modules: Genetic Algorithm (GA), Neural Network (NN) and Sensitivity Analysis (SA). The GA module controls the progress of the algorithm and employs the NN module as its fitness function. The SA module quantifies the importance of each available variable using the results generated in the GA module. The global sensitivity scores of the variables are used to determine the importance of the variables; variables with higher sensitivity scores are considered more important than those with lower scores. After determining the variables' importance, the performance of SGNO is evaluated using the NN module, which takes various numbers of variables with the highest global sensitivity scores as inputs. In addition, the symbolic relationship between a group of variables with the highest global sensitivity scores and the model output is discovered using Multiple-Branch Encoded Genetic Programming (MBE-GP). A total of four datasets have been used to evaluate the performance of SGNO. These datasets involve the prediction of short-term greenhouse tomato yield, prediction of longitudinal dispersion coefficients in natural rivers, prediction of wave overtopping at coastal structures, and the modelling of the relationship between the growth of industrial inputs and the growth of the gross industrial output. SGNO was applied to all these datasets to explore its effectiveness in reducing the dimensionality of the datasets. The performance of SGNO is benchmarked against four dimensionality reduction techniques: Backward Feature Selection (BFS), Forward Feature Selection (FFS), Principal Component Analysis (PCA) and the Genetic Neural Mathematical Method (GNMM). The applications of SGNO to these datasets showed that SGNO is capable of identifying the most important feature groups in the datasets effectively, and that the general performance of SGNO is better than that of the benchmark techniques. Furthermore, the symbolic relationships discovered using MBE-GP achieve performance competitive with NN models in terms of regression accuracy.
6

Muteba, Ben Ilunga. "Data Science techniques for predicting plant genes involved in secondary metabolites production." University of the Western Cape, 2018. http://hdl.handle.net/11394/7039.

Abstract:
Masters of Science
Plant genome analysis is currently experiencing a boost due to reduced costs associated with the development of next generation sequencing technologies. Knowledge of genetic background can be applied to guide targeted plant selection and breeding, and to facilitate natural product discovery and biological engineering. In medicinal plants, secondary metabolites are of particular interest because they often represent the main active ingredients associated with health-promoting qualities. Plant polyphenols are a highly diverse family of aromatic secondary metabolites that act as antimicrobial agents, UV protectants, and insect or herbivore repellents. Most genome mining tools developed to understand genetic material have seldom addressed secondary metabolite genes and biosynthesis pathways, and little significant research has been conducted to study key enzyme factors that can predict a class of secondary metabolite genes from polyketide synthases. The objectives of this study were twofold. Primarily, it aimed to identify the biological properties of secondary metabolite genes and to select a specific gene, naringenin-chalcone synthase or chalcone synthase (CHS); the study hypothesized that data science approaches to mining biological data, particularly secondary metabolite genes, would reveal some aspects of secondary metabolites (SM). Secondarily, the aim was to propose a proof of concept for classifying or predicting plant genes involved in polyphenol biosynthesis using data science techniques, conveying these techniques in computational analysis through machine learning algorithms and mathematical and statistical approaches. Three specific challenges experienced while analysing secondary metabolite datasets were: 1) class imbalance, which refers to the lack of proportionality among protein sequence classes; 2) high dimensionality, which refers to the very large feature space that arises when analysing bioinformatics datasets; and 3) variable protein sequence lengths. Considering these inherent issues, developing precise classification and statistical models proves a challenge. Therefore, the prerequisite for effective SM plant gene mining is dedicated data science techniques that can collect, prepare and analyse SM genes.
7

Strand, Lars Helge. "Feature selection in Medline using text and data mining techniques." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9249.

Abstract:

In this thesis we propose a new method for searching for gene products and giving annotations that associate genes with Gene Ontology (GO) codes. Many solutions already exist, using different techniques, but few are capable of addressing the whole GO hierarchy. We propose a method for exploring this hierarchy by dividing it into subtrees and trying to find terms that are characteristic of the subtrees involved, using feature selection based on chi-square analysis and naive Bayes classification to find the correct GO nodes.
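A toy sketch of the chi-square plus naive Bayes pipeline is shown below; the inline corpus and GO-style labels are invented stand-ins for the Medline abstracts and subtree annotations.

```python
# Toy sketch of chi-square feature selection followed by naive Bayes text
# classification. The tiny inline corpus and labels are invented stand-ins
# for Medline abstracts annotated with GO subtrees.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["kinase activity regulates phosphorylation of the substrate",
        "membrane transporter mediates ion transport across the membrane",
        "phosphorylation of kinase targets in the signalling cascade",
        "ion channel and transporter proteins in the plasma membrane"]
labels = ["molecular_function", "cellular_component",
          "molecular_function", "cellular_component"]

model = make_pipeline(CountVectorizer(),
                      SelectKBest(chi2, k=10),   # keep the 10 most discriminative terms
                      MultinomialNB())
model.fit(docs, labels)
print(model.predict(["transport of ions across the membrane"]))
```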

8

Ni, Weizeng. "A Review and Comparative Study on Univariate Feature Selection Techniques." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1353156184.

9

Dang, Vinh Q. "Evolutionary approaches for feature selection in biological data." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2014. https://ro.ecu.edu.au/theses/1276.

Abstract:
Data mining techniques have been used widely in many areas such as business, science, engineering and medicine. The techniques allow a vast amount of data to be explored in order to extract useful information from the data. One of the foci in the health area is finding interesting biomarkers from biomedical data. Mass-throughput data generated from microarrays and mass spectrometry of biological samples is high dimensional and small in sample size. Examples include DNA microarray datasets with up to 500,000 genes and mass spectrometry data with 300,000 m/z values. While the availability of such datasets can aid in the development of techniques/drugs to improve diagnosis and treatment of diseases, a major challenge involves their analysis to extract useful and meaningful information. The aims of this project are: 1) to investigate and develop feature selection algorithms that incorporate various evolutionary strategies; 2) to use the developed algorithms to find the "most relevant" biomarkers contained in biological datasets; and 3) to evaluate the goodness of extracted feature subsets for relevance (examined in terms of existing biomedical domain knowledge and classification accuracy obtained using different classifiers). The project aims to generate good predictive models for classifying diseased samples from controls.
10

Miller, Corey Alexander. "Intelligent Feature Selection Techniques for Pattern Classification of Time-Domain Signals." W&M ScholarWorks, 2013. https://scholarworks.wm.edu/etd/1539623620.

Abstract:
Time-domain signals form the basis of analysis for a variety of applications, including those involving variable conditions or physical changes that result in degraded signal quality. Typical approaches to signal analysis fail under these conditions, as these types of changes often lie outside the scope of the domain's basic analytic theory and are too complex for modeling. Sophisticated signal processing techniques are required as a result. In this work, we develop a robust signal analysis technique that is suitable for a wide variety of time-domain signal analysis applications. Statistical pattern classification routines are applied to problems of interest involving a physical change in the domain of the problem that translate into changes in the signal characteristics. The basis of this technique involves a signal transformation known as the Dynamic Wavelet Fingerprint, used to generate a feature space in addition to features related to the physical domain of the individual application. Feature selection techniques are explored that incorporate the context of the problem into the feature space reduction in an attempt to identify optimal representations of these data sets.

Books on the topic "FEATURE SELECTION TECHNIQUE"

1

K, Kokula Krishna Hari, and K. Saravanan, eds. Exploratory Analysis of Feature Selection Techniques in Medical Image Processing. Tiruppur, Tamil Nadu, India: Association of Scientists, Developers and Faculties, 2016.

2

Raza, Muhammad Summair, and Usman Qamar. Understanding and Using Rough Set Based Feature Selection: Concepts, Techniques and Applications. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-4965-1.

3

Raza, Muhammad Summair, and Usman Qamar. Understanding and Using Rough Set Based Feature Selection: Concepts, Techniques and Applications. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9166-9.

4

Raza, Muhammad Summair, and Usman Qamar. Understanding and Using Rough Set Based Feature Selection: Concepts, Techniques and Applications. Springer, 2017.

5

Raza, Muhammad Summair, and Usman Qamar. Understanding and Using Rough Set Based Feature Selection: Concepts, Techniques and Applications. Springer, 2019.

6

Raza, Muhammad Summair, and Usman Qamar. Understanding and Using Rough Set Based Feature Selection: Concepts, Techniques and Applications. Springer Singapore Pte. Limited, 2020.

7

Raza, Muhammad Summair, and Usman Qamar. Understanding and Using Rough Set Based Feature Selection: Concepts, Techniques and Applications. Springer Singapore Pte. Limited, 2018.

8

Grant, Stuart A., and David B. Auyong. Basic Principles of Ultrasound Guided Nerve Block. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780190231804.003.0001.

Abstract:
This chapter provides a clinical description of ultrasound physics tailored to provide the practitioner a solid background for optimal imaging and needle guidance technique during regional anesthesia. Important ultrasound characteristics are covered, including optimization of ultrasound images, transducer selection, and features found on most point-of-care systems. In-plane and out-of-plane needle guidance techniques and a three-step process for visualizing in-plane needle insertions are presented. Next, common artifacts and errors including attenuation, dropout, and intraneural injection are covered, along with clinical solutions to overcome these inaccuracies. Preparation details are reviewed to make the regional anesthesia procedures as reproducible and safe as possible. Also included are a practical review of peripheral nerve block catheter placement principles, an appendix listing what blocks may be used for what surgeries, and seven Keys to Ultrasound Success that can make ultrasound guided regional anesthesia understandable and clinically feasible for all practitioners.
9

Thrumurthy, Sri G., Tania S. De Silva, Zia M. Moinuddin, and Stuart Enoch. EMQs for the MRCS Part A. Oxford University Press, 2013. http://dx.doi.org/10.1093/oso/9780199645640.001.0001.

Abstract:
Specifically designed to help candidates revise for the MRCS exam, this book features 250 extended matching questions divided into 96 themes, covering the whole syllabus. Containing everything candidates need to pass the MRCS Part A EMQ section of the exam, the book focuses intensively on topics relating to principles of surgery-in-general, including peri-operative care, post-operative management and critical care, surgical technique and technology, management and legal issues in surgery, clinical microbiology, emergency medicine and trauma management, and principles of surgical oncology. The high level of detail included within the questions and their explanations allows effective self-assessment of knowledge and quick identification of key areas requiring further attention. Varying approaches to extended matching questions are used, giving effective exam practice and guidance through revision and exam technique. This includes clinical case questions, positively-worded questions, requiring selection of the most appropriate of relatively correct answers; 'two-step' or 'double-jump' questions, requiring several cognitive steps to arrive at the correct answer; as well as factual recall questions, prompting basic recall of facts.
10

Thrumurthy, Sri G., Tania Samantha De Silva, Zia Moinuddin, and Stuart Enoch. SBA MCQs for the MRCS Part A. Oxford University Press, 2012. http://dx.doi.org/10.1093/oso/9780199645633.001.0001.

Abstract:
Specifically designed to help candidates revise for the MRCS exam, this book features 350 Single Best Answer multiple choice questions, covering the whole syllabus. Containing everything candidates need to pass the MRCS Part A SBA section of the exam, it focuses intensively on the application of basic sciences (applied surgical anatomy, physiology, and pathology) to the management of surgical patients. The high level of detail included within the questions and their explanations allows effective self-assessment of knowledge and quick identification of key areas requiring further attention. Varying approaches to Single Best Answer multiple choice questions are used, giving effective exam practice and guidance through revision and exam technique. This includes clinical case questions, 'positively-worded' questions, requiring selection of the most appropriate of relatively correct answers; 'two-step' or 'double-jump' questions, requiring several cognitive steps to arrive at the correct answer; as well as 'factual recall' questions, prompting basic recall of facts.

Book chapters on the topic "FEATURE SELECTION TECHNIQUE"

1

Singh, Upendra, and Sudhakar Tripathi. "Protein Classification Using Hybrid Feature Selection Technique." In Communications in Computer and Information Science, 813–21. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-3433-6_97.

2

Naveen, Nekuri, and Mandala Sookshma. "Adaptive Feature Selection and Classification Using Optimization Technique." In Frontiers in Intelligent Computing: Theory and Applications, 146–55. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9186-7_17.

3

Guru, D. S., Mostafa Ali, and Mahamad Suhil. "A Novel Feature Selection Technique for Text Classification." In Advances in Intelligent Systems and Computing, 721–33. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1498-8_63.

4

Nagaraj, Naik, B. M. Vikranth, and N. Yogesh. "Recursive Feature Elimination Technique for Technical Indicators Selection." In Communications in Computer and Information Science, 139–45. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-08277-1_12.

5

Zheng, Hai-Tao, and Haiyang Zhang. "Online Streaming Feature Selection Using Sampling Technique and Correlations Between Features." In Web Technologies and Applications, 43–55. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-45817-5_4.

6

Christy, A., and G. Meera Gandhi. "Feature Selection and Clustering of Documents Using Random Feature Set Generation Technique." In Advances in Data Science and Management, 67–79. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0978-0_6.

7

Lee, Kee-Cheol. "A Technique of Dynamic Feature Selection Using the Feature Group Mutual Information." In Methodologies for Knowledge Discovery and Data Mining, 138–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48912-6_19.

8

LeKhac, NhienAn, Bo Wu, ChongCheng Chen, and M.-Tahar Kechadi. "Feature Selection Parallel Technique for Remotely Sensed Imagery Classification." In Lecture Notes in Computer Science, 623–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39643-4_45.

9

Seeja, K. R. "A Novel Feature Selection Technique for SAGE Data Classification." In Communications in Computer and Information Science, 49–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39678-6_9.

10

Alharbi, Abdullah Semran, Yuefeng Li, and Yue Xu. "Integrating LDA with Clustering Technique for Relevance Feature Selection." In AI 2017: Advances in Artificial Intelligence, 274–86. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63004-5_22.


Conference papers on the topic "FEATURE SELECTION TECHNIQUE"

1

Battisti, Felipe de Melo, and Tiago Buarque Assunção de Carvalho. "Threshold Feature Selection PCA." In Symposium on Knowledge Discovery, Mining and Learning. Sociedade Brasileira de Computação - SBC, 2022. http://dx.doi.org/10.5753/kdmile.2022.227718.

Abstract:
Classification algorithms encounter learning difficulties when data has non-discriminant features. Dimensionality reduction techniques such as PCA are commonly applied. However, PCA has the disadvantage of being an unsupervised method, ignoring relevant class information on data. Therefore, this paper proposes the Threshold Feature Selector (TFS), a new supervised dimensionality reduction method that employs class thresholds to select more relevant features. We also present the Threshold PCA (TPCA), a combination of our supervised technique with standard PCA. During experiments, TFS achieved higher accuracy in 90% of the datasets compared with the original data. The second proposed technique, TPCA, outperformed the standard PCA in accuracy gain in 70% of the datasets.
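The TFS class-threshold rule itself is defined in the paper; the sketch below substitutes an ANOVA F-test filter for the supervised selection step simply to illustrate the "select discriminant features first, then apply PCA" combination. The dataset and parameter choices are assumptions.

```python
# Illustration of combining a supervised filter with PCA (the TPCA idea).
# SelectKBest(f_classif) stands in for the paper's threshold-based selector.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)      # stand-in benchmark dataset

plain_pca = make_pipeline(StandardScaler(), PCA(n_components=5),
                          KNeighborsClassifier())
filtered_pca = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=15),
                             PCA(n_components=5), KNeighborsClassifier())

for name, model in [("PCA only", plain_pca), ("filter + PCA", filtered_pca)]:
    print(name, "CV accuracy: %.3f" % cross_val_score(model, X, y, cv=5).mean())
```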
2

Bibi, K. Fathima, and M. Nazreen Banu. "Feature subset selection based on Filter technique." In 2015 International Conference on Computing and Communications Technologies (ICCCT). IEEE, 2015. http://dx.doi.org/10.1109/iccct2.2015.7292710.

3

Wiratsin, In-On, and Lalita Narupiyakul. "Feature Selection Technique for Autism Spectrum Disorder." In CCEAI 2021: 5th International Conference on Control Engineering and Artificial Intelligence. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3448218.3448241.

4

Tayal, Devendra K., Neha Srivastava, and Neha. "Feature Selection using Enhanced Nature Optimization Technique." In 2023 International Conference on Advances in Intelligent Computing and Applications (AICAPS). IEEE, 2023. http://dx.doi.org/10.1109/aicaps57044.2023.10074104.

5

S, Abdul Razak M., Nirmala C. R, Chetan B. B, Mohammed Rafi, and Sreenivasa B. R. "Online feature Selection using Pearson Correlation Technique." In 2022 IEEE 7th International Conference on Recent Advances and Innovations in Engineering (ICRAIE). IEEE, 2022. http://dx.doi.org/10.1109/icraie56454.2022.10054267.

6

López Jaimes, Antonio, Carlos A. Coello Coello, and Debrup Chakraborty. "Objective reduction using a feature selection technique." In the 10th annual conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1389095.1389228.

7

Wang, Yong, Adam J. Brzezinski, Xianli Qiao, and Jun Ni. "Heuristic Feature Selection for Shaving Tool Wear Classification." In ASME 2016 11th International Manufacturing Science and Engineering Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/msec2016-8547.

Abstract:
In this paper, we develop and apply feature extraction and selection techniques to classify tool wear in the shaving process. Because shaving tool condition monitoring is not well-studied, we extract both traditional and novel features from accelerometer signals collected from the shaving machine. We then apply a heuristic feature selection technique to identify key features and classify the tool condition. Run-to-life data from a shop-floor application is used to validate the proposed technique.
8

Meng Wang, Shudong Sun, Ganggang Niu, Yuanzhi Tu, and Shihui Guo. "A feature selection technique based on equivalent relation." In 2011 2nd International Conference on Artificial Intelligence, Management Science and Electronic Commerce (AIMSEC). IEEE, 2011. http://dx.doi.org/10.1109/aimsec.2011.6010707.

9

Liogiene, Tatjana, and Gintautas Tamulevicius. "SFS feature selection technique for multistage emotion recognition." In 2015 IEEE 3rd Workshop on Advances in Information, Electronic and Electrical Engineering (AIEEE). IEEE, 2015. http://dx.doi.org/10.1109/aieee.2015.7367299.

10

Mary, I. Thusnavis Bella, A. Vasuki, and M. A. P. Manimekalai. "An optimized feature selection CBIR technique using ANN." In 2017 International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT). IEEE, 2017. http://dx.doi.org/10.1109/iceeccot.2017.8284550.


Organizational reports on the topic "FEATURE SELECTION TECHNIQUE"

1

Zhao, George, Grang Mei, Bulent Ayhan, Chiman Kwan, and Venu Varma. DTRS57-04-C-10053 Wave Electromagnetic Acoustic Transducer for ILI of Pipelines. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), March 2005. http://dx.doi.org/10.55274/r0012049.

Abstract:
In this project, Intelligent Automation, Incorporated (IAI) and Oak Ridge National Lab (ORNL) propose a novel and integrated approach to inspect mechanical dents and metal loss in pipelines. It combines the state-of-the-art SH-wave Electromagnetic Acoustic Transducer (EMAT) technique with detailed numerical modeling, data collection instrumentation, and advanced signal processing and pattern classification to detect and characterize mechanical defects in underground pipeline transportation infrastructure. The technique has four components: (1) thorough guided wave modal analysis; (2) a recently developed three-dimensional (3-D) Boundary Element Method (BEM) for best operational condition selection and defect feature extraction; (3) ultrasonic Shear Horizontal (SH) wave EMAT sensor design and data collection; and (4) advanced signal processing algorithms such as a nonlinear split-spectrum filter, Principal Component Analysis (PCA) and Discriminant Analysis (DA) for signal-to-noise-ratio enhancement, crack signature extraction, and pattern classification. This technology not only addresses the problems of existing methods by detecting mechanical dents and metal loss in pipelines consistently and reliably, but is also able to determine the defect shape and size to a certain extent.
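The PCA and discriminant-analysis back-end mentioned above can be sketched as follows; synthetic feature vectors stand in for the processed EMAT/SH-wave signals, and the split-spectrum filtering stage is not reproduced.

```python
# Sketch of a PCA + linear discriminant analysis classification back-end.
# Synthetic feature vectors replace the processed EMAT signal features.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=60, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),      # noise reduction / compression
                      LinearDiscriminantAnalysis())
print("CV accuracy: %.3f" % cross_val_score(model, X, y, cv=5).mean())
```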
2

Searcy, Stephen W., and Kalman Peleg. Adaptive Sorting of Fresh Produce. United States Department of Agriculture, August 1993. http://dx.doi.org/10.32747/1993.7568747.bard.

Abstract:
This project includes two main parts: Development of a “Selective Wavelength Imaging Sensor” and an “Adaptive Classifiery System” for adaptive imaging and sorting of agricultural products respectively. Three different technologies were investigated for building a selectable wavelength imaging sensor: diffraction gratings, tunable filters and linear variable filters. Each technology was analyzed and evaluated as the basis for implementing the adaptive sensor. Acousto optic tunable filters were found to be most suitable for the selective wavelength imaging sensor. Consequently, a selectable wavelength imaging sensor was constructed and tested using the selected technology. The sensor was tested and algorithms for multispectral image acquisition were developed. A high speed inspection system for fresh-market carrots was built and tested. It was shown that a combination of efficient parallel processing of a DSP and a PC based host CPU in conjunction with a hierarchical classification system, yielded an inspection system capable of handling 2 carrots per second with a classification accuracy of more than 90%. The adaptive sorting technique was extensively investigated and conclusively demonstrated to reduce misclassification rates in comparison to conventional non-adaptive sorting. The adaptive classifier algorithm was modeled and reduced to a series of modules that can be added to any existing produce sorting machine. A simulation of the entire process was created in Matlab using a graphical user interface technique to promote the accessibility of the difficult theoretical subjects. Typical Grade classifiers based on k-Nearest Neighbor techniques and linear discriminants were implemented. The sample histogram, estimating the cumulative distribution function (CDF), was chosen as a characterizing feature of prototype populations, whereby the Kolmogorov-Smirnov statistic was employed as a population classifier. Simulations were run on artificial data with two-dimensions, four populations and three classes. A quantitative analysis of the adaptive classifier's dependence on population separation, training set size, and stack length determined optimal values for the different parameters involved. The technique was also applied to a real produce sorting problem, e.g. an automatic machine for sorting dates by machine vision in an Israeli date packinghouse. Extensive simulations were run on actual sorting data of dates collected over a 4 month period. In all cases, the results showed a clear reduction in classification error by using the adaptive technique versus non-adaptive sorting.
3

Tayeb, Shahab. Taming the Data in the Internet of Vehicles. Mineta Transportation Institute, January 2022. http://dx.doi.org/10.31979/mti.2022.2014.

Abstract:
As an emerging field, the Internet of Vehicles (IoV) has a myriad of security vulnerabilities that must be addressed to protect system integrity. To stay ahead of novel attacks, cybersecurity professionals are developing new software and systems using machine learning techniques. Neural network architectures improve such systems, including Intrusion Detection Systems (IDSs), by implementing anomaly detection, which differentiates benign data packets from malicious ones. For an IDS to best predict anomalies, the model is trained on data that is typically pre-processed through normalization and feature selection/reduction. These pre-processing techniques play an important role in training a neural network to optimize its performance. This research studies the impact of applying normalization techniques as a pre-processing step to learning, as used by IDSs. This report proposes a Deep Neural Network (DNN) model with two hidden layers for the IDS architecture and compares two commonly used normalization pre-processing techniques. Our findings are evaluated using accuracy, Area Under the Curve (AUC), Receiver Operating Characteristic (ROC), F-1 score, and loss. The experiments demonstrate that Z-score normalization outperforms both no normalization and Min-Max normalization.
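A small comparison of the two normalization schemes can be run as follows; scikit-learn's MLPClassifier stands in for the report's two-hidden-layer DNN, and the synthetic data and layer sizes are assumptions.

```python
# Compare no normalization, Min-Max and Z-score scaling in front of a small
# two-hidden-layer MLP (a stand-in for the report's DNN), scored by ROC AUC.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = make_classification(n_samples=800, n_features=30, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)  # traffic-like stand-in
dnn = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1500, random_state=0)

for name, scaler in [("no normalization", None),
                     ("Min-Max", MinMaxScaler()),
                     ("Z-score", StandardScaler())]:
    model = dnn if scaler is None else make_pipeline(scaler, dnn)
    auc = cross_val_score(model, X, y, cv=3, scoring="roc_auc").mean()
    print(f"{name}: ROC AUC {auc:.3f}")
```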
4

Robert Nourgaliev, Nam Dinh, and Robert Youngblood. Development, Selection, Implementation and Testing of Architectural Features and Solution Techniques for Next Generation of System Simulation Codes to Support the Safety Case if the LWR Life Extension. Office of Scientific and Technical Information (OSTI), December 2010. http://dx.doi.org/10.2172/1004227.

5

Lylo, Taras. Російсько-українська війна в інтерпретаціях іранського видання «The Tehran Times»: основні ідеологеми та маніпулятивні прийоми [The Russian-Ukrainian war in the interpretations of the Iranian publication The Tehran Times: main ideologemes and manipulative techniques]. Ivan Franko National University of Lviv, March 2023. http://dx.doi.org/10.30970/vjo.2023.52-53.11730.

Abstract:
The article analyzes the main ideologemes in the Iranian English-language newspaper The Tehran Times about the Russian-Ukrainian war. Particular attention is paid to such ideologemes as “NATO-created Ukraine war”, “Western racism”, “an average European is a victim of the US policy”. The author claims that the newspaper is a repeater of anti-Ukrainian ideologemes by the Russian propaganda, including such as “coup d’état in Ukraine”, “denazification”, “special military operation”, “conflict in Ukraine”, “genocide in Donbas”, but retranslates them in a specific way: the journalists of The Tehran Times do not often use such ideologemes, but mainly ensure their functioning in the newspaper due to the biased selection of external authors (mainly from the USA), who are carriers of the cognitive curvature. The object of the research is also the manipulative techniques of the newspaper (the appeal to “common sense”, simplification of a complex problem, etc.). Methods of modeling the image of the enemy are also studied (first of all, such an enemy for the Tehran Times is the USA), among which categoricalness occupies a special place (all features of the opponent are interpreted not only at its own discretion, but indisputably; such and only such perception of the opponent is “the ultimate truth”), stereotypes (stereotypes replace the true knowledge), demonization (the opponent is portrayed as the embodiment of absolute, metaphysical evil) and asynchrony (an astronomer’s view, who sees a star as if it was the same all eternity to this point. The dynamics of history is ignored by propagandist). Keywords: ideologeme, manipulative techniques, Russia, racism, propaganda.
6

Riccardella, Scott. PR-335-143705-R01 Study on Reliability of In-ditch NDE for SCC Anomalies. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), October 2018. http://dx.doi.org/10.55274/r0011529.

Abstract:
Pipeline operators are increasingly finding pipeline degradation in the form of crack-like defects associated with stress corrosion cracking and are challenged in selecting and employing nondestructive examination techniques that can reliably determine the maximum depth and axial depth profile of these anomalies. This information is essential in determining whether the line is fit for service and for how long; which areas require repair; what repair methods may be deemed acceptable; and whether in-line inspection was successful in detecting and prioritizing anomalies. This work further investigates and quantifies the capabilities and limitations of different nondestructive examination methodologies that are typically used and/or identified as feasible for the characterization of crack-like features. The objective of this project was to develop and implement a process to evaluate the applicability, accuracy, and sensitivity of different nondestructive examination methodologies in sizing stress corrosion cracking anomalies. This report outlines the nondestructive examination methodologies selected, the test processes and protocols used, the crack truth verification processes used, and the analysis of the final results.