Journal articles on the topic 'Feature-set-difference'




Consult the top 50 journal articles for your research on the topic 'Feature-set-difference.'




1

Susan, Seba, and Madasu Hanmandlu. "Difference theoretic feature set for scale-, illumination- and rotation-invariant texture classification." IET Image Processing 7, no. 8 (November 1, 2013): 725–32. http://dx.doi.org/10.1049/iet-ipr.2012.0527.

2

Bharti, Puja, Deepti Mittal, and Rupa Ananthasivan. "Characterization of chronic liver disease based on ultrasound images using the variants of grey-level difference matrix." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 232, no. 9 (September 2018): 884–900. http://dx.doi.org/10.1177/0954411918796531.

Abstract:
Chronic liver diseases are the fifth leading cause of death in developing countries. Early diagnosis is important for timely treatment and for saving lives. Ultrasound imaging is frequently used to examine abnormalities of the liver; however, visual interpretation of liver stages on ultrasound images is ambiguous. This difficult visualization problem can be addressed by analysing textural features extracted from the images. The grey-level difference matrix, a texture feature extraction method, can provide information about the roughness of the liver surface, the sharpness of the liver borders and the echotexture of the liver parenchyma. In this article, the behaviour of variants of the grey-level difference matrix in characterizing liver stages is investigated. Texture feature sets are extracted using variants of the grey-level difference matrix based on two, three, five and seven neighbouring pixels. To take advantage of the complementary information in the extracted feature sets, feature fusion schemes are then implemented. In addition, hybrid feature selection (a combination of the ReliefF filter method and the sequential forward selection wrapper method) is used to obtain an optimal feature set for characterizing liver stages. Finally, a computer-aided system is designed with the optimal feature set to classify liver health as normal, chronic liver disease, cirrhosis or hepatocellular carcinoma evolved over cirrhosis. Experiments are performed to (1) identify the best approximation of the derivative (forward, central or backward); (2) analyse the performance of the individual feature sets of the grey-level difference matrix variants; (3) obtain an optimal feature set by exploiting the complementary information from the variants and (4) compare the proposed method with existing feature extraction methods. These experiments are carried out on a database of 754 segmented regions of interest formed from clinically acquired ultrasound images. The results show that a classification accuracy of 94.5% is obtained by the optimal feature set containing complementary information from the grey-level difference matrix variants.
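As a rough illustration of the grey-level difference matrix used above, the following sketch (assuming a single displacement between two neighbouring pixels; the paper's three-, five- and seven-pixel variants and its fusion pipeline are not reproduced) computes the difference histogram and a few standard texture features from it:

```python
import numpy as np

def gldm(image, dx=1, dy=0, levels=256):
    """Normalized histogram of absolute grey-level differences between each
    pixel and its neighbour at displacement (dy, dx). Assumes grey levels
    fit in `levels` bins."""
    img = np.asarray(image, dtype=np.int32)
    h, w = img.shape
    # region where both the pixel and its (dy, dx) neighbour exist
    y0, y1 = max(0, -dy), min(h, h - dy)
    x0, x1 = max(0, -dx), min(w, w - dx)
    diff = np.abs(img[y0:y1, x0:x1] - img[y0 + dy:y1 + dy, x0 + dx:x1 + dx])
    hist = np.bincount(diff.ravel(), minlength=levels).astype(float)
    return hist / hist.sum()

def gldm_features(p):
    """A few common statistics of the difference histogram."""
    i = np.arange(p.size)
    nz = p[p > 0]
    return {"mean": (i * p).sum(),
            "contrast": (i ** 2 * p).sum(),
            "entropy": -(nz * np.log2(nz)).sum()}
```

A constant-gradient image, for instance, concentrates all horizontal differences in a single bin, giving a mean difference equal to that gradient.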
3

Sun, C., D. Guo, H. Gao, L. Zou, and H. Wang. "A method of version merging for computer-aided design files based on feature extraction." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 225, no. 2 (June 20, 2010): 463–71. http://dx.doi.org/10.1243/09544062jmes2159.

Abstract:
In order to manage version files and maintain the latest version of computer-aided design (CAD) files in asynchronous collaborative systems, a version-merging method for CAD files based on feature extraction is proposed. First, feature information is extracted from the feature attributes of the CAD files and stored in an XML feature file. The feature file is then analysed, and the feature difference set is obtained by the given algorithm. Finally, the difference set is merged into the master file through application programming interface (API) functions, realizing version merging of the CAD files. An application in CATIA validated that the proposed method is feasible and valuable in engineering.
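The feature difference set at the heart of such a method can be illustrated with a minimal sketch (hypothetical feature dictionaries stand in for the XML feature files; the paper's actual attribute schema and the CATIA API calls are not shown):

```python
def feature_diff(master, revision):
    """Difference set between two versions, each a dict mapping a feature
    name to its attribute dict."""
    added = {k: revision[k] for k in revision.keys() - master.keys()}
    removed = {k: master[k] for k in master.keys() - revision.keys()}
    modified = {k: revision[k] for k in master.keys() & revision.keys()
                if master[k] != revision[k]}
    return {"added": added, "removed": removed, "modified": modified}

def merge(master, diff):
    """Apply a difference set to the master feature dictionary."""
    merged = {k: v for k, v in master.items() if k not in diff["removed"]}
    merged.update(diff["added"])
    merged.update(diff["modified"])
    return merged
```

Applying the difference set to the master reproduces the revised version, which is the essence of the merging step.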
4

Li, Ying Jie, and Mongi A. Abidi. "The Comparative Study between Difference Actions and Full Actions." Applied Mechanics and Materials 373-375 (August 2013): 500–503. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.500.

Abstract:
An appearance-based feature set is proposed. With a Hidden Markov Model (HMM) handling temporal variance, the contributions of features drawn from the full foreground sequence and from the temporal difference sequence are compared in detail using feature-selection and feature-voting methods. The experimental analysis shows that the two data sources achieve comparable contributions for human action identification. This introduces the opportunity to analyse human behaviour from the temporal difference sequence instead of the full foreground sequence, and validates the significance of this work.
5

Antoniuk, Izabella, Jarosław Kurek, Artur Krupa, Grzegorz Wieczorek, Michał Bukowski, Michał Kruk, and Albina Jegorowa. "Advanced Feature Extraction Methods from Images of Drillings in Melamine Faced Chipboard for Automatic Diagnosis of Drill Wear." Sensors 23, no. 3 (January 18, 2023): 1109. http://dx.doi.org/10.3390/s23031109.

Abstract:
In this paper, a novel approach to the evaluation of feature extraction methodologies is presented. For machine learning algorithms, extracting and using the most effective features is one of the key problems that can significantly influence overall performance. This is especially the case with parameter-heavy problems such as tool condition monitoring. In the presented case, images of drilled holes are considered, where the state of the edge and the overall size of imperfections strongly influence product quality. Finding and using a set of features that accurately describes the difference between an acceptable edge and one that is too damaged is not always straightforward. The presented approach focuses on detailed evaluation of various feature extraction approaches. Each chosen method produced a set of features, which was then used to train a selected set of classifiers. Five initial feature sets were obtained, and additional ones were derived from them. Different voting methods were used for the ensemble approaches. In total, 38 versions of the classifiers were created and evaluated. The best accuracy was obtained by the ensemble approach based on the Weighted Voting methodology. A significant difference was shown between feature extraction methods, with a total difference of 11.14% between the worst and best feature sets, and a further 0.2% improvement achieved by using the best voting approach.
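The Weighted Voting ensemble named above can be sketched as follows (the weights here are assumed to come from per-classifier validation accuracy; the paper's exact weighting scheme may differ):

```python
def weighted_vote(predictions, weights):
    """Weighted majority vote over the predictions of several classifiers.
    predictions: one class label per classifier; weights: one non-negative
    weight per classifier (e.g. its validation accuracy)."""
    scores = {}
    for pred, w in zip(predictions, weights):
        scores[pred] = scores.get(pred, 0.0) + w
    return max(scores, key=scores.get)
```

A single well-performing classifier can thus outvote two weaker ones that agree with each other, which is the point of weighting.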
6

Agrahari, Rahul, Matthew Nicholson, Clare Conran, Haytham Assem, and John D. Kelleher. "Assessing Feature Representations for Instance-Based Cross-Domain Anomaly Detection in Cloud Services Univariate Time Series Data." IoT 3, no. 1 (January 29, 2022): 123–44. http://dx.doi.org/10.3390/iot3010008.

Abstract:
In this paper, we compare and assess the efficacy of a number of time-series instance feature representations for anomaly detection. To assess whether there are statistically significant differences between different feature representations for anomaly detection in a time series, we calculate and compare confidence intervals on the average performance of different feature sets across a number of different model types and cross-domain time-series datasets. Our results indicate that the catch22 time-series feature set augmented with features based on rolling mean and variance performs best on average, and that the difference in performance between this feature set and the next best feature set is statistically significant. Furthermore, our analysis of the features used by the most successful model indicates that features related to mean and variance are the most informative for anomaly detection. We also find that features based on model forecast errors are useful for anomaly detection for some but not all datasets.
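The rolling mean and variance features described above can be sketched as follows (a minimal illustration over a trailing window; the paper combines these with the catch22 feature set, which is not reproduced here):

```python
import numpy as np

def rolling_features(x, window=10):
    """Augment a univariate series with trailing-window rolling mean and
    variance, returning one (mean, variance) row per time step."""
    x = np.asarray(x, dtype=float)
    feats = []
    for i in range(len(x)):
        w = x[max(0, i - window + 1):i + 1]  # trailing window, shorter at the start
        feats.append((w.mean(), w.var()))
    return np.array(feats)
```

These per-step summaries can then be fed to any anomaly detection model alongside other instance features.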
7

Yan, Jun, and Yan Piao. "Research on the Harris Algorithm of Feature Extraction for Moving Targets in the Video." Applied Mechanics and Materials 741 (March 2015): 378–81. http://dx.doi.org/10.4028/www.scientific.net/amm.741.378.

Abstract:
This paper studies the rapid acquisition of feature information for moving targets in a video sequence. The Harris algorithm is selected for feature extraction, but it runs slowly. A 3 × 3 window is chosen as the detection window, and the similarity between each pixel in the window and the central pixel is analysed, with their difference used as the similarity measure: when the difference is greater than a set threshold, the pixels are considered different; otherwise they are considered similar. Screening of characteristic regions is then completed: the target areas are divided into feature regions and non-feature regions, and the Harris response function is computed only for the feature regions. The executing efficiency of the algorithm is thereby improved.
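The 3 × 3 difference-based screening step can be sketched as follows (a minimal illustration; `min_diff_count` is an assumed parameter, and the subsequent Harris response computation on the surviving pixels is omitted):

```python
import numpy as np

def screen_feature_pixels(img, thresh=10, min_diff_count=3):
    """Compare the 8 neighbours of each pixel's 3x3 window with the central
    pixel; keep the pixel as a feature candidate only if enough neighbours
    differ from it by more than `thresh`."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    count = np.zeros((h, w), dtype=int)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            count += np.abs(shifted - img) > thresh
    mask = count >= min_diff_count
    # borders wrap around under np.roll, so exclude them
    mask[0, :] = mask[-1, :] = False
    mask[:, 0] = mask[:, -1] = False
    return mask
```

A flat region yields no candidates, so the expensive corner response need only be evaluated on the returned mask.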
8

Li, Chunzhong, and Zongben Xu. "Structure Identification-Based Clustering According to Density Consistency." Mathematical Problems in Engineering 2011 (2011): 1–14. http://dx.doi.org/10.1155/2011/890901.

Abstract:
The structure of a data set is of critical importance in identifying clusters, especially the density-difference feature. In this paper, we present a clustering algorithm based on density consistency, a filtering process that identifies points sharing the same structural feature and classifies them into the same cluster. The method is not restricted by cluster shape or by high-dimensional data, and it is robust to noise and outliers. Extensive experiments on synthetic and real-world data sets validate the proposed clustering algorithm.
9

Jing, Xiao Yuan, Xiang Long Ge, Yong Fang Yao, and Feng Nan Yu. "Feature Extraction Algorithm Based on Sample Set Reconstruction." Applied Mechanics and Materials 347-350 (August 2013): 2241–45. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.2241.

Abstract:
When the number of labeled training samples is very small, the sample information available is very limited and the recognition rates of traditional image recognition methods are unsatisfactory. However, other databases often contain related information that is helpful for feature extraction. It is therefore worthwhile to take full advantage of the data in other databases through transfer learning. In this paper, the idea of transferring samples is employed, and we propose a feature extraction approach based on sample set reconstruction: the training sample set is reconstructed using the difference information among the samples of other databases. Experimental results on three widely used face databases (AR, FERET and CAS-PEAL) demonstrate the efficacy of the proposed approach in classification performance.
10

Ji, Linna, Fengbao Yang, and Xiaoming Guo. "Image Fusion Algorithm Selection Based on Fusion Validity Distribution Combination of Difference Features." Electronics 10, no. 15 (July 21, 2021): 1752. http://dx.doi.org/10.3390/electronics10151752.

Abstract:
To address the problem that existing image fusion models cannot reflect the demands that diverse attributes (e.g., type or amplitude) of difference features place on algorithms, leading to poor or invalid fusion, this paper puts forward the construction and combination of fusion validity distributions of difference features based on intuition-possible sets, in order to select algorithms with better fusion effect for dual-mode infrared images. Firstly, the distances between the amplitudes of the difference features in the fused and source images are calculated. According to the fusion result of each algorithm, the distances can be divided into three levels, which are regarded as intuition-possible sets of the fusion validity of difference features, and a novel construction method for the fusion validity distribution based on intuition-possible sets is proposed. Secondly, in view of the multiple amplitude intervals of each difference feature, this paper proposes a distribution combination method based on intuition-possible set ordering. Difference feature score results are aggregated by a fuzzy operator, and the joint drop shadows of the difference feature score results are obtained. Finally, the experimental results indicate that the proposed method can select the algorithms with relatively better fusion effect on difference features across varied feature amplitudes.
11

Wang, Haikuan, Feixiang Zhou, Wenju Zhou, and Ling Chen. "Human Pose Recognition Based on Depth Image Multifeature Fusion." Complexity 2018 (December 2, 2018): 1–12. http://dx.doi.org/10.1155/2018/6271348.

Abstract:
The recognition of human pose based on machine vision usually results in a low recognition rate, low robustness, and low operating efficiency. That is mainly caused by the complexity of the background, as well as the diversity of human pose, occlusion, and self-occlusion. To solve this problem, a feature extraction method combining directional gradient of depth feature (DGoD) and local difference of depth feature (LDoD) is proposed in this paper, which uses a novel strategy that incorporates eight neighborhood points around a pixel for mutual comparison to calculate the difference between the pixels. A new data set is then established to train the random forest classifier, and a random forest two-way voting mechanism is adopted to classify the pixels on different parts of the human body depth image. Finally, the gravity center of each part is calculated and a reasonable point is selected as the joint to extract human skeleton. The experimental results show that the robustness and accuracy are significantly improved, associated with a competitive operating efficiency by evaluating our approach with the proposed data set.
12

Fang, Wenjing, Hongfen Zhu, Shuai Li, Haoxi Ding, and Rutian Bi. "Rapid Identification of Main Vegetation Types in the Lingkong Mountain Nature Reserve Based on Multi-Temporal Modified Vegetation Indices." Sensors 23, no. 2 (January 6, 2023): 659. http://dx.doi.org/10.3390/s23020659.

Abstract:
Nature reserves are among the most bio-diverse regions worldwide, and rapid and accurate identification is a requisite for their management. Based on the multi-temporal Sentinel-2 dataset, this study presents three multi-temporal modified vegetation indices (the multi-temporal modified normalized difference Quercus wutaishanica index (MTM-NDQI), the multi-temporal modified difference scrub grass index (MTM-DSI), and the multi-temporal modified ratio shaw index (MTM-RSI)) to improve the classification accuracy of the remote sensing of vegetation in the Lingkong Mountain Nature Reserve of China (LMNR). These three indices integrate the advantages of both the typical vegetation indices and the multi-temporal remote sensing data. By using the proposed indices with a uni-temporal modified vegetation index (the uni-temporal modified difference pine-oak mixed forest index (UTM-DMI)) and typical vegetation indices (e.g., the ratio vegetation index (RVI), the difference vegetation index (DVI), and the normalized difference vegetation index (NDVI)), an optimal feature set is obtained that includes the NDVI of December, the NDVI of April, and the UTM-DMI, MTM-NDQI, MTM-DSI, and MTM-RSI. The overall accuracy (OA) of the random forest classification (98.41%) and Kappa coefficient of the optimal feature set (0.98) were higher than those of the time series NDVI (OA = 96.03%, Kappa = 0.95), the time series RVI (OA = 95.56%, Kappa = 0.95), and the time series DVI (OA = 91.27%, Kappa = 0.90). The OAs of the rapid classification and the Kappa coefficient of the knowledge decision tree based on the optimal feature set were 95.56% and 0.95, respectively. Meanwhile, only three of the seven vegetation types were omitted or misclassified slightly. Overall, the proposed vegetation indices have advantages in identifying the vegetation types in protected areas.
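The standard vegetation indices that the proposed multi-temporal indices build on can be computed as follows (only NDVI, DVI and RVI are shown; the modified indices MTM-NDQI, MTM-DSI and MTM-RSI are specific to the paper and not reproduced here):

```python
import numpy as np

EPS = 1e-12  # guard against division by zero over water/shadow pixels

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + EPS)

def dvi(nir, red):
    """Difference vegetation index: NIR - Red."""
    return np.asarray(nir, float) - np.asarray(red, float)

def rvi(nir, red):
    """Ratio vegetation index: NIR / Red."""
    return np.asarray(nir, float) / (np.asarray(red, float) + EPS)
```

For Sentinel-2, NIR and Red would typically be bands B8 and B4; stacking these indices over several acquisition dates gives the kind of multi-temporal feature set the study classifies.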
13

Wang, Shuoqi, Wei Zheng, and Zhaowei Li. "Optimizing Matching Area for Underwater Gravity-Aided Inertial Navigation Based on the Convolution Slop Parameter-Support Vector Machine Combined Method." Remote Sensing 13, no. 19 (October 1, 2021): 3940. http://dx.doi.org/10.3390/rs13193940.

Abstract:
This paper focuses on the selection of matching areas for a gravity-aided inertial navigation system. Firstly, the Sobel operator was used to convolve the gravity anomaly map and obtain a feature map. Convolution slope parameters were constructed by combining the feature map and the gravity anomaly map. Characteristic parameters, such as the difference between convolution rows and columns, the convolution variance of the feature map, the pooling difference, and the range of the gravity anomaly map, were combined. Based on the support vector machine algorithm, the convolution slope parameter-support vector machine combined method is proposed. Secondly, we selected an appropriate training sample set and set the parameters for verification. The results show that, compared with the pre-calibration results, the classification accuracy on the test set is more than 92%, which proves that the combined method can effectively distinguish suitable from unsuitable areas. Thirdly, we applied the method to another region and performed a navigation experiment in the split-matching area. The average positioning error was better than 100 m, and the correct rate was more than 90%. The results show that sailing in the selected area allows the trajectory to be matched accurately and reduces the positioning error.
14

He, Xingjian, Jing Liu, Jun Fu, Xinxin Zhu, Jinqiao Wang, and Hanqing Lu. "Consistent-Separable Feature Representation for Semantic Segmentation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1531–39. http://dx.doi.org/10.1609/aaai.v35i2.16244.

Abstract:
Cross-entropy loss combined with softmax is one of the most commonly used supervision components in most existing segmentation methods. The softmax loss is typically good at optimizing the inter-class difference, but not good at reducing the intra-class variation, which can be suboptimal for semantic segmentation task. In this paper, we propose a Consistent-Separable Feature Representation Network to model the Consistent-Separable (C-S) features, which are intra-class consistent and inter-class separable, improving the discriminative power of the deep features. Specifically, we develop a Consistent-Separable Feature Learning Module to obtain C-S features through a new loss, called Class-Aware Consistency loss. This loss function is proposed to force the deep features to be consistent among the same class and apart between different classes. Moreover, we design an Adaptive feature Aggregation Module to fuse the C-S features and original features from backbone for the better semantic prediction. We show that compared with various baselines, the proposed method brings consistent performance improvement. Our proposed approach achieves state-of-the-art performance on Cityscapes (82.6% mIoU in test set), ADE20K (46.65% mIoU in validation set), COCO Stuff (41.3% mIoU in validation set) and PASCAL Context (55.9% mIoU in test set).
15

Tahmoresnezhad, Jafar, and Sattar Hashemi. "An Efficient yet Effective Random Partitioning and Feature Weighting Approach for Transfer Learning." International Journal of Pattern Recognition and Artificial Intelligence 30, no. 02 (February 2016): 1651003. http://dx.doi.org/10.1142/s0218001416510034.

Abstract:
One of the serious challenges in machine learning and pattern recognition is to transfer knowledge from related but different domains to a new unlabeled domain. Feature selection with maximum mean discrepancy (f-MMD) is a novel and effective approach to transfer knowledge from a source domain (training set) to a target domain (test set) when the training and test sets are drawn from different distributions. However, f-MMD faces serious challenges on datasets with a large number of samples and features. Moreover, f-MMD ignores the feature-label relation when finding the reduced representation of the dataset. In this paper, we jointly exploit transfer learning and class discrimination to cope with the domain shift problem where the distribution difference is considerably large. We therefore put forward a novel transfer learning and class discrimination approach, referred to as the RandOm k-samplesets feature Weighting Approach (ROWA). Specifically, ROWA reduces the distribution difference across domains in an unsupervised manner, where no label is available in the test set. Moreover, ROWA exploits the feature-label relation to separate the classes alongside the domain transfer, and augments the relation between the selected features and the source domain labels. In this work, we employ disjoint or overlapping small-sized samplesets to iteratively converge to the final solution. The use of local sets together with a novel optimization problem yields a robust and effective reduced representation for adaptation across domains. Extensive experiments on real and synthetic datasets verify that ROWA can significantly outperform state-of-the-art transfer learning approaches.
16

Di, Cheng, Jing Peng, Yihua Di, and Siwei Wu. "3D Face Modeling Algorithm for Film and Television Animation Based on Lightweight Convolutional Neural Network." Complexity 2021 (May 24, 2021): 1–10. http://dx.doi.org/10.1155/2021/6752120.

Abstract:
Through the analysis of facial feature extraction technology, this paper designs a lightweight convolutional neural network (LW-CNN). The LW-CNN model adopts a separable convolution structure, which can extract more accurate features with fewer parameters, and can extract 3D feature points of a human face. In order to enhance the accuracy of feature extraction, a face detection method based on an inverted triangle structure is used to detect the face frame of the images in the training set before the model extracts the features. To address the problem that feature extraction algorithms based on the difference criterion cannot effectively extract discriminative information, the Generalized Multiple Maximum Dispersion Difference Criterion (GMMSD) and a corresponding feature extraction algorithm are proposed. The algorithm uses the difference criterion instead of the entropy criterion to avoid the "small sample" problem, and the use of QR decomposition can extract more effective discriminative features for facial recognition while also reducing the computational complexity of feature extraction. Compared with traditional feature extraction methods, GMMSD avoids the "small sample" problem and does not require preprocessing steps on the samples; it uses QR decomposition to extract features from the original samples and retains the distribution characteristics of the original samples. According to different transformation matrices, GMMSD can evolve into different feature extraction algorithms, which shows its generalized character. Experiments show that GMMSD can effectively extract facial identification features and improve the accuracy of facial recognition.
17

Shats, Vladimir. "Properties of the Ordered Feature Values as a Classifier Basis." Cybernetics and Physics, Volume 11, 2022, Number 1 (June 2, 2022): 25–29. http://dx.doi.org/10.35470/2226-4116-2022-11-1-25-29.

Abstract:
The paper proposes a new classifier based on a new concept of closeness for a finite set of objects: feature values of objects of the same class are close if the difference between these values is small enough. To pass to this concept, the combined sample data for each feature k are approximated by mapping onto a set of ordered pairs (k, m), where m is the interval number of the feature's ordered values. The objects of each pair have close values of the considered feature. The lists of training-sample objects of the same class that form an ordered pair are called an information granule. The frequency of each granule is calculated, as a complex event, from the length ratio of the corresponding subsets. These frequencies allow us to calculate the frequencies of an object's feature values in different classes, and then the frequency of the object as a whole in a certain class, the maximum of which determines the object's class. The simplicity, robustness and efficiency of the developed algorithm were confirmed experimentally on 9 databases.
18

Liu, Ze Min, Zhi Guo He, and Yu Dong Cao. "Research on Feature Extraction Method for Handwritten Chinese Character Recognition Based on Supervised Independent Component Analysis." Advanced Materials Research 774-776 (September 2013): 1636–41. http://dx.doi.org/10.4028/www.scientific.net/amr.774-776.1636.

Abstract:
Feature extraction is very difficult for handwritten Chinese characters because of the large character set, complex structure and very large shape variations, and the recognition rate of currently used feature extraction methods falls far short of users' requirements. To address this problem, a new supervised independent component analysis (SICA) algorithm for feature extraction, based on J-divergence entropy, is proposed, which can measure the difference between categories. The scheme takes full advantage of ICA's strong capability to extract local features and to handle data with non-Gaussian distributions, and the extracted feature components and the classifier can be tightly combined. Experiments show that feature extraction based on SICA is superior to gradient-based feature extraction and to plain ICA.
19

Pachori, Ram Bilas. "Discrimination between Ictal and Seizure-Free EEG Signals Using Empirical Mode Decomposition." Research Letters in Signal Processing 2008 (2008): 1–5. http://dx.doi.org/10.1155/2008/293056.

Abstract:
A new method for analysis of electroencephalogram (EEG) signals using empirical mode decomposition (EMD) and Fourier-Bessel (FB) expansion has been presented in this paper. The EMD decomposes an EEG signal into a finite set of band-limited signals termed intrinsic mode functions (IMFs). The mean frequency (MF) for each IMF has been computed using FB expansion. The MF measure of the IMFs has been used as a feature in order to identify the difference between ictal and seizure-free intracranial EEG signals. It has been shown that the MF feature of the IMFs has provided statistically significant difference between ictal and seizure-free EEG signals. Simulation results are included to illustrate the effectiveness of the proposed method.
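The mean-frequency feature can be illustrated with a simple power-weighted estimate (computed here from an FFT power spectrum for brevity; the paper derives it from a Fourier-Bessel expansion of each IMF, and the EMD step itself is omitted):

```python
import numpy as np

def mean_frequency(signal, fs):
    """Power-weighted mean frequency of a real signal sampled at fs Hz."""
    spec = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)  # bin frequencies in Hz
    return (freqs * spec).sum() / spec.sum()
```

Applied to each IMF of an EEG segment, such a mean-frequency value forms the feature that separates ictal from seizure-free signals in the study.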
20

Kumar R., Arun, Vijay S. Rajpurohit, and Bhairu J. Jirage. "Pomegranate Fruit Quality Assessment Using Machine Intelligence and Wavelet Features." Journal of Horticultural Research 26, no. 1 (June 1, 2018): 53–60. http://dx.doi.org/10.2478/johr-2018-0006.

Abstract:
Quality assessment is an important concern in the post-harvest marketing of fruits, and manual quality assessment of pomegranate fruits poses various problems because it relies on human operators. In the present paper, an efficient machine vision system is designed and implemented to assess the quality of pomegranate fruits. The main objectives of the present study are (1) to adopt the best pre-processing module, (2) to select the best class of features and (3) to develop an efficient machine learning technique for quality assessment of pomegranates. Sample images of pomegranate fruits are captured using a custom-made image acquisition system. Two sets of features, a spatial-domain feature set and a wavelet feature set, are extracted for all sample images. Experiments are conducted by training both artificial neural networks (ANNs) and support vector machines (SVMs) on both sets of features. The results illustrate that ANNs outperform SVMs, with a difference in accuracy of 12.65%. Further, training on the wavelet feature set yielded more accurate results than the spatial-domain feature set.
21

Zhong, Chengyan, Guanqiu Qi, Neal Mazur, Sarbani Banerjee, Devanshi Malaviya, and Gang Hu. "A Domain Adaptive Person Re-Identification Based on Dual Attention Mechanism and Camstyle Transfer." Algorithms 14, no. 12 (December 13, 2021): 361. http://dx.doi.org/10.3390/a14120361.

Abstract:
Due to the variation in the image capturing process, the difference between source and target sets causes a challenge in unsupervised domain adaptation (UDA) on person re-identification (re-ID). Given a labeled source training set and an unlabeled target training set, this paper focuses on improving the generalization ability of the re-ID model on the target testing set. The proposed method enforces two properties at the same time: (1) camera invariance is achieved through the positive learning formed by unlabeled target images and their camera style transfer counterparts; and (2) the robustness of the backbone network feature extraction is improved, and the accuracy of feature extraction is enhanced by adding a position-channel dual attention mechanism. The proposed network model uses a classic dual-stream network. Comparative experimental results on three public benchmarks prove the superiority of the proposed method.
22

LIAO, SHASHA, and MINGHU JIANG. "A NEW FEATURE SELECTION METHOD BASED ON CONCEPT EXTRACTION IN AUTOMATIC CHINESE TEXT CLASSIFICATION." New Mathematics and Natural Computation 03, no. 03 (November 2007): 331–47. http://dx.doi.org/10.1142/s1793005707000823.

Abstract:
Feature selection is an important part of automatic text classification. In this paper, we use a Chinese semantic dictionary, HowNet, to extract concepts from words as the feature set, because concepts better reflect the meaning of the text. However, as the concept definitions in the dictionary sometimes do not express a word properly, we further define the expression power of every sememe and every definition of a word, and define the relation degree between a sememe and a definition. A threshold is set in the sememe tree: sememes carrying little information are filtered out, while words with weak definitions are retained according to their expression power. By this method, we construct a combined feature set consisting of both sememes and Chinese words. The values of sememes are assigned according to their expression power and their relation to the word. By comparing seven feature weighting methods for text classification, we propose a CHI-MCOR weighting method based on weighting theory and classification precision. Experimental results show that if the words are extracted properly, not only is the feature dimension smaller but the classification precision is also higher. Our method strikes a good balance between features that occur frequently in the corpus and those that occur in only one category, and the difference in classification precision among categories is small.
APA, Harvard, Vancouver, ISO, and other styles
23

Liu, Yong, Shenggen Ju, Junfeng Wang, and Chong Su. "A New Feature Selection Method for Text Classification Based on Independent Feature Space Search." Mathematical Problems in Engineering 2020 (May 12, 2020): 1–14. http://dx.doi.org/10.1155/2020/6076272.

Full text
Abstract:
A feature selection method selects representative feature subsets from the original feature set through different evaluations of feature relevance, aiming to reduce the dimensionality of the features while maintaining the predictive accuracy of a classifier. In this study, we propose a feature selection method for text classification based on independent feature space search. Firstly, a relative document-term frequency difference (RDTFD) method is proposed to divide the features in all text documents into two independent feature sets according to the features' ability to discriminate positive and negative samples. This serves two important functions: it increases the class correlation of the features while reducing the correlation between features, and it narrows the search range of the feature space while maintaining appropriate feature redundancy. Secondly, a feature search strategy is used to find the optimal feature subset within the independent feature space, which can improve the performance of text classification. Finally, experiments conducted on six benchmark corpora show that the RDTFD method based on independent feature space search is more robust than the other feature selection methods.
APA, Harvard, Vancouver, ISO, and other styles
24

Hu, Wenfei. "On the Translation Topology of Confucian Words in C-E dictionary: Structural Comparison and Feature Analysis." Theory and Practice in Language Studies 12, no. 8 (August 1, 2022): 1592–601. http://dx.doi.org/10.17507/tpls.1208.15.

Full text
Abstract:
Confucian words in C-E dictionaries are significant for language learning and cross-cultural communication, and comparative lexicographical study is beneficial for analysing different bilingual dictionaries and especially helpful for improving C-E dictionary compilation. The feature of topology in bilingual dictionaries (including topological equivalence and point-set topological hierarchical structure) provides the theoretical framework for the present study. After stratified sampling and statistical analysis, the paper conducts comparative research on the translation structure and transformation patterns of Confucian words and Biblical words from the perspective of translation topology. The research includes descriptive analysis, an independent t-test and feature analysis. The findings indicate that the translation topology of Confucian words in C-E dictionaries is characterized by simplification and discreteness compared with Biblical words. Confucian words and Biblical words are heterogeneous in distribution, assemblage, relevance and transformation strategy concerning topological point, set and field, which in turn affects the appearance and reordering of the initial event. The reasons are as follows: differences in compilation principles and over-dependence on monolingual dictionaries differentiate the language variables, leading to structural differences in the topological transformation of Confucian words, while the paucity of parallel corpora changes the structure density of the cultural topology set and forms different transformation patterns and representation validity for culture-bound words.
APA, Harvard, Vancouver, ISO, and other styles
25

Chen, Xiaoyue, Xiaoyan Zhang, Jian Zhou, and Ke Zhou. "Rolling Bearings Fault Diagnosis Based on Tree Heuristic Feature Selection and the Dependent Feature Vector Combined with Rough Sets." Applied Sciences 9, no. 6 (March 19, 2019): 1161. http://dx.doi.org/10.3390/app9061161.

Full text
Abstract:
Rolling element bearings (REB) are widely used in all walks of life and play an important role in the healthy operation of all kinds of rotating machinery, so REB fault diagnosis has attracted substantial attention. Fault diagnosis methods based on time-frequency signal analysis and intelligent classification are widely used for REB because of their effectiveness. However, these methods still have two shortcomings: (1) a large amount of redundant information is difficult to identify and delete, and (2) aliasing patterns decrease their classification accuracy. To overcome these problems, this paper puts forward an improved fault diagnosis method based on tree heuristic feature selection (THFS) and the dependent feature vector combined with rough sets (RS-DFV). In the RS-DFV method, the feature set is optimized through the dependent feature vector (DFV), which reveals the essential difference among different REB faults and improves the accuracy of fault description. Moreover, rough sets are utilized to reasonably describe the aliasing patterns and overcome the problem of abnormal termination in DFV extraction. In addition, the THFS method is devised to delete the redundant information and construct the structure of the RS-DFV. Finally, a simulation, four other feature vectors, three other feature selection methods and four other fault diagnosis methods were utilized for REB fault diagnosis to demonstrate the effectiveness of the RS-DFV method. RS-DFV obtained an effective subset of five features out of 100 and achieved very good diagnostic accuracy (100%, 100%, 99.51%, 100%, 99.47%, 100%), much higher than in all comparative tests. The results indicate that the RS-DFV method can select an appropriate feature set, more deeply exploit the effectiveness of the features and more exactly describe the aliasing patterns. Consequently, this method performs better in REB fault diagnosis than the original intelligent methods.
APA, Harvard, Vancouver, ISO, and other styles
26

Liu, Qihang, Chang Huang, Zhuolin Shi, and Shiqiang Zhang. "Probabilistic River Water Mapping from Landsat-8 Using the Support Vector Machine Method." Remote Sensing 12, no. 9 (April 26, 2020): 1374. http://dx.doi.org/10.3390/rs12091374.

Full text
Abstract:
River water extent is essential for river hydrological surveys. Traditional methods for river water mapping often result in significant uncertainties. This paper proposes a support vector machine (SVM)-based river water mapping method that can quantify the extraction uncertainties simultaneously. Five specific bands of Landsat-8 Operational Land Imager (OLI) data were selected to construct the feature set. Considering the effect of terrain, a widely used terrain index called height above nearest drainage, calculated from the 1 arc-second Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM), was also added into the feature set. With this feature set, a posterior probability SVM model was established to extract river water bodies and quantify the uncertainty with posterior probabilities. Three river sections in Northwestern China were selected as the case study areas, considering their different river characteristics and geographical environment. Then, the reliability and stability of the proposed method were evaluated through comparisons with the traditional Normalized Difference Water Index (NDWI) and modified NDWI (mNDWI) methods and validated with higher-resolution Sentinel-2 images. It was found that resultant probability maps obtained by the proposed SVM method achieved generally high accuracy with a weighted root mean square difference of less than 0.1. Other accuracy indices including the Kappa coefficient and critical success index also suggest that the proposed method outperformed the traditional water index methods in terms of river mapping accuracy and thresholding stability. Finally, the proposed method resulted in the ability to separate water bodies from hill shades more easily, ensuring more reliable river water mapping in mountainous regions.
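The NDWI and mNDWI baselines mentioned above follow standard band-ratio formulas (McFeeters' NDWI and Xu's modified NDWI); a minimal sketch with reflectances as plain floats, using illustrative variable names rather than the paper's code:

```python
def ndwi(green, nir):
    """Normalized Difference Water Index: (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir)

def mndwi(green, swir):
    """Modified NDWI: replaces NIR with SWIR to better suppress built-up noise."""
    return (green - swir) / (green + swir)

# Water reflects more in green than in NIR/SWIR, so positive index
# values are commonly thresholded as water pixels.
print(ndwi(0.30, 0.10))   # clear-water-like reflectances -> positive
print(ndwi(0.10, 0.30))   # vegetation-like reflectances -> negative
print(mndwi(0.30, 0.05))
```

The SVM approach in the paper replaces the single hard threshold on such an index with a posterior probability, which is what allows it to quantify extraction uncertainty.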
APA, Harvard, Vancouver, ISO, and other styles
27

Hou, Huirang, Xiaonei Zhang, and Qinghao Meng. "Olfactory EEG Signal Classification Using a Trapezoid Difference-Based Electrode Sequence Hashing Approach." International Journal of Neural Systems 30, no. 03 (February 18, 2020): 2050011. http://dx.doi.org/10.1142/s0129065720500112.

Full text
Abstract:
Olfactory-induced electroencephalogram (EEG) signal classification is of great significance in a variety of fields, such as disorder treatment, neuroscience research, multimedia applications and brain–computer interfaces. In this paper, a trapezoid difference-based electrode sequence hashing method is proposed for olfactory EEG signal classification. First, an [Formula: see text]-layer trapezoid feature set, whose top, bottom and height are in the ratio 1:2:1, is constructed for each frequency band of each EEG sample. This construction is based on [Formula: see text] optimized power-spectral-density features extracted from [Formula: see text] real electrodes and [Formula: see text] nonreal electrodes' features. Subsequently, the [Formula: see text] real electrodes' sequence (ES) codes of each layer of the constructed trapezoid feature set are obtained by arranging the feature values in ascending order. Finally, nearest neighbor classification is used to find the class whose ES codes are most similar to those of the testing sample. Thirteen-class olfactory EEG signals collected from 11 subjects are used to compare the classification performance of the proposed method with six traditional classification methods. The comparison shows that the proposed method gives an average accuracy of 94.3%, Cohen's kappa value of 0.94, precision of 95.0%, and F1-measure of 94.6%, which are higher than those of the existing methods.
APA, Harvard, Vancouver, ISO, and other styles
28

Plancade, Sandra, Magali Berland, Mélisande Blein-Nicolas, Olivier Langella, Ariane Bassignani, and Catherine Juste. "A combined test for feature selection on sparse metaproteomics data—an alternative to missing value imputation." PeerJ 10 (June 24, 2022): e13525. http://dx.doi.org/10.7717/peerj.13525.

Full text
Abstract:
One of the difficulties encountered in the statistical analysis of metaproteomics data is the high proportion of missing values, which are usually treated by imputation. Nevertheless, imputation methods are based on restrictive assumptions regarding the missingness mechanism, namely "at random" or "not at random". To circumvent these limitations in the context of feature selection in a multi-class comparison, we propose a univariate selection method that combines a test of association between missingness and classes with a test for difference of observed intensities between classes. This approach implicitly handles both missingness mechanisms. We performed a quantitative and qualitative comparison of our procedure with imputation-based feature selection methods on two experimental data sets, as well as on simulated data covering various scenarios regarding the missingness mechanisms and the nature of the difference of expression (differential intensity or differential presence). Whereas we observed similar prediction performance on the experimental data sets, the feature rankings and selections from the various imputation-based methods were strongly divergent. We showed that the combined test reaches a compromise by correlating reasonably with other methods, and remains efficient in all simulated scenarios, unlike imputation-based feature selection methods.
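The abstract does not state how the missingness-association test and the intensity-difference test are merged, so the sketch below uses Fisher's method purely as a generic, hypothetical way to combine two per-feature p-values into one statistic:

```python
import math

def fisher_combine(p_values):
    """Fisher's method: X = -2 * sum(ln p_i); under H0 this is
    chi-squared distributed with 2k degrees of freedom."""
    return -2.0 * sum(math.log(p) for p in p_values)

# Hypothetical per-feature p-values: one from a test of association
# between missingness and class, one from a test on observed intensities.
p_missing, p_intensity = 0.05, 0.05
stat = fisher_combine([p_missing, p_intensity])
print(round(stat, 2))
```

A large combined statistic flags a feature that differs between classes in either its presence pattern or its observed intensities, which mirrors the "implicitly handles both mechanisms" property described above.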
APA, Harvard, Vancouver, ISO, and other styles
29

Lötsch, Jörn, and Alfred Ultsch. "Enhancing Explainable Machine Learning by Reconsidering Initially Unselected Items in Feature Selection for Classification." BioMedInformatics 2, no. 4 (December 12, 2022): 701–14. http://dx.doi.org/10.3390/biomedinformatics2040047.

Full text
Abstract:
Feature selection is a common step in data preprocessing that precedes machine learning to reduce data space and the computational cost of processing or obtaining the data. Filtering out uninformative variables is also important for knowledge discovery. By reducing the data space to only those components that are informative to the class structure, feature selection can simplify models so that they can be more easily interpreted by researchers in the field, reminiscent of explainable artificial intelligence. Knowledge discovery in complex data thus benefits from feature selection that aims to understand feature sets in the thematic context from which the data set originates. However, a single variable selected from a very small number of variables that are technically sufficient for AI training may make little immediate thematic sense, whereas the additional consideration of a variable discarded during feature selection could make scientific discovery very explicit. In this report, we propose an approach to explainable feature selection (XFS) based on a systematic reconsideration of unselected features. The difference between the respective classifications when training the algorithms with the selected features or with the unselected features provides a valid estimate of whether the relevant features in a data set have been selected and uninformative or trivial information was filtered out. It is shown that revisiting originally unselected variables in multivariate data sets allows for the detection of pathologies and errors in the feature selection that occasionally resulted in the failure to identify the most appropriate variables.
APA, Harvard, Vancouver, ISO, and other styles
30

Agarwal, Saurabh, and Ki-Hyun Jung. "HSB-SPAM: An Efficient Image Filtering Detection Technique." Applied Sciences 11, no. 9 (April 21, 2021): 3749. http://dx.doi.org/10.3390/app11093749.

Full text
Abstract:
Median filtering is used extensively for image enhancement and anti-forensics. It is also used to disguise the traces of image processing operations such as JPEG compression and image resampling when employed as an image de-noising and smoothing tool. In this paper, a robust image forensic technique, HSB-SPAM, is proposed to assist in median filtering detection. The proposed technique considers the higher significant bit-plane (HSB) of the image to highlight statistical changes efficiently. Further, multiple difference arrays along with the first-order pixel difference are used to separate the pixel differences, and the Laplacian pixel difference is applied to extract a robust feature set. To compact the feature vectors, thresholding is applied to the difference arrays. As a result, the proposed detector is able to detect median, mean and Gaussian filtering operations with higher accuracy than existing detectors. In the experiments, the performance of the proposed detector is validated on small-size and post-JPEG-compressed images, where the proposed method outperforms state-of-the-art detectors in most cases.
APA, Harvard, Vancouver, ISO, and other styles
31

Guo, Shi Xu, Jia Xin Chen, and Bo Peng. "Research of Object Tracking Algorithm Based on BRISK." Advanced Materials Research 1049-1050 (October 2014): 1496–501. http://dx.doi.org/10.4028/www.scientific.net/amr.1049-1050.1496.

Full text
Abstract:
To address the high complexity, heavy computation and difficulty of real-time application in current moving-target tracking algorithms, this paper introduces the BRISK feature extraction algorithm and proposes an object tracking algorithm based on BRISK. A background model is set up and the background difference method is used to detect the moving target template, which is then matched in the next frame to track the target. To reduce the feature-matching search area and further improve the real-time performance of the algorithm, the Kalman filter is also introduced to estimate the target's motion trajectory. Experimental results show that, compared with SURF- and SIFT-based feature tracking algorithms, the proposed algorithm greatly improves real-time performance.
APA, Harvard, Vancouver, ISO, and other styles
32

Tao, Ran, Han Zhang, Yutong Zheng, and Marios Savvides. "Powering Finetuning in Few-Shot Learning: Domain-Agnostic Bias Reduction with Selected Sampling." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8467–75. http://dx.doi.org/10.1609/aaai.v36i8.20823.

Full text
Abstract:
In recent works, a deep network trained on the meta-training set serves as a strong baseline in few-shot learning. In this paper, we move forward to refine novel-class features by finetuning a trained deep network. Finetuning is designed to focus on reducing biases in novel-class feature distributions, which we define as two aspects: class-agnostic and class-specific biases. Class-agnostic bias is the distribution shift introduced by domain difference, which we propose a Distribution Calibration Module (DCM) to reduce; DCM has the desirable properties of eliminating domain difference and enabling fast feature adaptation during optimization. Class-specific bias is the biased estimation arising from using only a few samples in novel classes, which we propose Selected Sampling (SS) to reduce; without inferring the actual class distribution, SS runs sampling using proposal distributions around support-set samples. By powering finetuning with DCM and SS, we achieve state-of-the-art results on Meta-Dataset, with consistent performance boosts over ten datasets from different domains. We believe our simple yet effective method demonstrates its potential for practical few-shot applications.
APA, Harvard, Vancouver, ISO, and other styles
33

Park, Ji Hun. "Volumetric Model Body Outline Computation for an Object Tracking in a Video Stream." Applied Mechanics and Materials 479-480 (December 2013): 897–900. http://dx.doi.org/10.4028/www.scientific.net/amm.479-480.897.

Full text
Abstract:
This paper presents a new outline contour generation method to track a rigid body in a single video stream taken with a moving camera of varying focal length. We assume feature points and background-eliminated images are provided, and we obtain different views of the tracked object while it is stationary. Using these different views, we volume-reconstruct a 3D model of the body after 3D scene analysis. To compute the camera parameters and target object movement for a scene with a moving target, we use fixed background feature points and formulate the computation as a parameter optimization problem. The performance index for the optimization minimizes feature point errors as well as the outline contour difference between the reconstructed 3D model and the background-eliminated tracked object. The proposed method is tested on an input image set.
APA, Harvard, Vancouver, ISO, and other styles
34

Schmidt, Georg, Stefan Stüring, Norman Richnow, and Ingo Siegert. "Handling of “unknown unknowns” - classification of 3D geometries from CAD open set datasets using Convolutional Neural Networks." Online Journal of Applied Knowledge Management 10, no. 1 (September 6, 2022): 62–76. http://dx.doi.org/10.36965/ojakm.2022.10(1)62-76.

Full text
Abstract:
This paper refers to the application of Convolutional Neural Networks (CNNs) for the classification of 3D geometries from Computer-Aided Design (CAD) datasets with a large proportion of unknown unknowns (classes unknown after training). The motivation of the work is the automatic recognition of standard parts in the large CAD-based image data set and thus, reducing the time required for the manual preparation of the data set. The classification is based on a threshold value of the Softmax output layer (first criterion), as well as on three different methods of a second criterion. The three methods for the second criterion are the comparison of metadata relating to the geometries, the comparison of feature vectors from previous dense layers of the CNN with a Spearman correlation, and the distance-based difference between multivariate Gaussian models of these feature vectors using Kullback-Leibler divergence. It is confirmed that all three methods are suitable to solve an open set problem in large 3D datasets (more than 1000 different geometries). Classification and training are image-based using different multi-view representations of the geometries.
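The third second-criterion method above compares multivariate Gaussian models of the CNN feature vectors via Kullback-Leibler divergence. As a simplified illustration (the univariate closed form, not the paper's multivariate implementation), the divergence between two Gaussians is:

```python
import math

def kl_gaussian(mu1, sigma1, mu2, sigma2):
    """KL(N(mu1, sigma1^2) || N(mu2, sigma2^2)),
    closed form for one-dimensional Gaussians."""
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2 * sigma2 ** 2)
            - 0.5)

# Identical distributions diverge by zero; shifting a mean or widening a
# variance increases the divergence, which is how a test geometry's feature
# distribution can be flagged as "unknown" relative to every trained class.
print(kl_gaussian(0.0, 1.0, 0.0, 1.0))
print(kl_gaussian(0.0, 1.0, 1.0, 1.0))
```

In the multivariate case the same idea applies with mean vectors and covariance matrices; a large divergence to all known-class models is evidence for an unknown unknown.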
APA, Harvard, Vancouver, ISO, and other styles
35

Huang, Zhenzhen, Qiang Niu, Ilsun You, and Giovanni Pau. "Acceleration Feature Extraction of Human Body Based on Wearable Devices." Energies 14, no. 4 (February 10, 2021): 924. http://dx.doi.org/10.3390/en14040924.

Full text
Abstract:
Wearable devices used for human body monitoring have broad applications in smart home, sports, security and other fields, and provide an extremely convenient way to collect large amounts of human motion data. In this paper, a human body acceleration feature extraction method based on wearable devices is studied. Firstly, a Butterworth filter is used to filter the data. Then, to ensure that the extracted feature values are accurate, abnormal data must be removed at the source. This paper combines the Kalman filter with a genetic algorithm, using the genetic algorithm to encode the parameters of the Kalman filter. We use the Standard Deviation (SD), Interval of Peaks (IoP) and Difference between Adjacent Peaks and Troughs (DAPT) to analyze seven kinds of acceleration. Finally, the SisFall data set, a globally available data set for study and experiments, is used to verify the effectiveness of our method. Based on the simulation results, we conclude that our method can clearly distinguish different activities.
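The SD/IoP/DAPT features named above can be sketched with simple local-extrema detection. This is a toy stand-in for the paper's pipeline (no Butterworth or Kalman/genetic stage shown, and the exact averaging conventions for IoP and DAPT are assumptions):

```python
from statistics import pstdev

def local_peaks(signal):
    """Indices of strict local maxima."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]]

def features(signal):
    peaks = local_peaks(signal)
    troughs = local_peaks([-x for x in signal])  # minima = maxima of negated signal
    sd = pstdev(signal)  # Standard Deviation (SD)
    # mean Interval of Peaks (IoP): average index gap between successive peaks
    iop = (sum(b - a for a, b in zip(peaks, peaks[1:])) / (len(peaks) - 1)
           if len(peaks) > 1 else 0.0)
    # Difference between Adjacent Peaks and Troughs (DAPT), here taken as the
    # mean peak height minus the mean trough height
    dapt = ((sum(signal[p] for p in peaks) / len(peaks)
             - sum(signal[t] for t in troughs) / len(troughs))
            if peaks and troughs else 0.0)
    return sd, iop, dapt

acc = [0, 1, 0, 2, 0, 3, 0]  # toy acceleration trace
print(features(acc))
```

On real accelerometer traces these three scalars differ systematically between activities (e.g. walking versus falling), which is what makes them usable as classification features.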
APA, Harvard, Vancouver, ISO, and other styles
36

Shin, Hyunseok, and Sejong Oh. "Feature-Weighted Sampling for Proper Evaluation of Classification Models." Applied Sciences 11, no. 5 (February 25, 2021): 2039. http://dx.doi.org/10.3390/app11052039.

Full text
Abstract:
In machine learning applications, classification schemes have been widely used for prediction tasks. Typically, to develop a prediction model, the given dataset is divided into training and test sets; the training set is used to build the model and the test set is used to evaluate it, with random sampling traditionally used to divide the dataset. The problem, however, is that the performance of the model is evaluated differently depending on how the training and test sets are divided. Therefore, in this study, we propose an improved sampling method for the accurate evaluation of a classification model. We first generate numerous candidate train/test splits using the R-value-based sampling method, evaluate how similar the distribution of each candidate is to that of the whole dataset, and select the split with the smallest distribution difference as the final train/test set. Histograms and feature importance are used to evaluate the similarity of distributions. The proposed method produces more proper training and test sets than previous sampling methods, including random and non-random sampling.
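The histogram-based similarity check described above can be sketched as an L1 distance between normalized histograms (an illustrative measure; the paper's exact similarity metric and the R-value sampling step are not reproduced here):

```python
def hist(values, bins, lo, hi):
    """Normalized histogram of `values` over [lo, hi)."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)  # clamp top edge into last bin
        counts[idx] += 1
    return [c / len(values) for c in counts]

def distribution_difference(sample, whole, bins=4, lo=0.0, hi=1.0):
    """L1 distance between the sample's and the whole dataset's histograms;
    smaller means the split better mirrors the full distribution."""
    h1, h2 = hist(sample, bins, lo, hi), hist(whole, bins, lo, hi)
    return sum(abs(a - b) for a, b in zip(h1, h2))

whole = [0.1, 0.3, 0.5, 0.7, 0.9, 0.2, 0.4, 0.6]
good_split = [0.1, 0.3, 0.5, 0.7]  # roughly mirrors the whole distribution
print(distribution_difference(good_split, whole))
```

Among many candidate splits, the one minimizing this difference (computed per feature, possibly weighted by feature importance as in the paper) would be kept as the final train/test division.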
APA, Harvard, Vancouver, ISO, and other styles
37

Ma, Chaoqun, Xiaoguang Hu, Jin Xiao, and Guofeng Zhang. "Homogenized ORB Algorithm Using Dynamic Threshold and Improved Quadtree." Mathematical Problems in Engineering 2021 (January 5, 2021): 1–19. http://dx.doi.org/10.1155/2021/6693627.

Full text
Abstract:
The Oriented FAST and Rotated BRIEF (ORB) algorithm has the problem that the extracted feature points are overconcentrated or even overlapping, leading to the loss of local image feature information. A homogenized ORB algorithm using dynamic thresholds and an improved quadtree method, named Quadtree ORB (QTORB), is proposed in this paper. In the feature point extraction stage, a new dynamic local threshold calculation method is proposed to enhance the algorithm's ability to extract feature points in homogeneous regions. Then, an improved quadtree method is adopted to manage and optimize feature points, eliminating those that are excessively concentrated or overlapping. Meanwhile, during feature point optimization, different quadtree depths are set at different image pyramid levels to prevent excessive splitting of the quadtree and increase calculation speed. In the feature point description stage, local gray difference information is introduced to enhance the saliency of the feature description. Finally, the Hamming distance is used to match points and RANSAC is used to avoid mismatches. Two datasets, an optical image dataset and a SAR image dataset, are used in the experiments. The experimental results show that, considering accuracy and real-time efficiency, QTORB can effectively improve the distribution uniformity of feature points.
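ORB descriptors are binary strings, so the Hamming-distance matching mentioned above reduces to counting differing bits; a minimal sketch with descriptors packed as integers (real ORB descriptors are 256 bits):

```python
def hamming(a, b):
    """Number of differing bits between two binary descriptors packed as ints."""
    return bin(a ^ b).count("1")

def best_match(query, candidates):
    """Index of the candidate descriptor nearest to `query` in Hamming distance."""
    return min(range(len(candidates)), key=lambda i: hamming(query, candidates[i]))

d1, d2, d3 = 0b10110010, 0b10110011, 0b01001101
print(hamming(d1, d2))          # differ in the last bit only
print(best_match(d1, [d3, d2])) # d2 is the nearer candidate
```

XOR-and-popcount is why Hamming matching of binary descriptors is so much faster than the Euclidean matching used for SIFT/SURF float descriptors; RANSAC then rejects geometrically inconsistent matches.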
APA, Harvard, Vancouver, ISO, and other styles
38

Liu, Yinghui. "Classification of Videos Based on Deep Learning." Journal of Sensors 2022 (September 6, 2022): 1–6. http://dx.doi.org/10.1155/2022/9876777.

Full text
Abstract:
Automatic classification of videos is a basic task in content archiving and video scene understanding for broadcasters, and temporal modeling is the key to video classification. To address this problem, this paper proposes a new video classification method based on temporal difference networks (TDN), which focuses on capturing multiscale temporal information for effective action classification. The core idea of TDN is to design an effective temporal module by explicitly using the temporal difference operator and to systematically evaluate its impact on short-term and long-term motion modeling. To fully capture the temporal information of the entire video, TDN establishes a two-level difference model. For local motion modeling, the temporal difference between consecutive frames provides a more refined motion pattern for the convolutional neural network (CNN); for global motion modeling, the temporal differences of segments are combined to capture long-range structure for motion feature extraction. Experimental results on two public video anomaly detection data sets, the UCF sports data set and the SVW field sports data set, show that the proposed method outperforms several existing methods.
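The local motion cue in TDN comes from differencing consecutive frames. The idea can be sketched frame-wise on toy one-dimensional "frames" (the real network applies this to RGB tensors inside a CNN):

```python
def temporal_difference(frames):
    """Element-wise difference between each pair of consecutive frames."""
    return [[b - a for a, b in zip(f0, f1)]
            for f0, f1 in zip(frames, frames[1:])]

# A static region yields zeros; a moving edge shows up as nonzero values
# that trace the direction of motion.
frames = [[0, 0, 1, 1],
          [0, 1, 1, 0],
          [1, 1, 0, 0]]
print(temporal_difference(frames))
```

Stacking such difference maps at the frame level (local) and aggregating them across sampled segments (global) is the two-level scheme the abstract describes.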
APA, Harvard, Vancouver, ISO, and other styles
39

LI, YUN, KANG TU, SIYUAN ZHENG, JINGFANG WANG, YIXUE LI, PEI HAO, and XUAN LI. "ASSOCIATION OF FEATURE GENE EXPRESSION WITH STRUCTURAL FINGERPRINTS OF CHEMICAL COMPOUNDS." Journal of Bioinformatics and Computational Biology 09, no. 04 (August 2011): 503–19. http://dx.doi.org/10.1142/s0219720011005446.

Full text
Abstract:
Exploring the relationship between a chemical structure and its biological function is of great importance for drug discovery. For understanding the mechanisms of drug action, researchers traditionally focused on the molecular structures in the context of interactions with targets. The newly emerged high-throughput "omics" technology opened a new dimension to study the structure–function relationship of chemicals. Previous studies made attempts to introduce transcriptomics data into chemical function investigation. But little effort has been made to link structural fingerprints of compounds with defined intracellular functions, i.e. expression of particular genes and altered pathways. By integrating the chemical structural information with the gene expression profiles of chemical-treated cells, we developed a novel method to associate the structural difference between compounds with the expression of a definite set of genes, which were called feature genes. A subtraction protocol was designed to extract a minimum gene set related to chemical structural features, which can be utilized in practice as markers for drug screening. Case studies demonstrated that our approach is capable of finding feature genes associated with chemical structural fingerprints.
APA, Harvard, Vancouver, ISO, and other styles
40

Tolhuisen, Manon L., Jan W. Hoving, Miou S. Koopman, Manon Kappelhof, Henk van Voorst, Agnetha E. Bruggeman, Adam M. Demchuck, et al. "Outcome Prediction Based on Automatically Extracted Infarct Core Image Features in Patients with Acute Ischemic Stroke." Diagnostics 12, no. 8 (July 23, 2022): 1786. http://dx.doi.org/10.3390/diagnostics12081786.

Full text
Abstract:
Infarct volume (FIV) on follow-up diffusion-weighted imaging (FU-DWI) is only moderately associated with functional outcome in acute ischemic stroke patients. However, FU-DWI may contain other imaging biomarkers that could aid in improving outcome prediction models for acute ischemic stroke. We included FU-DWI data from the HERMES, ISLES, and MR CLEAN-NO IV databases. Lesions were segmented using a deep learning model trained on the HERMES and ISLES datasets. We assessed the performance of three classifiers in predicting functional independence for the MR CLEAN-NO IV trial cohort based on: (1) FIV alone, (2) the most important features obtained from a trained convolutional autoencoder (CAE), and (3) radiomics. Furthermore, we investigated feature importance in the radiomic-feature-based model. For outcome prediction, we included 206 patients: 144 scans were included in the training set, 21 in the validation set, and 41 in the test set. The classifiers that included the CAE and the radiomic features showed AUC values of 0.88 and 0.81, respectively, while the model based on FIV had an AUC of 0.79. This difference was not found to be statistically significant. Feature importance results showed that lesion intensity heterogeneity received more weight than lesion volume in outcome prediction. This study suggests that predictions of functional outcome should not be based on FIV alone and that FU-DWI images capture additional prognostic information.
APA, Harvard, Vancouver, ISO, and other styles
41

Mishra, Kshitij, and P. Rama Chandra Prasad. "Automatic Extraction of Water Bodies from Landsat Imagery Using Perceptron Model." Journal of Computational Environmental Sciences 2015 (February 2, 2015): 1–9. http://dx.doi.org/10.1155/2015/903465.

Full text
Abstract:
Extraction of water bodies from satellite imagery has been widely explored in the recent past, and several approaches have been developed to delineate water bodies from satellite imagery varying in spatial, spectral, and temporal characteristics. The current study puts forward an automatic approach to extract water bodies from Landsat satellite imagery using a perceptron model. The perceptron performs classification based on a linear predictor function that merges a few characteristic properties of the object, commonly known as feature vectors. The feature vector, combined with the weights, sums to provide the input to the output function, which is a binary hard-limit function. The feature vector in this study is a set of characteristic properties exhibited by a water-body pixel: low reflectance of water in the SWIR band, comparisons of reflectance in different bands, and a modified normalized difference water index are used as descriptors. The normalized difference water index is modified to improve its performance over shallow regions. In this study, a threshold value of 2 proved best among the three possible threshold values. The proposed method accurately and quickly discriminated water from other land cover features.
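The perceptron described above (weighted feature vector fed to a binary hard-limit output) can be sketched directly; the weights, bias and descriptor values below are illustrative placeholders, not the paper's fitted parameters:

```python
def hard_limit(x):
    """Binary hard-limit activation: 1 for non-negative input, else 0."""
    return 1 if x >= 0 else 0

def perceptron(features, weights, bias):
    """Linear predictor function followed by the hard-limit output."""
    s = sum(w * f for w, f in zip(weights, features)) + bias
    return hard_limit(s)

# Hypothetical per-pixel descriptors: a low-SWIR-reflectance flag, a
# band-comparison flag, and a modified NDWI value.
water_pixel = [1.0, 1.0, 0.6]
land_pixel = [0.0, 0.0, -0.4]
weights, bias = [1.0, 1.0, 1.0], -2.0
print(perceptron(water_pixel, weights, bias))
print(perceptron(land_pixel, weights, bias))
```

With unit weights and bias -2, the pixel is labeled water only when the descriptors jointly clear the threshold, which matches the paper's reported use of a threshold over the summed feature vector.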
APA, Harvard, Vancouver, ISO, and other styles
42

Zhu, Min, Jing Xia, Mo Lei Yan, Sheng Yu Zhang, Guo Long Cai, Jing Yan, and Gang Min Ning. "Feature Selection and Optimization of Random Forest Modeling." Applied Mechanics and Materials 687-691 (November 2014): 1416–19. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1416.

Full text
Abstract:
The traditional random forest algorithm has difficulty achieving good classification results on small-sample data sets: because few samples are available during repeated random selection, the resulting trees differ very little, which drowns out correct decisions, increases the generalization error of the model and reduces predictive accuracy. For a small-sample data set of sepsis cases, this paper adopts an interval-division strategy for the parameters used in random forest modeling: the feature interval is divided into a high-correlation interval and an uncertain-correlation interval, and data are selected from the two intervals separately for modeling. This ultimately reduces the model's generalization error and improves prediction accuracy.
APA, Harvard, Vancouver, ISO, and other styles
43

Priya, Sarv, Yanan Liu, Caitlin Ward, Nam H. Le, Neetu Soni, Ravishankar Pillenahalli Maheshwarappa, Varun Monga, Honghai Zhang, Milan Sonka, and Girish Bathla. "Radiomic Based Machine Learning Performance for a Three Class Problem in Neuro-Oncology: Time to Test the Waters?" Cancers 13, no. 11 (May 24, 2021): 2568. http://dx.doi.org/10.3390/cancers13112568.

Full text
Abstract:
Prior radiomics studies have focused on two-class brain tumor classification, which limits generalizability. The performance of radiomics in differentiating the three most common malignant brain tumors (glioblastoma (GBM), primary central nervous system lymphoma (PCNSL), and metastatic disease) is assessed; factors affecting the model performance and usefulness of a single sequence versus multiparametric MRI (MP-MRI) remain largely unaddressed. This retrospective study included 253 patients (120 metastatic (lung and brain), 40 PCNSL, and 93 GBM). Radiomic features were extracted for a whole tumor mask (enhancing plus necrotic) and an edema mask (first pipeline), as well as for separate enhancing, necrotic, and edema masks (second pipeline). Model performance was evaluated using MP-MRI, individual sequences, and the T1 contrast enhanced (T1-CE) sequence without the edema mask across 45 model/feature selection combinations. The second pipeline showed significantly high performance across all combinations (Brier score: 0.311–0.325). GBRM fit using the full feature set from the T1-CE sequence was the best model. The majority of the top models were built using a full feature set and inbuilt feature selection. No significant difference was seen between the top-performing models for MP-MRI (AUC 0.910) and the T1-CE sequence with (AUC 0.908) and without edema masks (AUC 0.894). T1-CE is the single best sequence, with performance comparable to that of multiparametric MRI (MP-MRI). Model performance varies based on tumor subregion and the combination of model/feature selection methods.
APA, Harvard, Vancouver, ISO, and other styles
44

Yu, Fang. "Implementation of a Human Motion Capture System Based on the Internet of Things Machine Vision." Journal of Cases on Information Technology 24, no. 5 (February 21, 2022): 1–20. http://dx.doi.org/10.4018/jcit.302245.

Full text
Abstract:
Stereo matching algorithms can be subdivided into local stereo matching and global stereo matching. The log-likelihood variance cost function achieves higher capture efficiency than the ordinary log-mean-square-error cost function because it converges faster on features. By combining a grey channel with a frame-difference channel, a better network structure and parameters are obtained on the KTH data set, which preserves classification performance while greatly reducing the number of parameters, improving training efficiency, and improving classification accuracy. The article uses a dual-channel 3D convolutional neural network to achieve 92.5% accuracy in human feature capture, significantly better than many traditional feature extraction techniques proposed in the literature.
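The frame-difference channel mentioned above is a common motion cue that can be sketched as a per-pixel absolute difference between consecutive grey-scale frames; this is a generic formulation for illustration, not the paper's implementation.

```python
# Sketch of a frame-difference channel: |next_frame - prev_frame| per pixel.

def frame_difference(prev_frame, next_frame):
    """prev_frame / next_frame: 2-D lists of grey values of equal shape;
    returns the per-pixel absolute difference (the motion channel)."""
    return [[abs(b - a) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(prev_frame, next_frame)]

f0 = [[10, 10], [10, 10]]
f1 = [[10, 40], [25, 10]]
print(frame_difference(f0, f1))  # [[0, 30], [15, 0]]
```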
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Shuhan, Bai Xue, Han Yang, Xiaorun Li, Liaoying Zhao, and Chein-I. Chang. "Optical Remote Sensing Image Registration Using Spatial-Consistency and Average Regional Information Divergence Minimization via Quantum-Behaved Particle Swarm Optimization." Remote Sensing 12, no. 18 (September 19, 2020): 3066. http://dx.doi.org/10.3390/rs12183066.

Full text
Abstract:
Due to invariance to significant intensity differences, similarity metrics have been widely used as criteria in area-based methods for registering optical remote sensing images. However, for images with large scale and rotation differences, the robustness of the similarity metric largely determines registration accuracy. In addition, area-based methods usually require appropriately selected initial values for registration parameters. This paper presents a registration approach using spatial consistency (SC) and average regional information divergence (ARID), called spatial-consistency and average regional information divergence minimization via quantum-behaved particle swarm optimization (SC-ARID-QPSO), for optical remote sensing image registration. Its key idea is to minimize ARID with SC to select an ARID-minimized, spatially consistent feature point set. The selected consistent feature set is then tuned randomly to generate a set of M registration parameters, which provide initial particle swarms for QPSO to obtain the final optimal registration parameters. The proposed ARID is used as a criterion for selecting the consistent feature set, generating the initial parameter sets, and defining the fitness functions used by QPSO. The iterative process of QPSO is terminated by a custom-designed automatic stopping rule. To evaluate the performance of SC-ARID-QPSO, both simulated and real images are used in validation experiments. In addition, two data sets are specifically designed for a comparative study and analysis with existing state-of-the-art methods. The experimental results demonstrate that SC-ARID-QPSO produces better registration accuracy and robustness than the compared methods.
APA, Harvard, Vancouver, ISO, and other styles
46

Dixit, Abhishek, Ashish Mani, and Rohit Bansal. "DEPSOSVM: variant of differential evolution based on PSO for image and text data classification." International Journal of Intelligent Computing and Cybernetics 13, no. 2 (May 12, 2020): 223–38. http://dx.doi.org/10.1108/ijicc-01-2020-0004.

Full text
Abstract:
Purpose: Feature selection is an important data pre-processing step, especially for high-dimensional data sets. A model trained on a high-dimensional data set performs poorly and yields low classification accuracy; applying feature selection before training improves both performance and classification accuracy. Design/methodology/approach: A novel optimization approach that hybridizes binary particle swarm optimization (BPSO) and differential evolution (DE) for fine-tuning an SVM classifier is presented; the implemented classifier is named DEPSOSVM. Findings: The approach is evaluated on 20 UCI benchmark text classification data sets, and its performance is further evaluated on a UCI benchmark image data set of cancer images. The results show that the proposed DEPSOSVM technique significantly improves on other feature selection algorithms in the literature and also achieves better classification accuracy. Originality/value: The proposed approach differs from previous work, which used the DE/(rand/1) mutation strategy, by using DE/(rand/2) and updating the mutation strategy with BPSO. Another difference is the crossover approach, which novelly compares the best particle with a sigmoid function. The core contribution of this paper is to hybridize DE with BPSO combined with an SVM classifier (DEPSOSVM) to handle feature selection problems.
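The two building blocks named in this abstract can be sketched generically: in binary PSO, a real-valued velocity is squashed through a sigmoid and compared with a uniform random draw to decide each feature-selection bit, and DE/(rand/2) mutates a base vector with two scaled difference vectors. These are textbook formulations for illustration; the paper's exact update combining the best particle with the sigmoid is not reproduced here, and the scale factor is an assumed value.

```python
# Generic BPSO bit update and DE/(rand/2) mutation (illustrative only).
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def bpso_update_bit(velocity, rng=random):
    """Select the feature (bit = 1) when sigmoid(velocity) beats a uniform draw."""
    return 1 if rng.random() < sigmoid(velocity) else 0

def de_rand2_mutation(pop, i, f=0.5, rng=random):
    """DE/(rand/2): base vector plus two scaled difference vectors, all
    drawn from distinct population members other than individual i."""
    r = rng.sample([j for j in range(len(pop)) if j != i], 5)
    a, b, c, d, e = (pop[j] for j in r)
    return [ai + f * (bi - ci) + f * (di - ei)
            for ai, bi, ci, di, ei in zip(a, b, c, d, e)]
```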
APA, Harvard, Vancouver, ISO, and other styles
47

Lin, Mingqiang, Chenhao Yan, and Xianping Zeng. "State of Health Estimation Method for Lithium-Ion Batteries via Generalized Additivity Model and Transfer Component Analysis." World Electric Vehicle Journal 14, no. 1 (January 5, 2023): 14. http://dx.doi.org/10.3390/wevj14010014.

Full text
Abstract:
Battery state of health (SOH) is an important indicator of aging severity in lithium-ion batteries and an indispensable parameter of the battery management system. In this paper, an innovative SOH estimation algorithm based on feature transfer is proposed for lithium-ion batteries. Firstly, sequence features carrying battery aging information are extracted from the capacity increment curve. Secondly, transfer component analysis is employed to obtain a mapping that minimizes the data distribution difference between the training set and the test set in a shared feature space. Finally, a generalized additive model is investigated to estimate battery health status. The experimental results demonstrate that the proposed algorithm is capable of forecasting the SOH of lithium-ion batteries and outperforms several comparison algorithms. The predictive error evaluation indicators for each battery are all less than 2.5%. In addition, satisfactory SOH estimation results can be obtained with only a small amount of training data. Comparative experiments using traditional features and different machine learning methods also testify to the superiority of the proposed algorithm.
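Transfer component analysis, as used above, seeks a mapping that minimizes the distribution difference between source (training) and target (test) features. A crude proxy for that difference is the squared distance between the two feature means, i.e. a linear-kernel maximum mean discrepancy; the sketch below illustrates only this quantity and is an assumption-laden simplification, since TCA itself also learns the mapping via an eigen-decomposition.

```python
# Linear-kernel MMD proxy: squared distance between source and target means.

def mean_vector(rows):
    """Column-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def mean_discrepancy(source, target):
    """Squared Euclidean distance between the two domain means; a small
    value suggests the domains are aligned in this feature space."""
    ms, mt = mean_vector(source), mean_vector(target)
    return sum((a - b) ** 2 for a, b in zip(ms, mt))
```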
APA, Harvard, Vancouver, ISO, and other styles
48

SONG, FENGXI, DAVID ZHANG, YONG XU, and JIZHONG WANG. "FIVE NEW FEATURE SELECTION METRICS IN TEXT CATEGORIZATION." International Journal of Pattern Recognition and Artificial Intelligence 21, no. 06 (September 2007): 1085–101. http://dx.doi.org/10.1142/s0218001407005831.

Full text
Abstract:
Feature selection has been extensively applied in statistical pattern recognition as a mechanism for cleaning up the set of features used to represent data and as a way of improving the performance of classifiers. Four schemes commonly used for feature selection are Exponential Searches, Stochastic Searches, Sequential Searches, and Best Individual Features. The most popular scheme in text categorization is Best Individual Features, as the extremely high dimensionality of text feature spaces renders the other three schemes time prohibitive. This paper proposes five new metrics for selecting Best Individual Features for use in text categorization. Their effectiveness has been empirically tested on two well-known data collections, Reuters-21578 and 20 Newsgroups. Experimental results show that the performance of two of the five new metrics, Bayesian Rule and F-one Value, is not significantly below that of a good traditional text categorization selection metric, Document Frequency. The performance of another two of these five new metrics, Low Loss Dimensionality Reduction and Relative Frequency Difference, is equal to or better than that of conventional good feature selection metrics such as Mutual Information and Chi-square Statistic.
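Two of the scoring metrics named above can be sketched for a Best Individual Features ranking: document frequency counts the documents containing a term, and a relative-frequency-difference score contrasts the term's relative document frequency between the positive and negative classes. The exact formulation of the latter below is an illustrative assumption, not necessarily the paper's definition.

```python
# Illustrative per-term scoring metrics for Best Individual Features.

def document_frequency(term, docs):
    """Number of documents (sets of terms) containing the term."""
    return sum(1 for d in docs if term in d)

def relative_frequency_difference(term, pos_docs, neg_docs):
    """Relative document frequency in the positive class minus that in
    the negative class (assumed formulation)."""
    p = document_frequency(term, pos_docs) / max(len(pos_docs), 1)
    n = document_frequency(term, neg_docs) / max(len(neg_docs), 1)
    return p - n

pos = [{"goal", "match"}, {"goal", "team"}]
neg = [{"stock", "market"}, {"goal", "stock"}]
print(document_frequency("goal", pos + neg))            # 3
print(relative_frequency_difference("goal", pos, neg))  # 1.0 - 0.5 = 0.5
```

Terms would then be ranked by such a score and the top-k kept as the feature set.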
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, L., F. Rottensteiner, and C. Heipke. "FEATURE DESCRIPTOR BY CONVOLUTION AND POOLING AUTOENCODERS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3/W2 (March 10, 2015): 31–38. http://dx.doi.org/10.5194/isprsarchives-xl-3-w2-31-2015.

Full text
Abstract:
In this paper we present several descriptors for feature-based matching based on autoencoders, and we evaluate the performance of these descriptors. In a training phase, we learn autoencoders from image patches extracted in local windows surrounding key points determined by the Difference of Gaussian extractor. In the matching phase, we construct key point descriptors based on the learned autoencoders, and we use these descriptors as the basis for local keypoint descriptor matching. Three types of descriptors based on autoencoders are presented. To evaluate the performance of these descriptors, recall and 1-precision curves are generated for different kinds of transformations, e.g. zoom and rotation or viewpoint change, using a standard benchmark data set. We compare the performance of these descriptors with that of SIFT. Early results presented in this paper show that, whereas SIFT in general performs better than the new descriptors, the descriptors based on autoencoders show some potential for feature-based matching.
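The matching-phase evaluation described above can be sketched generically: descriptors from one image are matched to their nearest neighbours in descriptor space in the other image, and recall and 1-precision summarise the result against ground-truth correspondences. The distance measure and toy data below are illustrative, not the paper's descriptors.

```python
# Sketch of nearest-neighbour descriptor matching and its evaluation.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_neighbour_matches(desc_a, desc_b):
    """For each descriptor in image A, the index of its nearest descriptor in B."""
    return [min(range(len(desc_b)), key=lambda j: euclidean(d, desc_b[j]))
            for d in desc_a]

def recall_and_one_minus_precision(matches, ground_truth):
    """ground_truth: dict mapping A-indices to the correct B-index."""
    correct = sum(1 for i, j in enumerate(matches) if ground_truth.get(i) == j)
    recall = correct / max(len(ground_truth), 1)
    one_minus_precision = 1.0 - correct / max(len(matches), 1)
    return recall, one_minus_precision
```

Sweeping a distance or ratio threshold over such matches yields the recall versus 1-precision curves used in the comparison with SIFT.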
APA, Harvard, Vancouver, ISO, and other styles
50

Zhang, Wenbo, Jue Qu, Wei Wang, Jun Hu, and Jie Li. "Geo-Location Method for Images of Damaged Roads." Electronics 11, no. 16 (August 12, 2022): 2530. http://dx.doi.org/10.3390/electronics11162530.

Full text
Abstract:
Due to the large difference between images of roads under normal and damaged conditions, geo-location in damaged areas often fails because buildings and iconic signage in the image are occluded or damaged. To study how post-war damage to buildings and landmarks affects the results of geo-location algorithms, and to improve their performance under damaged conditions, this paper used informative reference images and key point selection. To counter the negative effects of occlusion and landmark damage during retrieval, a retrieval method based on reliability- and repeatability-based deep learning feature points is proposed. To verify the effectiveness of the algorithm, a training set consisting of urban, rural, and technology-park road segments was constructed to generate a database of 11,896 reference images. Given the cost of obtaining images of damaged landmarks, artificially generated images of landmarks with different damage ratios were used as the test set. Experiments show that the database optimization method effectively compresses the storage of the feature index and speeds up positioning without affecting accuracy. The proposed image retrieval method optimizes feature points and feature indices to make them robust to damaged terrain and images. The improved algorithm increases the accuracy of geo-location for damaged roads, and the deep-learning-based method outperforms traditional algorithms on this task. Furthermore, the effectiveness of the proposed method is fully demonstrated on a multi-segment road image data set.
APA, Harvard, Vancouver, ISO, and other styles
