Academic literature on the topic 'Feature extraction'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Feature extraction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Feature extraction"

1. Xia, Liegang, Shulin Mi, Junxia Zhang, Jiancheng Luo, Zhanfeng Shen, and Yubin Cheng. "Dual-Stream Feature Extraction Network Based on CNN and Transformer for Building Extraction." Remote Sensing 15, no. 10 (May 22, 2023): 2689. http://dx.doi.org/10.3390/rs15102689.

Abstract:
Automatically extracting 2D buildings from high-resolution remote sensing images is among the most popular research directions in the area of remote sensing information extraction. Semantic segmentation based on a CNN or transformer has greatly improved building extraction accuracy. A CNN is good at local feature extraction, but its ability to acquire global features is poor, which can lead to incorrect and missed detection of buildings. The advantage of transformer models lies in their global receptive field, but they do not perform well in extracting local features, resulting in poor local detail for building extraction. In this paper, we propose a CNN- and transformer-based dual-stream feature extraction network (DSFENet) for accurate building extraction. In the encoder, convolution extracts the local features for buildings, and the transformer realizes the global representation of the buildings. The effective combination of local and global features greatly enhances the network's feature extraction ability. We validated the capability of DSFENet on the Google Image dataset and the ISPRS Vaihingen dataset. DSFENet achieved the best accuracy performance compared to other state-of-the-art models.
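To make the dual-stream idea above concrete, here is a minimal, hypothetical PyTorch sketch of an encoder block with a convolutional branch for local features and a self-attention branch for global context, fused by a 1x1 convolution. The layer sizes and the fusion choice are illustrative assumptions, not the DSFENet architecture itself.

```python
import torch
import torch.nn as nn

class DualStreamBlock(nn.Module):
    def __init__(self, channels=64, heads=4):
        super().__init__()
        # Local (CNN) branch
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Global (self-attention) branch over the flattened spatial positions
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):                      # x: (B, C, H, W)
        local = self.local(x)                  # local features
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        glob, _ = self.attn(tokens, tokens, tokens)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local, glob], dim=1))

features = DualStreamBlock()(torch.randn(1, 64, 32, 32))
print(features.shape)  # torch.Size([1, 64, 32, 32])
```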
2. He, Haiqing, Yan Wei, Fuyang Zhou, and Hai Zhang. "A Deep Neural Network for Road Extraction with the Capability to Remove Foreign Objects with Similar Spectra." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1-2024 (May 10, 2024): 193–99. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-2024-193-2024.

Abstract:
Existing road extraction methods based on deep learning often struggle to distinguish ground objects that share similar spectral information, such as roads and buildings. Consequently, this study proposes a dual encoder-decoder deep neural network to address road extraction in complex backgrounds. In the feature extraction stage, the first encoder-decoder is designed to extract road features, and the second to extract building features. In the feature fusion stage, the road feature maps and building feature maps are first passed through the convolutional block attention module, which amplifies the features of different channels and extracts key information from diverse spatial positions, and are then fused by element-by-element subtraction. The resulting road features, constrained by the building features, preserve more precise road feature information. Experimental results demonstrate that the model successfully learns road and building features concurrently and effectively distinguishes between easily confused roads and buildings with similar spectral information, ultimately enhancing the accuracy of road extraction.
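As a rough illustration of the subtraction-based fusion step described above, the sketch below suppresses road-feature responses that also appear in the building-feature maps; tensor shapes are placeholders and the attention module mentioned in the abstract is omitted.

```python
import torch

# Placeholder feature maps standing in for the two encoder-decoder outputs.
road_feats = torch.randn(1, 64, 128, 128)      # from the road branch
building_feats = torch.randn(1, 64, 128, 128)  # from the building branch

# Element-by-element subtraction constrains road features by building features,
# suppressing responses shared with buildings (ReLU keeps only the positive residue).
constrained = torch.relu(road_feats - building_feats)
print(constrained.shape)  # torch.Size([1, 64, 128, 128])
```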
3. V., Dr Sellam. "Text Analysis Via Composite Feature Extraction." Journal of Advanced Research in Dynamical and Control Systems 24, no. 4 (March 31, 2020): 310–20. http://dx.doi.org/10.5373/jardcs/v12i4/20201445.

4. Park, Sun-Bae, and Do-Sik Yoo. "Priority feature extraction using projective transform feature extraction technique." Journal of Korean Institute of Intelligent Systems 34, no. 2 (April 30, 2024): 110–16. http://dx.doi.org/10.5391/jkiis.2024.34.2.110.

5. Ohl, Frank W., and Henning Scheich. "Feature extraction and feature interaction." Behavioral and Brain Sciences 21, no. 2 (April 1998): 278. http://dx.doi.org/10.1017/s0140525x98431170.

Abstract:
The idea of the orderly output constraint is compared with recent findings about the representation of vowels in the auditory cortex of an animal model for human speech sound processing (Ohl & Scheich 1997). The comparison allows a critical consideration of the idea of neuronal “feature extractors,” which is of relevance to the noninvariance problem in speech perception.
6. Wang, Ziyan. "Feature Extraction and Identification of Calligraphy Style Based on Dual Channel Convolution Network." Security and Communication Networks 2022 (May 16, 2022): 1–11. http://dx.doi.org/10.1155/2022/4187797.

Abstract:
To improve the effect of calligraphy style feature extraction and identification, this study proposes a calligraphy style feature extraction and identification technology based on a two-channel convolutional neural network and constructs an intelligent calligraphy style feature extraction and identification system. Moreover, this paper improves the C3D network model and retains two fully connected layers. In addition, by extracting the outline skeleton and stroke features of calligraphy characters, this paper calculates the feature weight and authenticity determination function and constructs an authenticity identification system. The experimental study shows that the calligraphy style feature extraction and identification system based on the dual-channel convolutional neural network proposed in this paper performs well in calligraphy style feature extraction and identification.
7. Zheng, Jian, Hongchun Qu, Zhaoni Li, Lin Li, Xiaoming Tang, and Fei Guo. "A novel autoencoder approach to feature extraction with linear separability for high-dimensional data." PeerJ Computer Science 8 (August 11, 2022): e1061. http://dx.doi.org/10.7717/peerj-cs.1061.

Abstract:
Feature extraction relies on sufficient information in the input data; however, the distribution of data in a high-dimensional space is too sparse to provide such information, and high dimensionality also hampers the search for features scattered across subspaces. Feature extraction from high-dimensional data is therefore a difficult task. To address this issue, this article proposes a novel autoencoder method using a Mahalanobis distance metric of rescaling transformation. The key idea is that, by applying the Mahalanobis distance metric of the rescaling transformation, the difference between the reconstructed distribution and the original distribution can be reduced, thereby improving the autoencoder's feature extraction ability. Results show that the proposed approach outperforms state-of-the-art methods in terms of both feature extraction accuracy and the linear separability of the extracted features. We argue that distance metric-based methods are more suitable than feature selection-based methods for extracting linearly separable features from high-dimensional data: in a high-dimensional space, evaluating feature similarity is easier than evaluating feature importance, so distance metric methods gain an advantage, although assessing feature importance remains more computationally efficient than evaluating feature similarity.
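For readers unfamiliar with the rescaling metric used above, a minimal NumPy sketch of the plain Mahalanobis distance follows; it shows only the generic distance computation, not the authors' autoencoder objective.

```python
import numpy as np

def mahalanobis(x, data):
    """Mahalanobis distance of sample x from the distribution of `data` (rows = samples)."""
    mu = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)       # sample covariance matrix
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
print(mahalanobis(X[0], X))                # distance of the first sample from the data cloud
```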
8. Taha, Mohammed A., Hanaa M. Ahmed, and Saif O. Husain. "Iris Features Extraction and Recognition based on the Scale Invariant Feature Transform (SIFT)." Webology 19, no. 1 (January 20, 2022): 171–84. http://dx.doi.org/10.14704/web/v19i1/web19013.

Abstract:
Iris biometric authentication is considered to be one of the most dependable biometric characteristics for identifying persons. In actuality, iris patterns have invariant, stable, and distinguishing properties for personal identification. Due to its excellent dependability in personal identification, iris recognition has received more attention. Current iris recognition methods give good results, especially when near-infrared (NIR) imaging and specific capture conditions are used with a cooperative user. On the other hand, images captured in the visible wavelength (VW) are affected by noise such as blurring, eye skin, occlusion, and reflection, which negatively affects the overall performance of recognition systems. For both NIR and visible-spectrum iris images, this article presents an effective iris feature extraction strategy based on the scale-invariant feature transform (SIFT) algorithm. The proposed method was tested on different databases, including CASIA v1 and ITTD v1 as NIR images and UBIRIS v1 as visible-light color images. The proposed system gave good accuracy rates compared to existing systems, achieving 96.2% on CASIA v1 and 96.4% on ITTD v1, while accuracy dropped to 84.0% on UBIRIS v1.
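The SIFT descriptor extraction underlying the pipeline above is available directly in OpenCV; the sketch below uses a placeholder image path and shows only the generic keypoint/descriptor step, without the iris segmentation or matching stages.

```python
import cv2

# "iris.png" is a placeholder path; any grayscale image works for the demo.
img = cv2.imread("iris.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "image not found"

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
print(len(keypoints), descriptors.shape)   # N keypoints, each with a 128-dim descriptor
```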
9. Soechting, John F., Weilai Song, and Martha Flanders. "Haptic Feature Extraction." Cerebral Cortex 16, no. 8 (October 12, 2005): 1168–80. http://dx.doi.org/10.1093/cercor/bhj058.

10. He, Dong-Chen, Li Wang, and Jean Guibert. "Texture feature extraction." Pattern Recognition Letters 6, no. 4 (September 1987): 269–73. http://dx.doi.org/10.1016/0167-8655(87)90087-0.


Dissertations / Theses on the topic "Feature extraction"

1. Goodman, Steve. "Feature extraction and classification." Thesis, University of Sunderland, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301872.

2. Liu, Raymond. "Feature extraction in classification." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/23634.

Abstract:
Feature extraction, or dimensionality reduction, is an essential part of many machine learning applications. The necessity for feature extraction stems from the curse of dimensionality and the high computational cost of manipulating high-dimensional data. In this thesis we focus on feature extraction for classification. There are several approaches, and we focus on two of them: the increasingly popular information-theoretic approach, and the classical distance-based, or variance-based, approach. Current algorithms for information-theoretic feature extraction are usually iterative. In contrast, PCA and LDA are popular examples of feature extraction techniques that can be solved by eigendecomposition and do not require an iterative procedure. We study the behaviour of an example of an iterative algorithm that maximises Kapur's quadratic mutual information by gradient ascent, and propose a new estimate of mutual information that can be maximised by closed-form eigendecomposition. This new technique is more computationally efficient than iterative algorithms, and its behaviour is more reliable and predictable than gradient ascent. Using a general framework of eigendecomposition-based feature extraction, we show a connection between information-theoretic and distance-based feature extraction. Using the distance-based approach, we study the effects of high input dimensionality and over-fitting on feature extraction, and propose a family of eigendecomposition-based algorithms that can solve this problem. We investigate the relationship between class discrimination and over-fitting, and show why the advantages of information-theoretic feature extraction become less relevant in high-dimensional spaces.
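As background for the eigendecomposition-based extractors the thesis builds on, here is a generic scikit-learn sketch of PCA (unsupervised, variance-based) and LDA (supervised); it does not implement the thesis's mutual-information estimate.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
X_pca = PCA(n_components=2).fit_transform(X)                            # variance-based projection
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)  # class-discriminative projection
print(X_pca.shape, X_lda.shape)   # both (150, 2): closed-form solutions, no iterative optimisation
```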
3. Dhanjal, Charanpal. "Sparse Kernel feature extraction." Thesis, University of Southampton, 2008. https://eprints.soton.ac.uk/64875/.

Abstract:
The presence of irrelevant features in training data is a significant obstacle for many machine learning tasks, since it can decrease accuracy, make it harder to understand the learned model, and increase computational and memory requirements. One approach to this problem is to extract appropriate features. General approaches such as Principal Components Analysis (PCA) are successful for a variety of applications; however, they can be improved upon by targeting feature extraction towards more specific problems. More recent work has been more focused and considers sparser formulations which potentially have improved generalisation. However, sparsity is not always efficiently implemented and frequently requires complex optimisation routines. Furthermore, one often does not have direct control over the sparsity of the solution. In this thesis, we address some of these problems, first by proposing a general framework for feature extraction which possesses a number of useful properties. The framework is based on Partial Least Squares (PLS), and one can choose a user-defined criterion to compute projection directions. It draws together a number of existing results and provides additional insights into several popular feature extraction methods. More specific feature extraction is considered for three objectives: matrix approximation, supervised feature extraction, and learning the semantics of two-viewed data. Computational and memory efficiency is prioritised, as well as sparsity in a direct manner and simple implementations. For the matrix approximation case, an analysis of different orthogonalisation methods is presented in terms of the optimal choice of projection direction. The analysis results in a new derivation for Kernel Feature Analysis (KFA) and the formation of two novel matrix approximation methods based on PLS. In the supervised case, we apply the general feature extraction framework to derive two new methods based on maximising covariance and alignment respectively. Finally, we outline a novel sparse variant of Kernel Canonical Correlation Analysis (KCCA) which approximates a cardinality-constrained optimisation. This method, as well as a variant which performs feature selection in one view, is applied to an enzyme function prediction case study.
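Since the framework above is based on Partial Least Squares, a generic scikit-learn PLS projection is sketched below for orientation; it is plain dense PLS, not the sparse kernel variants developed in the thesis.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=100)   # toy supervised target

pls = PLSRegression(n_components=3).fit(X, y)
X_scores = pls.transform(X)    # extracted latent features (projections onto the PLS directions)
print(X_scores.shape)          # (100, 3)
```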
4. Drangel, Andreas. "Feature extraction from images with augmented feature inputs." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-219073.

Abstract:
Machine learning models for visual recognition tasks such as image recognition have lately been a common research area. However, not much research has been done on extracting multiple features from the same input. This thesis investigates whether and how knowledge about one feature influences the performance of a model classifying another feature, as well as how the similarity and generality of the feature data distributions influence model performance. Incorporating augmentation inputs in the form of extra feature information in image models was found to yield different results depending on feature data distribution similarity and level of generality. Care must be taken when augmenting with features so that the augmenting feature does not become completely redundant or completely take over the learning process. Selecting reasonable augmentation inputs might yield desired synergy effects which influence model performance for the better.
5. Kerr, Dermot. "Autonomous Scale Invariant Feature Extraction." Thesis, University of Ulster, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.502896.

6. Alathari, Thamer. "Feature extraction in volumetric images." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/379936/.

Abstract:
The increased interest in volumetric images in recent years requires new feature extraction methods for 3D image interpretation. The aim of this study is to provide algorithms that aid the process of detecting and segmenting geometrical objects from volumetric images. Due to high computational expense, such methods have yet to be established in the volumetric space. Only a few have tackled this problem using shape descriptors and key-points of a specific shape; those techniques can detect complex shapes rather than simple geometric shapes because of their well-defined key-points. Simplifying the data in the volumetric image using a surface detector and surface curvature estimation preserves the important information about the shapes while reducing the computational expense. Whilst the literature describes only the template of the three-dimensional Sobel operator and not its basis, we present an extended version of the Sobel operator which considers the gradients in all directions to extract an object's surface, with a clear basis that allows for the development of larger operators. Surface curvature descriptors are usually based on geometrical properties of a segmented object rather than on the change in image intensity. In this work, a new approach is described to estimate the surface curvature of objects using local changes of image intensity. The new methods have shown reliable results on both synthetic and real volumetric images. The curvature and edge data are then processed in two new techniques for evidence gathering to extract a geometrical shape's main axis or centre point. The accumulated data are taken directly from voxels' geometrical locations rather than from the surface normals as proposed in the literature. The new approaches have been applied to detect a cylinder's axis and spherical shapes. A new 3D line detection based on origin shifting has also been introduced. Accumulating, at every voxel, the angles resulting from a coordinate transform from a Cartesian to a spherical system successfully indicates the existence of a 3D line in the volumetric image. A novel method based on an analogy to pressure is introduced to allow analysis/visualisation of objects as though they have been separated, when they were actually touching in the original volumetric images. The approach provides a new domain highlighting the connected areas between multiple touching objects. A mask is formed to detach the interconnected objects, and remarkable results are achieved. This is applied successfully to isolate coins within an image of a Roman hoard of coins, and other objects. The approach can fail to isolate objects when the space between them appears to be of similar density to the objects themselves. This motivated the development of an operator extended by high-pass filtering and morphological operations, which led to more accurate extraction of coins within the Roman hoard and to successful isolation of femurs in a database of scanned body images, enabling better isolation of hip components in replacement therapy.
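In the spirit of the three-dimensional Sobel operator discussed above, a minimal SciPy sketch of gradient-magnitude surface detection on a synthetic volume follows; the thesis's extended operator and curvature estimation are not reproduced.

```python
import numpy as np
from scipy import ndimage

vol = np.zeros((64, 64, 64))
vol[20:44, 20:44, 20:44] = 1.0            # synthetic cube "object" inside the volume

# Sobel gradients along each axis; their magnitude peaks on the cube's surface.
gx = ndimage.sobel(vol, axis=0)
gy = ndimage.sobel(vol, axis=1)
gz = ndimage.sobel(vol, axis=2)
surface = np.sqrt(gx**2 + gy**2 + gz**2)

print((surface > 0.5 * surface.max()).sum(), "surface voxels detected")
```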
7. Serce, Hakan. "Facial Feature Extraction Using Deformable Templates." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1224674/index.pdf.

Abstract:
The purpose of this study is to develop an automatic facial feature extraction system which is able to identify the detailed shape of the eyes, eyebrows, and mouth from facial images. The developed system not only extracts the location information of the features, but also estimates the parameters pertaining to the contours and parts of the features using a parametric deformable templates approach. In order to extract facial features, deformable models for each of the eye, eyebrow, and mouth are developed. The development steps of the geometry, imaging model, matching algorithms, and energy functions for each of these templates are presented in detail, along with the important implementation issues. In addition, an eigenfaces-based multi-scale face detection algorithm which incorporates standard facial proportions is implemented, so that when a face is detected the rough search regions for the facial features are readily available. The developed system is tested on the JAFFE (Japanese Female Facial Expression) database, the Yale Faces database, and the ORL (Olivetti Research Laboratory) face image database. The performance of each deformable template and of the face detection algorithm is discussed separately.
8. Sherrah, Jamie. "Automatic feature extraction for pattern recognition." Title page, contents and abstract only, 1998. http://web4.library.adelaide.edu.au/theses/09PH/09phs553.pdf.

Abstract:
Thesis (Ph. D.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 1999.
CD-ROM in back pocket comprises experimental results and executables. Includes bibliographical references (p. 251-261).
9. Ljumić, Elvis. "Image feature extraction using fuzzy morphology." Diss., Online access via UMI:, 2007.

Abstract:
Thesis (Ph. D.)--State University of New York at Binghamton, Department of Systems Science and Industrial Engineering, Thomas J. Watson School of Engineering and Applied Science, 2007.
Includes bibliographical references.
10. Daniušis, Povilas. "Feature extraction via dependence structure optimization." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20121001_093645-66010.

Abstract:
In many important real-world applications the initial representation of the data is inconvenient, or even prohibitive, for further analysis. For example, in image analysis, text analysis, and computational genetics, high-dimensional, massive, structural, incomplete, and noisy data sets are common. Therefore, feature extraction, or the revelation of informative features from the raw data, is one of the fundamental machine learning problems. Efficient feature extraction helps to understand the data and the process that generates it, and reduces costs for future measurements and data analysis. Representing structured data as a compact set of informative numeric features allows applying well-studied machine learning techniques instead of developing new ones. The dissertation focuses on supervised and semi-supervised feature extraction methods which optimize the dependence structure of features. The dependence is measured using the kernel estimator of the Hilbert-Schmidt norm of the covariance operator (the HSIC measure). Two dependence structures are investigated: in the first case we seek features which maximize the dependence on the dependent variable, and in the second we additionally minimize the mutual dependence of the features. Linear and kernel formulations of HBFE and HSCA are provided. Using a Laplacian regularization framework, we construct semi-supervised variants of HBFE and HSCA. The suggested algorithms were investigated experimentally using conventional and multilabel classification data; these experiments show improved classification performance compared with PCA or LDA.
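The HSIC dependence measure named above can be estimated in a few lines of NumPy; the sketch below is the standard biased kernel estimator with Gaussian kernels, not the thesis's HBFE/HSCA feature-extraction optimisation itself.

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """Gaussian (RBF) Gram matrix of the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC between paired samples X and Y (same number of rows)."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    K, L = rbf_gram(X, sigma), rbf_gram(Y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
print(hsic(X, X**2), hsic(X, rng.normal(size=(200, 3))))   # dependent pair scores much higher
```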

Books on the topic "Feature extraction"

1. Guyon, Isabelle, Masoud Nikravesh, Steve Gunn, and Lotfi A. Zadeh, eds. Feature Extraction. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/978-3-540-35488-8.

2. Liu, Huan, and Hiroshi Motoda, eds. Feature Extraction, Construction and Selection. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-5725-8.

3. Chaki, Jyotismita, and Nilanjan Dey. Image Color Feature Extraction Techniques. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-5761-3.

4. S, Aguado Alberto, ed. Feature extraction and image processing. Oxford: Newnes, 2002.

5. Taguchi, Y.-h. Unsupervised Feature Extraction Applied to Bioinformatics. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-22456-1.

6. Hu, Li, and Zhiguo Zhang, eds. EEG Signal Processing and Feature Extraction. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-9113-2.

7. Agarwal, Basant, and Namita Mittal. Prominent Feature Extraction for Sentiment Analysis. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-25343-5.

8. United States. National Aeronautics and Space Administration, ed. 3D feature extraction for unstructured grids. Washington, DC: National Aeronautics and Space Administration, 1996.

9. Rand, Robert S. Texture analysis and cartographic feature extraction. Fort Belvoir, Va: U.S. Army Corps of Engineers, Engineer Topographic Laboratories, 1985.

10. Samantaray, Aswini Kumar, and Amol D. Rahulkar. Feature Extraction in Medical Image Retrieval. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57279-1.

Book chapters on the topic "Feature extraction"

1. Torkkola, Kari, and Eugene Tuv. "Ensembles of Regularized Least Squares Classifiers for High-Dimensional Problems." In Feature Extraction, 297–313. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/978-3-540-35488-8_12.

2. Bengio, Yoshua, Olivier Delalleau, Nicolas Le Roux, Jean-François Paiement, Pascal Vincent, and Marie Ouimet. "Spectral Dimensionality Reduction." In Feature Extraction, 519–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/978-3-540-35488-8_28.

3. Momma, Michinari, and Kristin P. Bennett. "Constructing Orthogonal Latent Features for Arbitrary Loss." In Feature Extraction, 551–83. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/978-3-540-35488-8_29.

4. Sen, Soumya, Anjan Dutta, and Nilanjan Dey. "Feature Extraction." In Audio Processing and Speech Recognition, 45–66. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-6098-5_3.

5. Devroye, Luc, László Györfi, and Gábor Lugosi. "Feature Extraction." In A Probabilistic Theory of Pattern Recognition, 561–74. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-0711-5_32.

6. Pau, L. F. "Feature Extraction." In Computer Vision for Electronics Manufacturing, 265–76. Boston, MA: Springer US, 1990. http://dx.doi.org/10.1007/978-1-4613-0507-1_20.

7. Verma, Nishchal K., and Al Salour. "Feature Extraction." In Studies in Systems, Decision and Control, 121–73. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0512-6_4.

8. Owens, F. J. "Feature Extraction." In Signal Processing of Speech, 70–87. London: Macmillan Education UK, 1993. http://dx.doi.org/10.1007/978-1-349-22599-6_4.

9. Suchenwirth, Richard, Jun Guo, Irmfried Hartmann, Georg Hincha, Manfred Krause, and Zheng Zhang. "Feature Extraction." In Optical Recognition of Chinese Characters, 55–93. Wiesbaden: Vieweg+Teubner Verlag, 1989. http://dx.doi.org/10.1007/978-3-663-13999-7_4.

10. Remagnino, Paolo, Simon Mayo, Paul Wilkin, James Cope, and Don Kirkup. "Feature Extraction." In Computational Botany, 33–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2016. http://dx.doi.org/10.1007/978-3-662-53745-9_3.

Conference papers on the topic "Feature extraction"

1. Fields, Malcolm C., and D. C. Anderson. "Hybrid Feature Extraction for Machining Applications." In ASME 1993 Design Technical Conferences. American Society of Mechanical Engineers, 1993. http://dx.doi.org/10.1115/detc1993-0399.

Abstract:
A hybrid feature extraction algorithm for extracting cavity features for machining applications is presented. The algorithm operates on both a feature-based solid model of a part and its corresponding boundary representation solid model. Information available from both part representations is used, offering a more robust and efficient solution for some of the critical limitations of current feature extraction algorithms, such as verification of completeness, computation of cavity volumes, and maintenance of design information. The hybrid feature extraction algorithm combines the strengths of feature-based design and feature extraction approaches to linking design and manufacturing. Starting with a feature-based model of a part consisting of volumetric design features combined with a stock shape using set operations, the algorithm transforms this model into a feature model containing only machinable cavity features. The transformation involves computations on both the set theoretic feature model and its corresponding boundary representation solid model, and deals with the complexities of feature-feature intersections and protrusions. By combining the higher-level product information contained in the design feature model with the topological and geometric information in the boundary representation model, the algorithm supplements traditional boundary representation extraction with non-geometric product information, enabling the verification of completeness, and significantly aiding the computation of the appropriate feature volumes.
2. Dong, Jian, and Sreedharan Vijayan. "Feature Extraction Techniques by Optimal Volume Decomposition." In ASME 1996 Design Engineering Technical Conferences and Computers in Engineering Conference. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/96-detc/cie-1337.

Abstract:
The elements of Computer-Aided Manufacturing do not make full use of the part description stored in a CAD model because it exists in terms of low-level faces, edges, and vertices or primitive volumes related to the manufacturing planning task. Consequently, manufacturing planning still depends upon human expertise and input to interpret the part definition according to manufacturing needs. Feature-based technology is becoming an important tool to resolve this and other related problems. One approach is to design the part using features directly. Another approach is manufacturing feature extraction and recognition. Manufacturing feature extraction consists of searching the part description, recognizing cavity features, and extracting those features as solid volumes of material to be removed. Feature recognition involves raising this information to the level of part features which can be read by a process planning program. The feature extraction can be called optimal if the manufacturing cost of the component using those features is minimized. An optimized feature extraction technique using two powerful optimization methods, viz. Simulated Annealing and the Genetic Algorithm, is presented in this paper. This work has relevance in the areas of CAD/CAM linking, process planning, and manufacturability assessment.
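The paper above treats feature extraction as an optimisation problem solved with Simulated Annealing and a Genetic Algorithm; for orientation, a generic simulated-annealing skeleton is sketched below, with a toy cost function standing in for the manufacturing-cost model.

```python
import math
import random

def simulated_annealing(cost, neighbour, state, t0=1.0, cooling=0.995, steps=5000):
    """Generic simulated annealing: accept worse states with probability exp(-delta/T)."""
    best = current = state
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling                       # geometric cooling schedule
    return best

# Toy usage: minimise (x - 3)^2 over the real line.
print(simulated_annealing(lambda x: (x - 3) ** 2,
                          lambda x: x + random.uniform(-0.5, 0.5),
                          state=0.0))
```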
3. Putri, Divi Galih Prasetyo, and Daniel Oranova Siahaan. "Software feature extraction using infrequent feature extraction." In 2016 6th International Annual Engineering Seminar (InAES). IEEE, 2016. http://dx.doi.org/10.1109/inaes.2016.7821927.

4. Castano, Fernando, Daniel M. Gaines, and Caroline C. Hayes. "Improving Feature Extraction Through Closely-Coupled Integration of Fixture Analysis." In ASME 1996 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/imece1996-0002.

Abstract:
This paper presents a view of feature extraction as a process that involves consideration of the manufacturing tools, processes, and fixtures to be used. This view is implemented in MEDIATOR. Some feature extractors use almost entirely geometric considerations [29]; others use process and tool information, which is often implicitly encoded in the data structures, to help guide the feature extraction process. Our view of feature extraction is similar to the second approach in that process and tool information is also used. However, we take this approach one step further and also use fixture information to determine features. The reason behind this is that a feature is considered relevant because there is a method for producing it in the task domain. The set of possible tools, tool motions, and fixtures that can be used is generated during feature extraction in MEDIATOR, and the selection of specific fixtures and details is done later in process planning. Advantages of this method include the ability to use task information to strongly constrain the search for valid features, an increased likelihood that the features recognized will be manufacturable and directly usable by the process planner, and ease of modifying the feature extractor.
5. Chen, Jian, Y. Kusurkar, and Deborah E. Silver. "Distributed feature extraction." In Electronic Imaging 2002, edited by Robert F. Erbacher, Philip C. Chen, Matti Groehn, Jonathan C. Roberts, and Craig M. Wittenbrink. SPIE, 2002. http://dx.doi.org/10.1117/12.458786.

6. Sharma, Divya, Shashikant Patil, Prarthna Patel, and Utsavi Pathak. "Optical feature extraction." In ICWET '10: International Conference and Workshop on Emerging Trends in Technology. New York, NY, USA: ACM, 2010. http://dx.doi.org/10.1145/1741906.1742175.

7. Xu, Xiangxiang, and Lizhong Zheng. "Multivariate Feature Extraction." In 2022 58th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2022. http://dx.doi.org/10.1109/allerton49937.2022.9929401.

8. Ganter, M. A., and P. A. Skoglund. "Feature Extraction for Casting Core Development." In ASME 1991 Design Technical Conferences. American Society of Mechanical Engineers, 1991. http://dx.doi.org/10.1115/detc1991-0074.

Abstract:
Feature extraction techniques are presented for the generation of casting core patterns from a boundary representation (B-Rep) solid model. Techniques are presented which would allow for automatic extraction of three classes of core features (internal voids, single and multi-surface holes, and boundary perturbations). The task of extracting casting cores from solid models involves recognizing a collection of entities (i.e. slots, bosses, undercut surfaces, local and global concavities, etc.) from the set of lower level entities (i.e. the B-Rep structure). To this end, a combination of solid modeling B-Rep and graph structures and their associated methods will be used for casting core development. Appropriate local features are identified and extracted from the original object, and are grouped into one or more new object(s) (termed a core-object). If the core-object is multiply connected (i.e. composed of multiple objects), it is graph-separated into global feature objects. Each of these global feature objects represents a core in the final pattern. Lastly, the geometry of the original part is augmented to add core prints where core geometries were extracted. The core print, as currently developed, combines the extracted core geometry and its convex hull.
9. Wong, Hiu-Man, Xingjian Chen, Hiu-Hin Tam, Jiecong Lin, Shixiong Zhang, Shankai Yan, Xiangtao Li, and Ka-Chun Wong. "Feature Selection and Feature Extraction: Highlights." In ISMSI 2021: 2021 5th International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461598.3461606.

10. An, Qian, and Caroline C. Hayes. "Feature Conglomerator: Helping Users Identify Cost Effective Feature Interpretations." In ASME 2003 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/detc2003/dfm-48156.

Abstract:
Feature Conglomerator is a manufacturing planning tool for use in 3- and 5-axis prismatic CNC machining domains. Feature extractors typically have many choices (i.e., interpretations) of how they may subdivide complex volumes into component features. Unfortunately, it may be difficult for the feature extractor to identify the interpretation which will result in the best manufacturing plan. The basic assumption behind this work is that much of the information needed to identify the best feature interpretation is not available during feature extraction. However, it does become available during manufacturing planning. Thus, if the initial features identified can be adjusted and refined during manufacturing planning, interpretation decisions can be better informed. This approach can be contrasted with that of Chang [1], which also refines features during manufacturing planning, in that this work is based on an information-needs analysis to determine the most strategic time at which to make feature refinement decisions. This approach further blurs the line between feature extraction and process planning to make better-informed feature interpretation decisions.

Reports on the topic "Feature extraction"

1. Anderson, Dana Z., Gan Zhou, and Germano Montemezzani. Temporal Feature Extraction in Photorefractive Resonators. Fort Belvoir, VA: Defense Technical Information Center, December 1994. http://dx.doi.org/10.21236/ada292908.

2. Intrator, Nathan. A Neural Network for Feature Extraction. Fort Belvoir, VA: Defense Technical Information Center, March 1990. http://dx.doi.org/10.21236/ada223059.

3. Principe, Jose C. Feature Extraction Using an Information Theoretic Framework. Fort Belvoir, VA: Defense Technical Information Center, December 1999. http://dx.doi.org/10.21236/ada397483.

4. Downing, D. J., V. Fedorov, W. F. Lawkins, M. D. Morris, and G. Ostrouchov. Large datasets: Segmentation, feature extraction, and compression. Office of Scientific and Technical Information (OSTI), July 1996. http://dx.doi.org/10.2172/366463.

5. Farrar, Charles, Mayuko Nishio, Francois Hemez, Chris Stull, Gyuhae Park, Phil Cornwell, Eloi Figueiredo, D. J. Luscher, and Keith Worden. Feature Extraction for Structural Dynamics Model Validation. Office of Scientific and Technical Information (OSTI), January 2016. http://dx.doi.org/10.2172/1235219.

6. Hunt, Galen C., and Randal C. Nelson. Lineal Feature Extraction by Parallel Stick Growing. Fort Belvoir, VA: Defense Technical Information Center, June 1996. http://dx.doi.org/10.21236/ada329864.

7. Jagler, Karl B. Wavelet Signal Processing for Transient Feature Extraction. Fort Belvoir, VA: Defense Technical Information Center, March 1992. http://dx.doi.org/10.21236/ada250519.

8. Wickerhauser, M. V., G. L. Weiss, and R. R. Coifman. Feature Extraction by Best-Basis and Wavelet Methods. Fort Belvoir, VA: Defense Technical Information Center, January 1995. http://dx.doi.org/10.21236/ada299572.

9. Haimes, Robert. Feature Extraction from Parallel/Distributed Transient CFD Solutions. Fort Belvoir, VA: Defense Technical Information Center, December 2000. http://dx.doi.org/10.21236/ada391930.

10. Carin, Lawrence. ICA Feature Extraction and SVM Classification of FLIR Imagery. Fort Belvoir, VA: Defense Technical Information Center, September 2005. http://dx.doi.org/10.21236/ada441506.