Academic literature on the topic 'FEATURE ENCODING'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'FEATURE ENCODING.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "FEATURE ENCODING"

1. Lathroum, Amanda. "Feature encoding by neural nets." Phonology 6, no. 2 (1989): 305–16. http://dx.doi.org/10.1017/s0952675700001044.

Abstract: While the use of categorical features seems to be the appropriate way to express sound patterns within languages, these features do not seem adequate to describe the sounds actually produced by speakers. Examination of the speech signal fails to reveal objective, discrete phonological segments. Similarly, segments are not directly observable in the flow of articulatory movements, and vary slightly according to an individual speaker's articulatory strategies. Because of the lack of a reliable relationship between segments and speech sounds, a plausible transition from feature representation to …
2. Jaswal, Snehlata, and Robert H. Logie. "Configural encoding in visual feature binding." Journal of Cognitive Psychology 23, no. 5 (2011): 586–603. http://dx.doi.org/10.1080/20445911.2011.570256.

3. Wu, Pengxiang, Chao Chen, Jingru Yi, and Dimitris Metaxas. "Point Cloud Processing via Recurrent Set Encoding." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5441–49. http://dx.doi.org/10.1609/aaai.v33i01.33015441.

Abstract: We present a new permutation-invariant network for 3D point cloud processing. Our network is composed of a recurrent set encoder and a convolutional feature aggregator. Given an unordered point set, the encoder firstly partitions its ambient space into parallel beams. Points within each beam are then modeled as a sequence and encoded into subregional geometric features by a shared recurrent neural network (RNN). The spatial layout of the beams is regular, and this allows the beam features to be further fed into an efficient 2D convolutional neural network (CNN) for hierarchical feature aggrega…
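
The pipeline described in this abstract can be illustrated with a short, hypothetical PyTorch sketch (not the authors' code): the xy-plane is split into a grid of beams, the points in each beam are ordered along z and summarized by a shared GRU, and the resulting map of beam features is aggregated by a small 2D CNN. The grid size, GRU width, and the z-ordering rule are assumptions made only for this example.

import torch
import torch.nn as nn

class RecurrentSetEncoder(nn.Module):
    # Partition the xy-plane into a grid x grid set of beams, encode the points of
    # each beam (ordered along z) with a shared GRU, then aggregate the resulting
    # 2D map of beam features with a small CNN.
    def __init__(self, grid=8, hidden=64):
        super().__init__()
        self.grid = grid
        self.rnn = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
        self.cnn = nn.Sequential(nn.Conv2d(hidden, 128, 3, padding=1),
                                 nn.ReLU(),
                                 nn.AdaptiveMaxPool2d(1))

    def forward(self, points):                               # points: (N, 3), one cloud
        p = points - points.min(0).values
        p = p / (p.max(0).values + 1e-8)                     # normalise into [0, 1]
        cell = (p[:, :2] * self.grid).long().clamp(max=self.grid - 1)
        beam_id = cell[:, 0] * self.grid + cell[:, 1]
        feats = torch.zeros(self.grid * self.grid, self.rnn.hidden_size)
        for b in beam_id.unique():                           # one sequence per occupied beam
            seq = points[beam_id == b]
            seq = seq[seq[:, 2].argsort()]                   # order the beam's points along z
            _, h = self.rnn(seq.unsqueeze(0))                # shared RNN over the point sequence
            feats[b] = h.squeeze()
        grid_map = feats.T.reshape(1, -1, self.grid, self.grid)
        return self.cnn(grid_map).flatten(1)                 # (1, 128) cloud descriptor

print(RecurrentSetEncoder()(torch.rand(1024, 3)).shape)      # torch.Size([1, 128])
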
4. Eurich, Christian W., and Stefan D. Wilke. "Multidimensional Encoding Strategy of Spiking Neurons." Neural Computation 12, no. 7 (2000): 1519–29. http://dx.doi.org/10.1162/089976600300015240.

Abstract: Neural responses in sensory systems are typically triggered by a multitude of stimulus features. Using information theory, we study the encoding accuracy of a population of stochastically spiking neurons characterized by different tuning widths for the different features. The optimal encoding strategy for representing one feature most accurately consists of narrow tuning in the dimension to be encoded, to increase the single-neuron Fisher information, and broad tuning in all other dimensions, to increase the number of active neurons. Extremely narrow tuning without sufficient receptive field o…
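
The tuning-width argument can be checked numerically with a small, hypothetical NumPy sketch (assuming separable Gaussian tuning curves and Poisson spiking, not the paper's exact model or parameters): for Poisson neurons the Fisher information about a feature x is the sum over neurons of f_i'(x)^2 / f_i(x), and narrowing the tuning along the encoded dimension while broadening it along the other dimension increases that sum.

import numpy as np

def fisher_x(sigma_x, sigma_y, r_max=10.0, spacing=0.1):
    # Population of Poisson neurons with separable Gaussian tuning curves whose
    # centres tile a 2D feature space; Fisher information about x at the origin.
    centres = np.arange(-5.0, 5.0, spacing)
    cx, cy = np.meshgrid(centres, centres)
    f = r_max * np.exp(-(cx ** 2 / (2 * sigma_x ** 2) + cy ** 2 / (2 * sigma_y ** 2)))
    dfdx = f * cx / sigma_x ** 2                  # derivative of the mean rate w.r.t. x
    return np.sum(dfdx ** 2 / (f + 1e-12))        # J_x = sum_i f_i'(x)^2 / f_i(x)

print(fisher_x(sigma_x=0.2, sigma_y=2.0))   # narrow along x, broad along y: largest J_x
print(fisher_x(sigma_x=0.2, sigma_y=0.2))   # narrow along both: few active neurons
print(fisher_x(sigma_x=2.0, sigma_y=2.0))   # broad along both: shallow tuning slopes
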
5. Shinomiya, Yuki, and Yukinobu Hoshino. "A Quantitative Quality Measurement for Codebook in Feature Encoding Strategies." Journal of Advanced Computational Intelligence and Intelligent Informatics 21, no. 7 (2017): 1232–39. http://dx.doi.org/10.20965/jaciii.2017.p1232.

Abstract: Nowadays, a feature encoding strategy is a general approach to represent a document, an image or audio as a feature vector. In image recognition problems, this approach treats an image as a set of partial feature descriptors. The set is then converted to a feature vector based on basis vectors called codebook. This paper focuses on a prior probability, which is one of codebook parameters and analyzes dependency for the feature encoding. In this paper, we conducted the following two experiments, analysis of prior probabilities in state-of-the-art encodings and control of prior probabilities. Th…
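
As a toy illustration of the codebook-based encoding this abstract refers to (a bag-of-visual-words style sketch with made-up data and sizes, not the paper's method), local descriptors are clustered into a codebook, each descriptor of an image is assigned to its nearest codeword, and the normalised histogram of assignments, loosely the empirical codeword probabilities, becomes the image's fixed-length feature vector.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train_descriptors = rng.normal(size=(5000, 64))    # stand-in for local descriptors (e.g. SIFT)
codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(train_descriptors)

def encode(descriptors, codebook):
    # Assign each descriptor to its nearest codeword and return the normalised
    # histogram of assignments as the image-level feature vector.
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

image_descriptors = rng.normal(size=(300, 64))     # descriptors from one image
print(encode(image_descriptors, codebook).shape)   # (32,) fixed-length vector
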
6. Ronran, Chirawan, Seungwoo Lee, and Hong Jun Jang. "Delayed Combination of Feature Embedding in Bidirectional LSTM CRF for NER." Applied Sciences 10, no. 21 (2020): 7557. http://dx.doi.org/10.3390/app10217557.

Abstract: Named Entity Recognition (NER) plays a vital role in natural language processing (NLP). Currently, deep neural network models have achieved significant success in NER. Recent advances in NER systems have introduced various feature selections to identify appropriate representations and handle Out-Of-the-Vocabulary (OOV) words. After selecting the features, they are all concatenated at the embedding layer before being fed into a model to label the input sequences. However, when concatenating the features, information collisions may occur and this would cause the limitation or degradation of the …
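
The embedding-layer concatenation the abstract discusses corresponds to the standard setup in the hypothetical PyTorch sketch below (the CRF layer is replaced by a plain tag projection, and the dimensions, vocabulary size, and precomputed character features are assumptions for the example): word-level and character-level representations are concatenated per token before the bidirectional LSTM.

import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    # Word embeddings and (precomputed) character-level features are concatenated
    # at the embedding layer and fed to a BiLSTM; a linear layer stands in for the
    # CRF and produces per-token emission scores.
    def __init__(self, vocab=10000, word_dim=100, char_dim=30, hidden=128, n_tags=9):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, word_dim)
        self.lstm = nn.LSTM(word_dim + char_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.to_tags = nn.Linear(2 * hidden, n_tags)

    def forward(self, word_ids, char_feats):
        x = torch.cat([self.word_emb(word_ids), char_feats], dim=-1)   # embedding-layer concatenation
        h, _ = self.lstm(x)
        return self.to_tags(h)                                         # (batch, seq_len, n_tags)

model = BiLSTMTagger()
words = torch.randint(0, 10000, (2, 15))     # a batch of 2 sentences, 15 tokens each
chars = torch.rand(2, 15, 30)                # stand-in character-level features
print(model(words, chars).shape)             # torch.Size([2, 15, 9])
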
7. James, Melissa S., Stuart J. Johnstone, and William G. Hayward. "Event-Related Potentials, Configural Encoding, and Feature-Based Encoding in Face Recognition." Journal of Psychophysiology 15, no. 4 (2001): 275–85. http://dx.doi.org/10.1027//0269-8803.15.4.275.

Abstract: The effects of manipulating configural and feature information on the face recognition process were investigated by recording event-related potentials (ERPs) from five electrode sites (Fz, Cz, Pz, T5, T6), while 17 European subjects performed an own-race and other-race face recognition task. A series of upright faces were presented in a study phase, followed by a test phase where subjects indicated whether inverted and upright faces were studied or novel via a button press response. An inversion effect, illustrating the disruption of upright configural information, was reflected in ac…
8. Rao, Vibha S., and P. Ramesh Naidu. "Periocular and Iris Feature Encoding - A Survey." International Journal of Innovative Research in Computer and Communication Engineering 3, no. 1 (2015): 368–74. http://dx.doi.org/10.15680/ijircce.2015.0301023.

9. Huo, Lu, and Leijie Zhang. "Combined feature compression encoding in image retrieval." Turkish Journal of Electrical Engineering & Computer Sciences 27, no. 3 (2019): 1603–18. http://dx.doi.org/10.3906/elk-1803-3.

10. Lee, Hui-Jin, Ki-Sang Hong, Henry Kang, and Seungyong Lee. "Photo Aesthetics Analysis via DCNN Feature Encoding." IEEE Transactions on Multimedia 19, no. 8 (2017): 1921–32. http://dx.doi.org/10.1109/tmm.2017.2687759.
