
Journal articles on the topic "FEATURE ENCODING"


Consult the top 50 journal articles for your research on the topic "FEATURE ENCODING".

Next to each source in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse journal articles across many disciplines and compile your bibliography correctly.

1

Lathroum, Amanda. "Feature encoding by neural nets." Phonology 6, no. 2 (August 1989): 305–16. http://dx.doi.org/10.1017/s0952675700001044.

Abstract:
While the use of categorical features seems to be the appropriate way to express sound patterns within languages, these features do not seem adequate to describe the sounds actually produced by speakers. Examination of the speech signal fails to reveal objective, discrete phonological segments. Similarly, segments are not directly observable in the flow of articulatory movements, and vary slightly according to an individual speaker's articulatory strategies. Because of the lack of a reliable relationship between segments and speech sounds, a plausible transition from feature representation to the actual acoustic signal has proven elusive. This paper utilises a theory of information processing, known as PARALLEL DISTRIBUTED PROCESSING (PDP) NETWORKS (also called neural networks), to propose a model which begins to express this transition: translating the feature bundles indicated in a broad phonetic transcription into continuous, potentially variable articulator behaviour.
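A concrete, hedged illustration of the mapping this abstract argues for (discrete feature bundles in, continuous articulator parameters out): the sketch below trains a tiny feed-forward network in NumPy. The feature inventory, articulator dimensions, target values and network sizes are invented for illustration and are not taken from the paper's PDP model.

# Minimal sketch (not the paper's model): a small feed-forward network mapping binary
# phonological feature bundles to continuous articulator parameters. All data are toy values.
import numpy as np

rng = np.random.default_rng(0)

# Toy inventory: [voiced, nasal, labial, high] -> [jaw opening, lip rounding] (arbitrary units)
X = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 0, 1]], dtype=float)
Y = np.array([[0.6, 0.8],
              [0.4, 0.7],
              [0.9, 0.1],
              [0.8, 0.2]])

W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)   # sigmoid hidden layer
W2 = rng.normal(scale=0.5, size=(8, 2)); b2 = np.zeros(2)   # linear output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):                     # plain gradient descent on squared error
    H = sigmoid(X @ W1 + b1)
    P = H @ W2 + b2
    err = P - Y
    dW2, db2 = H.T @ err, err.sum(0)
    dH = err @ W2.T * H * (1 - H)
    dW1, db1 = X.T @ dH, dH.sum(0)
    W1 -= lr * dW1 / len(X); b1 -= lr * db1 / len(X)
    W2 -= lr * dW2 / len(X); b2 -= lr * db2 / len(X)

print(np.round(sigmoid(X @ W1 + b1) @ W2 + b2, 2))  # close to Y after training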
2

Jaswal, Snehlata, and Robert H. Logie. "Configural encoding in visual feature binding." Journal of Cognitive Psychology 23, no. 5 (August 2011): 586–603. http://dx.doi.org/10.1080/20445911.2011.570256.

3

Wu, Pengxiang, Chao Chen, Jingru Yi, and Dimitris Metaxas. "Point Cloud Processing via Recurrent Set Encoding." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5441–49. http://dx.doi.org/10.1609/aaai.v33i01.33015441.

Abstract:
We present a new permutation-invariant network for 3D point cloud processing. Our network is composed of a recurrent set encoder and a convolutional feature aggregator. Given an unordered point set, the encoder firstly partitions its ambient space into parallel beams. Points within each beam are then modeled as a sequence and encoded into subregional geometric features by a shared recurrent neural network (RNN). The spatial layout of the beams is regular, and this allows the beam features to be further fed into an efficient 2D convolutional neural network (CNN) for hierarchical feature aggregation. Our network is effective at spatial feature learning, and competes favorably with the state-of-the-arts (SOTAs) on a number of benchmarks. Meanwhile, it is significantly more efficient compared to the SOTAs.
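A rough sketch of the two-stage design this abstract outlines, under several simplifying assumptions (PyTorch, coordinates normalized to [-1, 1], a fixed beam grid, padding/truncation to a fixed number of points per beam); layer sizes and the classification head are illustrative, not the paper's architecture.

# Sketch of the beam-partition -> shared RNN -> 2D CNN idea described above (illustrative only).
import torch
import torch.nn as nn

def beam_partition(points, grid=8, max_pts=16):
    """Bucket an (N, 3) point set into a grid x grid array of 'beams' along x-y,
    sorting each beam's points by z and padding/truncating to max_pts."""
    beams = torch.zeros(grid, grid, max_pts, 3)
    xy = ((points[:, :2] + 1.0) / 2.0 * grid).long().clamp(0, grid - 1)  # assumes coords in [-1, 1]
    for i in range(grid):
        for j in range(grid):
            mask = (xy[:, 0] == i) & (xy[:, 1] == j)
            pts = points[mask]
            if len(pts) > 0:
                pts = pts[pts[:, 2].argsort()][:max_pts]
                beams[i, j, :len(pts)] = pts
    return beams

class RecurrentSetEncoder(nn.Module):
    def __init__(self, grid=8, hidden=32, classes=10):
        super().__init__()
        self.grid = grid
        self.rnn = nn.GRU(3, hidden, batch_first=True)          # one RNN shared across beams
        self.cnn = nn.Sequential(                                # aggregates the beam-feature grid
            nn.Conv2d(hidden, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, classes))

    def forward(self, points):                                   # points: (N, 3)
        beams = beam_partition(points, self.grid)                # (g, g, T, 3)
        seqs = beams.view(self.grid * self.grid, -1, 3)          # one sequence per beam
        _, h = self.rnn(seqs)                                    # h: (1, g*g, hidden)
        fmap = h.squeeze(0).view(self.grid, self.grid, -1).permute(2, 0, 1)
        return self.cnn(fmap.unsqueeze(0))                       # (1, classes)

logits = RecurrentSetEncoder()(torch.rand(1024, 3) * 2 - 1)
print(logits.shape)  # torch.Size([1, 10])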
4

Eurich, Christian W., and Stefan D. Wilke. "Multidimensional Encoding Strategy of Spiking Neurons." Neural Computation 12, no. 7 (July 1, 2000): 1519–29. http://dx.doi.org/10.1162/089976600300015240.

Abstract:
Neural responses in sensory systems are typically triggered by a multitude of stimulus features. Using information theory, we study the encoding accuracy of a population of stochastically spiking neurons characterized by different tuning widths for the different features. The optimal encoding strategy for representing one feature most accurately consists of narrow tuning in the dimension to be encoded, to increase the single-neuron Fisher information, and broad tuning in all other dimensions, to increase the number of active neurons. Extremely narrow tuning without sufficient receptive field overlap will severely worsen the coding. This implies the existence of an optimal tuning width for the feature to be encoded. Empirically, only a subset of all stimulus features will normally be accessible. In this case, relative encoding errors can be calculated that yield a criterion for the function of a neural population based on the measured tuning curves.
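A compact way to see the trade-off described here, written for independent Poisson neurons with Gaussian tuning curves (a standard textbook setting, not equations copied from the paper): neuron i responds to a D-dimensional stimulus x with mean rate

f_i(\mathbf{x}) = F \exp\left(-\sum_{d=1}^{D} \frac{(x_d - c_{i,d})^2}{2\sigma_d^2}\right),
\qquad
J_1(\mathbf{x}) = \sum_i \frac{1}{f_i(\mathbf{x})} \left(\frac{\partial f_i(\mathbf{x})}{\partial x_1}\right)^2,

where J_1 is the population Fisher information about feature x_1. Shrinking \sigma_1 increases each derivative term, while broadening \sigma_2, \dots, \sigma_D keeps more neurons responsive, which is the strategy the abstract describes; with finitely many neurons, making \sigma_1 too small leaves gaps between receptive fields, so J_1 peaks at an intermediate tuning width.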
5

Shinomiya, Yuki, and Yukinobu Hoshino. "A Quantitative Quality Measurement for Codebook in Feature Encoding Strategies." Journal of Advanced Computational Intelligence and Intelligent Informatics 21, no. 7 (November 20, 2017): 1232–39. http://dx.doi.org/10.20965/jaciii.2017.p1232.

Abstract:
Nowadays, a feature encoding strategy is a general approach to represent a document, an image or audio as a feature vector. In image recognition problems, this approach treats an image as a set of partial feature descriptors. The set is then converted to a feature vector based on basis vectors called codebook. This paper focuses on a prior probability, which is one of codebook parameters and analyzes dependency for the feature encoding. In this paper, we conducted the following two experiments, analysis of prior probabilities in state-of-the-art encodings and control of prior probabilities. The first experiment investigates the distribution of prior probabilities and compares recognition performances of recent techniques. The results suggest that recognition performance probably depends on the distribution of prior probabilities. The second experiment tries further statistical analysis by controlling the distribution of prior probabilities. The results show a strong negative linear relationship between a standard deviation of prior probabilities and recognition accuracy. From these experiments, the quality of codebook used for feature encoding can be quantitatively measured, and recognition performances can be improved by optimizing codebook. Besides, the codebook is created at an offline step. Therefore, optimizing codebook does not require any additional computational cost for practical applications.
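To make the measured quantity concrete, the following sketch estimates codeword prior probabilities for a k-means codebook from hard descriptor assignments and reports their standard deviation, the statistic the abstract relates (negatively) to recognition accuracy. The random descriptors, codebook size, and use of scikit-learn are assumptions for illustration.

# Sketch: empirical prior probability of each codeword and the spread of those priors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(5000, 64))        # stand-in for local feature descriptors

k = 256
codebook = KMeans(n_clusters=k, n_init=4, random_state=0).fit(descriptors)

# Hard-assign every descriptor to its nearest codeword and count the assignments.
assignments = codebook.predict(descriptors)
counts = np.bincount(assignments, minlength=k)
priors = counts / counts.sum()                   # empirical prior probability of each codeword

print("std of priors:", priors.std())            # the spread the abstract links to accuracy
print("entropy (bits):", -(priors[priors > 0] * np.log2(priors[priors > 0])).sum())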
6

Ronran, Chirawan, Seungwoo Lee, and Hong Jun Jang. "Delayed Combination of Feature Embedding in Bidirectional LSTM CRF for NER." Applied Sciences 10, no. 21 (October 27, 2020): 7557. http://dx.doi.org/10.3390/app10217557.

Abstract:
Named Entity Recognition (NER) plays a vital role in natural language processing (NLP). Currently, deep neural network models have achieved significant success in NER. Recent advances in NER systems have introduced various feature selections to identify appropriate representations and handle Out-Of-the-Vocabulary (OOV) words. After selecting the features, they are all concatenated at the embedding layer before being fed into a model to label the input sequences. However, when concatenating the features, information collisions may occur and this would cause the limitation or degradation of the performance. To overcome the information collisions, some works tried to directly connect some features to latter layers, which we call the delayed combination and show its effectiveness by comparing it to the early combination. As feature encodings for input, we selected the character-level Convolutional Neural Network (CNN) or Long Short-Term Memory (LSTM) word encoding, the pre-trained word embedding, and the contextual word embedding and additionally designed CNN-based sentence encoding using a dictionary. These feature encodings are combined at early or delayed position of the bidirectional LSTM Conditional Random Field (CRF) model according to each feature’s characteristics. We evaluated the performance of this model on the CoNLL 2003 and OntoNotes 5.0 datasets using the F1 score and compared the delayed combination model with our own implementation of the early combination as well as the previous works. This comparison convinces us that our delayed combination is more effective than the early one and also highly competitive.
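The difference between early and delayed combination can be shown in a few lines; the sketch below keeps only the concatenation points around a BiLSTM and omits the CRF layer, the character/contextual encoders, and the dictionary-based sentence encoding, with all sizes chosen arbitrarily.

# Sketch of early vs. delayed feature combination around a BiLSTM tagger
# (CRF layer and the paper's real feature encoders omitted; sizes are arbitrary).
import torch
import torch.nn as nn

B, T = 2, 7                      # batch, sentence length
word = torch.randn(B, T, 100)    # e.g. pre-trained word embeddings
extra = torch.randn(B, T, 20)    # e.g. an additional feature encoding per token

lstm_early = nn.LSTM(120, 64, batch_first=True, bidirectional=True)
lstm_late  = nn.LSTM(100, 64, batch_first=True, bidirectional=True)
tag_early  = nn.Linear(128, 9)        # 9 = number of NER labels, for illustration
tag_late   = nn.Linear(128 + 20, 9)

# Early combination: concatenate all features at the embedding layer.
h_early, _ = lstm_early(torch.cat([word, extra], dim=-1))
scores_early = tag_early(h_early)

# Delayed combination: feed only word embeddings to the BiLSTM and attach the
# extra feature encoding to its output, just before the label scores.
h_late, _ = lstm_late(word)
scores_late = tag_late(torch.cat([h_late, extra], dim=-1))

print(scores_early.shape, scores_late.shape)   # both (B, T, 9)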
7

James, Melissa S., Stuart J. Johnstone, and William G. Hayward. "Event-Related Potentials, Configural Encoding, and Feature-Based Encoding in Face Recognition." Journal of Psychophysiology 15, no. 4 (October 2001): 275–85. http://dx.doi.org/10.1027//0269-8803.15.4.275.

Abstract:
The effects of manipulating configural and feature information on the face recognition process were investigated by recording event-related potentials (ERPs) from five electrode sites (Fz, Cz, Pz, T5, T6), while 17 European subjects performed an own-race and other-race face recognition task. A series of upright faces were presented in a study phase, followed by a test phase where subjects indicated whether inverted and upright faces were studied or novel via a button press response. An inversion effect, illustrating the disruption of upright configural information, was reflected in accuracy measures and in greater lateral N2 amplitude to inverted faces, suggesting that structural encoding is harder for inverted faces. An own-race advantage was found, which may reflect the use of configural encoding for the more frequently experienced own-race faces, and feature-based encoding for the less familiar other-race faces, and was reflected in accuracy measures and ERP effects. The midline N2 was larger to configurally encoded faces (i.e., own-race and upright), possibly suggesting configural encoding involves more complex processing than feature-based encoding. An N400-like component was sensitive to feature manipulations, with greater amplitude to other-race than own-race faces and to inverted than upright faces. This effect was interpreted as reflecting increased activation of incompatible representations activated by a feature-based strategy used in processing of other-race and inverted faces. The late positive complex was sensitive to configural manipulation with larger amplitude to other-race than own-race faces, and was interpreted as reflecting the updating of an own-race norm used in face recognition, to incorporate other-race information.
8

S RAO, VIBHA, and P. RAMESH NAIDU. "Periocular and Iris Feature Encoding - A Survey." International Journal of Innovative Research in Computer and Communication Engineering 03, no. 01 (January 30, 2015): 368–74. http://dx.doi.org/10.15680/ijircce.2015.0301023.

9

HUO, Lu, and Leijie ZHANG. "Combined feature compression encoding in image retrieval." TURKISH JOURNAL OF ELECTRICAL ENGINEERING & COMPUTER SCIENCES 27, no. 3 (May 15, 2019): 1603–18. http://dx.doi.org/10.3906/elk-1803-3.

10

Lee, Hui-Jin, Ki-Sang Hong, Henry Kang, and Seungyong Lee. "Photo Aesthetics Analysis via DCNN Feature Encoding." IEEE Transactions on Multimedia 19, no. 8 (August 2017): 1921–32. http://dx.doi.org/10.1109/tmm.2017.2687759.

11

Huo, Lu, Tianrong Rao, and Leijie Zhang. "Fused feature encoding in convolutional neural network." Multimedia Tools and Applications 78, no. 2 (June 28, 2018): 1635–48. http://dx.doi.org/10.1007/s11042-018-6249-1.

12

Li, Cuixia, Shanshan Yang, Li Shi, Yue Liu, and Yinghao Li. "PTRNet: Global Feature and Local Feature Encoding for Point Cloud Registration." Applied Sciences 12, no. 3 (February 8, 2022): 1741. http://dx.doi.org/10.3390/app12031741.

Abstract:
Existing end-to-end cloud registration methods are often inefficient and susceptible to noise. We propose an end-to-end point cloud registration network model, Point Transformer for Registration Network (PTRNet), that considers local and global features to improve this behavior. Our model uses point clouds as inputs and applies a Transformer method to extract their global features. Using a K-Nearest Neighbor (K-NN) topology, our method then encodes the local features of a point cloud and integrates them with the global features to obtain the point cloud’s strong global features. Comparative experiments using the ModelNet40 data set show that our method offers better results than other methods, with a mean square error (MSE), root mean square error (RMSE), and mean absolute error (MAE) between the ground truth and predicted values lower than those of competing methods. In the case of multi-object class without noise, the rotation average absolute error of PTRNet is reduced to 1.601 degrees and the translation average absolute error is reduced to 0.005 units. Compared to other recent end-to-end registration methods and traditional point cloud registration methods, the PTRNet method has less error, higher registration accuracy, and better robustness.
13

Kehoe, Devin H., Selvi Aybulut, and Mazyar Fallah. "Higher order, multifeatural object encoding by the oculomotor system." Journal of Neurophysiology 120, no. 6 (December 1, 2018): 3042–62. http://dx.doi.org/10.1152/jn.00834.2017.

Abstract:
Previous behavioral and physiological research has demonstrated that as the behavioral relevance of potential saccade goals increases, they elicit more competition during target selection processing as evidenced by increased saccade curvature and neural activity. However, these effects have only been demonstrated for lower order feature singletons, and it remains unclear whether more complicated featural differences between higher order objects also elicit vector modulation. Therefore, we measured human saccades curvature elicited by distractors bilaterally flanking a target during a visual search saccade task and systematically varied subsets of features shared between the two distractors and the target, referred to as objective similarity (OS). Our results demonstrate that saccades deviated away from the distractor highest in OS to the target and that there was a linear relationship between the magnitude of saccade deviation and the number of feature differences between the most similar distractor and the target. Furthermore, an analysis of curvature over the time course of the saccade demonstrated that curvature only occurred in the first 20–30 ms of the movement. Given the multifeatural complexity of the novel stimuli, these results suggest that saccadic target selection processing involves dynamically reweighting vector representations for movement planning to several possible targets based on their behavioral relevance. NEW & NOTEWORTHY We demonstrate that small featural differences between unfamiliar, higher order object representations modulate vector weights during saccadic target selection processing. Such effects have previously only been demonstrated for familiar, simple feature singletons (e.g., color) in which features characterize entire objects. The complexity and novelty of our stimuli suggest that the oculomotor system dynamically receives visual/cognitive information processed in the higher order representational networks of the cortical visual processing hierarchy and integrates this information for saccadic movement planning.
14

DIACONESCU, RĂZVAN, and ALEXANDRE MADEIRA. "Encoding hybridized institutions into first-order logic." Mathematical Structures in Computer Science 26, no. 5 (November 12, 2014): 745–88. http://dx.doi.org/10.1017/s0960129514000383.

Abstract:
A ‘hybridization’ of a logic, referred to as the base logic, consists of developing the characteristic features of hybrid logic on top of the respective base logic, both at the level of syntax (i.e. modalities, nominals, etc.) and of the semantics (i.e. possible worlds). By ‘hybridized institutions’ we mean the result of this process when logics are treated abstractly as institutions (in the sense of the institution theory of Goguen and Burstall). This work develops encodings of hybridized institutions into (many-sorted) first-order logic (abbreviated $\mathcal{FOL}$) as a ‘hybridization’ process of abstract encodings of institutions into $\mathcal{FOL}$, which may be seen as an abstraction of the well-known standard translation of modal logic into $\mathcal{FOL}$. The concept of encoding employed by our work is that of comorphism from institution theory, which is a rather comprehensive concept of encoding as it features encodings both of the syntax and of the semantics of logics/institutions. Moreover, we consider the so-called theoroidal version of comorphisms that encode signatures to theories, a feature that accommodates a wide range of concrete applications. Our theory is also general enough to accommodate various constraints on the possible worlds semantics as well a wide variety of quantifications. We also provide pragmatic sufficient conditions for the conservativity of the encodings to be preserved through the hybridization process, which provides the possibility to shift a formal verification process from the hybridized institution to $\mathcal{FOL}$.
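The "well-known standard translation of modal logic into FOL" that the paper abstracts from can be stated in a few clauses (the textbook version, not the comorphism construction developed in the article): each propositional symbol p becomes a unary predicate P over worlds and the accessibility relation becomes a binary predicate R,

ST_x(p) = P(x), \quad ST_x(\neg\varphi) = \neg ST_x(\varphi), \quad ST_x(\varphi \wedge \psi) = ST_x(\varphi) \wedge ST_x(\psi),
ST_x(\Box\varphi) = \forall y\,\bigl(R(x,y) \rightarrow ST_y(\varphi)\bigr), \quad ST_x(\Diamond\varphi) = \exists y\,\bigl(R(x,y) \wedge ST_y(\varphi)\bigr),

and a modal formula is satisfiable exactly when its translation is. The article lifts this kind of syntax-and-semantics encoding to arbitrary hybridized institutions via theoroidal comorphisms.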
15

Galeano Weber, Elena M., Haley Keglovits, Arin Fisher, and Silvia A. Bunge. "Insights into visual working memory precision at the feature- and object-level from a hemispheric encoding manipulation." Quarterly Journal of Experimental Psychology 73, no. 11 (July 7, 2020): 1949–68. http://dx.doi.org/10.1177/1747021820934990.

Abstract:
Mnemonic precision is an important aspect of visual working memory (WM). Here, we probed mechanisms that affect precision for spatial (size) and non-spatial (colour) features of an object, and whether these features are encoded and/or stored separately in WM. We probed precision at the feature-level—that is, whether different features of a single object are represented separately or together in WM—and the object-level—that is, whether different features across a set of sequentially presented objects are represented in the same or different WM stores. By manipulating whether stimuli were encoded by the left and/or right hemisphere, we gained further insights into how objects are represented in WM. At the feature-level, we tested whether recall fidelity for the two features of an object fluctuated in tandem from trial to trial. We observed no significant coupling under either central or lateralized encoding, supporting the claim of parallel feature channels at encoding. At the level of WM storage of a set of objects, we found asymmetric feature interference under central encoding, whereby an increase in colour load led to a decrease in size precision. When objects were encoded by a single hemisphere, however, we found largely independent feature stores. Precision for size was more resistant to interference from the size of another object under right-hemisphere encoding; by contrast, precision for colour did not differ across hemispheres, suggesting a more distributed WM store. These findings suggest that distinct features of a single object are represented separately but are then partially integrated during maintenance of a set of sequentially presented objects.
16

Kim, Minseong, and Hyun-Chul Choi. "Uncorrelated feature encoding for faster image style transfer." Neural Networks 140 (August 2021): 148–57. http://dx.doi.org/10.1016/j.neunet.2021.03.007.

17

Hassan, Ehtesham, and Lekshmi V. L. "Attention Guided Feature Encoding for Scene Text Recognition." Journal of Imaging 8, no. 10 (October 8, 2022): 276. http://dx.doi.org/10.3390/jimaging8100276.

Abstract:
The real-life scene images exhibit a range of variations in text appearances, including complex shapes, variations in sizes, and fancy font properties. Consequently, text recognition from scene images remains a challenging problem in computer vision research. We present a scene text recognition methodology by designing a novel feature-enhanced convolutional recurrent neural network architecture. Our work addresses scene text recognition as well as sequence-to-sequence modeling, where a novel deep encoder–decoder network is proposed. The encoder in the proposed network is designed around a hierarchy of convolutional blocks enabled with spatial attention blocks, followed by bidirectional long short-term memory layers. In contrast to existing methods for scene text recognition, which incorporate temporal attention on the decoder side of the entire architecture, our convolutional architecture incorporates novel spatial attention design to guide feature extraction onto textual details in scene text images. The experiments and analysis demonstrate that our approach learns robust text-specific feature sequences for input images, as the convolution architecture designed for feature extraction is tuned to capture a broader spatial text context. With extensive experiments on ICDAR2013, ICDAR2015, IIIT5K and SVT datasets, the paper demonstrates an improvement over many important state-of-the-art methods.
18

Li, Lin, Ying Ding, Bo Li, Mengqing Qiao, and Biao Ye. "Malware classification based on double byte feature encoding." Alexandria Engineering Journal 61, no. 1 (January 2022): 91–99. http://dx.doi.org/10.1016/j.aej.2021.04.076.

19

Mesgarani, N., C. Cheung, K. Johnson, and E. F. Chang. "Phonetic Feature Encoding in Human Superior Temporal Gyrus." Science 343, no. 6174 (January 30, 2014): 1006–10. http://dx.doi.org/10.1126/science.1245994.

20

Li, N., and Y. F. Li. "Feature encoding for unsupervised segmentation of color images." IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 33, no. 3 (June 2003): 438–47. http://dx.doi.org/10.1109/tsmcb.2003.811120.

21

Kondo, Aki, and Jun Saiki. "Feature-Specific Encoding Flexibility in Visual Working Memory." PLoS ONE 7, no. 12 (December 28, 2012): e50962. http://dx.doi.org/10.1371/journal.pone.0050962.

22

McCabe, Andrew, Terry Caelli, Geoff West, and Adam Reeves. "Theory of spatiochromatic image encoding and feature extraction." Journal of the Optical Society of America A 17, no. 10 (October 1, 2000): 1744. http://dx.doi.org/10.1364/josaa.17.001744.

23

Ebitz, R. Becket, Jiaxin Cindy Tu, and Benjamin Y. Hayden. "Rules warp feature encoding in decision-making circuits." PLOS Biology 18, no. 11 (November 30, 2020): e3000951. http://dx.doi.org/10.1371/journal.pbio.3000951.

Abstract:
We have the capacity to follow arbitrary stimulus–response rules, meaning simple policies that guide our behavior. Rule identity is broadly encoded across decision-making circuits, but there are less data on how rules shape the computations that lead to choices. One idea is that rules could simplify these computations. When we follow a rule, there is no need to encode or compute information that is irrelevant to the current rule, which could reduce the metabolic or energetic demands of decision-making. However, it is not clear if the brain can actually take advantage of this computational simplicity. To test this idea, we recorded from neurons in 3 regions linked to decision-making, the orbitofrontal cortex (OFC), ventral striatum (VS), and dorsal striatum (DS), while macaques performed a rule-based decision-making task. Rule-based decisions were identified via modeling rules as the latent causes of decisions. This left us with a set of physically identical choices that maximized reward and information, but could not be explained by simple stimulus–response rules. Contrasting rule-based choices with these residual choices revealed that following rules (1) decreased the energetic cost of decision-making; and (2) expanded rule-relevant coding dimensions and compressed rule-irrelevant ones. Together, these results suggest that we use rules, in part, because they reduce the costs of decision-making through a distributed representational warping in decision-making circuits.
24

Cho, Jacee, and Roumyana Slabakova. "Interpreting definiteness in a second language without articles: The case of L2 Russian." Second Language Research 30, no. 2 (January 30, 2014): 159–90. http://dx.doi.org/10.1177/0267658313509647.

Abstract:
This article investigates the second language (L2) acquisition of two expressions of the semantic feature [definite] in Russian, a language without articles, by English and Korean native speakers. Within the Feature Reassembly approach (Lardiere, 2009), Slabakova (2009) has argued that reassembling features that are represented overtly in the first language (L1) and mapping them onto those that are encoded indirectly, or covertly, in the L2 will present a greater difficulty than reassembling features in the opposite learning direction. An idealized scale of predictions of difficulty is proposed based on the overt or covert character of the feature encoding and the ease/difficulty of noticing the feature expression. A total of 158 participants (56 native Russian, 49 English learners and 53 Korean learners of Russian) evaluated the acceptability of test sentences in context. Findings demonstrate that acquiring the expression of a feature that is encoded contextually in the L2 is challenging for learners, while an overt expression of a feature presents less difficulty. On the basis of the learners’ developmental patterns observed in the study, we argue that overt and covert expression of semantic features, feature reassembly, and indirect encoding appear to be significant factors in L2 grammatical feature acquisition.
25

Ma, Chong, Hongyang Yin, Liguo Weng, Min Xia, and Haifeng Lin. "DAFNet: A Novel Change-Detection Model for High-Resolution Remote-Sensing Imagery Based on Feature Difference and Attention Mechanism." Remote Sensing 15, no. 15 (August 6, 2023): 3896. http://dx.doi.org/10.3390/rs15153896.

Abstract:
Change detection is an important component in the field of remote sensing. At present, deep-learning-based change-detection methods have acquired many breakthrough results. However, current algorithms still present issues such as target misdetection, false alarms, and blurry edges. To alleviate these problems, this work proposes a network based on feature differences and attention mechanisms. This network includes a Siamese architecture-encoding network that encodes images at different times, a Difference Feature-Extraction Module (DFEM) for extracting difference features from bitemporal images, an Attention-Regulation Module (ARM) for optimizing the extracted difference features through attention, and a Cross-Scale Feature-Fusion Module (CSFM) for merging features from different encoding stages. Experimental results demonstrate that this method effectively alleviates issues of target misdetection, false alarms, and blurry edges.
26

St-Yves, Ghislain, and Thomas Naselaris. "The feature-weighted receptive field: an interpretable encoding model for complex feature spaces." NeuroImage 180 (October 2018): 188–202. http://dx.doi.org/10.1016/j.neuroimage.2017.06.035.

27

Aizezi, Yasen, Anwar Jamal, Ruxianguli Abudurexiti, and Mutalipu Muming. "Research on Digital Forensics Based on Uyghur Web Text Classification." International Journal of Digital Crime and Forensics 9, no. 4 (October 2017): 30–39. http://dx.doi.org/10.4018/ijdcf.2017100103.

Abstract:
This paper mainly discusses the use of mutual information (MI) and Support Vector Machines (SVMs) for Uyghur Web text classification and digital forensics process of web text categorization: automatic classification and identification, conversion and pretreatment of plain text based on encoding features of various existing Uyghur Web documents etc., introduces the pre-paratory work for Uyghur Web text encoding. Focusing on the non-Uyghur characters and stop words in the web texts filtering, we put forward a Multi-feature Space Normalized Mutual Information (M-FNMI) algorithm and replace MI between single feature and category with mutual information (MI) between input feature combination and category so as to extract more accurate feature words; finally, we classify features with support vector machine (SVM) algorithm. The experimental result shows that this scheme has a high precision of classification and can provide criterion for digital forensics with specific purpose.
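As background for the M-FNMI algorithm mentioned above, the sketch below computes plain mutual information between one binary term-occurrence feature and a class label, the single-feature quantity that M-FNMI generalizes to feature combinations; the toy documents and labels are invented.

# Sketch: mutual information between a binary term-occurrence feature and a class label.
import numpy as np

def mutual_information(feature, label):
    """I(F; C) in bits for two discrete arrays of equal length."""
    feature, label = np.asarray(feature), np.asarray(label)
    mi = 0.0
    for f in np.unique(feature):
        for c in np.unique(label):
            p_fc = np.mean((feature == f) & (label == c))
            p_f, p_c = np.mean(feature == f), np.mean(label == c)
            if p_fc > 0:
                mi += p_fc * np.log2(p_fc / (p_f * p_c))
    return mi

# Toy data: does the term occur in the document (1/0), and the document's class (0/1)?
term_occurs = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])
doc_class   = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 1])
print(round(mutual_information(term_occurs, doc_class), 3))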
28

Gheisari, Soheila, Daniel Catchpoole, Amanda Charlton, Zsombor Melegh, Elise Gradhand, and Paul Kennedy. "Computer Aided Classification of Neuroblastoma Histological Images Using Scale Invariant Feature Transform with Feature Encoding." Diagnostics 8, no. 3 (August 28, 2018): 56. http://dx.doi.org/10.3390/diagnostics8030056.

Abstract:
Neuroblastoma is the most common extracranial solid malignancy in early childhood. Optimal management of neuroblastoma depends on many factors, including histopathological classification. Although histopathology study is considered the gold standard for classification of neuroblastoma histological images, computers can help to extract many more features some of which may not be recognizable by human eyes. This paper, proposes a combination of Scale Invariant Feature Transform with feature encoding algorithm to extract highly discriminative features. Then, distinctive image features are classified by Support Vector Machine classifier into five clinically relevant classes. The advantage of our model is extracting features which are more robust to scale variation compared to the Patched Completed Local Binary Pattern and Completed Local Binary Pattern methods. We gathered a database of 1043 histologic images of neuroblastic tumours classified into five subtypes. Our approach identified features that outperformed the state-of-the-art on both our neuroblastoma dataset and a benchmark breast cancer dataset. Our method shows promise for classification of neuroblastoma histological images.
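The overall pipeline sketched in this abstract (local descriptors, a feature encoding against a codebook, an SVM classifier) can be outlined as below; OpenCV SIFT, a plain bag-of-visual-words histogram, and randomly generated grayscale arrays stand in for the paper's descriptors, its specific encoding algorithm, and the neuroblastoma images.

# Sketch of a descriptor -> codebook encoding -> SVM pipeline of the kind outlined above.
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (128, 128), dtype=np.uint8) for _ in range(20)]  # stand-in images
labels = rng.integers(0, 5, 20)                                                 # 5 toy classes

sift = cv2.SIFT_create()
def descriptors(img):
    _, des = sift.detectAndCompute(img, None)
    return des if des is not None else np.zeros((1, 128), dtype=np.float32)

all_des = np.vstack([descriptors(im) for im in images])
k = min(32, len(all_des))                       # guard for the tiny toy data
codebook = KMeans(n_clusters=k, n_init=4, random_state=0).fit(all_des)

def encode(img):                                # codeword histogram, L1-normalized
    words = codebook.predict(descriptors(img).astype(np.float32))
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / max(hist.sum(), 1.0)

X = np.array([encode(im) for im in images])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))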
29

Lv, Yalong, Shengwei Tian, Long Yu, and Ruonan Zhang. "Water Body Semantic Information Description and Recognition Based on Multimodal Models." International Journal of Computational Intelligence and Applications 18, no. 02 (June 2019): 1950015. http://dx.doi.org/10.1142/s1469026819500159.

Abstract:
To solve the problems from using single-layer features in traditional water body identification models, such as the lack of local descriptors, large quantization errors, and the lack of semantic information descriptions, a multimodal model is proposed based on the different levels of feature knowledge. First, based on the multidescriptor hierarchical feature, the middle-level local feature extraction of the water body is achieved, and, combined with the convolutional neural network, the high-order global features of the water body are extracted. Then, the image features are hierarchically normalized, and multimodal RBM self-encoding is used for fusion to reduce the quantization error of each layer feature in the encoding process. Finally, the generative model of the Multimodal Model is used to expand the data and filter the multilayered features after fusion. In addition, the semantic information of a water body is further discovered by using the encoder and decoder of the discriminant model and is classified by employing SoftMax. The results show that compared with the traditional water body identification methods, the proposed method improves the recognition accuracy and image description ability.
30

Xu, Mengxi, Yingshu Lu, and Xiaobin Wu. "Annular Spatial Pyramid Mapping and Feature Fusion-Based Image Coding Representation and Classification." Wireless Communications and Mobile Computing 2020 (September 11, 2020): 1–9. http://dx.doi.org/10.1155/2020/8838454.

Abstract:
Conventional image classification models commonly adopt a single feature vector to represent informative contents. However, a single image feature system can hardly extract the entirety of the information contained in images, and traditional encoding methods have a large loss of feature information. Aiming to solve this problem, this paper proposes a feature fusion-based image classification model. This model combines the principal component analysis (PCA) algorithm, processed scale invariant feature transform (P-SIFT) and color naming (CN) features to generate mutually independent image representation factors. At the encoding stage of the scale-invariant feature transform (SIFT) feature, the bag-of-visual-word model (BOVW) is used for feature reconstruction. Simultaneously, in order to introduce the spatial information to our extracted features, the rotation invariant spatial pyramid mapping method is introduced for the P-SIFT and CN feature division and representation. At the stage of feature fusion, we adopt a support vector machine with two kernels (SVM-2K) algorithm, which divides the training process into two stages and finally learns the knowledge from the corresponding kernel matrix for the classification performance improvement. The experiments show that the proposed method can effectively improve the accuracy of image description and the precision of image classification.
31

Maniglia, Mariana R., and Alessandra S. Souza. "Age Differences in the Efficiency of Filtering and Ignoring Distraction in Visual Working Memory." Brain Sciences 10, no. 8 (August 14, 2020): 556. http://dx.doi.org/10.3390/brainsci10080556.

Abstract:
Healthy aging is associated with decline in the ability to maintain visual information in working memory (WM). We examined whether this decline can be explained by decreases in the ability to filter distraction during encoding or to ignore distraction during memory maintenance. Distraction consisted of irrelevant objects (Exp. 1) or irrelevant features of an object (Exp. 2). In Experiment 1, participants completed a spatial WM task requiring remembering locations on a grid. During encoding or during maintenance, irrelevant distractor positions were presented. In Experiment 2, participants encoded either single-feature (colors or orientations) or multifeature objects (colored triangles) and later reproduced one of these features using a continuous scale. In multifeature blocks, a precue appeared before encoding or a retrocue appeared during memory maintenance indicating with 100% certainty to the to-be-tested feature, thereby enabling filtering and ignoring of the irrelevant (not-cued) feature, respectively. There were no age-related deficits in the efficiency of filtering and ignoring distractor objects (Exp. 1) and of filtering irrelevant features (Exp. 2). Both younger and older adults could not ignore irrelevant features when cued with a retrocue. Overall, our results provide no evidence for an aging deficit in using attention to manage visual WM.
32

Fu, Qiang, and Hongbin Dong. "Breast Cancer Recognition Using Saliency-Based Spiking Neural Network." Wireless Communications and Mobile Computing 2022 (March 24, 2022): 1–17. http://dx.doi.org/10.1155/2022/8369368.

Abstract:
The spiking neural networks (SNNs) use event-driven signals to encode physical information for neural computation. SNN takes the spiking neuron as the basic unit. It modulates the process of nerve cells from receiving stimuli to firing spikes. Therefore, SNN is more biologically plausible. Although the SNN has more characteristics of biological neurons, SNN is rarely used for medical image recognition due to its poor performance. In this paper, a reservoir spiking neural network is used for breast cancer image recognition. Due to the difficulties of extracting the lesion features in medical images, a salient feature extraction method is used in image recognition. The salient feature extraction network is composed of spiking convolution layers, which can effectively extract the features of lesions. Two temporal encoding manners, namely, linear time encoding and entropy-based time encoding methods, are used to encode the input patterns. Readout neurons use the ReSuMe algorithm for training, and the Fruit Fly Optimization Algorithm (FOA) is employed to optimize the network architecture to further improve the reservoir SNN performance. Three modality datasets are used to verify the effectiveness of the proposed method. The results show an accuracy of 97.44% for the BreastMNIST database. The classification accuracy is 98.27% on the mini-MIAS database. And the overall accuracy is 95.83% for the BreaKHis database by using the saliency feature extraction, entropy-based time encoding, and network optimization.
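Of the two temporal encodings mentioned, the linear one is simple to state: stronger inputs fire earlier inside a fixed time window. The window length and toy intensities below are arbitrary, and the entropy-based encoding, the reservoir, ReSuMe training, and FOA optimization are not shown.

# Sketch of linear time (intensity-to-latency) encoding: larger input values spike earlier.
import numpy as np

def linear_time_encode(values, t_max=100.0):
    """Map values in [0, 1] to spike times in [0, t_max]; 1.0 fires at t=0, 0.0 at t_max."""
    values = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
    return (1.0 - values) * t_max

pixels = np.array([0.0, 0.25, 0.5, 1.0])     # normalized pixel intensities
print(linear_time_encode(pixels))            # [100.  75.  50.   0.]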
33

Linhardt, Timothy, and Ananya Sen Gupta. "Sonar feature representation with autoencoders and generative adversarial networks." Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A178. http://dx.doi.org/10.1121/10.0018583.

Abstract:
Feature representation in the littoral sonar space is a complicated field due to prevalent channel noise from sound reflections as well as diffraction. The response of a sound wave as it interacts with an object provides insight into the nature of the object itself such as geometry and material composition. We approach automated target recognition in this space with feature representation using autoencoders and generative adversarial networks. Through empirical analysis of learned encoding spaces, the dimensionalities of principal features in our sonar data sets are estimated. Real and complex valued training data, processed in various ways, is used to refine which features are represented, and evaluate the importance of phase information while encoding the data. [This research was funded by The Office of Naval Research with Grant No. N00174-20-1-0016.]
34

Liu, Yangyang, Minghua Tian, Chang Xu, and Lixiang Zhao. "Neural network feature learning based on image self-encoding." International Journal of Advanced Robotic Systems 17, no. 2 (March 1, 2020): 172988142092165. http://dx.doi.org/10.1177/1729881420921653.

Abstract:
With the rapid development of information technology and the arrival of the era of big data, people’s access to information is increasingly relying on information such as images. Today, image data are showing an increasing trend in the form of an index. How to use deep learning models to extract valuable information from massive data is very important. In the face of such a situation, people cannot accurately and timely find out the information they need. Therefore, the research on image retrieval technology is very important. Image retrieval is an important technology in the field of computer vision image processing. It realizes fast and accurate query of similar images in image database. The excellent feature representation not only can represent the category information of the image but also capture the relevant semantic information of the image. If the neural network feature learning expression is combined with the image retrieval field, it will definitely improve the application of image retrieval technology. To solve the above problems, this article studies the problems encountered in deep learning neural network feature learning based on image self-encoding and discusses its feature expression in the field of image retrieval. By adding the spatial relationship information obtained by image self-encoding in the neural network training process, the feature expression ability of the selected neural network is improved, and the neural network feature learning based on image coding is successfully applied to the popular field of image retrieval.
35

Gong, Dihong, Zhifeng Li, Weilin Huang, Xuelong Li, and Dacheng Tao. "Heterogeneous Face Recognition: A Common Encoding Feature Discriminant Approach." IEEE Transactions on Image Processing 26, no. 5 (May 2017): 2079–89. http://dx.doi.org/10.1109/tip.2017.2651380.

36

Fiser, József, and Richard N. Aslin. "Encoding Multielement Scenes: Statistical Learning of Visual Feature Hierarchies." Journal of Experimental Psychology: General 134, no. 4 (2005): 521–37. http://dx.doi.org/10.1037/0096-3445.134.4.521.

37

Mammarella, Nicola, and Beth Fairfield. "Source monitoring: The importance of feature binding at encoding." European Journal of Cognitive Psychology 20, no. 1 (January 2008): 91–122. http://dx.doi.org/10.1080/09541440601112522.

38

Cowan, Robert S. C., Peter J. Blamey, Joseph I. Alcantara, Lesley A. Whitford, Graeme Clark, and Geoff Plant. "Speech feature encoding through an electrotactile speech processor." Journal of the Acoustical Society of America 86, S1 (November 1989): S83. http://dx.doi.org/10.1121/1.2027685.

39

Krahe, Rüdiger, Gabriel Kreiman, Fabrizio Gabbiani, Christof Koch, and Walter Metzner. "Stimulus Encoding and Feature Extraction by Multiple Sensory Neurons." Journal of Neuroscience 22, no. 6 (March 15, 2002): 2374–82. http://dx.doi.org/10.1523/jneurosci.22-06-02374.2002.

40

Zhao, Yanna, Xu Zhao, Ruotian Luo, and Yuncai Liu. "Person Re-identification by encoding free energy feature maps." Multimedia Tools and Applications 75, no. 8 (March 20, 2015): 4795–813. http://dx.doi.org/10.1007/s11042-015-2503-y.

41

Altınçay, Hakan, and Zafer Erenel. "Ternary encoding based feature extraction for binary text classification." Applied Intelligence 41, no. 1 (March 5, 2014): 310–26. http://dx.doi.org/10.1007/s10489-014-0515-3.

42

Wang, Tong, Wenan Tan, and Jianxin Xue. "A Data Mining Method For Improving the Prediction Of Bioinformatics Data." Journal of Physics: Conference Series 2137, no. 1 (December 1, 2021): 012067. http://dx.doi.org/10.1088/1742-6596/2137/1/012067.

Abstract:
The composition of a protein is closely correlated with its function, so it is important to develop methods that can automatically predict protein structure. A fusion encoding of PseAA and DC was adopted to describe protein features. Because expressing protein sequences with this encoding produces high-dimensional feature vectors, the paper applies an algorithm that reduces the feature dimension of proteins: by extracting the significant components of the primitive feature vectors, high-dimensional eigenvectors are converted into low-dimensional ones. Evaluation uses the jackknife test. The results indicate that the proposed algorithm is suitable for identifying whether a given protein is a homo-oligomer or a hetero-oligomer.
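The DC (dipeptide composition) half of the PseAA + DC fusion encoding is easy to make concrete: count every ordered pair of adjacent residues and normalize, giving a 400-dimensional vector. The pseudo-amino-acid terms and the subsequent dimension reduction are not shown, and the sequence below is a toy example.

# Sketch: dipeptide composition (DC) encoding of a protein sequence -- a 400-dimensional
# vector of normalized counts of adjacent residue pairs. PseAA terms are not included.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
INDEX = {a: i for i, a in enumerate(AMINO_ACIDS)}

def dipeptide_composition(sequence):
    vec = np.zeros(len(AMINO_ACIDS) ** 2)
    pairs = 0
    for a, b in zip(sequence, sequence[1:]):
        if a in INDEX and b in INDEX:
            vec[INDEX[a] * len(AMINO_ACIDS) + INDEX[b]] += 1
            pairs += 1
    return vec / pairs if pairs else vec

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"      # toy sequence
dc = dipeptide_composition(seq)
print(dc.shape, round(dc.sum(), 3))            # (400,) 1.0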
43

Yang, Jia-Quan, Chun-Hua Chen, Jian-Yu Li, Dong Liu, Tao Li, and Zhi-Hui Zhan. "Compressed-Encoding Particle Swarm Optimization with Fuzzy Learning for Large-Scale Feature Selection." Symmetry 14, no. 6 (June 1, 2022): 1142. http://dx.doi.org/10.3390/sym14061142.

Abstract:
Particle swarm optimization (PSO) is a promising method for feature selection. When using PSO to solve the feature selection problem, the probability of each feature being selected and not being selected is the same in the beginning and is optimized during the evolutionary process. That is, the feature selection probability is optimized from symmetry (i.e., 50% vs. 50%) to asymmetry (i.e., some are selected with a higher probability, and some with a lower probability) to help particles obtain the optimal feature subset. However, when dealing with large-scale features, PSO still faces the challenges of a poor search performance and a long running time. In addition, a suitable representation for particles to deal with the discrete binary optimization problem of feature selection is still in great need. This paper proposes a compressed-encoding PSO with fuzzy learning (CEPSO-FL) for the large-scale feature selection problem. It uses the N-base encoding method for the representation of particles and designs a particle update mechanism based on the Hamming distance and a fuzzy learning strategy, which can be performed in the discrete space. It also proposes a local search strategy to dynamically skip some dimensions when updating particles, thus reducing the search space and reducing the running time. The experimental results show that CEPSO-FL performs well for large-scale feature selection problems. The solutions obtained by CEPSO-FL contain small feature subsets and have an excellent performance in classification problems.
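The two discrete ingredients named in this abstract, a compressed base-N encoding of a feature-selection mask and Hamming distance between particles, can be sketched as follows; the base, feature count, and decoding layout are illustrative choices rather than the paper's exact scheme, and the fuzzy learning update is not reproduced.

# Sketch: compress a binary feature-selection mask into base-N digits (here N = 8, i.e. 3 bits
# per digit) and compare particles with Hamming distance over those digits.
import numpy as np

BITS_PER_DIGIT = 3                      # base N = 2**3 = 8

def encode_mask(mask):
    """Binary mask (length divisible by BITS_PER_DIGIT) -> array of base-N digits."""
    mask = np.asarray(mask).reshape(-1, BITS_PER_DIGIT)
    weights = 2 ** np.arange(BITS_PER_DIGIT)[::-1]
    return mask @ weights

def decode_digits(digits, n_features):
    bits = (digits[:, None] >> np.arange(BITS_PER_DIGIT)[::-1]) & 1
    return bits.reshape(-1)[:n_features]

def hamming(p, q):                      # digit-wise distance between two particles
    return int(np.sum(p != q))

mask = np.array([1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1])   # 12 candidate features
particle = encode_mask(mask)
print(particle, hamming(particle, encode_mask(np.roll(mask, 1))))
print(decode_digits(particle, 12))                       # recovers the original mask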
44

Son, Chang-Hwan. "Leaf Spot Attention Networks Based on Spot Feature Encoding for Leaf Disease Identification and Detection." Applied Sciences 11, no. 17 (August 28, 2021): 7960. http://dx.doi.org/10.3390/app11177960.

Abstract:
This study proposes a new attention-enhanced YOLO model that incorporates a leaf spot attention mechanism based on regions-of-interest (ROI) feature extraction into the YOLO framework for leaf disease detection. Inspired by a previous study, which revealed that leaf spot attention based on the ROI-aware feature extraction can improve leaf disease recognition accuracy significantly and outperform state-of-the-art deep learning models, this study extends the leaf spot attention model to leaf disease detection. The primary idea is that spot areas indicating leaf diseases appear only in leaves, whereas the background area does not contain useful information regarding leaf diseases. To increase the discriminative power of the feature extractor that is required in the object detection framework, it is essential to extract informative and discriminative features from the spot and leaf areas. To realize this, a new ROI-aware feature extractor, that is, a spot feature extractor was designed. To divide the leaf image into spot, leaf, and background areas, the leaf segmentation module was first pretrained, and then spot feature encoding was applied to encode spot information. Next, the ROI-aware feature extractor was connected to an ROI-aware feature fusion layer to model the leaf spot attention mechanism, and to be joined with the YOLO detection subnetwork. The experimental results confirm that the proposed ROI-aware feature extractor can improve leaf disease detection by boosting the discriminative power of the spot features. In addition, the proposed attention-enhanced YOLO model outperforms conventional state-of-the-art object detection models.
45

Baviskar, Amol G., and S. S. Pawale. "Efficient Domain Search for Fractal Image Compression Using Feature Extraction Technique." Advanced Materials Research 488-489 (March 2012): 1587–91. http://dx.doi.org/10.4028/www.scientific.net/amr.488-489.1587.

Abstract:
Fractal image compression is a lossy compression technique developed in the early 1990s. It makes use of the local self-similarity property existing in an image and finds a contractive mapping affine transformation (fractal transform) T, such that the fixed point of T is close to the given image in a suitable metric. It has generated much interest due to its promise of high compression ratios with good decompression quality. Image encoding based on fractal block-coding method relies on assumption that image redundancy can be efficiently exploited through block-self transformability. It has shown promise in producing high fidelity, resolution independent images. The low complexity of decoding process also suggested use in real time applications. The high encoding time, in combination with patents on technology have unfortunately discouraged results. In this paper, we have proposed efficient domain search technique using feature extraction for the encoding of fractal image which reduces encoding-decoding time and proposed technique improves quality of compressed image.
46

SCHWARTZ, WILLIAM ROBSON, and HELIO PEDRINI. "IMPROVED FRACTAL IMAGE COMPRESSION BASED ON ROBUST FEATURE DESCRIPTORS." International Journal of Image and Graphics 11, no. 04 (October 2011): 571–87. http://dx.doi.org/10.1142/s0219467811004251.

Abstract:
Fractal image compression is one of the most promising techniques for image compression due to advantages such as resolution independence and fast decompression. It exploits the fact that natural scenes present self-similarity to remove redundancy and obtain high compression rates with smaller quality degradation compared to traditional compression methods. The main drawback of fractal compression is its computationally intensive encoding process, due to the need for searching regions with high similarity in the image. Several approaches have been developed to reduce the computational cost to locate similar regions. In this work, we propose a method based on robust feature descriptors to speed up the encoding time. The use of robust features provides more discriminative and representative information for regions of the image. When the regions are better represented, the search for similar parts of the image can be reduced to focus only on the most likely matching candidates, which leads to reduction on the computational time. Our experimental results show that the use of robust feature descriptors reduces the encoding time while keeping high compression rates and reconstruction quality.
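A hedged sketch of the general idea described here, feature descriptors used to shortlist candidate domain blocks before the expensive match: simple block statistics stand in for the paper's robust descriptors, and the contrast/brightness fitting and isometries of full fractal coding are omitted.

# Sketch: prune the fractal-coding domain search by comparing cheap block descriptors first.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))                      # stand-in grayscale image

def blocks(img, size, step):
    out = []
    for y in range(0, img.shape[0] - size + 1, step):
        for x in range(0, img.shape[1] - size + 1, step):
            out.append(img[y:y + size, x:x + size])
    return out

def descriptor(block):                            # cheap statistics as a block descriptor
    return np.array([block.mean(), block.std(),
                     np.abs(np.diff(block, axis=0)).mean(),
                     np.abs(np.diff(block, axis=1)).mean()])

range_blocks  = blocks(image, 8, 8)
domain_blocks = [b[::2, ::2] for b in blocks(image, 16, 8)]   # downsample domains to range size

dom_feats = np.array([descriptor(b) for b in domain_blocks])
r = range_blocks[0]
dist = np.linalg.norm(dom_feats - descriptor(r), axis=1)
candidates = np.argsort(dist)[:10]                # search only the 10 most similar domains
best = min(candidates, key=lambda i: np.mean((domain_blocks[i] - r) ** 2))
print("best domain block:", int(best))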
47

Fan, Xiaojin, Mengmeng Liao, Lei Chen, and Jingjing Hu. "Few-Shot Learning for Multi-POSE Face Recognition via Hypergraph De-Deflection and Multi-Task Collaborative Optimization." Electronics 12, no. 10 (May 15, 2023): 2248. http://dx.doi.org/10.3390/electronics12102248.

Abstract:
Few-shot, multi-pose face recognition has always been an interesting yet difficult subject in the field of pattern recognition. Researchers have come up with a variety of workarounds; however, these methods make it either difficult to extract effective features that are robust to poses or difficult to obtain globally optimal solutions. In this paper, we propose a few-shot, multi-pose face recognition method based on hypergraph de-deflection and multi-task collaborative optimization (HDMCO). In HDMCO, the hypergraph is embedded in a non-negative image decomposition to obtain images without pose deflection. Furthermore, a feature encoding method is proposed by considering the importance of samples and combining support vector data description, triangle coding, etc. This feature encoding method is used to extract features from pose-free images. Last but not the least, multi-tasks such as feature extraction and feature recognition are jointly optimized to obtain a solution closer to the global optimal solution. Comprehensive experimental results show that the proposed HDMCO achieves better recognition performance.
48

Chen, Dong, Guiqiu Xiang, Jiju Peethambaran, Liqiang Zhang, Jing Li, and Fan Hu. "AFGL-Net: Attentive Fusion of Global and Local Deep Features for Building Façades Parsing." Remote Sensing 13, no. 24 (December 11, 2021): 5039. http://dx.doi.org/10.3390/rs13245039.

Abstract:
In this paper, we propose a deep learning framework, namely AFGL-Net to achieve building façade parsing, i.e., obtaining the semantics of small components of building façade, such as windows and doors. To this end, we present an autoencoder embedding position and direction encoding for local feature encoding. The autoencoder enhances the local feature aggregation and augments the representation of skeleton features of windows and doors. We also integrate the Transformer into AFGL-Net to infer the geometric shapes and structural arrangements of façade components and capture the global contextual features. These global features can help recognize inapparent windows/doors from the façade points corrupted with noise, outliers, occlusions, and irregularities. The attention-based feature fusion mechanism is finally employed to obtain more informative features by simultaneously considering local geometric details and the global contexts. The proposed AFGL-Net is comprehensively evaluated on Dublin and RueMonge2014 benchmarks, achieving 67.02% and 59.80% mIoU, respectively. We also demonstrate the superiority of the proposed AFGL-Net by comparing with the state-of-the-art methods and various ablation studies.
49

Li, Xiaoqiang, Dan Wang, and Yin Zhang. "Representation for Action Recognition Using Trajectory-Based Low-Level Local Feature and Mid-Level Motion Feature." Applied Computational Intelligence and Soft Computing 2017 (2017): 1–7. http://dx.doi.org/10.1155/2017/4019213.

Abstract:
The dense trajectories and low-level local features are widely used in action recognition recently. However, most of these methods ignore the motion part of action which is the key factor to distinguish the different human action. This paper proposes a new two-layer model of representation for action recognition by describing the video with low-level features and mid-level motion part model. Firstly, we encode the compensated flow (w-flow) trajectory-based local features with Fisher Vector (FV) to retain the low-level characteristic of motion. Then, the motion parts are extracted by clustering the similar trajectories with spatiotemporal distance between trajectories. Finally the representation for action video is the concatenation of low-level descriptors encoding vector and motion part encoding vector. It is used as input to the LibSVM for action recognition. The experiment results demonstrate the improvements on J-HMDB and YouTube datasets, which obtain 67.4% and 87.6%, respectively.
50

Butt, Ammar Mohsin, Muhammad Haroon Yousaf, Fiza Murtaza, Saima Nazir, Serestina Viriri, and Sergio A. Velastin. "Agglomerative Clustering and Residual-VLAD Encoding for Human Action Recognition." Applied Sciences 10, no. 12 (June 26, 2020): 4412. http://dx.doi.org/10.3390/app10124412.

Abstract:
Human action recognition has gathered significant attention in recent years due to its high demand in various application domains. In this work, we propose a novel codebook generation and hybrid encoding scheme for classification of action videos. The proposed scheme develops a discriminative codebook and a hybrid feature vector by encoding the features extracted from CNNs (convolutional neural networks). We explore different CNN architectures for extracting spatio-temporal features. We employ an agglomerative clustering approach for codebook generation, which intends to combine the advantages of global and class-specific codebooks. We propose a Residual Vector of Locally Aggregated Descriptors (R-VLAD) and fuse it with locality-based coding to form a hybrid feature vector. It provides a compact representation along with high order statistics. We evaluated our work on two publicly available standard benchmark datasets HMDB-51 and UCF-101. The proposed method achieves 72.6% and 96.2% on HMDB51 and UCF101, respectively. We conclude that the proposed scheme is able to boost recognition accuracy for human action recognition.
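For readers unfamiliar with the VLAD baseline behind the proposed R-VLAD, a plain VLAD encoder is only a few lines: assign each descriptor to its nearest codeword, accumulate residuals, then power- and L2-normalize. The residual modification, agglomerative codebook, and locality-based fusion from the paper are not reproduced, and the descriptors below are random stand-ins.

# Sketch of plain VLAD encoding (the base of the R-VLAD variant proposed above).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(2000, 128))                    # stand-in CNN/local descriptors
codebook = KMeans(n_clusters=16, n_init=4, random_state=0).fit(descriptors)

def vlad(des, codebook):
    centers = codebook.cluster_centers_
    assign = codebook.predict(des)
    v = np.zeros_like(centers)                                # (k, d) residual accumulators
    for k in range(len(centers)):
        members = des[assign == k]
        if len(members):
            v[k] = (members - centers[k]).sum(axis=0)
    v = np.sign(v) * np.sqrt(np.abs(v))                       # power normalization
    v = v.reshape(-1)
    return v / (np.linalg.norm(v) + 1e-12)                    # global L2 normalization

video_descriptor = vlad(rng.normal(size=(300, 128)), codebook)
print(video_descriptor.shape)                                 # (2048,) = 16 x 128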