Journal articles on the topic "SPATIAL TEMPORAL DESCRIPTOR"

To see other types of publications on this topic, follow the link: SPATIAL TEMPORAL DESCRIPTOR.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for research on the topic "SPATIAL TEMPORAL DESCRIPTOR".

Next to each entry in the list you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the selected work in your preferred citation style: APA, MLA, Harvard, Chicago, Vancouver, and others.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when these are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Lin, Bo, and Bin Fang. "A new spatial-temporal histograms of gradients descriptor and HOD-VLAD encoding for human action recognition." International Journal of Wavelets, Multiresolution and Information Processing 17, no. 02 (March 2019): 1940009. http://dx.doi.org/10.1142/s0219691319400095.

Abstract:
Automatic human action recognition is a core functionality of systems for video surveillance and human-object interaction. Within the whole recognition system, feature description and encoding are two crucial steps, and a powerful action recognition framework requires both to deliver reliable performance. In this paper, we propose a new human action feature descriptor called spatio-temporal histograms of gradients (SPHOG). SPHOG is based on the spatial and temporal derivative signals, which capture the gradient changes between consecutive frames. Compared with the traditional histograms-of-optical-flow descriptors, the proposed SPHOG requires fewer computational resources. To incorporate the distribution information of local descriptors into the Vector of Locally Aggregated Descriptors (VLAD), a popular encoding approach for the bag-of-features representation, a Gaussian kernel is introduced to compute weighted distance histograms of local descriptors, which makes the encoding scheme for the bag-of-features (BOF) representation more effective. We validated the proposed algorithm for human action recognition on three publicly available datasets: KTH, UCF Sports, and HMDB51. The experimental results indicate that the proposed descriptor and encoding method improve both the efficiency and the accuracy of human action recognition.
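
The descriptor's core step, binning the orientations of spatial gradients computed on the temporal difference between consecutive frames, can be sketched briefly. A minimal numpy illustration assuming grayscale frames scaled to [0, 1]; the function name, the 8-bin choice, and the L2 normalization are our assumptions, not the authors' exact SPHOG pipeline:

```python
import numpy as np

def sphog_like_descriptor(prev_frame, next_frame, n_bins=8):
    """Histogram of gradient orientations of the temporal derivative
    between two consecutive grayscale frames."""
    dt = next_frame.astype(np.float64) - prev_frame.astype(np.float64)
    # Spatial gradients of the temporal difference signal.
    gy, gx = np.gradient(dt)
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)  # range [-pi, pi]
    # Magnitude-weighted orientation histogram.
    hist, _ = np.histogram(orientation, bins=n_bins,
                           range=(-np.pi, np.pi), weights=magnitude)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Toy usage with random frames.
rng = np.random.default_rng(0)
f0, f1 = rng.random((64, 64)), rng.random((64, 64))
print(sphog_like_descriptor(f0, f1))
```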
2

Arun Kumar H. D. and Prabhakar C. J. "Moving Vehicles Detection in Traffic Video Using Modified SXCS-LBP Texture Descriptor." International Journal of Computer Vision and Image Processing 5, no. 2 (July 2015): 14–34. http://dx.doi.org/10.4018/ijcvip.2015070102.

Abstract:
This paper presents a background modeling and subtraction based method for moving vehicle detection in traffic video using a novel texture descriptor called the Modified Spatially eXtended Center-Symmetric Local Binary Pattern (Modified SXCS-LBP). The XCS-LBP texture descriptor is sensitive to noise because the center pixel value is used directly as the threshold when generating the binary code, and it does not consider temporal motion information. To solve this problem, this paper proposes the Modified SXCS-LBP descriptor for moving vehicle detection based on background modeling and subtraction. The proposed descriptor is robust against noise and illumination variation and is able to detect slow-moving vehicles because it considers both spatial and temporal motion information. The evaluation was carried out using precision and recall metrics obtained from experiments on two popular datasets, BMC and CDnet. The experimental results show that the authors' method outperforms existing texture-based and non-texture-based methods.
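
For illustration, here is a plain center-symmetric LBP in numpy, the family of patterns the descriptor above builds on. This sketch is not the modified SXCS-LBP itself, and the threshold value is our assumption:

```python
import numpy as np

def cs_lbp(image, threshold=0.01):
    """Center-symmetric LBP codes for the interior pixels of a grayscale
    float image. Each of the four opposing neighbor pairs contributes one
    bit when its difference exceeds a small threshold, instead of
    thresholding on the center pixel itself (the noise-sensitive step
    criticized above)."""
    img = image.astype(np.float64)
    # The four center-symmetric neighbor pairs: N-S, NE-SW, E-W, SE-NW.
    pairs = [
        (img[:-2, 1:-1], img[2:, 1:-1]),
        (img[:-2, 2:],   img[2:, :-2]),
        (img[1:-1, 2:],  img[1:-1, :-2]),
        (img[2:, 2:],    img[:-2, :-2]),
    ]
    code = np.zeros((img.shape[0] - 2, img.shape[1] - 2), dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= ((a - b) > threshold).astype(np.uint8) << bit
    return code  # values in 0..15
```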
3

Pan, Xianzhang, Wenping Guo, Xiaoying Guo, Wenshu Li, Junjie Xu, and Jinzhao Wu. "Deep Temporal–Spatial Aggregation for Video-Based Facial Expression Recognition." Symmetry 11, no. 1 (January 5, 2019): 52. http://dx.doi.org/10.3390/sym11010052.

Abstract:
The proposed method has 30 streams, i.e., 15 spatial streams and 15 temporal streams, with each spatial stream corresponding to a temporal stream; this work therefore relates to the symmetry concept. Classifying video-based facial expressions is a difficult task owing to the gap between visual descriptors and emotions. To bridge this gap, a new video descriptor for facial expression recognition is presented that aggregates spatial and temporal convolutional features across the entire extent of a video. The designed framework integrates a state-of-the-art 30-stream network and has a trainable spatial–temporal feature aggregation layer, and it is end-to-end trainable for video-based facial expression recognition. Thus, the framework can effectively avoid overfitting to the limited emotional video datasets, and the trainable strategy learns to better represent an entire video. Different schemas for pooling spatial–temporal features are investigated, and the spatial and temporal streams are best aggregated by the proposed method. Extensive experiments on two public databases, BAUM-1s and eNTERFACE05, show that this framework has promising performance and outperforms state-of-the-art strategies.
4

Uddin, Md Azher, Joolekha Bibi Joolee, Young-Koo Lee, and Kyung-Ah Sohn. "A Novel Multi-Modal Network-Based Dynamic Scene Understanding." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 1 (January 31, 2022): 1–19. http://dx.doi.org/10.1145/3462218.

Abstract:
In recent years, dynamic scene understanding has gained attention from researchers because of its widespread applications. The key to successfully understanding dynamic scenes lies in jointly representing appearance and motion features to obtain an informative description. Numerous methods have been introduced to solve the dynamic scene recognition problem; nevertheless, several concerns still need to be investigated. In this article, we introduce a novel multi-modal network for dynamic scene understanding from video data, which captures both spatial appearance and temporal dynamics effectively. Furthermore, two-level joint tuning layers are proposed to integrate global and local spatial features as well as deep features from the spatial and temporal streams. To extract the temporal information, we present a novel dynamic descriptor, the Volume Symmetric Gradient Local Graph Structure (VSGLGS), which generates temporal feature maps similar to optical flow maps while overcoming their issues. Additionally, a handcrafted spatiotemporal feature descriptor based on the Volume Local Directional Transition Pattern (VLDTP) is introduced, which extracts directional information by exploiting edge responses. Lastly, a stacked Bidirectional Long Short-Term Memory (Bi-LSTM) network along with a temporal mixed pooling scheme is designed to capture the dynamic information without noise interference. The extensive experimental investigation proves that the proposed multi-modal network outperforms most state-of-the-art approaches for dynamic scene understanding.
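
The temporal mixed pooling idea, combining max- and average-pooling over the frame axis, admits a very short sketch. The blending weight alpha is our assumption; the paper's exact scheme may differ:

```python
import numpy as np

def temporal_mixed_pooling(features, alpha=0.5):
    """Blend of max- and average-pooling over the time axis.
    `features` has shape (T, D): one D-dimensional vector per frame."""
    features = np.asarray(features, dtype=np.float64)
    return alpha * features.max(axis=0) + (1 - alpha) * features.mean(axis=0)
```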
5

Hu, Xing, Shiqiang Hu, Xiaoyu Zhang, Huanlong Zhang, and Lingkun Luo. "Anomaly Detection Based on Local Nearest Neighbor Distance Descriptor in Crowded Scenes." Scientific World Journal 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/632575.

Abstract:
We propose a novel local nearest neighbor distance (LNND) descriptor for anomaly detection in crowded scenes. Compared with the low-level feature descriptors commonly used in previous works, the LNND descriptor has two major advantages. First, it efficiently incorporates spatial and temporal contextual information around a video event, which is important for detecting anomalous interactions among multiple events, whereas most existing feature descriptors only contain information about a single event. Second, the LNND descriptor is a compact representation whose dimensionality is typically much lower than that of low-level feature descriptors. Therefore, using the LNND descriptor in an anomaly detection method with offline training not only saves computation time and storage, but also avoids the negative effects of high-dimensional feature descriptors. We validate the effectiveness of the LNND descriptor by conducting extensive experiments on different benchmark datasets. The experimental results show the promising performance of the LNND-based method against state-of-the-art methods. It is worth noting that the LNND-based approach requires fewer intermediate processing steps, without any subsequent processing such as smoothing, yet achieves comparable or even better performance.
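
The gist of the descriptor, summarizing an event by its distances to nearby events rather than by raw low-level features, can be sketched in a few lines. A hypothetical reading, with k chosen arbitrarily:

```python
import numpy as np

def lnnd_descriptor(event_feature, context_features, k=5):
    """Sorted distances from one video-event feature to its k nearest
    neighbors among the features of surrounding (spatio-temporal
    context) events: a compact k-dimensional descriptor instead of the
    raw high-dimensional low-level feature."""
    context = np.asarray(context_features, dtype=np.float64)
    dists = np.linalg.norm(context - np.asarray(event_feature), axis=1)
    return np.sort(dists)[:k]
```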
6

Zheng, Aihua, Foqin Wang, Amir Hussain, Jin Tang, and Bo Jiang. "Spatial-temporal representatives selection and weighted patch descriptor for person re-identification." Neurocomputing 290 (May 2018): 121–29. http://dx.doi.org/10.1016/j.neucom.2018.02.039.

7

Islam, Md Anwarul, Md Azher Uddin, and Young-Koo Lee. "A Distributed Automatic Video Annotation Platform." Applied Sciences 10, no. 15 (July 31, 2020): 5319. http://dx.doi.org/10.3390/app10155319.

Abstract:
In the era of digital devices and the Internet, thousands of videos are taken and shared through the Internet. Similarly, CCTV cameras in the digital city produce a large amount of video data that carry essential information. To handle the increased video data and generate knowledge, there is an increasing demand for distributed video annotation. Therefore, in this paper, we propose a novel distributed video annotation platform that explores both spatial and temporal information and then provides higher-level semantic information. The proposed framework is divided into two parts: spatial annotation and spatiotemporal annotation. Accordingly, we propose a spatiotemporal descriptor, the volume local directional ternary pattern-three orthogonal planes (VLDTP–TOP), implemented in a distributed manner using Spark. Moreover, we developed several state-of-the-art appearance-based and spatiotemporal feature descriptors on top of Spark. We also provide distributed video annotation services for end users, so that they can easily use the video annotation, as well as APIs for developing new video annotation algorithms. Due to the lack of a spatiotemporal video annotation dataset that provides ground truth for both spatial and temporal information, we introduce a video annotation dataset, STAD, which provides such ground truth. An extensive experimental analysis was performed to validate the performance and scalability of the proposed feature descriptors, demonstrating the effectiveness of our approach.
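
As a rough illustration of the distributed extraction idea, the following hypothetical PySpark sketch maps a placeholder per-frame descriptor over a parallelized frame collection; extract_descriptor is a stand-in, not the paper's VLDTP-TOP implementation:

```python
import numpy as np
from pyspark import SparkContext

def extract_descriptor(frame):
    # Placeholder spatial feature: a coarse, normalized intensity histogram.
    hist, _ = np.histogram(frame, bins=16, range=(0, 255))
    return hist / max(hist.sum(), 1)

if __name__ == "__main__":
    sc = SparkContext(appName="video-annotation-sketch")
    # Stand-in for decoded video frames.
    frames = [np.random.randint(0, 256, (120, 160)) for _ in range(100)]
    descriptors = sc.parallelize(frames).map(extract_descriptor).collect()
    sc.stop()
```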
8

SEURONT, LAURENT, and YVAN LAGADEUC. "VARIABILITY, INHOMOGENEITY AND HETEROGENEITY: TOWARDS A TERMINOLOGICAL CONSENSUS IN ECOLOGY." Journal of Biological Systems 09, no. 02 (June 2001): 81–87. http://dx.doi.org/10.1142/s0218339001000281.

Abstract:
Current widespread use of ecological terms such as variability, heterogeneity, and homogeneity is misleading and prevents ecologists from reaching a terminological consensus on what is meant when discussing these concepts, in particular with regard to the descriptor 'heterogeneous.' We propose the use of 'inhomogeneity' to define patterns or processes exhibiting a scale-dependent structure, whether spatial or temporal. Thus, the concept of 'inhomogeneity' can be regarded as a structural ecological entity. A descriptor exhibiting different kinds of inhomogeneity, either spatially or temporally, will then be qualified as heterogeneous. The terminological consensus introduced here within the frame of the ecological sciences is finally discussed and generalized to the scientific thought process at large.
9

Inturi, Anitha Rani, Vazhora Malayil Manikandan, Mahamkali Naveen Kumar, Shuihua Wang, and Yudong Zhang. "Synergistic Integration of Skeletal Kinematic Features for Vision-Based Fall Detection." Sensors 23, no. 14 (July 10, 2023): 6283. http://dx.doi.org/10.3390/s23146283.

Abstract:
According to the World Health Organisation, falling is a major health problem with potentially fatal implications. Each year, thousands of people die as a result of falls, with seniors making up 80% of these fatalities. The automatic detection of falls may reduce the severity of the consequences. Our study focuses on developing a vision-based fall detection system and proposes a new feature descriptor that results in a new fall detection framework. In our method, the body geometry of the subject is analyzed and patterns that help to distinguish falls from non-fall activities are identified. An AlphaPose network is employed to identify 17 keypoints on the human skeleton. Thirteen of these keypoints are used in our study, and we compute two additional keypoints. These 15 keypoints are divided into five segments, each consisting of a group of three non-collinear points; the five segments represent the left hand, right hand, left leg, right leg, and craniocaudal section. A novel feature descriptor is generated by extracting the distances within the segmented parts, the angles within the segmented parts, and the angle of inclination of every segmented part. As a result, we extract three features from each segment, giving 15 features per frame that preserve spatial information. To capture temporal dynamics, the extracted spatial features are arranged in temporal sequence, so the feature descriptor preserves the spatio-temporal dynamics. Thus, a feature descriptor of size [m×15] is formed, where m is the number of frames. To recognize fall patterns, machine learning approaches such as decision trees, random forests, and gradient boosting are applied to the feature descriptor. Our system was evaluated on the benchmark UPfall dataset and showed very good performance compared with state-of-the-art approaches.
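
The per-segment computation (distance, interior angle, and inclination from three non-collinear keypoints) can be illustrated as follows; this is a simplified reading of the abstract, and the exact geometric definitions may differ:

```python
import numpy as np

def segment_features(p1, p2, p3):
    """Features for one three-keypoint segment given 2D keypoints (x, y):
    the end-to-end distance, the interior angle at the middle point p2,
    and the inclination of the segment axis relative to the vertical."""
    a, b = p1 - p2, p3 - p2
    distance = np.linalg.norm(p1 - p3)
    cos_angle = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    axis = p3 - p1
    inclination = np.degrees(np.arctan2(abs(axis[0]), abs(axis[1])))
    return np.array([distance, angle, inclination])

# Five segments x three features = 15 features per frame; stacking m
# frames row-wise yields the [m x 15] descriptor described above.
print(segment_features(np.array([0., 0.]), np.array([1., 0.]),
                       np.array([1., 1.])))
```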
10

Yan, Jing Jie, and Ming Han Xin. "Facial Expression Recognition Based on Fused Spatio-Temporal Features." Applied Mechanics and Materials 347-350 (August 2013): 3780–85. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.3780.

Abstract:
Although spatio-temporal (ST) features have recently been developed and shown to be effective for facial expression recognition and behavior recognition in videos, the ST method directly flattens a cuboid into a vector to obtain the feature vector for recognition, which makes the resulting vector potentially sensitive to small cuboid perturbations or noise. To overcome this drawback, we propose a novel method called fused spatio-temporal features (FST), which utilizes separable linear filters to detect interest points and fuses two cuboid representation methods, a locally histogrammed gradient descriptor and the flattened cuboid vector, into the cuboid descriptor. The proposed FST method is robust to small cuboid perturbations and noise while preserving both spatial and temporal positional information. The experimental results on two video-based facial expression databases demonstrate the effectiveness of the proposed method.
11

Cen, Shixin, Yang Yu, Gang Yan, Ming Yu, and Qing Yang. "Sparse Spatiotemporal Descriptor for Micro-Expression Recognition Using Enhanced Local Cube Binary Pattern." Sensors 20, no. 16 (August 8, 2020): 4437. http://dx.doi.org/10.3390/s20164437.

Abstract:
As a spontaneous facial expression, a micro-expression can reveal the psychological responses of human beings. Micro-expression recognition can therefore be widely studied and applied for its potential in clinical diagnosis, psychological research, and security. However, micro-expression recognition is a formidable challenge due to the short duration and low intensity of the facial actions. In this paper, a sparse spatiotemporal descriptor for micro-expression recognition is developed using the Enhanced Local Cube Binary Pattern (Enhanced LCBP). The proposed Enhanced LCBP is composed of three complementary binary features: Spatial Difference Local Cube Binary Patterns (Spatial Difference LCBP), Temporal Direction Local Cube Binary Patterns (Temporal Direction LCBP), and Temporal Gradient Local Cube Binary Patterns (Temporal Gradient LCBP). With the Enhanced LCBP, binary features with complementary spatiotemporal information can capture subtle facial changes. In addition, because redundant information among the division grids weakens the descriptor's ability to distinguish micro-expressions, Multi-Regional Joint Sparse Learning is designed to perform feature selection over the division grids, paying more attention to the critical local regions. Finally, a multi-kernel Support Vector Machine (SVM) is employed to fuse the selected features for the final classification. The proposed method exhibits great advantages and achieves promising results on four spontaneous micro-expression datasets. Further examination of the parameter evaluation and confusion matrices confirms the sufficiency and effectiveness of the proposed method.
12

Zuo, Zheming, Bo Wei, Fei Chao, Yanpeng Qu, Yonghong Peng, and Longzhi Yang. "Enhanced Gradient-Based Local Feature Descriptors by Saliency Map for Egocentric Action Recognition." Applied System Innovation 2, no. 1 (February 19, 2019): 7. http://dx.doi.org/10.3390/asi2010007.

Abstract:
Egocentric video analysis is an important tool in healthcare that serves a variety of purposes, such as memory aid systems and physical rehabilitation, and feature extraction is an indispensable process for such analysis. Local feature descriptors have been widely applied due to their simple implementation and reasonable efficiency and performance in applications. This paper proposes an enhanced spatial and temporal local feature descriptor extraction method to boost the performance of action classification. The approach allows local feature descriptors to take advantage of saliency maps, which provide insights into visual attention. The effectiveness of the proposed method was validated and evaluated in a comparative study, whose results demonstrated an accuracy improvement of around 2%.
13

Xiang-Wei, Li, Li Zhan-Ming, Zhang Ming-Xin, Wang Yi-Ju, and Zhang Zhi-Xun. "Rough Sets based Temporal-spatial Color Descriptor Extraction Algorithm in Compressed Domain for Video Retrieval." Information Technology Journal 8, no. 4 (May 1, 2009): 610–14. http://dx.doi.org/10.3923/itj.2009.610.614.

14

Li, Chenyang, Xin Zhang, Lufan Liao, Lianwen Jin, and Weixin Yang. "Skeleton-Based Gesture Recognition Using Several Fully Connected Layers with Path Signature Features and Temporal Transformer Module." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8585–93. http://dx.doi.org/10.1609/aaai.v33i01.33018585.

Abstract:
Skeleton-based gesture recognition is gaining popularity due to its wide range of possible applications. The key issues are how to extract discriminative features and how to design the classification model. In this paper, we first leverage a robust feature descriptor, the path signature (PS), and propose three PS features to explicitly represent the spatial and temporal motion characteristics: spatial PS (S_PS), temporal PS (T_PS), and temporal-spatial PS (T_S_PS). Considering the significance of fine hand movements in gestures, we propose an "attention on hand" (AOH) principle to define joint pairs for the S_PS and to select a single joint for the T_PS. In addition, the dyadic method is employed to extract the T_PS and T_S_PS features, which encode global and local temporal dynamics of the motion. Secondly, without a recurrent strategy, the classification model still faces challenges from temporal variation among different sequences. We propose a new temporal transformer module (TTM) that can align the key frames of sequences by learning a temporal shifting parameter for each input; this is a learning-based module that can be included in standard neural network architectures. Finally, we design a multi-stream network of fully connected layers that treats spatial and temporal features separately and fuses them for the final result. We tested our method on three benchmark gesture datasets: ChaLearn 2016, ChaLearn 2013, and MSRC-12. Experimental results demonstrate that we achieve state-of-the-art performance on skeleton-based gesture recognition with high computational efficiency.
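
For readers unfamiliar with path signatures, the low-level terms have a compact closed form for piecewise-linear paths. Below is a minimal sketch of the level-1 and level-2 signature of a joint trajectory; it illustrates the PS feature only, not the paper's dyadic S_PS/T_PS/T_S_PS construction:

```python
import numpy as np

def path_signature_level2(path):
    """Level-1 and level-2 path signature terms of a piecewise-linear
    path of shape (T, d): the total increment vector and the d x d
    matrix of iterated integrals (exact for linear interpolation)."""
    path = np.asarray(path, dtype=np.float64)
    inc = np.diff(path, axis=0)        # per-step increments
    level1 = path[-1] - path[0]
    offset = path[:-1] - path[0]       # X_{t-1} - X_0 for each step
    # Iterated integral: sum over steps of (offset + half-step) x increment.
    level2 = offset.T @ inc + 0.5 * inc.T @ inc
    return level1, level2

# e.g. a 2D joint trajectory over four frames.
sig1, sig2 = path_signature_level2([[0, 0], [1, 0], [1, 1], [2, 1]])
```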
15

Ben Tamou, Abdelouahid, Lahoucine Ballihi, and Driss Aboutajdine. "Automatic Learning of Articulated Skeletons Based on Mean of 3D Joints for Efficient Action Recognition." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 04 (February 2, 2017): 1750008. http://dx.doi.org/10.1142/s0218001417500082.

Abstract:
In this paper, we present a new approach for human action recognition using 3D skeleton joints recovered from RGB-D cameras. We propose a descriptor based on differences of skeleton joints that combines two characteristics, static posture and overall dynamics, encoding the spatial and temporal aspects respectively. We then apply the mean function to these characteristics to form the feature vector, which is used as input to a Random Forest classifier for action classification. The experimental results on both the MSR Action 3D dataset and the MSR Daily Activity 3D dataset demonstrate that our approach is efficient and gives promising results compared with state-of-the-art approaches.
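
A minimal sketch of the joint-difference-plus-mean idea, assuming a (T, J, 3) array of joint coordinates; the choice of joint 0 as the root is our assumption:

```python
import numpy as np

def mean_joint_difference_features(skeletons):
    """Static posture: offsets of every joint from the root joint within
    each frame. Overall dynamics: joint displacement between consecutive
    frames. Both are averaged over time (the 'mean of 3D joints' idea)
    and concatenated into one feature vector."""
    skeletons = np.asarray(skeletons, dtype=np.float64)   # (T, J, 3)
    posture = skeletons - skeletons[:, :1, :]             # joint 0 as root
    dynamics = np.diff(skeletons, axis=0)                 # frame-to-frame motion
    return np.concatenate([posture.mean(axis=0).ravel(),
                           dynamics.mean(axis=0).ravel()])
```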
16

Arunnehru, Jawaharlalnehru, Sambandham Thalapathiraj, Ravikumar Dhanasekar, Loganathan Vijayaraja, Raju Kannadasan, Arfat Ahmad Khan, Mohd Anul Haq, Mohammed Alshehri, Mohamed Ibrahim Alwanin, and Ismail Keshta. "Machine Vision-Based Human Action Recognition Using Spatio-Temporal Motion Features (STMF) with Difference Intensity Distance Group Pattern (DIDGP)." Electronics 11, no. 15 (July 28, 2022): 2363. http://dx.doi.org/10.3390/electronics11152363.

Abstract:
In recent years, human action recognition has been modeled as a spatial-temporal video volume, and such approaches have expanded greatly due to explosively evolving real-world uses such as visual surveillance, autonomous driving, and entertainment. Specifically, the spatio-temporal interest points (STIPs) approach has been widely and efficiently used in action representation for recognition. In this work, a novel STIP-based approach is proposed for action description, comprising the Two-Dimensional Difference Intensity Distance Group Pattern (2D-DIDGP) and the Three-Dimensional Difference Intensity Distance Group Pattern (3D-DIDGP) for representing and recognizing human actions in video sequences. Initially, the approach captures local motion in a video that is invariant to size and shape changes; it is extended further to build unique and discriminative feature description methods that enhance the action recognition rate. Transformation methods such as the DCT (discrete cosine transform), the DWT (discrete wavelet transform), and a hybrid DWT+DCT are utilized. The proposed approach is validated on the UT-Interaction dataset, which has been extensively studied by past researchers. Classification methods such as Support Vector Machine (SVM) and Random Forest (RF) classifiers are then exploited. The results show that the proposed descriptors, especially the DIDGP-based descriptors, yield promising results on action recognition, and that 3D-DIDGP predominantly outperforms the state-of-the-art algorithms.
17

Ojo, John Adedapo, and Jamiu Alabi Oladosu. "Effective Smoke Detection Using Spatial-Temporal Energy and Weber Local Descriptors in Three Orthogonal Planes (WLD-TOP)." Journal of Computer Science and Technology 18, no. 01 (April 25, 2018): e05. http://dx.doi.org/10.24215/16666038.18.e05.

Abstract:
Video-based fire detection (VFD) technologies have recently received significant attention from both academic and industrial communities. However, existing VFD approaches are still susceptible to false alarms due to changes in illumination, camera noise, variability of shape, motion, and colour, irregular patterns of smoke and flames, and modelling and training inaccuracies. Hence, this work aimed at developing a video smoke detection (VSD) system with a high detection rate, a low false-alarm rate, and a short response time. Moving blocks in video frames were segmented and analysed in HSI colour space, and wavelet energy analysis of the candidate smoke blocks was performed. In addition, dynamic texture descriptors were obtained using the Weber Local Descriptor in Three Orthogonal Planes (WLD-TOP). These features were combined and used as inputs to a Support Vector Classifier with a radial basis kernel function, while the post-processing stage employs temporal image filtering to reduce false alarms. The algorithm was implemented in MATLAB 8.1.0.604 (R2013a). An accuracy of 99.30%, a detection rate of 99.28%, and a false alarm rate of 0.65% were obtained when tested with online videos. The output of this work would find application in early fire detection systems and in other applications such as robot vision and automated inspection.
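
The WLD component has a well-known closed form: the differential excitation arctan(sum(neighbors - center) / center). A single-plane numpy sketch follows; WLD-TOP repeats this on the XY, XT, and YT planes of the video volume:

```python
import numpy as np

def weber_differential_excitation(image):
    """Weber Local Descriptor's differential excitation for each
    interior pixel of a grayscale image:
    arctan((sum of 8 neighbors - 8 * center) / center)."""
    img = image.astype(np.float64) + 1e-6   # avoid division by zero
    c = img[1:-1, 1:-1]
    neighbor_sum = (img[:-2, :-2] + img[:-2, 1:-1] + img[:-2, 2:] +
                    img[1:-1, :-2] +                  img[1:-1, 2:] +
                    img[2:, :-2]  + img[2:, 1:-1]  +  img[2:, 2:])
    return np.arctan((neighbor_sum - 8.0 * c) / c)
```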
18

Yan, Xunshi, and Yupin Luo. "Recognizing human actions using a new descriptor based on spatial–temporal interest points and weighted-output classifier." Neurocomputing 87 (June 2012): 51–61. http://dx.doi.org/10.1016/j.neucom.2012.02.002.

19

Matsuzaka, Yasunari, and Yoshihiro Uesawa. "Ensemble Learning, Deep Learning-Based and Molecular Descriptor-Based Quantitative Structure–Activity Relationships." Molecules 28, no. 5 (March 6, 2023): 2410. http://dx.doi.org/10.3390/molecules28052410.

Abstract:
A deep learning-based quantitative structure–activity relationship analysis, namely the molecular image-based DeepSNAP–deep learning method, can successfully and automatically capture the spatial and temporal features in an image generated from the three-dimensional (3D) structure of a chemical compound. It allows high-performance prediction models to be built without extracting and selecting features, because of its powerful feature discrimination capability. Deep learning (DL) is based on a neural network with multiple intermediate layers, which makes it possible to solve highly complex problems and to improve prediction accuracy by increasing the number of hidden layers. However, DL models are too complex for the derivation of their predictions to be understood. Molecular descriptor-based machine learning, in contrast, offers interpretable features owing to the explicit selection and analysis of descriptors, but it has limitations in terms of prediction performance, calculation cost, and feature selection, while the DeepSNAP–deep learning method outperforms it owing to the utilization of 3D structure information and the advanced computer processing power of DL.
20

López Medina, Miguel Ángel, Macarena Espinilla, Cristiano Paggeti, and Javier Medina Quero. "Activity Recognition for IoT Devices Using Fuzzy Spatio-Temporal Features as Environmental Sensor Fusion." Sensors 19, no. 16 (August 11, 2019): 3512. http://dx.doi.org/10.3390/s19163512.

Abstract:
The IoT is a development field in which new approaches and trends are in constant change. In this scenario, new devices and sensors offer higher precision in everyday life in an increasingly less invasive way. In this work, we propose the use of spatial-temporal features by means of fuzzy logic as a general descriptor for heterogeneous sensors. This fuzzy sensor representation is highly efficient and enables devices with low computing power to perform learning and evaluation tasks in activity recognition using light and efficient classifiers. To show the methodology's potential in real applications, we deploy an intelligent environment where new UWB location devices, inertial objects, wearable devices, and binary sensors are connected with each other to describe daily human activities. We then apply the proposed fuzzy logic-based methodology to obtain spatial-temporal features and fuse the data from the heterogeneous sensor devices. A case study developed in the UJAmISmart Lab of the University of Jaen (Jaen, Spain) shows the encouraging performance of the methodology when recognizing the activity of an inhabitant using efficient classifiers.
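
The fuzzification step can be illustrated with a standard triangular membership function mapping a raw sensor reading to a degree of membership in a linguistic set; the set boundaries below are illustrative, not taken from the paper:

```python
import numpy as np

def triangular_membership(x, a, b, c):
    """Degree to which reading x belongs to a fuzzy set with triangular
    shape (a, b, c), assuming a < b < c: 0 outside [a, c], 1 at the
    peak b, linear in between."""
    x = np.asarray(x, dtype=np.float64)
    rising = (x - a) / (b - a)
    falling = (c - x) / (c - b)
    return np.clip(np.minimum(rising, falling), 0.0, 1.0)

# e.g. fuzzify distance readings (in meters) into a "near" set.
print(triangular_membership([0.2, 1.0, 2.5], a=0.0, b=0.5, c=2.0))
```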
21

Zhou, Xi-guo, Ning-de Jin, Zhen-ya Wang, and Wen-yin Zhang. "Temporal and spatial evolution characteristics of gas-liquid two-phase flow pattern based on image texture spectrum descriptor." Optoelectronics Letters 5, no. 6 (November 2009): 445–49. http://dx.doi.org/10.1007/s11801-009-8215-7.

22

Wang, Qiang, Qiao Ma, Chao-Hui Luo, Hai-Yan Liu, and Can-Long Zhang. "Hybrid Histogram of Oriented Optical Flow for Abnormal Behavior Detection in Crowd Scenes." International Journal of Pattern Recognition and Artificial Intelligence 30, no. 02 (February 2016): 1655007. http://dx.doi.org/10.1142/s0218001416550077.

Abstract:
Abnormal behavior detection in crowd scenes has received considerable attention in the field of public safety. Traditional motion models do not account for the continuity of motion characteristics between frames. In this paper, we present a new feature descriptor, the hybrid optical flow histogram. By importing the concept of acceleration, our method can indicate the change of speed in different directions of a movement, so the descriptor contains more information about the movement. We also introduce a spatial and temporal region saliency determination method to extract only the effective motion area of each sample, which effectively reduces computational costs, and we apply sparse representation to detect abnormal behaviors via sparse reconstruction costs. Sparse representation offers high recognition performance and stability. Experiments on the UMN datasets and on videos we recorded show that our method can effectively identify various types of anomalies, with recognition results better than those of existing algorithms.
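
A minimal sketch of the acceleration idea: concatenating a direction histogram of the flow field with a direction histogram of the flow change between consecutive frame pairs. The bin count and magnitude weighting are our assumptions:

```python
import numpy as np

def hybrid_flow_histogram(flow_prev, flow_next, n_bins=8):
    """Concatenates a direction histogram of optical flow with a
    direction histogram of acceleration (the change of flow between
    consecutive frame pairs). Flow fields have shape (H, W, 2)."""
    def direction_histogram(field):
        angles = np.arctan2(field[..., 1], field[..., 0])
        weights = np.linalg.norm(field, axis=-1)   # magnitude weighting
        hist, _ = np.histogram(angles, bins=n_bins,
                               range=(-np.pi, np.pi), weights=weights)
        return hist
    acceleration = flow_next - flow_prev
    return np.concatenate([direction_histogram(flow_next),
                           direction_histogram(acceleration)])
```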
23

Ji, Ze, and Quanquan Han. "A novel image feature descriptor for SLM spattering pattern classification using a consumable camera." International Journal of Advanced Manufacturing Technology 110, no. 11-12 (September 18, 2020): 2955–76. http://dx.doi.org/10.1007/s00170-020-05995-3.

Abstract:
In selective laser melting (SLM), spattering is an important phenomenon that is highly related to the quality of the manufactured parts. Characterisation and monitoring of spattering behaviours are highly valuable for understanding the manufacturing process and improving the manufacturing quality of SLM. This paper introduces an automatic visual classification method to distinguish the spattering characteristics of SLM processes under different manufacturing conditions. A compact feature descriptor is proposed to represent spattering patterns, and its effectiveness is evaluated using real images captured under different conditions. The feature descriptor combines information on spatter trajectory morphology, spatial distributions, and temporal behaviour. Classification is performed using a support vector machine (SVM) and random forests, showing highly promising classification accuracy of about 97%. The advantages of this work include the compactness of the representation and the semantic interpretability of the feature description. In addition, the quality of manufactured parts is mapped to spattering characteristics under different laser energy densities; such a mapping table can then be used to define the desired spatter features, providing a non-contact monitoring solution for online anomaly detection. This work will lead to further integration of a real-time vision monitoring system into an online closed-loop prognostic system for SLM, in order to improve performance in terms of manufacturing quality, power consumption, and fault detection.
24

Zhang, Biyao, Huichun Ye, Wei Lu, Wenjiang Huang, Bo Wu, Zhuoqing Hao, and Hong Sun. "A Spatiotemporal Change Detection Method for Monitoring Pine Wilt Disease in a Complex Landscape Using High-Resolution Remote Sensing Imagery." Remote Sensing 13, no. 11 (May 25, 2021): 2083. http://dx.doi.org/10.3390/rs13112083.

Abstract:
Using high-resolution remote sensing data to identify infected trees is an important method for controlling pine wilt disease (PWD). Currently, single-date image classification methods are widely used for PWD detection in pure pine stands. However, they often yield false detections caused by deciduous trees, brown herbaceous cover, and sparsely vegetated regions in complex landscapes, resulting in low user's accuracy. Because of the limited bands of high-resolution imagery, it is difficult to distinguish wilted pine trees from such easily confused objects using optical spectral characteristics alone. This paper proposes a spatiotemporal change detection method to reduce false detections in tree-scale PWD monitoring in a complex landscape. The framework consists of three parts, capturing spectral, temporal, and spatial features: (1) the Normalized Green-Red Difference Index (NGRDI) is calculated as a descriptor of canopy greenness; (2) two NGRDI images with similar dates in adjacent years are contrasted to obtain a bitemporal change index that represents the temporal behaviour of typical cover types; and (3) a spatial enhancement is performed on the change index using a convolution kernel matching the spatial patterns of PWD. Finally, a set of criteria based on the above features is established to extract the wilted pine trees. The results show that the proposed method effectively distinguishes wilted pine trees from other easily confused objects. Compared with single-date image classification, the proposed method significantly improved user's accuracy (81.2% vs. 67.7%) while maintaining the same level of producer's accuracy (84.7% vs. 82.6%).
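
Step (1) is a standard band index, NGRDI = (G - R) / (G + R); a one-function numpy version is shown below for reference:

```python
import numpy as np

def ngrdi(green, red):
    """Normalized Green-Red Difference Index per pixel. Values near
    zero or below suggest brown or wilted canopy; healthy green canopy
    scores higher."""
    g = green.astype(np.float64)
    r = red.astype(np.float64)
    return (g - r) / np.maximum(g + r, 1e-6)   # guard against 0/0
```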
25

Li, Xiaoqiang, Yi Zhang, and Dong Liao. "Mining Key Skeleton Poses with Latent SVM for Action Recognition." Applied Computational Intelligence and Soft Computing 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/5861435.

Abstract:
Human action recognition based on 3D skeletons has become an active research field in recent years with the development of commodity depth sensors. Most published methods analyze the entire 3D depth data, construct mid-level part representations, or use trajectory descriptors of spatial-temporal interest points for recognizing human activities. Unlike previous work, a novel and simple action representation is proposed in this paper that models an action as a sequence of inconsecutive and discriminative skeleton poses, named key skeleton poses. The pairwise relative positions of skeleton joints are used as features of the skeleton poses, which are mined with the aid of the latent support vector machine (latent SVM). The advantage of our method is its resistance to intraclass variation such as noise and large nonlinear temporal deformations of human action. We evaluate the proposed approach on three benchmark action datasets captured by Kinect devices: the MSR Action 3D dataset, the UTKinect Action dataset, and the Florence 3D Action dataset. The detailed experimental results demonstrate that the proposed approach achieves performance superior to state-of-the-art skeleton-based action recognition methods.
26

Priyadharshini, P., and B. S. E. Zoraida. "Hybrid Semantic Feature Descriptor and Fuzzy C-Means Clustering for Lung Cancer Detection and Classification." Journal of Computational and Theoretical Nanoscience 18, no. 4 (April 1, 2021): 1263–69. http://dx.doi.org/10.1166/jctn.2021.9391.

Abstract:
Lung cancer (LC) reduces quality of life and has a negative impact on the economy, so early and accurate detection is a priority. Among modern technologies for the early detection of LC, image processing has become an essential tool, as it can not only find the disease early and accurately but also successfully measure it. Various approaches have been developed to detect LC based on background modelling. Most of them focus on temporal information but partially or completely ignore spatial information, making them sensitive to noise. To overcome these issues, an improved hybrid semantic feature descriptor technique is introduced based on the Gray-Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP), and histogram of oriented gradients (HOG) feature extraction algorithms. In addition, to address the LC segmentation problem, a fuzzy c-means clustering algorithm (FCM) is used. Experiments and comparisons were conducted on the publicly available LIDC-IDRI dataset. To evaluate the proposed feature extraction performance, three different classifiers were analysed: artificial neural networks (ANNs), recursive neural networks, and recurrent neural networks (RNNs).
27

Uddin, Md, and Young-Koo Lee. "Feature Fusion of Deep Spatial Features and Handcrafted Spatiotemporal Features for Human Action Recognition." Sensors 19, no. 7 (April 2, 2019): 1599. http://dx.doi.org/10.3390/s19071599.

Abstract:
Human action recognition plays a significant part in the research community due to its emerging applications. A variety of approaches have been proposed to resolve this problem; however, several issues still need to be addressed. In action recognition, effectively extracting and aggregating spatial-temporal information plays a vital role in describing a video. In this research, we propose a novel approach to recognize human actions by considering both deep spatial features and handcrafted spatiotemporal features. Firstly, we extract deep spatial features by employing a state-of-the-art deep convolutional network, Inception-ResNet-v2. Secondly, we introduce a novel handcrafted feature descriptor, Weber's law based Volume Local Gradient Ternary Pattern (WVLGTP), which brings out the spatiotemporal features and also considers shape information through a gradient operation. Furthermore, a Weber's law based threshold value and a ternary pattern based on an adaptive local threshold are presented to effectively handle noisy center pixel values. Besides, a multi-resolution approach for WVLGTP based on an averaging scheme is also presented. Afterward, both extracted feature sets are concatenated and fed to a Support Vector Machine to perform the classification. Lastly, extensive experimental analysis shows that our proposed method outperforms state-of-the-art approaches in terms of accuracy.
28

Máñez-Crespo, Julia, Fiona Tomas, Yolanda Fernández-Torquemada, Laura Royo, Fernando Espino, Laura Antich, Néstor E. Bosch, et al. "Variation in Fish Abundance, Diversity and Assemblage Structure in Seagrass Meadows across the Atlanto-Mediterranean Province." Diversity 14, no. 10 (September 28, 2022): 808. http://dx.doi.org/10.3390/d14100808.

Abstract:
Seagrasses worldwide provide key habitats for fish assemblages. Biogeographical disparities in ocean climate conditions and seasonal regimes are well-known drivers of the spatial and temporal variation in seagrass structure, with potential effects on associated fish assemblages. Whether taxonomically disparate fish assemblages support a similar range of ecological functions remains poorly tested in seagrass ecosystems. In this study, we examined variation in the abundance, diversity (from a taxonomic and functional perspective), and assemblage structure of the fish communities inhabiting nine meadows of the seagrass Cymodocea nodosa across three regions in the Mediterranean (Mallorca and Alicante) and the adjacent Atlantic (Gran Canaria), and identified which attributes typifying the structure of meadows, along with large-scale variability in ocean climate, contributed most to explaining such ecological variation. Despite a similar total number of species between Mallorca and Gran Canaria, the latter region had more taxonomically and functionally diverse fish assemblages relative to the western Mediterranean regions, which translated into differences in multivariate assemblage structure. While variation in the abundance of the most conspicuous fish species was largely explained by variation in seagrass structural descriptors, most variation in diversity was accounted for by a descriptor of ocean climate (mean seasonal SST) operating at regional scales. Variation in fish assemblage structure was, to a lesser extent, also explained by local variability in seagrass structure. Beyond climatic drivers, our results suggest that the lower temporal variability in the canopy structure of C. nodosa meadows in Gran Canaria provides a more consistent source of food and protection for associated fish assemblages, which likely enhances the more abundant and diverse fish assemblages there.
29

Yao, Lingxiang, Worapan Kusakunniran, Qiang Wu, Jingsong Xu, and Jian Zhang. "Recognizing Gaits Across Walking and Running Speeds." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 3 (August 31, 2022): 1–22. http://dx.doi.org/10.1145/3488715.

Abstract:
For decades, very few methods have been proposed for cross-mode (i.e., walking vs. running) gait recognition; thus, how to recognize persons by the way they walk and run remains largely unexplored. Existing cross-mode methods handle the walking-versus-running problem in two ways, either by exploring the generic mapping relation between the walking and running modes or by extracting gait features that are non- or less vulnerable to the changes across these two modes. However, for the first approach, a mapping relation that fits one person may not be applicable to another; there is no generic mapping relation, given that walking and running are two highly self-related motions. The second approach does not pay much attention to the disparity between the walking and running modes, since mode labels are not involved in its feature learning processes. Distinct from these existing cross-mode methods, in our method mode labels are used in the feature learning process, and a mode-invariant gait descriptor is hybridized for cross-mode gait recognition to handle this walking-versus-running problem. Further research is presented in this article to investigate the disparity between walking and running: running differs from walking not only in speed variance but also, more significantly, in prominent gesture and motion changes. Following these rationales, our proposed method pays more attention to the differences between the walking and running modes, and a robust gait descriptor is developed to hybridize mode-invariant spatial and temporal features. Two multi-task learning-based networks are proposed in this method to explore these mode-invariant features: spatial features describe the body parts non- or less affected by mode changes, and temporal features depict the intrinsic motion relation of each person. Mode labels are also adopted in the training phase to guide the network to pay more attention to the disparity across the walking and running modes. In addition, experiments on OU-ISIR Treadmill Dataset A have affirmed the effectiveness and feasibility of the proposed method, which achieves a state-of-the-art result on this dataset.
30

Siritanawan, Prarinya, Kazunori Kotani, and Fan Chen. "Cumulative Differential Gabor Features for Facial Expression Classification." International Journal of Semantic Computing 09, no. 02 (June 2015): 193–213. http://dx.doi.org/10.1142/s1793351x15400036.

Abstract:
Emotions are written all over our faces, and facial expressions of emotions can potentially be read by computer vision and machine learning systems. According to evidence in cognitive science, the perception of facial dynamics is necessary for understanding the facial expression of human emotions. Our previous study proposed a temporal feature to model the levels of facial muscle activation; however, the quality of that feature suffers from various types of interference such as translation, scaling, noise, blurriness, and varying illumination. To cope with such problems, we derive a novel feature descriptor by expanding 2D Gabor features to time-series data. This feature is called the Cumulative Differential Gabor (CDG) feature. We then use a discriminative subspace for estimating the emotion class. As a result, our method gains the advantages of using both spatial and frequency components. The experimental results show the performance of the method and its robustness to the underlying conditions.
31

Kuang, Yiqun, Hong Cheng, Yali Zheng, Fang Cui, and Rui Huang. "One-shot gesture recognition with attention-based DTW for human-robot collaboration." Assembly Automation 40, no. 1 (August 2, 2019): 40–47. http://dx.doi.org/10.1108/aa-11-2018-0228.

Abstract:
Purpose: This paper aims to present a one-shot gesture recognition approach that can serve as a highly efficient communication channel in human-robot collaboration systems. Design/methodology/approach: This paper applies dynamic time warping (DTW) to align two gesture sequences in the temporal domain, with a novel frame-wise distance measure that matches local features in the spatial domain. Furthermore, a novel and robust bidirectional attention region extraction method is proposed to retain information in both the movement and hold phases of a gesture. Findings: The proposed approach is capable of efficient one-shot gesture recognition without elaborately designed features. Experiments on a social robot (JiaJia) demonstrate that the proposed approach can be used flexibly in a human-robot collaboration system. Originality/value: According to previous literature, there are no similar solutions that achieve efficient gesture recognition with a simple local feature descriptor while combining the advantages of local features with DTW.
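
The DTW backbone of the approach is classical and easy to sketch; the paper's contribution, the attention-based local-feature frame distance, is abstracted here behind the frame_dist argument:

```python
import numpy as np

def dtw_distance(seq_a, seq_b, frame_dist=None):
    """Classic dynamic time warping between two sequences of frame
    feature vectors (shapes (Ta, D) and (Tb, D)). frame_dist is the
    frame-wise distance; plain Euclidean distance is used here as a
    stand-in for the paper's local-feature matching measure."""
    if frame_dist is None:
        frame_dist = lambda x, y: np.linalg.norm(x - y)
    ta, tb = len(seq_a), len(seq_b)
    acc = np.full((ta + 1, tb + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, ta + 1):
        for j in range(1, tb + 1):
            cost = frame_dist(seq_a[i - 1], seq_b[j - 1])
            # Best of insertion, deletion, and match moves.
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1],
                                   acc[i - 1, j - 1])
    return acc[ta, tb]
```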
32

Mastan, Ch, Ch Ravindra, T. Kishore, T. Harish, R. Veeranjaneyulu, and Dr A. Seshagiri Rao. "An Efficient Approach for Patterns of Oriented Motion Flow Facial Expression Recognition from Depth Video." International Journal of Innovative Research in Computer Science & Technology 10, no. 5 (September 27, 2022): 149–51. http://dx.doi.org/10.55524/ijircst.2022.10.5.22.

Abstract:
Patterns of oriented motion flow (POMF), computed from optical flow data, is a novel feature representation method that we propose in this paper to recognize facial expressions from facial video. The POMF encodes the directional flow with enhanced local texture micro-patterns and computes distinct directional motion information. It captures the spatial and temporal changes caused by facial movements through optical flow, allowing it to examine both local and global structures. Finally, a hidden Markov model (HMM) is trained on the expression model using the POMF histogram. The observation sequences used to train the HMM are generated with the K-means clustering method to create a codebook. The performance of the proposed technique has been evaluated on video from RGB and depth cameras. The experimental results show that the proposed POMF descriptor is more effective at extracting facial information than other promising approaches and achieves a higher classification rate.
33

Zin, Thi Thi, Ye Htet, Yuya Akagi, Hiroki Tamura, Kazuhiro Kondo, Sanae Araki, and Etsuo Chosa. "Real-Time Action Recognition System for Elderly People Using Stereo Depth Camera." Sensors 21, no. 17 (September 1, 2021): 5895. http://dx.doi.org/10.3390/s21175895.

Abstract:
Smart technologies are necessary for ambient assisted living (AAL) to help family members, caregivers, and health-care professionals provide care for elderly people independently. Among these technologies, the current work proposes a computer vision-based solution that can monitor the elderly by recognizing actions using a stereo depth camera. In this work, we introduce a system that fuses feature extraction methods from previous works into a novel combination for action recognition. Using depth frame sequences provided by the depth camera, the system localizes people by extracting different regions of interest (ROI) from UV-disparity maps. As feature vectors, the spatial-temporal features of two action representation maps (depth motion appearance (DMA) and depth motion history (DMH), with a histogram of oriented gradients (HOG) descriptor) are used in combination with distance-based features, fused together with an automatic rounding method for action recognition over continuous long frame sequences. The experimental results, obtained on random frame sequences from a dataset collected at an elder care center, demonstrate that the proposed system can detect various actions in real time with reasonable recognition rates, regardless of the length of the image sequences.
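
The two action representation maps can be approximated from frame differences as sketched below; this is our approximate reading of DMA/DMH, and the motion threshold is an assumption:

```python
import numpy as np

def depth_motion_maps(depth_frames, motion_threshold=0.05):
    """Two simple maps over a depth sequence of shape (T, H, W):
    a DMA-style accumulated motion-energy map (sum of absolute frame
    differences) and a DMH-style motion-history map holding the
    normalized timestamp of the most recent significant motion at each
    pixel. A HOG descriptor would then be computed on each map."""
    frames = np.asarray(depth_frames, dtype=np.float64)
    diffs = np.abs(np.diff(frames, axis=0))    # (T-1, H, W)
    dma = diffs.sum(axis=0)                    # accumulated motion energy
    dmh = np.zeros(frames.shape[1:])
    for t, d in enumerate(diffs, start=1):
        dmh[d > motion_threshold] = t          # time of last motion
    return dma, dmh / len(diffs)               # history scaled to [0, 1]
```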
34

Bai, Shizhen, and Fuli Han. "Tourist Behavior Recognition Through Scenic Spot Image Retrieval Based on Image Processing." Traitement du Signal 37, no. 4 (October 10, 2020): 619–26. http://dx.doi.org/10.18280/ts.370410.

Abstract:
The monitoring of tourist behaviors, coupled with the recognition of scenic spots, greatly improves the quality and safety of travel. Visual information constitutes the underlying features of scenic spot images, but the semantics of this information have not been satisfactorily classified or described. Based on image processing technologies, this paper presents a novel method for scenic spot retrieval and tourist behavior recognition. Firstly, the framework of scenic spot image retrieval is constructed, followed by a detailed introduction to the extraction of scale-invariant feature transform (SIFT) features. SIFT feature extraction includes five steps: scale space construction, local extreme point detection in scale space, precise positioning of key points, determination of key point scale and orientation, and generation of the SIFT descriptor. Next, multiple correlated images are mined for the target scenic spot image, and the feature matching method between the target image and the set of scenic spot images is introduced in detail. On this basis, a tourist behavior recognition method is designed based on temporal and spatial consistency. The proposed method was proved effective through experiments, and the results provide a theoretical reference for image retrieval and behavior recognition in many other fields.
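
The SIFT extraction and matching steps correspond to a standard OpenCV pipeline, sketched below with hypothetical file names; this is not the paper's full retrieval framework:

```python
import cv2

# SIFT keypoints plus brute-force ratio-test matching between a query
# photo and one scenic-spot image.
img1 = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scenic_spot.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
# Lowe's ratio test keeps only distinctive correspondences.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} reliable SIFT correspondences")
```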
35

Chen, J., J. L. Hou, and M. Deng. "AN APPROACH TO ALLEVIATE THE FALSE ALARM IN BUILDING CHANGE DETECTION FROM URBAN VHR IMAGE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 21, 2016): 459–65. http://dx.doi.org/10.5194/isprs-archives-xli-b7-459-2016.

Abstract:
Building change detection from very-high-resolution (VHR) urban remote sensing images frequently encounters the challenge of serious false alarms caused by different illumination or viewing angles in bi-temporal images. An approach to alleviate these false alarms in urban building change detection is proposed in this paper. Firstly, as the shadows cast by urban buildings have distinct spectral and shape features, a supervised object-based classification technique is adopted to extract them. Secondly, on the side opposite the sunlight illumination, a straight line is drawn along the principal orientation of the building in every extracted shadow region; starting from this line and moving toward the sunlight direction, a rectangular area is constructed to cover part of the shadow and rooftop of each building. Thirdly, a method based on algebraic and geometric invariants is used to extract the spatial topological relationship of the potentially unchanged buildings from the central points of the rectangular areas. Finally, based on an oriented texture curvature descriptor, an index is established to determine the actual false alarms in the building change detection result. The experimental results validate that the proposed method can be used as an effective framework for alleviating false alarms in building change detection from urban VHR images.
36

Chen, J., J. L. Hou, and M. Deng. "AN APPROACH TO ALLEVIATE THE FALSE ALARM IN BUILDING CHANGE DETECTION FROM URBAN VHR IMAGE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 21, 2016): 459–65. http://dx.doi.org/10.5194/isprsarchives-xli-b7-459-2016.

Abstract:
Building change detection from very-high-resolution (VHR) urban remote sensing images frequently encounters the challenge of serious false alarms caused by different illumination or viewing angles in bi-temporal images. An approach to alleviate these false alarms in urban building change detection is proposed in this paper. Firstly, as the shadows cast by urban buildings have distinct spectral and shape features, a supervised object-based classification technique is adopted to extract them. Secondly, on the side opposite the sunlight illumination, a straight line is drawn along the principal orientation of the building in every extracted shadow region; starting from this line and moving toward the sunlight direction, a rectangular area is constructed to cover part of the shadow and rooftop of each building. Thirdly, a method based on algebraic and geometric invariants is used to extract the spatial topological relationship of the potentially unchanged buildings from the central points of the rectangular areas. Finally, based on an oriented texture curvature descriptor, an index is established to determine the actual false alarms in the building change detection result. The experimental results validate that the proposed method can be used as an effective framework for alleviating false alarms in building change detection from urban VHR images.
37

Suchorski, Yuri, and Günther Rupprechter. "Catalysis by Imaging: From Meso- to Nano-scale." Topics in Catalysis 63, no. 15-18 (July 2, 2020): 1532–44. http://dx.doi.org/10.1007/s11244-020-01302-2.

Full text of the source
Abstract:
In-situ imaging of catalytic reactions has provided insights into reaction front propagation, pattern formation and other spatio-temporal effects for decades. Most recently, analysis of the local image intensity opened a way towards evaluation of local reaction kinetics. Herein, our recent studies of catalytic CO oxidation on Pt(hkl) and Rh(hkl) via the kinetics-by-imaging approach, both on the meso- and nano-scale, are reviewed. Polycrystalline Pt and Rh foils and nanotips were used as µm- and nm-sized surface structure libraries, serving as model systems for reactions in the 10⁻⁵–10⁻⁶ mbar pressure range. Isobaric light-off and isothermal kinetic transitions were visualized in-situ at µm resolution by photoemission electron microscopy (PEEM), and at nm resolution by field emission microscopy (FEM) and field ion microscopy (FIM). The local reaction kinetics of individual Pt(hkl) and Rh(hkl) domains and of nanofacets of Pt and Rh nanotips were deduced from analysis of the local image intensity. This revealed the structure-sensitivity of CO oxidation, both in the light-off and in the kinetic bistability: for different low-index Pt surfaces, differences of up to 60 K in the critical light-off temperatures and remarkable differences in the bistability ranges of differently oriented stepped Rh surfaces were observed. To prove the spatial coherence of light-off on nanotips, proper orthogonal decomposition (POD) was applied as a spatial correlation analysis to the FIM video data. The influence of particular configurations of steps and kinks on kinetic transitions was analysed by using the average nearest-neighbour number as a common descriptor. Perspectives of nanosized surface structure libraries for future model studies are discussed.
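The POD step mentioned above is, in essence, a singular value decomposition of the mean-subtracted video stack. A minimal Python sketch (assuming the FIM video is available as a NumPy array of frames; this is not the authors' code) could look like:

    import numpy as np

    def pod_modes(frames, n_modes=5):
        # frames: (T, H, W) stack of FIM video frames
        T, H, W = frames.shape
        X = frames.reshape(T, H * W).astype(float)
        X -= X.mean(axis=0)                          # remove temporal mean
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        modes = Vt[:n_modes].reshape(n_modes, H, W)  # spatial POD modes
        coeffs = U[:, :n_modes] * S[:n_modes]        # temporal coefficients
        energy = S**2 / np.sum(S**2)                 # relative mode energies
        return modes, coeffs, energy[:n_modes]

A spatially coherent light-off would show up as a single dominant mode, i.e. the first relative energy close to one.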
APA, Harvard, Vancouver, ISO, and other styles.
38

Humeau, L., P. Roumagnac, Y. Picard, I. Robène-Soustrade, F. Chiroleu, L. Gagnevin, and O. Pruvost. "Quantitative and Molecular Epidemiology of Bacterial Blight of Onion in Seed Production Fields." Phytopathology® 96, no. 12 (December 2006): 1345–54. http://dx.doi.org/10.1094/phyto-96-1345.

Full text of the source
Abstract:
Onion, a biennial plant species, is threatened by the emerging, seed-borne, and seed-transmitted Xanthomonas axonopodis pv. allii. Bacterial blight epidemics were monitored in seed production fields over two seasons. Temporal disease progress was different between the two seasons, with final incidence ranging from 0.04 to 0.06 in 2003 and from 0.44 to 0.61 in 2004. The number of hours with temperatures above 24°C was the best descriptor for predicting the number of days after inoculation for bacterial blight development on inoculated plants. Fitting the β-binomial distribution and binary power law analysis indicated aggregated patterns of disease incidence data. The β-binomial distribution was superior to the binomial distribution for 97% of the examined data sets. Spatial dependency ranged from 5.9 to 15.2 m, as determined by semivariance analysis. Based on amplified fragment length polymorphism (AFLP) analysis, it was concluded that plots predominantly were infected by the inoculated haplotype. A single other haplotype was identified by AFLP in all plots over the 2 years, and its detection in the field always followed wind-driven rains. X. axonopodis pv. allii-contaminated seed were detected by semiselective isolation and a nested polymerase chain reaction assay at levels up to 0.05% when final disease incidence was 0.61. Contaminated seed originated from both diseased and asymptomatic plants.
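The β-binomial fitting used to diagnose aggregation can be sketched in a few lines of Python. The function below is a generic maximum-likelihood fit (variable names and the optimizer choice are illustrative assumptions, not the authors' procedure); aggregation is indicated when the fitted β-binomial is clearly superior to the plain binomial:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import betaln, gammaln

    def fit_beta_binomial(y, n):
        # y: diseased plants per quadrat; n: plants assessed per quadrat
        y, n = np.asarray(y, float), np.asarray(n, float)

        def nll(log_params):
            a, b = np.exp(log_params)        # keep alpha, beta positive
            ll = (gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
                  + betaln(y + a, n - y + b) - betaln(a, b))
            return -np.sum(ll)

        res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
        a, b = np.exp(res.x)
        return a, b   # theta = 1 / (a + b) > 0 suggests aggregation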
APA, Harvard, Vancouver, ISO, and other styles.
39

Hu, Wenmin, Jiaxing Xu, Wei Zhang, Jiatao Zhao, and Haokun Zhou. "Retrieving Surface Deformation of Mining Areas Using ZY-3 Stereo Imagery and DSMs." Remote Sensing 15, no. 17 (September 1, 2023): 4315. http://dx.doi.org/10.3390/rs15174315.

Full text of the source
Abstract:
Measuring surface deformation is crucial for a better understanding of the spatial-temporal evolution and mechanism of mining-induced deformation, and thus for effectively assessing mining-related geohazards such as landslides or damage to surface infrastructure. This study proposes a method of retrieving surface deformation by combining multi-temporal digital surface models (DSMs) with homonymous image features using China's ZY-3 satellite stereo imagery. The DSM is generated from three-line-array images of the ZY-3 satellite using a rational function model (RFM) as the imaging geometric model. Elevation changes are then extracted using the difference of DSMs acquired at different times, while planar displacements are calculated from homonymous image features extracted from multi-temporal digital orthographic maps (DOMs). Scale-invariant feature transform (SIFT) points and line band descriptor (LBD) lines are selected as two kinds of salient features for generating the homonymous image features. Cross profiles are also extracted for deformation in typical regions. Four sets of stereo imagery acquired from 2012 to 2022 are used for deformation extraction and analysis in the Fushun coalfield of China, where surface deformation is quite distinct and couples rising and descending elevation. The results show that 21.60% of the surface in the study area was deformed from 2012 to 2017, declining to 17.19% from 2017 to 2022 (at a 95% confidence interval). Moreover, the ratio of descending area was reduced to 6.44% between 2017 and 2022, lower than in the other years. The slip deformation area in the west open-pit mine is about 1.22 km2, and the displacement on the south slope is large, reaching an average of 26.89 m and sliding from south to north to the bottom of the pit between 2012 and 2017; elevations increased by an average of about 16.35 m over an area of about 0.86 km2 between 2017 and 2022 due to the restoration of the open pit. The results demonstrate that more quantitative features and specific surface deformation can be retrieved in mining areas by combining image features with DSMs derived from ZY-3 satellite stereo imagery.
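The two-part retrieval, elevation change from DSM differencing plus planar displacement from matched image features, can be illustrated with a short OpenCV sketch. It assumes co-registered input arrays and uses SIFT with a ratio test for the homonymous points (a simplification; the paper also uses LBD line features):

    import numpy as np
    import cv2

    def deformation_from_epochs(dsm_t1, dsm_t2, dom_t1, dom_t2):
        # dsm_*: co-registered float DSMs; dom_*: 8-bit grayscale orthophotos
        dz = dsm_t2 - dsm_t1                       # elevation-change map

        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(dom_t1, None)
        k2, d2 = sift.detectAndCompute(dom_t2, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
        # Lowe's ratio test keeps only reliable homonymous points
        good = [m for m, s in matches if m.distance < 0.7 * s.distance]
        disp = np.array([np.subtract(k2[m.trainIdx].pt, k1[m.queryIdx].pt)
                         for m in good])           # planar displacements (px)
        return dz, disp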
APA, Harvard, Vancouver, ISO, and other styles.
40

Hao, Yu, Ying Liu, Jiulun Fan, and Zhijie Xu. "Group Abnormal Behaviour Detection Algorithm Based on Global Optical Flow." Complexity 2021 (May 5, 2021): 1–12. http://dx.doi.org/10.1155/2021/5543204.

Full text of the source
Abstract:
Abnormal behaviour detection algorithms need to conduct behaviour analysis on the basis of continuous video target tracking, and their robustness is reduced by occlusion of moving targets, occlusion by the environment, and the movement of targets of the same colour. For this reason, in the context of group behaviour, the optical flow information between RGB (red, green, and blue) video frames is used as the input of the network. The direction, velocity, acceleration, and energy of the crowd are then weighted and fused into a global optical flow descriptor, while the crowd trajectory map is extracted from the original image of a single frame. Subsequently, in order to detect moving targets with large displacements and to overcome the limitation that traditional optical flow algorithms are only suitable for small-displacement targets, a video abnormal behaviour detection algorithm based on a double-flow (two-stream) convolutional neural network is proposed. The network uses two branches to learn spatial-dimension and temporal-dimension information, respectively, and employs a short- and long-term neural network to model the dependency relationships between temporally distant video frames, so as to obtain the final behaviour classification results. Simulation test results show that the proposed method achieves a good recognition effect on multiple datasets and that the performance of abnormal behaviour detection is significantly improved by using inter-frame motion information.
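As an illustration of the fusion step, the following Python sketch computes dense optical flow between consecutive frames and fuses direction, velocity, acceleration, and energy into one global descriptor. The weights and histogram size are assumptions made for the sketch, not values from the paper:

    import numpy as np
    import cv2

    def global_flow_descriptor(prev_gray, curr_gray, prev_mag=None,
                               w=(0.25, 0.25, 0.25, 0.25)):
        # prev_gray, curr_gray: consecutive 8-bit grayscale frames
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        direction = np.histogram(ang, bins=8, range=(0, 2 * np.pi),
                                 weights=mag)[0]   # weighted orientations
        velocity = mag.mean()                      # mean crowd speed
        accel = 0.0 if prev_mag is None else np.abs(mag - prev_mag).mean()
        energy = (mag ** 2).mean()                 # kinetic-energy proxy
        desc = np.hstack([w[0] * direction / (direction.sum() + 1e-8),
                          w[1] * velocity, w[2] * accel, w[3] * energy])
        return desc, mag   # pass mag back in as prev_mag for the next frame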
APA, Harvard, Vancouver, ISO, and other styles.
41

Lye, Mohd Haris, Nouar AlDahoul, and Hezerul Abdul Karim. "Fusion of Appearance and Motion Features for Daily Activity Recognition from Egocentric Perspective." Sensors 23, no. 15 (July 30, 2023): 6804. http://dx.doi.org/10.3390/s23156804.

Full text of the source
Abstract:
Videos from a first-person or egocentric perspective offer a promising tool for recognizing various activities of daily living. In the egocentric perspective, the video is obtained from a wearable camera, which enables the capture of the person’s activities from a consistent viewpoint. Recognition of activity using a wearable sensor is challenging for various reasons, such as motion blur and large variations. Existing methods are based on extracting handcrafted features from video frames to represent their contents. These features are domain-dependent: features that are suitable for a specific dataset may not be suitable for others. In this paper, we propose a novel solution to recognize daily living activities from a pre-segmented video clip. The pre-trained convolutional neural network (CNN) model VGG16 is used to extract visual features from sampled video frames, which are then aggregated by the proposed pooling scheme. The proposed solution combines appearance and motion features extracted from video frames and optical flow images, respectively. The methods of mean and max spatial pooling (MMSP) and max mean temporal pyramid (TPMM) pooling are proposed to compose the final video descriptor. The feature is fed to a linear support vector machine (SVM) to recognize the type of activity observed in the video clip. The proposed solution was evaluated on three public benchmark datasets. We performed studies to show the advantage of aggregating appearance and motion features for daily activity recognition. The results show that the proposed solution is promising for recognizing activities of daily living. Compared to several methods on the three public datasets, the proposed MMSP–TPMM method produces higher classification performance in terms of accuracy (90.38% with the LENA dataset, 75.37% with the ADL dataset, 96.08% with the FPPA dataset) and average per-class precision (AP) (58.42% with the ADL dataset and 96.11% with the FPPA dataset).
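The two pooling schemes can be paraphrased in NumPy as follows. This is a sketch of the operations as named in the abstract (array shapes are assumptions), not the authors' released code:

    import numpy as np

    def mmsp(feature_maps):
        # feature_maps: (T, H, W, C) CNN activations for T sampled frames
        f = feature_maps.reshape(feature_maps.shape[0], -1,
                                 feature_maps.shape[-1])
        return np.concatenate([f.mean(axis=1), f.max(axis=1)], axis=1)  # (T, 2C)

    def tpmm(per_frame, levels=(1, 2, 4)):
        # per_frame: (T, D) frame descriptors; temporal pyramid: average
        # within each segment, then element-wise max across the segments
        out = []
        for L in levels:
            segs = np.array_split(per_frame, L, axis=0)
            means = np.stack([s.mean(axis=0) for s in segs])
            out.append(means.max(axis=0))
        return np.concatenate(out)   # final fixed-length video descriptor

A video descriptor would then be composed as tpmm(mmsp(features)) before being passed to the linear SVM.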
APA, Harvard, Vancouver, ISO, and other styles.
42

Bueno de las Heras, Julio L., Antonio Gutiérrez-Lavín, Manuel María Mahamud-López, Marisol Muñiz-Álvarez, and Patricia Rodríguez-López. "Towards a Unified Model on the Description and Design of Process Operations: Extending the concept of Separation Units to Solid-fluid Sedimentation." Recent Innovations in Chemical Engineering (Formerly Recent Patents on Chemical Engineering) 12, no. 1 (June 25, 2019): 15–53. http://dx.doi.org/10.2174/2405520412666181123094540.

Full text of the source
Abstract:
Background: Bridging the gap between different phenomena, mechanisms and levels of description, different design methods can converge in a unitary way of formulation. This protocol consolidates the analogy and parallelism in the description of any unit operation of separation, such as the particular case of sedimentation. This holistic framework is compatible and complementary with other methodologies handled at length, and tries to contribute to the integration of some imaginative and useful (but marginal, heuristic or rustic) procedures for the design of settlers and thickeners within a well-founded and unified methodology. Objective: Classical models for hindered sedimentation allow solid flux in the direction of the gravity field to be formulated by analogy to changes obeying a potential, such as molecular transfer in the direction of the gradient and chemical transformation along the reaction coordinate. This article justifies the fundamentals of such a suggestive generalized analogy through the definition of the time of the sedimentation unit (TSU), the effective surface area of a sedimentation unit (ASU) and the number of sedimentation units (NSU) as elements of a sizing equation. Methods: This article also introduces the generalization of the model ab initio: analogy is a well-known and efficient tool, not only in the interpretation of events for academic or coaching purposes, but also in the generalized modelling, prospection, innovation, analysis and synthesis of technological processes. Chemical Engineering protocols for the basic dimensioning of Unit Operations driven by potentials (momentum, heat and mass transfer, chemical reaction) are founded on macroscopic balances of mass and energy. Results: These balances, emphatically called “design equations”, result from the integration of mechanistic differential formulations at the microscopic level of description (“equations of variation”). In turn, these equations include phenomenological terms that may be formulated in corpuscular terms in the field of Chemical Physics. The design equation correlates equipment requirements (e.g. any practical form of size and residence or elapsed time for an efficient interaction) to the objectives of the operation (e.g. variations in the mass or energy contents of a confined or flowing system). This formulation allows the identification of different contributions: intrinsic terms (related to the mechanistic kinetics of the phenomena) and circumstantial terms (related to the conditions and variables of operation). Conclusion: In fact, this model suggests that the temporal or spatial dimensions of the equipment may be assumed to depend independently on two design contributions: the entity of a representative “unit of operation (or process)”, illustrated by a descriptor of this dimension, and the “number of (these) units” needed to achieve the separating or transformative objectives of the operation.
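The core of the proposed sizing analogy can be condensed into one formula, written here in LaTeX by analogy with the classical transfer-unit method (H = HTU × NTU); the exact notation is an assumption for illustration:

    \[
      t_{\text{req}} \;=\; \mathrm{TSU}\times\mathrm{NSU},
      \qquad
      A_{\text{req}} \;=\; \mathrm{ASU}\times\mathrm{NSU},
    \]

where TSU and ASU are intrinsic descriptors of one "sedimentation unit" (set by the settling kinetics) and NSU counts how many such units the separation objective requires (set by the operating conditions).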
APA, Harvard, Vancouver, ISO, and other styles.
43

Posa, D. "A simple description of spatial-temporal processes." Computational Statistics & Data Analysis 15, no. 4 (May 1993): 425–37. http://dx.doi.org/10.1016/0167-9473(93)90174-r.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
44

Shen, Zhongwei, Xiao-Jun Wu, and Josef Kittler. "Advanced skeleton-based action recognition via spatial–temporal rotation descriptors." Pattern Analysis and Applications 24, no. 3 (February 14, 2021): 1335–46. http://dx.doi.org/10.1007/s10044-020-00952-y.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
45

Yang, Meilin, Neeraj Gadgil, Mary L. Comer, and Edward J. Delp. "Adaptive error concealment for temporal–spatial multiple description video coding." Signal Processing: Image Communication 47 (September 2016): 313–31. http://dx.doi.org/10.1016/j.image.2016.05.014.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
46

Xu, Jie, Haoliang Wei, Linke Li, Qiuru Fu, and Jinhong Guo. "Video Description Model Based on Temporal-Spatial and Channel Multi-Attention Mechanisms." Applied Sciences 10, no. 12 (June 23, 2020): 4312. http://dx.doi.org/10.3390/app10124312.

Full text of the source
Abstract:
Video description plays an important role in the field of intelligent imaging technology. Attention perception mechanisms are extensively applied in deep-learning-based video description models. Most existing models use a temporal-spatial attention mechanism to enhance model accuracy. Temporal attention mechanisms can obtain the global features of a video, whereas spatial attention mechanisms obtain local features. Nevertheless, because each channel of the convolutional neural network (CNN) feature maps carries certain spatial semantic information, it is insufficient to merely divide the CNN features into regions and then apply a spatial attention mechanism. In this paper, we propose a temporal-spatial and channel attention mechanism that enables the model to take advantage of various video features and ensures the consistency between visual features and sentence descriptions, thereby enhancing the effect of the model. Meanwhile, in order to prove the effectiveness of the attention mechanism, this paper proposes a video visualization model based on the video description. Experimental results show that our model achieves good performance on the Microsoft Video Description (MSVD) dataset and a certain improvement on the Microsoft Research-Video to Text (MSR-VTT) dataset.
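For readers unfamiliar with channel attention, the following PyTorch sketch shows the kind of per-channel gating the abstract argues for (a generic squeeze-and-excitation-style module; dimensions and reduction factor are illustrative, and this is not the authors' architecture):

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        # Generic channel gate: each feature-map channel is re-weighted
        # by a learned score, complementing temporal-spatial attention.
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())

        def forward(self, x):               # x: (batch, channels, H, W)
            s = x.mean(dim=(2, 3))          # squeeze: global average pool
            w = self.fc(s)                  # excitation: channel weights
            return x * w[:, :, None, None]  # re-weight each channel map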
APA, Harvard, Vancouver, ISO, and other styles.
47

Wang, Yu, Yong-Ping Huang, and Xuan-Jing Shen. "ST-VLAD: Video Face Recognition Based on Aggregated Local Spatial-Temporal Descriptors." IEEE Access 9 (2021): 31170–78. http://dx.doi.org/10.1109/access.2021.3060180.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
48

Pászto, Vít, Rostislav Nétek, Alena Vondráková, and Vít Voženílek. "Municipalities in the Czech Republic—Compilation of “a Universal” Dataset." Data 5, no. 4 (November 24, 2020): 107. http://dx.doi.org/10.3390/data5040107.

Full text of the source
Abstract:
There have been many changes in the spatial composition and formal delimitation of the administrative boundaries of Czech municipalities over the past 30 years. Many municipalities have changed their official status: they separated into more independent units, were merged with existing ones, or formally redrew their boundaries due to advances in mapping technology. Such changes have made it almost impossible to analyze and visualize the temporal development of selected socioeconomic indicators in a way that delivers spatially coherent and time-comparable results. In this data description, we present the evolution of a unique (geo)dataset comprising the administrative borders of the Czech municipalities. Its uniqueness lies in temporally and topologically consistent spatial data, resulting in a common division of the administrative units at the LAU2 level valid from 1995 to 2019. Besides the topologically correct spatial representations of municipalities in Czechia, we also provide correspondence tables for each year in this period, which allow tabular statistics to be joined to the spatial data. The dataset is available as a base layer for further temporal and spatial analyses and for the visualization of various socioeconomic statistical data.
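A typical use of the correspondence tables is sketched below with pandas/GeoPandas. All file and column names are hypothetical; the point is only to show how an annual statistic is mapped onto the harmonised LAU2 layer:

    import pandas as pd
    import geopandas as gpd

    munis = gpd.read_file("municipalities_1995_2019.gpkg")   # harmonised layer
    corr = pd.read_csv("correspondence_2005.csv")            # code_2005 -> code_ref
    stats = pd.read_csv("population_2005.csv")               # annual statistic

    # Map the year-specific municipality codes onto the reference units,
    # aggregate where municipalities were merged, then join to geometry.
    joined = (stats.merge(corr, on="code_2005")
                   .groupby("code_ref", as_index=False)
                   .sum(numeric_only=True))
    gdf = munis.merge(joined, left_on="code", right_on="code_ref")
    gdf.plot(column="population", legend=True)               # quick choropleth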
APA, Harvard, Vancouver, ISO, and other styles.
49

Tsai, Wen-Jiin, and Jian-Yu Chen. "Joint Temporal and Spatial Error Concealment for Multiple Description Video Coding." IEEE Transactions on Circuits and Systems for Video Technology 20, no. 12 (December 2010): 1822–33. http://dx.doi.org/10.1109/tcsvt.2010.2087816.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles.
50

Bitner-Gregersen, Elzbieta M., Odin Gramstad, Anne Karin Magnusson, and Mika Malila. "Challenges in Description of Nonlinear Waves Due to Sampling Variability." Journal of Marine Science and Engineering 8, no. 4 (April 13, 2020): 279. http://dx.doi.org/10.3390/jmse8040279.

Full text of the source
Abstract:
Wave description is affected by several uncertainties, sampling variability due to the limited number of observations being one of them. Ideally, temporal/spatial wave registrations should be as long as possible to eliminate this uncertainty. This is difficult to achieve in nature, where the stationarity of sea states is an issue, but it can in principle be obtained in laboratory tests and numerical simulations, where initial wave conditions can be kept constant and intrinsic variability can be accounted for by changing the random seed for each run. Using linear, second-order, and third-order unidirectional numerical simulations, we compare temporal and spatial statistics of selected wave parameters and show how sampling variability affects their estimators. The JONSWAP spectrum with gamma peakedness parameters γ = 1, 3.3, and 6 is used in the analysis. The third-order wave data are simulated by a numerical solver based on the higher-order spectral method, which includes the leading-order nonlinear dynamical effects. Field data support the analysis. We demonstrate that the nonlinear wave field including dynamical effects is more sensitive to sampling variability than the second-order and linear ones. Furthermore, we show that the mean values of temporal and spatial wave parameters can be equal if the number of simulations is sufficiently large. Consequences for design work are discussed.
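Sampling variability in the linear case can be reproduced with a few lines of Python: draw random phases from a parametric JONSWAP spectrum and regenerate the surface with different seeds. This is a sketch under standard assumptions (point time series, linear superposition only; the second- and third-order cases require dedicated solvers):

    import numpy as np

    def jonswap(f, hs, tp, gamma=3.3):
        # Parametric JONSWAP spectrum on a positive frequency grid f [Hz],
        # renormalised so that the zeroth moment equals Hs^2 / 16.
        fp = 1.0 / tp
        sigma = np.where(f <= fp, 0.07, 0.09)
        peak = gamma ** np.exp(-((f - fp) ** 2) / (2 * sigma**2 * fp**2))
        s = f ** -5 * np.exp(-1.25 * (fp / f) ** 4) * peak
        return s * (hs**2 / 16) / np.trapz(s, f)

    def linear_surface(t, f, s, seed=None):
        # One random linear realisation at a fixed point; rerun with new
        # seeds to reproduce the sampling variability between records.
        rng = np.random.default_rng(seed)
        amp = np.sqrt(2 * s * np.gradient(f))
        phase = rng.uniform(0, 2 * np.pi, f.size)
        return (amp * np.cos(2 * np.pi * np.outer(t, f) + phase)).sum(axis=1)

Estimating, say, the maximum crest height from many such realisations of the same sea state illustrates how strongly a short record can scatter around the true value.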
APA, Harvard, Vancouver, ISO, and other styles.