A selection of scholarly literature on the topic "SPATIAL TEMPORAL DESCRIPTOR"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "SPATIAL TEMPORAL DESCRIPTOR".
Next to every entry in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the selected work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, and others.
You can also download the full text of the publication as a .pdf file and read its abstract online, when these details are available in the source's metadata.
Journal articles on the topic "SPATIAL TEMPORAL DESCRIPTOR"
Lin, Bo, and Bin Fang. "A new spatial-temporal histograms of gradients descriptor and HOD-VLAD encoding for human action recognition." International Journal of Wavelets, Multiresolution and Information Processing 17, no. 02 (March 2019): 1940009. http://dx.doi.org/10.1142/s0219691319400095.
Arun Kumar H. D., and Prabhakar C. J. "Moving Vehicles Detection in Traffic Video Using Modified SXCS-LBP Texture Descriptor." International Journal of Computer Vision and Image Processing 5, no. 2 (July 2015): 14–34. http://dx.doi.org/10.4018/ijcvip.2015070102.
Pan, Xianzhang, Wenping Guo, Xiaoying Guo, Wenshu Li, Junjie Xu, and Jinzhao Wu. "Deep Temporal–Spatial Aggregation for Video-Based Facial Expression Recognition." Symmetry 11, no. 1 (January 5, 2019): 52. http://dx.doi.org/10.3390/sym11010052.
Uddin, Md Azher, Joolekha Bibi Joolee, Young-Koo Lee, and Kyung-Ah Sohn. "A Novel Multi-Modal Network-Based Dynamic Scene Understanding." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 1 (January 31, 2022): 1–19. http://dx.doi.org/10.1145/3462218.
Hu, Xing, Shiqiang Hu, Xiaoyu Zhang, Huanlong Zhang, and Lingkun Luo. "Anomaly Detection Based on Local Nearest Neighbor Distance Descriptor in Crowded Scenes." Scientific World Journal 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/632575.
Zheng, Aihua, Foqin Wang, Amir Hussain, Jin Tang, and Bo Jiang. "Spatial-temporal representatives selection and weighted patch descriptor for person re-identification." Neurocomputing 290 (May 2018): 121–29. http://dx.doi.org/10.1016/j.neucom.2018.02.039.
Islam, Md Anwarul, Md Azher Uddin, and Young-Koo Lee. "A Distributed Automatic Video Annotation Platform." Applied Sciences 10, no. 15 (July 31, 2020): 5319. http://dx.doi.org/10.3390/app10155319.
Seuront, Laurent, and Yvan Lagadeuc. "Variability, Inhomogeneity and Heterogeneity: Towards a Terminological Consensus in Ecology." Journal of Biological Systems 09, no. 02 (June 2001): 81–87. http://dx.doi.org/10.1142/s0218339001000281.
Inturi, Anitha Rani, Vazhora Malayil Manikandan, Mahamkali Naveen Kumar, Shuihua Wang, and Yudong Zhang. "Synergistic Integration of Skeletal Kinematic Features for Vision-Based Fall Detection." Sensors 23, no. 14 (July 10, 2023): 6283. http://dx.doi.org/10.3390/s23146283.
Yan, Jing Jie, and Ming Han Xin. "Facial Expression Recognition Based on Fused Spatio-Temporal Features." Applied Mechanics and Materials 347-350 (August 2013): 3780–85. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.3780.
Повний текст джерелаДисертації з теми "SPATIAL TEMPORAL DESCRIPTOR"
Megrhi, Sameh. "Spatio-temporal descriptors for human action recognition." Thesis, Paris 13, 2014. http://www.theses.fr/2014PA131046/document.
Повний текст джерелаDue to increasing demand for video analysis systems in recent years, human action de-tection/recognition is being targeted by the research community in order to make video description more accurate and faster, especially for big datasets. The ultimate purpose of human action recognition is to discern automatically what is happening in any given video. This thesis aims to achieve this purpose by contributing to both action detection and recognition tasks. We thus have developed new description methods for human action recognition.For the action detection component we introduce two novel approaches for human action detection. The first proposition is a simple yet effective method that aims at detecting human movements. First, video sequences are segmented into Frame Packets (FPs) and Group of Interest Points (GIP). In this method we track the movements of Interest Points in simple controlled video datasets and then in videos of gradually increasing complexity. The controlled datasets generally contain videos with a static background and simple ac-tions performed by one actor. The more complex realistic datasets are collected from social networks.The second approach for action detection attempts to address the problem of human ac-tion recognition in realistic videos captured by moving cameras. This approach works by segmenting human motion, thus investigating the optimal sufficient frame number to per-form action recognition. Using this approach, we detect object edges using the canny edge detector. Next, we apply all the steps of the motion segmentation process to each frame. Densely distributed interest points are detected and extracted based on dense SURF points with a temporal step of N frames. Then, optical flows of the detected key points between two frames are computed by the iterative Lucas and Kanade optical flow technique, using pyramids. 
Since we are dealing with scenes captured by moving cameras, the observed motion of objects necessarily mixes with the background and/or camera motion. Hence, we propose to compensate for the camera motion. To do so, we first assume that camera motion exists when most points move in the same direction. We then cluster the optical flow vectors using a KNN clustering algorithm in order to determine whether camera motion is present. If it is, we compensate for it by applying an affine transformation to each frame in which camera motion is detected, using the camera flow magnitude and deviation as input parameters. Finally, after camera motion compensation, moving objects are segmented using temporal differencing and a bounding box is drawn around each detected moving object. The action recognition framework is then applied to the moving persons in the bounding boxes. Our goal is to reduce the amount of data involved in motion analysis while preserving the most important structural features. We believe that performing action detection in both the spatial and temporal domains yields better action detection and recognition while considerably reducing the processing time.
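The compensation-and-segmentation pipeline summarized in this abstract can be sketched in outline. The following NumPy fragment is a minimal illustration under our own simplifications: the function names, the histogram-based dominant-direction test, and the thresholds are ours, not the thesis's. A full implementation would obtain the flow vectors from pyramidal Lucas-Kanade tracking (e.g. OpenCV's calcOpticalFlowPyrLK) rather than assume them given.

```python
import numpy as np

def dominant_motion_exists(flow, min_fraction=0.5, bins=8):
    """Assume camera motion when most flow vectors share one direction.

    flow: (N, 2) array of per-point (dx, dy) optical flow vectors.
    """
    angles = np.arctan2(flow[:, 1], flow[:, 0])
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return bool(hist.max() >= min_fraction * len(angles))

def estimate_affine(src, dst):
    """Least-squares 2x3 affine A such that dst ~= [src | 1] @ A.T,
    usable to warp a frame and cancel the estimated camera motion."""
    X = np.hstack([src, np.ones((src.shape[0], 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T

def temporal_difference_box(prev_frame, curr_frame, threshold=25):
    """Segment moving pixels by temporal differencing and return one
    bounding box (x0, y0, x1, y1) around them, or None if nothing moved."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    ys, xs = np.nonzero(diff > threshold)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

In use, the direction test gates the compensation step: only when a dominant direction is found is the affine transform estimated from the tracked points and applied to the frame, after which temporal differencing runs on the stabilized pair.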
Mercieca, Julian. "Estimation of temporal and spatio-temporal nonlinear descriptor systems." Thesis, University of Sheffield, 2018. http://etheses.whiterose.ac.uk/19416/.
Lättman, Håkan. "Description of spatial and temporal distributions of epiphytic lichens." Licentiate thesis, Linköping University, Department of Physics, Chemistry and Biology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11561.
Повний текст джерелаLichens are, in most cases, sensitive to anthropogenic factors such as air pollution, global warming, forestry and fragmentation. Two studies are included in this thesis. The first is an evaluation of the importance of old oak for the rare epiphytic lichen Cliostomum corrugatum (Ach.) Fr. This study analysed whether C. corrugatum was limited by dispersal or restricted to tree stands with an unbroken continuity or the substrate old oaks. The results provide evidence that the investigated five populations in Östergötland, Sweden, of C. corrugatum exhibit substantial gene flow, an effective dispersal and a small genetic variation between the sites. Most of the genetic variation was within the populations. Thus, C. corrugatum is more dependent of the substrate old oaks, rather than limited by dispersal. The second study investigated possible range shift of some common macrolichens, due to global warming, from 64 sites in southern Sweden comparing the two years 1986 and 2003. The centroid of three lichen species had moved a significant distance, all in a north east direction: Hypogymnia physodes (L.) Nyl. and Vulpicida pinastri (Scop.) J.-E. Mattsson and M. J. Lai on the tree species Juniperus communis L. (50 and 151 km, respectively) and H. physodes on Pinus sylvestris L. (41 km). Considering also the non-significant cases, there is strong evidence for a prevailing NE direction of centroid movement.
Lättman, Håkan. "Description of spatial and temporal distributions of epiphytic lichens /." Linköping : [Department of Physics, Chemistry and Biology], Linköping University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11561.
Brighi, Marco. "Human Activity Recognition: A Comparative Evaluation of Spatio-Temporal Descriptors." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19436/.
Whiten, Christopher J. "Probabilistic Shape Parsing and Action Recognition Through Binary Spatio-Temporal Feature Description." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/24006.
Mowbray, Stuart David. "Modelling and extracting periodically deforming objects by continuous, spatio-temporal shape description." Thesis, University of Southampton, 2008. https://eprints.soton.ac.uk/266132/.
Bouajjani, Mohammed. "Contribution a l'etude du raisonnement spatio-temporel. Localisation d'un agent et description d'itineraires." Toulouse 3, 1999. http://www.theses.fr/1999TOU30215.
Gruhier, Elise. "Spatiotemporal description and modeling of mechanical product and its assembly sequence based on mereotopology : theory, model and approach." Thesis, Belfort-Montbéliard, 2015. http://www.theses.fr/2015BELF0276/document.
Повний текст джерелаThe major goal of this research is to describe product evolution in the three dimensions (i.e. spatial, temporal andspatiotemporal). In the current industrial context, product models are only considered from a purely spatial point ofview during the design stage and from a purely temporal point of view during the assembly stage. The lack of linkbetween product and process leads to misunderstanding in engineering definition and causes wrong designinterpretation. However, the product undergoes changes throughout the design and assembly phases. The dynamicaspect of design activities requires linking both dimensions in order to be able to represent product evolution andhave consistent information. As such, spatiotemporal dimension (i.e. linking space and time) needs to be added andrelationships between product modelling and assembly sequences need to be particularly studied.This PhD thesis in mechanical design draws inspiration from several domains such as mathematics, geographicinformation systems and philosophy. Here the product is considered from a perdurantist point of view. Perdurantismregards the object as being composed of temporal slices and always keeping the same identity whatever changesundergone. Based on this statement, this PhD thesis introduces a novel product-process description so as to ensureproduct architect's and designer's understanding of design intents at the early design stages. In order to achieve thisobjective, a mereotopological theory, enabling the product description as it is perceived in the real world, has beendeveloped and implemented in an ontology model to be formalized.The JANUS theory qualitatively describes product evolution over time in the context of AOD, integrating assemblysequence planning in the early product design stages. The theory enables the formal relationships description ofproduct-process design information and knowledge. 
The proposed efforts aim at providing a concrete basis for describing changes of spatial entities (i.e. product parts) and their relationships over time and space. This region-based theory links together the spatial, temporal and spatiotemporal dimensions, therefore leading to a perdurantist philosophy in product design. Then PRONOIA2, a formal ontology based on the previous mereotopological theory, is developed. Assembly information becomes accessible and exploitable by information management systems and computer-aided X tools in order to support product architects' and designers' activities. Indeed, product design information and knowledge, as well as the related assembly sequence, require a semantic and logical foundation in order to be managed consistently and processed proactively. Based on the JANUS theory and the PRONOIA2 ontology, the MERCURY approach enables associating spatial information (managed by PDM) and temporal information (managed by MPM) through spatiotemporal mereotopological relationships. Therefore, new entities are managed through PLM, using an ontology and a hub system, so as to ensure proactive engineering and improve product architects' and designers' understanding of product evolution.
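The perdurantist product model described in this abstract (parts that persist through assembly as a series of temporal slices, related by mereotopological relations) can be illustrated with a toy data structure. This is our own RCC-style simplification for illustration only; the class and relation names are hypothetical and do not reproduce the thesis's actual JANUS/PRONOIA2 formalization:

```python
from dataclasses import dataclass, field
from enum import Enum

class Relation(Enum):
    """A small subset of RCC-style mereotopological relations."""
    DISCONNECTED = "DC"           # regions share no points
    EXTERNALLY_CONNECTED = "EC"   # regions touch without overlapping
    PROPER_PART = "PP"            # one region lies inside another

@dataclass
class TemporalSlice:
    """State of a part at one assembly step (the perdurantist 'slice')."""
    step: int
    relations: dict = field(default_factory=dict)  # other part id -> Relation

@dataclass
class Part:
    """A part keeps one identity while its slices record spatial change."""
    part_id: str
    slices: list = field(default_factory=list)

    def relation_at(self, step, other_id):
        """Relation to another part at a given step (default: disconnected)."""
        for s in self.slices:
            if s.step == step:
                return s.relations.get(other_id, Relation.DISCONNECTED)
        return Relation.DISCONNECTED

# A bolt that is first separate from, then externally connected to, a plate:
bolt = Part("bolt", [
    TemporalSlice(0, {}),                                       # before fastening
    TemporalSlice(1, {"plate": Relation.EXTERNALLY_CONNECTED}), # after fastening
])
```

Querying `bolt.relation_at(step, "plate")` at successive steps then traces exactly the kind of qualitative spatiotemporal evolution the abstract describes, with spatial relations indexed by assembly time.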
Bouchard, Geneviève. "Architecture d'estimateur de canaux pour récepteur à traitement spatio-temporel." Thesis, Université Laval, 2007. http://www.theses.ulaval.ca/2007/24445/24445.pdf.
Повний текст джерелаКниги з теми "SPATIAL TEMPORAL DESCRIPTOR"
Baum, Rex I. Kinematics of the Aspen Grove landslide, Ephraim Canyon, central Utah: Description and analysis of deformational structures and of spatial and temporal patterns of movement of the landslide. Denver, CO: U.S. Geological Survey, 1994.
Viboud, Cécile, Hélène Broutin, and Gerardo Chowell. Spatial-temporal transmission dynamics and control of infectious diseases: Ebola virus disease (EVD) as a case study. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198789833.003.0004.
Papanicolaou, Andrew C. Overview of Basic Concepts. Edited by Andrew C. Papanicolaou. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199764228.013.002.
Eckersley, Andrea. Encountering Surfaces, Encountering Spaces, Encountering Painting. Edinburgh University Press, 2018. http://dx.doi.org/10.3366/edinburgh/9781474429344.003.0006.
Berezin, Mabel. Events as Templates of Possibility: An Analytic Typology of Political Facts. Edited by Jeffrey C. Alexander, Ronald N. Jacobs, and Philip Smith. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780195377767.013.23.
Alston, Richard. The Utopian City in Tacitus’ Agricola. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198768098.003.0011.
Schelbert, Heinrich R. Image-Based Measurements of Myocardial Blood Flow. Oxford University Press, 2015. http://dx.doi.org/10.1093/med/9780199392094.003.0024.
Повний текст джерелаЧастини книг з теми "SPATIAL TEMPORAL DESCRIPTOR"
Li, Xiang-wei, Gang Zheng, and Kai Zhao. "A Novel Temporal-Spatial Color Descriptor Representation for Video Analysis." In Lecture Notes in Electrical Engineering, 505–10. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-28807-4_70.
Laptev, Ivan, and Tony Lindeberg. "Local Descriptors for Spatio-temporal Recognition." In Spatial Coherence for Visual Motion Analysis, 91–103. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11676959_8.
Tănase, Claudiu, and Bernard Merialdo. "Efficient Spatio-Temporal Edge Descriptor." In Lecture Notes in Computer Science, 210–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27355-1_21.
Kufer, Stefan, Daniel Blank, and Andreas Henrich. "Using Hybrid Techniques for Resource Description and Selection in the Context of Distributed Geographic Information Retrieval." In Advances in Spatial and Temporal Databases, 330–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-40235-7_19.
Liao, Yihui, Lu Fan, Huiming Ding, and Zhifeng Xie. "Spatial-Temporal Contextual Feature Fusion Network for Movie Description." In Artificial Intelligence, 490–501. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20497-5_40.
Kwolek, Bogdan, Tomasz Krzeszowski, Agnieszka Michalczuk, and Henryk Josinski. "3D Gait Recognition Using Spatio-Temporal Motion Descriptors." In Intelligent Information and Database Systems, 595–604. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-05458-2_61.
Liu, Yazhou, Shiguang Shan, Xilin Chen, Janne Heikkila, Wen Gao, and Matti Pietikainen. "Spatial-Temporal Granularity-Tunable Gradients Partition (STGGP) Descriptors for Human Detection." In Computer Vision – ECCV 2010, 327–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15549-9_24.
Mattivi, Riccardo, and Ling Shao. "Spatio-temporal Dynamic Texture Descriptors for Human Motion Recognition." In Intelligent Video Event Analysis and Understanding, 69–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-17554-1_4.
Utasi, Ákos, and Andrea Kovács. "Recognizing Human Actions by Using Spatio-temporal Motion Descriptors." In Advanced Concepts for Intelligent Vision Systems, 366–75. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17691-3_34.
Davison, Adrian K., Moi Hoon Yap, Nicholas Costen, Kevin Tan, Cliff Lansley, and Daniel Leightley. "Micro-Facial Movements: An Investigation on Spatio-Temporal Descriptors." In Computer Vision - ECCV 2014 Workshops, 111–23. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16181-5_8.
Повний текст джерелаТези доповідей конференцій з теми "SPATIAL TEMPORAL DESCRIPTOR"
Guo, Yimo, Guoying Zhao, Jie Chen, Matti Pietikainen, and Zhengguang Xu. "Dynamic texture synthesis using a spatial temporal descriptor." In 2009 16th IEEE International Conference on Image Processing (ICIP 2009). IEEE, 2009. http://dx.doi.org/10.1109/icip.2009.5414395.
Yao, Hongxian, Xinghao Jiang, Tanfeng Sun, and Shilin Wang. "3D human action recognition based on the Spatial-Temporal Moving Skeleton Descriptor." In 2017 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2017. http://dx.doi.org/10.1109/icme.2017.8019498.
Xie, Jiong, and Cunjin Xue. "A top-down hierarchical spatio-temporal process description method and its data organization." In International Symposium on Spatial Analysis, Spatial-temporal Data Modeling, and Data Mining, edited by Yaolin Liu and Xinming Tang. SPIE, 2009. http://dx.doi.org/10.1117/12.838353.
Tu, Yunbin, Xishan Zhang, Bingtao Liu, and Chenggang Yan. "Video Description with Spatial-Temporal Attention." In MM '17: ACM Multimedia Conference. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3123266.3123354.
Wild, Walter J. "Optimal Estimators for Astronomical Adaptive Optics." In Adaptive Optics. Washington, D.C.: Optica Publishing Group, 1996. http://dx.doi.org/10.1364/adop.1996.athb.1.
DeMenthon, Daniel, and David Doermann. "Video retrieval using spatio-temporal descriptors." In Proceedings of the Eleventh ACM International Conference on Multimedia. New York, NY, USA: ACM Press, 2003. http://dx.doi.org/10.1145/957013.957124.
Klaeser, A., M. Marszalek, and C. Schmid. "A Spatio-Temporal Descriptor Based on 3D-Gradients." In British Machine Vision Conference 2008. British Machine Vision Association, 2008. http://dx.doi.org/10.5244/c.22.99.
Lung, Fam Boon, Mohamed Hisham Jaward, and Jussi Parkkinen. "Spatio-temporal descriptor for abnormal human activity detection." In 2015 14th IAPR International Conference on Machine Vision Applications (MVA). IEEE, 2015. http://dx.doi.org/10.1109/mva.2015.7153233.
Hadjkacem, Bassem, Walid Ayedi, Mohamed Abid, and Hichem Snoussi. "A spatio-temporal covariance descriptor for person re-identification." In 2015 15th International Conference on Intelligent Systems Design and Applications (ISDA). IEEE, 2015. http://dx.doi.org/10.1109/isda.2015.7489188.
Gadgil, Neeraj, He Li, and Edward J. Delp. "Spatial subsampling-based multiple description video coding with adaptive temporal-spatial error concealment." In 2015 Picture Coding Symposium (PCS). IEEE, 2015. http://dx.doi.org/10.1109/pcs.2015.7170053.
Повний текст джерелаЗвіти організацій з теми "SPATIAL TEMPORAL DESCRIPTOR"
Oughstun, Kurt E., and Natalie A. Cartwright. A Research Program on the Asymptotic Description of Electromagnetic Pulse Propagation in Spatially Inhomogeneous, Temporally Dispersive, Attenuative Media. Fort Belvoir, VA: Defense Technical Information Center, September 2007. http://dx.doi.org/10.21236/ada474484.
Robert, Gillian. PR-420-153722-R01 Pipeline Right-of-Way Ground Movement Monitoring from InSAR. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), February 2018. http://dx.doi.org/10.55274/r0011463.