Journal articles on the topic "Spatio-temporal sequences"

Follow this link to see other types of publications on the topic: Spatio-temporal sequences.

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the top 50 journal articles for your research on the topic "Spatio-temporal sequences".

Next to every source in the reference list there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Caspi, Y., and M. Irani. "Spatio-temporal alignment of sequences". IEEE Transactions on Pattern Analysis and Machine Intelligence 24, no. 11 (November 2002): 1409–24. http://dx.doi.org/10.1109/tpami.2002.1046148.

2

Horn, D., G. Dror, and B. Quenet. "Dynamic Proximity of Spatio-Temporal Sequences". IEEE Transactions on Neural Networks 15, no. 5 (September 2004): 1002–8. http://dx.doi.org/10.1109/tnn.2004.832809.

3

Diego, Ferran, Joan Serrat, and Antonio M. Lopez. "Joint Spatio-Temporal Alignment of Sequences". IEEE Transactions on Multimedia 15, no. 6 (October 2013): 1377–87. http://dx.doi.org/10.1109/tmm.2013.2247390.

4

Azzabou, Noura, and Nikos Paragios. "Spatio-temporal speckle reduction in ultrasound sequences". Inverse Problems & Imaging 4, no. 2 (2010): 211–22. http://dx.doi.org/10.3934/ipi.2010.4.211.

5

Hach, Thomas, and Tamara Seybold. "Spatio-Temporal Denoising for Depth Map Sequences". International Journal of Multimedia Data Engineering and Management 7, no. 2 (April 2016): 21–35. http://dx.doi.org/10.4018/ijmdem.2016040102.

Abstract
This paper proposes a novel strategy for depth video denoising in RGBD camera systems. Depth map sequences obtained by state-of-the-art Time-of-Flight sensors suffer from high temporal noise. Hence, all high-level RGB video renderings based on the accompanying depth maps' 3D geometry, such as augmented reality applications, will show severe temporal flickering artifacts. The authors approached this limitation by decoupling depth map upscaling from the temporal denoising step. Thereby, denoising is processed on raw pixels including uncorrelated pixel-wise noise distributions. The authors' denoising methodology utilizes joint sparse 3D transform-domain collaborative filtering. Therein, they extract RGB texture information to yield a more stable and accurate highly sparse 3D depth block representation for the consecutive shrinkage operation. They show the effectiveness of their method on real RGBD camera data and on a publicly available synthetic data set. The evaluation reveals that the authors' method is superior to state-of-the-art methods, delivering flicker-free depth video streams for future applications.
6

Ahn, J. H., and J. K. Kim. "Spatio-temporal visibility function for image sequences". Electronics Letters 27, no. 7 (1991): 585. http://dx.doi.org/10.1049/el:19910369.

7

Oliveira, Francisco P. M., Andreia Sousa, Rubim Santos, and João Manuel R. S. Tavares. "Spatio-temporal alignment of pedobarographic image sequences". Medical & Biological Engineering & Computing 49, no. 7 (April 8, 2011): 843–50. http://dx.doi.org/10.1007/s11517-011-0771-x.

8

Pavlovskaya, Marina, and Shaul Hochstein. "Explicit Ensemble Perception of Temporal and Spatio-temporal Element Sequences". Journal of Vision 21, no. 9 (September 27, 2021): 2570. http://dx.doi.org/10.1167/jov.21.9.2570.

9

Koseoglu, Baran, Erdem Kaya, Selim Balcisoy, and Burcin Bozkaya. "ST Sequence Miner: visualization and mining of spatio-temporal event sequences". Visual Computer 36, no. 10-12 (July 16, 2020): 2369–81. http://dx.doi.org/10.1007/s00371-020-01894-6.

10

Addesso, Paolo, Maurizio Longo, Rocco Restaino, and Gemine Vivone. "Spatio-temporal resolution enhancement for cloudy thermal sequences". European Journal of Remote Sensing 52, sup1 (October 11, 2018): 2–14. http://dx.doi.org/10.1080/22797254.2018.1526045.

11

Xu, Binbin, Sarthak Pathak, Hiromitsu Fujii, Atsushi Yamashita, and Hajime Asama. "Spatio-Temporal Video Completion in Spherical Image Sequences". IEEE Robotics and Automation Letters 2, no. 4 (October 2017): 2032–39. http://dx.doi.org/10.1109/lra.2017.2718106.

12

Idris, F. M., and S. Panchanathan. "Spatio-temporal indexing of vector quantized video sequences". IEEE Transactions on Circuits and Systems for Video Technology 7, no. 5 (1997): 728–40. http://dx.doi.org/10.1109/76.633489.

13

Li, Weiwei, Rong Du, and Shudong Chen. "Skeleton-Based Spatio-Temporal U-Network for 3D Human Pose Estimation in Video". Sensors 22, no. 7 (March 28, 2022): 2573. http://dx.doi.org/10.3390/s22072573.

Abstract
Despite the great progress in 3D pose estimation from videos, there is still a lack of effective means to extract spatio-temporal features of different granularity from complex dynamic skeleton sequences. To tackle this problem, we propose a novel, skeleton-based spatio-temporal U-Net (STUNet) scheme to deal with spatio-temporal features at multiple scales for 3D human pose estimation in video. The proposed STUNet architecture consists of a cascade structure of semantic graph convolution layers and structural temporal dilated convolution layers, progressively extracting and fusing the spatio-temporal semantic features from fine-grained to coarse-grained. This U-shaped network achieves scale compression and feature squeezing by downscaling and upscaling, while abstracting multi-resolution spatio-temporal dependencies through skip connections. Experiments demonstrate that our model effectively captures comprehensive spatio-temporal features at multiple scales and achieves substantial improvements over mainstream methods on real-world datasets.
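The abstract above describes a U-shaped temporal network that downscales, upscales, and fuses features through skip connections. As a rough, hedged illustration of that general idea only (not the authors' STUNet, which additionally uses semantic graph convolutions over the skeleton), the following PyTorch sketch builds a toy temporal U-shaped network from dilated 1D convolutions; the channel sizes, layer counts, and joint dimensions are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """Dilated temporal convolution over per-frame joint features."""
    def __init__(self, c_in, c_out, dilation):
        super().__init__()
        self.conv = nn.Conv1d(c_in, c_out, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):                       # x: (batch, channels, frames)
        return self.act(self.conv(x))

class TemporalUNet(nn.Module):
    """Toy U-shaped temporal network: encode, downscale, bottleneck,
    upscale, and fuse through a skip connection (a stand-in sketch,
    not the published STUNet architecture)."""
    def __init__(self, c_in=34, c_mid=64, c_out=51):    # e.g. 17 joints: 2D in, 3D out
        super().__init__()
        self.enc = TemporalBlock(c_in, c_mid, dilation=1)
        self.down = nn.MaxPool1d(2)
        self.bottleneck = TemporalBlock(c_mid, c_mid, dilation=2)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec = TemporalBlock(2 * c_mid, c_mid, dilation=1)
        self.head = nn.Conv1d(c_mid, c_out, kernel_size=1)

    def forward(self, x):                       # x: (batch, c_in, frames), frames even
        e = self.enc(x)
        b = self.up(self.bottleneck(self.down(e)))
        d = self.dec(torch.cat([e, b], dim=1))  # skip connection
        return self.head(d)

poses_2d = torch.randn(8, 34, 128)              # a batch of 128-frame 2D pose sequences
print(TemporalUNet()(poses_2d).shape)           # torch.Size([8, 51, 128])
```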
14

Cho, J. H., and S. D. Kim. "Object detection using spatio-temporal thresholding in image sequences". Electronics Letters 40, no. 18 (2004): 1109. http://dx.doi.org/10.1049/el:20045316.

15

Nolte, Nicholas, Nils Kurzawa, Roland Eils, and Carl Herrmann. "MapMyFlu: visualizing spatio-temporal relationships between related influenza sequences". Nucleic Acids Research 43, W1 (May 4, 2015): W547–W551. http://dx.doi.org/10.1093/nar/gkv417.

16

Tuia, Devis, Rosa Lasaponara, Luciano Telesca, and Mikhail Kanevski. "Emergence of spatio-temporal patterns in forest-fire sequences". Physica A: Statistical Mechanics and its Applications 387, no. 13 (May 2008): 3271–80. http://dx.doi.org/10.1016/j.physa.2008.01.057.

17

Tomic, M., S. Loncaric, and D. Sersic. "Adaptive spatio-temporal denoising of fluoroscopic X-ray sequences". Biomedical Signal Processing and Control 7, no. 2 (March 2012): 173–79. http://dx.doi.org/10.1016/j.bspc.2011.02.003.

18

Baddar, Wissam J., and Yong Man Ro. "Mode Variational LSTM Robust to Unseen Modes of Variation: Application to Facial Expression Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3215–23. http://dx.doi.org/10.1609/aaai.v33i01.33013215.

Abstract
Spatio-temporal feature encoding is essential for encoding the dynamics in video sequences. Recurrent neural networks, particularly long short-term memory (LSTM) units, have been popular as an efficient tool for encoding spatio-temporal features in sequences. In this work, we investigate the effect of mode variations on the encoded spatio-temporal features using LSTMs. We show that the LSTM retains information related to the mode variation in the sequence, which is irrelevant to the task at hand (e.g., classifying facial expressions). In fact, the LSTM forget mechanism is not robust enough to mode variations and preserves information that could negatively affect the encoded spatio-temporal features. We propose the mode variational LSTM to encode spatio-temporal features robust to unseen modes of variation. The mode variational LSTM modifies the original LSTM structure by adding an additional cell state that focuses on encoding the mode variation in the input sequence. To efficiently regulate what features should be stored in the additional cell state, additional gating functionality is also introduced. The effectiveness of the proposed mode variational LSTM is verified using the facial expression recognition task. Comparative experiments on publicly available datasets verified that the proposed mode variational LSTM outperforms existing methods. Moreover, a new dynamic facial expression dataset with different modes of variation, including pose and illumination variations, was collected to comprehensively evaluate the proposed mode variational LSTM. Experimental results verified that the proposed mode variational LSTM encodes spatio-temporal features robust to unseen modes of variation.
19

Cordeiro-Costas, Moisés, Daniel Villanueva, Andrés E. Feijóo-Lorenzo, and Javier Martínez-Torres. "Simulation of Wind Speeds with Spatio-Temporal Correlation". Applied Sciences 11, no. 8 (April 8, 2021): 3355. http://dx.doi.org/10.3390/app11083355.

Abstract
Nowadays, there is a growing trend to incorporate renewables in electrical power systems and, in particular, wind energy, which has become an important primary source in the electricity mix of many countries, where wind farms have been proliferating in recent years. This circumstance makes it particularly interesting to understand wind behavior, because generated power depends on it. In this paper, a method is proposed to synthetically generate sequences of wind speed values satisfying two important constraints. The first consists of fitting given statistical distributions, under the generally accepted assumption that the measured wind speed at a location follows a certain distribution. The second consists of imposing spatial and temporal correlations among the simulated wind speed sequences. The method was successfully checked under different scenarios, depending on variables such as the number of locations, the duration of the data collection period, or the size of the simulated series, and the results were of high accuracy.
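The two constraints described above (prescribed marginal distributions plus spatial and temporal correlation) can be met with a standard Gaussian-copula construction, sketched below: a spatially correlated AR(1) Gaussian process is pushed through Weibull inverse CDFs. This is offered only as a minimal sketch of that generic technique, not necessarily the paper's method; the correlation matrix, AR coefficient, and Weibull parameters are made-up values.

```python
import numpy as np
from scipy import stats

def simulate_wind(n_steps, spatial_corr, rho_t, weibull_k, weibull_scale, seed=0):
    """Gaussian-copula sketch of spatially and temporally correlated wind
    speeds with Weibull marginals (illustrative, not the paper's algorithm)."""
    rng = np.random.default_rng(seed)
    n_loc = spatial_corr.shape[0]
    L = np.linalg.cholesky(spatial_corr)              # imposes spatial correlation
    z = np.empty((n_steps, n_loc))
    z[0] = L @ rng.standard_normal(n_loc)
    for t in range(1, n_steps):                       # AR(1) gives temporal correlation
        innov = L @ rng.standard_normal(n_loc)
        z[t] = rho_t * z[t - 1] + np.sqrt(1.0 - rho_t**2) * innov
    u = stats.norm.cdf(z)                             # uniform marginals
    return stats.weibull_min.ppf(u, c=weibull_k, scale=weibull_scale)

# Example: three sites, hourly values for one week
C = np.array([[1.0, 0.7, 0.5],
              [0.7, 1.0, 0.6],
              [0.5, 0.6, 1.0]])
speeds = simulate_wind(24 * 7, C, rho_t=0.9,
                       weibull_k=np.array([2.0, 2.2, 1.9]),
                       weibull_scale=np.array([7.5, 8.0, 6.5]))
print(speeds.shape)                                   # (168, 3)
```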
20

Li, Meng He, Chuan Lin, Jing Bei Tian, and Sheng Hui Pan. "An Algorithms for Super-Resolution Reconstruction of Video Based on Spatio-Temporal Adaptive". Advanced Materials Research 532-533 (June 2012): 1680–84. http://dx.doi.org/10.4028/www.scientific.net/amr.532-533.1680.

Abstract
To address the weaknesses of conventional POCS algorithms, a novel spatio-temporal adaptive super-resolution reconstruction algorithm for video is proposed in this paper. The spatio-temporal adaptive mechanism, built on the POCS super-resolution reconstruction algorithm, can effectively protect the reconstructed image from inaccurate motion information and avoid the noise amplification that arises when conventional POCS algorithms are used to reconstruct image sequences with dramatic motion. Experimental results show that the spatio-temporal adaptive algorithm not only effectively alleviates noise amplification but also outperforms the traditional POCS algorithms in signal-to-noise ratio.
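For readers unfamiliar with POCS, the sketch below shows only the bare data-consistency projection at its core, under strong simplifying assumptions: a box-blur-plus-decimation observation model, no motion compensation (real video POCS warps each frame by estimated motion, which is exactly where the paper's adaptivity matters), and a fixed residual tolerance. All names and parameter values are illustrative.

```python
import numpy as np

def pocs_sr(lr_frames, scale=2, delta=2.0, n_iter=20):
    """Bare-bones POCS super-resolution sketch (no motion compensation):
    each iteration projects the high-resolution estimate onto the
    data-consistency set of every low-resolution observation."""
    H, W = lr_frames[0].shape
    hr = np.kron(np.mean(lr_frames, axis=0), np.ones((scale, scale)))  # init by replication
    for _ in range(n_iter):
        for y in lr_frames:
            # simulate the observation: average each scale x scale block
            sim = hr.reshape(H, scale, W, scale).mean(axis=(1, 3))
            r = y - sim
            # shrink the residual by the noise tolerance delta
            r = np.sign(r) * np.maximum(np.abs(r) - delta, 0.0)
            # minimal-norm correction spreads the residual over each block
            hr += np.kron(r, np.ones((scale, scale)))
    return hr

rng = np.random.default_rng(0)
lr = [rng.uniform(0, 255, (32, 32)) for _ in range(4)]
print(pocs_sr(lr).shape)                              # (64, 64)
```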
21

Ilg, Winfried, Gökhan H. Bakir, Johannes Mezger, and Martin A. Giese. "On the Representation, Learning and Transfer of Spatio-Temporal Movement Characteristics". International Journal of Humanoid Robotics 1, no. 4 (December 2004): 613–36. http://dx.doi.org/10.1142/s0219843604000320.

Abstract
In this paper we present a learning-based approach for the modeling of complex movement sequences. Based on the method of Spatio-Temporal Morphable Models (STMMs) we derive a hierarchical algorithm that, in a first step, identifies automatically movement elements in movement sequences based on a coarse spatio-temporal description, and in a second step models these movement primitives by approximation through linear combinations of learned example movement trajectories. We describe the different steps of the algorithm and show how it can be applied for modeling and synthesis of complex sequences of human movements that contain movement elements with a variable style. The proposed method is demonstrated on different applications of movement representation relevant for imitation learning of movement styles in humanoid robotics.
22

Liu, Yang. "Multi-Scale Spatio-Temporal Feature Extraction and Depth Estimation from Sequences by Ordinal Classification". Sensors 20, no. 7 (April 1, 2020): 1979. http://dx.doi.org/10.3390/s20071979.

Abstract
Depth estimation is a key problem in 3D computer vision and has a wide variety of applications. In this paper we explore whether a deep learning network can predict depth maps accurately by learning multi-scale spatio-temporal features from sequences and recasting depth estimation from a regression task to an ordinal classification task. We design an encoder-decoder network with several multi-scale strategies to improve its performance and extract spatio-temporal features with ConvLSTM. The results of our experiments show that the proposed method improves error metrics by almost 10% and accuracy metrics by up to 2%. The results also show that extracting spatio-temporal features can dramatically improve performance on the depth estimation task. We plan to extend this work in a self-supervised manner to remove the dependence on large-scale labeled data.
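The recasting of depth regression as ordinal classification mentioned above can be made concrete with a small encoding/decoding helper. The sketch below uses log-spaced depth bins and binary "is the depth beyond bin k?" targets, in the spirit of DORN-style discretization; the bin count and depth range are arbitrary assumptions, not the paper's settings.

```python
import numpy as np

def depth_to_ordinal(depth, d_min=0.5, d_max=10.0, n_bins=64):
    """Encode metric depth as ordinal targets over log-spaced bins
    (illustrative assumption, not the paper's exact formulation)."""
    edges = np.exp(np.linspace(np.log(d_min), np.log(d_max), n_bins + 1))
    idx = np.clip(np.digitize(depth, edges) - 1, 0, n_bins - 1)
    # ordinal target: one binary label per bin, "depth lies beyond bin k"
    targets = (np.arange(n_bins) < idx[..., None]).astype(np.float32)
    return targets, edges

def ordinal_to_depth(probs, edges):
    """Decode: count bins whose 'beyond' probability exceeds 0.5."""
    k = (probs > 0.5).sum(axis=-1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.clip(k, 0, len(centers) - 1)]

targets, edges = depth_to_ordinal(np.array([[0.8, 3.2], [7.0, 9.5]]))
print(targets.shape)                                  # (2, 2, 64)
print(ordinal_to_depth(targets, edges))               # roughly recovers the input depths
```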
23

Wang, Jin, Zhao Hui Li, Dong Mei Li, and Yu Wang. "A Spatio-Temporal Video Segmentation Method Based on Motion Detection". Applied Mechanics and Materials 135-136 (October 2011): 1147–54. http://dx.doi.org/10.4028/www.scientific.net/amm.135-136.1147.

Abstract
In this paper, a new spatio-temporal segmentation algorithm is proposed to extract moving objects from video sequences taken by stationary cameras. First, motion detection is used to obtain a mask representing moving regions, with an estimated noise parameter that effectively improves noise immunity. Because moving video objects often lack texture, eight-neighbor motion detection is introduced to smooth the mask boundary and fill interior holes, and a morphological filter is then applied to refine the moving mask. Second, spatial segmentation is obtained with the Canny operator, where the gradient histogram is used to select the high threshold and increase the adaptivity of the Canny algorithm. Finally, the temporal and spatial masks are merged by a neighborhood matching algorithm to further ensure the reliability and efficiency of the algorithm. Experiments on typical sequences have successfully demonstrated the validity of the proposed algorithm.
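A minimal OpenCV sketch with the same overall shape (a temporal mask from frame differencing with a noise-dependent threshold, morphological cleanup, Canny edges with an adaptive high threshold, and a simple fusion) is given below. The threshold rule, the Otsu-based Canny threshold, and fusing by masking edges with the dilated motion region are simplifications assumed here, not the paper's exact eight-neighbor detection and neighborhood-matching steps.

```python
import cv2
import numpy as np

def moving_object_mask(prev_gray, curr_gray, noise_sigma=5.0):
    """Illustrative spatio-temporal fusion for stationary-camera video:
    inputs are two consecutive uint8 grayscale frames."""
    # temporal mask: threshold the absolute frame difference at a
    # noise-dependent level (stand-in for an estimated noise parameter)
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, temporal = cv2.threshold(diff, int(3 * noise_sigma), 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    temporal = cv2.morphologyEx(temporal, cv2.MORPH_CLOSE, kernel)   # fill holes
    temporal = cv2.morphologyEx(temporal, cv2.MORPH_OPEN, kernel)    # remove specks

    # spatial mask: Canny edges with an adaptive (Otsu-based) high threshold
    high, _ = cv2.threshold(curr_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(curr_gray, int(0.5 * high), int(high))

    # fusion: keep edges that fall inside the (dilated) moving region
    region = cv2.dilate(temporal, kernel, iterations=2)
    return cv2.bitwise_and(edges, region)

# Example with synthetic frames
prev = np.zeros((240, 320), np.uint8)
curr = prev.copy()
cv2.circle(curr, (160, 120), 30, 255, -1)              # a bright disc that "moved" in
print(moving_object_mask(prev, curr).shape)            # (240, 320)
```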
24

Amemori, Kenichi, and Shin Ishii. "Self-Organization and Association for Fine Spatio-Temporal Spike Sequences". Transactions of the Institute of Systems, Control and Information Engineers 13, no. 7 (2000): 308–17. http://dx.doi.org/10.5687/iscie.13.7_308.

25

Chenevière, Frederic, Samia Boukir, and Bertrand Vachon. "Compression and Recognition of Spatio-Temporal Sequences from Contemporary Ballet". International Journal of Pattern Recognition and Artificial Intelligence 20, no. 5 (August 2006): 727–45. http://dx.doi.org/10.1142/s0218001406004880.

Abstract
We aim at recognizing a set of dance gestures from contemporary ballet. Our input data are motion trajectories followed by the joints of a dancing body provided by a motion-capture system. It is obvious that direct use of the original signals is unreliable and expensive. Therefore, we propose a suitable tool for nonuniform sub-sampling of spatio-temporal signals. The key to our approach is the use of polygonal approximation to provide a compact and efficient representation of motion trajectories. Our dance gesture recognition method involves a set of Hidden Markov Models (HMMs), each of them being related to a motion trajectory followed by the joints. The recognition of such movements is then achieved by matching the resulting gesture models with the input data via HMMs. We have validated our recognition system on 12 fundamental movements from contemporary ballet performed by four dancers.
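The polygonal approximation used above for non-uniform sub-sampling of joint trajectories is commonly implemented with the Ramer-Douglas-Peucker algorithm; a generic version is sketched below, with an arbitrary epsilon (the paper may use a different formulation). The compacted trajectories would then feed the per-joint HMMs for recognition.

```python
import numpy as np

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker polygonal approximation of a 2D/3D trajectory:
    keep only vertices whose removal would change the polyline by more
    than epsilon (generic sketch, not the paper's exact parameterization)."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    seg = end - start
    seg_len = np.linalg.norm(seg)
    if seg_len == 0:
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # distance of every point to the start-end segment (works in 2D and 3D)
        t = (points - start) @ seg / seg_len**2
        proj = start + np.outer(np.clip(t, 0.0, 1.0), seg)
        dists = np.linalg.norm(points - proj, axis=1)
    i = int(np.argmax(dists))
    if dists[i] > epsilon:
        left = rdp(points[: i + 1], epsilon)
        right = rdp(points[i:], epsilon)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

# Example: compress a noisy joint trajectory before HMM training
traj = np.cumsum(np.random.default_rng(1).normal(size=(500, 2)), axis=0)
print(len(traj), "->", len(rdp(traj, epsilon=2.0)))
```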
26

Wang, Lipo. "Heteroassociations of spatio-temporal sequences with the bidirectional associative memory". IEEE Transactions on Neural Networks 11, no. 6 (2000): 1503–5. http://dx.doi.org/10.1109/72.883484.

27

Perperidis, Dimitrios, Raad H. Mohiaddin, and Daniel Rueckert. "Spatio-temporal free-form registration of cardiac MR image sequences". Medical Image Analysis 9, no. 5 (October 2005): 441–56. http://dx.doi.org/10.1016/j.media.2005.05.004.

28

Mobasseri, Bijan G., and Preethi Krishnamurthy. "Spatio-temporal object relationships in image sequences using adjacency matrices". Signal, Image and Video Processing 6, no. 2 (December 15, 2010): 247–58. http://dx.doi.org/10.1007/s11760-010-0195-3.

29

Luo, Guoliang, Zhigang Deng, Xin Zhao, Xiaogang Jin, Wei Zeng, Wenqiang Xie, and Hyewon Seo. "Spatio-temporal Segmentation Based Adaptive Compression of Dynamic Mesh Sequences". ACM Transactions on Multimedia Computing, Communications, and Applications 16, no. 1 (April 2, 2020): 1–24. http://dx.doi.org/10.1145/3377475.

30

Caucci, Luca, Harrison H. Barrett, and Jeffrey J. Rodriguez. "Spatio-temporal Hotelling observer for signal detection from image sequences". Optics Express 17, no. 13 (June 16, 2009): 10946. http://dx.doi.org/10.1364/oe.17.010946.

31

Zhou, Chunjie, Pengfei Dai, and Zhenxing Zhang. "OrientSTS-Spatio Temporal Sequence Searching for Trip Planning". International Journal of Web Services Research 15, no. 2 (April 2018): 21–46. http://dx.doi.org/10.4018/ijwsr.2018040102.

Abstract
For a satisfactory trip planning, the following features are desired: 1) automated suggestion of scenes or attractions; 2) personalized based on the interest and habits of travelers; 3) maximal coverage of sites of interest; and 4) minimal effort such as transporting time on the route. Automated scene suggestion requires collecting massive knowledge about scene sites and their characteristics, and personalized planning requires matching of a traveler profile with knowledge of scenes of interest. As a trip contains a sequence of stops at multiple scenes, the problem of trip planning becomes optimizing a temporal sequence where each stop is weighted. This article presents OrientSTS, a novel spatio-temporal sequence (STS) searching system for optimal personalized trip planning. OrientSTS provides a knowledge base of scenes with their tagged features and season characteristics. By combining personal profiles and scene features, OrientSTS generates a set of weighted scenes for each city for each user. OrientSTS can then retrieve the optimal sequence of scenes in terms of distance, weight, visiting time, and scene features. The authors develop alternative algorithms for searching optimal sequences, with consideration of the weight of each scene, the preference of users, and the travel time constraint. The experiments demonstrate the efficiency of the proposed algorithms based on real datasets from social networks.
32

Xu, Shuqiang, Qunying Huang, and Zhiqiang Zou. "Spatio-Temporal Transformer Recommender: Next Location Recommendation with Attention Mechanism by Mining the Spatio-Temporal Relationship between Visited Locations". ISPRS International Journal of Geo-Information 12, no. 2 (February 20, 2023): 79. http://dx.doi.org/10.3390/ijgi12020079.

Abstract
Location-based social networks (LBSNs) allow users to socialize with friends by sharing their daily life experiences online. In particular, the large amount of check-in data generated by LBSNs captures the visit locations of users and opens a new line of research on spatio-temporal big data, i.e., next point-of-interest (POI) recommendation. At present, while some advanced methods have been proposed for POI recommendation, existing work only leverages the temporal information of two consecutive LBSN check-ins. Specifically, these methods only focus on adjacent visit sequences but ignore non-contiguous visits, while these visits can be important in understanding the spatio-temporal correlation within the trajectory. In order to fully mine this non-contiguous visit information, we propose a multi-layer spatio-temporal deep learning attention model for POI recommendation, the Spatio-Temporal Transformer Recommender (STTF-Recommender). To incorporate the spatio-temporal patterns, we encode the information in the user's trajectory as latent representations into their embeddings before feeding them in. To mine the spatio-temporal relationship between any two visited locations, we utilize a Transformer aggregation layer. To match the most plausible candidates from all locations, we develop an attention matcher based on the attention mechanism. The STTF-Recommender was evaluated with two real-world datasets, and the findings showed that it improves the mean value of the Recall index at different scales by at least 13.75% compared with state-of-the-art models.
33

Sousa e Santos, Anderson Carlos, and Helio Pedrini. "Human Action Recognition Based on a Spatio-Temporal Video Autoencoder". International Journal of Pattern Recognition and Artificial Intelligence 34, no. 11 (March 11, 2020): 2040001. http://dx.doi.org/10.1142/s0218001420400017.

Abstract
Due to rapid advances in the development of surveillance cameras with high sampling rates, low cost, small size and high resolution, video-based action recognition systems have become more commonly used in various computer vision applications. Human operators can be supported with the aid of such systems to detect events of interest in video sequences, improving recognition results and reducing failure cases. In this work, we propose and evaluate a method to learn two-dimensional (2D) representations from video sequences based on an autoencoder framework. Spatial and temporal information is explored through a multi-stream convolutional neural network in the context of human action recognition. Experimental results on the challenging UCF101 and HMDB51 datasets demonstrate that our representation is capable of achieving competitive accuracy rates when compared to other approaches available in the literature.
34

Hainzl, S., G. Zöller, and J. Kurths. "Self-organization of spatio-temporal earthquake clusters". Nonlinear Processes in Geophysics 7, no. 1/2 (June 30, 2000): 21–29. http://dx.doi.org/10.5194/npg-7-21-2000.

Abstract
Cellular automaton versions of the Burridge-Knopoff model have been shown to reproduce the power law distribution of event sizes; that is, the Gutenberg-Richter law. However, they have failed to reproduce the occurrence of foreshock and aftershock sequences correlated with large earthquakes. We show that in the case of partial stress recovery due to transient creep occurring subsequently to earthquakes in the crust, such spring-block systems self-organize into a statistically stationary state characterized by a power law distribution of fracture sizes as well as by foreshocks and aftershocks accompanying large events. In particular, the increase of foreshock and the decrease of aftershock activity can be described by, aside from a prefactor, the same Omori law. The exponent of the Omori law depends on the relaxation time and on the spatial scale of transient creep. Further investigations concerning the number of aftershocks, the temporal variation of aftershock magnitudes, and the waiting time distribution support the conclusion that this model, even though more realistic physics is missing, captures in some ways the origin of the size distribution as well as the spatio-temporal clustering of earthquakes.
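For orientation only, the sketch below implements a basic Olami-Feder-Christensen spring-block cellular automaton, the class of model the abstract refers to. It omits the paper's key ingredient (partial stress recovery through transient creep) and uses periodic boundaries for brevity, so it illustrates the avalanche and event-size machinery rather than the foreshock-aftershock behaviour reported above; the lattice size and dissipation parameter are arbitrary.

```python
import numpy as np

def ofc_events(L=32, alpha=0.2, n_events=2000, seed=0):
    """Minimal spring-block cellular automaton: drive uniformly, topple
    sites above threshold, pass a fraction alpha of the released stress
    to each of the four neighbours, and record event sizes."""
    rng = np.random.default_rng(seed)
    stress = rng.uniform(0.0, 1.0, size=(L, L))
    sizes = []
    for _ in range(n_events):
        stress += 1.0 - stress.max()                   # uniform drive to threshold
        unstable = stress >= 1.0 - 1e-12
        size = 0
        while unstable.any():
            size += int(unstable.sum())
            release = np.where(unstable, stress, 0.0)
            stress[unstable] = 0.0
            stress += alpha * (np.roll(release, 1, 0) + np.roll(release, -1, 0) +
                               np.roll(release, 1, 1) + np.roll(release, -1, 1))
            unstable = stress >= 1.0 - 1e-12
        sizes.append(size)
    return np.array(sizes)

sizes = ofc_events()
print("largest avalanche:", sizes.max())               # heavy-tailed event sizes
```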
35

Haseyama, Miki, Daisuke Izumi, and Makoto Takizawa. "Super-Resolution Reconstruction for Spatio-Temporal Resolution Enhancement of Video Sequences". IEICE Transactions on Information and Systems E95.D, no. 9 (2012): 2355–58. http://dx.doi.org/10.1587/transinf.e95.d.2355.

36

Newport, Robert Ahadizad, Carlo Russo, Sidong Liu, Abdulla Al Suman, and Antonio Di Ieva. "SoftMatch: Comparing Scanpaths Using Combinatorial Spatio-Temporal Sequences with Fractal Curves". Sensors 22, no. 19 (September 30, 2022): 7438. http://dx.doi.org/10.3390/s22197438.

Abstract
Recent studies matching eye gaze patterns with those of others contain research that is heavily reliant on string editing methods borrowed from early work in bioinformatics. Previous studies have shown string editing methods to be susceptible to false negative results when matching mutated genes or unordered regions of interest in scanpaths. Even as new methods have emerged for matching amino acids using novel combinatorial techniques, scanpath matching is still limited by a traditional collinear approach. This approach reduces the ability to discriminate between free viewing scanpaths of two people looking at the same stimulus due to the heavy weight placed on linearity. To overcome this limitation, we here introduce a new method called SoftMatch to compare pairs of scanpaths. SoftMatch diverges from traditional scanpath matching in two different ways: firstly, by preserving locality using fractal curves to reduce dimensionality from 2D Cartesian (x,y) coordinates into 1D (h) Hilbert distances, and secondly by taking a combinatorial approach to fixation matching using discrete Fréchet distance measurements between segments of scanpath fixation sequences. These matching “sequences of fixations over time” are a loose acronym for SoftMatch. Results indicate high degrees of statistical and substantive significance when scoring matches between scanpaths made during free-form viewing of unfamiliar stimuli. Applications of this method can be used to better understand bottom up perceptual processes extending to scanpath outlier detection, expertise analysis, pathological screening, and salience prediction.
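Two of the building blocks named in the abstract, the Hilbert-curve reduction of 2D fixation coordinates to 1D distances and the discrete Fréchet distance between fixation segments, are standard and are sketched below; the grid size and toy scanpaths are assumptions, and the full SoftMatch combinatorial segment matching is not reproduced here.

```python
import numpy as np

def hilbert_xy2d(n, x, y):
    """Map integer coordinates (x, y) on an n x n grid (n a power of two)
    to a 1D Hilbert-curve index (classic bit-manipulation construction)."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                     # rotate/reflect so the curve stays continuous
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def discrete_frechet(p, q):
    """Discrete Fréchet distance between two sequences (dynamic programming)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    n_p, n_q = len(p), len(q)
    ca = np.zeros((n_p, n_q))
    ca[0, 0] = np.linalg.norm(p[0] - q[0])
    for i in range(1, n_p):
        ca[i, 0] = max(ca[i - 1, 0], np.linalg.norm(p[i] - q[0]))
    for j in range(1, n_q):
        ca[0, j] = max(ca[0, j - 1], np.linalg.norm(p[0] - q[j]))
    for i in range(1, n_p):
        for j in range(1, n_q):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           np.linalg.norm(p[i] - q[j]))
    return ca[-1, -1]

# Toy scanpaths: fixations as (x, y) pixels on a 256 x 256 stimulus
a = [(10, 12), (40, 50), (200, 180)]
b = [(12, 10), (45, 55), (190, 185)]
h_a = [[hilbert_xy2d(256, x, y)] for x, y in a]        # 1D Hilbert distances
h_b = [[hilbert_xy2d(256, x, y)] for x, y in b]
print(discrete_frechet(h_a, h_b))
```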
37

Karli, Sezin, and Yucel Saygin. "Mining periodic patterns in spatio-temporal sequences at different time granularities". Intelligent Data Analysis 13, no. 2 (April 17, 2009): 301–35. http://dx.doi.org/10.3233/ida-2009-0368.

38

Lotsch, A., M. A. Friedl, and J. Pinzon. "Spatio-temporal deconvolution of NDVI image sequences using independent component analysis". IEEE Transactions on Geoscience and Remote Sensing 41, no. 12 (December 2003): 2938–42. http://dx.doi.org/10.1109/tgrs.2003.819868.

39

Li, Renjie, Songyu Yu, and Xiaokang Yang. "Efficient Spatio-temporal Segmentation for Extracting Moving Objects in Video Sequences". IEEE Transactions on Consumer Electronics 53, no. 3 (August 2007): 1161–67. http://dx.doi.org/10.1109/tce.2007.4341600.

40

Barlow, Horace. "Intraneuronal information processing, directional selectivity and memory for spatio-temporal sequences". Network: Computation in Neural Systems 7, no. 2 (May 1996): 251–59. http://dx.doi.org/10.1088/0954-898x/7/2/004.

41

Barlow, Horace. "Intraneuronal information processing, directional selectivity and memory for spatio-temporal sequences". Network: Computation in Neural Systems 7, no. 2 (January 1996): 251–59. http://dx.doi.org/10.1088/0954-898x_7_2_004.

42

Yang, Zhengyuan, Yuncheng Li, Jianchao Yang, and Jiebo Luo. "Action Recognition With Spatio–Temporal Visual Attention on Skeleton Image Sequences". IEEE Transactions on Circuits and Systems for Video Technology 29, no. 8 (August 2019): 2405–15. http://dx.doi.org/10.1109/tcsvt.2018.2864148.

43

Mure, Simon, Thomas Grenier, Dominik S. Meier, Charles R. G. Guttmann, and Hugues Benoit-Cattin. "Unsupervised spatio-temporal filtering of image sequences. A mean-shift specification". Pattern Recognition Letters 68 (December 2015): 48–55. http://dx.doi.org/10.1016/j.patrec.2015.07.021.

44

Souza, Marcos Roberto e., Helena de Almeida Maia, Marcelo Bernardes Vieira, and Helio Pedrini. "Survey on visual rhythms: A spatio-temporal representation for video sequences". Neurocomputing 402 (August 2020): 409–22. http://dx.doi.org/10.1016/j.neucom.2020.04.035.

45

Long, John A., Rick L. Lawrence, Perry R. Miller, Lucy A. Marshall, and Mark C. Greenwood. "Adoption of cropping sequences in northeast Montana: A spatio-temporal analysis". Agriculture, Ecosystems & Environment 197 (December 2014): 77–87. http://dx.doi.org/10.1016/j.agee.2014.07.022.

46

Hamedani, Kian, Zahra Bahmani, and Amin Mohammadian. "Spatio-temporal filtering of thermal video sequences for heart rate estimation". Expert Systems with Applications 54 (July 2016): 88–94. http://dx.doi.org/10.1016/j.eswa.2016.01.022.

47

Li, Ren-jie, Song-yu Yu, and Xiang-wen Wang. "Unsupervised spatio-temporal segmentation for extracting moving objects in video sequences". Journal of Shanghai Jiaotong University (Science) 14, no. 2 (April 2009): 154–61. http://dx.doi.org/10.1007/s12204-009-0154-8.

48

Li, Zheng, Xueyuan Huang, Chun Liu, and Wei Yang. "Spatio-Temporal Unequal Interval Correlation-Aware Self-Attention Network for Next POI Recommendation". ISPRS International Journal of Geo-Information 11, no. 11 (October 29, 2022): 543. http://dx.doi.org/10.3390/ijgi11110543.

Abstract
As the core of location-based social networks (LBSNs), the main task of next point-of-interest (POI) recommendation is to predict the next possible POI from the context information in users' historical check-in trajectories. It is well known that spatial-temporal contextual information plays an important role in analyzing users' check-in behaviors. Moreover, the information between POIs provides a non-trivial correlation for modeling users' visiting preferences. Unfortunately, the impact of such correlation information and of the spatio-temporal unequal interval information between POIs on the user's selection of the next POI is rarely considered. Therefore, we propose a spatio-temporal unequal interval correlation-aware self-attention network (STUIC-SAN) model for next POI recommendation. Specifically, we first use the linear regression method to obtain the spatio-temporal unequal interval correlation between any two POIs from users' check-in sequences. Sequentially, we design a spatio-temporal unequal interval correlation-aware self-attention mechanism, which is able to comprehensively capture users' personalized spatio-temporal unequal interval correlation preferences by incorporating multiple factors, including POI information, spatio-temporal unequal interval correlation information between POIs, and the absolute positional information of the corresponding POIs. On this basis, we perform next POI recommendation. Finally, we conduct a comprehensive performance evaluation using large-scale real-world datasets from two popular location-based social networks, namely Foursquare and Gowalla. Experimental results on the two datasets indicate that the proposed STUIC-SAN outperformed state-of-the-art next POI recommendation approaches with respect to two commonly used evaluation metrics.
49

Niu, Yaqing, Sridhar Krishnan, and Qin Zhang. "Spatio-Temporal Just Noticeable Distortion Model Guided Video Watermarking". International Journal of Digital Crime and Forensics 2, no. 4 (October 2010): 16–36. http://dx.doi.org/10.4018/jdcf.2010100102.

Abstract
Perceptual Watermarking should take full advantage of the results from human visual system (HVS) studies. Just noticeable distortion (JND), which refers to the maximum distortion that the HVS does not perceive, gives a way to model the HVS accurately. An effective Spatio-Temporal JND model guided video watermarking scheme in DCT domain is proposed in this paper. The watermarking scheme is based on the design of an additional accurate JND visual model which incorporates spatial Contrast Sensitivity Function (CSF), temporal modulation factor, retinal velocity, luminance adaptation and contrast masking. The proposed watermarking scheme, where the JND model is fully used to determine scene-adaptive upper bounds on watermark insertion, allows providing the maximum strength transparent watermark. Experimental results confirm the improved performance of the Spatio-Temporal JND model. The authors’ Spatio-Temporal JND model is capable of yielding higher injected-watermark energy without introducing noticeable distortion to the original video sequences and outperforms the relevant existing visual models. Simulation results show that the proposed Spatio-Temporal JND model guided video watermarking scheme is more robust than other algorithms based on the relevant existing perceptual models while retaining the watermark transparency.
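As a toy illustration of JND-guided embedding in the DCT domain (not the paper's spatio-temporal JND model, whose CSF, retinal-velocity, and masking terms are not computed here), the sketch below adds a +/-1 watermark to the mid-frequency coefficients of one 8x8 luminance block, scaled by a placeholder per-coefficient JND bound.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_block(block, wm_bits, jnd, alpha=0.9):
    """Embed +/-1 bits into mid-frequency DCT coefficients of an 8x8
    block, keeping each change below alpha times the JND bound."""
    coeffs = dctn(block, norm="ortho")
    band = np.zeros((8, 8), dtype=bool)
    band[2:5, 2:5] = True                              # mid-frequency band (9 coefficients)
    coeffs[band] += alpha * jnd[band] * wm_bits
    return idctn(coeffs, norm="ortho")

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (8, 8))
jnd = np.full((8, 8), 3.0)                             # placeholder uniform JND bound
bits = rng.choice([-1.0, 1.0], size=9)
marked = embed_block(block, bits, jnd)
print(float(np.max(np.abs(marked - block))))           # distortion stays small
```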
50

Zhong, Sheng-hua, Yan Liu, Feifei Ren, Jinghuan Zhang, and Tongwei Ren. "Video Saliency Detection via Dynamic Consistent Spatio-Temporal Attention Modelling". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 1063–69. http://dx.doi.org/10.1609/aaai.v27i1.8642.

Abstract
The human vision system actively seeks salient regions and movements in video sequences to reduce the search effort. Modeling a computational visual saliency map provides important information for semantic understanding in many real-world applications. In this paper, we propose a novel video saliency detection model for detecting the attended regions that correspond to both interesting objects and dominant motions in video sequences. In the spatial saliency map, we inherit the classical bottom-up spatial saliency map. In the temporal saliency map, a novel optical flow model is proposed based on the dynamic consistency of motion. The spatial and the temporal saliency maps are constructed and further fused together to create a novel attention model. The proposed attention model is evaluated on three video datasets. Empirical validations demonstrate that the salient regions detected by our dynamic consistent saliency map highlight the interesting objects effectively and efficiently. More importantly, the video attended regions automatically detected by the proposed attention model are consistent with the ground truth saliency maps of eye movement data.