To view other types of publications on this topic, follow the link: Spatio-temporal sequences.

Journal articles on the topic "Spatio-temporal sequences"

Consult the top 50 journal articles for your research on the topic "Spatio-temporal sequences".

Next to each work in the reference list there is an "Add to bibliography" button. Use it, and we will automatically format the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the online abstract of the work, if these details are available in the metadata.

Browse journal articles across a wide range of disciplines and format your bibliography correctly.

1

Caspi, Y., and M. Irani. "Spatio-temporal alignment of sequences." IEEE Transactions on Pattern Analysis and Machine Intelligence 24, no. 11 (November 2002): 1409–24. http://dx.doi.org/10.1109/tpami.2002.1046148.

2

Horn, D., G. Dror, and B. Quenet. "Dynamic Proximity of Spatio-Temporal Sequences." IEEE Transactions on Neural Networks 15, no. 5 (September 2004): 1002–8. http://dx.doi.org/10.1109/tnn.2004.832809.

3

Diego, Ferran, Joan Serrat, and Antonio M. Lopez. "Joint Spatio-Temporal Alignment of Sequences." IEEE Transactions on Multimedia 15, no. 6 (October 2013): 1377–87. http://dx.doi.org/10.1109/tmm.2013.2247390.

4

Azzabou, Noura, and Nikos Paragios. "Spatio-temporal speckle reduction in ultrasound sequences." Inverse Problems & Imaging 4, no. 2 (2010): 211–22. http://dx.doi.org/10.3934/ipi.2010.4.211.

5

Hach, Thomas, and Tamara Seybold. "Spatio-Temporal Denoising for Depth Map Sequences." International Journal of Multimedia Data Engineering and Management 7, no. 2 (April 2016): 21–35. http://dx.doi.org/10.4018/ijmdem.2016040102.

Abstract:
This paper proposes a novel strategy for depth video denoising in RGBD camera systems. Depth map sequences obtained by state-of-the-art Time-of-Flight sensors suffer from high temporal noise. Hence, all high-level RGB video renderings based on the accompanying depth maps' 3D geometry, such as augmented reality applications, will have severe temporal flickering artifacts. The authors approached this limitation by decoupling depth map upscaling from the temporal denoising step. Thereby, denoising is processed on raw pixels including uncorrelated pixel-wise noise distributions. The authors' denoising methodology utilizes joint sparse 3D transform-domain collaborative filtering. Therein, they extract RGB texture information to yield a more stable and accurate highly sparse 3D depth block representation for the consecutive shrinkage operation. They show the effectiveness of their method on real RGBD camera data and on a publicly available synthetic data set. The evaluation reveals that the authors' method is superior to state-of-the-art methods. Their method delivers flicker-free depth video streams for future applications.
6

Ahn, J. H., and J. K. Kim. "Spatio-temporal visibility function for image sequences." Electronics Letters 27, no. 7 (1991): 585. http://dx.doi.org/10.1049/el:19910369.

7

Oliveira, Francisco P. M., Andreia Sousa, Rubim Santos, and João Manuel R. S. Tavares. "Spatio-temporal alignment of pedobarographic image sequences." Medical & Biological Engineering & Computing 49, no. 7 (April 8, 2011): 843–50. http://dx.doi.org/10.1007/s11517-011-0771-x.

8

Pavlovskaya, Marina, and Shaul Hochstein. "Explicit Ensemble Perception of Temporal and Spatio-temporal Element Sequences." Journal of Vision 21, no. 9 (September 27, 2021): 2570. http://dx.doi.org/10.1167/jov.21.9.2570.

9

Koseoglu, Baran, Erdem Kaya, Selim Balcisoy, and Burcin Bozkaya. "ST Sequence Miner: visualization and mining of spatio-temporal event sequences." Visual Computer 36, no. 10-12 (July 16, 2020): 2369–81. http://dx.doi.org/10.1007/s00371-020-01894-6.

10

Addesso, Paolo, Maurizio Longo, Rocco Restaino, and Gemine Vivone. "Spatio-temporal resolution enhancement for cloudy thermal sequences." European Journal of Remote Sensing 52, sup1 (October 11, 2018): 2–14. http://dx.doi.org/10.1080/22797254.2018.1526045.

11

Xu, Binbin, Sarthak Pathak, Hiromitsu Fujii, Atsushi Yamashita, and Hajime Asama. "Spatio-Temporal Video Completion in Spherical Image Sequences." IEEE Robotics and Automation Letters 2, no. 4 (October 2017): 2032–39. http://dx.doi.org/10.1109/lra.2017.2718106.

12

Idris, F. M., and S. Panchanathan. "Spatio-temporal indexing of vector quantized video sequences." IEEE Transactions on Circuits and Systems for Video Technology 7, no. 5 (1997): 728–40. http://dx.doi.org/10.1109/76.633489.

13

Li, Weiwei, Rong Du, and Shudong Chen. "Skeleton-Based Spatio-Temporal U-Network for 3D Human Pose Estimation in Video." Sensors 22, no. 7 (March 28, 2022): 2573. http://dx.doi.org/10.3390/s22072573.

Abstract:
Despite the great progress in 3D pose estimation from videos, there is still a lack of effective means to extract spatio-temporal features of different granularity from complex dynamic skeleton sequences. To tackle this problem, we propose a novel, skeleton-based spatio-temporal U-Net (STUNet) scheme to deal with spatio-temporal features in multiple scales for 3D human pose estimation in video. The proposed STUNet architecture consists of a cascade structure of semantic graph convolution layers and structural temporal dilated convolution layers, progressively extracting and fusing the spatio-temporal semantic features from fine-grained to coarse-grained. This U-shaped network achieves scale compression and feature squeezing by downscaling and upscaling, while abstracting multi-resolution spatio-temporal dependencies through skip connections. Experiments demonstrate that our model effectively captures comprehensive spatio-temporal features in multiple scales and achieves substantial improvements over mainstream methods on real-world datasets.
14

Cho, J. H., and S. D. Kim. "Object detection using spatio-temporal thresholding in image sequences." Electronics Letters 40, no. 18 (2004): 1109. http://dx.doi.org/10.1049/el:20045316.

15

Nolte, Nicholas, Nils Kurzawa, Roland Eils, and Carl Herrmann. "MapMyFlu: visualizing spatio-temporal relationships between related influenza sequences." Nucleic Acids Research 43, W1 (May 4, 2015): W547–W551. http://dx.doi.org/10.1093/nar/gkv417.

16

Tuia, Devis, Rosa Lasaponara, Luciano Telesca, and Mikhail Kanevski. "Emergence of spatio-temporal patterns in forest-fire sequences." Physica A: Statistical Mechanics and its Applications 387, no. 13 (May 2008): 3271–80. http://dx.doi.org/10.1016/j.physa.2008.01.057.

17

Tomic, M., S. Loncaric, and D. Sersic. "Adaptive spatio-temporal denoising of fluoroscopic X-ray sequences." Biomedical Signal Processing and Control 7, no. 2 (March 2012): 173–79. http://dx.doi.org/10.1016/j.bspc.2011.02.003.

18

Baddar, Wissam J., and Yong Man Ro. "Mode Variational LSTM Robust to Unseen Modes of Variation: Application to Facial Expression Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3215–23. http://dx.doi.org/10.1609/aaai.v33i01.33013215.

Abstract:
Spatio-temporal feature encoding is essential for encoding the dynamics in video sequences. Recurrent neural networks, particularly long short-term memory (LSTM) units, have been popular as an efficient tool for encoding spatio-temporal features in sequences. In this work, we investigate the effect of mode variations on the encoded spatio-temporal features using LSTMs. We show that the LSTM retains information related to the mode variation in the sequence, which is irrelevant to the task at hand (e.g., classifying facial expressions). In fact, the LSTM forget mechanism is not robust enough to mode variations and preserves information that can negatively affect the encoded spatio-temporal features. We propose the mode variational LSTM to encode spatio-temporal features robust to unseen modes of variation. The mode variational LSTM modifies the original LSTM structure by adding an additional cell state that focuses on encoding the mode variation in the input sequence. To efficiently regulate what features should be stored in the additional cell state, additional gating functionality is also introduced. The effectiveness of the proposed mode variational LSTM is verified using the facial expression recognition task. Comparative experiments on publicly available datasets verified that the proposed mode variational LSTM outperforms existing methods. Moreover, a new dynamic facial expression dataset with different modes of variation, including pose and illumination variations, was collected to comprehensively evaluate the proposed mode variational LSTM. Experimental results verified that the proposed mode variational LSTM encodes spatio-temporal features robust to unseen modes of variation.
19

Cordeiro-Costas, Moisés, Daniel Villanueva, Andrés E. Feijóo-Lorenzo, and Javier Martínez-Torres. "Simulation of Wind Speeds with Spatio-Temporal Correlation." Applied Sciences 11, no. 8 (April 8, 2021): 3355. http://dx.doi.org/10.3390/app11083355.

Abstract:
Nowadays, there is a growing trend to incorporate renewables in electrical power systems and, in particular, wind energy, which has become an important primary source in the electricity mix of many countries, where wind farms have been proliferating in recent years. This circumstance makes it particularly interesting to understand wind behavior, because generated power depends on it. In this paper, a method is proposed to synthetically generate sequences of wind speed values satisfying two important constraints. The first is fitting given statistical distributions, reflecting the generally accepted fact that the measured wind speed at a location follows a certain distribution. The second is imposing spatial and temporal correlations among the simulated wind speed sequences. The method was successfully checked under different scenarios depending on variables such as the number of locations, the duration of the data collection period, or the size of the simulated series, and the results were highly accurate.
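The two constraints described in this abstract, fitting a prescribed marginal distribution while imposing spatial and temporal correlation, can be sketched with a generic copula-style recipe. This is not the paper's own method; the correlation matrix, AR(1) coefficient, and Weibull parameters below are illustrative assumptions:

```python
import math
import numpy as np

rng = np.random.default_rng(42)

# Illustrative spatial correlation matrix for 3 sites (assumed, not from the paper)
R = np.array([[1.0, 0.7, 0.4],
              [0.7, 1.0, 0.6],
              [0.4, 0.6, 1.0]])
L = np.linalg.cholesky(R)

T, phi = 20000, 0.8                       # series length and AR(1) coefficient
eps = rng.standard_normal((T, 3)) @ L.T   # spatially correlated white noise
z = np.zeros((T, 3))
for t in range(1, T):                     # temporal correlation via an AR(1) filter
    z[t] = phi * z[t - 1] + math.sqrt(1 - phi**2) * eps[t]

# Map standard-normal marginals to Weibull(k=2, scale=8 m/s) via the inverse CDF
u = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2)))
speeds = 8.0 * (-np.log(1.0 - u)) ** (1.0 / 2.0)
```

Because the quantile transform is monotone and applied after the innovations are spatially correlated, the simulated series approximately retain both the target cross-site correlations and the Weibull marginals.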
20

Li, Meng He, Chuan Lin, Jing Bei Tian, and Sheng Hui Pan. "An Algorithms for Super-Resolution Reconstruction of Video Based on Spatio-Temporal Adaptive." Advanced Materials Research 532-533 (June 2012): 1680–84. http://dx.doi.org/10.4028/www.scientific.net/amr.532-533.1680.

Abstract:
To address the weaknesses of conventional POCS algorithms, a novel spatio-temporal adaptive super-resolution reconstruction algorithm for video is proposed in this paper. The spatio-temporal adaptive mechanism, built on the POCS super-resolution reconstruction algorithm, effectively prevents the reconstructed image from being degraded by inaccurate motion information and avoids the noise amplification that occurs when conventional POCS algorithms reconstruct image sequences with dramatic motion. Experimental results show that the spatio-temporal adaptive algorithm not only effectively alleviates amplified noise but also outperforms traditional POCS algorithms in signal-to-noise ratio.
21

ILG, WINFRIED, GÖKHAN H. BAKIR, JOHANNES MEZGER, and MARTIN A. GIESE. "ON THE REPRESENTATION, LEARNING AND TRANSFER OF SPATIO-TEMPORAL MOVEMENT CHARACTERISTICS." International Journal of Humanoid Robotics 01, no. 04 (December 2004): 613–36. http://dx.doi.org/10.1142/s0219843604000320.

Abstract:
In this paper we present a learning-based approach for the modeling of complex movement sequences. Based on the method of Spatio-Temporal Morphable Models (STMMs) we derive a hierarchical algorithm that, in a first step, identifies automatically movement elements in movement sequences based on a coarse spatio-temporal description, and in a second step models these movement primitives by approximation through linear combinations of learned example movement trajectories. We describe the different steps of the algorithm and show how it can be applied for modeling and synthesis of complex sequences of human movements that contain movement elements with a variable style. The proposed method is demonstrated on different applications of movement representation relevant for imitation learning of movement styles in humanoid robotics.
22

Liu, Yang. "Multi-Scale Spatio-Temporal Feature Extraction and Depth Estimation from Sequences by Ordinal Classification." Sensors 20, no. 7 (April 1, 2020): 1979. http://dx.doi.org/10.3390/s20071979.

Abstract:
Depth estimation is a key problem in 3D computer vision and has a wide variety of applications. In this paper we explore whether a deep learning network can predict depth maps accurately by learning multi-scale spatio-temporal features from sequences and by recasting depth estimation from a regression task to an ordinal classification task. We design an encoder-decoder network with several multi-scale strategies to improve its performance, and extract spatio-temporal features with ConvLSTM. Our experiments show that the proposed method improves error metrics by almost 10% and accuracy metrics by up to 2%. The results also show that extracting spatio-temporal features can dramatically improve performance in the depth estimation task. We plan to extend this work in a self-supervised manner to remove the dependence on large-scale labeled data.
23

Wang, Jin, Zhao Hui Li, Dong Mei Li, and Yu Wang. "A Spatio-Temporal Video Segmentation Method Based on Motion Detection." Applied Mechanics and Materials 135-136 (October 2011): 1147–54. http://dx.doi.org/10.4028/www.scientific.net/amm.135-136.1147.

Abstract:
In this paper, a new spatio-temporal segmentation algorithm is proposed to extract moving objects from video sequences taken by stationary cameras. First, motion detection is used to obtain a mask representing moving regions, with an estimated noise parameter that effectively improves noise immunity. Because moving video objects are short on texture, an eight-neighbor motion detection step is presented to smooth the mask boundary and fill interior holes, and a morphological filter is then applied to refine the moving mask. Second, spatial segmentation is performed with the Canny operator, using the gradient histogram to select the high threshold and increase the adaptivity of the Canny algorithm. Finally, the temporal and spatial masks are merged by a neighborhood matching algorithm to further ensure the reliability and efficiency of the algorithm. Experiments on typical sequences demonstrate the validity of the proposed algorithm.
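The temporal half of such a pipeline, a motion mask obtained by thresholding frame differences against an estimated noise parameter, can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the robust noise estimator is an assumption, and the eight-neighbor smoothing, morphological filtering, and Canny-based spatial stage are not reproduced.

```python
import numpy as np

def motion_mask(frames, k=3.0):
    """Mark pixels whose inter-frame difference exceeds k times the noise level.

    `frames` is a (T, H, W) array; the result has shape (T-1, H, W).
    The noise level is estimated robustly from the median absolute
    frame difference (an assumption made for this sketch).
    """
    frames = np.asarray(frames, dtype=float)
    diff = np.abs(frames[1:] - frames[:-1])
    sigma = np.median(diff) / 0.6745        # robust sigma from the MAD
    return diff > k * max(sigma, 1e-12)     # guard against a zero estimate
```

On a synthetic pair of noisy frames where a bright block appears, the mask fires on the block and stays mostly quiet elsewhere.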
24

AMEMORI, Kenichi, and Shin ISHII. "Self-Organization and Association for Fine Spatio-Temporal Spike Sequences." Transactions of the Institute of Systems, Control and Information Engineers 13, no. 7 (2000): 308–17. http://dx.doi.org/10.5687/iscie.13.7_308.

25

CHENEVIÈRE, FREDERIC, SAMIA BOUKIR, and BERTRAND VACHON. "COMPRESSION AND RECOGNITION OF SPATIO-TEMPORAL SEQUENCES FROM CONTEMPORARY BALLET." International Journal of Pattern Recognition and Artificial Intelligence 20, no. 05 (August 2006): 727–45. http://dx.doi.org/10.1142/s0218001406004880.

Abstract:
We aim at recognizing a set of dance gestures from contemporary ballet. Our input data are motion trajectories followed by the joints of a dancing body provided by a motion-capture system. It is obvious that direct use of the original signals is unreliable and expensive. Therefore, we propose a suitable tool for nonuniform sub-sampling of spatio-temporal signals. The key to our approach is the use of polygonal approximation to provide a compact and efficient representation of motion trajectories. Our dance gesture recognition method involves a set of Hidden Markov Models (HMMs), each of them being related to a motion trajectory followed by the joints. The recognition of such movements is then achieved by matching the resulting gesture models with the input data via HMMs. We have validated our recognition system on 12 fundamental movements from contemporary ballet performed by four dancers.
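Polygonal approximation as a nonuniform sub-sampler of motion trajectories can be illustrated with the classic Ramer-Douglas-Peucker algorithm (one standard choice; the paper's exact approximation scheme may differ):

```python
import numpy as np

def rdp(points, eps):
    """Ramer-Douglas-Peucker polygonal approximation of a 2D trajectory.

    Keeps only the vertices needed so the polyline stays within `eps`
    of the original curve, yielding a compact, nonuniform sub-sampling.
    """
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts
    start, end = pts[0], pts[-1]
    v = end - start
    n = np.linalg.norm(v) or 1e-12          # guard for coincident endpoints
    # Perpendicular distance of every point to the chord start -> end
    d = np.abs(v[0] * (pts[:, 1] - start[1]) - v[1] * (pts[:, 0] - start[0])) / n
    i = int(np.argmax(d))
    if d[i] > eps:                           # recurse on both sides of the far point
        left, right = rdp(pts[: i + 1], eps), rdp(pts[i:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])
```

A trajectory with a small wiggle and one sharp detour is reduced to its endpoints plus the two vertices that matter.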
26

Wang, Lipo. "Heteroassociations of spatio-temporal sequences with the bidirectional associative memory." IEEE Transactions on Neural Networks 11, no. 6 (2000): 1503–5. http://dx.doi.org/10.1109/72.883484.

27

Perperidis, Dimitrios, Raad H. Mohiaddin, and Daniel Rueckert. "Spatio-temporal free-form registration of cardiac MR image sequences." Medical Image Analysis 9, no. 5 (October 2005): 441–56. http://dx.doi.org/10.1016/j.media.2005.05.004.

28

Mobasseri, Bijan G., and Preethi Krishnamurthy. "Spatio-temporal object relationships in image sequences using adjacency matrices." Signal, Image and Video Processing 6, no. 2 (December 15, 2010): 247–58. http://dx.doi.org/10.1007/s11760-010-0195-3.

29

Luo, Guoliang, Zhigang Deng, Xin Zhao, Xiaogang Jin, Wei Zeng, Wenqiang Xie, and Hyewon Seo. "Spatio-temporal Segmentation Based Adaptive Compression of Dynamic Mesh Sequences." ACM Transactions on Multimedia Computing, Communications, and Applications 16, no. 1 (April 2, 2020): 1–24. http://dx.doi.org/10.1145/3377475.

30

Caucci, Luca, Harrison H. Barrett, and Jeffrey J. Rodriguez. "Spatio-temporal Hotelling observer for signal detection from image sequences." Optics Express 17, no. 13 (June 16, 2009): 10946. http://dx.doi.org/10.1364/oe.17.010946.

31

Zhou, Chunjie, Pengfei Dai, and Zhenxing Zhang. "OrientSTS-Spatio Temporal Sequence Searching for Trip Planning." International Journal of Web Services Research 15, no. 2 (April 2018): 21–46. http://dx.doi.org/10.4018/ijwsr.2018040102.

Abstract:
For satisfactory trip planning, the following features are desired: 1) automated suggestion of scenes or attractions; 2) personalization based on the interests and habits of travelers; 3) maximal coverage of sites of interest; and 4) minimal effort, such as transport time on the route. Automated scene suggestion requires collecting massive knowledge about scene sites and their characteristics, and personalized planning requires matching a traveler profile with knowledge of scenes of interest. As a trip contains a sequence of stops at multiple scenes, the problem of trip planning becomes optimizing a temporal sequence where each stop is weighted. This article presents OrientSTS, a novel spatio-temporal sequence (STS) searching system for optimal personalized trip planning. OrientSTS provides a knowledge base of scenes with their tagged features and season characteristics. By combining personal profiles and scene features, OrientSTS generates a set of weighted scenes for each city for each user. OrientSTS can then retrieve the optimal sequence of scenes in terms of distance, weight, visiting time, and scene features. The authors develop alternative algorithms for searching optimal sequences, with consideration of the weight of each scene, the preference of users, and the travel time constraint. The experiments demonstrate the efficiency of the proposed algorithms based on real datasets from social networks.
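The underlying optimization, choosing an order of weighted stops under a travel-time budget, can be shown with a tiny exhaustive search. This is a toy baseline only; OrientSTS develops far more scalable algorithms, and the weights, visit times, and distances below are made-up examples:

```python
from itertools import permutations

def best_sequence(weights, visit, dist, budget):
    """Exhaustively search scene orderings for the maximum total weight
    whose visit time plus travel time fits within the time budget."""
    names = list(weights)
    best, best_w = (), 0
    for r in range(1, len(names) + 1):
        for perm in permutations(names, r):
            time = sum(visit[s] for s in perm) + \
                   sum(dist[a][b] for a, b in zip(perm, perm[1:]))
            w = sum(weights[s] for s in perm)
            if time <= budget and w > best_w:
                best, best_w = perm, w
    return best, best_w
```

With three scenes and a budget of 6 time units, the search trades the cheap-but-light stop B for the heavier pair A and C. The factorial cost of this brute force is exactly why the paper's dedicated algorithms matter.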
32

Xu, Shuqiang, Qunying Huang, and Zhiqiang Zou. "Spatio-Temporal Transformer Recommender: Next Location Recommendation with Attention Mechanism by Mining the Spatio-Temporal Relationship between Visited Locations." ISPRS International Journal of Geo-Information 12, no. 2 (February 20, 2023): 79. http://dx.doi.org/10.3390/ijgi12020079.

Abstract:
Location-based social networks (LBSNs) allow users to socialize with friends by sharing their daily life experiences online. In particular, the large amount of check-in data generated by LBSNs captures the visit locations of users and opens a new line of research on spatio-temporal big data, i.e., next point-of-interest (POI) recommendation. At present, while some advanced methods have been proposed for POI recommendation, existing work only leverages the temporal information of two consecutive LBSN check-ins. Specifically, these methods only focus on adjacent visit sequences but ignore non-contiguous visits, while these visits can be important in understanding the spatio-temporal correlation within the trajectory. In order to fully mine this non-contiguous visit information, we propose a multi-layer spatio-temporal deep learning attention model for POI recommendation, the Spatio-Temporal Transformer Recommender (STTF-Recommender). To incorporate the spatio-temporal patterns, we encode the information in the user's trajectory as latent representations into their embeddings before feeding them. To mine the spatio-temporal relationship between any two visited locations, we utilize the Transformer aggregation layer. To match the most plausible candidates from all locations, we develop an attention matcher based on the attention mechanism. The STTF-Recommender was evaluated with two real-world datasets, and the findings showed that STTF improves the mean value of the Recall index at different scales by at least 13.75% compared with state-of-the-art models.
33

Sousa e Santos, Anderson Carlos, and Helio Pedrini. "Human Action Recognition Based on a Spatio-Temporal Video Autoencoder." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 11 (March 11, 2020): 2040001. http://dx.doi.org/10.1142/s0218001420400017.

Abstract:
Due to rapid advances in the development of surveillance cameras with high sampling rates, low cost, small size and high resolution, video-based action recognition systems have become more commonly used in various computer vision applications. Human operators can be supported with the aid of such systems to detect events of interest in video sequences, improving recognition results and reducing failure cases. In this work, we propose and evaluate a method to learn two-dimensional (2D) representations from video sequences based on an autoencoder framework. Spatial and temporal information is explored through a multi-stream convolutional neural network in the context of human action recognition. Experimental results on the challenging UCF101 and HMDB51 datasets demonstrate that our representation is capable of achieving competitive accuracy rates when compared to other approaches available in the literature.
34

Hainzl, S., G. Zöller, and J. Kurths. "Self-organization of spatio-temporal earthquake clusters." Nonlinear Processes in Geophysics 7, no. 1/2 (June 30, 2000): 21–29. http://dx.doi.org/10.5194/npg-7-21-2000.

Abstract:
Cellular automaton versions of the Burridge-Knopoff model have been shown to reproduce the power law distribution of event sizes, that is, the Gutenberg-Richter law. However, they have failed to reproduce the occurrence of foreshock and aftershock sequences correlated with large earthquakes. We show that in the case of partial stress recovery due to transient creep occurring subsequently to earthquakes in the crust, such spring-block systems self-organize into a statistically stationary state characterized by a power law distribution of fracture sizes as well as by foreshocks and aftershocks accompanying large events. In particular, the increase of foreshock and the decrease of aftershock activity can be described by, aside from a prefactor, the same Omori law. The exponent of the Omori law depends on the relaxation time and on the spatial scale of transient creep. Further investigations concerning the number of aftershocks, the temporal variation of aftershock magnitudes, and the waiting time distribution support the conclusion that this model, even if "more realistic" physics is missing, captures in some ways the origin of the size distribution as well as the spatio-temporal clustering of earthquakes.
35

HASEYAMA, Miki, Daisuke IZUMI, and Makoto TAKIZAWA. "Super-Resolution Reconstruction for Spatio-Temporal Resolution Enhancement of Video Sequences." IEICE Transactions on Information and Systems E95.D, no. 9 (2012): 2355–58. http://dx.doi.org/10.1587/transinf.e95.d.2355.

36

Newport, Robert Ahadizad, Carlo Russo, Sidong Liu, Abdulla Al Suman, and Antonio Di Ieva. "SoftMatch: Comparing Scanpaths Using Combinatorial Spatio-Temporal Sequences with Fractal Curves." Sensors 22, no. 19 (September 30, 2022): 7438. http://dx.doi.org/10.3390/s22197438.

Abstract:
Recent studies matching eye gaze patterns with those of others contain research that is heavily reliant on string editing methods borrowed from early work in bioinformatics. Previous studies have shown string editing methods to be susceptible to false negative results when matching mutated genes or unordered regions of interest in scanpaths. Even as new methods have emerged for matching amino acids using novel combinatorial techniques, scanpath matching is still limited by a traditional collinear approach. This approach reduces the ability to discriminate between free viewing scanpaths of two people looking at the same stimulus due to the heavy weight placed on linearity. To overcome this limitation, we here introduce a new method called SoftMatch to compare pairs of scanpaths. SoftMatch diverges from traditional scanpath matching in two different ways: firstly, by preserving locality using fractal curves to reduce dimensionality from 2D Cartesian (x,y) coordinates into 1D (h) Hilbert distances, and secondly by taking a combinatorial approach to fixation matching using discrete Fréchet distance measurements between segments of scanpath fixation sequences. These matching “sequences of fixations over time” are a loose acronym for SoftMatch. Results indicate high degrees of statistical and substantive significance when scoring matches between scanpaths made during free-form viewing of unfamiliar stimuli. Applications of this method can be used to better understand bottom up perceptual processes extending to scanpath outlier detection, expertise analysis, pathological screening, and salience prediction.
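The locality-preserving 2D-to-1D reduction the abstract mentions can be illustrated with the standard Hilbert-curve index (the classic bitwise mapping; SoftMatch's own implementation details are not shown here):

```python
def xy2d(n, x, y):
    """Map grid point (x, y) to its distance d along a Hilbert curve.

    n is the grid side length and must be a power of two. Nearby (x, y)
    points tend to get nearby d values, which is the locality property
    that makes 1D distances a usable proxy for 2D fixation positions.
    """
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)   # which quadrant, in curve order
        if ry == 0:                    # rotate/reflect into the sub-quadrant frame
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d
```

On a 2x2 grid the curve visits (0,0), (0,1), (1,1), (1,0) in order, and on any power-of-two grid the mapping is a bijection onto 0..n*n-1.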
37

Karli, Sezin, and Yucel Saygin. "Mining periodic patterns in spatio-temporal sequences at different time granularities." Intelligent Data Analysis 13, no. 2 (April 17, 2009): 301–35. http://dx.doi.org/10.3233/ida-2009-0368.

38

Lotsch, A., M. A. Friedl, and J. Pinzon. "Spatio-temporal deconvolution of ndvi image sequences using independent component analysis." IEEE Transactions on Geoscience and Remote Sensing 41, no. 12 (December 2003): 2938–42. http://dx.doi.org/10.1109/tgrs.2003.819868.

39

Li, Renjie, Songyu Yu, and Xiaokang Yang. "Efficient Spatio-temporal Segmentation for Extracting Moving Objects in Video Sequences." IEEE Transactions on Consumer Electronics 53, no. 3 (August 2007): 1161–67. http://dx.doi.org/10.1109/tce.2007.4341600.

40

Barlow, Horace. "Intraneuronal information processing, directional selectivity and memory for spatio-temporal sequences." Network: Computation in Neural Systems 7, no. 2 (May 1996): 251–59. http://dx.doi.org/10.1088/0954-898x/7/2/004.

41

Barlow, Horace. "Intraneuronal information processing, directional selectivity and memory for spatio-temporal sequences." Network: Computation in Neural Systems 7, no. 2 (January 1996): 251–59. http://dx.doi.org/10.1088/0954-898x_7_2_004.

42

Yang, Zhengyuan, Yuncheng Li, Jianchao Yang, and Jiebo Luo. "Action Recognition With Spatio–Temporal Visual Attention on Skeleton Image Sequences." IEEE Transactions on Circuits and Systems for Video Technology 29, no. 8 (August 2019): 2405–15. http://dx.doi.org/10.1109/tcsvt.2018.2864148.

43

Mure, Simon, Thomas Grenier, Dominik S. Meier, Charles R. G. Guttmann, and Hugues Benoit-Cattin. "Unsupervised spatio-temporal filtering of image sequences. A mean-shift specification." Pattern Recognition Letters 68 (December 2015): 48–55. http://dx.doi.org/10.1016/j.patrec.2015.07.021.

44

Souza, Marcos Roberto e., Helena de Almeida Maia, Marcelo Bernardes Vieira, and Helio Pedrini. "Survey on visual rhythms: A spatio-temporal representation for video sequences." Neurocomputing 402 (August 2020): 409–22. http://dx.doi.org/10.1016/j.neucom.2020.04.035.

45

Long, John A., Rick L. Lawrence, Perry R. Miller, Lucy A. Marshall, and Mark C. Greenwood. "Adoption of cropping sequences in northeast Montana: A spatio-temporal analysis." Agriculture, Ecosystems & Environment 197 (December 2014): 77–87. http://dx.doi.org/10.1016/j.agee.2014.07.022.

46

Hamedani, Kian, Zahra Bahmani, and Amin Mohammadian. "Spatio-temporal filtering of thermal video sequences for heart rate estimation." Expert Systems with Applications 54 (July 2016): 88–94. http://dx.doi.org/10.1016/j.eswa.2016.01.022.

47

Li, Ren-jie, Song-yu Yu, and Xiang-wen Wang. "Unsupervised spatio-temporal segmentation for extracting moving objects in video sequences." Journal of Shanghai Jiaotong University (Science) 14, no. 2 (April 2009): 154–61. http://dx.doi.org/10.1007/s12204-009-0154-8.

48

Li, Zheng, Xueyuan Huang, Chun Liu, and Wei Yang. "Spatio-Temporal Unequal Interval Correlation-Aware Self-Attention Network for Next POI Recommendation." ISPRS International Journal of Geo-Information 11, no. 11 (October 29, 2022): 543. http://dx.doi.org/10.3390/ijgi11110543.

Abstract:
As the core of location-based social networks (LBSNs), the main task of next point-of-interest (POI) recommendation is to predict the next possible POI from the context information in users' historical check-in trajectories. It is well known that spatial-temporal contextual information plays an important role in analyzing users' check-in behaviors. Moreover, the information between POIs provides a non-trivial correlation for modeling users' visiting preferences. Unfortunately, the impact of such correlation information and of the spatio-temporal unequal interval information between POIs on a user's selection of the next POI is rarely considered. Therefore, we propose a spatio-temporal unequal interval correlation-aware self-attention network (STUIC-SAN) model for next POI recommendation. Specifically, we first use the linear regression method to obtain the spatio-temporal unequal interval correlation between any two POIs from users' check-in sequences. Subsequently, we design a spatio-temporal unequal interval correlation-aware self-attention mechanism, which is able to comprehensively capture users' personalized spatio-temporal unequal interval correlation preferences by incorporating multiple factors, including POI information, spatio-temporal unequal interval correlation information between POIs, and the absolute positional information of the corresponding POIs. On this basis, we perform next POI recommendation. Finally, we conduct a comprehensive performance evaluation using large-scale real-world datasets from two popular location-based social networks, namely, Foursquare and Gowalla. Experimental results on the two datasets indicate that the proposed STUIC-SAN outperformed the state-of-the-art next POI recommendation approaches on two commonly used evaluation metrics.
49

Niu, Yaqing, Sridhar Krishnan, and Qin Zhang. "Spatio-Temporal Just Noticeable Distortion Model Guided Video Watermarking." International Journal of Digital Crime and Forensics 2, no. 4 (October 2010): 16–36. http://dx.doi.org/10.4018/jdcf.2010100102.

Abstract:
Perceptual watermarking should take full advantage of the results from human visual system (HVS) studies. Just noticeable distortion (JND), which refers to the maximum distortion that the HVS does not perceive, gives a way to model the HVS accurately. An effective spatio-temporal JND model guided video watermarking scheme in the DCT domain is proposed in this paper. The watermarking scheme is based on the design of an additional accurate JND visual model which incorporates the spatial contrast sensitivity function (CSF), a temporal modulation factor, retinal velocity, luminance adaptation, and contrast masking. The proposed watermarking scheme, in which the JND model is fully used to determine scene-adaptive upper bounds on watermark insertion, allows a maximum-strength transparent watermark to be embedded. Experimental results confirm the improved performance of the spatio-temporal JND model. The authors' spatio-temporal JND model is capable of yielding higher injected-watermark energy without introducing noticeable distortion to the original video sequences and outperforms the relevant existing visual models. Simulation results show that the proposed spatio-temporal JND model guided video watermarking scheme is more robust than other algorithms based on the relevant existing perceptual models while retaining watermark transparency.
50

Zhong, Sheng-hua, Yan Liu, Feifei Ren, Jinghuan Zhang, and Tongwei Ren. "Video Saliency Detection via Dynamic Consistent Spatio-Temporal Attention Modelling." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 1063–69. http://dx.doi.org/10.1609/aaai.v27i1.8642.

Abstract:
The human visual system actively seeks salient regions and movements in video sequences to reduce the search effort. Modeling a computational visual saliency map provides important information for semantic understanding in many real-world applications. In this paper, we propose a novel video saliency detection model for detecting the attended regions that correspond to both interesting objects and dominant motions in video sequences. In the spatial saliency map, we inherit the classical bottom-up spatial saliency map. In the temporal saliency map, a novel optical flow model is proposed based on the dynamic consistency of motion. The spatial and the temporal saliency maps are constructed and further fused together to create a novel attention model. The proposed attention model is evaluated on three video datasets. Empirical validations demonstrate that the salient regions detected by our dynamic consistent saliency map highlight the interesting objects effectively and efficiently. More importantly, the video attended regions automatically detected by the proposed attention model are consistent with the ground-truth saliency maps of eye movement data.