Journal articles on the topic 'Visual and time-based digital thesis'

To see the other types of publications on this topic, follow the link: Visual and time-based digital thesis.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Visual and time-based digital thesis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Perez, Jesús M., Pablo G. Aledo, and Pablo P. Sanchez. "Real-time voxel-based visual hull reconstruction." Microprocessors and Microsystems 36, no. 5 (July 2012): 439–47. http://dx.doi.org/10.1016/j.micpro.2012.05.003.

2

Song, Yan, and Bo He. "Feature-Based Real-Time Visual SLAM Using Kinect." Advanced Materials Research 989-994 (July 2014): 2651–54. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.2651.

Abstract:
In this paper, a novel feature-based real-time visual simultaneous localization and mapping (SLAM) system is proposed. The system generates colored 3-D reconstruction models and an estimated 3-D trajectory using a Kinect-style camera. Microsoft Kinect, a low-priced 3-D camera, is the only sensor we use in our experiment. Kinect-style sensors provide RGB-D (red-green-blue depth) data, which contain a 2-D image and per-pixel depth information. ORB (Oriented FAST and Rotated BRIEF) is the algorithm used to extract image features, chosen to speed up the whole system. Our system can be used to generate detailed 3-D reconstruction models, and an estimated 3-D trajectory of the sensor is also given. The results of the experiments demonstrate that our system performs robustly and effectively, both in producing detailed 3-D models and in mapping the camera trajectory.
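As a concrete illustration of the feature-extraction step described above (a minimal sketch, not the authors' implementation; the synthetic RGB-D pair stands in for real Kinect frames):

```python
import cv2
import numpy as np

# Synthetic stand-ins for a Kinect RGB-D pair (replace with real captures).
rgb = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
depth = np.full((480, 640), 1500, dtype=np.uint16)   # depth in millimetres

orb = cv2.ORB_create(nfeatures=1000)                 # Oriented FAST + Rotated BRIEF
gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
keypoints, descriptors = orb.detectAndCompute(gray, None)

# Keep only keypoints with a valid depth reading, so that each feature can be
# back-projected to a 3-D point for mapping.
valid = [kp for kp in keypoints if depth[int(kp.pt[1]), int(kp.pt[0])] > 0]
print(len(valid), "features with valid depth, of", len(keypoints), "detected")
```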
3

Couffe, C., R. Mizzi, and G. A. Michael. "Salience-based progression of visual attention: Time course." Psychologie Française 61, no. 3 (September 2016): 163–75. http://dx.doi.org/10.1016/j.psfr.2015.04.003.

4

Watson, Derrick G., Suzannah Compton, and Hannah Bailey. "Visual marking: The influence of temporary changes on time-based visual selection." Journal of Experimental Psychology: Human Perception and Performance 37, no. 6 (2011): 1729–38. http://dx.doi.org/10.1037/a0023097.

5

Blagrove, E., and D. Watson. "Visual marking: The effect of emotional change on time-based visual selection." Journal of Vision 9, no. 8 (March 21, 2010): 188. http://dx.doi.org/10.1167/9.8.188.

6

Zupan, Zorana, and Derrick G. Watson. "Perceptual grouping constrains inhibition in time-based visual selection." Attention, Perception, & Psychophysics 82, no. 2 (December 24, 2019): 500–517. http://dx.doi.org/10.3758/s13414-019-01892-4.

Abstract:
In time-based visual selection, task-irrelevant, old stimuli can be inhibited in order to allow the selective processing of new stimuli that appear at a later point in time (the preview benefit; Watson & Humphreys, 1997). The current study investigated whether illusory and non-illusory perceptual groups influence the ability to inhibit old and prioritize new stimuli in time-based visual selection. Experiment 1 showed that with Kanizsa-type illusory stimuli, a preview benefit occurred only when displays contained a small number of items. Experiment 2 demonstrated that a set of Kanizsa-type illusory stimuli could be selectively searched amongst a set of non-illusory distractors, with no additional preview benefit obtained by separating the two sets of stimuli in time. Experiment 3 showed that, similarly to Experiment 1, non-illusory perceptual groups also produced a preview benefit only for a small number of distractors. Experiment 4 demonstrated that local changes to perceptually grouped old items eliminated the preview benefit. The results indicate that the preview benefit is reduced in capacity when applied to complex stimuli that require perceptual grouping, regardless of whether the grouped elements elicit illusory contours. Further, inhibition is applied at the level of grouped objects, rather than to the individual elements making up those groups. The findings are discussed in terms of capacity limits in the inhibition of old distractor stimuli when they consist of perceptual groups, the attentional requirements of forming perceptual groups, and the mechanisms and efficiency of time-based visual selection.
7

Xu, Yulong, Jiabao Wang, Hang Li, Yang Li, Zhuang Miao, and Yafei Zhang. "Patch-based Scale Calculation for Real-time Visual Tracking." IEEE Signal Processing Letters 23, no. 1 (January 2016): 40–44. http://dx.doi.org/10.1109/lsp.2015.2479360.

8

Mitsuda, Takashi, Noriaki Maru, Kazunobu Fujikawa, and Fumio Miyazaki. "Binocular visual servoing based on linear time-invariant mapping." Advanced Robotics 11, no. 5 (January 1996): 429–43. http://dx.doi.org/10.1163/156855397x00146.

9

Li, Rui, and Jirong Lian. "Real-time Visual Tracking Based on Convolutional Neural Networks." Journal of Physics: Conference Series 1601 (July 2020): 032053. http://dx.doi.org/10.1088/1742-6596/1601/3/032053.

10

Li, Zhihua, Ziyuan Li, Ning Yu, and Steven Wen. "Locality-Based Visual Outlier Detection Algorithm for Time Series." Security and Communication Networks 2017 (2017): 1–10. http://dx.doi.org/10.1155/2017/1869787.

Abstract:
Physiological theories indicate that the deepest impression time-series data make on the human visual system comes from their extreme values. Based on this principle, and drawing on strategies for extreme-point-based hierarchy segmentation, a hierarchy-segmentation-based data-extraction method for time series, and ideas from locality-based outlier detection, a novel outlier detection model and method for time series are proposed. The presented algorithm intuitively assigns an outlier factor to each subsequence in a time series, so that visual outlier detection becomes relatively direct. The experimental results demonstrate the average advantage of the developed method over the compared methods, as well as its efficient data-reduction capability for time series, indicating the promising performance of the proposed method and its practical application value.
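A loose sketch of the general idea (this is not the paper's algorithm, and the per-segment features are invented for illustration): segment the series at its local extrema, then score each subsequence with a locality-based outlier factor, here scikit-learn's LOF:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def extreme_point_segments(x):
    """Split a series at its local extrema; return (start, end) index pairs."""
    dx = np.sign(np.diff(x))
    turns = np.where(dx[1:] != dx[:-1])[0] + 1        # direction changes = extrema
    bounds = np.concatenate(([0], turns, [len(x) - 1]))
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

x = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.randn(500)
x[250:260] += 3.0                                     # injected anomaly
segs = extreme_point_segments(x)
feats = np.array([[b - a, x[a:b + 1].max() - x[a:b + 1].min()]  # length, amplitude
                  for a, b in segs])

lof = LocalOutlierFactor(n_neighbors=5)
lof.fit_predict(feats)
scores = -lof.negative_outlier_factor_                # higher = more anomalous
print("most outlying segment:", segs[int(np.argmax(scores))])
```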
11

Tünnermann, Jan, and Bärbel Mertsching. "Region-Based Artificial Visual Attention in Space and Time." Cognitive Computation 6, no. 1 (June 27, 2013): 125–43. http://dx.doi.org/10.1007/s12559-013-9220-5.

12

Fei, Mengjuan, Zhaojie Ju, Xiantong Zhen, and Jing Li. "Real-time visual tracking based on improved perceptual hashing." Multimedia Tools and Applications 76, no. 3 (July 16, 2016): 4617–34. http://dx.doi.org/10.1007/s11042-016-3723-5.

13

Hou, Yi, Hong Zhang, and Shilin Zhou. "Tree-based indexing for real-time ConvNet landmark-based visual place recognition." International Journal of Advanced Robotic Systems 14, no. 1 (January 1, 2017): 172988141668695. http://dx.doi.org/10.1177/1729881416686951.

Abstract:
Recent impressive studies on using ConvNet landmarks for visual place recognition take an approach that involves three steps: (a) detection of landmarks, (b) description of the landmarks by ConvNet features using a convolutional neural network, and (c) matching of the landmarks in the current view with those in the database views. Such an approach has been shown to achieve state-of-the-art accuracy even under significant viewpoint and environmental changes. However, the computational burden of step (c) largely prevents this approach from being applied in practice, due to the complexity of linear search in the high-dimensional space of the ConvNet features. In this article, we propose two simple and efficient search methods to tackle this issue. Both methods are built upon tree-based indexing. Given a set of ConvNet features of a query image, the first method directly searches for the features’ approximate nearest neighbors in a tree structure constructed from the ConvNet features of the database images. The database images are voted on by the features in the query image, according to a lookup table which maps each ConvNet feature to its corresponding database image. The database image with the highest vote is considered the solution. Our second method uses a coarse-to-fine procedure: the coarse step uses the first method to coarsely find the top-N database images, and the fine step performs a linear search in the Hamming space of the hash codes of the ConvNet features to determine the best match. Experimental results demonstrate that our methods achieve real-time search performance on five data sets with different sizes and various conditions. Most notably, by achieving an average search time of 0.035 seconds/query, our second method improves matching efficiency by three orders of magnitude over a linear search baseline on a database with 20,688 images, with negligible loss in place recognition accuracy.
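The first search method can be pictured with a toy sketch (the dimensions, counts, and choice of a KD-tree are assumptions for illustration; the paper's features come from a ConvNet and its index may differ):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
db_feats = rng.normal(size=(5000, 128))          # stand-in features of database images
feat_to_image = rng.integers(0, 100, size=5000)  # lookup table: feature -> image id

tree = cKDTree(db_feats)                         # tree-based index over database features

def recognize(query_feats, n_images=100):
    """Vote for database images via each query feature's nearest neighbour."""
    votes = np.zeros(n_images, dtype=int)
    _, idx = tree.query(query_feats, k=1)        # nearest database feature per query
    for i in idx:
        votes[feat_to_image[i]] += 1
    return int(np.argmax(votes))                 # highest-voted image is the match

query_feats = rng.normal(size=(50, 128))         # features of the query image
print("best match: database image", recognize(query_feats))
```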
14

Saito, Daisuke, Hironori Washizaki, and Yoshiaki Fukazawa. "Comparison of Text-Based and Visual-Based Programming Input Methods for First-Time Learners." Journal of Information Technology Education: Research 16 (2017): 209–26. http://dx.doi.org/10.28945/3775.

Abstract:
Aim/Purpose: When learning to program, both text-based and visual-based input methods are common. However, it is unclear which method is more appropriate for first-time learners (first learners). Background: The differences in the learning effect between text-based and visual-based input methods for first learners are compared using a questionnaire and problems that assess first learners’ understanding of programming. In addition, we study the benefits and feasibility of both methods. Methodology: In this research, we used the sandbox game Minecraft and the extended function ComputerCraftEdu (CCEdu). CCEdu provides a Lua programming environment for the two (text and visual) methods inside Minecraft. We conducted a lecture course on both methods for first learners in Japan ranging in age from 6 to about 15 years old. The lecture taught the basics and concepts of programming. Furthermore, we administered a questionnaire about attitudes toward programming before and after the lecture. Contribution: This research is more than a comparison between the visual method and the text method: it compares visual and text input methods in the same environment. It clearly shows the difference between the programming learning effects of visual input and text input for first learners, and it identifies the more suitable input method for introductory programming education. Findings: The following results are revealed: (1) the visual input method induces a larger change in attitude toward programming; (2) the number of operations and the input quantity influence both groups; (3) the overall results suggest that visual input is advantageous in a programming implementation environment for first learners. Impact on Society: A visual input method is better suited to first learners, as it improves attitudes toward programming. Future Research: In the future, we plan to collect and analyze additional data and to elucidate the correlation between attitudes toward and understanding of programming.
15

Fujimoto, Hiroshi, and Yoichi Hori. "Visual Servoing Based on Multirate Control and Dead-time Compensation." Journal of the Robotics Society of Japan 22, no. 6 (2004): 780–87. http://dx.doi.org/10.7210/jrsj.22.780.

16

Zhou, Wei, Nenghai Yu, and Liansheng Zhuang. "A Generic Approach to Real-time Part-based Visual Tracking." Journal of Convergence Information Technology 7, no. 4 (March 31, 2012): 222–30. http://dx.doi.org/10.4156/jcit.vol7.issue4.27.

17

Abbasi‐Kesbi, Reza, Hamidreza Memarzadeh‐Tehran, and M. Jamal Deen. "Technique to estimate human reaction time based on visual perception." Healthcare Technology Letters 4, no. 2 (April 2017): 73–77. http://dx.doi.org/10.1049/htl.2016.0106.

18

Köthur, P., C. Witt, M. Sips, N. Marwan, S. Schinkel, and D. Dransch. "Visual Analytics for Correlation-Based Comparison of Time Series Ensembles." Computer Graphics Forum 34, no. 3 (June 2015): 411–20. http://dx.doi.org/10.1111/cgf.12653.

19

Schreck, Tobias, Tatiana Tekušová, Jörn Kohlhammer, and Dieter Fellner. "Trajectory-based visual analysis of large financial time series data." ACM SIGKDD Explorations Newsletter 9, no. 2 (December 2007): 30–37. http://dx.doi.org/10.1145/1345448.1345454.

20

Zupan, Zorana, Derrick G. Watson, and Elisabeth Blagrove. "Inhibition in time-based visual selection: Strategic or by default?" Journal of Experimental Psychology: Human Perception and Performance 41, no. 5 (2015): 1442–61. http://dx.doi.org/10.1037/a0039499.

21

Ragulskis, M., A. Fedaravicius, and J. Ragulskiene. "Experimental implementation of visual cryptography based on time average moiré." EPJ Web of Conferences 6 (2010): 41001. http://dx.doi.org/10.1051/epjconf/20100641001.

22

Zupan, Z., E. Blagrove, and D. Watson. "Developing Time-Based Visual Selection: The Preview Task in Children." Journal of Vision 14, no. 10 (August 22, 2014): 230. http://dx.doi.org/10.1167/14.10.230.

23

Shijia, Huang, and Wang Luping. "STL_Siam: Real-time Visual Tracking based on reinforcement guided network." Journal of Physics: Conference Series 1684 (November 2020): 012060. http://dx.doi.org/10.1088/1742-6596/1684/1/012060.

24

Gardner, Jill C., Robert M. Douglas, and Max S. Cynader. "A time-based stereoscopic depth mechanism in the visual cortex." Brain Research 328, no. 1 (February 1985): 154–57. http://dx.doi.org/10.1016/0006-8993(85)91335-6.

25

Eimer, Martin. "The time course of feature-based and object-based control of visual attention." Journal of Vision 15, no. 12 (September 1, 2015): 1394. http://dx.doi.org/10.1167/15.12.1394.

26

Hu, Bo, and He Huang. "Visual Odometry Implementation and Accuracy Evaluation Based on Real-time Appearance-based Mapping." Sensors and Materials 32, no. 7 (July 10, 2020): 2261. http://dx.doi.org/10.18494/sam.2020.2870.

27

Lin, Yen-Hui, Chih-Yong Chen, Shih-Yi Lu, and Yu-Chao Lin. "Visual fatigue during VDT work: Effects of time-based and environment-based conditions." Displays 29, no. 5 (December 2008): 487–92. http://dx.doi.org/10.1016/j.displa.2008.04.003.

28

Matas Martín, Yolanda, Carlos Manuel Santos Plaza, Félix Hernández del Olmo, and Elena Gaudioso Vázquez. "TOWARDS WEB-BASED VISUAL TRAINING." International Journal of Developmental and Educational Psychology. Revista INFAD de Psicología. 1, no. 2 (July 15, 2016): 55. http://dx.doi.org/10.17060/ijodaep.2015.n2.v1.323.

Abstract:
To an extent, vision is a function that can be learned. Broadly speaking, this learning, termed visual perceptual development, takes place spontaneously. Nevertheless, this is not the case for a significant number of children who, due to visual or perceptual impairment, have difficulties either receiving or processing visual stimuli from their environment. For these children, visual stimulation programs should be applied so that their visual functions can develop. The advances of the last two decades in information and communication technologies (ICT) are not reflected in the existing software tools in this field. As a contribution to solving this problem, we have designed and developed an interactive educational system on a web platform whose main aim is to provide professionals with the mechanisms required to perform basic visual stimulation tasks, while taking advantage of the many opportunities offered by the Internet. In this paper, we analyze the limitations of existing tools and present EVIN (Visual Stimulation on the Internet). The main objective of the EVIN project is the development of a web platform that exploits the potential of ICT along with the experience gained by low-vision professionals.
29

Ni, Zhenjiang, Sio-Hoi Ieng, Christoph Posch, Stéphane Régnier, and Ryad Benosman. "Visual Tracking Using Neuromorphic Asynchronous Event-Based Cameras." Neural Computation 27, no. 4 (April 2015): 925–53. http://dx.doi.org/10.1162/neco_a_00720.

Abstract:
This letter presents a novel, computationally efficient, and robust pattern-tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that this sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometries, similarities, and affine distortions, and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time, which is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusion that classical frame-based techniques handle poorly.
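The letter's method estimates a full geometric transformation per event; as a heavily simplified, hypothetical stand-in that still shows per-event (frame-free) processing, the sketch below updates a position estimate each time an event arrives, using a spatial gate and an exponential moving average:

```python
import numpy as np

def track(events, start, gate=10.0, alpha=0.05):
    """events: iterable of (x, y, t); returns the trajectory of estimates."""
    pos = np.asarray(start, dtype=float)
    path = [pos.copy()]
    for x, y, _t in events:                  # process each event as it arrives
        e = np.array([x, y], dtype=float)
        if np.linalg.norm(e - pos) < gate:   # use only events near the estimate
            pos = (1 - alpha) * pos + alpha * e
            path.append(pos.copy())
    return np.array(path)

# Synthetic events scattered around an object centre drifting to the right:
rng = np.random.default_rng(1)
events = [(50 + 0.01 * t + rng.normal(0, 2), 40 + rng.normal(0, 2), t)
          for t in range(5000)]
print(track(events, start=(50, 40))[-1])     # estimate ends near (100, 40)
```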
30

Kim, Namgon Lucas, and JongWon Kim. "Deploying GPU-based Real-time DXT compression for Networked Visual Sharing." Proceedings of the Asia-Pacific Advanced Network 32 (December 13, 2011): 139. http://dx.doi.org/10.7125/apan.32.17.

31

Bulut, Gülden Günay, Mehmet Cem Çatalbaş, and Hasan Güler. "Chaotic Systems Based Real-Time Implementation of Visual Cryptography Using LabVIEW." Traitement du Signal 37, no. 4 (October 10, 2020): 639–45. http://dx.doi.org/10.18280/ts.370413.

Abstract:
In recent years, chaotic systems have begun to take a substantial place in the literature due to the increasing importance of secure communication. Chaotic synchronization, which has emerged as a necessity for secure communication, can now be performed with many different methods. This study proposes master-slave synchronization via an active controller, together with real-time simulation, for five different chaotic systems: Lorenz, Sprott, Rucklidge, Moore-Spiegel, and Rössler. Master-slave synchronization was chosen because it is realized between chaotic systems of the same type with different initial parameters, and because the systems are expected to behave identically once synchronized. The active control method was used to amplify the difference signal between the master and slave systems, which start from different initial parameters, and to feed the synchronization information back to the slave system. The real-time simulation and synchronization of the master and slave systems were performed successfully in the LabVIEW environment. Furthermore, for the real-time implementation, the analogue outputs of an NI-DAQ card were used and the results were observed on an oscilloscope; a secure-communication application using a sinusoidal signal and an image-encryption application were also achieved successfully.
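For one of the five systems, a minimal numerical sketch of master-slave synchronization of two Lorenz systems via active control (the gains, step size, and initial conditions are illustrative; the paper's LabVIEW and NI-DAQ setup is not reproduced):

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(s):
    x, y, z = s
    return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])

dt = 1e-3
m = np.array([1.0, 1.0, 1.0])        # master, one set of initial conditions
s = np.array([-5.0, 8.0, 20.0])      # slave, different initial conditions

for _ in range(20000):
    x1, y1, z1 = m
    x2, y2, z2 = s
    e = s - m
    # Active controller: cancel the cross terms so the error obeys
    # de/dt = diag(-SIGMA, -1, -BETA) e and decays to zero.
    u = np.array([-SIGMA * e[1],
                  -RHO * e[0] + (x2 * z2 - x1 * z1),
                  -(x2 * y2 - x1 * y1)])
    m = m + dt * lorenz(m)               # Euler step, master
    s = s + dt * (lorenz(s) + u)         # Euler step, controlled slave

print("synchronization error:", np.abs(s - m))   # close to zero
```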
32

ZHONG, Jiandong, and Jianbo SU. "A Real-time Moving Object Tracking System Based on Visual Prediction." ROBOT 32, no. 4 (August 12, 2010): 516–21. http://dx.doi.org/10.3724/sp.j.1218.2010.00516.

33

Xiao, Junhao, Dan Xiong, Qinghua Yu, Kaihong Huang, Huimin Lu, and Zhiwen Zeng. "A Real-Time Sliding-Window-Based Visual-Inertial Odometry for MAVs." IEEE Transactions on Industrial Informatics 16, no. 6 (June 2020): 4049–58. http://dx.doi.org/10.1109/tii.2019.2959380.

34

Moya, L., S. Shomstein, A. Bagic, and M. Behrmann. "The time course of neural activity in object-based visual attention." Journal of Vision 8, no. 6 (March 27, 2010): 549. http://dx.doi.org/10.1167/8.6.549.

35

Chen, Jiannan, Changchun Hua, and Xinping Guan. "Image based fixed time visual servoing control for the quadrotor UAV." IET Control Theory & Applications 13, no. 18 (December 17, 2019): 3117–23. http://dx.doi.org/10.1049/iet-cta.2019.0032.

36

Galbraith, J. M., G. T. Kenyon, and R. W. Ziolkowski. "Time-to-collision estimation from motion based on primate visual processing." IEEE Transactions on Pattern Analysis and Machine Intelligence 27, no. 8 (August 2005): 1279–91. http://dx.doi.org/10.1109/tpami.2005.168.

37

Marchand, Éric, Patrick Bouthemy, and François Chaumette. "A 2D–3D model-based approach to real-time visual tracking." Image and Vision Computing 19, no. 13 (November 2001): 941–55. http://dx.doi.org/10.1016/s0262-8856(01)00054-3.

38

蒋, 小丽. "Time Management: Visual Analysis of Core Journals Based on CNKI Database." Advances in Social Sciences 09, no. 12 (2020): 1986–93. http://dx.doi.org/10.12677/ass.2020.912280.

39

Liu, Yanqing, Dongchen Zhu, Jingquan Peng, Xianshun Wang, Lei Wang, Lili Chen, Jiamao Li, and Xiaolin Zhang. "Real-Time Robust Stereo Visual SLAM System Based on Bionic Eyes." IEEE Transactions on Medical Robotics and Bionics 2, no. 3 (August 2020): 391–98. http://dx.doi.org/10.1109/tmrb.2020.3011981.

40

Schreij, Daniel, and Christian N. L. Olivers. "The role of space and time in object-based visual search." Visual Cognition 21, no. 3 (March 2013): 306–29. http://dx.doi.org/10.1080/13506285.2013.789092.

41

Ameri, Ali, Ernest N. Kamavuako, Erik J. Scheme, Kevin B. Englehart, and Philip A. Parker. "Real-time, simultaneous myoelectric control using visual target-based training paradigm." Biomedical Signal Processing and Control 13 (September 2014): 8–14. http://dx.doi.org/10.1016/j.bspc.2014.03.006.

42

Kapela, Rafał, Paweł Śniatała, and Andrzej Rybarczyk. "Real-time visual content description system based on MPEG-7 descriptors." Multimedia Tools and Applications 53, no. 1 (March 2, 2010): 119–50. http://dx.doi.org/10.1007/s11042-010-0493-3.

43

Zhou, Wenzhang, Longyin Wen, Libo Zhang, Dawei Du, Tiejian Luo, and Yanjun Wu. "SiamCAN: Real-Time Visual Tracking Based on Siamese Center-Aware Network." IEEE Transactions on Image Processing 30 (2021): 3597–609. http://dx.doi.org/10.1109/tip.2021.3060905.

44

Huang, Ren-Jie, Chun-Yu Tsao, Yi-Pin Kuo, Yi-Chung Lai, Chi Liu, Zhe-Wei Tu, Jung-Hua Wang, and Chung-Cheng Chang. "Fast Visual Tracking Based on Convolutional Networks." Sensors 18, no. 8 (July 24, 2018): 2405. http://dx.doi.org/10.3390/s18082405.

Abstract:
Recently, an upsurge of deep learning has provided a new direction for the field of computer vision and visual tracking. However, expensive offline training time and the large number of images required by deep learning have greatly hindered progress. This paper aims to further improve the computational performance of CNT, which is reported to deliver 5 fps in visual tracking. We propose a method called Fast-CNT, which differs from CNT in three aspects: firstly, an adaptive k value (rather than a constant 100) is determined for an input video; secondly, the background filters used in CNT are omitted in this work to save computation time without affecting performance; thirdly, SURF feature points are used in conjunction with the particle filter to address the drift problem in CNT. Extensive experimental results on land and undersea video sequences show that Fast-CNT outperforms CNT by 2~10 times in terms of computational efficiency.
45

Jia, Qingyu, Liang Chang, Baohua Qiang, Shihao Zhang, Wu Xie, Xianyi Yang, Yangchang Sun, and Minghao Yang. "Real-Time 3D Reconstruction Method Based on Monocular Vision." Sensors 21, no. 17 (September 2, 2021): 5909. http://dx.doi.org/10.3390/s21175909.

Abstract:
Real-time 3D reconstruction is one of the current popular research directions in computer vision, and it has become a core technology in virtual reality, industrial automation, and mobile-robot path planning. Currently, there are three main problems in the real-time 3D reconstruction field. Firstly, it is expensive: it requires varied sensors, which makes it less convenient. Secondly, reconstruction is slow, and the 3D model cannot be established accurately in real time. Thirdly, the reconstruction error is large, which cannot meet the accuracy requirements of many scenes. For these reasons, we propose a real-time 3D reconstruction method based on monocular vision. Firstly, a single RGB-D camera is used to collect visual information in real time, and the YOLACT++ network is used to identify and segment the visual information, extracting the important parts. Secondly, we combine the three stages of depth recovery, depth optimization, and depth fusion to propose a deep-learning-based three-dimensional position estimation method for joint coding of visual information. It reduces the depth error introduced by the depth measurement process, and the accurate 3D point values of the segmented image can be obtained directly. Finally, we propose a method based on limited outlier adjustment of the cluster-centre distance to optimize the three-dimensional point values obtained above. It improves real-time reconstruction accuracy and yields a three-dimensional model of the object in real time. Experimental results show that this method needs only a single RGB-D camera, which is not only low-cost and convenient to use, but also significantly improves the speed and accuracy of 3D reconstruction.
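The geometric core of such a pipeline, lifting segmented RGB-D pixels to 3-D points with the pinhole model, can be sketched as follows (the intrinsics are made-up values; the paper's YOLACT++ segmentation and cluster-based outlier adjustment are not included):

```python
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5      # assumed camera intrinsics

def backproject(depth, mask):
    """depth: HxW in metres; mask: HxW bool from a segmentation network."""
    v, u = np.nonzero(mask)                      # pixel coordinates in the mask
    z = depth[v, u]
    x = (u - CX) * z / FX                        # pinhole back-projection
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=1)           # Nx3 points in the camera frame

depth = np.full((480, 640), 2.0)                 # toy scene: flat wall 2 m away
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 300:340] = True                    # pretend-segmented object
points = backproject(depth, mask)
print(points.shape, points.mean(axis=0))
```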
46

Lv, Zhihan, Liang Qiao, Amit Kumar Singh, and Qingjun Wang. "Fine-Grained Visual Computing Based on Deep Learning." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (April 20, 2021): 1–19. http://dx.doi.org/10.1145/3418215.

Abstract:
With increasing amounts of information, the image information received by people also increases exponentially. To perform fine-grained categorization, recognition, and visual computing on images, this study combines the Visual Geometry Group Network 16 (VGG16) convolutional neural network with a visual attention mechanism to build a multi-level fine-grained image-feature categorization model. The TensorFlow platform is then used to simulate the fine-grained image classification model based on the visual attention mechanism. The results show that, in terms of accuracy and required training time, the fine-grained image categorization effect of the multi-level feature categorization model constructed in this study is optimal, with an accuracy rate of 85.3% and a minimum training time of 108 s. In the similarity analysis, the chi-square distance between Log Gabor features and the degree of image distortion show a strong positive correlation, and the validity of this measure is verified. The study therefore finds that the constructed fine-grained image categorization model has higher accuracy in image recognition and categorization, shorter training time, and significantly better performance on similar-feature effects, providing an experimental reference for the visual computing of fine-grained images in the future.
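A minimal sketch of the general idea in PyTorch (the architecture details are assumptions, not the paper's exact model): VGG16 convolutional features are reweighted by a learned spatial attention map before classification:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class AttentionVGG(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.backbone = vgg16(weights=None).features  # 512 x 7 x 7 for 224 x 224 input
        self.attn = nn.Conv2d(512, 1, kernel_size=1)  # per-location attention score
        self.fc = nn.Linear(512, n_classes)

    def forward(self, x):
        f = self.backbone(x)                          # N x 512 x H x W feature maps
        n, c, h, w = f.shape
        a = torch.softmax(self.attn(f).view(n, -1), dim=1).view(n, 1, h, w)
        pooled = (f * a).sum(dim=(2, 3))              # attention-weighted pooling
        return self.fc(pooled)

logits = AttentionVGG(n_classes=200)(torch.randn(2, 3, 224, 224))
print(logits.shape)                                   # torch.Size([2, 200])
```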
47

Yu, Zhiwen, and Hau-San Wong. "A Rule Based Technique for Extraction of Visual Attention Regions Based on Real-Time Clustering." IEEE Transactions on Multimedia 9, no. 4 (June 2007): 766–84. http://dx.doi.org/10.1109/tmm.2007.893351.

48

Gu, Wang Huan, Yu Zhu, Xu Dong Chen, Lin Fei He, and Bing Bing Zheng. "Hierarchical CNN-based real-time fatigue detection system by visual-based technologies using MSP model." IET Image Processing 12, no. 12 (December 1, 2018): 2319–29. http://dx.doi.org/10.1049/iet-ipr.2018.5245.

49

Liu, Peixin, Xianfeng Yuan, Chengjin Zhang, Yong Song, Chuanzheng Liu, and Ziyan Li. "Real-Time Photometric Calibrated Monocular Direct Visual SLAM." Sensors 19, no. 16 (August 19, 2019): 3604. http://dx.doi.org/10.3390/s19163604.

Abstract:
To solve the illumination-sensitivity problems of mobile ground equipment, an enhanced visual SLAM (simultaneous localization and mapping) algorithm based on the sparse direct method is proposed in this paper. Firstly, the vignette and response functions of the input sequences were optimized based on the photometric formation model of the camera. Secondly, the Shi–Tomasi corners of the input sequence were tracked, and optimization equations were established using the pixel tracking of sparse direct visual odometry (VO). Thirdly, the Levenberg–Marquardt (L–M) method was applied to solve the joint optimization equation, and the photometric calibration parameters in the VO were updated to realize real-time dynamic compensation of the exposure of the input sequences, which reduced the effects of light variations on the accuracy and robustness of SLAM. Finally, a Shi–Tomasi corner-filtering strategy was designed to reduce the computational complexity of the proposed algorithm, and loop-closure detection was realized based on oriented FAST and rotated BRIEF (ORB) features. The proposed algorithm was tested on TUM, KITTI, EuRoC, and a real environment, and the experimental results show that its positioning and mapping performance is promising.
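The Shi–Tomasi corner-tracking front end can be sketched with OpenCV (the parameters are illustrative, not the paper's; the synthetic frames stand in for a real sequence):

```python
import cv2
import numpy as np

# Synthetic pair: a textured frame and a copy shifted 3 px to the right.
prev = (np.random.rand(240, 320) * 255).astype(np.uint8)
curr = np.roll(prev, 3, axis=1)

corners = cv2.goodFeaturesToTrack(prev, maxCorners=300,     # Shi-Tomasi corners
                                  qualityLevel=0.01, minDistance=8)
tracked, status, _err = cv2.calcOpticalFlowPyrLK(prev, curr, corners, None)

flow = (tracked - corners)[status.ravel() == 1]             # successfully tracked
print("median flow (x, y):", np.median(flow.reshape(-1, 2), axis=0))
```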
50

Souza, Alessandra S., and Klaus Oberauer. "Time-based forgetting in visual working memory reflects temporal distinctiveness, not decay." Psychonomic Bulletin & Review 22, no. 1 (May 14, 2014): 156–62. http://dx.doi.org/10.3758/s13423-014-0652-z.
