Journal articles on the topic "Video synchronization"

To see other types of publications on this topic, follow the link: Video synchronization.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 50 journal articles for your research on the topic "Video synchronization".

Next to each work in the reference list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic citation of the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its abstract online, when these are available in the metadata.

Browse journal articles from many disciplines and organize your bibliography correctly.

1

EL-Sallam, Amar A., and Ajmal S. Mian. "Correlation based speech-video synchronization." Pattern Recognition Letters 32, no. 6 (April 2011): 780–86. http://dx.doi.org/10.1016/j.patrec.2011.01.001.

2

Lin, E. T., and E. J. Delp. "Temporal Synchronization in Video Watermarking." IEEE Transactions on Signal Processing 52, no. 10 (October 2004): 3007–22. http://dx.doi.org/10.1109/tsp.2004.833866.

3

Fu, Jia Bing, and He Wei Yu. "Audio-Video Synchronization Method Based on Playback Time." Applied Mechanics and Materials 300-301 (February 2013): 1677–80. http://dx.doi.org/10.4028/www.scientific.net/amm.300-301.1677.

Abstract:
This paper proposes an audio-video synchronization method based on playback time. During ordinary playback the audio rate is constant, so the playback times of the audio and video can be aligned by locating the key frame. Experimental results show that the method achieves synchronization between audio and video, is simple to implement, and reduces the system overhead spent on synchronization.
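The abstract's core idea — letting the constant-rate audio playback clock act as the master timeline to which video frames are aligned — can be sketched roughly as follows (an illustrative sketch, not the paper's implementation; all names are hypothetical):

```python
def frame_for_audio_clock(samples_played, sample_rate, video_fps):
    """Map the audio playback clock (the master timeline) to the
    video frame index that should currently be displayed."""
    audio_time_s = samples_played / sample_rate  # audio rate is constant
    return int(audio_time_s * video_fps)
```

A player would then drop or repeat frames until the decoder's frame index matches this value.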
4

Li, Xiao Ni, He Xin Chen, and Da Zhong Wang. "Research on Audio-Video Synchronization Coding Based on Mode Selection in H.264." Applied Mechanics and Materials 182-183 (June 2012): 701–5. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.701.

Abstract:
An embedded audio-video synchronization compression coding approach is presented. The proposed method exploits the different mode types used by the H.264 encoder during the inter-prediction stage: each mode carries corresponding audio information, so the audio is embedded into the video stream through mode selection during inter prediction, and synchronization coding is then applied to the mixed video and audio. The method was verified on H.264/AVC using the JM reference model. Experimental results show that it achieves synchronization between audio and video at a small embedding cost, that the audio signal can be extracted without distortion, and that it has almost no effect on video image quality.
5

Liu, Yiguang, Menglong Yang, and Zhisheng You. "Video synchronization based on events alignment." Pattern Recognition Letters 33, no. 10 (July 2012): 1338–48. http://dx.doi.org/10.1016/j.patrec.2012.02.009.

6

Li, Mu, and Vishal Monga. "Twofold Video Hashing With Automatic Synchronization." IEEE Transactions on Information Forensics and Security 10, no. 8 (August 2015): 1727–38. http://dx.doi.org/10.1109/tifs.2015.2425362.

7

Zhou, Zhongyi, Anran Xu, and Koji Yatani. "SyncUp." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 3 (September 9, 2021): 1–25. http://dx.doi.org/10.1145/3478120.

Abstract:
The beauty of synchronized dancing lies in the synchronization of body movements among multiple dancers. While dancers use camera recordings for their practice, standard video interfaces do not efficiently support identifying segments where they are not well synchronized, failing to close the tight loop of an iterative practice process (i.e., capturing a practice, reviewing the video, and practicing again). We present SyncUp, a system that provides multiple interactive visualizations to support the practice of synchronized dancing and liberates users from manual inspection of recorded practice videos. By analyzing videos uploaded by users, SyncUp quantifies two aspects of synchronization in dancing: pose similarity among multiple dancers and temporal alignment of their movements. The system then highlights which body parts and which portions of the dance routine require further practice to achieve better synchronization. Our system evaluations show that the pose similarity estimation and temporal alignment predictions correlated well with human ratings. Participants in our qualitative user evaluation noted the benefits and potential uses of SyncUp, confirming that it would enable quick iterative practice.
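One of the two quantities SyncUp computes, pose similarity among dancers, can be illustrated with a cosine similarity over flattened keypoint coordinates (a toy sketch assuming already-extracted 2D keypoints; the paper's actual metric may differ):

```python
import math

def pose_similarity(pose_a, pose_b):
    """Cosine similarity between two flattened (x1, y1, x2, y2, ...)
    keypoint vectors; 1.0 means identical poses up to overall scale."""
    dot = sum(a * b for a, b in zip(pose_a, pose_b))
    norm_a = math.sqrt(sum(a * a for a in pose_a))
    norm_b = math.sqrt(sum(b * b for b in pose_b))
    return dot / (norm_a * norm_b)
```

In practice the keypoints would come from a pose estimator and be normalized per dancer before comparison.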
8

Yang, Shu Zhen, Guang Lin Chu, and Ming Wang. "A Study on Parallel Processing Video Splicing System with Multi-Processor." Applied Mechanics and Materials 198-199 (September 2012): 304–9. http://dx.doi.org/10.4028/www.scientific.net/amm.198-199.304.

Abstract:
This paper introduces a parallel-processing video splicing system with multiple processors. The main processor receives encoded video data from the video source, decodes it, and outputs the decoded data to the coprocessors simultaneously. The coprocessors capture the video data needed for splicing and display it on the target monitor. With this design, a sophisticated time-synchronization algorithm is no longer needed. The proposed approach also lowers system resource consumption and improves the accuracy of video synchronization across the coprocessors.
9

Kwon, Ohsung. "Class Analysis Method Using Video Synchronization Algorithm." Journal of The Korean Association of Information Education 19, no. 4 (December 30, 2015): 441–48. http://dx.doi.org/10.14352/jkaie.2015.19.4.441.

10

Chen, T., H. P. Graf, and K. Wang. "Lip synchronization using speech-assisted video processing." IEEE Signal Processing Letters 2, no. 4 (April 1995): 57–59. http://dx.doi.org/10.1109/97.376913.

11

Sun, Shih-Wei, and Pao-Chi Chang. "Video watermarking synchronization based on profile statistics." IEEE Aerospace and Electronic Systems Magazine 19, no. 5 (May 2004): 21–25. http://dx.doi.org/10.1109/maes.2004.1301222.

12

Capuni, Ilir, Nertil Zhuri, and Rejvi Dardha. "TimeStream: Exploiting video streams for clock synchronization." Ad Hoc Networks 91 (August 2019): 101878. http://dx.doi.org/10.1016/j.adhoc.2019.101878.

13

Zhang, Qiang, Lin Yao, Yajun Li, and Jungong Han. "Video Synchronization Based on Projective-Invariant Descriptor." Neural Processing Letters 49, no. 3 (July 25, 2018): 1093–110. http://dx.doi.org/10.1007/s11063-018-9885-6.

14

Elhajj, Imad H., Nadine Bou Dargham, Ning Xi, and Yunyi Jia. "Real-Time Adaptive Content-Based Synchronization of Multimedia Streams." Advances in Multimedia 2011 (2011): 1–13. http://dx.doi.org/10.1155/2011/914062.

Abstract:
Traditional synchronization schemes for multimedia applications are based on temporal relationships between inter- and intra-streams. These schemes do not provide good synchronization in the presence of random delay. As a solution, this paper proposes an adaptive content-based synchronization scheme that synchronizes multimedia streams by accounting for content in addition to time. The approach rests on the fact that two streams sampled close in time are not always close in content. The scheme's primary contribution is the synchronization of audio and video streams based on content; its secondary contribution is adapting the frame rate based on content decisions. Testing the adaptive content-based and adaptive time-based synchronization algorithms remotely between the American University of Beirut and Michigan State University showed that the proposed method outperforms the traditional one. Objective and subjective assessment of the received video and audio quality demonstrated that the content-based scheme provides better synchronization and overall quality of multimedia streams. Although demonstrated using a video conference application, the method can be applied to any multimedia streams, including nontraditional ones referred to as supermedia, such as control signals, haptic data, and other sensory measurements. It can also synchronize more than two streams simultaneously.
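The contrast the authors draw — frames sampled close in time are not necessarily close in content — suggests pairing streams by content distance within a small temporal window. A minimal sketch under the assumption of one scalar content feature per segment (all names hypothetical, not the paper's algorithm):

```python
def content_based_pairing(video_features, audio_features, max_offset=2):
    """For each video frame, choose the audio segment within +/-max_offset
    of the same index whose content feature is closest, instead of
    blindly pairing segments with identical timestamps."""
    pairs = []
    for i, vf in enumerate(video_features):
        lo = max(0, i - max_offset)
        hi = min(len(audio_features), i + max_offset + 1)
        j = min(range(lo, hi), key=lambda k: abs(audio_features[k] - vf))
        pairs.append((i, j))
    return pairs
```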
15

Solokhina, T. V., Ya Ya Petrichkovich, A. A. Belyaev, I. A. Belyaev, and A. V. Egorov. "Dataflow synchronization mechanism for H.264 hardware video codec." Issues of radio electronics, no. 8 (August 7, 2019): 13–20. http://dx.doi.org/10.21778/2218-5453-2019-8-13-20.

Abstract:
Modern video compression standards require significant computational cost to implement. At high video data rates, hardware compression may therefore be preferable to software. The article proposes a method for synchronizing data streams in a hardware implementation of compression/decompression according to the H.264 standard. The developed video codec is an IP core within the 1892ВМ14Я microcircuit, operating under the control of an external processor core. The architecture and main characteristics of the video codec are presented. To synchronize the computing blocks with the direct-access video memory controller, the video codec contains an event register: a set of data-readiness flags for the blocks involved in processing. Experimental measurements of performance on real video scenes with various transmitted-image formats are presented, confirming the high throughput of the developed video codec.
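The event-register mechanism described in the abstract — a set of data-readiness flags gating the pipeline — can be mimicked in a few lines (flag names and layout are invented for illustration; the real codec's register map is not given in the abstract):

```python
# Hypothetical flag layout for blocks involved in processing.
READY_DMA       = 0b001  # video-memory DMA transfer completed
READY_PREDICT   = 0b010  # prediction block output ready
READY_TRANSFORM = 0b100  # transform/quantization output ready

ALL_READY = READY_DMA | READY_PREDICT | READY_TRANSFORM

def can_start_next_stage(event_register):
    """The next pipeline stage may start only once every
    data-readiness flag in the event register is set."""
    return (event_register & ALL_READY) == ALL_READY
```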
16

Wang, Yuanyuan, Daisuke Kitayama, Yukiko Kawai, Kazutoshi Sumiya, and Yoshiharu Ishikawa. "An Automatic Video Reinforcing System for TV Programs using Semantic Metadata from Closed Captions." International Journal of Multimedia Data Engineering and Management 7, no. 1 (January 2016): 1–21. http://dx.doi.org/10.4018/ijmdem.2016010101.

Abstract:
There are various TV programs, such as travel and educational programs. While watching TV, viewers often search the Web for information related to the program. However, as the program keeps playing, viewers may miss important scenes while searching, spoiling their enjoyment. Another problem is that each scene of a video covers various topics, and viewers have different levels of knowledge. It is therefore important to detect the topics in videos and supplement them with related information automatically. In this paper, the authors propose a novel automatic video reinforcing system with two functions: (1) a media synchronization mechanism, which presents supplementary information synchronized with videos so that viewers can effectively understand the geographic data in them; and (2) a video reconstruction mechanism, which generates new video content matched to viewers' interests and knowledge by adding and removing scenes, so that viewers can enjoy the generated videos without additional searching.
17

Whitehead, Anthony, Robert Laganiere, and Prosenjit Bose. "Formalization of the General Video Temporal Synchronization Problem." ELCVIA Electronic Letters on Computer Vision and Image Analysis 9, no. 1 (April 21, 2010): 1. http://dx.doi.org/10.5565/rev/elcvia.330.

18

Jang, Sung-Bong, and Hong-Seok Na. "Synchronization Quality Enhancement in 3G-324M Video Telephony." IEEE Transactions on Circuits and Systems for Video Technology 21, no. 10 (October 2011): 1512–21. http://dx.doi.org/10.1109/tcsvt.2011.2164832.

19

KIM, C. "Efficient Media Synchronization Method for Video Telephony System." IEICE Transactions on Information and Systems E89-D, no. 6 (June 1, 2006): 1901–5. http://dx.doi.org/10.1093/ietisy/e89-d.6.1901.

20

Nakano, Tamami, Yoshiharu Yamamoto, Keiichi Kitajo, Toshimitsu Takahashi, and Shigeru Kitazawa. "Synchronization of spontaneous eyeblinks while viewing video stories." Proceedings of the Royal Society B: Biological Sciences 276, no. 1673 (July 29, 2009): 3635–44. http://dx.doi.org/10.1098/rspb.2009.0828.

21

Tresadern, Philip A., and Ian D. Reid. "Video synchronization from human motion using rank constraints." Computer Vision and Image Understanding 113, no. 8 (August 2009): 891–906. http://dx.doi.org/10.1016/j.cviu.2009.03.012.

22

Andreatos, A. S., and E. N. Protonotarios. "Receiver synchronization of a packet video communication system." Computer Communications 17, no. 6 (June 1994): 387–95. http://dx.doi.org/10.1016/0140-3664(94)90123-6.

23

Cao, Xiaochun, Lin Wu, Jiangjian Xiao, Hassan Foroosh, Jigui Zhu, and Xiaohong Li. "Video synchronization and its application to object transfer." Image and Vision Computing 28, no. 1 (January 2010): 92–100. http://dx.doi.org/10.1016/j.imavis.2009.04.015.

24

Shakurova, A. R. "Analysis of visual perception features by corneal reflex components examination." Kazan medical journal 95, no. 1 (February 15, 2014): 82–86. http://dx.doi.org/10.17816/kmj1462.

Abstract:
The article surveys data from experimental studies in which the corneal reflex was used to analyze visual perception. Visual perception largely depends on the physiological characteristics of the human visual system, both individual and general. Blinking performs a number of functions, one of which is protection, including protection from unpleasant or undesired information. Blinking is closely related to the processes of concentration and disinterest. Blinking while watching a video is synchronized within a single person and across a group of people watching the same video fragment. Blink synchronization depends on the video plot; background video does not cause synchronization, and the synchronization is not gender-specific. A longer blink duration is associated with a significant increase in the intervals between blinks. Accounting for these features of visual perception makes it possible to coordinate work with video in several ways, first of all by analyzing viewers' reactions through monitoring their blinks while they watch. Such analysis requires detailed, comprehensive decoding using electrophysiological, psychological, and psychophysiological tools. Thus, the analysis of visual perception through the corneal reflex components requires an interdisciplinary approach and should target results usable both for further study of the psychology and principles of human visual perception and for creating video that is perceived most effectively.
25

Roșca, Gabriela, and Constantin Radu Mirescu. "An Affordable Temporal Calibration Method for Common Video Camera." Applied Mechanics and Materials 555 (June 2014): 781–86. http://dx.doi.org/10.4028/www.scientific.net/amm.555.781.

Abstract:
Most advanced data-gathering systems provide temporal synchronization between video capture and sensor data acquisition using dedicated (and expensive) hardware. This paper asks whether accurate timing can be obtained with an ordinary commercial video camera. Using an affordable ATmega328 microcontroller board for real-time control, an inexpensive LED-based circuit, and image-recognition software, an affordable temporal calibration method is proposed that addresses two timing issues of video capture: determining the real capture time of each image and using externally recorded timers to synchronize multiple acquisition systems.
26

Sharma, Atul, Sushil Raut, Kohei Shimasaki, Taku Senoo, and Idaku Ishii. "Visual-Feedback-Based Frame-by-Frame Synchronization for 3000 fps Projector–Camera Visual Light Communication." Electronics 10, no. 14 (July 8, 2021): 1631. http://dx.doi.org/10.3390/electronics10141631.

Abstract:
This paper proposes a novel method for synchronizing a high-frame-rate (HFR) camera with an HFR projector, using a visual-feedback-based synchronization algorithm for streaming video sequences in real time on a visible-light-communication (VLC) system. The frame rates of the camera and projector are equal, and their phases are synchronized. Visual feedback is used to avoid the complexity and stability issues of wire-based triggering in long-distance systems. The HFR projector projects a binary pattern modulated at 3000 fps. The HFR camera also operates at 3000 fps; it captures the pattern and generates a delay signal applied to its next clock cycle so that its phase matches the projector's. To test synchronization performance, we used an HFR projector–camera VLC system in which the proposed algorithm provides maximum bandwidth utilization for high-throughput transmission and efficiently reduces data redundancy. The transmitter encodes the input video sequence into Gray code, projected via high-definition multimedia interface streaming as 590 × 1060 binary images. At the receiver, a monochrome HFR camera simultaneously captures and decodes 12-bit 512 × 512 images in real time and reconstructs a color video sequence at 60 fps. The efficiency of the visual-feedback-based synchronization algorithm is evaluated by streaming offline and live video sequences on single- and dual-projector VLC systems. The results show that the 3000 fps camera was successfully synchronized with a 3000 fps single-projector and a 1500 fps dual-projector system, and confirm that the synchronization algorithm can also be applied to VLC systems, autonomous vehicles, and surveillance applications.
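The visual-feedback loop reduces to measuring the camera–projector phase error each frame and delaying the next camera trigger to cancel it. At 3000 fps the frame period is 1/3000 s ≈ 333 µs; a schematic version of that correction (an assumption-laden sketch, not the paper's code):

```python
FRAME_PERIOD_US = 333  # ~1/3000 s, the common camera/projector period

def trigger_delay(measured_phase_us, period_us=FRAME_PERIOD_US):
    """Delay (in microseconds) to insert before the next camera trigger
    so the camera's capture phase catches up with the projector's phase."""
    return (period_us - measured_phase_us) % period_us
```

Applying this delay once per frame keeps the two devices phase-locked without any trigger wire.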
27

Staelens, Nicolas, Jonas De Meulenaere, Lizzy Bleumers, Glenn Van Wallendael, Jan De Cock, Koen Geeraert, Nick Vercammen, et al. "Assessing the importance of audio/video synchronization for simultaneous translation of video sequences." Multimedia Systems 18, no. 6 (May 3, 2012): 445–57. http://dx.doi.org/10.1007/s00530-012-0262-4.

28

Yang, Ming, Chih-Cheng Hung, and Edward Jung. "Secure Information Delivery through High Bitrate Data Embedding within Digital Video and its Application to Audio/Video Synchronization." International Journal of Information Security and Privacy 6, no. 4 (October 2012): 71–93. http://dx.doi.org/10.4018/jisp.2012100104.

Abstract:
Secure communication has traditionally been ensured with data encryption, which has become easier to break due to advances in computing power. For this reason, information hiding techniques have emerged as an alternative for achieving secure communication. In this research, a novel information hiding methodology is proposed to deliver secure information within transmitted or broadcast digital video. Secure data are embedded within the video frames through vector quantization, and at the receiver end the embedded information can be extracted without the original video content. The major performance goals of the system are visual transparency, high bitrate, and robustness to lossy compression. Based on the proposed methodology, the authors have developed a novel synchronization scheme that ensures audio/video synchronization through speech-in-video techniques. Compared to existing algorithms, the main contributions of the proposed methodology are: (1) it achieves both high bitrate and robustness against lossy compression; (2) it investigates the impact of the embedded information on video compression performance, which had not been addressed in previous research. The proposed algorithm is very useful in practical applications such as secure communication, captioning, speech-in-video, and video-in-video.
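The paper embeds data via vector quantization; the general principle of blind extraction (recovering the payload without the original video) can be shown with a deliberately simplified least-significant-bit toy example, which is not the authors' method:

```python
def embed_bits(pixels, bits):
    """Toy LSB embedding: write one payload bit into each pixel's LSB."""
    stego = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return stego + pixels[len(bits):]

def extract_bits(pixels, n):
    """Blind extraction: recover n bits with no reference to the cover frame."""
    return [p & 1 for p in pixels[:n]]
```

Unlike this toy, the paper's VQ-based scheme is designed to survive lossy compression, which plain LSB embedding does not.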
29

Peña, Raul, Alfonso Ávila, David Muñoz, and Juan Lavariega. "A Data Hiding Technique to Synchronously Embed Physiological Signals in H.264/AVC Encoded Video for Medicine Healthcare." BioMed Research International 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/514087.

Abstract:
The recognition of clinical manifestations in both video images and physiological-signal waveforms is an important aid for improving the safety and effectiveness of medical care. Physicians can rely on video-waveform (VW) observations to recognize difficult-to-spot signs and symptoms. VW observations can also reduce the number of false positive incidents and expand recognition coverage to abnormal health conditions. Synchronization between the video images and the physiological-signal waveforms is fundamental for successful recognition of clinical manifestations. Using conventional equipment to synchronously acquire and display video-waveform information involves complex tasks such as video capture/compression, acquisition/compression of each physiological signal, and video-waveform synchronization based on timestamps. This paper introduces a data hiding technique capable of both enabling embedding channels and synchronously hiding samples of physiological signals in encoded video sequences. The technique offers large data capacity and reduces the complexity of video-waveform acquisition and reproduction. The experimental results revealed successful embedding and full restoration of the signal samples, a small distortion in objective video quality, a small increment in bit rate, and embedding cost savings of −2.6196% for high- and medium-motion video sequences.
30

Kim, Giseok, Jae-Soo Cho, Gwangsoon Lee, and Eung-Don Lee. "Real-time Temporal Synchronization and Compensation in Stereoscopic Video." Journal of Broadcast Engineering 18, no. 5 (September 30, 2013): 680–90. http://dx.doi.org/10.5909/jbe.2013.18.5.680.

31

Park, Youngsoo, Dohoon Kim, and Namho Hur. "A Method of Frame Synchronization for Stereoscopic 3D Video." Journal of Broadcast Engineering 18, no. 6 (November 30, 2013): 850–58. http://dx.doi.org/10.5909/jbe.2013.18.6.850.

32

Waddell, J. Patrick. "Audio/Video Synchronization in Compressed Systems: A Status Report." SMPTE Motion Imaging Journal 119, no. 3 (April 2010): 35–41. http://dx.doi.org/10.5594/j11397.

33

Eagan-Deprez, Kathleen, and Reginald Humphreys. "Fostering mind-body synchronization and trance using fractal video." Technoetic Arts 3, no. 2 (September 1, 2005): 93–104. http://dx.doi.org/10.1386/tear.3.2.93/1.

34

Zhang, Hao, Chuohao Yeo, and Kannan Ramchandran. "VSYNC: Bandwidth-Efficient and Distortion-Tolerant Video File Synchronization." IEEE Transactions on Circuits and Systems for Video Technology 22, no. 1 (January 2012): 67–76. http://dx.doi.org/10.1109/tcsvt.2011.2158336.

35

Rothermel, A., and R. Lares. "Synchronization of analog video signals with improved image stability." IEEE Transactions on Consumer Electronics 49, no. 4 (November 2003): 1292–300. http://dx.doi.org/10.1109/tce.2003.1261232.

36

Garner, Geoffrey, and Hyunsurk Ryu. "Synchronization of audio/video bridging networks using IEEE 802.1AS." IEEE Communications Magazine 49, no. 2 (February 2011): 140–47. http://dx.doi.org/10.1109/mcom.2011.5706322.

37

Kubota, S., M. Morikura, and S. Kato. "High-quality frame-synchronization for satellite video signal transmission." IEEE Transactions on Aerospace and Electronic Systems 31, no. 1 (January 1995): 430–35. http://dx.doi.org/10.1109/7.366324.

38

Partha Sindu, I. Gede, and A. A. Gede Yudhi Paramartha. "The Effect of the Instructional Media Based on Lecture Video and Slide Synchronization System on Statistics Learning Achievement." SHS Web of Conferences 42 (2018): 00073. http://dx.doi.org/10.1051/shsconf/20184200073.

Abstract:
The purpose of this study was to determine the effect of instructional media based on a lecture video and slide synchronization system on the statistics learning achievement of students in the PTI department. The benefit of this research is to help lecturers improve students' learning achievement, leading to better learning outcomes. Students can use instructional media created with the lecture video and slide synchronization system to support more interactive self-learning; synchronized lecture video and slides assist students in the learning process, making learning activities more efficient and conducive. The population of this research was all sixth-semester students majoring in Informatics Engineering Education, and the sample was the students of classes VI B and VI D of the 2016/2017 academic year. The study was a quasi-experiment with a posttest-only nonequivalent control group design. The results show a significant effect of applying instructional media based on the lecture video and slide synchronization system on statistics learning outcomes in the PTI department.
39

Xiahou, Jian Bing, and Zhen Xiong Wang. "The Apply of DirectShow in Scenario Interactive Teaching System." Advanced Materials Research 926-930 (May 2014): 4641–44. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.4641.

Abstract:
This article describes the basic knowledge and principles of DirectShow technology, including the DirectShow system architecture and COM technologies, and shows how to use DirectShow for real-time video acquisition and storage and for audio-video synchronization in a scenario-based interactive teaching system.
40

Schiavio, Andrea, Jan Stupacher, Richard Parncutt, and Renee Timmers. "Learning Music From Each Other: Synchronization, Turn-taking, or Imitation?" Music Perception 37, no. 5 (June 2020): 403–22. http://dx.doi.org/10.1525/mp.2020.37.5.403.

Abstract:
In an experimental study, we investigated how well novices can learn from each other in situations of technology-aided musical skill acquisition, comparing joint and solo learning, and learning through imitation, synchronization, and turn-taking. Fifty-four participants became familiar, either solo or in pairs, with three short musical melodies and then individually performed each from memory. Each melody was learned in a different way: participants from the solo group were asked via an instructional video to: 1) play in synchrony with the video, 2) take turns with the video, or 3) imitate the video. Participants from the duo group engaged in the same learning trials, but with a partner. Novices in both groups performed more accurately in pitch and time when learning in synchrony and turn-taking than in imitation. No differences were found between solo and joint learning. These results suggest that musical learning benefits from a shared, in-the-moment, musical experience, where responsibilities and cognitive resources are distributed between biological (i.e., peers) and hybrid (i.e., participant(s) and computer) assemblies.
41

Liu, Cunping. "Video Frequency Current Media Technology and Its Applications in Colleges and Universities." Electronics Science Technology and Application 1, no. 1 (July 27, 2014): 15. http://dx.doi.org/10.18686/esta.v1i1.4.

Abstract:
Handling video materials with streaming media has unparalleled advantages over traditional audio-video materials, and combining this technology with the campus network has wide application in colleges and universities. Streaming media offers high transfer rates, data synchronization, and high stability, making it the best way to achieve networked audio and video transmission. Since transmission of large amounts of video and audio data is the main campus network application, video streaming technology is widely used in colleges and universities.
42

MacIntyre, Blair, Marco Lohse, Jay David Bolter, and Emmanuel Moreno. "Integrating 2-D Video Actors into 3-D Augmented-Reality Systems." Presence: Teleoperators and Virtual Environments 11, no. 2 (April 2002): 189–202. http://dx.doi.org/10.1162/1054746021470621.

Abstract:
In this paper, we discuss the integration of 2-D video actors into 3-D augmented-reality (AR) systems. In the context of our research on narrative forms for AR, we have found ourselves needing highly expressive content that is most easily created by human actors. We discuss the feasibility and utility of using video actors in an AR situation and then present our Video Actor Framework (including the VideoActor editor and the Video3D Java package) for easily integrating 2-D videos of actors into Java 3D, an object-oriented 3-D graphics programming environment. The framework is based on the idea of supporting tight spatial and temporal synchronization between the content of the video and the rest of the 3-D world. We present a number of illustrative examples that demonstrate the utility of the toolkit and editor. We close with a discussion and example of our recent work implementing these ideas in Macromedia Director, a popular multimedia production tool.
43

Ahuja, Rakesh, and Sarabjeet Singh Bedi. "Video Watermarking Scheme Based on Candidates I-frames for Copyright Protection." Indonesian Journal of Electrical Engineering and Computer Science 5, no. 2 (February 1, 2017): 391. http://dx.doi.org/10.11591/ijeecs.v5.i2.pp391-400.

Abstract:
<p>This paper focuses on a Moving Picture Experts Group (MPEG) based digital video watermarking scheme for copyright protection services. The present work implements a video watermarking technique in which discrete cosine transform (DCT) intermediate-frequency coefficients from instantaneous decoder refresh (IDR) frames are utilized. A subset of the IDR frames is chosen as candidate frames to reduce the probability of temporal synchronization attacks and thereby achieve better robustness and high visual perceptual quality. A secret-key-based cryptographic technique is used to enhance the security of the embedded watermark. The proposed scheme embeds the watermark directly during the differential pulse code modulation (DPCM) process and extracts it by decoding the entropy details. Three keys are required to extract the watermark: one key is used to stop the extraction process, and the remaining two are used to display the watermark. Robustness is evaluated by testing spatial synchronization attacks, temporal synchronization attacks, and re-encoding attacks. Simulation results show that the watermarking scheme achieves high robustness against video processing attacks that frequently occur in the real world, and good perceptibility is obtained without changing the motion vectors during the DPCM process of the MPEG-2 encoding scheme.</p>
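The abstract does not state the exact embedding rule for the DCT coefficients. As an illustrative sketch only, the snippet below embeds one watermark bit into a single mid-frequency coefficient using quantization index modulation (QIM), a common choice for schemes of this kind; the function names and the step size `delta` are assumptions, not details from the paper.

```python
# Hypothetical sketch (not the paper's exact rule): hide one watermark bit
# in a mid-frequency DCT coefficient via quantization index modulation.
# Bit 0 uses the lattice {k*delta}; bit 1 uses the shifted lattice
# {k*delta + delta/2}. Extraction picks whichever lattice lies nearer.

def embed_bit(coeff: float, bit: int, delta: float = 8.0) -> float:
    """Snap the coefficient onto the lattice that encodes `bit`."""
    q = round((coeff - bit * delta / 2.0) / delta)
    return q * delta + bit * delta / 2.0

def extract_bit(coeff: float, delta: float = 8.0) -> int:
    """Recover the bit by finding the nearer of the two lattices."""
    d0 = abs(coeff - embed_bit(coeff, 0, delta))
    d1 = abs(coeff - embed_bit(coeff, 1, delta))
    return 0 if d0 <= d1 else 1

c = embed_bit(37.3, 1)
print(c, extract_bit(c))
```

Because each lattice point is `delta`/2 away from the other lattice, the embedded bit survives coefficient perturbations smaller than `delta`/4, which is what gives QIM-style embedding its robustness to mild re-encoding noise.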
44

Wang, Kelvin C. P., Xuyang Li, and Robert P. Elliott. "Generic Multimedia Database for Highway Infrastructure Management." Transportation Research Record: Journal of the Transportation Research Board 1615, no. 1 (January 1998): 56–64. http://dx.doi.org/10.3141/1615-08.

Abstract:
Images of highway right-of-way are used widely by highway agencies through their photologging services to obtain visual information for the analysis of traffic accidents, design improvement, and highway pavement management. The video data usually are in analog format, which limits accessibility and searching, cannot automatically display site engineering data sets with the video, and does not allow simultaneous access by multiple users. Recognizing the need to improve existing photologging systems, the state highway agency of Arkansas sponsored a research project to develop a fully digital, computer-based highway information system that extends the capabilities of existing photologging equipment. The software technologies developed for a distributed multimedia-based highway information system (MMHIS) are presented. MMHIS removes several limitations of the existing systems. The advanced technologies used in this system include digital video, data synchronization, high-speed networking, and a video server. The developed system can dynamically link the digital video with the corresponding engineering site data based on a novel data synchronization algorithm. Also presented is a unique technique for constructing a three-dimensional user interface for MMHIS based on the terrain map of Arkansas.
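The abstract does not describe the paper's synchronization algorithm. As a hypothetical sketch of the general idea of linking video to site data, the snippet below maps a video frame to a distance along the roadway by linearly interpolating a sorted (time, distance) survey log; all names and the interpolation choice are assumptions.

```python
# Hypothetical sketch: link a digital video frame to roadway site data by
# interpolating distance over time in a sorted survey log. This is NOT the
# paper's algorithm, only an illustration of frame/site-data linking.
from bisect import bisect_left

def frame_to_distance(frame_idx: int, fps: float, log) -> float:
    """Map a video frame to a distance (m) along the road.

    log: sorted list of (time_s, distance_m) survey records.
    """
    t = frame_idx / fps
    i = bisect_left(log, (t,))          # first record with time >= t
    if i == 0:
        return log[0][1]                # before the first record: clamp
    if i == len(log):
        return log[-1][1]               # after the last record: clamp
    (t0, d0), (t1, d1) = log[i - 1], log[i]
    return d0 + (d1 - d0) * (t - t0) / (t1 - t0)

log = [(0.0, 0.0), (10.0, 150.0), (20.0, 310.0)]
print(frame_to_distance(450, 30.0, log))  # frame at t = 15 s -> 230.0 m
```

Given such a mapping, the viewer can look up the engineering data set for whatever stretch of road the current frame shows, which is the kind of dynamic linking the abstract describes.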
45

O'Connor, Brian J., H. John Yack, and Scott C. White. "Reducing Errors in Kinetic Calculations: Improved Synchronization of Video and Ground Reaction Force Records." Journal of Applied Biomechanics 11, no. 2 (May 1995): 216–23. http://dx.doi.org/10.1123/jab.11.2.216.

Abstract:
A strategy is presented for temporally aligning ground reaction force and kinematic data. Alignment of these data requires marking both the force and video records at a common event. The strategy uses the information content of the video signal, which is A/D converted along with the ground reaction force analog signals, to accomplish this alignment in time. The vertical blanking pulses in the video signal, which define the start of each video field, can be readily identified, provided the correct A/D sampling rate is selected. Knowledge of the position of these vertical blanking pulses relative to the synchronization pulse makes it possible to precisely align the video and analog data in time. Choosing an A/D sampling rate of 598 Hz would enable video and analog data to be synchronized to within 1/1,196 s. Minimizing temporal alignment error results in greater accuracy and reliability in calculations used to determine joint kinetics.
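The 1/1,196 s figure follows from simple arithmetic: sampling the video signal at fs Hz locates a blanking pulse to within one sample period (1/fs), so the worst-case error relative to the nearest sample is half that period. A minimal sketch of this calculation (the function name is mine, not the paper's):

```python
def worst_case_alignment_error(fs_hz: float) -> float:
    """Worst-case video/analog alignment error in seconds at A/D rate fs_hz."""
    # A blanking pulse can fall anywhere within one sample period (1/fs);
    # relative to the nearest sample it is off by at most half a period.
    return 1.0 / (2.0 * fs_hz)

# At the 598 Hz rate chosen in the paper: 1/1196 s, about 0.84 ms
print(worst_case_alignment_error(598.0))
```

This is well under one video field duration (1/59.94 s for NTSC), which is what makes the alignment precise enough for joint kinetics calculations.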
46

PARK, Youngsoo, Taewon KIM, and Namho HUR. "Frame Synchronization for Depth-Based 3D Video Using Edge Coherence." IEICE Transactions on Information and Systems E96.D, no. 9 (2013): 2166–69. http://dx.doi.org/10.1587/transinf.e96.d.2166.

47

Hoffner, Randall. "Audio-Video Synchronization across DTV Transport Interfaces: The Impossible Dream?" SMPTE Journal 109, no. 11 (November 2000): 881–84. http://dx.doi.org/10.5594/j17554.

48

Li, Bing, and Mei-qiang Shi. "Audio-Video synchronization coding approach based on H.264/AVC." IEICE Electronics Express 6, no. 22 (2009): 1556–61. http://dx.doi.org/10.1587/elex.6.1556.

49

Zanella, F., F. Pasqualetti, R. Carli, and F. Bullo. "Simultaneous Boundary Partitioning and Cameras Synchronization for Optimal Video Surveillance*." IFAC Proceedings Volumes 45, no. 26 (September 2012): 1–6. http://dx.doi.org/10.3182/20120914-2-us-4030.00023.

50

KAMEI, Katsuari, Akifumi TOYOTA, and Jun-ichi KUSHIDA. "Upsurge of Viewer's Emotion by Video Sharing Using Pseudo Synchronization." Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 24, no. 5 (2012): 944–53. http://dx.doi.org/10.3156/jsoft.24.944.

