Journal articles on the topic "Video synchronization"


Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles


Check out the 50 best scholarly journal articles on the topic "Video synchronization".

Next to every entry in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when such details are provided in the metadata.

Browse journal articles from many disciplines and compile accurate bibliographies.

1

EL-Sallam, Amar A., and Ajmal S. Mian. "Correlation based speech-video synchronization". Pattern Recognition Letters 32, no. 6 (April 2011): 780–86. http://dx.doi.org/10.1016/j.patrec.2011.01.001.
2

Lin, E. T., and E. J. Delp. "Temporal Synchronization in Video Watermarking". IEEE Transactions on Signal Processing 52, no. 10 (October 2004): 3007–22. http://dx.doi.org/10.1109/tsp.2004.833866.
3

Fu, Jia Bing, and He Wei Yu. "Audio-Video Synchronization Method Based on Playback Time". Applied Mechanics and Materials 300-301 (February 2013): 1677–80. http://dx.doi.org/10.4028/www.scientific.net/amm.300-301.1677.

Abstract:
This paper proposes an audio and video synchronization method based on playback time. In an ordinary playing process the playback rate of audio is constant, so the playback time of audio and video can be located by locating the key frame to obtain synchronization. Experimental results show that this method achieves synchronization between audio and video, is simple to implement, and reduces the system overhead for synchronization.
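The abstract describes the idea only at a high level. As a rough illustrative sketch (not the paper's algorithm), treating the constant-rate audio stream as the master clock and selecting video frames against it might look like this; the function names are hypothetical:

```python
def frame_for_audio_pos(audio_pos_s, fps):
    """Index of the video frame that should be on screen at the given
    audio playback position, treating the constant-rate audio stream
    as the master clock."""
    return round(audio_pos_s * fps)

def av_drift_s(frame_idx, audio_pos_s, fps):
    """Signed audio-video drift in seconds; positive means the video
    frame currently shown is ahead of the audio clock."""
    return frame_idx / fps - audio_pos_s
```

A player loop would call `frame_for_audio_pos` at each refresh and drop or repeat frames whenever `av_drift_s` exceeds a tolerance.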
Style APA, Harvard, Vancouver, ISO itp.
4

Li, Xiao Ni, He Xin Chen, and Da Zhong Wang. "Research on Audio-Video Synchronization Coding Based on Mode Selection in H.264". Applied Mechanics and Materials 182-183 (June 2012): 701–5. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.701.

Abstract:
An embedded audio-video synchronization compression coding approach is presented. The proposed method takes advantage of the different mode types used by the H.264 encoder during the inter-prediction stage: different modes carry corresponding audio information, the audio is embedded into the video stream by choosing modes during inter prediction, and synchronization coding is then applied to the mixed video and audio. The synchronization processing method was verified on H.264/AVC using the JM reference model, and experimental results show that it achieves synchronization between audio and video at a small embedding cost; at the same time, the audio signal can be extracted without distortion, and the method has hardly any effect on the quality of the video image.
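The paper maps audio information onto H.264 mode decisions inside the encoder. As a toy stand-in for that mapping (the parity rule below is invented for illustration, not the paper's actual mode assignment):

```python
def embed_bits_in_modes(encoder_modes, audio_bits):
    """Toy sketch: nudge each macroblock's inter-prediction mode number
    so that its parity equals the audio bit it should carry."""
    embedded = []
    for mode, bit in zip(encoder_modes, audio_bits):
        # keep the encoder's preferred mode when its parity already
        # matches the bit; otherwise flip to the neighboring mode
        embedded.append(mode if mode % 2 == bit else mode ^ 1)
    return embedded

def extract_bits_from_modes(modes):
    """Recover the hidden bits from the decoded mode numbers."""
    return [m % 2 for m in modes]
```

Because the decoder already parses mode numbers, extraction adds no extra signaling, which is the general appeal of mode-selection embedding.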
5

Liu, Yiguang, Menglong Yang, and Zhisheng You. "Video synchronization based on events alignment". Pattern Recognition Letters 33, no. 10 (July 2012): 1338–48. http://dx.doi.org/10.1016/j.patrec.2012.02.009.
6

Mu Li and Vishal Monga. "Twofold Video Hashing With Automatic Synchronization". IEEE Transactions on Information Forensics and Security 10, no. 8 (August 2015): 1727–38. http://dx.doi.org/10.1109/tifs.2015.2425362.
7

Zhou, Zhongyi, Anran Xu, and Koji Yatani. "SyncUp". Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 3 (September 9, 2021): 1–25. http://dx.doi.org/10.1145/3478120.

Abstract:
The beauty of synchronized dancing lies in the synchronization of body movements among multiple dancers. While dancers utilize camera recordings for their practice, standard video interfaces do not efficiently support their activities of identifying segments where they are not well synchronized. This fails to close a tight loop of an iterative practice process (i.e., capturing a practice, reviewing the video, and practicing again). We present SyncUp, a system that provides multiple interactive visualizations to support the practice of synchronized dancing and liberate users from manual inspection of recorded practice videos. By analyzing videos uploaded by users, SyncUp quantifies two aspects of synchronization in dancing: pose similarity among multiple dancers and temporal alignment of their movements. The system then highlights which body parts and which portions of the dance routine require further practice to achieve better synchronization. The results of our system evaluations show that our pose similarity estimation and temporal alignment predictions correlated well with human ratings. Participants in our qualitative user evaluation expressed the benefits and potential uses of SyncUp, confirming that it would enable quick iterative practice.
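Temporal alignment of two dancers' movements is commonly computed with dynamic time warping. The generic sketch below (not SyncUp's actual pose-similarity or alignment model) aligns two 1-D motion signals, e.g. one joint angle over time per dancer:

```python
import math

def dtw_cost(seq_a, seq_b):
    """Dynamic-time-warping cost between two 1-D motion signals.
    A cost of 0 means the movements match up to local timing
    differences; larger costs indicate poorer synchronization."""
    n, m = len(seq_a), len(seq_b)
    # D[i][j] = best cumulative cost aligning seq_a[:i] with seq_b[:j]
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = abs(seq_a[i - 1] - seq_b[j - 1])
            D[i][j] = step + min(D[i - 1][j],      # dancer B lags
                                 D[i][j - 1],      # dancer A lags
                                 D[i - 1][j - 1])  # in step
    return D[n][m]
```

Running this per joint would point at which body parts drift out of sync, which mirrors the kind of per-body-part feedback the abstract describes.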
8

Yang, Shu Zhen, Guang Lin Chu, and Ming Wang. "A Study on Parallel Processing Video Splicing System with Multi-Processor". Applied Mechanics and Materials 198-199 (September 2012): 304–9. http://dx.doi.org/10.4028/www.scientific.net/amm.198-199.304.

Abstract:
This paper introduces a parallel-processing video splicing system with multiple processors. The main processor gets the encoded video data from the video source and, after decoding it, outputs the video data to the coprocessors simultaneously. The coprocessors capture the video data needed for splicing and display it on the target monitor. With this method, a sophisticated time-synchronization algorithm is no longer needed. The proposed approach also lowers system resource consumption and improves the accuracy of video synchronization across the coprocessors.
9

Kwon, Ohsung. "Class Analysis Method Using Video Synchronization Algorithm". Journal of The Korean Association of Information Education 19, no. 4 (December 30, 2015): 441–48. http://dx.doi.org/10.14352/jkaie.2015.19.4.441.
10

Chen, T., H. P. Graf, and K. Wang. "Lip synchronization using speech-assisted video processing". IEEE Signal Processing Letters 2, no. 4 (April 1995): 57–59. http://dx.doi.org/10.1109/97.376913.
11

Shih-Wei Sun and Pao-Chi Chang. "Video watermarking synchronization based on profile statistics". IEEE Aerospace and Electronic Systems Magazine 19, no. 5 (May 2004): 21–25. http://dx.doi.org/10.1109/maes.2004.1301222.
12

Capuni, Ilir, Nertil Zhuri, and Rejvi Dardha. "TimeStream: Exploiting video streams for clock synchronization". Ad Hoc Networks 91 (August 2019): 101878. http://dx.doi.org/10.1016/j.adhoc.2019.101878.
13

Zhang, Qiang, Lin Yao, Yajun Li, and Jungong Han. "Video Synchronization Based on Projective-Invariant Descriptor". Neural Processing Letters 49, no. 3 (July 25, 2018): 1093–110. http://dx.doi.org/10.1007/s11063-018-9885-6.
14

Elhajj, Imad H., Nadine Bou Dargham, Ning Xi, and Yunyi Jia. "Real-Time Adaptive Content-Based Synchronization of Multimedia Streams". Advances in Multimedia 2011 (2011): 1–13. http://dx.doi.org/10.1155/2011/914062.

Abstract:
Traditional synchronization schemes of multimedia applications are based on temporal relationships between inter- and intrastreams. These schemes do not provide good synchronization in the presence of random delay. As a solution, this paper proposes an adaptive content-based synchronization scheme that synchronizes multimedia streams by accounting for content in addition to time. This approach to synchronization is based on the fact that having two streams sampled close in time does not always imply that these streams are close in content. The proposed scheme's primary contribution is the synchronization of audio and video streams based on content; the secondary contribution is adapting the frame rate based on content decisions. Testing adaptive content-based and adaptive time-based synchronization algorithms remotely between the American University of Beirut and Michigan State University showed that the proposed method outperforms the traditional synchronization method. Objective and subjective assessment of the received video and audio quality demonstrated that the content-based scheme provides better synchronization and overall quality of multimedia streams. Although demonstrated using a video conference application, the method can be applied to any multimedia streams, including nontraditional ones referred to as supermedia, such as control signals, haptic, and other sensory measurements. In addition, the method can be applied to synchronize more than two streams simultaneously.
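The core observation is that frames sampled close in time are not always close in content. A minimal sketch of matching on content instead of timestamps (a hypothetical simplification using sum of absolute differences, not the paper's content metric; frames are flat lists of intensities):

```python
def closest_in_content(reference, candidates):
    """Index of the candidate frame whose content is closest to the
    reference frame, measured by sum of absolute pixel differences."""
    def sad(frame_a, frame_b):
        # per-pixel L1 distance between two equally sized frames
        return sum(abs(x - y) for x, y in zip(frame_a, frame_b))
    return min(range(len(candidates)),
               key=lambda i: sad(reference, candidates[i]))
```

A content-based synchronizer would pair each audio segment with the frame chosen this way among the temporally plausible candidates, rather than with the frame whose timestamp is nearest.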
15

Solokhina, T. V., Ya. Ya. Petrichkovich, A. A. Belyaev, I. A. Belyaev, and A. V. Egorov. "Dataflow synchronization mechanism for H.264 hardware video codec". Issues of radio electronics, no. 8 (August 7, 2019): 13–20. http://dx.doi.org/10.21778/2218-5453-2019-8-13-20.

Abstract:
Modern video compression standards require significant computational costs for their implementation. With a high rate of receipt of video data and significant computational costs, it may be preferable to use hardware rather than software compression tools. The article proposes a method for synchronizing data streams during hardware implementation of compression / decompression in accordance with the H.264 standard. The developed video codec is an IP core as part of an 1892ВМ14Я microcircuit operating under the control of an external processor core. The architecture and the main characteristics of the video codec are presented. To synchronize the operation of the computing blocks and the controller of direct access to the video memory, the video codec contains an event register, which is a set of data readiness flags for the blocks involved in processing. The experimental results of measuring performance characteristics on real video scenes with various formats of the transmitted image, which confirmed the high throughput of the developed video codec, are presented.
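The abstract mentions an event register holding data-readiness flags for the processing blocks. A toy software model of such a register (the flag names are illustrative, not the 1892ВМ14Я register layout):

```python
class EventRegister:
    """Toy model of a data-readiness event register: each processing
    block sets its flag when its output is ready, and the video-memory
    DMA controller proceeds only when every required flag is set."""
    MOTION_EST = 1 << 0
    TRANSFORM = 1 << 1
    ENTROPY = 1 << 2

    def __init__(self):
        self.flags = 0

    def set_ready(self, flag):
        # a block signals that its data are ready
        self.flags |= flag

    def all_ready(self, mask):
        # true only when every flag in the mask has been set
        return self.flags & mask == mask
```

Hardware implements the same idea as a bit field polled (or interrupt-driven) by the DMA controller, which is what lets the blocks run without a global handshake protocol.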
16

Wang, Yuanyuan, Daisuke Kitayama, Yukiko Kawai, Kazutoshi Sumiya, and Yoshiharu Ishikawa. "An Automatic Video Reinforcing System for TV Programs using Semantic Metadata from Closed Captions". International Journal of Multimedia Data Engineering and Management 7, no. 1 (January 2016): 1–21. http://dx.doi.org/10.4018/ijmdem.2016010101.

Abstract:
There are various TV programs such as travel and educational programs. While watching TV programs, viewers often search related information about the programs through the Web. Nevertheless, as TV programs keep playing, viewers possibly miss some important scenes when searching the Web. As a result, their enjoyment would be spoiled. Another problem is that there are various topics in each scene of a video, and viewers usually have different levels of knowledge. Thus, it is important to detect topics in videos and supplement videos with related information automatically. In this paper, the authors propose a novel automatic video reinforcing system with two functions: (1) a media synchronization mechanism, which presents supplementary information synchronized with videos, in order to enable viewers to effectively understand the geographic data in videos; (2) a video reconstruction mechanism, which generates new video contents based on viewers' interests and knowledge by adding and removing scenes, in order to enable viewers to enjoy the generated videos without additional search.
17

Whitehead, Anthony, Robert Laganiere, and Prosenjit Bose. "Formalization of the General Video Temporal Synchronization Problem". ELCVIA Electronic Letters on Computer Vision and Image Analysis 9, no. 1 (April 21, 2010): 1. http://dx.doi.org/10.5565/rev/elcvia.330.
18

Jang, Sung-Bong, and Hong-Seok Na. "Synchronization Quality Enhancement in 3G-324M Video Telephony". IEEE Transactions on Circuits and Systems for Video Technology 21, no. 10 (October 2011): 1512–21. http://dx.doi.org/10.1109/tcsvt.2011.2164832.
19

Kim, C. "Efficient Media Synchronization Method for Video Telephony System". IEICE Transactions on Information and Systems E89-D, no. 6 (June 1, 2006): 1901–5. http://dx.doi.org/10.1093/ietisy/e89-d.6.1901.
20

Nakano, Tamami, Yoshiharu Yamamoto, Keiichi Kitajo, Toshimitsu Takahashi, and Shigeru Kitazawa. "Synchronization of spontaneous eyeblinks while viewing video stories". Proceedings of the Royal Society B: Biological Sciences 276, no. 1673 (July 29, 2009): 3635–44. http://dx.doi.org/10.1098/rspb.2009.0828.
21

Tresadern, Philip A., and Ian D. Reid. "Video synchronization from human motion using rank constraints". Computer Vision and Image Understanding 113, no. 8 (August 2009): 891–906. http://dx.doi.org/10.1016/j.cviu.2009.03.012.
22

Andreatos, A. S., and E. N. Protonotarios. "Receiver synchronization of a packet video communication system". Computer Communications 17, no. 6 (June 1994): 387–95. http://dx.doi.org/10.1016/0140-3664(94)90123-6.
23

Cao, Xiaochun, Lin Wu, Jiangjian Xiao, Hassan Foroosh, Jigui Zhu, and Xiaohong Li. "Video synchronization and its application to object transfer". Image and Vision Computing 28, no. 1 (January 2010): 92–100. http://dx.doi.org/10.1016/j.imavis.2009.04.015.
24

Shakurova, A. R. "Analysis of visual perception features by corneal reflex components examination". Kazan medical journal 95, no. 1 (February 15, 2014): 82–86. http://dx.doi.org/10.17816/kmj1462.

Abstract:
The article surveys the data of experimental studies in which the corneal reflex was used in the analysis of the visual perception process. Visual perception largely depends on the physiological characteristics of the human visual system, both individual and general. Blinking performs a number of functions, one of which is protection, including protection from unpleasant or undesired information. Blinking is closely related to the processes of concentration and disinterest. Blinking while watching a video is synchronized within a single person and within a group of people watching the same video fragment. Blinking synchronization depends on the video plot; background video does not cause synchronization. Blinking synchronization is not gender-specific. A longer duration of blinking is associated with a significant increase of the intervals between blinks. Accounting for these features of visual perception makes it possible to coordinate work with video in several ways. First of all, it is an analysis of the reaction by monitoring blinks while watching the video. Such analysis should contain a detailed and comprehensive decoding including electrophysiological, psychological, and psychophysiological tools. Thus, the analysis of visual perception by studying the corneal reflex components requires an interdisciplinary approach and should be targeted at results usable both for further studies of psychological features and principles of human visual perception and for the creation of most effectively perceived video.
25

Roșca, Gabriela, and Constantin Radu Mirescu. "An Affordable Temporal Calibration Method for Common Video Camera". Applied Mechanics and Materials 555 (June 2014): 781–86. http://dx.doi.org/10.4028/www.scientific.net/amm.555.781.

Abstract:
Temporal synchronization between video capture and sensor data acquisition is provided in most advanced data-gathering systems by dedicated (and expensive) hardware. This paper asks whether one could obtain accurate temporal assessment with a normal commercial video camera. With the help of an affordable ATmega328 microcontroller board for real-time programming, an inexpensive LED-based circuit, and some image recognition software, an affordable temporal calibration method is proposed that addresses two of the issues concerning the timing of video capture: the real time of image capture and the use of externally recorded timers for synchronization between various acquisition systems.
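The calibration idea can be sketched in a few lines: the microcontroller switches the LED at a known real time, and image recognition finds the first frame in which it appears lit, giving the capture delay. The functions below are an illustrative stand-in for that step, not the authors' implementation:

```python
def first_on_frame(mean_brightness, threshold):
    """Index of the first frame in which the LED region's mean
    brightness exceeds the threshold, or None if the LED never
    appears lit in the sequence."""
    for idx, level in enumerate(mean_brightness):
        if level > threshold:
            return idx
    return None

def capture_delay_s(led_switch_time_s, frame_idx, fps):
    """Estimated delay between the microcontroller switching the LED on
    (known from its real-time clock) and the frame that recorded it."""
    return frame_idx / fps - led_switch_time_s
```

Repeating the measurement over many LED toggles and averaging would tighten the estimate to well under one frame period.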
26

Sharma, Atul, Sushil Raut, Kohei Shimasaki, Taku Senoo, and Idaku Ishii. "Visual-Feedback-Based Frame-by-Frame Synchronization for 3000 fps Projector–Camera Visual Light Communication". Electronics 10, no. 14 (July 8, 2021): 1631. http://dx.doi.org/10.3390/electronics10141631.

Abstract:
This paper proposes a novel method for synchronizing a high frame-rate (HFR) camera with an HFR projector, using a visual feedback-based synchronization algorithm for streaming video sequences in real time on a visible-light communication (VLC)-based system. The frame rates of the camera and projector are equal, and their phases are synchronized. A visual feedback-based synchronization algorithm is used to mitigate the complexities and stabilization issues of wire-based triggering for long-distance systems. The HFR projector projects a binary pattern modulated at 3000 fps. The HFR camera system operates at 3000 fps, which can capture and generate a delay signal to be given to the next camera clock cycle so that it matches the phase of the HFR projector. To test the synchronization performance, we used an HFR projector–camera-based VLC system in which the proposed synchronization algorithm provides maximum bandwidth utilization for the high-throughput transmission ability of the system and reduces data redundancy efficiently. The transmitter of the VLC system encodes the input video sequence into gray code, which is projected via high-definition multimedia interface streaming in the form of binary images 590 × 1060. At the receiver, a monochrome HFR camera can simultaneously capture and decode 12-bit 512 × 512 images in real time and reconstruct a color video sequence at 60 fps. The efficiency of the visual feedback-based synchronization algorithm is evaluated by streaming offline and live video sequences, using a VLC system with single and dual projectors, providing a multiple-projector-based system. The results show that the 3000 fps camera was successfully synchronized with a 3000 fps single-projector and a 1500 fps dual-projector system. It was confirmed that the synchronization algorithm can also be applied to VLC systems, autonomous vehicles, and surveillance applications.
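The feedback step, where the camera computes a delay for its next clock cycle so its phase matches the projector's, amounts to wrapping the measured offset into one frame period. An illustrative sketch of that arithmetic (not the authors' code):

```python
def phase_correction_s(measured_offset_s, frame_period_s):
    """Delay to insert before the camera's next clock cycle so that its
    capture phase lines up with the projector's, wrapped into a single
    frame period. measured_offset_s is how far the camera currently
    leads the projector."""
    return (-measured_offset_s) % frame_period_s
```

At 3000 fps the frame period is about 333 µs, so a measured 100 µs lead would be corrected by delaying the next cycle roughly 233 µs; doing this every frame keeps the loop locked without a trigger wire.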
27

Staelens, Nicolas, Jonas De Meulenaere, Lizzy Bleumers, Glenn Van Wallendael, Jan De Cock, Koen Geeraert, Nick Vercammen, et al. "Assessing the importance of audio/video synchronization for simultaneous translation of video sequences". Multimedia Systems 18, no. 6 (May 3, 2012): 445–57. http://dx.doi.org/10.1007/s00530-012-0262-4.
28

Yang, Ming, Chih-Cheng Hung, and Edward Jung. "Secure Information Delivery through High Bitrate Data Embedding within Digital Video and its Application to Audio/Video Synchronization". International Journal of Information Security and Privacy 6, no. 4 (October 2012): 71–93. http://dx.doi.org/10.4018/jisp.2012100104.

Abstract:
Secure communication has traditionally been ensured with data encryption, which has become easier to break than before due to the advancement of computing power. For this reason, information hiding techniques have emerged as an alternative to achieve secure communication. In this research, a novel information hiding methodology is proposed to deliver secure information with the transmission/broadcasting of digital video. Secure data is embedded within the video frames through vector quantization. At the receiver end, the embedded information can be extracted without the presence of the original video contents. In this system, the major performance goals include visual transparency, high bitrate, and robustness to lossy compression. Based on the proposed methodology, the authors have developed a novel synchronization scheme, which ensures audio/video synchronization through speech-in-video techniques. Compared to existing algorithms, the main contributions of the proposed methodology are: (1) it achieves both high bitrate and robustness against lossy compression; (2) it investigates the impact of the embedded information on the performance of video compression, which has not been addressed in previous research. The proposed algorithm is very useful in practical applications such as secure communication, captioning, speech-in-video, video-in-video, etc.
29

Peña, Raul, Alfonso Ávila, David Muñoz, and Juan Lavariega. "A Data Hiding Technique to Synchronously Embed Physiological Signals in H.264/AVC Encoded Video for Medicine Healthcare". BioMed Research International 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/514087.

Abstract:
The recognition of clinical manifestations in both video images and physiological-signal waveforms is an important aid to improve the safety and effectiveness in medical care. Physicians can rely on video-waveform (VW) observations to recognize difficult-to-spot signs and symptoms. The VW observations can also reduce the number of false positive incidents and expand the recognition coverage to abnormal health conditions. The synchronization between the video images and the physiological-signal waveforms is fundamental for the successful recognition of the clinical manifestations. The use of conventional equipment to synchronously acquire and display the video-waveform information involves complex tasks such as the video capture/compression, the acquisition/compression of each physiological signal, and the video-waveform synchronization based on timestamps. This paper introduces a data hiding technique capable of both enabling embedding channels and synchronously hiding samples of physiological signals into encoded video sequences. Our data hiding technique offers large data capacity and simplifies the complexity of the video-waveform acquisition and reproduction. The experimental results revealed successful embedding and full restoration of signal’s samples. Our results also demonstrated a small distortion in the video objective quality, a small increment in bit-rate, and embedded cost savings of −2.6196% for high and medium motion video sequences.
30

Kim, Giseok, Jae-Soo Cho, Gwangsoon Lee, and Eung-Don Lee. "Real-time Temporal Synchronization and Compensation in Stereoscopic Video". Journal of Broadcast Engineering 18, no. 5 (September 30, 2013): 680–90. http://dx.doi.org/10.5909/jbe.2013.18.5.680.
31

Park, Youngsoo, Dohoon Kim, and Namho Hur. "A Method of Frame Synchronization for Stereoscopic 3D Video". Journal of Broadcast Engineering 18, no. 6 (November 30, 2013): 850–58. http://dx.doi.org/10.5909/jbe.2013.18.6.850.
32

Waddell, J. Patrick. "Audio/Video Synchronization in Compressed Systems: A Status Report". SMPTE Motion Imaging Journal 119, no. 3 (April 2010): 35–41. http://dx.doi.org/10.5594/j11397.
33

Eagan-Deprez, Kathleen, and Reginald Humphreys. "Fostering mind-body synchronization and trance using fractal video". Technoetic Arts 3, no. 2 (September 1, 2005): 93–104. http://dx.doi.org/10.1386/tear.3.2.93/1.
34

Zhang, Hao, Chuohao Yeo, and Kannan Ramchandran. "VSYNC: Bandwidth-Efficient and Distortion-Tolerant Video File Synchronization". IEEE Transactions on Circuits and Systems for Video Technology 22, no. 1 (January 2012): 67–76. http://dx.doi.org/10.1109/tcsvt.2011.2158336.
35

Rothermel, A., and R. Lares. "Synchronization of analog video signals with improved image stability". IEEE Transactions on Consumer Electronics 49, no. 4 (November 2003): 1292–300. http://dx.doi.org/10.1109/tce.2003.1261232.
36

Garner, Geoffrey, and Hyunsurk Ryu. "Synchronization of audio/video bridging networks using IEEE 802.1AS". IEEE Communications Magazine 49, no. 2 (February 2011): 140–47. http://dx.doi.org/10.1109/mcom.2011.5706322.
37

Kubota, S., M. Morikura, and S. Kato. "High-quality frame-synchronization for satellite video signal transmission". IEEE Transactions on Aerospace and Electronic Systems 31, no. 1 (January 1995): 430–35. http://dx.doi.org/10.1109/7.366324.
38

Partha Sindu, I. Gede, and A. A. Gede Yudhi Paramartha. "The Effect of the Instructional Media Based on Lecture Video and Slide Synchronization System on Statistics Learning Achievement". SHS Web of Conferences 42 (2018): 00073. http://dx.doi.org/10.1051/shsconf/20184200073.

Abstract:
The purpose of this study was to determine the effect of using instructional media based on a lecture video and slide synchronization system on the Statistics learning achievement of students in the PTI department. The benefit of this research is to help lecturers in the instructional process and to improve students' learning achievements, leading to better learning outcomes. Students can use instructional media created from the lecture video and slide synchronization system to support more interactive self-learning activities, and can conduct learning activities more efficiently and conducively, because the synchronized lecture video and slides assist them in the learning process. The population of this research was all students of semester VI (six) majoring in Informatics Engineering Education. The sample was the students of classes VI B and VI D of the academic year 2016/2017. The type of research used in this study was a quasi-experiment, and the research design was post-test only with a nonequivalent control group. The result of this research was that applying learning media based on the lecture video and slide synchronization system had a significant effect on statistics learning results in the PTI department.
39

Xiahou, Jian Bing, and Zhen Xiong Wang. "The Apply of DirectShow in Scenario Interactive Teaching System". Advanced Materials Research 926-930 (May 2014): 4641–44. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.4641.

Abstract:
This article describes the basic knowledge and principles of DirectShow technology, including the DirectShow system architecture and COM technologies, and shows how to use DirectShow for real-time video acquisition and storage and for audio-video synchronization in a scenario-based interactive teaching system.
40

Schiavio, Andrea, Jan Stupacher, Richard Parncutt, and Renee Timmers. "Learning Music From Each Other: Synchronization, Turn-taking, or Imitation?" Music Perception 37, no. 5 (June 2020): 403–22. http://dx.doi.org/10.1525/mp.2020.37.5.403.

Abstract:
In an experimental study, we investigated how well novices can learn from each other in situations of technology-aided musical skill acquisition, comparing joint and solo learning, and learning through imitation, synchronization, and turn-taking. Fifty-four participants became familiar, either solo or in pairs, with three short musical melodies and then individually performed each from memory. Each melody was learned in a different way: participants from the solo group were asked via an instructional video to: 1) play in synchrony with the video, 2) take turns with the video, or 3) imitate the video. Participants from the duo group engaged in the same learning trials, but with a partner. Novices in both groups performed more accurately in pitch and time when learning in synchrony and turn-taking than in imitation. No differences were found between solo and joint learning. These results suggest that musical learning benefits from a shared, in-the-moment, musical experience, where responsibilities and cognitive resources are distributed between biological (i.e., peers) and hybrid (i.e., participant(s) and computer) assemblies.
41

Liu, Cunping. "Video Frequency Current Media Technology and Its Applications in Colleges and Universities". Electronics Science Technology and Application 1, no. 1 (July 27, 2014): 15. http://dx.doi.org/10.18686/esta.v1i1.4.

Abstract:
Dealing with video materials using streaming media has unparalleled advantages over traditional audio-video materials. The combination of this technology with campus networks has wide application in colleges and universities. Streaming media has a high transfer rate, data synchronization, and high stability, and is the best way to achieve network audio and video transmission. Transmission of large amounts of video and audio data is the main campus-network application, so video streaming technology is widely used in colleges and universities.
42

MacIntyre, Blair, Marco Lohse, Jay David Bolter, and Emmanuel Moreno. "Integrating 2-D Video Actors into 3-D Augmented-Reality Systems". Presence: Teleoperators and Virtual Environments 11, no. 2 (April 2002): 189–202. http://dx.doi.org/10.1162/1054746021470621.

Abstract:
In this paper, we discuss the integration of 2-D video actors into 3-D augmented-reality (AR) systems. In the context of our research on narrative forms for AR, we have found ourselves needing highly expressive content that is most easily created by human actors. We discuss the feasibility and utility of using video actors in an AR situation and then present our Video Actor Framework (including the VideoActor editor and the Video3D Java package) for easily integrating 2-D videos of actors into Java 3D, an object-oriented 3-D graphics programming environment. The framework is based on the idea of supporting tight spatial and temporal synchronization between the content of the video and the rest of the 3-D world. We present a number of illustrative examples that demonstrate the utility of the toolkit and editor. We close with a discussion and example of our recent work implementing these ideas in Macromedia Director, a popular multimedia production tool.
43

Ahuja, Rakesh, and Sarabjeet Singh Bedi. "Video Watermarking Scheme Based on Candidates I-frames for Copyright Protection". Indonesian Journal of Electrical Engineering and Computer Science 5, no. 2 (February 1, 2017): 391. http://dx.doi.org/10.11591/ijeecs.v5.i2.pp391-400.

Abstract:
<p>This paper focuses on a Moving Picture Experts Group (MPEG) based digital video watermarking scheme for copyright protection services. The present work implements a video watermarking technique that uses intermediate-frequency discrete cosine transform (DCT) coefficients from instantaneous decoder refresh (IDR) frames. A subset of the IDR frames is chosen as candidate frames to reduce the probability of temporal synchronization attacks, achieving better robustness and high visual perceptual quality. A secret-key-based cryptographic technique is used to enhance the security of the embedded watermark. The proposed scheme embeds the watermark directly during the differential pulse code modulation (DPCM) process and extracts it by decoding the entropy details. Three keys are required to extract the watermark: one key is used to stop the extraction process, and the remaining two are used to display the watermark. Robustness is evaluated by testing spatial synchronization attacks, temporal synchronization attacks, and re-encoding attacks. Simulation results show that the scheme achieves high robustness against video processing attacks that frequently occur in the real world, and good perceptual quality is obtained without changing the motion vectors during the DPCM process of the MPEG-2 encoding scheme.</p>
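The general idea of embedding a watermark bit in mid-frequency DCT coefficients, as described in the abstract above, can be illustrated with a minimal sketch. This is not the paper's implementation: the coefficient position (3, 2), the embedding strength, and the function names are illustrative assumptions, and the sketch operates on a single 8x8 pixel block rather than on IDR frames of an MPEG-2 stream.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, strength=8.0):
    """Embed one watermark bit by forcing the sign of a mid-frequency
    DCT coefficient of an 8x8 block (position (3, 2) is an illustrative
    choice, not the paper's)."""
    coeffs = dctn(block, norm="ortho")
    coeffs[3, 2] = strength if bit else -strength
    return idctn(coeffs, norm="ortho")

def extract_bit(block):
    """Recover the bit from the sign of the same coefficient."""
    return int(dctn(block, norm="ortho")[3, 2] > 0)

rng = np.random.default_rng(0)
luma_block = rng.uniform(0.0, 255.0, (8, 8))  # stand-in luminance block
marked = embed_bit(luma_block, 1)
print(extract_bit(marked))  # → 1
```

Real schemes spread each bit over many coefficients and frames and add a secret-key permutation for security; the single-coefficient version only demonstrates the embed/extract round trip.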
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Wang, Kelvin C. P., Xuyang Li and Robert P. Elliott. "Generic Multimedia Database for Highway Infrastructure Management". Transportation Research Record: Journal of the Transportation Research Board 1615, no. 1 (January 1998): 56–64. http://dx.doi.org/10.3141/1615-08.

Full text source
Abstract:
Images of highway right-of-way are used widely by highway agencies through their photologging services to obtain visual information for the analysis of traffic accidents, design improvement, and highway pavement management. The video data usually are in analog format, which is limited in accessibility and search, cannot automatically display site engineering data sets with video, and does not allow simultaneous access by multiple users. Recognizing the need to improve the existing photologging systems, the state highway agency of Arkansas sponsored a research project to develop a fully digital, computer-based highway information system that extends the capabilities of existing photologging equipment. The software technologies developed for a distributed multimedia-based highway information system (MMHIS) are presented. MMHIS removes several limitations of the existing systems. The advanced technologies used in this system include digital video, data synchronization, high-speed networking, and a video server. The developed system can dynamically link the digital video with the corresponding engineering site data based on a novel algorithm for data synchronization. Also presented is a unique technique to construct a three-dimensional user interface for MMHIS based on the terrain map of Arkansas.
Styles: APA, Harvard, Vancouver, ISO, etc.
45

O'Connor, Brian J., H. John Yack and Scott C. White. "Reducing Errors in Kinetic Calculations: Improved Synchronization of Video and Ground Reaction Force Records". Journal of Applied Biomechanics 11, no. 2 (May 1995): 216–23. http://dx.doi.org/10.1123/jab.11.2.216.

Full text source
Abstract:
A strategy is presented for temporally aligning ground reaction force and kinematic data. Alignment of these data requires marking both the force and video records at a common event. The strategy uses the information content of the video signal, which is A/D converted along with the ground reaction force analog signals, to accomplish this alignment in time. The vertical blanking pulses in the video signal, which define the start of each video field, can be readily identified, provided the correct A/D sampling rate is selected. Knowledge of the position of these vertical blanking pulses relative to the synchronization pulse makes it possible to precisely align the video and analog data in time. Choosing an A/D sampling rate of 598 Hz would enable video and analog data to be synchronized to within 1/1,196 s. Minimizing temporal alignment error results in greater accuracy and reliability in calculations used to determine joint kinetics.
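The alignment arithmetic in this abstract can be checked with a small sketch. This is hypothetical code, not from the paper: the threshold-based pulse detector and the function names are illustrative assumptions. The only fact taken from the abstract is that the worst-case alignment error equals half the A/D sampling period, so a 598 Hz rate gives 1/1,196 s.

```python
def alignment_uncertainty(fs_hz):
    """Worst-case temporal alignment error: half the A/D sampling period."""
    return 1.0 / (2.0 * fs_hz)

def find_blanking_pulses(samples, threshold):
    """Crude detector: indices where the digitized video signal drops
    below the blanking threshold (start of a vertical blanking pulse)."""
    return [i for i in range(1, len(samples))
            if samples[i] < threshold <= samples[i - 1]]

print(alignment_uncertainty(598.0))  # → 1/1196 ≈ 0.000836 s
```

With the blanking-pulse indices from the digitized video channel and the synchronization pulse in the force channel, both records share one time base to within that half-sample uncertainty.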
Styles: APA, Harvard, Vancouver, ISO, etc.
46

PARK, Youngsoo, Taewon KIM and Namho HUR. "Frame Synchronization for Depth-Based 3D Video Using Edge Coherence". IEICE Transactions on Information and Systems E96.D, no. 9 (2013): 2166–69. http://dx.doi.org/10.1587/transinf.e96.d.2166.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Hoffner, Randall. "Audio-Video Synchronization across DTV Transport Interfaces: The Impossible Dream?" SMPTE Journal 109, no. 11 (November 2000): 881–84. http://dx.doi.org/10.5594/j17554.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Li, Bing, and Mei-qiang Shi. "Audio-Video synchronization coding approach based on H.264/AVC". IEICE Electronics Express 6, no. 22 (2009): 1556–61. http://dx.doi.org/10.1587/elex.6.1556.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Zanella, F., F. Pasqualetti, R. Carli and F. Bullo. "Simultaneous Boundary Partitioning and Cameras Synchronization for Optimal Video Surveillance*". IFAC Proceedings Volumes 45, no. 26 (September 2012): 1–6. http://dx.doi.org/10.3182/20120914-2-us-4030.00023.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
50

KAMEI, Katsuari, Akifumi TOYOTA and Jun-ichi KUSHIDA. "Upsurge of Viewer's Emotion by Video Sharing Using Pseudo Synchronization". Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 24, no. 5 (2012): 944–53. http://dx.doi.org/10.3156/jsoft.24.944.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
We offer discounts on all premium plans for authors whose works are included in thematic literature collections. Contact us to get a unique promo code!

To the bibliography