
Journal articles on the topic "Video synchronization"

Check out the top 50 journal articles for research on the topic "Video synchronization".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, if the relevant parameters are present in the metadata.

Browse journal articles from a wide variety of disciplines and compile your bibliography correctly.

1

El-Sallam, Amar A., and Ajmal S. Mian. "Correlation based speech-video synchronization". Pattern Recognition Letters 32, no. 6 (April 2011): 780–86. http://dx.doi.org/10.1016/j.patrec.2011.01.001.
2

Lin, E. T., and E. J. Delp. "Temporal Synchronization in Video Watermarking". IEEE Transactions on Signal Processing 52, no. 10 (October 2004): 3007–22. http://dx.doi.org/10.1109/tsp.2004.833866.
3

Fu, Jia Bing, and He Wei Yu. "Audio-Video Synchronization Method Based on Playback Time". Applied Mechanics and Materials 300-301 (February 2013): 1677–80. http://dx.doi.org/10.4028/www.scientific.net/amm.300-301.1677.

Annotation:
This paper proposes an audio and video synchronization method based on playback time. In the ordinary playback process, the playback rate of the audio is constant, so we can locate the playback time of audio and video by locating the key frame to obtain synchronization. Experimental results show that this method achieves synchronization between audio and video, is simple to implement, and reduces the system overhead required for synchronization.
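The playback-time idea summarized above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code; the sample rate and frame rate are assumed example values:

```python
# Illustrative sketch of playback-time-based A/V synchronization:
# the audio stream, whose playback rate is constant, serves as the
# master clock, and the matching video frame is located from it.
# AUDIO_SAMPLE_RATE and VIDEO_FRAME_RATE are assumed example values.

AUDIO_SAMPLE_RATE = 48_000  # audio samples per second
VIDEO_FRAME_RATE = 25.0     # video frames per second

def audio_playback_time(samples_played: int) -> float:
    """Playback time in seconds, derived from the audio stream."""
    return samples_played / AUDIO_SAMPLE_RATE

def frame_index_for_time(t: float) -> int:
    """Index of the video frame whose presentation time matches t."""
    return round(t * VIDEO_FRAME_RATE)

# After 96,000 audio samples, 2.0 s have elapsed and frame 50 is due.
```

Because the audio clock is derived from a constant sample rate, no separate synchronization timer is needed, which matches the paper's claim of low system overhead.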
4

Li, Xiao Ni, He Xin Chen, and Da Zhong Wang. "Research on Audio-Video Synchronization Coding Based on Mode Selection in H.264". Applied Mechanics and Materials 182-183 (June 2012): 701–5. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.701.

Annotation:
An embedded audio-video synchronization compression coding approach is presented. The proposed method exploits the different mode types used by the H.264 encoder during the inter-prediction stage: different modes carry corresponding audio information, so the audio is embedded into the video stream by choosing modes during inter prediction, and synchronization coding is then applied to the mixed video and audio. The synchronization processing method was verified on H.264/AVC using the JM reference model. Experimental results show that the method achieves synchronization between audio and video at a small embedding cost; at the same time, the audio signal can be extracted without distortion, and the method has hardly any effect on the quality of the video image.
5

Liu, Yiguang, Menglong Yang, and Zhisheng You. "Video synchronization based on events alignment". Pattern Recognition Letters 33, no. 10 (July 2012): 1338–48. http://dx.doi.org/10.1016/j.patrec.2012.02.009.
6

Li, Mu, and Vishal Monga. "Twofold Video Hashing With Automatic Synchronization". IEEE Transactions on Information Forensics and Security 10, no. 8 (August 2015): 1727–38. http://dx.doi.org/10.1109/tifs.2015.2425362.
7

Zhou, Zhongyi, Anran Xu, and Koji Yatani. "SyncUp". Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 3 (September 9, 2021): 1–25. http://dx.doi.org/10.1145/3478120.

Annotation:
The beauty of synchronized dancing lies in the synchronization of body movements among multiple dancers. While dancers utilize camera recordings for their practice, standard video interfaces do not efficiently support their activities of identifying segments where they are not well synchronized. This fails to close the tight loop of an iterative practice process (i.e., capturing a practice, reviewing the video, and practicing again). We present SyncUp, a system that provides multiple interactive visualizations to support the practice of synchronized dancing and liberate users from manual inspection of recorded practice videos. By analyzing videos uploaded by users, SyncUp quantifies two aspects of synchronization in dancing: pose similarity among multiple dancers and temporal alignment of their movements. The system then highlights which body parts and which portions of the dance routine require further practice to achieve better synchronization. The results of our system evaluations show that our pose similarity estimation and temporal alignment predictions correlate well with human ratings. Participants in our qualitative user evaluation highlighted the benefits and potential uses of SyncUp, confirming that it would enable quick iterative practice.
8

Yang, Shu Zhen, Guang Lin Chu, and Ming Wang. "A Study on Parallel Processing Video Splicing System with Multi-Processor". Applied Mechanics and Materials 198-199 (September 2012): 304–9. http://dx.doi.org/10.4028/www.scientific.net/amm.198-199.304.

Annotation:
This paper introduces a parallel processing video splicing system with multiple processors. The main processor receives encoded video data from the video source and, after decoding, outputs the video data to the coprocessors simultaneously. The coprocessors capture the video data needed for splicing and display it on the target monitor. With this method, a sophisticated time-synchronization algorithm is no longer needed. The proposed approach also lowers system resource consumption and improves the accuracy of video synchronization across the coprocessors.
9

Kwon, Ohsung. "Class Analysis Method Using Video Synchronization Algorithm". Journal of The Korean Association of Information Education 19, no. 4 (December 30, 2015): 441–48. http://dx.doi.org/10.14352/jkaie.2015.19.4.441.
10

Chen, T., H. P. Graf, and K. Wang. "Lip synchronization using speech-assisted video processing". IEEE Signal Processing Letters 2, no. 4 (April 1995): 57–59. http://dx.doi.org/10.1109/97.376913.
11

Sun, Shih-Wei, and Pao-Chi Chang. "Video watermarking synchronization based on profile statistics". IEEE Aerospace and Electronic Systems Magazine 19, no. 5 (May 2004): 21–25. http://dx.doi.org/10.1109/maes.2004.1301222.
12

Capuni, Ilir, Nertil Zhuri, and Rejvi Dardha. "TimeStream: Exploiting video streams for clock synchronization". Ad Hoc Networks 91 (August 2019): 101878. http://dx.doi.org/10.1016/j.adhoc.2019.101878.
13

Zhang, Qiang, Lin Yao, Yajun Li, and Jungong Han. "Video Synchronization Based on Projective-Invariant Descriptor". Neural Processing Letters 49, no. 3 (July 25, 2018): 1093–110. http://dx.doi.org/10.1007/s11063-018-9885-6.
14

Elhajj, Imad H., Nadine Bou Dargham, Ning Xi, and Yunyi Jia. "Real-Time Adaptive Content-Based Synchronization of Multimedia Streams". Advances in Multimedia 2011 (2011): 1–13. http://dx.doi.org/10.1155/2011/914062.

Annotation:
Traditional synchronization schemes for multimedia applications are based on temporal relationships between and within streams. These schemes do not provide good synchronization in the presence of random delay. As a solution, this paper proposes an adaptive content-based synchronization scheme that synchronizes multimedia streams by accounting for content in addition to time. This approach to synchronization is based on the fact that having two streams sampled close in time does not always imply that the streams are close in content. The proposed scheme's primary contribution is the synchronization of audio and video streams based on content; its secondary contribution is adapting the frame rate based on content decisions. Testing adaptive content-based and adaptive time-based synchronization algorithms remotely between the American University of Beirut and Michigan State University showed that the proposed method outperforms the traditional synchronization method. Objective and subjective assessment of the received video and audio quality demonstrated that the content-based scheme provides better synchronization and overall quality of multimedia streams. Although demonstrated using a video conferencing application, the method can be applied to any multimedia streams, including nontraditional ones referred to as supermedia, such as control signals, haptic data, and other sensory measurements. In addition, the method can be applied to synchronize more than two streams simultaneously.
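The contrast the paper draws between time-based and content-based pairing can be illustrated with a toy sketch. This is not the authors' algorithm; the sample structure and feature values are invented for illustration:

```python
# Toy illustration of time-based versus content-based stream pairing.
# Each sample carries a timestamp "t" and a scalar content feature "f"
# (both invented for this example).

def pair_by_time(stream_a, stream_b):
    """Match each a-sample with the b-sample closest in timestamp."""
    return [min(stream_b, key=lambda s: abs(s["t"] - a["t"])) for a in stream_a]

def pair_by_content(stream_a, stream_b):
    """Match each a-sample with the b-sample closest in content feature."""
    return [min(stream_b, key=lambda s: abs(s["f"] - a["f"])) for a in stream_a]

# Two streams sampled close in time are not necessarily close in content:
a = [{"t": 1.00, "f": 0.9}]
b = [{"t": 1.01, "f": 0.1}, {"t": 1.20, "f": 0.85}]
# pair_by_time picks the first b-sample; pair_by_content picks the second.
```

The toy data mirrors the paper's observation: the sample nearest in time carries very different content, so a content-aware pairing chooses differently.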
15

Solokhina, T. V., Ya Ya Petrichkovich, A. A. Belyaev, I. A. Belyaev, and A. V. Egorov. "Dataflow synchronization mechanism for H.264 hardware video codec". Issues of radio electronics, no. 8 (August 7, 2019): 13–20. http://dx.doi.org/10.21778/2218-5453-2019-8-13-20.

Annotation:
Modern video compression standards require significant computational costs for their implementation. When video data arrives at a high rate and the computational costs are significant, hardware compression tools may be preferable to software ones. The article proposes a method for synchronizing data streams during hardware implementation of compression/decompression in accordance with the H.264 standard. The developed video codec is an IP core within an 1892ВМ14Я microcircuit operating under the control of an external processor core. The architecture and main characteristics of the video codec are presented. To synchronize the operation of the computing blocks and the controller for direct access to the video memory, the video codec contains an event register: a set of data-readiness flags for the blocks involved in processing. Experimental results of measuring performance on real video scenes with various transmitted-image formats are presented, confirming the high throughput of the developed video codec.
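The event-register mechanism described in the annotation can be approximated with a small sketch. The flag names and register layout below are assumptions for illustration, not the actual codec's design:

```python
# Minimal sketch of an event register holding data-readiness flags for
# processing blocks, loosely modeled on the mechanism described above.
# The flag names and layout are invented for illustration.

READY_DECODE_BLOCK = 0b001  # a computing block has produced its data
READY_FILTER_BLOCK = 0b010  # another computing block is ready
READY_VIDEO_DMA    = 0b100  # the video-memory DMA controller is ready

class EventRegister:
    """Set of readiness flags that gates the next pipeline stage."""

    def __init__(self) -> None:
        self.flags = 0

    def set_ready(self, flag: int) -> None:
        self.flags |= flag

    def all_ready(self, mask: int) -> bool:
        """The stage may proceed only when every flag in mask is set."""
        return (self.flags & mask) == mask

reg = EventRegister()
reg.set_ready(READY_DECODE_BLOCK)
reg.set_ready(READY_VIDEO_DMA)
# The filter block has not signalled readiness, so the stage must wait.
stage_may_run = reg.all_ready(READY_DECODE_BLOCK | READY_FILTER_BLOCK | READY_VIDEO_DMA)
```

Gating a stage on a flag mask rather than on a shared clock is what lets the blocks and the DMA controller run at their own pace yet hand off data safely.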
16

Wang, Yuanyuan, Daisuke Kitayama, Yukiko Kawai, Kazutoshi Sumiya, and Yoshiharu Ishikawa. "An Automatic Video Reinforcing System for TV Programs using Semantic Metadata from Closed Captions". International Journal of Multimedia Data Engineering and Management 7, no. 1 (January 2016): 1–21. http://dx.doi.org/10.4018/ijmdem.2016010101.

Annotation:
There are various TV programs such as travel and educational programs. While watching TV programs, viewers often search related information about the programs through the Web. Nevertheless, as TV programs keep playing, viewers possibly miss some important scenes when searching the Web. As a result, their enjoyment would be spoiled. Another problem is that there are various topics in each scene of a video, and viewers usually have different levels of knowledge. Thus, it is important to detect topics in videos and supplement videos with related information automatically. In this paper, the authors propose a novel automatic video reinforcing system with two functions: (1) a media synchronization mechanism, which presents supplementary information synchronized with videos, in order to enable viewers to effectively understand the geographic data in videos; (2) a video reconstruction mechanism, which generates new video contents based on viewers' interests and knowledge by adding and removing scenes, in order to enable viewers to enjoy the generated videos without additional search.
17

Whitehead, Anthony, Robert Laganiere, and Prosenjit Bose. "Formalization of the General Video Temporal Synchronization Problem". ELCVIA Electronic Letters on Computer Vision and Image Analysis 9, no. 1 (April 21, 2010): 1. http://dx.doi.org/10.5565/rev/elcvia.330.
18

Jang, Sung-Bong, and Hong-Seok Na. "Synchronization Quality Enhancement in 3G-324M Video Telephony". IEEE Transactions on Circuits and Systems for Video Technology 21, no. 10 (October 2011): 1512–21. http://dx.doi.org/10.1109/tcsvt.2011.2164832.
19

Kim, C. "Efficient Media Synchronization Method for Video Telephony System". IEICE Transactions on Information and Systems E89-D, no. 6 (June 1, 2006): 1901–5. http://dx.doi.org/10.1093/ietisy/e89-d.6.1901.
20

Nakano, Tamami, Yoshiharu Yamamoto, Keiichi Kitajo, Toshimitsu Takahashi, and Shigeru Kitazawa. "Synchronization of spontaneous eyeblinks while viewing video stories". Proceedings of the Royal Society B: Biological Sciences 276, no. 1673 (July 29, 2009): 3635–44. http://dx.doi.org/10.1098/rspb.2009.0828.
21

Tresadern, Philip A., and Ian D. Reid. "Video synchronization from human motion using rank constraints". Computer Vision and Image Understanding 113, no. 8 (August 2009): 891–906. http://dx.doi.org/10.1016/j.cviu.2009.03.012.
22

Andreatos, A. S., and E. N. Protonotarios. "Receiver synchronization of a packet video communication system". Computer Communications 17, no. 6 (June 1994): 387–95. http://dx.doi.org/10.1016/0140-3664(94)90123-6.
23

Cao, Xiaochun, Lin Wu, Jiangjian Xiao, Hassan Foroosh, Jigui Zhu, and Xiaohong Li. "Video synchronization and its application to object transfer". Image and Vision Computing 28, no. 1 (January 2010): 92–100. http://dx.doi.org/10.1016/j.imavis.2009.04.015.
24

Shakurova, A. R. "Analysis of visual perception features by corneal reflex components examination". Kazan medical journal 95, no. 1 (February 15, 2014): 82–86. http://dx.doi.org/10.17816/kmj1462.

Annotation:
The article surveys the data of experimental studies in which the corneal reflex was used in the analysis of the visual perception process. Visual perception largely depends on the physiological characteristics of the human visual system, both individual and general. Blinking performs a number of functions, one of which is protection, including protection from unpleasant or undesired information. Blinking is closely related to the processes of concentration and disinterest. Blinking while watching a video is synchronized within a single person and across a group of people watching the same video fragment. Blinking synchronization depends on the video plot; background video does not cause synchronization. Blinking synchronization is not gender-specific. A longer duration of blinking is associated with a significant increase in the intervals between blinks. Accounting for these features of visual perception will make it possible to coordinate work with video in several ways. First of all, it is the analysis of the viewer's reaction by monitoring blinks while watching the video. Such analysis should include a detailed and comprehensive decoding using electrophysiological, psychological, and psychophysiological tools. Thus, the analysis of visual perception by studying the corneal reflex components requires an interdisciplinary approach and should be targeted at producing results usable both for further studies of the psychological features and principles of human visual perception and for the creation of video that is perceived most effectively.
25

Roșca, Gabriela, and Constantin Radu Mirescu. "An Affordable Temporal Calibration Method for Common Video Camera". Applied Mechanics and Materials 555 (June 2014): 781–86. http://dx.doi.org/10.4028/www.scientific.net/amm.555.781.

Annotation:
Most advanced data-gathering systems provide temporal synchronization between video capture and sensor data acquisition using dedicated (and expensive) hardware. This paper raises the question of whether accurate temporal assessment can be obtained with an ordinary commercial video camera. With the help of an affordable ATmega328 microcontroller board for real-time programming, an inexpensive LED-based circuit, and some image recognition software, an affordable temporal calibration method is proposed that addresses two issues concerning the timing of video capture: the real time of image capture and the use of externally recorded timers for synchronization between various acquisition systems.
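The LED-based calibration idea can be sketched as follows. This is a hypothetical simplification of the approach: the microcontroller switches an LED on at a known real time, and image recognition reports which frames show it lit; all values are invented for illustration:

```python
# Hedged sketch of LED-based temporal calibration of a video camera:
# a microcontroller turns an LED on at a known real time, and image
# recognition flags the frames in which the LED appears lit. The gap
# between the LED-on time and the first lit frame estimates the
# capture latency. All values below are invented for illustration.

def capture_latency(led_on_time: float, frame_times, frame_lit) -> float:
    """Seconds between the real LED-on event and the first frame showing it."""
    for t, lit in zip(frame_times, frame_lit):
        if lit:
            return t - led_on_time
    raise ValueError("LED never detected in the recorded frames")

# 25 fps camera; LED switched on at t = 0.100 s, first lit frame at 0.120 s.
frames = [0.000, 0.040, 0.080, 0.120, 0.160]
lit = [False, False, False, True, True]
latency = capture_latency(0.100, frames, lit)  # ≈ 0.020 s
```

Repeating the measurement over many LED flashes and averaging would tighten the estimate, which is essentially what an external real-time timer buys in the paper's setup.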
26

Sharma, Atul, Sushil Raut, Kohei Shimasaki, Taku Senoo, and Idaku Ishii. "Visual-Feedback-Based Frame-by-Frame Synchronization for 3000 fps Projector–Camera Visual Light Communication". Electronics 10, no. 14 (July 8, 2021): 1631. http://dx.doi.org/10.3390/electronics10141631.

Annotation:
This paper proposes a novel method for synchronizing a high frame-rate (HFR) camera with an HFR projector, using a visual feedback-based synchronization algorithm for streaming video sequences in real time on a visible-light communication (VLC)-based system. The frame rates of the camera and projector are equal, and their phases are synchronized. A visual feedback-based synchronization algorithm is used to mitigate the complexities and stabilization issues of wire-based triggering for long-distance systems. The HFR projector projects a binary pattern modulated at 3000 fps. The HFR camera system operates at 3000 fps and can capture and generate a delay signal to be applied to the next camera clock cycle so that it matches the phase of the HFR projector. To test the synchronization performance, we used an HFR projector–camera-based VLC system in which the proposed synchronization algorithm provides maximum bandwidth utilization for the high-throughput transmission ability of the system and efficiently reduces data redundancy. The transmitter of the VLC system encodes the input video sequence into Gray code, which is projected via high-definition multimedia interface streaming in the form of 590 × 1060 binary images. At the receiver, a monochrome HFR camera can simultaneously capture and decode 12-bit 512 × 512 images in real time and reconstruct a color video sequence at 60 fps. The efficiency of the visual feedback-based synchronization algorithm is evaluated by streaming offline and live video sequences, using a VLC system with single and dual projectors, providing a multiple-projector-based system. The results show that the 3000 fps camera was successfully synchronized with a 3000 fps single-projector and a 1500 fps dual-projector system. It was confirmed that the synchronization algorithm can also be applied to VLC systems, autonomous vehicles, and surveillance applications.
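The visual-feedback phase correction can be reduced to a small sketch under stated assumptions: both devices run at the same frame rate, the camera measures its phase offset from the projected pattern, and it delays its next clock cycle to cancel that offset. This is an assumption-laden simplification, not the paper's implementation:

```python
# Illustrative sketch of frame-by-frame phase synchronization: both
# devices run at the same frame rate, and the camera adds a delay to
# its next clock cycle to cancel the measured phase offset. The frame
# period corresponds to 3000 fps; the offset value is invented.

FRAME_PERIOD_US = 1_000_000 / 3000  # ≈ 333.3 microseconds per frame

def correction_delay(measured_offset_us: float) -> float:
    """Delay for the next camera cycle that cancels the phase offset."""
    return (-measured_offset_us) % FRAME_PERIOD_US

# Camera observed to lag the projector by 50 µs: delay the next cycle
# by ≈ 283.3 µs so the following frame starts in phase.
delay = correction_delay(50.0)
```

Because the correction is wrapped modulo one frame period, the loop converges within a single cycle regardless of how large the initial offset is.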
27

Staelens, Nicolas, Jonas De Meulenaere, Lizzy Bleumers, Glenn Van Wallendael, Jan De Cock, Koen Geeraert, Nick Vercammen et al. "Assessing the importance of audio/video synchronization for simultaneous translation of video sequences". Multimedia Systems 18, no. 6 (May 3, 2012): 445–57. http://dx.doi.org/10.1007/s00530-012-0262-4.
28

Yang, Ming, Chih-Cheng Hung, and Edward Jung. "Secure Information Delivery through High Bitrate Data Embedding within Digital Video and its Application to Audio/Video Synchronization". International Journal of Information Security and Privacy 6, no. 4 (October 2012): 71–93. http://dx.doi.org/10.4018/jisp.2012100104.

Annotation:
Secure communication has traditionally been ensured with data encryption, which has become easier to break due to advances in computing power. For this reason, information hiding techniques have emerged as an alternative for achieving secure communication. In this research, a novel information hiding methodology is proposed to deliver secure information with the transmission/broadcasting of digital video. Secure data are embedded within the video frames through vector quantization. At the receiver end, the embedded information can be extracted without the presence of the original video contents. In this system, the major performance goals include visual transparency, high bitrate, and robustness to lossy compression. Based on the proposed methodology, the authors have developed a novel synchronization scheme, which ensures audio/video synchronization through speech-in-video techniques. Compared to existing algorithms, the main contributions of the proposed methodology are: (1) it achieves both high bitrate and robustness against lossy compression; (2) it investigates the impact of the embedded information on the performance of video compression, which had not been addressed in previous research. The proposed algorithm is very useful in practical applications such as secure communication, captioning, speech-in-video, video-in-video, etc.
29

Peña, Raul, Alfonso Ávila, David Muñoz, and Juan Lavariega. "A Data Hiding Technique to Synchronously Embed Physiological Signals in H.264/AVC Encoded Video for Medicine Healthcare". BioMed Research International 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/514087.

Annotation:
The recognition of clinical manifestations in both video images and physiological-signal waveforms is an important aid to improve the safety and effectiveness of medical care. Physicians can rely on video-waveform (VW) observations to recognize difficult-to-spot signs and symptoms. The VW observations can also reduce the number of false positive incidents and expand the recognition coverage to abnormal health conditions. The synchronization between the video images and the physiological-signal waveforms is fundamental for the successful recognition of the clinical manifestations. The use of conventional equipment to synchronously acquire and display the video-waveform information involves complex tasks such as video capture/compression, the acquisition/compression of each physiological signal, and video-waveform synchronization based on timestamps. This paper introduces a data hiding technique capable of both enabling embedding channels and synchronously hiding samples of physiological signals in encoded video sequences. Our data hiding technique offers large data capacity and simplifies the complexity of video-waveform acquisition and reproduction. The experimental results revealed successful embedding and full restoration of the signals' samples. Our results also demonstrated a small distortion in objective video quality, a small increment in bit rate, and embedding cost savings of −2.6196% for high- and medium-motion video sequences.
30

Kim, Giseok, Jae-Soo Cho, Gwangsoon Lee, and Eung-Don Lee. "Real-time Temporal Synchronization and Compensation in Stereoscopic Video". Journal of Broadcast Engineering 18, no. 5 (September 30, 2013): 680–90. http://dx.doi.org/10.5909/jbe.2013.18.5.680.
31

Park, Youngsoo, Dohoon Kim, and Namho Hur. "A Method of Frame Synchronization for Stereoscopic 3D Video". Journal of Broadcast Engineering 18, no. 6 (November 30, 2013): 850–58. http://dx.doi.org/10.5909/jbe.2013.18.6.850.
32

Waddell, J. Patrick. "Audio/Video Synchronization in Compressed Systems: A Status Report". SMPTE Motion Imaging Journal 119, no. 3 (April 2010): 35–41. http://dx.doi.org/10.5594/j11397.
33

Eagan-Deprez, Kathleen, and Reginald Humphreys. "Fostering mind-body synchronization and trance using fractal video". Technoetic Arts 3, no. 2 (September 1, 2005): 93–104. http://dx.doi.org/10.1386/tear.3.2.93/1.
34

Zhang, Hao, Chuohao Yeo, and Kannan Ramchandran. "VSYNC: Bandwidth-Efficient and Distortion-Tolerant Video File Synchronization". IEEE Transactions on Circuits and Systems for Video Technology 22, no. 1 (January 2012): 67–76. http://dx.doi.org/10.1109/tcsvt.2011.2158336.
35

Rothermel, A., and R. Lares. "Synchronization of analog video signals with improved image stability". IEEE Transactions on Consumer Electronics 49, no. 4 (November 2003): 1292–300. http://dx.doi.org/10.1109/tce.2003.1261232.
36

Garner, Geoffrey, and Hyunsurk Ryu. "Synchronization of audio/video bridging networks using IEEE 802.1AS". IEEE Communications Magazine 49, no. 2 (February 2011): 140–47. http://dx.doi.org/10.1109/mcom.2011.5706322.
37

Kubota, S., M. Morikura, and S. Kato. "High-quality frame-synchronization for satellite video signal transmission". IEEE Transactions on Aerospace and Electronic Systems 31, no. 1 (January 1995): 430–35. http://dx.doi.org/10.1109/7.366324.
38

Partha Sindu, I. Gede, and A. A. Gede Yudhi Paramartha. "The Effect of the Instructional Media Based on Lecture Video and Slide Synchronization System on Statistics Learning Achievement". SHS Web of Conferences 42 (2018): 00073. http://dx.doi.org/10.1051/shsconf/20184200073.

Annotation:
The purpose of this study was to determine the effect of instructional media based on a lecture video and slide synchronization system on the statistics learning achievement of students in the PTI department. The benefit of this research is to help lecturers improve the instructional process, leading to better student learning outcomes. Students can use the instructional media created from the lecture video and slide synchronization system to support more interactive self-learning activities, and they can conduct learning activities more efficiently and conducively because the synchronized lecture video and slides assist them in the learning process. The population of this research was all sixth-semester students majoring in Informatics Engineering Education; the sample was the students of classes VI B and VI D of the academic year 2016/2017. The study used a quasi-experimental design, specifically a posttest-only nonequivalent control group design. The results show a significant effect of applying learning media based on the lecture video and slide synchronization system on statistics learning results in the PTI department.
39

Xiahou, Jian Bing, and Zhen Xiong Wang. "The Apply of DirectShow in Scenario Interactive Teaching System". Advanced Materials Research 926-930 (May 2014): 4641–44. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.4641.

Annotation:
This article describes the basic knowledge and principles of DirectShow technology, including the DirectShow system architecture and COM technologies, and explains how to use DirectShow for real-time video acquisition and storage and for audio-video synchronization in a scenario-based interactive teaching system.
40

Schiavio, Andrea, Jan Stupacher, Richard Parncutt, and Renee Timmers. "Learning Music From Each Other: Synchronization, Turn-taking, or Imitation?" Music Perception 37, no. 5 (June 2020): 403–22. http://dx.doi.org/10.1525/mp.2020.37.5.403.

Annotation:
In an experimental study, we investigated how well novices can learn from each other in situations of technology-aided musical skill acquisition, comparing joint and solo learning, and learning through imitation, synchronization, and turn-taking. Fifty-four participants became familiar, either solo or in pairs, with three short musical melodies and then individually performed each from memory. Each melody was learned in a different way: participants from the solo group were asked via an instructional video to: 1) play in synchrony with the video, 2) take turns with the video, or 3) imitate the video. Participants from the duo group engaged in the same learning trials, but with a partner. Novices in both groups performed more accurately in pitch and time when learning in synchrony and turn-taking than in imitation. No differences were found between solo and joint learning. These results suggest that musical learning benefits from a shared, in-the-moment, musical experience, where responsibilities and cognitive resources are distributed between biological (i.e., peers) and hybrid (i.e., participant(s) and computer) assemblies.
41

Liu, Cunping. "Video Frequency Current Media Technology and Its Applications in Colleges and Universities". Electronics Science Technology and Application 1, no. 1 (July 27, 2014): 15. http://dx.doi.org/10.18686/esta.v1i1.4.

Annotation:
Handling video materials with streaming media has unparalleled advantages over traditional audio-video materials, and its combination with campus networks is widely applied in colleges and universities. Streaming media offers a high transfer rate, data synchronization, and high stability, making it the best way to achieve networked audio and video transmission. Transmitting large volumes of video and audio data is the main application of campus networks, and video streaming technology is used primarily in colleges and universities.
42

MacIntyre, Blair, Marco Lohse, Jay David Bolter und Emmanuel Moreno. „Integrating 2-D Video Actors into 3-D Augmented-Reality Systems“. Presence: Teleoperators and Virtual Environments 11, Nr. 2 (April 2002): 189–202. http://dx.doi.org/10.1162/1054746021470621.

Der volle Inhalt der Quelle
Annotation:
In this paper, we discuss the integration of 2-D video actors into 3-D augmentedreality (AR) systems. In the context of our research on narrative forms for AR, we have found ourselves needing highly expressive content that is most easily created by human actors. We discuss the feasibility and utility of using video actors in an AR situation and then present our Video Actor Framework (including the VideoActor editor and the Video3D Java package) for easily integrating 2-D videos of actors into Java 3D, an object-oriented 3-D graphics programming environment. The framework is based on the idea of supporting tight spatial and temporal synchronization between the content of the video and the rest of the 3-D world. We present a number of illustrative examples that demonstrate the utility of the toolkit and editor. We close with a discussion and example of our recent work implementing these ideas in Macromedia Director, a popular multimedia production tool.
43

Ahuja, Rakesh, and Dr Sarabjeet Singh Bedi. „Video Watermarking Scheme Based on Candidates I-frames for Copyright Protection“. Indonesian Journal of Electrical Engineering and Computer Science 5, No. 2 (February 1, 2017): 391. http://dx.doi.org/10.11591/ijeecs.v5.i2.pp391-400.

Annotation:
This paper focuses on a Moving Picture Experts Group (MPEG) based digital video watermarking scheme for copyright protection services. The present work implements a video watermarking technique that utilizes discrete cosine transform (DCT) intermediate-frequency coefficients from instantaneous decoder refresh (IDR) frames. A subset of IDR frames is chosen as candidate frames to reduce the probability of temporal synchronization attacks and to achieve better robustness and high visual perceptual quality. A secret-key-based cryptographic technique is used to enhance the security of the embedded watermark. The proposed scheme embeds the watermark directly during the differential pulse code modulation (DPCM) process and extracts it by decoding the entropy details. Three keys are required to extract the watermark: one key is used to stop the extraction process and the remaining two are used to display the watermark. Robustness is evaluated by testing spatial synchronization attacks, temporal synchronization attacks, and re-encoding attacks. Simulation results show that the watermarking scheme achieves high robustness against video processing attacks that frequently occur in the real world, and good perceptibility is also obtained without changing the motion vectors during the DPCM process of the MPEG-2 encoding scheme.
44

Wang, Kelvin C. P., Xuyang Li and Robert P. Elliott. „Generic Multimedia Database for Highway Infrastructure Management“. Transportation Research Record: Journal of the Transportation Research Board 1615, No. 1 (January 1998): 56–64. http://dx.doi.org/10.3141/1615-08.

Annotation:
Images of highway right-of-way are used widely by highway agencies through their photologging services to obtain visual information for the analysis of traffic accidents, design improvement, and highway pavement management. The video data usually are in analog format, which is limited in accessibility and search, cannot automatically display site engineering data sets with video, and does not allow simultaneous access by multiple users. Recognizing the need to improve the existing photologging systems, the state highway agency of Arkansas sponsored a research project to develop a fully digital, computer-based highway information system that extends the capabilities of existing photologging equipment. The software technologies developed for a distributed multimedia-based highway information system (MMHIS) are presented. MMHIS removes several limitations of the existing systems. The advanced technologies used in this system include digital video, data synchronization, high-speed networking, and video servers. The developed system can dynamically link the digital video with the corresponding engineering site data based on a novel algorithm for the data synchronization. Also presented is a unique technique to construct a three-dimensional user interface for MMHIS based on the terrain map of Arkansas.
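The abstract does not give the paper's synchronization algorithm, but the general pattern of linking video frames to site data can be sketched. Everything here is hypothetical: the milepost-keyed record layout, the constant-speed frame-to-distance conversion, and the field names are all assumptions for illustration.

```python
import bisect

# Hypothetical site-data table keyed by roadway milepost, sorted ascending.
site_data = [
    (0.0, {"pavement": "good"}),
    (0.5, {"pavement": "fair"}),
    (1.2, {"pavement": "poor"}),
]
mileposts = [m for m, _ in site_data]

def site_record_for_frame(frame_index, fps=30.0, speed_mph=55.0):
    # Convert a video frame index to elapsed miles (assuming constant
    # vehicle speed), then find the last record at or before that point.
    miles = speed_mph * (frame_index / fps) / 3600.0
    i = bisect.bisect_right(mileposts, miles) - 1
    return site_data[max(i, 0)][1]
```

A real photologging system would replace the constant-speed assumption with a distance-measuring instrument or GPS log sampled alongside the video.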
45

O'Connor, Brian J., H. John Yack and Scott C. White. „Reducing Errors in Kinetic Calculations: Improved Synchronization of Video and Ground Reaction Force Records“. Journal of Applied Biomechanics 11, No. 2 (May 1995): 216–23. http://dx.doi.org/10.1123/jab.11.2.216.

Annotation:
A strategy is presented for temporally aligning ground reaction force and kinematic data. Alignment of these data requires marking both the force and video records at a common event. The strategy uses the information content of the video signal, which is A/D converted along with the ground reaction force analog signals, to accomplish this alignment in time. The vertical blanking pulses in the video signal, which define the start of each video field, can be readily identified, provided the correct A/D sampling rate is selected. Knowledge of the position of these vertical blanking pulses relative to the synchronization pulse makes it possible to precisely align the video and analog data in time. Choosing an A/D sampling rate of 598 Hz would enable video and analog data to be synchronized to within 1/1,196 s. Minimizing temporal alignment error results in greater accuracy and reliability in calculations used to determine joint kinetics.
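The quoted resolution follows directly from the sampling rate: once a blanking pulse is located to the nearest sample, the worst-case alignment error is half of one sample period. A one-line check, using the value from the abstract:

```python
AD_RATE_HZ = 598  # A/D sampling rate chosen in the paper

# Worst-case temporal alignment error between the video blanking pulses
# and the analog force records: half of one sample period.
worst_case_error_s = 1.0 / (2 * AD_RATE_HZ)  # = 1/1196 s, about 0.84 ms
```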
46

PARK, Youngsoo, Taewon KIM and Namho HUR. „Frame Synchronization for Depth-Based 3D Video Using Edge Coherence“. IEICE Transactions on Information and Systems E96.D, No. 9 (2013): 2166–69. http://dx.doi.org/10.1587/transinf.e96.d.2166.

47

Hoffner, Randall. „Audio-Video Synchronization across DTV Transport Interfaces: The Impossible Dream?“ SMPTE Journal 109, No. 11 (November 2000): 881–84. http://dx.doi.org/10.5594/j17554.

48

Li, Bing, and Mei-qiang Shi. „Audio-Video Synchronization Coding Approach Based on H.264/AVC“. IEICE Electronics Express 6, No. 22 (2009): 1556–61. http://dx.doi.org/10.1587/elex.6.1556.

49

Zanella, F., F. Pasqualetti, R. Carli and F. Bullo. „Simultaneous Boundary Partitioning and Cameras Synchronization for Optimal Video Surveillance“. IFAC Proceedings Volumes 45, No. 26 (September 2012): 1–6. http://dx.doi.org/10.3182/20120914-2-us-4030.00023.

50

KAMEI, Katsuari, Akifumi TOYOTA and Jun-ichi KUSHIDA. „Upsurge of Viewer's Emotion by Video Sharing Using Pseudo Synchronization“. Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 24, No. 5 (2012): 944–53. http://dx.doi.org/10.3156/jsoft.24.944.
