Journal articles on the topic "Video synchronization"

Follow this link to see other types of publications on the topic: Video synchronization.

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "Video synchronization".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

EL-Sallam, Amar A., and Ajmal S. Mian. "Correlation based speech-video synchronization". Pattern Recognition Letters 32, no. 6 (April 2011): 780–86. http://dx.doi.org/10.1016/j.patrec.2011.01.001.
2

Lin, E. T., and E. J. Delp. "Temporal Synchronization in Video Watermarking". IEEE Transactions on Signal Processing 52, no. 10 (October 2004): 3007–22. http://dx.doi.org/10.1109/tsp.2004.833866.
3

Fu, Jia Bing, and He Wei Yu. "Audio-Video Synchronization Method Based on Playback Time". Applied Mechanics and Materials 300-301 (February 2013): 1677–80. http://dx.doi.org/10.4028/www.scientific.net/amm.300-301.1677.

Abstract
This paper proposes an audio-video synchronization method based on playback time. During ordinary playback the audio playback rate is constant, so the playback times of audio and video can be aligned by locating key frames to obtain synchronization. Experimental results show that the method achieves synchronization between audio and video, is simple to implement, and reduces the system overhead required for synchronization.
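The core idea above, comparing audio and video playback clocks to decide when to resynchronize, can be sketched as follows. The function names and the 40 ms threshold (roughly one frame period at 25 fps) are illustrative assumptions, not taken from the paper:

```python
def av_drift_ms(audio_pts_ms: float, video_pts_ms: float) -> float:
    """Signed audio-video drift: positive means video lags behind audio."""
    return audio_pts_ms - video_pts_ms

def needs_resync(audio_pts_ms: float, video_pts_ms: float,
                 threshold_ms: float = 40.0) -> bool:
    # Drift beyond roughly one frame period is commonly treated as
    # out of sync (illustrative threshold, not from the paper).
    return abs(av_drift_ms(audio_pts_ms, video_pts_ms)) > threshold_ms

# Example: a video frame stamped 1000 ms while the audio clock reads 1090 ms.
print(needs_resync(1090.0, 1000.0))  # → True
```

In a player, such a check would typically run at each decoded video frame, with resynchronization done by dropping or repeating frames until the drift falls back under the threshold.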
4

Li, Xiao Ni, He Xin Chen, and Da Zhong Wang. "Research on Audio-Video Synchronization Coding Based on Mode Selection in H.264". Applied Mechanics and Materials 182-183 (June 2012): 701–5. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.701.

Abstract
An embedded audio-video synchronization compression coding approach is presented. The proposed method exploits the different mode types used by the H.264 encoder during the inter-prediction stage: different modes carry corresponding audio information, so the audio is embedded into the video stream through mode selection during inter prediction, and synchronization coding is then applied to the mixed video and audio. We verified the synchronization processing method based on H.264/AVC using the JM reference model. Experimental results show that the method achieves synchronization between audio and video at a small embedding cost, that the audio signal can be extracted without distortion, and that the method has hardly any effect on video image quality.
5

Liu, Yiguang, Menglong Yang, and Zhisheng You. "Video synchronization based on events alignment". Pattern Recognition Letters 33, no. 10 (July 2012): 1338–48. http://dx.doi.org/10.1016/j.patrec.2012.02.009.
6

Li, Mu, and Vishal Monga. "Twofold Video Hashing With Automatic Synchronization". IEEE Transactions on Information Forensics and Security 10, no. 8 (August 2015): 1727–38. http://dx.doi.org/10.1109/tifs.2015.2425362.
7

Zhou, Zhongyi, Anran Xu, and Koji Yatani. "SyncUp". Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 3 (September 9, 2021): 1–25. http://dx.doi.org/10.1145/3478120.

Abstract
The beauty of synchronized dancing lies in the synchronization of body movements among multiple dancers. While dancers use camera recordings for their practice, standard video interfaces do not efficiently support identifying segments where they are not well synchronized. This fails to close the tight loop of an iterative practice process (i.e., capturing a practice, reviewing the video, and practicing again). We present SyncUp, a system that provides multiple interactive visualizations to support the practice of synchronized dancing and liberate users from manual inspection of recorded practice videos. By analyzing videos uploaded by users, SyncUp quantifies two aspects of synchronization in dancing: pose similarity among multiple dancers and temporal alignment of their movements. The system then highlights which body parts and which portions of the dance routine require further practice to achieve better synchronization. Our system evaluations show that the pose similarity estimation and temporal alignment predictions correlated well with human ratings. Participants in our qualitative user evaluation highlighted the benefits and potential uses of SyncUp, confirming that it would enable quick iterative practice.
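A minimal sketch of the pose-similarity idea the abstract describes: represent each dancer's pose in a frame as a feature vector (e.g., joint angles) and score synchronization as the mean pairwise cosine similarity. The feature representation and function names here are illustrative assumptions; SyncUp's actual metric may differ:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length pose feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def pose_sync_score(poses):
    """Mean pairwise similarity across all dancers for one video frame."""
    n = len(poses)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cosine_similarity(poses[i], poses[j]) for i, j in pairs) / len(pairs)

# Two dancers with identical joint-angle vectors score (numerically) 1.0.
print(round(pose_sync_score([[0.9, 0.1, 0.5], [0.9, 0.1, 0.5]]), 3))  # → 1.0
```

Frames whose score falls below some threshold would then be flagged as segments needing further practice.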
8

Yang, Shu Zhen, Guang Lin Chu, and Ming Wang. "A Study on Parallel Processing Video Splicing System with Multi-Processor". Applied Mechanics and Materials 198-199 (September 2012): 304–9. http://dx.doi.org/10.4028/www.scientific.net/amm.198-199.304.

Abstract
This paper introduces a parallel-processing video splicing system with multiple processors. The main processor receives encoded video data from the video source and, after decoding, outputs the data simultaneously to the coprocessors. The coprocessors capture the video data needed for splicing and display it on the target monitor. With this method, a sophisticated time-synchronization algorithm is no longer needed. The proposed approach also lowers system resource consumption and improves the accuracy of video synchronization across multiple coprocessors.
9

Kwon, Ohsung. "Class Analysis Method Using Video Synchronization Algorithm". Journal of The Korean Association of Information Education 19, no. 4 (December 30, 2015): 441–48. http://dx.doi.org/10.14352/jkaie.2015.19.4.441.
10

Chen, T., H. P. Graf, and K. Wang. "Lip synchronization using speech-assisted video processing". IEEE Signal Processing Letters 2, no. 4 (April 1995): 57–59. http://dx.doi.org/10.1109/97.376913.
11

Sun, Shih-Wei, and Pao-Chi Chang. "Video watermarking synchronization based on profile statistics". IEEE Aerospace and Electronic Systems Magazine 19, no. 5 (May 2004): 21–25. http://dx.doi.org/10.1109/maes.2004.1301222.
12

Capuni, Ilir, Nertil Zhuri, and Rejvi Dardha. "TimeStream: Exploiting video streams for clock synchronization". Ad Hoc Networks 91 (August 2019): 101878. http://dx.doi.org/10.1016/j.adhoc.2019.101878.
13

Zhang, Qiang, Lin Yao, Yajun Li, and Jungong Han. "Video Synchronization Based on Projective-Invariant Descriptor". Neural Processing Letters 49, no. 3 (July 25, 2018): 1093–110. http://dx.doi.org/10.1007/s11063-018-9885-6.
14

Elhajj, Imad H., Nadine Bou Dargham, Ning Xi, and Yunyi Jia. "Real-Time Adaptive Content-Based Synchronization of Multimedia Streams". Advances in Multimedia 2011 (2011): 1–13. http://dx.doi.org/10.1155/2011/914062.

Abstract
Traditional synchronization schemes for multimedia applications are based on temporal relationships between and within streams. These schemes do not provide good synchronization in the presence of random delay. As a solution, this paper proposes an adaptive content-based synchronization scheme that synchronizes multimedia streams by accounting for content in addition to time. This approach is based on the fact that two streams sampled close in time are not always close in content. The scheme's primary contribution is the synchronization of audio and video streams based on content; its secondary contribution is adapting the frame rate based on content decisions. Testing adaptive content-based and adaptive time-based synchronization algorithms remotely between the American University of Beirut and Michigan State University showed that the proposed method outperforms the traditional synchronization method. Objective and subjective assessment of the received video and audio quality demonstrated that the content-based scheme provides better synchronization and overall quality of multimedia streams. Although demonstrated using a video conference application, the method can be applied to any multimedia streams, including nontraditional ones referred to as supermedia, such as control signals, haptics, and other sensory measurements. In addition, the method can synchronize more than two streams simultaneously.
15

Solokhina, T. V., Ya Ya Petrichkovich, A. A. Belyaev, I. A. Belyaev, and A. V. Egorov. "Dataflow synchronization mechanism for H.264 hardware video codec". Issues of radio electronics, no. 8 (August 7, 2019): 13–20. http://dx.doi.org/10.21778/2218-5453-2019-8-13-20.

Abstract
Modern video compression standards require significant computational resources. When video data arrive at a high rate and the computational cost is significant, hardware compression tools may be preferable to software ones. The article proposes a method for synchronizing data streams in a hardware implementation of compression/decompression according to the H.264 standard. The developed video codec is an IP core within the 1892ВМ14Я microcircuit, operating under the control of an external processor core. The architecture and main characteristics of the video codec are presented. To synchronize the computing blocks with the video-memory direct-access controller, the video codec contains an event register: a set of data-readiness flags for the blocks involved in processing. Experimental performance measurements on real video scenes with various transmitted image formats are presented, confirming the high throughput of the developed video codec.
16

Wang, Yuanyuan, Daisuke Kitayama, Yukiko Kawai, Kazutoshi Sumiya, and Yoshiharu Ishikawa. "An Automatic Video Reinforcing System for TV Programs using Semantic Metadata from Closed Captions". International Journal of Multimedia Data Engineering and Management 7, no. 1 (January 2016): 1–21. http://dx.doi.org/10.4018/ijmdem.2016010101.

Abstract
There are various TV programs, such as travel and educational programs. While watching them, viewers often search the Web for related information. However, since TV programs keep playing, viewers may miss important scenes while searching the Web, which spoils their enjoyment. Another problem is that each scene of a video covers various topics, and viewers have different levels of knowledge. It is therefore important to detect topics in videos and supplement videos with related information automatically. In this paper, the authors propose a novel automatic video reinforcing system with two functions: (1) a media synchronization mechanism, which presents supplementary information synchronized with videos so that viewers can effectively understand the geographic data in them; and (2) a video reconstruction mechanism, which generates new video contents based on viewers' interests and knowledge by adding and removing scenes, so that viewers can enjoy the generated videos without additional searching.
17

Whitehead, Anthony, Robert Laganiere, and Prosenjit Bose. "Formalization of the General Video Temporal Synchronization Problem". ELCVIA Electronic Letters on Computer Vision and Image Analysis 9, no. 1 (April 21, 2010): 1. http://dx.doi.org/10.5565/rev/elcvia.330.
18

Jang, Sung-Bong, and Hong-Seok Na. "Synchronization Quality Enhancement in 3G-324M Video Telephony". IEEE Transactions on Circuits and Systems for Video Technology 21, no. 10 (October 2011): 1512–21. http://dx.doi.org/10.1109/tcsvt.2011.2164832.
19

Kim, C. "Efficient Media Synchronization Method for Video Telephony System". IEICE Transactions on Information and Systems E89-D, no. 6 (June 1, 2006): 1901–5. http://dx.doi.org/10.1093/ietisy/e89-d.6.1901.
20

Nakano, Tamami, Yoshiharu Yamamoto, Keiichi Kitajo, Toshimitsu Takahashi, and Shigeru Kitazawa. "Synchronization of spontaneous eyeblinks while viewing video stories". Proceedings of the Royal Society B: Biological Sciences 276, no. 1673 (July 29, 2009): 3635–44. http://dx.doi.org/10.1098/rspb.2009.0828.
21

Tresadern, Philip A., and Ian D. Reid. "Video synchronization from human motion using rank constraints". Computer Vision and Image Understanding 113, no. 8 (August 2009): 891–906. http://dx.doi.org/10.1016/j.cviu.2009.03.012.
22

Andreatos, A. S., and E. N. Protonotarios. "Receiver synchronization of a packet video communication system". Computer Communications 17, no. 6 (June 1994): 387–95. http://dx.doi.org/10.1016/0140-3664(94)90123-6.
23

Cao, Xiaochun, Lin Wu, Jiangjian Xiao, Hassan Foroosh, Jigui Zhu, and Xiaohong Li. "Video synchronization and its application to object transfer". Image and Vision Computing 28, no. 1 (January 2010): 92–100. http://dx.doi.org/10.1016/j.imavis.2009.04.015.
24

Shakurova, A. R. "Analysis of visual perception features by corneal reflex components examination". Kazan medical journal 95, no. 1 (February 15, 2014): 82–86. http://dx.doi.org/10.17816/kmj1462.

Abstract
The article surveys experimental studies in which the corneal reflex was used to analyze the process of visual perception. Visual perception largely depends on the physiological characteristics of the human visual system, both individual and general. Blinking performs a number of functions, one of which is protection, including protection from unpleasant or undesired information. Blinking is closely related to processes of concentration and disinterest. Blinking while watching a video is synchronized within a single person and across a group of people watching the same video fragment. Blink synchronization depends on the video plot; background video does not cause synchronization, and the synchronization is not gender-specific. A longer blink duration is associated with a significant increase in the intervals between blinks. Accounting for these features of visual perception makes it possible to coordinate work with video in several ways, first of all by analyzing viewer reactions through monitoring blinks during viewing. Such analysis should include a detailed and comprehensive interpretation using electrophysiological, psychological, and psychophysiological tools. Thus, the analysis of visual perception through the corneal reflex components requires an interdisciplinary approach and should target results usable both for further studies of the psychological features and principles of human visual perception and for creating video that is perceived most effectively.
25

Roșca, Gabriela, and Constantin Radu Mirescu. "An Affordable Temporal Calibration Method for Common Video Camera". Applied Mechanics and Materials 555 (June 2014): 781–86. http://dx.doi.org/10.4028/www.scientific.net/amm.555.781.

Abstract
Most advanced data-gathering systems provide temporal synchronization between video capture and sensor data acquisition using dedicated (and expensive) hardware. This paper asks whether accurate temporal assessment can be obtained with an ordinary commercial video camera. Using an affordable ATmega328 microcontroller board for real-time control and an inexpensive LED-based circuit, together with some image-recognition software, an affordable temporal calibration method is proposed that addresses two timing issues of video capture: determining the real time of image capture and using externally recorded timers for synchronization between various acquisition systems.
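The LED-based calibration idea can be sketched as follows: the microcontroller blinks the LED at a known interval, the frames where the LED appears lit are detected by a brightness threshold, and the spacing of those frames yields the camera's real frame period. The function names, the brightness threshold, and the 100 ms blink interval are illustrative assumptions, not values from the paper:

```python
def led_on_frames(frame_means, threshold=128.0):
    """Indices of frames whose mean brightness indicates the LED is lit."""
    return [i for i, m in enumerate(frame_means) if m > threshold]

def estimate_frame_period_ms(on_frames, blink_interval_ms=100.0):
    """Estimate the camera's frame period from LED blinks fired at a
    known interval by the microcontroller."""
    gaps = [b - a for a, b in zip(on_frames, on_frames[1:])]
    mean_gap = sum(gaps) / len(gaps)  # frames between consecutive blinks
    return blink_interval_ms / mean_gap

# LED blinks every 100 ms and appears lit every 3rd frame -> ~33.3 ms/frame.
means = [200, 50, 50, 200, 50, 50, 200]
print(round(estimate_frame_period_ms(led_on_frames(means)), 1))  # → 33.3
```

With the frame period known, a frame index maps to the microcontroller's real-time clock, which is what allows multiple acquisition systems to be aligned afterwards.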
26

Sharma, Atul, Sushil Raut, Kohei Shimasaki, Taku Senoo, and Idaku Ishii. "Visual-Feedback-Based Frame-by-Frame Synchronization for 3000 fps Projector–Camera Visual Light Communication". Electronics 10, no. 14 (July 8, 2021): 1631. http://dx.doi.org/10.3390/electronics10141631.

Abstract
This paper proposes a novel method for synchronizing a high frame-rate (HFR) camera with an HFR projector, using a visual feedback-based synchronization algorithm for streaming video sequences in real time on a visible-light communication (VLC)-based system. The frame rates of the camera and projector are equal, and their phases are synchronized. A visual feedback-based synchronization algorithm is used to mitigate the complexity and stabilization issues of wire-based triggering in long-distance systems. The HFR projector projects a binary pattern modulated at 3000 fps. The HFR camera system operates at 3000 fps and can capture and generate a delay signal applied to the next camera clock cycle so that it matches the phase of the HFR projector. To test the synchronization performance, we used an HFR projector–camera-based VLC system in which the proposed synchronization algorithm provides maximum bandwidth utilization for the system's high-throughput transmission and efficiently reduces data redundancy. The transmitter of the VLC system encodes the input video sequence into gray code, which is projected via high-definition multimedia interface streaming in the form of 590 × 1060 binary images. At the receiver, a monochrome HFR camera can simultaneously capture and decode 12-bit 512 × 512 images in real time and reconstruct a color video sequence at 60 fps. The efficiency of the visual feedback-based synchronization algorithm is evaluated by streaming offline and live video sequences, using a VLC system with single and dual projectors (a multiple-projector-based system). The results show that the 3000 fps camera was successfully synchronized with a 3000 fps single-projector and a 1500 fps dual-projector system. It was confirmed that the synchronization algorithm can also be applied to VLC systems, autonomous vehicles, and surveillance applications.
27

Staelens, Nicolas, Jonas De Meulenaere, Lizzy Bleumers, Glenn Van Wallendael, Jan De Cock, Koen Geeraert, Nick Vercammen, et al. "Assessing the importance of audio/video synchronization for simultaneous translation of video sequences". Multimedia Systems 18, no. 6 (May 3, 2012): 445–57. http://dx.doi.org/10.1007/s00530-012-0262-4.
28

Yang, Ming, Chih-Cheng Hung, and Edward Jung. "Secure Information Delivery through High Bitrate Data Embedding within Digital Video and its Application to Audio/Video Synchronization". International Journal of Information Security and Privacy 6, no. 4 (October 2012): 71–93. http://dx.doi.org/10.4018/jisp.2012100104.

Abstract
Secure communication has traditionally been ensured with data encryption, which has become easier to break than before due to the advancement of computing power. For this reason, information hiding techniques have emerged as an alternative to achieve secure communication. In this research, a novel information hiding methodology is proposed to deliver secure information with the transmission/broadcasting of digital video. Secure data is embedded within the video frames through vector quantization. At the receiver end, the embedded information can be extracted without the presence of the original video contents. In this system, the major performance goals include visual transparency, high bitrate, and robustness to lossy compression. Based on the proposed methodology, the authors have developed a novel synchronization scheme, which ensures audio/video synchronization through speech-in-video techniques. Compared to existing algorithms, the main contributions of the proposed methodology are: (1) it achieves both high bitrate and robustness against lossy compression; (2) it investigates the impact of the embedded information on video compression performance, which previous research has not addressed. The proposed algorithm is useful in practical applications such as secure communication, captioning, speech-in-video, and video-in-video.
29

Peña, Raul, Alfonso Ávila, David Muñoz, and Juan Lavariega. "A Data Hiding Technique to Synchronously Embed Physiological Signals in H.264/AVC Encoded Video for Medicine Healthcare". BioMed Research International 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/514087.

Abstract
The recognition of clinical manifestations in both video images and physiological-signal waveforms is an important aid to improve the safety and effectiveness of medical care. Physicians can rely on video-waveform (VW) observations to recognize difficult-to-spot signs and symptoms. The VW observations can also reduce the number of false positive incidents and expand the recognition coverage to abnormal health conditions. The synchronization between the video images and the physiological-signal waveforms is fundamental for the successful recognition of the clinical manifestations. The use of conventional equipment to synchronously acquire and display the video-waveform information involves complex tasks such as video capture/compression, the acquisition/compression of each physiological signal, and video-waveform synchronization based on timestamps. This paper introduces a data hiding technique capable of both enabling embedding channels and synchronously hiding samples of physiological signals into encoded video sequences. Our data hiding technique offers large data capacity and simplifies the complexity of the video-waveform acquisition and reproduction. The experimental results revealed successful embedding and full restoration of the signals' samples. Our results also demonstrated a small distortion in the video objective quality, a small increment in bit-rate, and embedding cost savings of −2.6196% for high and medium motion video sequences.
30

Kim, Giseok, Jae-Soo Cho, Gwangsoon Lee, and Eung-Don Lee. "Real-time Temporal Synchronization and Compensation in Stereoscopic Video". Journal of Broadcast Engineering 18, no. 5 (September 30, 2013): 680–90. http://dx.doi.org/10.5909/jbe.2013.18.5.680.
31

Park, Youngsoo, Dohoon Kim, and Namho Hur. "A Method of Frame Synchronization for Stereoscopic 3D Video". Journal of Broadcast Engineering 18, no. 6 (November 30, 2013): 850–58. http://dx.doi.org/10.5909/jbe.2013.18.6.850.
32

Waddell, J. Patrick. "Audio/Video Synchronization in Compressed Systems: A Status Report". SMPTE Motion Imaging Journal 119, no. 3 (April 2010): 35–41. http://dx.doi.org/10.5594/j11397.
33

Eagan-Deprez, Kathleen, and Reginald Humphreys. "Fostering mind-body synchronization and trance using fractal video". Technoetic Arts 3, no. 2 (September 1, 2005): 93–104. http://dx.doi.org/10.1386/tear.3.2.93/1.
34

Zhang, Hao, Chuohao Yeo, and Kannan Ramchandran. "VSYNC: Bandwidth-Efficient and Distortion-Tolerant Video File Synchronization". IEEE Transactions on Circuits and Systems for Video Technology 22, no. 1 (January 2012): 67–76. http://dx.doi.org/10.1109/tcsvt.2011.2158336.
35

Rothermel, A., and R. Lares. "Synchronization of analog video signals with improved image stability". IEEE Transactions on Consumer Electronics 49, no. 4 (November 2003): 1292–300. http://dx.doi.org/10.1109/tce.2003.1261232.
36

Garner, Geoffrey, and Hyunsurk Ryu. "Synchronization of audio/video bridging networks using IEEE 802.1AS". IEEE Communications Magazine 49, no. 2 (February 2011): 140–47. http://dx.doi.org/10.1109/mcom.2011.5706322.
37

Kubota, S., M. Morikura, and S. Kato. "High-quality frame-synchronization for satellite video signal transmission". IEEE Transactions on Aerospace and Electronic Systems 31, no. 1 (January 1995): 430–35. http://dx.doi.org/10.1109/7.366324.
38

Partha Sindu, I. Gede, and A. A. Gede Yudhi Paramartha. "The Effect of the Instructional Media Based on Lecture Video and Slide Synchronization System on Statistics Learning Achievement". SHS Web of Conferences 42 (2018): 00073. http://dx.doi.org/10.1051/shsconf/20184200073.

Abstract
The purpose of this study was to determine the effect of instructional media based on a lecture video and slide synchronization system on the Statistics learning achievement of students in the PTI department. The benefit of this research is to help lecturers improve students' learning achievement, leading to better learning outcomes. Students can use instructional media created with the lecture video and slide synchronization system to support more interactive self-learning activities, and can learn more efficiently because the synchronized lecture video and slides assist them in the learning process. The population of this research was all sixth-semester students majoring in Informatics Engineering Education; the sample was the students of classes VI B and VI D of the 2016/2017 academic year. The study used a quasi-experimental design, specifically a post-test-only non-equivalent control group design. The research concluded that applying learning media based on the lecture video and slide synchronization system had a significant effect on Statistics learning achievement in the PTI department.
39

Xiahou, Jian Bing, and Zhen Xiong Wang. "The Apply of DirectShow in Scenario Interactive Teaching System". Advanced Materials Research 926-930 (May 2014): 4641–44. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.4641.

Abstract
This article describes the fundamentals of DirectShow, including its system architecture and the underlying COM technology, and explains how DirectShow is used for real-time video acquisition and storage and for audio-video synchronization in a scenario-based interactive teaching system.
40

Schiavio, Andrea, Jan Stupacher, Richard Parncutt, and Renee Timmers. "Learning Music From Each Other: Synchronization, Turn-taking, or Imitation?" Music Perception 37, no. 5 (June 2020): 403–22. http://dx.doi.org/10.1525/mp.2020.37.5.403.

Abstract
In an experimental study, we investigated how well novices can learn from each other in situations of technology-aided musical skill acquisition, comparing joint and solo learning, and learning through imitation, synchronization, and turn-taking. Fifty-four participants became familiar, either solo or in pairs, with three short musical melodies and then individually performed each from memory. Each melody was learned in a different way: participants from the solo group were asked via an instructional video to: 1) play in synchrony with the video, 2) take turns with the video, or 3) imitate the video. Participants from the duo group engaged in the same learning trials, but with a partner. Novices in both groups performed more accurately in pitch and time when learning in synchrony and turn-taking than in imitation. No differences were found between solo and joint learning. These results suggest that musical learning benefits from a shared, in-the-moment, musical experience, where responsibilities and cognitive resources are distributed between biological (i.e., peers) and hybrid (i.e., participant(s) and computer) assemblies.
41

Liu, Cunping. "Video Frequency Current Media Technology and Its Applications in Colleges and Universities". Electronics Science Technology and Application 1, no. 1 (July 27, 2014): 15. http://dx.doi.org/10.18686/esta.v1i1.4.

Abstract
Handling video material with streaming media has unparalleled advantages over traditional audio-video material. Combining this technology with campus networks has wide application in colleges and universities. Streaming media offers high transfer rates, data synchronization, and high stability, making it the best way to achieve networked audio and video transmission. Since large volumes of video and audio data transmission are a main campus-network application, video streaming technology is widely used in colleges and universities.
42

MacIntyre, Blair, Marco Lohse, Jay David Bolter, and Emmanuel Moreno. "Integrating 2-D Video Actors into 3-D Augmented-Reality Systems". Presence: Teleoperators and Virtual Environments 11, no. 2 (April 2002): 189–202. http://dx.doi.org/10.1162/1054746021470621.

Abstract
In this paper, we discuss the integration of 2-D video actors into 3-D augmented-reality (AR) systems. In the context of our research on narrative forms for AR, we have found ourselves needing highly expressive content that is most easily created by human actors. We discuss the feasibility and utility of using video actors in an AR situation and then present our Video Actor Framework (including the VideoActor editor and the Video3D Java package) for easily integrating 2-D videos of actors into Java 3D, an object-oriented 3-D graphics programming environment. The framework is based on the idea of supporting tight spatial and temporal synchronization between the content of the video and the rest of the 3-D world. We present a number of illustrative examples that demonstrate the utility of the toolkit and editor. We close with a discussion and example of our recent work implementing these ideas in Macromedia Director, a popular multimedia production tool.
43

Ahuja, Rakesh y Dr Sarabjeet Singh Bedi. "Video Watermarking Scheme Based on Candidates I-frames for Copyright Protection". Indonesian Journal of Electrical Engineering and Computer Science 5, n.º 2 (1 de febrero de 2017): 391. http://dx.doi.org/10.11591/ijeecs.v5.i2.pp391-400.

Abstract
<p>This paper focuses on a Moving Picture Experts Group (MPEG) based digital video watermarking scheme for copyright protection services. The present work implements a video watermarking technique that utilizes intermediate-frequency discrete cosine transform (DCT) coefficients from instantaneous decoder refresh (IDR) frames. A subset of IDR frames is chosen as candidate frames to reduce the probability of temporal synchronization attacks and achieve better robustness and high visual perceptual quality. A secret-key-based cryptographic technique is used to enhance the security of the embedded watermark. The proposed scheme embeds the watermark directly during the differential pulse code modulation (DPCM) process and extracts it by decoding the entropy details. Three keys are required to extract the watermark: one is used to stop the extraction process, and the remaining two are used to display the watermark. Robustness is evaluated against spatial synchronization attacks, temporal synchronization attacks, and re-encoding attacks. Simulation results show that the watermarking scheme achieves high robustness against video processing attacks that frequently occur in the real world, and good perceptibility is obtained without changing the motion vectors during the DPCM process of the MPEG-2 encoding scheme.</p>
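The abstract does not give the embedding rule itself, so the following Python sketch shows one common way such DCT-domain schemes work: quantization index modulation of a single mid-frequency coefficient. The 1-D length-8 blocks, `coeff=3`, and `delta=8.0` are illustrative assumptions, not the paper's parameters.

```python
import math

N = 8  # block length, mirroring the 8x8 DCT blocks of MPEG-2

def dct(x):
    """Unnormalized Type-II DCT of a length-N signal."""
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct(X):
    """Inverse of dct() above (Type-III DCT with matching scaling)."""
    return [X[0] / N + (2.0 / N) * sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                                       for k in range(1, N))
            for n in range(N)]

def embed_bit(block, bit, coeff=3, delta=8.0):
    """Embed one watermark bit in a mid-frequency coefficient:
    even multiples of delta encode 0, odd multiples encode 1."""
    X = dct(block)
    q = round(X[coeff] / delta)
    if q % 2 != bit:
        q += 1 if X[coeff] >= q * delta else -1  # move to the nearer valid multiple
    X[coeff] = q * delta
    return idct(X)

def extract_bit(block, coeff=3, delta=8.0):
    """Recover the bit from the coefficient's quantizer parity."""
    X = dct(block)
    return round(X[coeff] / delta) % 2
```

A larger `delta` survives stronger re-encoding at the cost of visible distortion, which is the usual robustness/perceptibility trade-off such schemes evaluate.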
44

Wang, Kelvin C. P., Xuyang Li, and Robert P. Elliott. "Generic Multimedia Database for Highway Infrastructure Management". Transportation Research Record: Journal of the Transportation Research Board 1615, no. 1 (January 1998): 56–64. http://dx.doi.org/10.3141/1615-08.

Abstract
Images of the highway right-of-way are widely used by highway agencies through their photologging services to obtain visual information for the analysis of traffic accidents, design improvement, and highway pavement management. The video data are usually in analog format, which limits accessibility and search, cannot automatically display site engineering data sets with the video, and does not allow simultaneous access by multiple users. Recognizing the need to improve existing photologging systems, the state highway agency of Arkansas sponsored a research project to develop a fully digital, computer-based highway information system that extends the capabilities of existing photologging equipment. The software technologies developed for a distributed multimedia-based highway information system (MMHIS) are presented. MMHIS removes several limitations of the existing systems. The advanced technologies used in this system include digital video, data synchronization, high-speed networking, and a video server. The developed system can dynamically link the digital video with the corresponding engineering site data based on a novel algorithm for data synchronization. Also presented is a unique technique for constructing a three-dimensional user interface for MMHIS based on the terrain map of Arkansas.
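The synchronization algorithm itself is not described in the abstract; as a toy illustration of the linking idea, the sketch below maps a video frame index to the nearest engineering site record by distance along the route. The records, frame rate, and constant survey-vehicle speed are all hypothetical simplifications.

```python
import bisect

# Hypothetical engineering site records, keyed by distance along the route (km).
records = [(0.0, "bridge joint"), (1.2, "guardrail"),
           (2.5, "overpass"), (4.1, "exit ramp")]
mileposts = [m for m, _ in records]  # sorted keys for binary search

def record_for_frame(frame, fps=30.0, speed_kmh=90.0):
    """Map a video frame index to the nearest site record, assuming the
    survey vehicle moved at a constant speed (an illustrative simplification)."""
    distance = (frame / fps) * (speed_kmh / 3600.0)  # km traveled at that frame
    i = bisect.bisect_left(mileposts, distance)
    if i == 0:
        return records[0]
    if i == len(records):
        return records[-1]
    before, after = records[i - 1], records[i]
    # Pick whichever neighboring record is closer in distance.
    return before if distance - before[0] <= after[0] - distance else after
```

A production system would index on measured distance stamps recorded with each frame rather than assume constant speed, but the nearest-record lookup is the same.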
45

O'Connor, Brian J., H. John Yack, and Scott C. White. "Reducing Errors in Kinetic Calculations: Improved Synchronization of Video and Ground Reaction Force Records". Journal of Applied Biomechanics 11, no. 2 (May 1995): 216–23. http://dx.doi.org/10.1123/jab.11.2.216.

Abstract
A strategy is presented for temporally aligning ground reaction force and kinematic data. Alignment of these data requires marking both the force and video records at a common event. The strategy uses the information content of the video signal, which is A/D converted along with the ground reaction force analog signals, to accomplish this alignment in time. The vertical blanking pulses in the video signal, which define the start of each video field, can be readily identified, provided the correct A/D sampling rate is selected. Knowledge of the position of these vertical blanking pulses relative to the synchronization pulse makes it possible to precisely align the video and analog data in time. Choosing an A/D sampling rate of 598 Hz would enable video and analog data to be synchronized to within 1/1,196 s. Minimizing temporal alignment error results in greater accuracy and reliability in calculations used to determine joint kinetics.
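The quoted timing figure follows directly from the sampling rate: locating a vertical blanking pulse in the sampled signal is uncertain by at most half a sample period. A quick check of the arithmetic (the NTSC field rate below is an assumption; the abstract does not name the video standard):

```python
# A/D sampling rate recommended by the paper for digitizing the video
# signal alongside the force-plate channels:
SAMPLE_RATE_HZ = 598.0

# Worst-case uncertainty in locating a vertical blanking pulse is half
# a sample period, giving the 1/1,196 s figure quoted in the abstract:
alignment_uncertainty_s = 1.0 / (2.0 * SAMPLE_RATE_HZ)

# For NTSC video (59.94 fields/s), 598 Hz places roughly ten samples in
# every video field, so each field start is bracketed by nearby samples:
FIELD_RATE_HZ = 59.94
samples_per_field = SAMPLE_RATE_HZ / FIELD_RATE_HZ
```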
46

PARK, Youngsoo, Taewon KIM, and Namho HUR. "Frame Synchronization for Depth-Based 3D Video Using Edge Coherence". IEICE Transactions on Information and Systems E96.D, no. 9 (2013): 2166–69. http://dx.doi.org/10.1587/transinf.e96.d.2166.

47

Hoffner, Randall. "Audio-Video Synchronization across DTV Transport Interfaces: The Impossible Dream?" SMPTE Journal 109, no. 11 (November 2000): 881–84. http://dx.doi.org/10.5594/j17554.

48

Li, Bing, and Mei-qiang Shi. "Audio-Video synchronization coding approach based on H.264/AVC". IEICE Electronics Express 6, no. 22 (2009): 1556–61. http://dx.doi.org/10.1587/elex.6.1556.

49

Zanella, F., F. Pasqualetti, R. Carli, and F. Bullo. "Simultaneous Boundary Partitioning and Cameras Synchronization for Optimal Video Surveillance*". IFAC Proceedings Volumes 45, no. 26 (September 2012): 1–6. http://dx.doi.org/10.3182/20120914-2-us-4030.00023.

50

KAMEI, Katsuari, Akifumi TOYOTA, and Jun-ichi KUSHIDA. "Upsurge of Viewer's Emotion by Video Sharing Using Pseudo Synchronization". Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 24, no. 5 (2012): 944–53. http://dx.doi.org/10.3156/jsoft.24.944.
