A selection of scholarly literature on the topic "Video synchronization"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Video synchronization".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its online abstract, if these are available in the metadata.

Journal articles on the topic "Video synchronization":

1. EL-Sallam, Amar A., and Ajmal S. Mian. "Correlation based speech-video synchronization." Pattern Recognition Letters 32, no. 6 (April 2011): 780–86. http://dx.doi.org/10.1016/j.patrec.2011.01.001.
2. Lin, E. T., and E. J. Delp. "Temporal Synchronization in Video Watermarking." IEEE Transactions on Signal Processing 52, no. 10 (October 2004): 3007–22. http://dx.doi.org/10.1109/tsp.2004.833866.
3. Fu, Jia Bing, and He Wei Yu. "Audio-Video Synchronization Method Based on Playback Time." Applied Mechanics and Materials 300–301 (February 2013): 1677–80. http://dx.doi.org/10.4028/www.scientific.net/amm.300-301.1677.

Abstract:
This paper proposes an audio-video synchronization method based on playback time. During normal playback the audio rate is constant, so the playback times of audio and video can be aligned by locating key frames, achieving synchronization. Experimental results show that the method synchronizes audio and video, is simple to implement, and reduces the system overhead required for synchronization.
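A minimal sketch of the idea as described (ours, not the paper's code): with audio as the constant-rate master clock, video is re-aligned by seeking to the key frame nearest the audio playback time. The function and its inputs are hypothetical illustrations.

```python
def sync_video_to_audio(audio_samples_played: int, sample_rate: int,
                        keyframe_times: list[float]) -> float:
    """Return the key-frame timestamp closest to the current audio time."""
    audio_time = audio_samples_played / sample_rate  # audio rate is constant
    # Locate the key frame whose timestamp is nearest the audio clock.
    return min(keyframe_times, key=lambda t: abs(t - audio_time))

# Example: 480,000 samples played at 48 kHz puts the audio clock at 10.0 s,
# so video resumes from the key frame at 9.96 s.
print(sync_video_to_audio(480_000, 48_000, [9.0, 9.96, 10.5]))
```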
4. Li, Xiao Ni, He Xin Chen, and Da Zhong Wang. "Research on Audio-Video Synchronization Coding Based on Mode Selection in H.264." Applied Mechanics and Materials 182–183 (June 2012): 701–5. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.701.

Abstract:
An embedded audio-video synchronization compression coding approach is presented. The proposed method takes advantage of the different mode types used by the H.264 encoder during the inter-prediction stage: different modes carry corresponding audio information, so the audio is embedded into the video stream through the mode decisions made during inter prediction, and synchronization coding is then applied to the mixed video and audio. We verified this synchronization method on H.264/AVC using the JM reference model; experimental results show that it achieves synchronization between audio and video at a small embedding cost, that the audio signal can be extracted without distortion, and that the method has almost no effect on the quality of the video image.
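A toy illustration of the general mode-selection idea described above; the mode groupings and bit mapping are our assumptions, not the paper's actual H.264/JM implementation.

```python
# Partition inter-prediction modes into two groups; the group chosen for a
# macroblock carries one hidden audio bit. (Hypothetical grouping.)
EVEN_MODES = ["16x16", "8x8"]   # assumed to encode a hidden bit of 0
ODD_MODES  = ["16x8", "8x16"]   # assumed to encode a hidden bit of 1

def embed_bits(best_modes: list[str], audio_bits: list[int]) -> list[str]:
    """Constrain each macroblock's inter mode so it carries one audio bit."""
    chosen = []
    for mode, bit in zip(best_modes, audio_bits):
        allowed = ODD_MODES if bit else EVEN_MODES
        # Keep the encoder's choice when it already carries the bit,
        # otherwise fall back to the first admissible mode.
        chosen.append(mode if mode in allowed else allowed[0])
    return chosen

def extract_bits(modes: list[str]) -> list[int]:
    return [1 if m in ODD_MODES else 0 for m in modes]

bits = [1, 0, 1]
modes = embed_bits(["16x16", "8x8", "16x8"], bits)
assert extract_bits(modes) == bits  # audio bits recovered without distortion
```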
5. Liu, Yiguang, Menglong Yang, and Zhisheng You. "Video synchronization based on events alignment." Pattern Recognition Letters 33, no. 10 (July 2012): 1338–48. http://dx.doi.org/10.1016/j.patrec.2012.02.009.
6. Li, Mu, and Vishal Monga. "Twofold Video Hashing With Automatic Synchronization." IEEE Transactions on Information Forensics and Security 10, no. 8 (August 2015): 1727–38. http://dx.doi.org/10.1109/tifs.2015.2425362.
7. Zhou, Zhongyi, Anran Xu, and Koji Yatani. "SyncUp." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 3 (September 9, 2021): 1–25. http://dx.doi.org/10.1145/3478120.

Abstract:
The beauty of synchronized dancing lies in the synchronization of body movements among multiple dancers. While dancers use camera recordings for their practice, standard video interfaces do not efficiently support the task of identifying segments where they are not well synchronized, which prevents a tight loop in the iterative practice process (capturing a practice, reviewing the video, and practicing again). We present SyncUp, a system that provides multiple interactive visualizations to support the practice of synchronized dancing and frees users from manual inspection of recorded practice videos. By analyzing videos uploaded by users, SyncUp quantifies two aspects of synchronization in dancing: pose similarity among multiple dancers and temporal alignment of their movements. The system then highlights which body parts and which portions of the dance routine require further practice to achieve better synchronization. Our system evaluations show that the pose similarity estimates and temporal alignment predictions correlate well with human ratings. Participants in our qualitative user evaluation noted the benefits and potential uses of SyncUp, confirming that it would enable quick iterative practice.
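The two quantities the abstract names, pose similarity and temporal alignment, can be sketched as follows. This is a simplification with assumed keypoint arrays and motion-energy signals, not the authors' implementation.

```python
import numpy as np

def pose_similarity(pose_a: np.ndarray, pose_b: np.ndarray) -> float:
    """Cosine similarity between two centered (K, 2) keypoint arrays."""
    a = (pose_a - pose_a.mean(axis=0)).ravel()
    b = (pose_b - pose_b.mean(axis=0)).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def temporal_lag(signal_a: np.ndarray, signal_b: np.ndarray) -> int:
    """Lag (in frames) that best aligns two 1D motion-energy signals."""
    corr = np.correlate(signal_a - signal_a.mean(),
                        signal_b - signal_b.mean(), mode="full")
    return int(corr.argmax() - (len(signal_b) - 1))
```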
8. Yang, Shu Zhen, Guang Lin Chu, and Ming Wang. "A Study on Parallel Processing Video Splicing System with Multi-Processor." Applied Mechanics and Materials 198–199 (September 2012): 304–9. http://dx.doi.org/10.4028/www.scientific.net/amm.198-199.304.

Abstract:
This paper introduces a parallel-processing video splicing system with multiple processors. The main processor receives encoded video data from the video source, decodes it, and simultaneously outputs the decoded data to the coprocessors. The coprocessors capture the video data needed for splicing and display it on the target monitor. With this arrangement, a sophisticated time synchronization algorithm is no longer needed. The proposed approach also lowers system resource consumption and improves the accuracy of video synchronization across multiple coprocessors.
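The broadcast structure described above can be sketched as a queue-based fan-out: one decoder pushes each decoded frame to every coprocessor, so all outputs advance in lockstep. This is our illustration of the architecture, not the paper's hardware design.

```python
import queue
import threading

NUM_COPROCESSORS = 4
out_queues = [queue.Queue(maxsize=8) for _ in range(NUM_COPROCESSORS)]

def main_processor(frames):
    for frame in frames:          # decode, then broadcast the decoded frame
        for q in out_queues:
            q.put(frame)          # every coprocessor sees the same frame
    for q in out_queues:
        q.put(None)               # end-of-stream marker

def coprocessor(idx: int):
    while (frame := out_queues[idx].get()) is not None:
        crop = frame[idx]         # take the slice this monitor displays
        print(f"coprocessor {idx} shows {crop}")

threads = [threading.Thread(target=coprocessor, args=(i,))
           for i in range(NUM_COPROCESSORS)]
for t in threads:
    t.start()
main_processor(frames=[("a", "b", "c", "d"), ("e", "f", "g", "h")])
for t in threads:
    t.join()
```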
9. Kwon, Ohsung. "Class Analysis Method Using Video Synchronization Algorithm." Journal of The Korean Association of Information Education 19, no. 4 (December 30, 2015): 441–48. http://dx.doi.org/10.14352/jkaie.2015.19.4.441.
10. Chen, T., H. P. Graf, and K. Wang. "Lip synchronization using speech-assisted video processing." IEEE Signal Processing Letters 2, no. 4 (April 1995): 57–59. http://dx.doi.org/10.1109/97.376913.

Dissertations on the topic "Video synchronization":

1. Wedge, Daniel John. "Video sequence synchronization." University of Western Australia, School of Computer Science and Software Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0084.

Abstract:
[Truncated abstract] Video sequence synchronization is necessary for any computer vision application that integrates data from multiple simultaneously recorded video sequences. With the increased availability of video cameras as either dedicated devices, or as components within digital cameras or mobile phones, a large volume of video data is available as input for a growing range of computer vision applications that process multiple video sequences. To ensure that the output of these applications is correct, accurate video sequence synchronization is essential. Whilst hardware synchronization methods can embed timestamps into each sequence on-the-fly, they require specialized hardware and it is necessary to set up the camera network in advance. On the other hand, computer vision-based software synchronization algorithms can be used to post-process video sequences recorded by cameras that are not networked, such as common consumer hand-held video cameras or cameras embedded in mobile phones, or to synchronize historical videos for which hardware synchronization was not possible. The current state-of-the-art software algorithms vary in their input and output requirements and camera configuration assumptions. ... Next, I describe an approach that synchronizes two video sequences where an object exhibits ballistic motions. Given the epipolar geometry relating the two cameras and the imaged ballistic trajectory of an object, the algorithm uses a novel iterative approach that exploits object motion to rapidly determine pairs of temporally corresponding frames. This algorithm accurately synchronizes videos recorded at different frame rates and takes few iterations to converge to sub-frame accuracy. Whereas the method presented by the first algorithm integrates tracking data from all frames to synchronize the sequences as a whole, this algorithm recovers the synchronization by locating pairs of temporally corresponding frames in each sequence. Finally, I introduce an algorithm for synchronizing two video sequences recorded by stationary cameras with unknown epipolar geometry. This approach is unique in that it recovers both the frame rate ratio and the frame offset of the two sequences by finding matching space-time interest points that represent events in each sequence; the algorithm does not require object tracking. RANSAC-based approaches that take a set of putatively matching interest points and recover either a homography or a fundamental matrix relating a pair of still images are well known. This algorithm extends these techniques using space-time interest points in place of spatial features, and uses nested instances of RANSAC to also recover the frame rate ratio and frame offset of a pair of video sequences. In this thesis, it is demonstrated that each of the above algorithms can accurately recover the frame rate ratio and frame offset of a range of real video sequences. Each algorithm makes a contribution to the body of video sequence synchronization literature, and it is shown that the synchronization problem can be solved using a range of approaches.
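The quantity all three algorithms recover is a linear time mapping between sequences, t2 = alpha*t1 + beta, where alpha is the frame rate ratio and beta the frame offset. The following sketch illustrates that model with a plain (not nested) RANSAC fit over putative event matches; the tolerance and the event-matching step are our assumptions, not the thesis algorithm itself.

```python
import random

def ransac_time_mapping(matches, iters=1000, tol=0.5):
    """matches: (t1, t2) frame indices of putatively corresponding events."""
    best, best_inliers = None, 0
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.sample(matches, 2)
        if x1 == x2:
            continue
        alpha = (y2 - y1) / (x2 - x1)   # candidate frame rate ratio
        beta = y1 - alpha * x1          # candidate frame offset
        inliers = sum(abs(alpha * x + beta - y) < tol for x, y in matches)
        if inliers > best_inliers:
            best, best_inliers = (alpha, beta), inliers
    return best

# One sequence at double the frame rate, offset by 7 frames, plus an outlier.
pairs = [(t, 2 * t + 7) for t in range(0, 100, 10)] + [(33, 5)]
print(ransac_time_mapping(pairs))  # approximately (2.0, 7.0)
```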
2. Yang, Hsueh-szu, and Benjamin Kupferschmidt. "Time Stamp Synchronization in Video Systems." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605988.

Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
Synchronized video is crucial for data acquisition and telecommunication applications. For real-time applications, out-of-sync video may cause jitter, choppiness and latency. For data analysis, it is important to synchronize multiple video channels and data that are acquired from PCM, MIL-STD-1553 and other sources. Nowadays, video codecs can be easily obtained to play most types of video. However, a great deal of effort is still required to develop the synchronization methods that are used in a data acquisition system. This paper will describe several methods that TTC has adopted in our system to improve the synchronization of multiple data sources.
3. Gaskill, David M. "Techniques for Synchronizing Thermal Array Chart Recorders to Video." International Foundation for Telemetering, 1992. http://hdl.handle.net/10150/608901.

Abstract:
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California
Video tape is becoming more and more popular for storing and analyzing missions. Video tape is inexpensive, it can hold a two-hour test, and it can be edited and manipulated with readily available consumer electronics equipment. Standard technology allows each frame to be time-stamped with SMPTE code, so that any point in the mission can be displayed on a CRT. To further correlate data from multiple acquisition systems, the SMPTE code can be derived from IRIG using commercially available code converters. Unfortunately, acquiring and storing analog data has not been so easy. Typically, analog signals from various sensors are coded, transmitted, decoded, and sent to a chart recorder. Since chart recorders cannot normally store an entire mission internally, or time-stamp each data value, it is very difficult for an analyst to accurately correlate analog data with an individual video frame. Normally the only method is to note the time stamp on the video frame, unroll the chart to the appropriate second or minute (depending on the code used) noted in the margin, and estimate the frame location as a percentage of the time-code period. This is very inconvenient if the telemetrist is trying to establish an on-line data retrieval system. To make matters worse, the methods of presentation are very different, chart paper as opposed to a CRT, and require the analyst to shift focus constantly. For these reasons, many telemetry stations do not currently have a workable plan to integrate analog and video subsystems, even though it is now generally agreed that such integration is ultimately desirable.
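The manual correlation procedure described above reduces to timecode arithmetic. A small sketch (non-drop-frame SMPTE assumed for simplicity):

```python
def smpte_to_frames(timecode: str, fps: int = 30) -> int:
    """Convert an SMPTE hh:mm:ss:ff stamp to an absolute frame count."""
    hh, mm, ss, ff = (int(p) for p in timecode.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

# A chart sample can then be placed as a fraction of a 1 Hz time-code
# period, as in the abstract's "percentage of the time code period":
frame = smpte_to_frames("01:02:03:15")    # 111,705 at 30 fps
fraction_into_second = (frame % 30) / 30  # 0.5 -> halfway into the second
print(frame, fraction_into_second)
```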
4. Daami, Mourad. "Synchronization control of coded video streams, algorithms and implementation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq26314.pdf.
5. Abraham, Justin Kuruvilla. "Study of the TR Synchronization and Video Conversion Unit." Master's thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/14137.

Abstract:
This dissertation describes the design and testing of a model of the Synchronization and Video Conversion Unit (SVCU), a subsystem of the tracking radar (TR) at Denel Overberg Test Range (OTR). The SVCU synchronizes all the radar sub-systems and also converts the returned RF target signals to digital numbers. The technology within the SVCU is outdated and spares are scarce, if not unattainable. This study forms the first phase of the development of a new SVCU and determines the specifications of the hardware needed to build the replacement. Models of the transmit and receive chain of the radar were first developed in SystemVue™. A comprehensive literature review was then done, yielding an accurate model of the current SVCU. The radar model was run with simulated target and scene parameters, and its output fed into the SVCU model. The output of the SVCU was then processed by a CFAR detector and gated tracking algorithms implemented in MathLang and Python. The simulated target was correctly identified in the range-Doppler plane. The tracking gates (used to measure range and Doppler) were then corrupted with jitter, rise-time, and offsets, and a statistical analysis was done on the effect of these impurities on the radar measurements. A new SVCU architecture, utilizing high-speed ADCs and digital integrators, was then tested. The effects on the radar measurements of non-linearities (DNL and INL) in the ADC, and of phase noise on the ADC sample clock, were analysed. The jitter on the transmit sync (TX), the ADC sample clock, and the tracking gates were found to be the most critical aspects of the SVCU. To meet the specified measurement accuracy of the radar, the root-sum-square of the jitter on these syncs (the jitter budget) must not exceed 30 nanoseconds. A case study was then done to determine the jitter budget achievable in an FPGA-centric SVCU design. The study concluded that a jitter budget of 30 ns is achievable. Moreover, in an FPGA-based design, the jitter introduced by the interface sending the TX sync from the FPGA (SVCU) to the transmitter assembly will almost entirely determine the range accuracy of the TR. From these findings, a new SVCU based on the RHINO board from the UCT RRSG was recommended and the future work outlined.
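The jitter budget stated above is a root-sum-square computation; a worked sketch with hypothetical per-sync contributions:

```python
import math

def rss_jitter(*jitters_ns: float) -> float:
    """Root-sum-square of independent jitter contributions, in ns."""
    return math.sqrt(sum(j * j for j in jitters_ns))

budget_ns = 30.0
tx, adc_clock, gates = 20.0, 15.0, 12.0    # assumed per-sync jitter, ns
total = rss_jitter(tx, adc_clock, gates)   # sqrt(769) ~ 27.7 ns
print(f"{total:.1f} ns, within budget: {total <= budget_ns}")
```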
6. Yilmaz, Ayhan. "Robust Video Transmission Using Data Hiding." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1093509/index.pdf.

Abstract:
Video transmission over noisy wireless channels leads to errors in the video, which degrade visual quality notably and make error concealment an indispensable task. The literature offers several error concealment techniques based on estimating the lost parts of the video from the available data. Using data hiding for this problem, as an alternative to predicting the lost data, provides reserve information about the video to the receiver while leaving the transmitted bit-stream syntax unchanged; hence, it improves the reconstructed video quality without significant extra channel utilization. A complete error-resilient video transmission codec is proposed, utilizing imperceptibly embedded information for combined detection, resynchronization, and reconstruction of errors and lost data. The data, imperceptibly embedded into the video itself at the encoder, is extracted from the video at the decoder side for use in error concealment. A spatial-domain error recovery technique, which hides the edge orientation information of a block, and a resynchronization technique, which embeds the bit length of a block into other blocks, are combined, along with some parity information about the hidden data, to conceal channel errors in the intra-coded frames of a video sequence. Errors in inter-coded frames are recovered mainly by hiding motion vector information, along with a checksum, in the subsequent frames. Simulation results show that the proposed approach outperforms conventional approaches for concealing errors in binary symmetric channels, especially at higher bit rates and error rates.
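One ingredient of the scheme, trusting hidden motion vectors only when their checksum verifies, can be sketched as follows. CRC-32 and the packing format are our simplifications, not the thesis's exact checksum.

```python
import struct
import zlib

def pack_mvs(mvs: list[tuple[int, int]]) -> bytes:
    """Pack motion vectors as int16 pairs and append a CRC-32 checksum."""
    payload = b"".join(struct.pack("hh", *mv) for mv in mvs)
    return payload + struct.pack("I", zlib.crc32(payload))

def unpack_mvs(blob: bytes) -> list[tuple[int, int]] | None:
    payload, (crc,) = blob[:-4], struct.unpack("I", blob[-4:])
    if zlib.crc32(payload) != crc:
        return None                 # hidden data itself was corrupted
    return [tuple(mv) for mv in struct.iter_unpack("hh", payload)]

hidden = pack_mvs([(1, -2), (0, 3)])
print(unpack_mvs(hidden))           # [(1, -2), (0, 3)]
```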
7. Potetsianakis, Emmanouil. "Enhancing video applications through timed metadata." Electronic thesis or dissertation, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT029.

Abstract:
Video recording devices are often equipped with sensors (smartphones, for example, with a GPS receiver, gyroscope, etc.) or used in settings where sensors are present (e.g., monitoring cameras in areas with temperature and/or humidity sensors). As a result, many systems process and distribute video together with timed metadata streams, often sourced as User-Generated Content. Video delivery has been thoroughly studied, but timed metadata streams have varying characteristics and forms, and no consistent and effective way exists to handle them in conjunction with video streams. In this thesis we study ways to enhance video applications through timed metadata. We define as timed metadata all the non-audiovisual data, recorded or produced, that are relevant to a specific time on the media timeline. "Enhancing" video applications has a double meaning, and this work consists of two respective parts. First, using timed metadata to extend the capabilities of multimedia applications by introducing novel functionalities. Second, using timed metadata to improve content delivery for such applications. To extend multimedia applications, we have taken an exploratory approach and demonstrate two use cases with application examples. In the first case, timed metadata is used as input for generating content; in the second, it is used to extend the navigational capabilities of the underlying multimedia content. By designing and implementing two different application scenarios, we were able to identify the potential and the limitations of video systems with timed metadata. We use the findings from the first part to improve delivery of the content using timed metadata. More specifically, we study the use of timed metadata for multi-variable adaptation in multi-view video delivery, and we test our proposals on one of the platforms developed previously. Our final contribution is a buffering scheme for synchronous and low-latency playback in live streaming systems.
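Synchronized low-latency playback of the kind named in the final contribution is commonly realized by steering the playback rate toward a target latency, so multiple receivers drift toward the same playout point. A hedged sketch with illustrative constants, not the thesis design:

```python
TARGET_LATENCY_S = 0.5   # desired distance behind the live edge
DEADBAND_S = 0.05        # tolerated deviation before adjusting

def playback_rate(buffered_s: float) -> float:
    """Speed up when the buffer runs long, slow down when it runs short."""
    error = buffered_s - TARGET_LATENCY_S
    if abs(error) <= DEADBAND_S:
        return 1.0
    # Clamp to a small, barely perceptible adjustment range.
    return max(0.95, min(1.05, 1.0 + 0.2 * error))

for level in (0.2, 0.5, 0.9):
    print(level, playback_rate(level))   # 0.95, 1.0, 1.05
```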
8. Carranza López, José Camilo. "On the synchronization of two metronomes and their related dynamics." Ilha Solteira, 2017. http://hdl.handle.net/11449/151204.

Abstract:
Advisor: Michael John Brennan
Translated from the Portuguese: This thesis investigates, theoretically and experimentally, the in-phase and anti-phase synchronization of two metronomes oscillating on a mobile base, using a model proposed here. The operation of the metronome escapement mechanism is described, together with a study of its relationship to the van der Pol oscillator. An experimental estimate of the metronome's damping is also provided. The instantaneous frequency of the numerical and experimental responses of the system is used in the analysis. Unlike previous studies, the experimental data were acquired from videos of the experiments and extracted with the help of the Tracker software. To investigate the relationship between the system's initial conditions and its final synchronization state, two-dimensional maps called 'basins of attraction' were used. The relationship between the proposed model and a previous model is also shown. The relevant parameters for both types of synchronization were found to be the ratio between the metronome mass and the base mass, and the damping of the system. It was found, both experimentally and theoretically, that the oscillation frequency of the metronomes increases when the system synchronizes in phase, and remains the same as that of an isolated metronome when the system synchronizes in anti-phase. Numerical simulations showed that, in general, increasing the damping of the system leads it to synchronize more in phase... (Full abstract: follow the electronic access link below)
Doctorate
9. Wehbe, Hassan. "Synchronisation automatique d'un contenu audiovisuel avec un texte qui le décrit." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30104/document.

Abstract:
We address the problem of automatically synchronizing audiovisual content with a procedural text that describes it. The strategy consists in extracting information about the structure of both contents and matching the pieces according to their types. We propose two video analysis tools that respectively extract: (1) the boundaries of events of interest, using an approach inspired by dictionary quantization; and (2) segments that enclose a repeated action, based on the YIN frequency analysis method. We then propose a synchronization system that merges the results from these tools to establish links between textual instructions and the corresponding video segments. To do so, a "Confidence Matrix" is built and recursively processed to identify these links according to their reliability.
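The recursive use of the "Confidence Matrix" can be sketched as a greedy best-first matching: take the most reliable instruction/segment pair, cross out its row and column, and repeat. The scores and the one-to-one constraint are our illustration, not the thesis implementation.

```python
import numpy as np

def match_by_confidence(conf: np.ndarray) -> list[tuple[int, int]]:
    """conf[i, j]: confidence that instruction i matches video segment j."""
    conf = conf.astype(float).copy()
    links = []
    while np.isfinite(conf).any():
        i, j = np.unravel_index(np.argmax(conf), conf.shape)
        links.append((int(i), int(j)))
        conf[i, :] = -np.inf   # each instruction and segment is used once
        conf[:, j] = -np.inf
    return links

scores = np.array([[0.9, 0.2, 0.1],
                   [0.3, 0.8, 0.4],
                   [0.2, 0.1, 0.7]])
print(match_by_confidence(scores))   # [(0, 0), (1, 1), (2, 2)]
```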
10. Carranza López, José Camilo [UNESP]. "On the synchronization of two metronomes and their related dynamics." Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/151204.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
This thesis concerns a theoretical and experimental investigation into the synchronization of two coupled metronomes. A simplified model is proposed to study the in-phase and anti-phase synchronization of two metronomes oscillating on a mobile base. A description of the escapement mechanism driving metronomes is given, and its relationship with the van der Pol oscillator is discussed. An experimental value for the damping in the metronome is also determined. The instantaneous frequency of the responses from both numerical and experimental data is used in the analysis. Unlike previous studies, measurements are made from videos, and the time-domain responses of the metronomes are extracted by means of the Tracker software. Basins of attraction are used to investigate the relationship between initial conditions, parameters, and the two final synchronization states. The relationship between the model and a previous pendulum model is also shown. The key parameters for both kinds of synchronization were found to be the mass ratio between the metronome mass and the base mass, and the damping in the system. It is shown, both theoretically and experimentally, that the frequency of oscillation of the metronomes increases when the system reaches in-phase synchronization, and is the same as that of an isolated metronome when the system synchronizes in anti-phase. From numerical simulations, it was found that, in general, increasing damping leads the system to synchronize in phase more often than in anti-phase. It was also found that, for a given damping value, decreasing the mass of the base results in a situation where anti-phase synchronization is more common than in-phase synchronization.
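For reference, a commonly used simplified model of two metronomes on a mobile base (in the spirit of Pantaleone's coupled-metronome analysis; stated here as an illustration, not necessarily the thesis's exact model) reads:

```latex
% theta_i: pendulum angles; x: base displacement; M, m: base and bob masses;
% the van der Pol-type term models the escapement drive.
\begin{align}
  \ddot{\theta}_i + \frac{g}{L}\sin\theta_i
    + \epsilon\!\left(\frac{\theta_i^2}{\theta_0^2} - 1\right)\dot{\theta}_i
    + \frac{\ddot{x}}{L}\cos\theta_i &= 0, \qquad i = 1, 2, \\
  (M + 2m)\,\ddot{x}
    + m L \sum_{i=1}^{2} \frac{d^2}{dt^2}\bigl(\sin\theta_i\bigr) &= 0.
\end{align}
% The mass ratio m/M and the escapement/damping parameter epsilon are the
% key quantities the thesis identifies as governing which state is reached.
```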

Books on the topic "Video synchronization":

1. Rona, Jeffrey C. Synchronization from reel to reel: A complete guide for the synchronization of audio, film & video. Milwaukee, WI: Hal Leonard Publishing Corporation, 1990.
2. Rona, Jeffrey C. Synchronization, from reel to reel: A complete guide for the synchronization of audio, film & video. Edited by Ronny S. Schiff and Scott R. Wilkinson. Milwaukee, WI: H. Leonard Pub. Corp., 1989.
3. Hawkins, Stan. Aesthetics and Hyperembodiment in Pop Videos. Edited by John Richardson, Claudia Gorbman, and Carol Vernallis. Oxford University Press, 2013. http://dx.doi.org/10.1093/oxfordhb/9780199733866.013.002.

Abstract:
This article appears in the Oxford Handbook of New Audiovisual Aesthetics, edited by John Richardson, Claudia Gorbman, and Carol Vernallis. This chapter uses textual analysis of the music video "Umbrella," featuring Rihanna, to demonstrate the intricacies of sound and image synchronization. It argues that music highlights subject positions according to the viewer's expectations, assessment, and understanding of the displayed subject. Rihanna's erotic imagery forms a critical point for contemplating the pop artist's physical responses to music. One central ingredient of most video performances is disclosed by the suggestive positioning of the gendered body, which extends far beyond everyday experience. Such notions are theorized through aspects of hyperembodiment and hypersexuality, wherein the technological constructedness of the body constitutes a prime part of video production. The aesthetics of performance are predicated on the reassemblance of the body audiovisually. Editing, production, and technology shape the images, which are stimulated by musical sound, and ultimately the audiovisual flow in pop videos mediates a range of conventions that say much about our ever-evolving cultural domains.
4. ST 318:2015: Synchronization of 59.94- or 50-Hz Related Video and Audio Systems in Analog and Digital Areas — Reference Signals. White Plains, NY: Society of Motion Picture and Television Engineers (SMPTE), 2015. http://dx.doi.org/10.5594/smpte.st318.2015.
5. Barrière, Jean-Baptiste, and Aleksi Barrière. When Music Unfolds into Image. Edited by Yael Kaduri. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780199841547.013.39.

Abstract:
The authors reflect on their own experience of developing a specific form of multimedia live performance: the visual concert. The various video projects they realized for works by Finnish composer Kaija Saariaho serve as examples illustrating a more general aesthetic question: what can video art bring to music within the concert ritual? Answers are suggested first in a general assessment of the scientific (perception and cognition research) and cultural roots and parameters of cross-media art forms, and second in an analysis of the contemporary technological tools that allow the visual concert to move beyond the antiquated paradigms of synesthesia, synchronization, or aleatory autonomy of juxtaposed media, and thus to meet the challenges of contemporary music. These mostly unexplored links between new musical techniques and video art open new opportunities that expand the listener’s experience of music and suggest a practice that can become an art form of its own.

Book chapters on the topic "Video synchronization":

1. Wang, Xue, and Qing Wang. "Video Synchronization with Trajectory Pulse." In Communications in Computer and Information Science, 12–19. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-3476-3_2.
2. Wedge, Daniel, Du Huynh, and Peter Kovesi. "Motion Guided Video Sequence Synchronization." In Computer Vision – ACCV 2006, 832–41. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11612704_83.
3. Bazin, Jean-Charles, and Alexander Sorkine-Hornung. "ActionSnapping: Motion-Based Video Synchronization." In Computer Vision – ECCV 2016, 155–69. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46454-1_10.
4. Furht, Borko, Stephen W. Smoliar, and HongJiang Zhang. "Multimedia Networking and Synchronization." In Video and Image Processing in Multimedia Systems, 33–57. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4615-2277-5_2.
5. Kryston, Kevin, Eric Novotny, Ralf Schmälzle, and Ron Tamborini. "Social Demand in Video Games and the Synchronization Theory of Flow." In Video Games, 161–77. Electronic Media Research Series. New York, NY: Routledge, 2018. http://dx.doi.org/10.4324/9781351235266-10.
6. Cai, Ying, and Grammati E. Pantziou. "A Synchronization Mechanism for Multimedia Presentation." In Multimedia Communications and Video Coding, 157–62. Boston, MA: Springer US, 1996. http://dx.doi.org/10.1007/978-1-4613-0403-6_20.
7. Zuniga, Gabriel, and Ephraim Feig. "Synchronization Issues on Software MPEG Playback Systems." In Multimedia Communications and Video Coding, 141–45. Boston, MA: Springer US, 1996. http://dx.doi.org/10.1007/978-1-4613-0403-6_18.
8. Liu, Changdong, Yong Xie, Myung J. Lee, and Tarek N. Saadawi. "Adaptive Synchronization in Real-Time Multimedia Applications." In Multimedia Communications and Video Coding, 147–56. Boston, MA: Springer US, 1996. http://dx.doi.org/10.1007/978-1-4613-0403-6_19.
9. Bulterman, Dick C. A., and Robert Liere. "Multimedia synchronization and UNIX." In Network and Operating System Support for Digital Audio and Video, 105–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/3-540-55639-7_10.
10. Rothermel, Kurt, and Gabriel Dermler. "Synchronization in Joint-Viewing Environments." In Network and Operating System Support for Digital Audio and Video, 106–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/3-540-57183-3_10.

Conference papers on the topic "Video synchronization":

1. Waingankar, P., and D. Valsan. "Audio-video synchronization." In the International Conference & Workshop. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/1980022.1980068.
2. Sung, Chih-Ta S. "MPEG audio-video synchronization." In Electronic Imaging '97, edited by Sethuraman Panchanathan and Frans Sijstermans. SPIE, 1997. http://dx.doi.org/10.1117/12.263515.
3. Mediavilla, Ricardo. "Automatic test station for network synchronization performance characterization." In Voice, Video, and Data Communications, edited by John M. Senior, Robert A. Cryan, and Chunming Qiao. SPIE, 1997. http://dx.doi.org/10.1117/12.290373.
4. Shankar, Sukrit, Joan Lasenby, and Anil Kokaram. "Warping trajectories for video synchronization." In the 4th ACM/IEEE International Workshop. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2510650.2510654.
5. Lin, Eugene T., and Edward J. Delp III. "Temporal synchronization in video watermarking." In Electronic Imaging 2002, edited by Edward J. Delp III and Ping W. Wong. SPIE, 2002. http://dx.doi.org/10.1117/12.465310.
6. Stokking, Hans, Pablo Cesar, Fernando Boronat, and Mario Montagud. "Media Synchronization Workshop." In TVX'15: ACM International Conference on Interactive Experiences for TV and Online Video. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2745197.2745699.
7. Wieschollek, Patrick, Ido Freeman, and Hendrik P. A. Lensch. "Learning Robust Video Synchronization without Annotations." In 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2017. http://dx.doi.org/10.1109/icmla.2017.0-173.
8. Wu, Yuanyuan, Xiaohai He, and Truong Q. Nguyen. "Subframe video synchronization by matching trajectories." In ICASSP 2013 - 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013. http://dx.doi.org/10.1109/icassp.2013.6638060.
9. Li, Mu, and Vishal Monga. "Twofold video hashing with automatic synchronization." In 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014. http://dx.doi.org/10.1109/icip.2014.7026085.
10. Yang, Ming, Nikolaos Bourbakis, Zizhong Chen, and Monica Trifas. "An Efficient Audio-Video Synchronization Methodology." In 2007 IEEE International Conference on Multimedia and Expo. IEEE, 2007. http://dx.doi.org/10.1109/icme.2007.4284763.
