Dissertations on the topic "Video synchronization"

To see other types of publications on this topic, follow the link: Video synchronization.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a source type:

Consult the top 30 dissertations for your research on the topic "Video synchronization".

Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic citation of the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, when these are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Wedge, Daniel John. "Video sequence synchronization." University of Western Australia. School of Computer Science and Software Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0084.

Abstract:
[Truncated abstract] Video sequence synchronization is necessary for any computer vision application that integrates data from multiple simultaneously recorded video sequences. With the increased availability of video cameras as either dedicated devices, or as components within digital cameras or mobile phones, a large volume of video data is available as input for a growing range of computer vision applications that process multiple video sequences. To ensure that the output of these applications is correct, accurate video sequence synchronization is essential. Whilst hardware synchronization methods can embed timestamps into each sequence on-the-fly, they require specialized hardware and it is necessary to set up the camera network in advance. On the other hand, computer vision-based software synchronization algorithms can be used to post-process video sequences recorded by cameras that are not networked, such as common consumer hand-held video cameras or cameras embedded in mobile phones, or to synchronize historical videos for which hardware synchronization was not possible. The current state-of-the-art software algorithms vary in their input and output requirements and camera configuration assumptions. ... Next, I describe an approach that synchronizes two video sequences where an object exhibits ballistic motions. Given the epipolar geometry relating the two cameras and the imaged ballistic trajectory of an object, the algorithm uses a novel iterative approach that exploits object motion to rapidly determine pairs of temporally corresponding frames. This algorithm accurately synchronizes videos recorded at different frame rates and takes few iterations to converge to sub-frame accuracy. Whereas the method presented by the first algorithm integrates tracking data from all frames to synchronize the sequences as a whole, this algorithm recovers the synchronization by locating pairs of temporally corresponding frames in each sequence. 
Finally, I introduce an algorithm for synchronizing two video sequences recorded by stationary cameras with unknown epipolar geometry. This approach is unique in that it recovers both the frame rate ratio and the frame offset of the two sequences by finding matching space-time interest points that represent events in each sequence; the algorithm does not require object tracking. RANSAC-based approaches that take a set of putatively matching interest points and recover either a homography or a fundamental matrix relating a pair of still images are well known. This algorithm extends these techniques using space-time interest points in place of spatial features, and uses nested instances of RANSAC to also recover the frame rate ratio and frame offset of a pair of video sequences. In this thesis, it is demonstrated that each of the above algorithms can accurately recover the frame rate ratio and frame offset of a range of real video sequences. Each algorithm makes a contribution to the body of video sequence synchronization literature, and it is shown that the synchronization problem can be solved using a range of approaches.
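The last algorithm's core idea, recovering a frame rate ratio and frame offset from putatively matching events via RANSAC, can be illustrated with a toy one-dimensional sketch. The mapping t2 = a*t1 + b, the sample data, and all names below are hypothetical simplifications; the thesis itself uses space-time interest points and nested RANSAC instances, not this reduced version.

```python
import random

def ransac_time_map(pairs, iters=200, tol=0.5, seed=1):
    """Fit t2 = a*t1 + b from noisy (t1, t2) event pairs containing outliers."""
    rng = random.Random(seed)
    best = (0, None)
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(pairs, 2)
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)        # candidate frame rate ratio
        b = y1 - a * x1                  # candidate frame offset
        inliers = [p for p in pairs if abs(p[1] - (a * p[0] + b)) < tol]
        if len(inliers) > best[0]:
            best = (len(inliers), (a, b))
    return best[1]

# true mapping: sequence 2 runs at 25/30 of sequence 1's rate, offset 12 frames
pairs = [(t, 25 / 30 * t + 12) for t in range(0, 300, 10)]
pairs += [(50, 400), (130, 7), (210, 999)]   # gross mismatches (outliers)
a, b = ransac_time_map(pairs)
```

The consensus step rejects the spurious event matches, so the recovered (a, b) comes only from the temporally consistent pairs.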
2

Yang, Hsueh-szu, and Benjamin Kupferschmidt. "Time Stamp Synchronization in Video Systems." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605988.

Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
Synchronized video is crucial for data acquisition and telecommunication applications. For real-time applications, out-of-sync video may cause jitter, choppiness and latency. For data analysis, it is important to synchronize multiple video channels and data that are acquired from PCM, MIL-STD-1553 and other sources. Nowadays, video codecs can be easily obtained to play most types of video. However, a great deal of effort is still required to develop the synchronization methods that are used in a data acquisition system. This paper will describe several methods that TTC has adopted in our system to improve the synchronization of multiple data sources.
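The data-analysis goal mentioned above, correlating video channels with data acquired from other sources, amounts to aligning streams on a common time base. The sketch below pairs each video frame timestamp with the nearest data sample; the names and the nearest-neighbour policy are illustrative assumptions, not TTC's actual method.

```python
import bisect

def align(video_ts, data):
    """For each video frame timestamp, pick the nearest data sample.
    data: time-sorted list of (timestamp, value), e.g. from a PCM stream."""
    times = [t for t, _ in data]
    out = []
    for vt in video_ts:
        i = bisect.bisect_left(times, vt)
        cands = [j for j in (i - 1, i) if 0 <= j < len(data)]
        j = min(cands, key=lambda j: abs(times[j] - vt))
        out.append((vt, data[j][1]))
    return out

frames = [0.0, 0.033, 0.066, 0.1]          # ~30 fps frame times (seconds)
samples = [(0.0, 'a'), (0.05, 'b'), (0.09, 'c')]
aligned = align(frames, samples)
```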
3

Gaskill, David M. "TECHNIQUES FOR SYNCHRONIZING THERMAL ARRAY CHART RECORDERS TO VIDEO." International Foundation for Telemetering, 1992. http://hdl.handle.net/10150/608901.

Abstract:
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California
Video tape is becoming more and more popular for storing and analyzing missions. Video tape is inexpensive, it can hold a two-hour test, and it can be edited and manipulated by easily available consumer electronics equipment. Standard technology allows each frame to be time-stamped with SMPTE code, so that any point in the mission can be displayed on a CRT. To further correlate data from multiple acquisition systems, the SMPTE code can be derived from IRIG using commercially available code converters. Unfortunately, acquiring and storing analog data has not been so easy. Typically, analog signals from various sensors are coded, transmitted, decoded and sent to a chart recorder. Since chart recorders cannot normally store an entire mission internally, or time-stamp each data value, it is very difficult for an analyst to accurately correlate analog data to an individual video frame. Normally the only method is to note the time stamp on the video frame, unroll the chart to the appropriate second or minute (depending on the code used) noted in the margin, and estimate the frame location as a percentage of the time code period. This is very inconvenient if the telemetrist is trying to establish an on-line data retrieval system. To make matters worse, the methods of presentation are very different, chart paper as opposed to a CRT, and require the analyst to shift focus constantly. For these reasons, many telemetry stations do not currently have a workable plan to integrate analog and video subsystems, even though it is now generally agreed that such integration is ultimately desirable.
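The frame-to-chart correlation described above is simple arithmetic once timecode is converted to an absolute frame count. A minimal sketch, assuming non-drop-frame SMPTE timecode at 30 fps and a hypothetical paper speed (both values invented for illustration):

```python
def smpte_to_frames(tc, fps=30):
    """Convert a non-drop SMPTE timecode 'HH:MM:SS:FF' to an absolute frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def chart_offset(tc, fps=30, paper_speed_mm_s=25.0):
    """Estimate the chart position (mm from start of roll) corresponding to a
    video frame time-stamped tc, interpolating within the timecode period."""
    return smpte_to_frames(tc, fps) / fps * paper_speed_mm_s

n = smpte_to_frames("01:00:00:15")   # frame count one hour and 15 frames in
```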
4

Daami, Mourad. "Synchronization control of coded video streams, algorithms and implementation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq26314.pdf.

5

Abraham, Justin Kuruvilla. "Study of the TR Synchronization and Video Conversion Unit." Master's thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/14137.

Abstract:
This dissertation describes the design and testing of a model of the Synchronization and Video Conversion Unit (SVCU), a subsystem of the tracking radar (TR) at Denel Overberg Test Range (OTR). The SVCU synchronizes all the radar sub-systems and also converts the returned RF target signals to digital numbers. The technology within the SVCU is outdated and spares are scarce if not unattainable. This study forms the first phase of the development of a new SVCU and will determine the specifications of the hardware needed to build the replacement. Models of the transmit and receive chain of the radar were first developed in SystemVue™. A comprehensive literature review was then done, yielding an accurate model of the current SVCU. The radar model was run, with simulated target and scene parameters, and its output fed into the SVCU model. The output of the SVCU was then processed by a CFAR detector and gated tracking algorithms implemented in MathLang and Python. The simulated target was correctly identified in the range-Doppler plane. The tracking gates (used to measure range and Doppler) were then corrupted with jitter, rise-time and offsets. A statistical analysis was done on the effect of these impurities on the radar measurements. A new SVCU architecture, utilizing high-speed ADCs and digital integrators, was then tested. The effects of non-linearities (DNL and INL) in the ADC and phase noise on the ADC sample clock on the radar measurements were analysed. The jitter on the transmit sync (TX), the ADC sample clock and tracking gates were found to be the most critical aspects of the SVCU. To meet the specified measurement accuracy of the radar, the root-sum-square of the jitter on these syncs (jitter budget) must not exceed 30 nanoseconds. A case study was then done to determine the jitter budget achievable in an FPGA-centric SVCU design. The study concluded that a jitter budget of 30 ns is achievable.
Moreover, in an FPGA based design the jitter introduced by the interface sending the TX sync from the FPGA (SVCU) to the transmitter assembly will, almost entirely, determine the range accuracy of the TR. From these findings, a new SVCU, based on the RHINO board from the UCT RRSG, was recommended and the future work outlined.
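The root-sum-square jitter budget mentioned above combines independent jitter sources as the square root of the sum of their squares. A one-line sketch (the 20/15/15 ns allocation is a hypothetical example, not the thesis's figures):

```python
import math

def jitter_budget(components_ns):
    """Root-sum-square combination of independent jitter sources, in ns."""
    return math.sqrt(sum(j * j for j in components_ns))

# hypothetical allocation across TX sync, ADC sample clock, tracking gates
total = jitter_budget([20.0, 15.0, 15.0])
ok = total <= 30.0          # meets the 30 ns budget from the study
```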
6

Yilmaz, Ayhan. "Robust Video Transmission Using Data Hiding." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1093509/index.pdf.

Abstract:
Video transmission over noisy wireless channels leads to errors in the video, which degrade the visual quality notably and make error concealment an indispensable job. In the literature, there are several error concealment techniques based on estimating the lost parts of the video from the available data. Utilization of data hiding for this problem, an alternative to predicting the lost data, provides reserve information about the video to the receiver while leaving the transmitted bit-stream syntax unchanged; hence, it improves the reconstructed video quality without significant extra channel utilization. A complete error-resilient video transmission codec is proposed, utilizing imperceptibly embedded information for combined detection, resynchronization and reconstruction of the errors and lost data. The data, which is imperceptibly embedded into the video itself at the encoder, is extracted from the video at the decoder side to be utilized in error concealment. A spatial-domain error recovery technique, which hides the edge orientation information of a block, and a resynchronization technique, which embeds the bit length of a block into other blocks, are combined, along with some parity information about the hidden data, to conceal channel errors on intra-coded frames of a video sequence. The errors on inter-coded frames are basically recovered by hiding motion vector information, along with a checksum, in the next frames. The simulation results show that the proposed approach performs superior to conventional approaches for concealing the errors in binary symmetric channels, especially for higher bit rates and error rates.
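The thesis hides edge orientation, bit lengths, motion vectors and parity in the video itself; as a generic stand-in (not the authors' exact scheme), the sketch below embeds a payload plus a parity bit into the least-significant bits of pixel values and checks it on extraction.

```python
def embed(pixels, bits):
    """Hide bits in the least-significant bits of 8-bit pixel values."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract(pixels, n):
    return [p & 1 for p in pixels[:n]]

block = [52, 55, 61, 66, 70, 61, 64, 73]   # toy 8-pixel block
payload = [1, 0, 1, 1, 0, 1, 0]
parity = sum(payload) % 2                   # simple parity over the hidden data
stego = embed(block, payload + [parity])
recovered = extract(stego, 8)
valid = (sum(recovered[:-1]) % 2) == recovered[-1]
```

The LSB changes are imperceptible, while the parity bit lets the decoder detect corruption of the hidden data before using it for concealment.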
7

Potetsianakis, Emmanouil. "Enhancing video applications through timed metadata." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT029.

Abstract:
Video recording devices are often equipped with sensors (smartphones, for example, with GPS receiver, gyroscope, etc.), or used in settings where sensors are present (e.g. monitoring cameras, areas with temperature and/or humidity sensors). As a result, many systems process and distribute video together with timed metadata streams, often sourced as User-Generated Content. Video delivery has been thoroughly studied; however, timed metadata streams have varying characteristics and forms, so a consistent and effective way to handle them in conjunction with video streams does not exist. In this thesis we study ways to enhance video applications through timed metadata. We define as timed metadata all the non-audiovisual data, recorded or produced, that are relevant to a specific time on the media timeline. "Enhancing" video applications has a double meaning, and this work consists of two respective parts. First, using the timed metadata to extend the capabilities of multimedia applications, by introducing novel functionalities. Second, using the timed metadata to improve the content delivery for such applications. To extend multimedia applications, we have taken an exploratory approach, and we demonstrate two use cases with application examples. In the first case, timed metadata is used as input for generating content, and in the second, it is used to extend the navigational capabilities of the underlying multimedia content. By designing and implementing two different application scenarios, we were able to identify the potential and the limitations of video systems with timed metadata. We use the findings from the first part to improve the delivery of the content. More specifically, we study the use of timed metadata for multi-variable adaptation in multi-view video delivery, and we test our proposals on one of the previously developed platforms. Our final contribution is a buffering scheme for synchronous and low-latency playback in live streaming systems.
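"Relevant to a specific time on the media timeline" implies a lookup of the metadata sample in effect at a given playback time. A minimal sketch of such a timed-metadata track (class name, sample-and-hold policy, and GPS example all assumed for illustration, not taken from the thesis):

```python
import bisect

class TimedMetadataTrack:
    """Minimal track of (time, value) metadata samples on a media timeline."""
    def __init__(self, samples):
        self.samples = sorted(samples)
        self.times = [t for t, _ in self.samples]

    def at(self, t):
        """Latest sample with timestamp <= t (None before the first sample)."""
        i = bisect.bisect_right(self.times, t) - 1
        return self.samples[i][1] if i >= 0 else None

# hypothetical GPS track attached to a video's timeline (seconds -> lat/lon)
gps = TimedMetadataTrack([(0.0, (48.85, 2.35)), (5.0, (48.86, 2.35))])
pos = gps.at(3.2)
```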
8

Carranza, López José Camilo. "On the synchronization of two metronomes and their related dynamics /." Ilha Solteira, 2017. http://hdl.handle.net/11449/151204.

Abstract:
Advisor: Michael John Brennan
Abstract: This thesis investigates, theoretically and experimentally, the in-phase and anti-phase synchronization of two metronomes oscillating on a mobile base, based on a model proposed here. The working of the metronomes' escapement mechanism is described, together with a study of its relationship to the van der Pol oscillator. An experimental estimate of the metronome's damping is also provided. The instantaneous frequency of the numerical and experimental responses of the system is used in the analysis. Unlike previous work, the experimental data were acquired from videos of the experiments and extracted with the help of the Tracker software. To investigate the relationship between the system's initial conditions and its final synchronization state, two-dimensional maps called basins of attraction were used. The relationship between the proposed model and a previous model is also shown. The relevant parameters for both types of synchronization were found to be the ratio between the metronome mass and the base mass, and the damping of the system. It was found, both experimentally and theoretically, that the oscillation frequency of the metronomes increases when the system synchronizes in phase, and remains that of an isolated metronome when the system synchronizes in anti-phase. Numerical simulations showed that, in general, increases in the system damping lead the system to synchronize in phase more tha... (For the complete abstract, follow the electronic access link below)
Doctorate
9

Wehbe, Hassan. "Synchronisation automatique d'un contenu audiovisuel avec un texte qui le décrit." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30104/document.

Abstract:
We address the problem of automatically synchronizing an audiovisual content with a procedural text that describes it. The strategy consists in extracting pieces of information about the structure of both contents, and in matching them depending on their types. We propose two video analysis tools that respectively extract: * the limits of events of interest, using an approach inspired by dictionary quantization; * the segments that enclose a repeated action, based on the YIN frequency analysis method. We then propose a synchronization system that merges the results coming from these tools in order to establish links between textual instructions and the corresponding video segments. To do so, a "Confidence Matrix" is built and recursively processed in order to identify these links with respect to their reliability.
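One simple way to turn such a confidence matrix into instruction-to-segment links is to repeatedly take the highest remaining entry; this greedy sketch is a stand-in for the recursive processing the thesis describes, with toy values invented for illustration.

```python
def match_from_confidence(conf):
    """Greedily pair instructions (rows) with video segments (columns),
    repeatedly taking the highest remaining confidence entry."""
    pairs = []
    used_r, used_c = set(), set()
    cells = sorted(((v, r, c) for r, row in enumerate(conf)
                    for c, v in enumerate(row)), reverse=True)
    for v, r, c in cells:
        if r not in used_r and c not in used_c and v > 0:
            pairs.append((r, c, v))
            used_r.add(r)
            used_c.add(c)
    return pairs

conf = [[0.9, 0.2, 0.1],     # toy confidence of instruction i vs segment j
        [0.3, 0.8, 0.2],
        [0.1, 0.4, 0.7]]
links = match_from_confidence(conf)
```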
10

Carranza, López José Camilo [UNESP]. "On the synchronization of two metronomes and their related dynamics." Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/151204.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
This thesis concerns a theoretical and experimental investigation into the synchronization of two coupled metronomes. A simplified model is proposed to study the in-phase and anti-phase synchronization of two metronomes oscillating on a mobile base. A description of the escapement mechanism driving the metronomes is given, and its relationship with the van der Pol oscillator is discussed. An experimental value for the damping in the metronome is also determined. The instantaneous frequency of the responses from both numerical and experimental data is used in the analysis. Unlike previous studies, measurements are made using videos, and the time-domain responses of the metronomes are extracted by means of the Tracker software. Basins of attraction are used to investigate the relationship between initial conditions, parameters and the two final synchronization states. The relationship between the model and a previous pendulum model is also shown. The key parameters for both kinds of synchronization have been found to be the ratio between the metronome mass and the base mass, and the damping in the system. It has been shown, both theoretically and experimentally, that the frequency of oscillation of the metronomes increases when the system reaches in-phase synchronization, and is the same as that of an isolated metronome when the system synchronizes in anti-phase. From numerical simulations, it has been found that, in general, increasing damping leads the system to synchronize in phase more often than in anti-phase. It has also been found that, for a given damping value, decreasing the mass of the base results in a situation where anti-phase synchronization is more common than in-phase synchronization.
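The dynamics summarized above can be explored numerically. The crude sketch below (not the thesis model; the coupling form, mass ratio and every parameter value are invented for illustration) integrates two van der Pol oscillators coupled through the reaction acceleration of a light mobile base, and reports their steady-state amplitudes.

```python
def simulate(mu=0.5, beta=0.05, steps=100000, dt=0.001):
    """Two van der Pol 'metronomes' x1, x2 on a light mobile base.
    beta plays the role of the metronome/base mass ratio; the base's
    reaction acceleration couples the two self-excited oscillators."""
    x1, v1 = 1.0, 0.0
    x2, v2 = -0.6, 0.1
    tail = []
    for k in range(steps):
        f1 = -mu * (x1 * x1 - 1.0) * v1 - x1   # escapement-like self-excitation
        f2 = -mu * (x2 * x2 - 1.0) * v2 - x2
        ab = -beta * (f1 + f2) / (1.0 - 2.0 * beta)  # base reaction acceleration
        a1, a2 = f1 - ab, f2 - ab
        v1 += a1 * dt; x1 += v1 * dt           # semi-implicit Euler step
        v2 += a2 * dt; x2 += v2 * dt
        if k >= steps - 10000:
            tail.append((x1, x2))
    amp1 = max(abs(x) for x, _ in tail)
    amp2 = max(abs(x) for _, x in tail)
    return amp1, amp2

amp1, amp2 = simulate()
```

Both oscillators settle onto the van der Pol limit cycle (amplitude near 2); varying beta and adding base damping is how one would start probing the in-phase versus anti-phase behaviour.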
11

Nilsson, Johan, and Mikael Rothin. "Live Demonstration of Mismatch Compensation for Time-Interleaved ADCs." Thesis, Linköpings universitet, Elektroniksystem, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-78709.

Abstract:
The purpose of this thesis is to demonstrate the effects of the mismatch errors that occur in time-interleaved analog-to-digital converters (TI-ADCs) and how these are compensated for by proprietary methods from Signal Processing Devices Sweden AB. This is demonstrated by two different implementations, both based on the combined digitizer/generator SDR14, in a way that is easy to grasp for people with limited knowledge of signal processing. The first implementation is an analog video demo in which an analog video signal is sampled by the TI-ADC in the SDR14, converted back to analog and displayed with the help of a TV tuner. The mismatch compensation can be turned on and off, and the difference in the resulting video image is clearly visible. The second implementation is a digital communication demo based on W-CDMA, implemented on the FPGA of the SDR14. Four parallel 5 MHz W-CDMA signals are sent and received by the SDR14. QPSK, 16-QAM, and 64-QAM modulated signals were successfully sent, and the mismatch effects were clearly visible in the constellation diagrams. Techniques used include root-raised cosine pulse shaping, RF modulation, carrier recovery, and timing recovery.
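The mismatch in question comes from the sub-ADCs of a TI-ADC having slightly different gains and offsets. The proprietary compensation is of course not public; the sketch below (all values illustrative) shows the generic idea of estimating and removing per-channel offset and gain from the interleaved data itself.

```python
import math

def interleave_sample(n, gains=(1.0, 1.05), offsets=(0.0, 0.02)):
    """Sample a sine with a 2-way time-interleaved ADC whose second
    sub-ADC has gain and offset mismatch (illustrative values)."""
    out = []
    for k in range(n):
        ch = k % 2
        x = math.sin(2 * math.pi * 0.0123 * k)
        out.append(gains[ch] * x + offsets[ch])
    return out

def compensate(samples):
    """Estimate and remove per-channel offset and gain mismatch."""
    fixed = list(samples)
    ref_rms = None
    for ch in (0, 1):
        data = samples[ch::2]
        mean = sum(data) / len(data)
        rms = math.sqrt(sum((v - mean) ** 2 for v in data) / len(data))
        if ref_rms is None:
            ref_rms = rms                      # channel 0 is the reference
        g = ref_rms / rms
        for i, v in enumerate(data):
            fixed[2 * i + ch] = (v - mean) * g # remove offset, equalize gain
    return fixed

raw = interleave_sample(4096)
fixed = compensate(raw)
```

After compensation the two sub-channels have matched statistics, which is what suppresses the characteristic interleaving spurs.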
12

Rossholm, Andreas. "On Enhancement and Quality Assessment of Audio and Video in Communication Systems." Doctoral thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00604.

Abstract:
The use of audio and video communication has increased exponentially over the last decade and has gone from speech over GSM to HD-resolution video conferencing between continents on mobile devices. As the use becomes more widespread, the interest in delivering high-quality media increases, even on devices with limited resources. This includes development and enhancement of the communication chain, but also the topic of objective measurement of the perceived quality. The focus of this thesis work has been to perform enhancement within speech encoding and video decoding, to measure influence factors of audio and video performance, and to build methods to predict the perceived video quality. The audio enhancement part of this thesis addresses the well-known problem in the GSM system of an interfering signal generated by the switching nature of TDMA cellular telephony. Two different solutions are given to suppress such interference internally in the mobile handset. The first method involves the use of subtractive noise cancellation employing correlators; the second uses a structure of IIR notch filters. Both solutions use control algorithms based on the state of the communication between the mobile handset and the base station. The video enhancement part presents two post-filters. These two filters are designed to improve the visual quality of highly compressed video streams from standard, block-based video codecs by combating both blocking and ringing artifacts; the second post-filter also performs sharpening. The third part addresses the problem of measuring audio and video delay, as well as the skew between them, also known as synchronization. This method is a black-box technique, which enables it to be applied to any audiovisual application, proprietary as well as open standards, and it can be run on any platform and over any network connectivity.
The last part addresses no-reference (NR) bitstream video quality prediction using features extracted from the coded video stream. Several methods have been used and evaluated: Multiple Linear Regression (MLR), Artificial Neural Networks (ANN), and Least-Squares Support Vector Machines (LS-SVM), showing high correlation with both MOS and objective video assessment methods such as PSNR and PEVQ. The impact of temporal, spatial and quantization variations on perceptual video quality has also been addressed, together with the trade-off between them; for this purpose, a set of locally conducted subjective experiments was performed.
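Of the prediction methods listed, MLR is the simplest to sketch: fit a linear map from bitstream features to a MOS-like score. The features, data and coefficients below are invented for illustration; this is ordinary least squares via the normal equations, not the thesis's trained models.

```python
def fit_mlr(X, y):
    """Ordinary least squares via normal equations and Gauss-Jordan elimination.
    Rows of X might be bitstream features; y a subjective quality score."""
    n, d = len(X), len(X[0]) + 1
    A = [[1.0] + list(row) for row in X]           # prepend intercept column
    # build the augmented normal equations (A^T A | A^T y)
    M = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(d)]
         + [sum(A[k][i] * y[k] for k in range(n))] for i in range(d)]
    for i in range(d):
        p = max(range(i, d), key=lambda r: abs(M[r][i]))   # partial pivoting
        M[i], M[p] = M[p], M[i]
        for r in range(d):
            if r != i and M[r][i]:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    return [M[i][d] / M[i][i] for i in range(d)]

# toy data generated from a known linear law: score = 4.5 - 0.8*f1 + 0.3*f2
X = [[0.1, 0.2], [0.5, 0.1], [0.9, 0.7], [0.3, 0.9], [0.7, 0.4]]
y = [4.5 - 0.8 * a + 0.3 * b for a, b in X]
w = fit_mlr(X, y)   # recovers [intercept, coef1, coef2]
```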
13

Zara, Henri. "Système d'acquisition vidéo rapide : application à la mécanique des fluides." Saint-Etienne, 1997. http://www.theses.fr/1997STET4012.

Abstract:
Vision systems are now widely used for experimental studies in fluid mechanics. Image acquisition techniques, however, run into a technological limitation on frame rate and image resolution. New high-speed electronic image sensors, as well as certain exposure methods, offer solutions to this problem. This thesis presents a high-speed video acquisition system built around an experimental fluid mechanics platform. The system consists in particular of: an 8-bit digital CCD camera with a resolution of 512x512 pixels at a rate of 100 frames/s; and a synchronization system that provides the various image exposure modes as well as the synchronization of the platform's components. The whole is controlled by a PC, both to configure the parameters and to store the images. The illumination technique used is laser tomography. An original technological choice led us to couple the camera with a light intensifier. This choice allows the use of low-power continuous laser sources. It also offers wide image-exposure possibilities through control of the intensifier's optical gate. In the first part we present a detailed study of semiconductor image sensors and of image intensifiers. The technical descriptions of the camera and of the synchronization system are then given. The final video acquisition setup uses two synchronized cameras to record a sequence of image pairs of very close occurrence (50 ns). Several experimental examples confirm the capabilities of our system.
14

Ali, Usman. "WiBOX - Une passerelle pour une réception robuste de vidéo diffusée via WIMAX et une rediffusion indoor via WIFI." Phd thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00576262.

Abstract:
This thesis studies a number of tools (gathered in the WiBox) required for reliable reception of video broadcast over WiMAX and then rebroadcast over WiFi. The goal is to provide WiMAX services to WiFi users with reasonable video reception quality, even with a very weak WiMAX signal. To this end, joint decoding techniques for erroneous packets are essential in order to limit the delays associated with retransmissions. In the first part of this thesis, we consider the problem of delineating packets aggregated into macro-packets. This aggregation is performed in many protocols to improve the header-to-payload ratio of communication systems. Several delineation methods are proposed. They exploit, on the one hand, soft information coming from the lower protocol layers and, on the other hand, the redundancy present in the packets to be separated. The set of possible packet successions within a macro-packet is described with a trellis, and the delineation problem is turned into the problem of estimating the state of a Markov random variable, for which many algorithms (BCJR, Viterbi) are available. This technique is very effective but complex; moreover, it requires the reception of the whole macro-packet, which can induce significant latency. In a second step, we propose a technique where decoding is performed over a sliding window containing part of the macro-packet, using a sliding trellis. The window size allows a trade-off between decoding complexity and efficiency. Finally, an on-the-fly decoding method exploiting a 3-state automaton and Bayesian hypothesis tests achieves less effective delineation, but without latency. These methods are compared on the problem of delineating MAC packets within PHY macro-packets in WiMAX.
In the second part of the thesis, we propose soft decoding of the block codes used in certain layers of multimedia protocol stacks. Soft outputs are generated to enable joint decoding of headers and payload at the upper layers. In particular, we studied soft decoding tools for the RTP FEC standard and compared the performance of the proposed decoder with classical decoding approaches. In summary, the proposed joint decoding techniques reduce the number of lost packets and increase the number of packets delivered to the application layers, where joint source-channel decoders can be used to improve the quality of the received video.
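The trellis-based estimation described above reduces, in its simplest form, to running the Viterbi algorithm over a Markov state model with soft (log-likelihood) observations. The following is a minimal generic sketch of that idea, not the thesis's actual delineation decoder; the states, transition scores, and observations are purely illustrative.

```python
import math

def viterbi(obs_llh, trans_ll, init_ll):
    """Most likely state sequence of a Markov chain.

    obs_llh[t][s]: log-likelihood of observation t under state s.
    trans_ll[p][s]: log-probability of moving from state p to state s.
    init_ll[s]: log-probability of starting in state s.
    """
    n = len(init_ll)
    delta = [init_ll[s] + obs_llh[0][s] for s in range(n)]
    back = []
    for t in range(1, len(obs_llh)):
        new_delta, ptr = [], []
        for s in range(n):
            p = max(range(n), key=lambda q: delta[q] + trans_ll[q][s])
            ptr.append(p)
            new_delta.append(delta[p] + trans_ll[p][s] + obs_llh[t][s])
        delta, back = new_delta, back + [ptr]
    # Backtrack from the best final state.
    path = [max(range(n), key=lambda s: delta[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Toy use: state 0 = "header byte", state 1 = "payload byte"; the soft
# observations strongly favour the pattern header, payload, payload, ...
u = math.log(0.5)  # uniform transition log-probabilities
obs = [[0.0, -4.0], [-4.0, 0.0], [-4.0, 0.0], [0.0, -4.0], [-4.0, 0.0]]
states = viterbi(obs, [[u, u], [u, u]], [u, u])  # -> [0, 1, 1, 0, 1]
```

A real delineation trellis would encode packet-length constraints in the transition structure; the window-based variant sketched in the abstract simply runs the same recursion over a bounded span of observations.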
15

Šimoník, Petr. "Měřič odstupu signálu od šumu obrazových signálů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217681.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The diploma thesis deals with methods of signal-to-noise ratio measurement based on direct measurement. The most suitable method was chosen: separating the signal and the noise into two parallel signal branches, measuring the signal level in one branch and the root-mean-square noise value in the other. The thesis presents a detailed block diagram of the signal-to-noise ratio meter, designed on the basis of this theoretical background. The individual functional blocks were designed at circuit level, active and passive components were selected, and their functions are described. Simulations were performed, and the input and output waveforms are shown. The last part of the thesis presents the complete schematic of the designed signal-to-noise ratio meter, including a double-sided printed circuit board. A simple program was also written for the supervisory microprocessor. Together these form a complete basis for realization.
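The quantity such a meter reports follows the standard definition: with the signal level from one branch and the RMS noise from the other, the ratio in decibels is a one-liner (the generic textbook formula, not tied to this particular instrument):

```python
import math

def snr_db(signal_level, noise_rms):
    """Signal-to-noise ratio in dB from amplitude-domain measurements."""
    return 20.0 * math.log10(signal_level / noise_rms)

# A signal 100x the RMS noise level corresponds to 40 dB.
ratio = snr_db(100.0, 1.0)
```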
16

Ewelle, Ewelle Richard. "Adapter les communications des jeux dans le cloud." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS145/document.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cloud computing is emerging as the new computing paradigm in which resource virtualization provides reliable services matching users' demands. Nowadays, most interactive and data-intensive applications are developed in the cloud; the video game is one example. With the arrival of cloud computing, game accessibility and ubiquity have a bright future: games can be hosted on a centralized server and accessed over the Internet by a thin client on a wide variety of devices with modest capabilities. This is cloud gaming. Cloud computing in the video game context has attracted much attention because of its scalability, availability, and computing capacity. However, current cloud gaming systems have very strong requirements in terms of network resources, reducing the accessibility and ubiquity of cloud games, since client devices with little bandwidth and users located in areas with limited and/or unstable network conditions cannot benefit from these cloud services. In this thesis, we present an adaptation technique inspired by the level-of-detail (LoD) approach in 3D graphics. It is based on a cloud gaming paradigm with the objective of providing cross-platform accessibility while improving the player's quality of experience (QoE) by reducing the impact of poor network conditions (delay, loss, jitter) on game interactivity and responsiveness. Our first contribution consists of game models linking game objects to their communication needs, represented by their importance in the game. We then provide a level-of-detail approach for managing the distribution of network resources based on object importance in the scene and on network conditions.
We validate our approach using prototype games and evaluate the player's QoE through pilot experiments. The results show that the proposed framework provides a significant improvement in QoE.
With the arrival of cloud computing technology, game accessibility and ubiquity have a bright future. Games can be hosted on a centralized server and accessed through the Internet by a thin client on a wide variety of devices with modest capabilities: cloud gaming. Some of the advantages of using cloud computing in a game context include device ubiquity, computing flexibility, affordable cost, and lowered setup overheads and compatibility issues. However, current cloud gaming systems have very strong requirements in terms of network resources, thus reducing their widespread adoption. In fact, devices with little bandwidth and people located in areas with limited network capacity cannot take advantage of these cloud services. In this thesis we present an adaptation technique inspired by the level of detail (LoD) approach in 3D graphics. It is based on a cloud gaming paradigm in order to maintain the user's quality of experience (QoE) by reducing the impact of poor network parameters (delay, loss, bandwidth) on game interactivity. Our first contribution consists of game models expressing game objects and their communication needs, represented by their importance in the game. We provide two different ways to manage objects' importance, using agent organizations and gameplay components. We then provide a level of detail approach for managing network resource distribution based on object importance in the game scene and network conditions. We exploit the dynamic object-importance adjustment models presented above to propose LoD systems that adapt to changes during game sessions. The experimental validation of both adaptation models showed that the suggested adaptation minimizes the effects of low and/or unstable network conditions in maintaining game responsiveness and the player's QoE.
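The resource-distribution step can be pictured as importance-weighted sharing of the available bandwidth; the sketch below is a minimal stand-in for the thesis's LoD system, with the function name and the simple proportional rule assumed for illustration.

```python
def allocate_bandwidth(importances, total_bw):
    """Share total_bw among game objects in proportion to their importance."""
    total = float(sum(importances.values()))
    return {obj: total_bw * w / total for obj, w in importances.items()}

# A nearby player matters more than a distant NPC, so its state updates
# receive a larger share of the (scarce) network budget.
shares = allocate_bandwidth({"player": 3.0, "distant_npc": 1.0}, 100.0)
```

A full LoD system would also re-evaluate the importance weights dynamically as the scene and network conditions change, as the abstract describes.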
17

Lin, Shu-Yu, and 林書宇. "Synchronization of SVC-based P2P Video Streaming." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/69785764242630167669.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Chung Cheng University
Institute of Computer Science and Information Engineering
Academic year 98 (ROC calendar, 2009-2010)
Due to the rapid development of the Internet, Internet applications are blooming. Many users watch video over the Internet, and the P2P architecture is one of the most effective approaches to video streaming. To cope with the differing capabilities of users' devices and bandwidth, a video coding scheme called Scalable Video Coding (SVC) is used: SVC supplies several levels of video quality to match users' differing requests. Today, P2P streaming combines its overlay structure with the characteristics of SVC to serve more heterogeneous users. Many P2P streaming applications, such as online video streaming and video conferencing, require playout synchronization among users; however, because of user heterogeneity, the segment displayed at the same instant is not synchronized as expected. In this thesis, we propose a video playout synchronization mechanism that combines the SVC coding scheme, P2P streaming technology, and the NTP protocol. Our simulation results show that users in the synchronization system watch the same video sequence from the moment the video first starts playing.
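The NTP ingredient of such a mechanism boils down to the classical four-timestamp estimate of clock offset and round-trip delay; a minimal sketch of that standard NTP arithmetic (not the thesis's full playout protocol):

```python
def ntp_offset(t0, t1, t2, t3):
    """Classical NTP clock estimate.

    t0: client send time, t1: server receive time,
    t2: server send time, t3: client receive time.
    Returns (clock_offset, round_trip_delay) in the same units.
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# Example: client clock 100 ms behind the server, symmetric 20 ms paths.
off, rtt = ntp_offset(0.0, 120.0, 125.0, 45.0)  # -> (100.0, 40.0)
```

Once peers agree on a common clock, each can schedule the same segment for the same wall-clock playout instant.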
18

黃俊傑. "3D Network Browse of Real-Time Synchronization Video Conference." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/88132786363181354690.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Chiayi University
Graduate Institute of Computer Science and Information Engineering
Academic year 92 (ROC calendar, 2003-2004)
Network technology has progressed enormously in recent years, and with it people's expectations of teaching quality; 2D video-based distance learning can no longer satisfy present needs. This study builds a 3D distance-learning environment in which three cameras capture images of an object from different angles; the captured images are combined into a 3D presentation of the object, which is delivered to a web page for study, so that learners can clearly see the content the instructor wants to convey. The research is divided into three parts: capture, server, and client. Together, these three components achieve the 3D presentation for teaching without expensive stereoscopic equipment; only three cameras and a personal computer are needed, which reduces cost. Applied to distance learning, this improves on conventional 2D video teaching, makes the teaching content more engaging, and helps learners understand it.
19

El-Helaly, Mohamed. "Integrated audio-video synchronization system for use in multimedia applications." Thesis, 2006. http://spectrum.library.concordia.ca/9214/1/El_Helaly_M_2006.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The use of multimedia systems has moved beyond the studio and into the home. As computers become more powerful, multimedia systems become more realizable on the PC; and as these systems grow more complicated, the need for complex integration and synchronization arises. To develop a multimedia system, one must ensure that a synchronization approach is in place to solve the timing issues of the media types involved. Temporal information in multimedia systems must be maintained so that no loss of coherency is endured: no matter how much processing is performed on the signals, the output has to preserve the temporal integrity the signals had when they were input. This thesis develops a multimedia system that processes two media streams. Audio and video streams are fed to the system, which produces an object-segmented output (silhouettes of the objects) along with speech recognized from the audio; the speech to be recognized is spoken by the objects/speakers. The challenge lies in maintaining synchronization and integrating the video and the recognized speech at the output. Note that the system is stream-based, in that the video and audio are continuously captured and processed. This thesis presents a solution to the synchronization problem in the temporal domain and to the overall integration of the multimedia system. It presents a time-stamp approach to synchronizing the audio and video signals, adaptive to cases where the video processing delay is either larger or smaller than the audio processing delay. The contributions include verification that time-stamps can be used in the synchronization process and that it is possible to synchronize heavily delayed signals. The system also requires an integration process so that the audio and video signals are integrated with one another at the output.
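The time-stamp idea can be sketched in a few lines: every processed unit keeps its capture timestamp, and the output stage releases an (audio, video) pair only when both units for the same timestamp are available, so unequal processing delays cannot break alignment. The class and method names below are illustrative assumptions, not the thesis's actual interfaces.

```python
class TimestampSynchronizer:
    """Pair audio and video units by capture timestamp before output."""

    def __init__(self):
        self.audio = {}  # timestamp -> processed audio unit
        self.video = {}  # timestamp -> processed video unit

    def push_audio(self, ts, unit):
        self.audio[ts] = unit
        return self._try_emit(ts)

    def push_video(self, ts, unit):
        self.video[ts] = unit
        return self._try_emit(ts)

    def _try_emit(self, ts):
        # Emit only when both streams have delivered this timestamp;
        # the earlier-arriving unit simply waits in its buffer.
        if ts in self.audio and ts in self.video:
            return (ts, self.audio.pop(ts), self.video.pop(ts))
        return None

sync = TimestampSynchronizer()
pending = sync.push_audio(1, "a1")        # video not ready yet -> None
pair = sync.push_video(1, "v1")           # -> (1, "a1", "v1")
```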
20

Leu, Yow-Sheng, and 呂侑陞. "Video and Audio Synchronization Mechanisms and Applications On Embedded System." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/n4dr53.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Cheng Kung University
Department of Engineering Science (MS/PhD program)
Academic year 92 (ROC calendar, 2003-2004)
Digital image processing systems are progressing very rapidly; digital cameras and digital video cameras appear one after another. Video capture, image recognition, and image tracking all require devices that sample large amounts of video data or filter out the data users need, and then use these data to perform the desired functions. Since personal computers are general-purpose machines, they are ill-suited to providing such special functions, and they are also inconvenient to move. For these reasons, the TMS320DSC25 platform is a feasible choice for developing an embedded system. This thesis develops a system on the TMS320DSC25 platform, which contains two processors, an ARM and a DSP. Through a CCD and a TVP514, it captures digital video data while simultaneously receiving audio data from a microphone. Video and audio encoding are performed with the DSP's hardware acceleration, and the results are output to memory. Two different approaches to synchronized video and audio encoding and decoding with adjustable frame rate are implemented, and tests and experiments compare the two approaches. The thesis also discusses how to implement the video and audio system, combine it with the ESOL operating system, and thereby provide a complete multimedia application.
21

Huang, Yu-Shiang, and 黃昱翔. "To Apply UPnP Technology in Synchronization between Separated Video and Audio Players." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/07402067233613157033.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
Chung Hua University
Department of Computer Science and Information Engineering
Academic year 98 (ROC calendar, 2009-2010)
On multimedia digital home networks, an audio player may be separated from the video player, and forwarding or rewinding a video can make the two players asynchronous. Based on the original UPnP AV architecture, this paper proposes a synchronization mechanism: by exchanging the status of the video player and the audio player in UPnP messages, synchronization is accomplished.
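The mechanism can be caricatured in a few lines: each renderer publishes its playback position, and a peer that notices it has drifted seeks to match. The class names, threshold, and message shape here are assumptions for illustration, not the paper's actual UPnP AV service definition.

```python
class Player:
    """Toy renderer that re-aligns itself from peer status messages."""

    def __init__(self, name):
        self.name = name
        self.position = 0.0  # playback position in seconds

    def seek(self, position, peers=()):
        self.position = position
        # Notify peers so separated audio/video renderers stay aligned.
        for peer in peers:
            peer.on_peer_status(position)

    def on_peer_status(self, peer_position, threshold=0.1):
        # Only correct drift larger than the tolerance, to avoid jitter.
        if abs(self.position - peer_position) > threshold:
            self.position = peer_position

video = Player("video")
audio = Player("audio")
video.seek(42.0, peers=[audio])  # a seek on the video side drags audio along
```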
22

Lai-Huei, Wang, and 王來輝. "The Design of Synchronization Algorithms for High-mobility Digital Video Broadcast Receivers." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/38204850851541558193.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Chiao Tung University
Department of Communication Engineering
Academic year 93 (ROC calendar, 2004-2005)
In recent years, OFDM has become popular for broadband communications. However, compared with single-carrier systems, an OFDM system is very sensitive to synchronization problems. The mismatch of crystal oscillators between the transmitter and the receiver circuitry causes a sampling clock frequency offset that introduces ICI and ISI. Furthermore, when mobile wireless transmission is considered, synchronization maintenance becomes more difficult because of the Doppler effect. The main topic of this thesis is the synchronization problems of OFDM systems, with DVB-T selected as the system platform. Many synchronization algorithms have been proposed and applied successfully in DVB-T demodulators, but most of them operate properly only for stationary or low-mobility reception. To combat the severe Doppler effect in high-mobility environments, effective synchronization algorithms are necessary; the synchronization algorithm for high-mobility environments is therefore the main concern of this thesis. We propose an innovative synchronization algorithm that is shown to be almost unaffected by high mobility.
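As one concrete example of the kind of estimation involved, the fractional carrier frequency offset of an OFDM symbol can be read off the phase of the correlation between the cyclic prefix and the matching tail of the symbol. This is a textbook estimator, sketched here as background; it is not the high-mobility algorithm the thesis proposes.

```python
import cmath

def estimate_cfo(symbol, n_fft, n_cp):
    """Fractional CFO (in subcarrier spacings) of one OFDM symbol.

    symbol: complex baseband samples of the symbol, cyclic prefix included.
    The CP repeats the last n_fft samples, so sample i and sample i + n_fft
    differ only by the phase rotation accumulated over n_fft samples.
    """
    corr = sum(symbol[i].conjugate() * symbol[i + n_fft] for i in range(n_cp))
    return cmath.phase(corr) / (2 * cmath.pi)

# Synthetic check: a symbol with a CFO of 0.1 subcarrier spacings.
eps, n_fft, n_cp = 0.1, 8, 2
base = [1, 1j, -1, 2, 0.5, -1j, 1 + 1j, 2j]  # arbitrary periodic payload
sym = [base[n % n_fft] * cmath.exp(2j * cmath.pi * eps * n / n_fft)
       for n in range(n_fft + n_cp)]
cfo = estimate_cfo(sym, n_fft, n_cp)
```

Under high mobility the channel decorrelates within a symbol, which is precisely why such classical estimators degrade and motivate the thesis.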
23

Chien, Chih-Pin, and 簡志賓. "ECG and Video Synchronization Scheme Design and Implementation in the Biomedical Monitoring Software System." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/21369653385533542319.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Chung Cheng University
Department of Communications Engineering
Academic year 98 (ROC calendar, 2009-2010)
With the rapid development of medical technology, the average human life span is lengthening and the world is turning into an aging society. An aging society incurs relatively high costs in medical treatment and resources. To make good use of those resources, large hospitals have already started mobile medical services to provide better nursing to patients who live far from cities and to the poor. However, mobile medical service is still limited by long distances and inconvenient transportation, so advanced countries are focusing on research and development in telehealth care. The main objective of this research is to realize low-voltage SoC design techniques in the healthcare box of a Ministry of Economic Affairs Industrial Technology Development Program. In the healthcare box, the ECG signal from the patient is transferred via a ZigBee wireless sensor. Together with the patient's real-time video, doctors can periodically monitor the patient's physical and mental condition over the Internet and give proper treatment. Telehealth care decreases medical costs and human resources and makes it easier to reach and care for patients living at a distance. In this research, the VLC decoder and image compression adopt the H.264 standard, with RTP used to convey the data. In the system experiments, ECG signals from patients are sent to the doctor's receiver simultaneously with the images. This not only gives doctors detailed health data about the patient but also increases the practicality of telehealth care.
24

Tsuei, Ying-Ho, and 崔瀅和. "An Agent Synchronization System Using Human Feature Point tracking and Localization in Video Sequences." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/41566963921202585677.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Computer Science and Information Engineering
Academic year 97 (ROC calendar, 2008-2009)
Building a model is often the first step in many technical applications. In computer animations or movies, the motion of the model is typically driven by human motion estimated with a motion capture system, but such a system is often costly and may not be affordable for the general public in daily applications. To solve this problem, we propose an agent synchronization system using human feature point tracking and localization in video sequences. The method includes human feature point definition, level-based feature point tracking, feature point localization, and agent synchronization. Our system is demonstrated on two video sequences, "symmetric hand motion behavior" and "asymmetric hand motion behavior", to build an agent that exhibits matched human motion behavior in synchronization. The results clearly show that our system can reliably track and localize human feature points. In conclusion, our system may offer a cost-effective solution for building synchronization models, and could be incorporated into interactive interfaces in a virtual environment to enhance human interaction, or into computer animations and movies to generate video sequences with model motion corresponding to human motion.
25

Hsu, Wei-Lun, and 許維倫. "Index modulation for H.264 video watermarking and temporal synchronization based on feature statistics." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/v89vw9.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Central University
Graduate Institute of Communication Engineering
Academic year 94 (ROC calendar, 2005-2006)
H.264 is a new advanced video coding standard, and video applications on the Internet and wireless networks are now very popular. However, these digital contents can easily be modified and copied by end users, so copyright protection, copy control, and integrity verification have become important issues in recent years. Digital watermarking is a means of claiming ownership of a data source. In the proposed system, block polarity and block index modulation are used for watermark embedding. The block polarity is determined from the nonzero quantized DC coefficient of each 4x4 integer DCT block. The block index is the pseudo-quantized block activity, represented by the sum of magnitudes of the quantized AC coefficients. Watermark embedding is performed by index modulation, which modifies quantized AC coefficient values by a small amount to force the activity to be quantized into a specific region. To resist temporal attacks such as frame dropping, frame insertion, and frame transposition, we also propose a temporal synchronization method for video watermarking based on matching feature statistics. The feature statistics are calculated from local variances or eigenvalues of the video content and sent as side information; temporal attacks can be detected by comparing this side information with feature statistics calculated from the received video. Simulation results show that the proposed method performs well and extracts the embedded watermark without the original video signal. Additionally, the algorithm is not very complex and is appropriate for real-time applications. Based on the extracted feature statistics, the video watermarking system is more robust against temporal attacks.
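The embedding rule described, nudging the block activity (sum of |quantized AC coefficients|) into a quantization region that encodes the bit, can be sketched as follows. The step size, the choice of which coefficient to adjust, and the parity-based region code are simplifying assumptions, not the exact scheme of the thesis.

```python
def embed_bit(ac_coeffs, bit, step=8):
    """Nudge block activity so its quantization region's parity equals bit."""
    activity = sum(abs(c) for c in ac_coeffs)
    region = activity // step
    if region % 2 != bit:
        # Grow the largest-magnitude coefficient just enough to push the
        # activity into the next quantization region.
        delta = step - (activity % step)
        i = max(range(len(ac_coeffs)), key=lambda k: abs(ac_coeffs[k]))
        ac_coeffs[i] += delta if ac_coeffs[i] >= 0 else -delta
    return ac_coeffs

def extract_bit(ac_coeffs, step=8):
    """Blind extraction: only the received coefficients are needed."""
    return int((sum(abs(c) for c in ac_coeffs) // step) % 2)

marked = embed_bit([3, -2, 1, 0], 1)  # activity 6 -> 8, region parity 1
```

Because extraction needs only the received block, the scheme is blind, matching the abstract's claim of extraction without the original video.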
26

LIU, GUO-MIN, and 劉國明. "A synchronization control mechanism and performance evaluation for packetized voice and video in ATM networks." Thesis, 1992. http://ndltd.ncl.edu.tw/handle/83761441656312109866.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
27

LIEN-YING-SHUN and 連英順. "Synchronization of New 4D Lorenz Chaotic System and Implement Real-Time Video Cryptosystem via FPGA." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/31450576719270469744.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Graduate Institute of Automation and Control
Academic year 104 (ROC calendar, 2015-2016)
Because multimedia applications affect many aspects of our lives, multimedia data security is becoming an important problem, and security technologies are being developed to protect the increasing use of multimedia. Based on the 3D Lorenz chaotic system, we design a new 4D Lorenz chaotic system and use Matlab to analyze its properties, including 2D phase portraits, 3D phase portraits, equilibrium analysis, divergence analysis, power spectral density analysis, and Lyapunov exponent diagrams. We then simulate the new chaotic system in the electronic circuit simulation software Multisim; once the simulation is verified, we build the real circuit on a breadboard and compare the results with the simulation. On the control side, we use sliding mode control, integral sliding mode control, and adaptive integral sliding mode control to achieve synchronization of the new 4D Lorenz chaotic system, and we compare the three controllers. We then discretize the new system and synchronize it, so that it can be implemented on an FPGA platform. Finally, we use the chaotic system's properties to implement a real-time video encryption algorithm, and use the synchronizing control for the real-time video decryption algorithm on the FPGA. In this study, the master and synchronized slave systems are converted to digital signals; the master system is used for real-time video encryption and the slave system for real-time video decryption.
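The encrypt/decrypt symmetry rests on both ends regenerating the same chaotic keystream. The toy below substitutes a logistic map for the thesis's 4D Lorenz system and XORs the keystream with the video bytes; the map, quantization, and parameter values are illustrative simplifications.

```python
def chaotic_keystream(x0, n, r=3.99):
    """Key bytes from iterating a chaotic (logistic) map from seed x0."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)            # chaotic map iteration
        out.append(int(x * 256) % 256)   # quantize state to a key byte
    return bytes(out)

def crypt(data, x0):
    """XOR stream cipher: the same call encrypts and decrypts."""
    ks = chaotic_keystream(x0, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

frame = b"frame"
enc = crypt(frame, 0.3141)     # "master" side encrypts
dec = crypt(enc, 0.3141)       # synchronized "slave" side decrypts
```

In the thesis's setting, the slave does not share the seed directly; the synchronizing controller drives its chaotic state to match the master's, which plays the role of the shared keystream here.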
28

Pooley, Daniel William. "The automated synchronisation of independently moving cameras." 2008. http://hdl.handle.net/2440/49461.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Computer vision is concerned with the recovery of useful scene or camera information from a set of images. One classical problem is the estimation of the 3D scene structure depicted in multiple photographs. Such estimation fundamentally requires determining how the cameras are related in space. For a dynamic event recorded by multiple video cameras, finding the temporal relationship between cameras has a similar importance. Estimating such synchrony is key to a further analysis of the dynamic scene components. Existing approaches to synchronisation involve using visual cues common to both videos, and consider a discrete uniform range of synchronisation hypotheses. These prior methods exploit known constraints which hold in the presence of synchrony, from which both a temporal relationship, and an unchanging spatial relationship between the cameras can be recovered. This thesis presents methods that synchronise a pair of independently moving cameras. The spatial configuration of cameras is assumed to be known, and a cost function is developed to measure the quality of synchrony even for accuracies within a fraction of a frame. A Histogram method is developed which changes the approach from a consideration of multiple synchronisation hypotheses, to searching for seemingly synchronous frame pairs independently. Such a strategy has increased efficiency in the case of unknown frame rates. Further savings can be achieved by reducing the sampling rate of the search, by only testing for synchrony across a small subset of frames. Two robust algorithms are devised, using Bayesian inference to adaptively seek the sampling rate that minimises total execution time. These algorithms have a general underlying premise, and should be applicable to a wider class of robust estimation problems. A method is also devised to robustly synchronise two moving cameras when their spatial relationship is unknown. 
It is assumed that the motion of each camera has been estimated independently, so that these motion estimates are unregistered. The algorithm recovers both a synchronisation estimate, and a 3D transformation that spatially registers the two cameras.
Thesis (Ph.D.) - University of Adelaide, School of Computer Science, 2008
29

HUANG, BO-SONG, and 黃柏菘. "A Study on Synchronization of Multi-source Videos." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/22126670905961308803.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Department of Electrical Engineering
Academic year 104 (ROC calendar, 2015-2016)
In this thesis, a video synchronization system was developed based on feature point matching. The proposed system contains several parts: pre-processing, key frame selection, feature point matching, and frame index difference computation. Since videos may have different frame rates and frame sizes, the pre-processing procedure normalizes the frame rate and frame size of the videos. After key frame extraction, similar key frames from different videos are matched using feature point matching and RANSAC. The frame index differences among videos are obtained by analyzing these matched key frames, and the videos can then be synchronized. To evaluate the performance of the proposed system, several videos were captured for testing. Experimental results show that key frames are selected effectively and that frame matching is achieved well with feature point matching and RANSAC. The high subjective test scores demonstrate that the proposed system synchronizes videos well.
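The final frame-index-difference step can be illustrated without the feature-matching machinery: reduce each frame to a signature and pick the offset that best aligns the two signature sequences. The scalar signature and exhaustive search below are stand-ins for the thesis's key-frame matching with feature points and RANSAC.

```python
def best_offset(sig_a, sig_b, max_offset=30, min_overlap=3):
    """Frame-index difference minimizing mean squared signature distance."""
    def cost(off):
        pairs = [(sig_a[i], sig_b[i + off])
                 for i in range(len(sig_a)) if 0 <= i + off < len(sig_b)]
        if len(pairs) < min_overlap:
            return float("inf")  # too little overlap to trust this offset
        return sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    return min(range(-max_offset, max_offset + 1), key=cost)

# Second video contains the same content, starting three frames later.
sig_a = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
sig_b = [7, 7, 7] + sig_a
offset = best_offset(sig_a, sig_b)  # -> 3
```

Matching key frames with robust feature correspondences, as the thesis does, plays the same role as this cost minimum but tolerates viewpoint and exposure differences that a naive signature would not.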
30

Breuleux-Ouellette, Yan. "Tempêtes. Composition audiovisuelle." Thèse, 2013. http://hdl.handle.net/1866/10756.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

To bibliography