Dissertations / Theses on the topic 'Video'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Video.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Sedlařík, Vladimír. "Informační strategie firmy." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2012. http://www.nusl.cz/ntk/nusl-223526.
Lindskog, Eric, and Wrang Jesper. "Design of video players for branched videos." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148592.
Salam, Sazilah. "VidIO : a model for personalized video information management." Thesis, University of Southampton, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242411.
Aklouf, Mourad. "Video for events : Compression and transport of the next generation video codec." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG029.
The acquisition and delivery of video content with minimal latency has become essential in several business areas such as sports broadcasting, video conferencing, telepresence, remote vehicle operation, and remote system control. The live streaming industry grew in 2020 and will expand further in the coming years with the emergence of new high-efficiency video codecs based on the Versatile Video Coding (VVC) standard and the fifth generation of mobile networks (5G). HTTP Adaptive Streaming (HAS) methods such as MPEG-DASH, which use algorithms to adapt the transmission rate of compressed video, have proven very effective at improving the quality of experience (QoE) in a video-on-demand (VOD) context. Nevertheless, minimizing the delay between image acquisition and display at the receiver is essential in latency-critical applications. Most rate adaptation algorithms are developed to optimize video transmission from a server in the core network to mobile clients. In applications requiring low-latency streaming, such as remote control of drones or broadcasting of sports events, the role of the server is played by a mobile terminal, which acquires and compresses the video and transmits the compressed stream via a radio access channel to one or more clients. Client-driven rate adaptation approaches are therefore unsuitable in this context because of the variability of the channel characteristics. In addition, HAS methods, in which decisions are made with a periodicity on the order of a second, are not sufficiently reactive when the server is moving, which may generate significant delays. It is therefore important to use a very fine adaptation granularity in order to reduce the end-to-end delay. The reduced size of the transmission and reception buffers (to minimize latency) makes rate adaptation even more difficult in our use case.
When the bandwidth varies with a time constant smaller than the regulation period, bad transmission rate decisions can induce a significant latency overhead. The aim of this thesis is to provide some answers to the problem of low-latency delivery of video acquired, compressed, and transmitted by mobile terminals. We first present a frame-by-frame rate adaptation algorithm for low-latency broadcasting. A Model Predictive Control (MPC) approach is proposed to determine the coding rate of each frame to be transmitted, using information about the buffer level of the transmitter and the characteristics of the transmission channel. Since frames are coded live, a model relating the quantization parameter (QP) to the output rate of the video encoder is required. We therefore propose a new model linking the rate to the QP of the current frame and to the distortion of the previous frame. This model provides much better results, in the context of a frame-by-frame decision on the coding rate, than the reference models in the literature. In addition to the above techniques, we also propose tools to reduce the complexity of video encoders such as VVC. The current version of the VVC encoder (VTM10) has an execution time nine times that of the HEVC encoder, so it is not suitable for real-time encoding and streaming applications on currently available platforms. In this context, we present a systematic branch-and-prune method to identify a set of coding tools that can be disabled while satisfying a constraint on coding efficiency. This work contributes to the realization of a real-time VVC encoder.
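The frame-by-frame decision described in this abstract can be illustrated with a toy rate-control loop. This is a hypothetical sketch, not the thesis's MPC formulation or its rate-QP model: `predicted_bits` is an invented stand-in for the proposed rate model, and all constants are arbitrary.

```python
# Toy frame-by-frame rate control: pick the lowest QP (best quality) whose
# predicted frame size keeps the transmitter buffer below a latency target.

def predicted_bits(qp, prev_distortion, a=2.0e6, b=0.5):
    """Hypothetical rate model: bits fall with QP and rise with the
    previous frame's distortion (a stand-in for the thesis's model)."""
    return a * (1.0 + b * prev_distortion) / qp

def choose_qp(buffer_bits, channel_bits_per_frame, prev_distortion,
              buffer_limit, qp_range=range(10, 52)):
    """Return the lowest QP keeping the post-transmission buffer under its limit."""
    for qp in qp_range:
        frame_bits = predicted_bits(qp, prev_distortion)
        # Buffer after adding this frame and draining one frame interval of channel.
        next_buffer = max(0.0, buffer_bits + frame_bits - channel_bits_per_frame)
        if next_buffer <= buffer_limit:
            return qp
    return qp_range[-1]  # channel too poor: fall back to the coarsest QP

# A halved channel rate forces a higher (coarser) QP for the next frame.
qp_fast = choose_qp(5e4, 1e5, 0.2, 8e4)   # ample channel
qp_slow = choose_qp(5e4, 5e4, 0.2, 8e4)   # constrained channel
```

The per-frame granularity is the point: the QP decision reacts to the buffer and channel state at every frame rather than once per segment.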
Le, Thuc Trinh. "Video inpainting and semi-supervised object removal." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT026/document.
Nowadays, the rapid growth of video creates a massive demand for video-based editing applications. In this dissertation, we solve several problems relating to video post-processing, focusing on the object removal application in video. We divide this task into two problems: (1) a video object segmentation step to select which objects to remove, and (2) a video inpainting step to fill the damaged regions. For the video segmentation problem, we design a system suitable for object removal applications with different requirements in terms of accuracy and efficiency. Our approach relies on combining Convolutional Neural Networks (CNNs) for segmentation with a classical mask tracking method. In particular, we adopt segmentation networks designed for images and apply them to video by performing frame-by-frame segmentation. By exploiting both offline and online training with first-frame annotation only, the networks are able to produce highly accurate video object segmentation. Besides, we propose a mask tracking module to ensure temporal continuity and a mask linking module to ensure identity coherence across frames. Moreover, we introduce a simple way to learn the dilation layer in the mask, which helps us create suitable masks for the video object removal application. For the video inpainting problem, we divide our work into two categories based on the type of background. We present a simple motion-guided pixel propagation method to deal with static backgrounds, and show that object removal with a static background can be solved efficiently using a simple motion-based technique. To deal with dynamic backgrounds, we introduce a video inpainting method that optimizes a global patch-based energy function. To increase the speed of the algorithm, we propose a parallel extension of the 3D PatchMatch algorithm.
To improve accuracy, we systematically incorporate optical flow in the overall process. The result is a video inpainting method able to reconstruct moving objects as well as reproduce dynamic textures while running in a reasonable time. Finally, we combine the video object segmentation and video inpainting methods into a unified system that removes undesired objects in videos; to the best of our knowledge, this is the first system of its kind. The user only needs to approximately delimit, in the first frame, the objects to be edited; this annotation process is facilitated by superpixels. The annotations are then refined and propagated through the video by the video object segmentation method, and one or several objects can be removed automatically using our video inpainting methods. This results in a flexible computational video editing tool with numerous potential applications, ranging from crowd suppression to the correction of unphysical scenes.
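The motion-guided pixel propagation idea for static backgrounds can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: it fills hole pixels by sampling the previous frame along a given (nearest-integer) backward optical flow, and the function name and interface are invented for the example.

```python
import numpy as np

def propagate_fill(cur, prev, mask, flow):
    """Fill masked pixels of `cur` by sampling `prev` along backward optical flow.

    cur, prev : (H, W) frames; mask : (H, W) bool, True marks the hole;
    flow : (H, W, 2) backward flow (x-offset, y-offset) from cur to prev.
    """
    out = cur.copy()
    ys, xs = np.nonzero(mask)
    fx = np.rint(flow[ys, xs, 0]).astype(int)
    fy = np.rint(flow[ys, xs, 1]).astype(int)
    # Clamp sample coordinates to the frame so border holes stay valid.
    sy = np.clip(ys + fy, 0, prev.shape[0] - 1)
    sx = np.clip(xs + fx, 0, prev.shape[1] - 1)
    out[ys, xs] = prev[sy, sx]
    return out
```

With a static background, flow is near zero over the hole, so the fill reduces to copying the revealed background from neighbouring frames.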
Lei, Zhijun. "Video transcoding techniques for wireless video communications." Thesis, University of Ottawa (Canada), 2004. http://hdl.handle.net/10393/29134.
Full textMilovanovic, Marta. "Pruning and compression of multi-view content for immersive video coding." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT023.
This thesis addresses the problem of efficient compression of immersive video content represented in the Multiview Video plus Depth (MVD) format. The Moving Picture Experts Group (MPEG) standard for the transmission of MVD data, called MPEG Immersive Video (MIV), uses 2D video codecs to compress the source texture and depth information. Compared to traditional video coding, immersive video coding is more complex and is constrained not only by the trade-off between bitrate and quality but also by the pixel rate. Because of that, MIV uses pruning to reduce the pixel rate and inter-view correlations, creating a mosaic of image pieces (patches). Decoder-side depth estimation (DSDE) has emerged as an alternative approach that improves the immersive video system by avoiding the transmission of depth maps and moving the depth estimation process to the decoder side. DSDE has been studied for the case of numerous fully transmitted views (without pruning). In this thesis, we demonstrate possible advances in immersive video coding with an emphasis on pruning the input content. We go beyond DSDE and examine the distinct effect of patch-level depth restoration at the decoder side. We propose two approaches to incorporate DSDE on content pruned with MIV: the first excludes a subset of depth maps from the transmission, and the second uses the quality of depth patches estimated at the encoder side to distinguish between those that need to be transmitted and those that can be recovered at the decoder side. Our experiments show a 4.63 BD-rate gain for Y-PSNR on average. Furthermore, we explore the use of neural image-based rendering (IBR) techniques to enhance the quality of novel view synthesis and show that neural synthesis itself provides the information needed to prune the content. Our results show a good trade-off between pixel rate and synthesis quality, achieving view synthesis improvements of 3.6 dB on average.
Arrufat, Batalla Adrià. "Multiple transforms for video coding." Thesis, Rennes, INSA, 2015. http://www.theses.fr/2015ISAR0025/document.
State-of-the-art video codecs use transforms to ensure a compact signal representation. The transform stage is where compression takes place, yet little variety is observed in the transforms used in standardised video coding schemes: often a single transform is considered, usually a Discrete Cosine Transform (DCT). Recently, other transforms have started being considered in addition to the DCT; for instance, in the latest video coding standard, High Efficiency Video Coding (HEVC), 4x4 blocks can use the Discrete Sine Transform (DST), and it is also possible not to transform them at all. This reveals an increasing interest in considering a plurality of transforms to achieve higher compression rates. This thesis focuses on extending HEVC through the use of multiple transforms. After a general introduction to video compression and transform coding, two transform designs are studied in detail: the Karhunen-Loève Transform (KLT) and a Rate-Distortion Optimised Transform. These two methods are compared by replacing the transforms in HEVC, an experiment that validates the appropriateness of the designs. A coding scheme that incorporates and boosts the use of multiple transforms is then introduced: several transforms are made available to the encoder, which chooses the one providing the best rate-distortion trade-off. Consequently, a design method for building systems using multiple transforms is also described. With this coding scheme, significant bit-rate savings are achieved over HEVC, especially when using many complex transforms. However, these improvements come at the expense of increased complexity in terms of coding, decoding, and storage requirements. As a result, simplifications are considered while limiting the impact on bit-rate savings. A first approach is introduced, in which incomplete transforms are used.
These transforms use a single basis vector and are conceived to work as companions to the HEVC transforms. This technique is evaluated and provides significant complexity reductions over the previous system, although the bit-rate savings are modest. A systematic method, which specifically determines the best trade-offs between the number of transforms and bit-rate savings, is then designed. This method uses two different types of transforms based on separable orthogonal transforms, Discrete Trigonometric Transforms (DTTs) in particular. Several designs are presented, allowing for different trade-offs between complexity and bit-rate savings. These systems reveal the interest of using multiple transforms for video coding.
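A KLT of the kind compared in this abstract is simply the eigenbasis of the residual covariance. The sketch below learns one from synthetic 4x4 blocks; it is the generic textbook construction, not the thesis's rate-distortion optimised design, and the data is random stand-in material.

```python
import numpy as np

def learn_klt(blocks):
    """Learn a KLT (eigenvectors of the sample covariance, sorted by
    decreasing variance) from flattened residual blocks."""
    X = blocks - blocks.mean(axis=0)
    cov = X.T @ X / len(X)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]   # strongest components first
    return eigvecs[:, order]

rng = np.random.default_rng(0)
blocks = rng.normal(size=(1000, 16))    # e.g. 4x4 residual blocks, flattened
T = learn_klt(blocks)
coeffs = blocks @ T                     # forward transform
recon = coeffs @ T.T                    # inverse: T is orthogonal
```

Because the basis is orthogonal, the transform is lossless before quantisation; compression comes from the energy compaction of the leading coefficients.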
Dufour, Sophie-Isabelle. "Imaginem video : L'image vidéo dans l'"histoire longue" des images." Paris 3, 2004. http://www.theses.fr/2004PA030054.
What do I see when I look at a video? In fact, I do not see a video but an image. My purpose is to study the status of the video image from the point of view of the so-called "long history" of images, dealing therefore with very ancient problems that occurred long before the technical invention of the medium. A distinction must be made between images and the very notion of image: one could say that the difficult notion of image can be specified only through the various media in which it embodies itself. In this study, video questions the image itself. Artworks keep their privilege, because through them the status of the video image is best revealed; but my intention is to show that the powers of the image go far beyond aesthetics. The first problem is the one raised by the myth of Narcissus, as a lover of image(s), because it is seminal. It leads, for instance, to the notion of fluidity, which proves essential in my study of the "ghostliness" of the video image (as well as in my study of space in video). Last but not least, the relations between time and the video image are specified with Bergson's help, and I try to show how useful this philosopher's notion of time can be when one hopes to understand the singularity of the video image.
Hammouri, Ghassan. "Video++, an object-oriented approach to video algebra." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq26329.pdf.
Pu, Ruonan. "Target-sensitive video segmentation for seamless video composition /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20PU.
Bhat, Abharana Ramdas. "A new video quality metric for compressed video." Thesis, Robert Gordon University, 2012. http://hdl.handle.net/10059/794.
Tsoi, Yau Chat. "Video cosmetics : digital removal of blemishes from video /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20TSOI.
Includes bibliographical references (leaves 83-86). Also available in electronic version. Access restricted to campus users.
Banda, Dalitso Hansini. "Deep video-to-video transformations for accessibility applications." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/121622.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 73-79).
We develop a class of visual assistive technologies that learn visual transforms to improve accessibility, as an alternative to traditional methods that mostly rely on extracted symbolic information. In this thesis, we mainly focus on how this class of systems can address photosensitivity. People with photosensitivity may have seizures, migraines, or other adverse reactions to certain visual stimuli such as flashing images and alternating patterns. We develop deep learning models that learn to identify and transform video sequences containing such stimuli whilst preserving video quality and content. Using descriptions of the adverse visual stimuli, we train models to learn transforms that remove them. We show that these deep learning models generalize to real-world examples of images with these problematic stimuli. In our experimental trials, human subjects rated video sequences transformed by our models as having significantly fewer problematic stimuli than their input. We extend these ideas by showing how deep transformation networks can be applied in other visual assistive domains, demonstrating an application addressing emotion recognition for people with Autism Spectrum Disorder.
by Dalitso Hansini Banda.
M. Eng.
Napolitano, Pasquale. "Video-Design: progettare lo spazio con il video." Doctoral thesis, Università degli Studi di Salerno, 2010. http://hdl.handle.net/10556/123.
The research project, of which this text represents a first, partial milestone, consisted mainly in identifying and describing how video design, even before being a corollary of technical and design skills, constitutes a particular disposition towards the contemporary visualscape: a practice of the gaze capable of following the hybrid threads interwoven in every audiovisual object. The general objective of the research was to use this reflection on the design of video to outline a series of symbolic forms that give shape to a peculiar type of gaze. The vision shaped by cinema adheres to the canons traditionally attributed to Renaissance perspective, with its vectorial conception of the gaze, in accordance with the theory of Erwin Panofsky, who sees in linear perspective the symbolic form of the modern era. The type of gaze proposed by video objects, by contrast, is no longer vectorial but synthetic: a gaze that does not reduce to synthesis but remains paratactic. The analysis also seeks to bring out the range of symbolic forms underlying the video form, through a historical excursus (in the first chapter) and targeted forays into the contemporary, especially those audiovisual forms that cannot yet be precisely catalogued but present themselves as hybrids of video and space (the chapters on motion pictures, sensitive/interactive environments, sound sculptures, and live media).
VIII cycle (new series).
Chen, Juan. "Content-based Digital Video Processing. Digital Videos Segmentation, Retrieval and Interpretation." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4256.
Krist, Antonín. "Pokročilé metody postprodukce a distribuce videa s využitím IT." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-19121.
Bordes, Philippe. "Adapting video compression to new formats." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S003/document.
New video codecs should be designed with a high level of adaptability in terms of network bandwidth, format scalability (size, color space, etc.), and backward compatibility. This thesis was carried out in this context and within the scope of the HEVC standard development. In a first part, several video coding adaptations that exploit the signal properties and take place at bit-stream creation are explored. The study of improved frame partitioning for inter prediction allows a better fit to actual motion boundaries and shows significant gains. This principle is further extended to long-term motion modeling with trajectories. We also show how cross-component correlation statistics and the luminance change between pictures can be exploited to increase coding efficiency. In a second part, post-creation stream adaptations relying on intrinsic stream flexibility are investigated. In particular, a new color gamut scalability scheme addressing color space adaptation is proposed. From this work, we derive color remapping metadata and an associated model that provide a low-complexity, general-purpose color remapping feature. We also explore adaptive resolution coding and how to extend scalable codecs to stream-switching applications. Several of the described techniques have been proposed to MPEG; some have been adopted in the HEVC standard and in the UHD Blu-ray Disc. Various techniques for adapting video compression to content characteristics and distribution use cases have been considered; they can be selected or combined depending on the application requirements.
Yu, Jin Nah. "Video dithering." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/505.
Waldemarsson, Lars-Åke. "Holografisk Video." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7049.
This thesis is based on an article describing a method for creating holographic video. The aim of the work is to recreate this method, which builds on projecting holograms using parts from a projector, a laser, and a few lenses.
First, a literature study is carried out to understand how the method works. It covers how the eye perceives depth and which types of displays exist for reproducing three-dimensional holographic images. The difference between optical and computer-generated holography is then described; this work deals only with computer-generated holography.
Diffraction (the bending of light rays) and interference between light rays form the basis of the method for creating holographic images. In optical holography, light rays from an object are made to interfere with a reference beam, and their interference pattern is captured on photographic film. A hologram of the object can then be reconstructed by illuminating the film with the same reference beam.
Reproducing three-dimensional holographic images requires a spatial light modulator (SLM). The SLM used here is Texas Instruments' DLP ("Digital Light Processing"), found in DLP projectors, whose main component is a DMD ("Digital Micromirror Device"): a chip consisting of microscopic mirrors arranged in a grid. In a projector the DMD is illuminated by a lamp; here, by a laser. Each micromirror can be tilted towards or away from the light source, thereby passing its small bundle of light on or not.
Computer-generated holography simulates optical holography through a Fourier transform. The transform takes a numerical description of an object as input and outputs an interference pattern that is fed to the DLP. The light rays striking the DMD act according to this interference pattern and reproduce a hologram, analogous to the photographic film in optical holography.
The second part of the thesis covers my recreation of the method. Matlab was chosen to implement the transform. The program's input is two two-dimensional images, placed in a volume at a mutual distance along the z-axis; this volume is the object for which a hologram is to be created. The program outputs a two-dimensional image constituting the object's interference pattern.
Great emphasis was placed on optimising this program by exploiting Matlab's strength in matrix operations and by simplifying the computation for the points that are transparent in the hologram, i.e. the points that do not belong to the object.
The results section presents the interference pattern for a given object. One conclusion is that computing the transform for normal-sized or larger objects is a very time-consuming process; substantial computing power and better optimisation are needed to reach acceptable computation times. Only interference patterns for single objects are computed here, whereas holographic video requires around 24 frames per second. Creating holographic video with the presented program is entirely possible, but the computation would take far too long.
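The object-to-interference-pattern computation described above can be sketched in a few lines. This is a toy Fourier-hologram illustration (random object phase plus a plane reference wave), not the thesis's Matlab program; the function name and parameters are invented for the example.

```python
import numpy as np

def fourier_hologram(obj, seed=0):
    """Toy computer-generated hologram: diffuse the object with a random
    phase, Fourier-transform it, and record the intensity of its
    interference with a constant (plane) reference wave."""
    rng = np.random.default_rng(seed)
    field = obj * np.exp(2j * np.pi * rng.random(obj.shape))
    spectrum = np.fft.fftshift(np.fft.fft2(field))
    reference = np.max(np.abs(spectrum))         # plane reference amplitude
    pattern = np.abs(spectrum + reference) ** 2  # interference intensity
    return pattern / pattern.max()               # normalise for display
```

The returned pattern plays the role of the photographic film in optical holography: displayed on the DMD and illuminated by the laser, it diffracts light to reconstruct the object.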
Parnow, Klaus. "Arbeitsgruppe Video." Universität Potsdam, 1999. http://opus.kobv.de/ubp/volltexte/2005/304/.
Yilmaz, Fatih Levent. "Video Encryption." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-12604.
Daniel, G. W. "Video visualisation." Thesis, Swansea University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.636344.
Sasnett, Russ. "Reconfigurable video." Thesis, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/15100.
MICROFICHE COPY AVAILABLE IN ARCHIVES AND ROTCH
Bibliography: leaves 105-107.
by Russell Mayo Sasnett.
M.S.V.S.
Lee, Ying 1979. "Scalable video." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9071.
Includes bibliographical references (p. 51).
This thesis presents the design and implementation of a scalable video scheme that accommodates the uncertainties in networks and the differences in receivers' display mechanisms. To achieve scalability, a video stream is encoded into two kinds of layers: a base layer and enhancement layers. The decoder must process the base layer to display minimally acceptable video quality; for higher quality, it simply combines the base layer with one or more enhancement layers. Incorporated with the IP multicast system, the result is a highly flexible and extensible structure that facilitates video viewing on a wide variety of devices while customizing the presentation for each individual receiver.
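The base/enhancement split described in this abstract can be illustrated with a toy two-layer scheme: the base layer is a downsampled frame, and the enhancement layer is the residual needed to restore it exactly. This is a minimal sketch with invented function names (real scalable codecs quantise and entropy-code both layers), not the thesis's scheme.

```python
import numpy as np

def encode_layers(frame):
    """Split a frame into a 2x-downsampled base layer and a residual
    enhancement layer."""
    base = frame[::2, ::2]
    upsampled = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
    upsampled = upsampled[:frame.shape[0], :frame.shape[1]]
    enhancement = frame - upsampled      # residual restores full quality
    return base, enhancement

def decode(base, enhancement=None):
    """Base alone gives minimally acceptable quality; adding the
    enhancement layer reconstructs the original exactly."""
    up = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
    if enhancement is None:
        return up
    up = up[:enhancement.shape[0], :enhancement.shape[1]]
    return up + enhancement
```

A receiver on a constrained multicast group subscribes only to the base layer; a capable one also joins the enhancement group and recovers the full-quality frame.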
by Ying Lee.
M.Eng.
Jovičic, Zoran. "Video - film." Master's thesis, Vysoké učení technické v Brně. Fakulta výtvarných umění, 2009. http://www.nusl.cz/ntk/nusl-232206.
Jirka, Roman. "Časosběrné video." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-236934.
Richtr, Pavel. "Video syntezátor." Master's thesis, Vysoké učení technické v Brně. Fakulta výtvarných umění, 2016. http://www.nusl.cz/ntk/nusl-240574.
Arrieta, Concha José Luis, and Huamán Glendha Falconí. "Video Wall." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2013. http://hdl.handle.net/10757/273539.
Horyna, Miroslav. "Video telefon." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-221059.
Wang, Yi. "Design and Evaluation of Contextualized Video Interfaces." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/28798.
Ph. D.
But, Jason. "A novel MPEG-1 partial encryption scheme for the purposes of streaming video." Monash University, Dept. of Electrical and Computer Systems Engineering, 2004. http://arrow.monash.edu.au/hdl/1959.1/9709.
Lopes, Jose E. F. C. "Audio-coupled video content understanding of unconstrained video sequences." Thesis, Loughborough University, 2011. https://dspace.lboro.ac.uk/2134/8306.
Barannik, Vlad, Y. Babenko, S. Shulgin, and M. Parkhomenko. "Video encoding to increase video availability in telecommunication systems." Thesis, Taras Shevchenko National University of Kyiv, 2020. https://openarchive.nure.ua/handle/document/16582.
Park, Dong-Jun. "Video event detection framework on large-scale video data." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/2754.
Nouri, Marwen. "Propagation de Marquages pour le Matting Vidéo." Phd thesis, Université René Descartes - Paris V, 2013. http://tel.archives-ouvertes.fr/tel-00799753.
Barrios, Núñez Juan Manuel. "Content-based video copy detection." Tesis, Universidad de Chile, 2013. http://www.repositorio.uchile.cl/handle/2250/115521.
The amount and use of videos on the Internet has increased exponentially in recent years. Academic research on video topics has been developed for decades, but the current ubiquity of videos presses for new and better algorithms. There are currently many needs to satisfy and many open problems that require scientific research. In particular, Video Copy Detection (VCD) addresses the need to search for videos that are copies of an original document. The detection process compares the content of videos in a manner robust to different audiovisual transformations. This thesis presents a VCD system called P-VCD, which uses novel algorithms and techniques to achieve high effectiveness and efficiency. This thesis is divided into two parts. The first part focuses on the state of the art: it reviews common techniques of image processing and similarity search, analyses the definition and scope of VCD, and presents current techniques for solving this problem. The second part details the work carried out and its contributions to the state of the art, analysing each of the tasks that make up the solution, namely: video preprocessing, video segmentation, feature extraction, similarity search, and copy localisation. Regarding effectiveness, the ideas of video quality normalisation, multiple description of content, combination of distances, and the use of metric versus non-metric distances are developed. As a result, we propose techniques for the automatic creation of spatio-temporal descriptors from frame descriptors, audio descriptors that can be combined with visual descriptors, automatic weight selection, and a spatio-temporal distance for combining descriptors.
En relación a la eficiencia, se desarrollan los enfoques de espacios métricos y tabla de pivotes para acelerar las búsquedas. Como resultado se proponen una búsqueda aproximada utilizando objetos pivotes para estimar y descartar distancias, búsquedas multimodales en grandes colecciones, y un índice que explota la similitud entre objetos de consulta consecutivos. Esta tesis ha sido evaluada usando la colección MUSCLE-VCD-2007 y participando en las evaluaciones TRECVID 2010 y 2011. El desempeño logrado en estas evaluaciones es satisfactorio. En el caso de MUSCLE-VCD-2007 se supera el mejor resultado publicado para esa colección, logrando la máxima efectividad posible, mientras que en el caso de TRECVID se obtiene una performance competitiva con otros sistemas del estado del arte.
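The pivot-based discarding that the abstract describes rests on the triangle inequality: if distances from the database objects to a few fixed pivots are precomputed, many candidates can be rejected without ever computing their true distance to the query. A minimal generic-metric sketch of that idea follows; all names are illustrative and not taken from P-VCD:

```python
def pivot_lower_bound(d_q_pivots, d_x_pivots):
    """Lower bound on d(q, x) from pivot distances.

    By the triangle inequality, |d(q, p) - d(x, p)| <= d(q, x)
    for every pivot p, so the max over pivots is a valid lower bound.
    """
    return max(abs(dq - dx) for dq, dx in zip(d_q_pivots, d_x_pivots))


def range_search(query, objects, pivots, dist, radius):
    """Range search that uses pivots to discard objects cheaply."""
    d_q = [dist(query, p) for p in pivots]
    results = []
    for x in objects:
        # In a real index these distances are precomputed in a pivot table.
        d_x = [dist(x, p) for p in pivots]
        if pivot_lower_bound(d_q, d_x) > radius:
            continue  # safely discarded: true distance must exceed radius
        if dist(query, x) <= radius:
            results.append(x)
    return results
```

For instance, with `dist = lambda a, b: abs(a - b)`, `objects = range(11)`, pivots `[0, 10]`, query `5`, and radius `2`, only objects 3 through 7 survive the pivot filter and are verified exactly.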
Dye, Brigham R. "Reliability of Pre-Service Teachers Coding of Teaching Videos Using Video-Annotation Tools." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/990.
Full text
Corbillon, Xavier. "Enable the next generation of interactive video streaming." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0103/document.
Full text
Omnidirectional videos, also denoted as spherical videos or 360° videos, are videos with pixels recorded from a given viewpoint in every direction of space. A user watching such omnidirectional content with a Head-Mounted Display (HMD) can select the portion of the video to display, usually denoted as the viewport, by moving her head. To feel highly immersed in the content, a user needs to see the viewport at 4K resolution and a 90 Hz frame rate. With traditional streaming technologies, providing such quality would require a data rate of more than 100 Mbit/s, which is far too high compared to the median Internet access bandwidth. In this dissertation, I present my contributions to enabling the streaming of highly immersive omnidirectional videos on the Internet. We can distinguish six contributions: a viewport-adaptive streaming architecture proposal reusing parts of existing technologies; an extension of this architecture to videos with six degrees of freedom; two theoretical studies of videos with non-homogeneous spatial quality; open-source software for handling 360° videos; and a dataset of recorded users' head trajectories while watching 360° videos.
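The core of viewport-adaptive streaming is deciding which spatial tiles of the sphere to fetch in high quality given the viewer's current head orientation. A minimal sketch of that selection, assuming an equirectangular tile grid and an angular margin chosen here for illustration (not the thesis's actual architecture or parameters), could look like this:

```python
import math


def angular_distance(yaw1, pitch1, yaw2, pitch2):
    """Great-circle angle (radians) between two view directions on the sphere."""
    c = (math.sin(pitch1) * math.sin(pitch2)
         + math.cos(pitch1) * math.cos(pitch2) * math.cos(yaw1 - yaw2))
    return math.acos(min(1.0, max(-1.0, c)))  # clamp for float safety


def select_tiles(head_yaw, head_pitch, n_cols=8, n_rows=4,
                 margin=math.radians(60)):
    """Return (row, col) indices of equirectangular tiles whose centres lie
    within `margin` of the view direction; these would be fetched in high
    quality, the remaining tiles in low quality."""
    selected = []
    for r in range(n_rows):
        pitch = math.pi / 2 - (r + 0.5) * math.pi / n_rows      # centre latitude
        for c in range(n_cols):
            yaw = -math.pi + (c + 0.5) * 2 * math.pi / n_cols   # centre longitude
            if angular_distance(head_yaw, head_pitch, yaw, pitch) <= margin:
                selected.append((r, c))
    return selected
```

With the viewer looking straight ahead at the equator (`select_tiles(0.0, 0.0)`), only the few tiles around the view direction are selected, which is what makes the bandwidth saving over sending the full 360° frame possible.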
Cain, Julia. "Understanding film and video as tools for change : applying participatory video and video advocacy in South Africa." Thesis, Stellenbosch : Stellenbosch University, 2009. http://hdl.handle.net/10019.1/1431.
Full text
The purpose of this study is to examine critically the phenomenon of participatory video and to situate within this the participatory video project that was initiated as part of this study in the informal settlement area of Kayamandi, South Africa. The overall objective of the dissertation is to consider the potential of participatory video within current-day South Africa towards enabling marginalised groups to represent themselves and achieve social change. As will be shown, the term 'participatory video' has been used broadly and applied to many different types of video products and processes. For the preliminary purposes of this dissertation, participatory video is defined as any video (or film) process dedicated to achieving change through which the subject(s) has been an integral part of the planning and/or production, as well as a primary end-user or target audience. The two key elements that distinguish participatory video are thus (1) understanding video (or film) as a tool for social change; and (2) understanding participation by the subject as integral to the video process. An historical analysis thus considers various filmmaking developments that fed into the emergence of participatory video. These include various film practices that used film as a tool for change, from Soviet agitprop through to the documentary movement of the 1930s, as well as various types of filmmaking in the 1960s that opened up questions of participation. The Fogo process, developed in the late 1960s, marked the start of participatory video and video advocacy and provided guiding principles for the Kayamandi project initiated as part of this dissertation. Practitioners of the Fogo process helped initiate participatory video practice in South Africa when they brought the process to South African anti-apartheid activists in the early 1970s. The Kayamandi Participatory Video Project draws on this background and context in its planned methodology and its implementation.
Out of this, various theoretical issues arising from participatory video practice contextualise a reflection and an analysis of the Kayamandi project. Lastly, this study draws conclusions and recommendations on participatory video practice in South Africa.
He, Chao. "Advanced wavelet application for video compression and video object tracking." Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1125659908.
Full text
Title from first page of PDF file. Document formatted into pages; contains xvii, 158 p.; also includes graphics (some col.). Includes bibliographical references (p. 150-158). Available online via OhioLINK's ETD Center.
Kozica, Ermin. "Paradigms for Real-Time Video Communication and for Video Distribution." Doctoral thesis, KTH, Ljud- och bildbehandling, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-32203.
Full text
Chhina, Gagun S. "Video gaming parlours : the emergence of video gaming in India." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/video-gaming-parlours-the-emergence-of-video-gaming-in-india(75217f0f-c060-4c68-b708-ed496b3988e1).html.
Full text
Keen, Seth. "Video chaos : multilinear narrative structuration in new media video practice /." Electronic version, 2005. http://adt.lib.uts.edu.au/public/adt-NTSM20050921.151215/index.html.
Full text
Chen, Liyong. "Joint image/video inpainting for error concealment in video coding." Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/HKUTO/record/B39558915.
Full textChen, Liyong, and 陳黎勇. "Joint image/video inpainting for error concealment in video coding." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B39558915.
Full text
Di Caterina, Gaetano. "Video analytics algorithms and distributed solutions for smart video surveillance." Thesis, University of Strathclyde, 2013. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=18949.
Full text
Bai, Yannan. "Video analytics system for surveillance videos." Thesis, 2018. https://hdl.handle.net/2144/30739.
Full text
Parimala, Anusha. "Video Enhancement: Video Stabilization." Thesis, 2018. http://ethesis.nitrkl.ac.in/9977/1/2018_MT_216EC6252_AParimala_Video.pdf.
Full text