Academic literature on the topic 'Video'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Video.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Video"

1

Yulianto, Agus, Sisworo Sisworo, and Erry Hidayanto. "Pembelajaran Matematika Berbantuan Video Pembelajaran untuk Meningkatkan Motivasi dan Hasil Belajar Peserta Didik." Mosharafa: Jurnal Pendidikan Matematika 11, no. 3 (September 30, 2022): 403–14. http://dx.doi.org/10.31980/mosharafa.v11i3.1396.

Full text
Abstract:
The teacher's ability to choose media and to package the teaching and learning process strongly determines success in learning, because students' interest in using textbooks is still lacking. This study aims to apply learning videos to increase motivation and learning outcomes. The learning videos were made to accompany student worksheets (LKPD). The research subjects were 36 tenth-grade Accounting students at a state vocational high school (SMKN) in Trenggalek, and the research data were processed and analyzed descriptively. Students were first given the learning videos and LKPD through a WhatsApp group; then, according to the schedule, they joined the Google Meet session provided to discuss anything that was unclear in the learning videos.
The results showed an increase in motivation and student learning outcomes: students were active in online learning activities, completed the LKPD on time according to the instructions given, and achieved more with the help of the learning videos. In cycle I, the students' mastery level reached 77.8%, and in cycle II it reached 92%. Learning videos proved useful in increasing learning motivation.
APA, Harvard, Vancouver, ISO, and other styles
2

Yulianto, Agus, Sisworo Yulianto, and Erry Hidayanto. "Pembelajaran Matematika Berbantuan Video Pembelajaran untuk Meningkatkan Motivasi dan Hasil Belajar Peserta Didik." Mosharafa: Jurnal Pendidikan Matematika 11, no. 3 (September 30, 2022): 403–14. http://dx.doi.org/10.31980/mosharafa.v11i3.731.

Full text
Abstract:
The teacher's ability to choose media and to package the teaching and learning process strongly determines success in learning, because students' interest in using textbooks is still lacking. This study aims to apply learning videos to increase motivation and learning outcomes. The learning videos were made to accompany student worksheets (LKPD). The research subjects were 36 tenth-grade Accounting students at a state vocational high school (SMKN) in Trenggalek, and the research data were processed and analyzed descriptively. Students were first given the learning videos and LKPD through a WhatsApp group; then, according to the schedule, they joined the Google Meet session provided to discuss anything that was unclear in the learning videos.
The results showed an increase in motivation and student learning outcomes: students were active in online learning activities, completed the LKPD on time according to the instructions given, and achieved more with the help of the learning videos. In cycle I, the students' mastery level reached 77.8%, and in cycle II it reached 92%. Learning videos proved useful in increasing learning motivation.
3

S., Sankirti, and P. M. Kamade. "Video OCR for Video Indexing." International Journal of Engineering and Technology 3, no. 3 (2011): 287–89. http://dx.doi.org/10.7763/ijet.2011.v3.239.

Full text
4

Tafesse, Wondwesen. "YouTube marketing: how marketers' video optimization practices influence video views." Internet Research 30, no. 6 (July 3, 2020): 1689–707. http://dx.doi.org/10.1108/intr-10-2019-0406.

Full text
Abstract:
Purpose: YouTube's vast and engaged user base makes it central to firms' digital marketing effort. With extant studies focusing on viewers' post-view engagement behavior, however, research into what motivates viewers to click on and watch YouTube videos is scarce. This study investigates the implications of marketers' video optimization practices for video views on YouTube.
Design/methodology/approach: The study employed a data set of videos (N = 4,398) gathered by scraping YouTube's trending list. Using a combination of text and sentiment analysis, the study measured four video optimization practices: information content of video titles, emotional intensity of video titles, information content of video descriptions, and volume of video tags. It then analyzed the effect of these video optimization practices on video views.
Findings: The study finds that greater availability of information in video titles is negatively associated with video views, whereas intensity of negative emotional sentiment in video titles is positively associated with video views. Further, greater availability of information in video descriptions is positively associated with video views. Finally, an inverted U-shaped relationship is found between volume of video tags and video views. Up to 17 video tags can contribute to more video views; however, beyond 17 tags, the relationship turns negative.
Originality/value: This study investigates the effect of marketers' video optimization practices on video views. While extant studies mainly focus on viewers' post-view engagement behavior, such as liking, commenting on, and sharing videos, this study examines video views. Similarly, extant studies investigate videos' internal content, while this study investigates elements of the video metadata.
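The inverted U-shape reported in the findings can be illustrated with a toy calculation. The quadratic form and coefficients below are illustrative assumptions, not the paper's fitted model; only the peak location (17 tags) mirrors the abstract:

```python
# Illustrative sketch: an inverted-U relationship y = a*x^2 + b*x + c
# with a < 0 peaks at x = -b / (2a). Coefficients are hypothetical,
# chosen so the peak lands at 17 tags, as reported in the abstract.

def quadratic_peak(a: float, b: float) -> float:
    """Return the x-coordinate of the vertex of y = a*x^2 + b*x + c (a != 0)."""
    return -b / (2 * a)

a, b = -1.0, 34.0  # hypothetical fit; only the peak location is meaningful here
peak = quadratic_peak(a, b)
print(peak)  # 17.0: adding tags helps up to this point, then hurts
```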
5

Song, Yaguang, Junyu Gao, Xiaoshan Yang, and Changsheng Xu. "Learning Hierarchical Video Graph Networks for One-Stop Video Delivery." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 1 (January 31, 2022): 1–23. http://dx.doi.org/10.1145/3466886.

Full text
Abstract:
The explosive growth of video data has brought great challenges to video retrieval, which aims to find out related videos from a video collection. Most users are usually not interested in all the content of retrieved videos but have a more fine-grained need. In the meantime, most existing methods can only return a ranked list of retrieved videos lacking a proper way to present the video content. In this paper, we introduce a distinctively new task, namely One-Stop Video Delivery (OSVD) aiming to realize a comprehensive retrieval system with the following merits: it not only retrieves the relevant videos but also filters out irrelevant information and presents compact video content to users, given a natural language query and video collection. To solve this task, we propose an end-to-end Hierarchical Video Graph Reasoning framework (HVGR) , which considers relations of different video levels and jointly accomplishes the one-stop delivery task. Specifically, we decompose the video into three levels, namely the video-level, moment-level, and the clip-level in a coarse-to-fine manner, and apply Graph Neural Networks (GNNs) on the hierarchical graph to model the relations. Furthermore, a pairwise ranking loss named Progressively Refined Loss is proposed based on prior knowledge that there is a relative order of the similarity of query-video, query-moment, and query-clip due to the different granularity of matched information. Extensive experimental results on benchmark datasets demonstrate that the proposed method achieves superior performance compared with baseline methods.
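The "relative order of similarity" behind the Progressively Refined Loss can be sketched as a pair of hinge penalties. The ordering direction and margin form below are assumptions for illustration, not the paper's actual formulation:

```python
# Illustrative sketch (assumed form): enforce a coarse-to-fine ordering of
# query similarities, here sim(q, clip) >= sim(q, moment) >= sim(q, video)
# for matched pairs, via margin-based hinge terms.

def hinge(x: float) -> float:
    return max(0.0, x)

def progressively_refined_loss(sim_video: float, sim_moment: float,
                               sim_clip: float, margin: float = 0.1) -> float:
    """Penalize violations of the assumed similarity ordering with margin."""
    return (hinge(margin + sim_video - sim_moment)
            + hinge(margin + sim_moment - sim_clip))

# Ordering satisfied with room to spare: no penalty.
print(progressively_refined_loss(0.5, 0.7, 0.9))  # 0.0
```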
6

Lin, Meihan. "Impacts of Short Video to Long Video and the Corresponding Countermeasures: Taking Tencent Video as an Example." Highlights in Science, Engineering and Technology 92 (April 10, 2024): 194–98. http://dx.doi.org/10.54097/rnxg6e63.

Full text
Abstract:
The video industry is a comprehensive field that integrates cultural, technological, and economic attributes. It uses artificial intelligence and other high-tech means as a communication medium, with film and television entertainment content at its core, and has become an important part of the tertiary industry. At the same time, the video industry has a profound impact on people's living conditions and spiritual world. With the rapid rise and prosperity of short videos in recent years, the traditional video industry has been greatly impacted. Taking Tencent Video as an example, this article analyzes in depth the impact of short videos on long videos in terms of copyright and profit, and proposes feasible measures to help Tencent Video adjust its profit structure and development layout and further promote new development in the long-video field.
7

Handiani, Riana Ezra Savitry Is, and Surya Bintarti. "Pengaruh Conversation Dan Co-Creation Terhadap Customer Loyalty Dengan Mediasi Experience Quality Dan Moderasi Currency Pada Pengguna Layanan Vod Vidio Di Kabupaten Bekasi." Journal of Economic, Bussines and Accounting (COSTING) 7, no. 4 (June 24, 2024): 9159–70. http://dx.doi.org/10.31539/costing.v7i4.10365.

Full text
Abstract:
Since the Covid-19 pandemic arrived in Indonesia, people have become accustomed to carrying out their activities from home, one of which is watching films. Vidio is an online streaming platform that provides various online videos such as films, sports broadcasts, original series, and more. The aim of this research is to test the effect of conversation, co-creation, currency, and experience quality on customer loyalty in the Vidio VoD application service. The research was conducted in the Bekasi Regency area with 114 respondents, namely users who have used Vidio. The sampling technique was nonprobability sampling, specifically purposive sampling. Correlation and regression were tested with the SmartPLS 3.0 software, which was also used to test validity and reliability. The research shows that: 1) the conversation activities carried out by the Vidio Video on Demand service are able to raise consumers' experience quality; 2) the co-creation arranged by the service is able to raise consumers' experience quality; 3) conversation activities are able to moderate the effect of currency on consumers' experience quality; 4) co-creation can moderate the effect of currency on consumers' experience quality; 5) the experience quality consumers perceive in the service is able to raise customer loyalty; 6) conversation activities mediate the effect of experience quality on customer loyalty; 7) co-creation mediates the effect of experience quality on customer loyalty.
8

Rachmaniar, Rachmaniar, and Renata Anisa. "Video Inovasi Bisnis Kuliner di Youtube (Studi Etnografi Virtual tentang Keberadaan Video-video Inovasi Bisnis Kuliner di Youtube)." Proceeding of Community Development 1 (April 4, 2018): 89. http://dx.doi.org/10.30874/comdev.2017.14.

Full text
Abstract:
The purpose of this study is to analyze the presence of culinary business innovation videos on YouTube, as seen in videos with high view counts and in the content uploaded by YouTubers related to culinary business innovation. The method used is a qualitative method with a virtual ethnography approach to understand the existence of culinary business innovation videos on YouTube; the main objects of the research are the videos related to culinary business innovation on YouTube. Data were collected through participatory observation and a literature study. The results indicate that the culinary business innovation videos on YouTube with high view counts are those that feature bananas as the basic ingredient of innovatively processed foods that anyone can turn into a source of business, while the video content most uploaded by YouTubers in this area consists of food processing videos that can likewise be used as a source of business.
9

Ji, Wanting, and Ruili Wang. "A Multi-instance Multi-label Dual Learning Approach for Video Captioning." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 2s (June 10, 2021): 1–18. http://dx.doi.org/10.1145/3446792.

Full text
Abstract:
Video captioning is a challenging task in the field of multimedia processing, which aims to generate informative natural language descriptions/captions to describe video contents. Previous video captioning approaches mainly focused on capturing visual information in videos using an encoder-decoder structure to generate video captions. Recently, a new encoder-decoder-reconstructor structure was proposed for video captioning, which captured the information in both videos and captions. Based on this, this article proposes a novel multi-instance multi-label dual learning approach (MIMLDL) to generate video captions based on the encoder-decoder-reconstructor structure. Specifically, MIMLDL contains two modules: caption generation and video reconstruction modules. The caption generation module utilizes a lexical fully convolutional neural network (Lexical FCN) with a weakly supervised multi-instance multi-label learning mechanism to learn a translatable mapping between video regions and lexical labels to generate video captions. Then the video reconstruction module synthesizes visual sequences to reproduce raw videos using the outputs of the caption generation module. A dual learning mechanism fine-tunes the two modules according to the gap between the raw and the reproduced videos. Thus, our approach can minimize the semantic gap between raw videos and the generated captions by minimizing the differences between the reproduced and the raw visual sequences. Experimental results on a benchmark dataset demonstrate that MIMLDL can improve the accuracy of video captioning.
10

Nuratika, Sikin, Safra Apriani Zahraa, and M. I. Gunawan. "THE MAKING OF PROFILE VIDEO ABOUT TOURISM IN SIAK REGENCY." INOVISH JOURNAL 4, no. 1 (June 29, 2019): 102. http://dx.doi.org/10.35314/inovish.v4i1.958.

Full text
Abstract:
Tourism is very important in Indonesia, and there are many ways to promote it; one of them is through video. Many people have made videos to promote tourism in Siak Regency, but only short advertisement videos: their duration was limited, and the dubber explained the video in Bahasa Indonesia. This profile video about tourism in Siak Regency will therefore help the regency promote its tourism destinations. The main purpose of this final project is to explain the process of making a profile video about tourism in Siak Regency, using a descriptive method. The steps in making the video included collecting data, preparing materials, recording the video, adding subtitles, dubbing, and finally editing. The video covers ten places and a tourism event, and it can help students, the Tourism Office of Siak Regency, the local community, and especially the international community easily obtain information about the history and tourism destinations of Siak Regency.

Dissertations / Theses on the topic "Video"

1

Sedlařík, Vladimír. "Informační strategie firmy." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2012. http://www.nusl.cz/ntk/nusl-223526.

Full text
Abstract:
This thesis analyzes the YouTube service and describes its main deficiencies. Based on theoretical methods and analyses, its main goal is to design a service that will solve the main YouTube problems, build a company around this service and introduce this service to the market. This service will not replace YouTube, but it will supplement it. Further, this work will suggest a possible structure, strategy and information strategy of this new company and its estimated financial results in the first few years.
2

Lindskog, Eric, and Jesper Wrang. "Design of video players for branched videos." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148592.

Full text
Abstract:
Interactive branched video allows users to make viewing decisions while watching that affect the playback path of the video and potentially the outcome of the story. This type of video introduces new design challenges, for example displaying the playback progress, the structure of the branched video, and the choices that viewers can make. In this thesis we test three implementations of working video players with different types of playback bars: one showing the full structure with no moving parts, one that zooms into the currently watched section of the video, and one that leverages a fisheye distortion. A number of usability tests were carried out using surveys complemented with observations made during the tests. Based on these user tests we concluded that the implementation with a zoomed-in playback bar was the easiest to understand, while the fisheye effect received mixed reactions, ranging from distracting and annoying to interesting and clear. With this feedback a new set of implementations was created and solutions for each component of the video player were identified. These new implementations support more general solutions for the shape of the branch segments and the position and location of the choices for upcoming branches. The new implementations have not gone through any testing, but we expect that future work can further explore this subject with the help of our code and suggestions.
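The branched structure that these playback bars must visualize can be sketched as a small tree of segments. The types and helper below are illustrative, not the thesis's implementation:

```python
# Illustrative sketch: a branched video as a tree of segments, where each
# segment either ends the video or offers a choice among next segments.
from dataclasses import dataclass, field

@dataclass
class Segment:
    name: str
    duration_s: float
    choices: list["Segment"] = field(default_factory=list)

def longest_path_duration(seg: Segment) -> float:
    """Total duration of the longest playback path starting at this segment,
    e.g. to size a playback bar that shows the whole structure."""
    if not seg.choices:
        return seg.duration_s
    return seg.duration_s + max(longest_path_duration(c) for c in seg.choices)

# Hypothetical two-ending video: a 60 s intro branching into two endings.
ending_a = Segment("ending A", 30)
ending_b = Segment("ending B", 45)
intro = Segment("intro", 60, [ending_a, ending_b])
print(longest_path_duration(intro))  # 105
```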
3

Salam, Sazilah. "VidIO : a model for personalized video information management." Thesis, University of Southampton, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242411.

Full text
4

Aklouf, Mourad. "Video for events : Compression and transport of the next generation video codec." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG029.

Full text
Abstract:
The acquisition and delivery of video content with minimal latency have become essential in several business areas such as sports broadcasting, video conferencing, telepresence, remote vehicle operation, and remote system control. The live streaming industry grew in 2020 and will expand further in the next few years with the emergence of new high-efficiency video codecs based on the Versatile Video Coding (VVC) standard and the fifth generation of mobile networks (5G). HTTP Adaptive Streaming (HAS) methods such as MPEG-DASH, using algorithms to adapt the transmission rate of compressed video, have proven very effective at improving the quality of experience (QoE) in a video-on-demand (VOD) context. Nevertheless, minimizing the delay between image acquisition and display at the receiver is essential in applications where latency is critical. Most rate adaptation algorithms are developed to optimize video transmission from a server in the core network to mobile clients. In applications requiring low-latency streaming, such as remote control of drones or broadcasting of sports events, the role of the server is played by a mobile terminal, which acquires, compresses, and transmits the compressed stream via a radio access channel to one or more clients. Client-driven rate adaptation approaches are therefore unsuitable in this context because of the variability of the channel characteristics. In addition, HAS methods, for which decisions are made with a periodicity on the order of a second, are not sufficiently reactive when the server is moving, which may generate significant delays. It is therefore important to use a very fine adaptation granularity to reduce the end-to-end delay. The reduced size of the transmission and reception buffers (to minimize latency) makes rate adaptation more difficult in our use case. When the bandwidth varies with a time constant smaller than the regulation period, bad transmission rate decisions can induce a significant latency overhead.
The aim of this thesis is to provide some answers to the problem of low-latency delivery of video acquired, compressed, and transmitted by mobile terminals. We first present a frame-by-frame rate adaptation algorithm for low-latency broadcasting. A Model Predictive Control (MPC) approach is proposed to determine the coding rate of each frame to be transmitted, using information about the transmitter's buffer level and the characteristics of the transmission channel. Since the frames are coded live, a model relating the quantization parameter (QP) to the output rate of the video encoder is required. Hence, we propose a new model linking the rate to the QP of the current frame and to the distortion of the previous frame. This model provides much better results for frame-by-frame coding rate decisions than the reference models in the literature.
In addition to the above techniques, we also propose tools to reduce the complexity of video encoders such as VVC. The current version of the VVC encoder (VTM10) has an execution time nine times higher than that of the HEVC encoder, so the VVC encoder is not suitable for real-time encoding and streaming applications on currently available platforms. In this context, we present a systematic branch-and-prune method to identify a set of coding tools that can be disabled while satisfying a constraint on coding efficiency. This work contributes to the realization of a real-time VVC encoder.
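The frame-by-frame budget idea from the abstract can be sketched as a simple buffer-aware control law. The function below is an illustrative proportional controller, not the thesis's MPC formulation or its QP-rate model:

```python
# Illustrative sketch: pick a bit budget for the next frame from the estimated
# channel rate and the sender's buffer level. The control law and parameter
# names are assumptions for illustration only.

def next_frame_rate(channel_rate_bps: float,
                    buffer_bits: float,
                    frame_interval_s: float,
                    target_buffer_bits: float = 0.0,
                    gain: float = 0.5) -> float:
    """Bit budget for the next frame: what the channel can drain in one frame
    interval, corrected by a proportional term steering the buffer to target."""
    drain = channel_rate_bps * frame_interval_s
    correction = gain * (target_buffer_bits - buffer_bits)
    return max(0.0, drain + correction)

# Example: 2 Mbps channel, 30 fps, 20 kbit already queued at the sender.
budget = next_frame_rate(2_000_000, 20_000, 1 / 30)
print(round(budget))  # 56667: below the per-frame drain, to empty the backlog
```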
5

Le, Thuc Trinh. "Video inpainting and semi-supervised object removal." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT026/document.

Full text
Abstract:
Nowadays, the rapid growth of video creates a massive demand for video-based editing applications. In this dissertation, we solve several problems relating to video post-processing and focus on the application of object removal in video. We divide this task into two problems: (1) a video object segmentation step to select which objects to remove, and (2) a video inpainting step to fill in the damaged regions.
For the video segmentation problem, we design a system suitable for object removal applications with different requirements in terms of accuracy and efficiency. Our approach relies on the combination of Convolutional Neural Networks (CNNs) for segmentation and the classical mask tracking method. In particular, we adopt segmentation networks for the image case and apply them to video by performing frame-by-frame segmentation. By exploiting both offline and online training with first-frame annotation only, the networks are able to produce highly accurate video object segmentation. Besides, we propose a mask tracking module to ensure temporal continuity and a mask linking module to ensure identity coherence across frames. Moreover, we introduce a simple way to learn the dilation layer in the mask, which helps us create suitable masks for the video object removal application.
For the video inpainting problem, we divide our work into two categories based on the type of background. In particular, we present a simple motion-guided pixel propagation method to deal with static-background cases, showing that the problem of object removal with a static background can be solved efficiently using a simple motion-based technique. To deal with dynamic backgrounds, we introduce a video inpainting method that optimizes a global patch-based energy function. To increase the speed of the algorithm, we propose a parallel extension of the 3D PatchMatch algorithm; to improve accuracy, we systematically incorporate the optical flow in the overall process. We end up with a video inpainting method able to reconstruct moving objects and reproduce dynamic textures while running in reasonable time.
Finally, we combine the video object segmentation and video inpainting methods into a unified system to remove unwanted objects in videos; to our knowledge, this is the first system of its kind. The user only has to roughly delimit, in the first frame, the objects to edit, an annotation process facilitated by superpixels. These annotations are then refined and propagated through the video by the video object segmentation method, and one or more objects can be removed automatically using our video inpainting methods. The result is a flexible computational video editing tool, with many potential applications ranging from crowd removal to the correction of non-physical scenes.
We end up with a video inpainting method which is able to reconstruct moving objects as well as reproduce dynamic textures while running in a reasonable time.Finally, we combine the video objects segmentation and video inpainting methods into a unified system to removes undesired objects in videos. To the best of our knowledge, this is the first system of this kind. In our system, the user only needs to approximately delimit in the first frame the objects to be edited. These annotation process is facilitated by the help of superpixels. Then, these annotations are refined and propagated through the video by the video objects segmentation method. One or several objects can then be removed automatically using our video inpainting methods. This results in a flexible computational video editing tool, with numerous potential applications, ranging from crowd suppression to unphysical scenes correction
APA, Harvard, Vancouver, ISO, and other styles
6

Lei, Zhijun. "Video transcoding techniques for wireless video communications." Thesis, University of Ottawa (Canada), 2004. http://hdl.handle.net/10393/29134.

Full text
Abstract:
The transmission of compressed video over channels with different capacities may require a reduction in bit rate when the transmission channel has a lower capacity than that required by the video bit-stream, or when the channel capacity changes over time. The process of converting one compressed video format into another is known as transcoding. This thesis addresses the specific transcoding problem of dynamic bit-rate adaptation for transmission over low-bandwidth wireless channels. Transmitting compressed video over such channels requires accurate and efficient rate-control schemes, and we propose several techniques to improve transcoding performance. Based on our experimental results, we present an approximate linear bit-allocation model and a macroblock-layer rate-control algorithm that achieve an accurate transcoding bit rate. By reusing statistics from the incoming compressed video, the bit rate of the transcoded video can be determined according to the video scene context. For a bursty-error wireless channel, we propose a solution that combines video transcoding with an ARQ protocol. To make sure the end decoder can decode and play the transcoded video within the required end-to-end delay, we analyze the rate and buffer constraints of the transcoder and derive the conditions it has to meet. To test the proposed solution, we use a statistical channel model to simulate the wireless channel, and use this model together with channel observations to estimate the effective channel bandwidth, which is fed back to the transcoder for better rate control. We discuss two applications. For real-time video communication over a wireless channel, we propose an algorithm that determines the transcoding scaling factor from the end-to-end delay, buffer fullness, and effective channel bandwidth. For pre-encoded video distribution over wireless channels, we propose an algorithm that determines the transcoding bit budget from the end-to-end delay, effective bandwidth, and original video bit profile. The proposed algorithm outperforms H.263 TMN8 in terms of video quality and buffer behavior with the same computational requirements.
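As a rough illustration of the kind of buffer-aware rate-control decision the abstract describes, the sketch below computes a per-frame bit budget from the estimated channel bandwidth and the encoder buffer fullness. This is an illustrative simplification, not the thesis's actual algorithm: the function name, the linear feedback gain, and the minimal-budget floor are all assumptions.

```python
def frame_bit_budget(channel_bps, frame_rate, buffer_bits, buffer_capacity,
                     gain=0.1):
    """Target bits for the next frame: the channel's per-frame share,
    reduced when the encoder buffer is fuller than half capacity and
    increased when it is emptier, to keep end-to-end delay bounded."""
    base = channel_bps / frame_rate
    deviation = buffer_bits - buffer_capacity / 2.0  # positive = too full
    target = base - gain * deviation
    return max(target, 0.1 * base)  # always grant a minimal budget

# At half-full buffer the budget equals the channel's per-frame share:
# frame_bit_budget(64000, 10, 40000, 80000) -> 6400.0
```

A fuller buffer yields a smaller budget, steering the buffer back toward half capacity; the effective bandwidth estimate fed back from the channel model would replace `channel_bps` here.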
APA, Harvard, Vancouver, ISO, and other styles
7

Milovanovic, Marta. "Pruning and compression of multi-view content for immersive video coding." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT023.

Full text
Abstract:
This thesis addresses the problem of efficiently compressing immersive video content represented in the Multiview Video plus Depth (MVD) format. The Moving Picture Experts Group (MPEG) standard for the transmission of MVD data, MPEG Immersive Video (MIV), uses 2D video codecs to compress the source texture and depth information. Compared to traditional video coding, immersive video coding is more complex and is constrained not only by the trade-off between bitrate and quality but also by the pixel rate. For this reason, MIV uses pruning to reduce the pixel rate and inter-view correlations, creating a mosaic of image pieces (patches). Decoder-side depth estimation (DSDE) has emerged as an alternative approach that improves the immersive video system by avoiding the transmission of depth maps and moving the depth estimation process to the decoder side. DSDE has been studied for the case of numerous fully transmitted views (without pruning). In this thesis, we demonstrate possible advances in immersive video coding, with an emphasis on pruning the source content. We go beyond DSDE and examine the distinct effect of patch-level depth restoration at the decoder side. We propose two approaches to incorporate DSDE on content pruned with MIV: the first excludes a subset of depth maps from transmission, and the second uses the quality of depth patches estimated at the encoder side to distinguish between those that must be transmitted and those that can be recovered at the decoder side. Our experiments show a 4.63 BD-rate gain for Y-PSNR on average. Furthermore, we explore the use of neural image-based rendering (IBR) techniques to enhance the quality of novel view synthesis and show that neural synthesis itself provides the information needed to prune the content. Our results show a good trade-off between pixel rate and synthesis quality, improving view synthesis by 3.6 dB on average.
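The BD-rate figure quoted in the abstract is the standard Bjøntegaard-delta bitrate metric. The sketch below is a generic reimplementation of that metric (a cubic fit of log-rate versus PSNR for each codec, integrated over the overlapping quality range), not code from the thesis:

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjontegaard-delta bitrate: average percent bitrate difference of
    the test codec versus the reference at equal quality. Negative
    values mean the test codec needs fewer bits."""
    lr_ref, lr_test = np.log10(rates_ref), np.log10(rates_test)
    # Cubic fit of log-rate as a function of quality (PSNR).
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)
    p_test = np.polyfit(psnr_test, lr_test, 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_ref) / (hi - lo)
    return (10 ** avg_log_diff - 1) * 100
```

For example, a test codec that spends exactly half the bitrate of the reference at every quality point yields a BD-rate of -50.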
APA, Harvard, Vancouver, ISO, and other styles
8

Arrufat, Batalla Adrià. "Multiple transforms for video coding." Thesis, Rennes, INSA, 2015. http://www.theses.fr/2015ISAR0025/document.

Full text
Abstract:
State-of-the-art video codecs use transforms to ensure a compact signal representation. The transform stage is where compression takes place; however, little variety is observed in the types of transforms used in standardized video coding schemes: often a single transform is considered, usually a Discrete Cosine Transform (DCT). Recently, other transforms have started being considered in addition to the DCT. For instance, in the latest video coding standard, High Efficiency Video Coding (HEVC), 4x4 blocks can use the Discrete Sine Transform (DST), and it is also possible not to transform them at all. This reveals an increasing interest in considering a plurality of transforms to achieve higher compression rates. This thesis focuses on extending HEVC through the use of multiple transforms. After a general introduction to video compression and transform coding, two transform designs are studied in detail: the Karhunen-Loève Transform (KLT) and a rate-distortion optimised transform. These two methods are compared against each other by replacing the transforms in HEVC; this experiment validates the appropriateness of the designs. A coding scheme that incorporates and boosts the use of multiple transforms is then introduced: several transforms are made available to the encoder, which chooses the one that provides the best rate-distortion trade-off. A design method for building systems using multiple transforms is also described. With this coding scheme, significant bit-rate savings are achieved over HEVC, especially when using many complex transforms. However, these improvements come at the expense of increased complexity in terms of encoding, decoding, and storage requirements. As a result, simplifications are considered while limiting the impact on bit-rate savings. A first approach is introduced in which incomplete transforms are used: transforms of this kind use a single basis vector and are conceived to work alongside the HEVC transforms. This technique is evaluated and provides significant complexity reductions over the previous system, although the bit-rate savings are modest. Finally, a systematic method is designed that determines the best trade-offs between the number of transforms and bit-rate savings. This method uses two different types of transforms based on separable orthogonal transforms, in particular Discrete Trigonometric Transforms (DTTs). Several designs are presented, allowing for different trade-offs between complexity and bit-rate savings. These systems reveal the interest of using multiple transforms for video coding.
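The core idea of letting the encoder pick among several transforms by rate-distortion cost can be sketched as follows. This is a toy illustration, not the thesis's system: the DCT-II and DST-VII matrices are the standard textbook definitions (HEVC uses integer approximations of them), and proxying the rate by the count of nonzero quantized coefficients is an assumption.

```python
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II basis; rows are basis vectors."""
    k = np.arange(n)[:, None]  # frequency index
    x = np.arange(n)[None, :]  # sample index
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)       # DC row normalisation
    return m

def dst7_matrix(n):
    """Orthonormal DST-VII basis (HEVC uses its integer form for 4x4)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    return (2.0 / np.sqrt(2 * n + 1)) * np.sin(
        np.pi * (2 * x + 1) * (k + 1) / (2 * n + 1))

def best_transform(block, transforms, lam=1.0, step=8.0):
    """Pick the transform with the lowest rate-distortion cost
    J = D + lambda * R, with R crudely proxied by the number of
    nonzero quantized coefficients."""
    best = None
    for name, t in transforms.items():
        coeffs = t @ block @ t.T      # separable 2-D transform
        q = np.round(coeffs / step)   # uniform quantization
        recon = t.T @ (q * step) @ t  # dequantize + inverse transform
        dist = float(np.sum((block - recon) ** 2))
        rate = int(np.count_nonzero(q))
        cost = dist + lam * rate
        if best is None or cost < best[1]:
            best = (name, cost)
    return best[0]
```

On a flat (DC-only) block the DCT wins, since it concentrates all the energy into a single coefficient, while the sine basis spreads it over several.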
APA, Harvard, Vancouver, ISO, and other styles
9

Le, Thuc Trinh. "Video inpainting and semi-supervised object removal." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT026.

Full text
Abstract:
Nowadays, the rapid growth of video creates a massive demand for video-based editing applications. In this dissertation, we solve several problems relating to video post-processing, focusing on the application of object removal in video. To complete this task, we divide it into two problems: (1) a video object segmentation step to select which objects to remove, and (2) a video inpainting step to fill the damaged regions. For the video segmentation problem, we design a system suitable for object removal applications with different requirements in terms of accuracy and efficiency. Our approach relies on the combination of Convolutional Neural Networks (CNNs) for segmentation and a classical mask tracking method. In particular, we adopt segmentation networks designed for images and apply them to video by performing frame-by-frame segmentation. By exploiting both offline and online training with first-frame annotation only, the networks are able to produce highly accurate video object segmentation. Besides, we propose a mask tracking module to ensure temporal continuity and a mask linking module to ensure identity coherence across frames. Moreover, we introduce a simple way to learn the dilation layer in the mask, which helps us create suitable masks for the video object removal application. For the video inpainting problem, we divide our work into two categories based on the type of background. In particular, we present a simple motion-guided pixel propagation method to deal with static background cases, and show that the problem of object removal with a static background can be solved efficiently using a simple motion-based technique. To deal with dynamic backgrounds, we introduce a video inpainting method that optimizes a global patch-based energy function. To increase the speed of the algorithm, we propose a parallel extension of the 3D PatchMatch algorithm; to improve accuracy, we systematically incorporate optical flow into the overall process. We end up with a video inpainting method able to reconstruct moving objects as well as reproduce dynamic textures while running in a reasonable time. Finally, we combine the video object segmentation and video inpainting methods into a unified system to remove undesired objects in videos. To the best of our knowledge, this is the first system of this kind. In our system, the user only needs to roughly delimit the objects to be edited in the first frame; this annotation process is facilitated by superpixels. These annotations are then refined and propagated through the video by the video object segmentation method, and one or several objects can be removed automatically using our video inpainting methods. The result is a flexible computational video editing tool with numerous potential applications, ranging from crowd suppression to the correction of unphysical scenes.
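The static-background case described in the abstract can be sketched minimally: for each pixel hidden by the object mask, copy the value from the temporally nearest frame where that pixel is visible. This is an illustrative simplification of temporal pixel propagation (real footage needs the motion guidance the thesis describes, which this sketch omits; the function name and the grayscale array layout are assumptions).

```python
import numpy as np

def propagate_static_background(frames, masks):
    """Fill pixels hidden by the object mask in each frame by copying
    the value from the temporally nearest frame where that pixel is
    visible. Assumes a static background and known per-frame masks.
    frames: (T, H, W) array; masks: (T, H, W) bool, True = remove."""
    frames = np.asarray(frames, dtype=float)
    masks = np.asarray(masks, dtype=bool)
    out = frames.copy()
    T = frames.shape[0]
    for t in range(T):
        for y, x in zip(*np.nonzero(masks[t])):
            # Search frames in order of temporal distance from t.
            for s in sorted(range(T), key=lambda s: abs(s - t)):
                if not masks[s, y, x]:
                    out[t, y, x] = frames[s, y, x]
                    break  # nearest unoccluded observation found
    return out
```

Pixels never visible in any frame stay untouched; those are the cases that require the patch-based inpainting described above.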
APA, Harvard, Vancouver, ISO, and other styles
10

Dufour, Sophie-Isabelle. "Imaginem video : L'image vidéo dans l'"histoire longue" des images." Paris 3, 2004. http://www.theses.fr/2004PA030054.

Full text
Abstract:
What do I see when I look at a video? Actually, I do not see a video, but an image. The purpose of this study is to examine the status of the video image from the point of view of the so-called "long history" of images, dealing with age-old problems that arose long before the technical invention of the medium. A distinction must be made between images and the very notion of image: one could say that the elusive notion of image can only be apprehended through the various media in which it embodies itself. In this study, video is treated insofar as it questions the image itself. Works of art keep their privilege here, because through them the status of the video image is best revealed; but the intention is to show that the powers of the image go far beyond aesthetics. The first problem addressed is the love of the image, raised exemplarily by the myth of Narcissus, because it proves seminal: it leads to the notion of fluidity, which serves as the guiding thread of the reflection on the ghostliness of the video image and on its spatiality. The study of the relations between the video image and time is, in turn, oriented by the notion of flux, Bergson's in particular; the ultimate aim is to think the video image in all its singularity.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Video"

1

European Commission. Directorate General Information, Communication, Culture, Audiovisual. Video-Katalog =: Video catalogue = Catalogue vidéo. Luxembourg: Office for Official Publications of the European Communities = Office des publications officielles des Communautés européennes, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Goethe-Institut. Video Katalog: Videos for loan. London: Goethe-Institut, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

European Commission. Directorate General X for Information, Communication, Culture, Audiovisual. Video-Katalog =: Video catalogue = Catalogue video. Luxemburg: Amt für amtliche Veröffentlichungen der Europäischen Gemainschaften, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

1967-, Weir Kathryn Elizabeth, Chambers Nicholas, and Queensland Art Gallery, eds. Video hits: Art & music video. South Brisbane: Queensland Art Gallery, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Publications, Educational Technology, ed. Interactive video. Englewood Cliffs, N.J: Educational Technology Publications, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sokol, Erich. Video. Wien: Jugend & Volk, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Coulter, George. Video. Vero Beach, Fla: Rourke Publications, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Biel, Jackie. Video. Tarrytown, N.Y: Benchmark Books, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mike, Lavery, and Rinvolucri Mario, eds. Video. Oxford: Oxford University Press, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Castro, Vicente González. Video. Vedado, Ciudad de La Habana: Editorial P. de la Torriente, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Video"

1

Tassi, Laura, Valeria Mariani, Veronica Pelliccia, and Roberto Mai. "Video-Electroencephalography (Video-EEG)." In Clinical Electroencephalography, 305–17. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-04573-9_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Baumgardt, Michael. "Video." In Web Design kreativ!, 144–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-642-56961-6_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Baldwin, Dennis, Jamie Macdonald, Keith Peters, Jon Steer, David Tudury, Jerome Turner, Steve Webster, Alex White, and Todd Yard. "Video." In Flash MX Studio, 493–528. Berkeley, CA: Apress, 2002. http://dx.doi.org/10.1007/978-1-4302-5166-8_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Farkas, Bart, and Jeff Govier. "Video." In Use Your PC to Build an Incredible Home Theater System, 3–31. Berkeley, CA: Apress, 2003. http://dx.doi.org/10.1007/978-1-4302-5174-3_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Green, Tom, and Tiago Dias. "Video." In Foundation Flash CS5 for Designers, 527–99. Berkeley, CA: Apress, 2010. http://dx.doi.org/10.1007/978-1-4302-2995-7_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Green, Tom, and David Stiller. "Video." In Foundation Flash CS4 for Designers, 441–91. Berkeley, CA: Apress, 2009. http://dx.doi.org/10.1007/978-1-4302-1094-8_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Green, Tom, and Joseph Labrecque. "Video." In Beginning Adobe Animate CC, 477–518. Berkeley, CA: Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-2376-5_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Weik, Martin H. "video." In Computer Science and Communications Dictionary, 1889. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_20759.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Roach, J. W. "Video." In Human-Machine Interactive Systems, 185–97. Boston, MA: Springer US, 1991. http://dx.doi.org/10.1007/978-1-4684-5883-1_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lemke, Inga. "Video." In Handbuch Populäre Kultur, 472–78. Stuttgart: J.B. Metzler, 2003. http://dx.doi.org/10.1007/978-3-476-05001-4_103.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Video"

1

Han, Haochen, and Yu Sun. "A Video Note Taking System to Make Online Video Learning Easier." In 10th International Conference on Information Technology Convergence and Services (ITCSE 2021). AIRCC Publishing Corporation, 2021. http://dx.doi.org/10.5121/csit.2021.110917.

Full text
Abstract:
Recent coronavirus lockdowns have had a significant impact on how students study. As states shut down schools, millions of students are now required to study at home with pre-recorded videos. This, however, proves challenging: teachers have no way of knowing whether students are paying attention to the videos, and students may easily be distracted from important parts of them. Currently, there is virtually no research and development of applications devoted specifically to taking digital notes from videos effectively. This paper introduces the web application we developed for streamlined, video-focused, auto-schematic note-taking. We applied our application to school-related video lectures and conducted a qualitative evaluation of the approach. The results show that the tool increases productivity when taking notes from a video and is more effective and informative than conventional paper notes.
2. D., Minola Davids, and Seldev Christopher C. "Surveillance Video Summarization based on Target Object Detection." In The International Conference on Scientific Innovations in Science, Technology, and Management. International Journal of Advanced Trends in Engineering and Management, 2023. http://dx.doi.org/10.59544/jist4192/ngcesi23p94.

Abstract:
The recent trend of deploying surveillance cameras in many private and public premises has caused the number of surveillance videos to grow exponentially. The information gained from these videos helps not only the property owner but also police and security officials in crime investigations. Though such videos have several applications, their storage, management, and retrieval remain challenging. Hence, it is important to develop an efficient technique that condenses a long video into a shorter one with semantic information by eliminating redundant and unimportant frames. This shrinks the video for efficient storage and lets users gain a complete understanding of the video by watching only the shorter version, without spending more time on the original. To achieve this objective, this paper proposes a video summarization technique for surveillance videos that extracts the target object using YOLO, discards the remaining frames, and finally combines the extracted key frames into a single video. The method first detects the target object in the original video frames, then eliminates the irrelevant frames without prominent objects, leaving only the key frames that are of interest to the user; finally, those frames are combined to form a summarized video.
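The pipeline the abstract describes — detect the target per frame, drop frames without it, and concatenate the keepers — can be sketched in plain Python. The `contains_target` predicate below is a hypothetical stand-in for a real YOLO detector, and the dict-based frames are toy data, not the paper's implementation:

```python
from typing import Callable, List, Sequence

def summarize(frames: Sequence, contains_target: Callable) -> List:
    """Keep only the frames where the detector reports the target object,
    preserving their original order for the summary video."""
    return [f for f in frames if contains_target(f)]

# Toy frames: each dict lists the object classes a per-frame detector
# (e.g. YOLO) would return.
frames = [{"objs": ["car"]}, {"objs": []}, {"objs": ["person", "car"]}, {"objs": ["tree"]}]
key_frames = summarize(frames, lambda f: "person" in f["objs"])
print(len(key_frames))  # 1
```

In a real system the kept frames would then be re-encoded into a single short clip; here the list of key frames stands in for that step.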
3. Yu, Yipeng, Xiao Chen, and Hui Zhan. "VideoMaster: A Multimodal Micro Game Video Recreator." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/844.

Abstract:
To free human from laborious video production, this paper proposes the building of VideoMaster, a multimodal system equipped with four capabilities: highlight extraction, video describing, video dubbing and video editing. It extracts interesting episodes from long game videos, generates subtitles for each episode, reads the subtitles through synthesized speech, and finally re-creates a better short video through video editing. Notably, VideoMaster takes a combination of deep learning and traditional computer vision techniques to extract highlights with fine-to-coarse labels, utilizes a novel framework named PCSG-v (probabilistic context sensitive grammar for video) for video description generation, and imitates a target speaker's voice to read the description. To the best of our knowledge, VideoMaster is the first multimedia system that can automatically produce product-level micro-videos without heavy human annotation.
4. Liu, Ziling, Jinyu Yang, Mingqi Gao, and Feng Zheng. "Place Anything into Any Video." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/1019.

Abstract:
Controllable video editing has demonstrated remarkable potential across diverse applications, particularly in scenarios where capturing or re-capturing real-world videos is either impractical or costly. This paper introduces a novel and efficient system named Place-Anything, which facilitates the insertion of any object into any video solely based on a picture or text description of the target object or element. The system comprises three modules: 3D generation, video reconstruction, and 3D target insertion. This integrated approach offers an efficient and effective solution for producing and editing high-quality videos by naturally inserting realistic objects. Through experiments, we demonstrate that our system can effortlessly place any object into any video using just a photograph of the object. Our demo video can be found at https://youtu.be/afXqgLLRnTE. Please also visit our project page https://place-anything.github.io to get more information.
5. Lu, Xinyuan, Shengyuan Huang, Li Niu, Wenyan Cong, and Liqing Zhang. "Deep Video Harmonization With Color Mapping Consistency." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/172.

Abstract:
Video harmonization aims to adjust the foreground of a composite video to make it compatible with the background. So far, video harmonization has only received limited attention and there is no public dataset for video harmonization. In this work, we construct a new video harmonization dataset HYouTube by adjusting the foreground of real videos to create synthetic composite videos. Moreover, we consider the temporal consistency in video harmonization task. Unlike previous works which establish the spatial correspondence, we design a novel framework based on the assumption of color mapping consistency, which leverages the color mapping of neighboring frames to refine the current frame. Extensive experiments on our HYouTube dataset prove the effectiveness of our proposed framework. Our dataset and code are available at https://github.com/bcmi/Video-Harmonization-Dataset-HYouTube.
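The color-mapping-consistency assumption lends itself to a minimal sketch: fit a color mapping from a neighboring frame's (composite, harmonized) pixel pairs and reuse it on the current frame. The per-channel linear fit below is an illustrative simplification under that assumption, not the paper's actual model:

```python
def fit_linear_map(src, dst):
    """Least-squares fit dst ≈ a*src + b for one color channel."""
    n = len(src)
    ms, md = sum(src) / n, sum(dst) / n
    var = sum((s - ms) ** 2 for s in src)
    cov = sum((s - ms) * (d - md) for s, d in zip(src, dst))
    a = cov / var if var else 1.0
    return a, md - a * ms

def refine_frame(prev_src, prev_out, cur_src):
    # Consistency assumption: neighboring frames share nearly the same
    # composite->harmonized color mapping, so estimate it on the previous
    # frame and apply it to the current one.
    a, b = fit_linear_map(prev_src, prev_out)
    return [a * v + b for v in cur_src]

prev_src = [10, 20, 30, 40]
prev_out = [25, 45, 65, 85]   # generated here by the mapping v -> 2*v + 5
print(refine_frame(prev_src, prev_out, [15, 35]))  # [35.0, 75.0]
```

A full method would fit richer mappings (e.g. per-channel LUTs) and blend them over several neighbors; the linear form keeps the idea visible.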
6. Zvezdakova, Anastasia, Sergey Zvezdakov, Dmitriy Kulikov, and Dmitriy Vatolin. "Hacking VMAF with Video Color and Contrast Distortion." In 29th International Conference on Computer Graphics, Image Processing and Computer Vision, Visualization Systems and the Virtual Environment GraphiCon'2019. Bryansk State Technical University, 2019. http://dx.doi.org/10.30987/graphicon-2019-2-53-57.

Abstract:
Video quality measurement plays an important role in many applications. Full-reference quality metrics, which are usually used in video-codec comparisons, are expected to reflect any changes in videos. In this article, we consider different color corrections of compressed videos that increase the values of the full-reference metric VMAF while barely decreasing another widely used metric, SSIM. The proposed video contrast enhancement approach shows the metric's inapplicability in some cases for video-codec comparisons, as it may be used for cheating in comparisons via tuning to improve the metric's values.
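A toy version of the kind of contrast adjustment at issue — pushing pixel values away from the frame mean — can be sketched as follows. The `strength` parameter, rounding, and clamping here are illustrative assumptions, not the authors' exact transform:

```python
def enhance_contrast(pixels, strength=1.1):
    """Push each 8-bit value away from the frame mean by `strength`,
    clamped to [0, 255]; a simple global contrast stretch."""
    m = sum(pixels) / len(pixels)
    return [min(255, max(0, round(m + strength * (p - m)))) for p in pixels]

frame = [100, 120, 140, 160]
print(enhance_contrast(frame))  # [97, 119, 141, 163]
```

The paper's point is that such a post-filter can raise a full-reference score like VMAF without a matching change in SSIM, so a codec comparison relying on one metric can be gamed.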
7. Mendes, Paulo, and Sérgio Colcher. "Spatio-temporal Localization of Actors in Video/360-Video and its Applications." In Simpósio Brasileiro de Sistemas Multimídia e Web. Sociedade Brasileira de Computação - SBC, 2022. http://dx.doi.org/10.5753/webmedia_estendido.2022.224999.

Abstract:
The popularity of platforms for storing and transmitting video content has created a substantial volume of video data. Given a set of actors present in a video, generating metadata with the temporal determination of the interval in which each actor is present and their spatial 2D localization in each frame in these intervals can facilitate video retrieval and recommendation. In this work, we investigate Video Face Clustering for this spatio-temporal localization of actors in videos. We first describe our method for Video Face Clustering in which we take advantage of face detection, embeddings, and clustering methods to group similar faces of actors in different frames and provide the spatio-temporal localization of them. Then, we explore, propose, and investigate innovative applications of this spatio-temporal localization in three different tasks: (i) Video Face Recognition, (ii) Educational Video Recommendation and (iii) Subtitles Positioning in 360-video.
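The grouping step — assigning similar face embeddings across frames to the same actor — can be sketched with a greedy threshold clustering. The cosine threshold and the first-embedding-as-representative scheme are simplifying assumptions, not the authors' exact clustering method:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cluster_faces(detections, threshold=0.8):
    """detections: list of (frame_index, embedding). Greedily assign each
    face to the first cluster whose representative embedding is similar
    enough; otherwise start a new cluster (a new actor track)."""
    clusters = []  # each: {"rep": embedding, "frames": [frame_index, ...]}
    for frame, emb in detections:
        for c in clusters:
            if cosine(emb, c["rep"]) >= threshold:
                c["frames"].append(frame)
                break
        else:
            clusters.append({"rep": emb, "frames": [frame]})
    return clusters

dets = [(0, [1.0, 0.0]), (1, [0.99, 0.05]), (1, [0.0, 1.0]), (2, [0.98, 0.1])]
tracks = cluster_faces(dets)
print([c["frames"] for c in tracks])  # [[0, 1, 2], [1]]
```

Each cluster's frame list directly yields the temporal intervals where that actor appears; pairing it with the detection boxes (omitted here) gives the spatial part.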
8. Cai, Jia-Jia, Jun Tang, Qing-Guo Chen, Yao Hu, Xiaobo Wang, and Sheng-Jun Huang. "Multi-View Active Learning for Video Recommendation." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/284.

Abstract:
On many video websites, the recommendation is implemented as a prediction problem of video-user pairs, where the videos are represented by text features extracted from the metadata. However, the metadata is manually annotated by users and is usually missing for online videos. To train an effective recommender system with lower annotation cost, we propose an active learning approach to fully exploit the visual view of videos, while querying as few annotations as possible from the text view. On one hand, a joint model is proposed to learn the mapping from visual view to text view by simultaneously aligning the two views and minimizing the classification loss. On the other hand, a novel strategy based on prediction inconsistency and watching frequency is proposed to actively select the most important videos for metadata querying. Experiments on both classification datasets and real video recommendation tasks validate that the proposed approach can significantly reduce the annotation cost.
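The selection strategy — ranking videos by cross-view prediction inconsistency weighted by watching frequency — can be sketched as a simple scoring rule. The product form of the score and the tuple layout below are illustrative assumptions, not the paper's exact criterion:

```python
def select_queries(videos, k=2):
    """videos: list of (vid, visual_pred, text_pred, watch_freq).
    Score each video by the disagreement between its visual-view and
    text-view predictions, weighted by how often it is watched, and
    return the ids of the top-k candidates for metadata annotation."""
    scored = [(abs(v - t) * f, vid) for vid, v, t, f in videos]
    scored.sort(reverse=True)
    return [vid for _, vid in scored[:k]]

videos = [("a", 0.9, 0.8, 100), ("b", 0.2, 0.9, 50), ("c", 0.5, 0.5, 1000)]
print(select_queries(videos))  # ['b', 'a']
```

Note how the popular but consistent video "c" is skipped: querying its metadata would add little, since both views already agree.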
9. Rogers, Steven K. "Confessions of a video warrior." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.thn3.

Abstract:
This talk will present results from recent experiences with technical videos at the Air Force Institute of Technology. From the perspective of a technical person trying to accomplish information transfer several failures and successes are reviewed. The video examples shown will include technical video proceedings, technical tutorials, laboratory commercials, and research summaries for a nontechnical audience.
10. Moreira, Daniel, Siome Goldenstein, and Anderson Rocha. "Sensitive-Video Analysis." In XXX Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação - SBC, 2017. http://dx.doi.org/10.5753/ctd.2017.3466.

Abstract:
Sensitive videos that may be inappropriate for some audiences (e.g., pornography and violence, for underage viewers) are constantly being shared over the Internet. Employing humans to filter them is daunting. The huge amount of data and the tediousness of the task call for computer-aided sensitive-video analysis, which we tackle in two ways. In the first (sensitive-video classification), we explore efficient methods to decide whether or not a video contains sensitive material. In the second (sensitive-content localization), we explore ways to find the moments a video starts and ceases to display sensitive content. Hypotheses are stated and validated, leading to contributions (papers, a dataset, and patents) in the fields of Digital Forensics and Computer Vision.
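Given per-frame sensitivity scores from a classifier, the localization task — finding where sensitive content starts and stops — reduces to extracting contiguous above-threshold runs. The sketch below assumes such scores exist and is a generic illustration, not the authors' pipeline:

```python
def localize(scores, threshold=0.5):
    """scores: per-frame sensitivity scores in [0, 1]. Return a list of
    (start, end) frame-index pairs (inclusive) covering each contiguous
    run of frames at or above the threshold."""
    segments, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i                      # a sensitive segment begins
        elif s < threshold and start is not None:
            segments.append((start, i - 1))  # the segment just ended
            start = None
    if start is not None:                  # segment runs to the last frame
        segments.append((start, len(scores) - 1))
    return segments

print(localize([0.1, 0.7, 0.9, 0.2, 0.6, 0.8]))  # [(1, 2), (4, 5)]
```

Classification, by contrast, would collapse the same scores into a single video-level decision (e.g., any segment found means the video is flagged).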

Reports on the topic "Video"

1. Zakhor, Avideh. Video Compression Algorithms for Transmission and Video. Fort Belvoir, VA: Defense Technical Information Center, May 1997. http://dx.doi.org/10.21236/ada327255.

2. Patel, Deep, Kenneth Graf, and David Fuller. Hip Surgical Preparation Educational Video. Rowan Digital Works, January 2021. http://dx.doi.org/10.31986/issn.2689-0690_rdw.oer.1022.

Abstract:
This series of open educational videos provides an in-depth overview of various surgical preparation procedures. These instructional videos could be of interest to medical and health science trainees in fields such as nursing or medicine. All patients featured in this video series have signed consent and release forms authorizing the release of these educational videos.
3. Patel, Deep, Catherine Fedorka, and David Fuller. Shoulder Surgical Preparation Educational Video. Rowan Digital Works, January 2021. http://dx.doi.org/10.31986/issn.2689-0690_rdw.oer.1023.

Abstract:
This series of open educational videos provides an in-depth overview of various surgical preparation procedures. These instructional videos could be of interest to medical and health science trainees in fields such as nursing or medicine. All patients featured in this video series have signed consent and release forms authorizing the release of these educational videos.
4. Patel, Deep, Julio Rodriguez, Vishal Khatri, and David Fuller. Spine Surgical Preparation Educational Video. Rowan Digital Works, January 2021. http://dx.doi.org/10.31986/issn.2689-0690_rdw.oer.1021.

Abstract:
This series of open educational videos provides an in-depth overview of various surgical preparation procedures. These instructional videos could be of interest to medical and health science trainees in fields such as nursing or medicine. All patients featured in this video series have signed consent and release forms authorizing the release of these educational videos.
5. Mascarenas, David Dennis Lee. Video Magic. Office of Scientific and Technical Information (OSTI), August 2019. http://dx.doi.org/10.2172/1557196.

6. Patel, Deep, Eric Freeland, and David Fuller. Foot and Ankle Surgical Preparation Educational Video. Rowan Digital Works, January 2021. http://dx.doi.org/10.31986/issn.2689-0690_rdw.oer.1020.

Abstract:
This series of open educational videos provides an in-depth overview of various surgical preparation procedures. These instructional videos could be of interest to medical and health science trainees in fields such as nursing or medicine. All patients featured in this video series have signed consent and release forms authorizing the release of these educational videos.
7. Patel, Deep, Alisina Shahi, and David Fuller. Hand and Wrist Surgical Preparation Educational Video. Rowan Digital Works, January 2021. http://dx.doi.org/10.31986/issn.2689-0690_rdw.oer.1019.

Abstract:
This series of open educational videos provides an in-depth overview of various surgical preparation procedures. These instructional videos could be of interest to medical and health science trainees in fields such as nursing or medicine. All patients featured in this video series have signed consent and release forms authorizing the release of these educational videos.
8. Blask, Steven. Airborne Video Surveillance. Fort Belvoir, VA: Defense Technical Information Center, February 2002. http://dx.doi.org/10.21236/ada402884.

9. Fletcher, Jonathan, David Doria, and David Druno. Android Video Streaming. Fort Belvoir, VA: Defense Technical Information Center, May 2014. http://dx.doi.org/10.21236/ada601489.

10. Kobla, Vikrant, David Doermann, and Azriel Rosenfeld. Compressed Video Segmentation. Fort Belvoir, VA: Defense Technical Information Center, September 1996. http://dx.doi.org/10.21236/ada458852.
