Ready-made bibliography on the topic "Video"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Video".
Next to each work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online whenever the relevant details are available in the work's metadata.
Journal articles on the topic "Video"
Yulianto, Agus, Sisworo Sisworo, and Erry Hidayanto. "Pembelajaran Matematika Berbantuan Video Pembelajaran untuk Meningkatkan Motivasi dan Hasil Belajar Peserta Didik". Mosharafa: Jurnal Pendidikan Matematika 11, no. 3 (September 30, 2022): 403–14. http://dx.doi.org/10.31980/mosharafa.v11i3.1396.
S., Sankirti, and P. M. Kamade. "Video OCR for Video Indexing". International Journal of Engineering and Technology 3, no. 3 (2011): 287–89. http://dx.doi.org/10.7763/ijet.2011.v3.239.
Tafesse, Wondwesen. "YouTube marketing: how marketers' video optimization practices influence video views". Internet Research 30, no. 6 (July 3, 2020): 1689–707. http://dx.doi.org/10.1108/intr-10-2019-0406.
Song, Yaguang, Junyu Gao, Xiaoshan Yang, and Changsheng Xu. "Learning Hierarchical Video Graph Networks for One-Stop Video Delivery". ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 1 (January 31, 2022): 1–23. http://dx.doi.org/10.1145/3466886.
Lin, Meihan. "Impacts of Short Video to Long Video and the Corresponding Countermeasures: Taking Tencent Video as an Example". Highlights in Science, Engineering and Technology 92 (April 10, 2024): 194–98. http://dx.doi.org/10.54097/rnxg6e63.
Handiani, Riana Ezra Savitry Is, and Surya Bintarti. "Pengaruh Conversation Dan Co-Creation Terhadap Customer Loyalty Dengan Mediasi Experience Quality Dan Moderasi Currency Pada Pengguna Layanan Vod Vidio Di Kabupaten Bekasi". Journal of Economic, Bussines and Accounting (COSTING) 7, no. 4 (June 24, 2024): 9159–70. http://dx.doi.org/10.31539/costing.v7i4.10365.
Rachmaniar, Rachmaniar, and Renata Anisa. "Video Inovasi Bisnis Kuliner di Youtube (Studi Etnografi Virtual tentang Keberadaan Video-video Inovasi Bisnis Kuliner di Youtube)". Proceeding of Community Development 1 (April 4, 2018): 89. http://dx.doi.org/10.30874/comdev.2017.14.
Ji, Wanting, and Ruili Wang. "A Multi-instance Multi-label Dual Learning Approach for Video Captioning". ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 2s (June 10, 2021): 1–18. http://dx.doi.org/10.1145/3446792.
Nuratika, Sikin, Safra Apriani Zahraa, and M. I. Gunawan. "THE MAKING OF PROFILE VIDEO ABOUT TOURISM IN SIAK REGENCY". INOVISH JOURNAL 4, no. 1 (June 29, 2019): 102. http://dx.doi.org/10.35314/inovish.v4i1.958.
Pełny tekst źródłaRozprawy doktorskie na temat "Video"
Sedlařík, Vladimír. "Informační strategie firmy". Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2012. http://www.nusl.cz/ntk/nusl-223526.
Lindskog, Eric, and Jesper Wrang. "Design of video players for branched videos". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148592.
Salam, Sazilah. "VidIO: a model for personalized video information management". Thesis, University of Southampton, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242411.
Aklouf, Mourad. "Video for events: Compression and transport of the next generation video codec". Electronic Thesis or Diss., Université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG029.
The acquisition and delivery of video content with minimal latency has become essential in several business areas such as sports broadcasting, video conferencing, telepresence, remote vehicle operation, and remote system control. The live-streaming industry grew in 2020 and will expand further in the coming years with the emergence of new high-efficiency video codecs based on the Versatile Video Coding (VVC) standard and the fifth generation of mobile networks (5G). HTTP Adaptive Streaming (HAS) methods such as MPEG-DASH, which use algorithms to adapt the transmission rate of compressed video, have proven very effective at improving the quality of experience (QoE) in a video-on-demand (VOD) context. Nevertheless, minimizing the delay between image acquisition and display at the receiver is essential in applications where latency is critical. Most rate adaptation algorithms are designed to optimize video transmission from a server in the core network to mobile clients. In applications requiring low-latency streaming, such as remote control of drones or broadcasting of sports events, the role of the server is played by a mobile terminal, which acquires and compresses the video and transmits the compressed stream over a radio access channel to one or more clients. Client-driven rate adaptation approaches are therefore unsuitable in this context because of the variability of the channel characteristics. Moreover, HAS methods, whose decisions are made with a periodicity on the order of a second, are not sufficiently reactive when the server is moving, which may generate significant delays. It is therefore important to use a very fine adaptation granularity to reduce the end-to-end delay. The reduced size of the transmission and reception buffers (needed to minimize latency) makes rate adaptation harder in this use case: when the bandwidth varies with a time constant smaller than the regulation period, bad transmission-rate decisions can induce a significant latency overhead.
The aim of this thesis is to provide some answers to the problem of low-latency delivery of video acquired, compressed, and transmitted by mobile terminals. We first present a frame-by-frame rate adaptation algorithm for low-latency broadcasting. A Model Predictive Control (MPC) approach is proposed to determine the coding rate of each frame to be transmitted, using information about the transmitter's buffer level and the characteristics of the transmission channel. Since frames are coded live, a model relating the quantization parameter (QP) to the output rate of the video encoder is required; we therefore propose a new model linking the rate to the QP of the current frame and to the distortion of the previous frame. This model gives much better results for frame-by-frame coding-rate decisions than the reference models in the literature. In addition to these techniques, we propose tools to reduce the complexity of video encoders such as VVC. The current version of the VVC encoder (VTM10) has an execution time nine times that of the HEVC encoder, which makes it unsuitable for real-time encoding and streaming on currently available platforms. In this context, we present a systematic branch-and-prune method to identify a set of coding tools that can be disabled while satisfying a constraint on coding efficiency. This work contributes to the realization of a real-time VVC encoder.
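To make the frame-by-frame decision loop concrete, here is a minimal sketch of a one-step model-predictive QP choice under the constraints this abstract describes. The rate model, its parameters, and the buffer policy below are illustrative assumptions for the sketch, not the model proposed in the thesis.

```python
# Illustrative sketch only: a one-step MPC-style choice of the quantization
# parameter (QP) for the next frame, given the sender buffer level and an
# estimate of the channel rate. The rate model is a hypothetical stand-in
# for a model linking rate to the current QP and the previous frame's
# distortion.

def predict_rate_bits(qp: int, prev_distortion: float,
                      alpha: float = 2.0e5, beta: float = 1.2) -> float:
    """Toy rate model: predicted size (bits) of the next coded frame."""
    return alpha * (1.0 + prev_distortion) / (qp ** beta)

def choose_qp(buffer_bits: float, channel_bps: float, fps: float,
              prev_distortion: float, target_buffer_bits: float) -> int:
    """Pick the lowest QP (best quality) whose predicted post-send buffer
    level stays at or below the latency-driven target."""
    drained = channel_bps / fps       # bits the channel drains per frame slot
    for qp in range(10, 52):          # low QP = high quality, tried first
        predicted = (buffer_bits
                     + predict_rate_bits(qp, prev_distortion) - drained)
        if predicted <= target_buffer_bits:
            return qp
    return 51                         # worst case: coarsest quality

# Example: 2 Mb/s channel, 30 fps, nearly empty sender buffer.
print(choose_qp(buffer_bits=5_000, channel_bps=2_000_000, fps=30.0,
                prev_distortion=4.0, target_buffer_bits=60_000))
```

Because the decision is revisited for every frame rather than every second, a misprediction of the channel rate only penalizes a single frame slot, which is the reactivity argument the abstract makes.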
Le, Thuc Trinh. "Video inpainting and semi-supervised object removal". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT026/document.
Nowadays, the rapid growth of video creates a massive demand for video-based editing applications. In this dissertation, we solve several problems relating to video post-processing and focus on object removal in video. We divide this task into two problems: (1) a video object segmentation step to select which objects to remove, and (2) a video inpainting step to fill in the damaged regions. For the video segmentation problem, we design a system suitable for object removal applications with different requirements in terms of accuracy and efficiency. Our approach relies on the combination of Convolutional Neural Networks (CNNs) for segmentation and a classical mask tracking method. In particular, we adopt segmentation networks designed for still images and apply them to video by performing frame-by-frame segmentation. By exploiting both offline and online training with first-frame annotation only, the networks produce highly accurate video object segmentation. Besides, we propose a mask tracking module to ensure temporal continuity and a mask linking module to ensure identity coherence across frames. Moreover, we introduce a simple way to learn a dilation layer in the mask, which helps us create masks suitable for video object removal.
For the video inpainting problem, we divide our work into two categories based on the type of background. We present a simple motion-guided pixel propagation method to deal with static-background cases, showing that object removal with a static background can be solved efficiently using a simple motion-based technique. To deal with dynamic backgrounds, we introduce a video inpainting method based on the optimization of a global patch-based energy function. To increase the speed of the algorithm, we propose a parallel extension of the 3D PatchMatch algorithm; to improve accuracy, we systematically incorporate optical flow in the overall process. We end up with a video inpainting method that reconstructs moving objects and reproduces dynamic textures while running in a reasonable time. Finally, we combine the video object segmentation and video inpainting methods into a unified system that removes undesired objects in videos. To the best of our knowledge, this is the first system of this kind. In our system, the user only needs to approximately delimit, in the first frame, the objects to be edited; this annotation process is facilitated by superpixels. The annotations are then refined and propagated through the video by the video object segmentation method, and one or several objects can be removed automatically using our video inpainting methods. The result is a flexible computational video editing tool with numerous potential applications, ranging from crowd suppression to the correction of unphysical scenes.
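As a rough illustration of the motion-guided pixel propagation idea for static backgrounds, the sketch below fills a hole in one frame from its successor using a precomputed forward optical-flow field. The function and variable names are assumptions; the actual method propagates across many frames in both directions and falls back to patch-based inpainting where propagation fails.

```python
# Illustrative single-step version of motion-guided pixel propagation.
# Assumes `flow` holds the forward optical flow from the current frame to
# the next one (shape HxWx2, x-displacement first), computed beforehand.
import numpy as np

def propagate_from_next(frame_cur: np.ndarray, frame_next: np.ndarray,
                        flow: np.ndarray, hole_mask: np.ndarray) -> np.ndarray:
    """Fill pixels where hole_mask is True by sampling the next frame at the
    flow-displaced position (nearest-neighbour lookup, no occlusion test)."""
    out = frame_cur.copy()
    ys, xs = np.nonzero(hole_mask)
    src_x = np.clip(np.round(xs + flow[ys, xs, 0]).astype(int),
                    0, frame_cur.shape[1] - 1)
    src_y = np.clip(np.round(ys + flow[ys, xs, 1]).astype(int),
                    0, frame_cur.shape[0] - 1)
    out[ys, xs] = frame_next[src_y, src_x]
    return out
```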
Lei, Zhijun. "Video transcoding techniques for wireless video communications". Thesis, University of Ottawa (Canada), 2004. http://hdl.handle.net/10393/29134.
Milovanovic, Marta. "Pruning and compression of multi-view content for immersive video coding". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT023.
This thesis addresses the problem of efficient compression of immersive video content represented in the Multiview Video plus Depth (MVD) format. The Moving Picture Experts Group (MPEG) standard for the transmission of MVD data, called MPEG Immersive Video (MIV), utilizes 2D video codecs to compress the source texture and depth information. Compared to traditional video coding, immersive video coding is more complex and constrained not only by the trade-off between bitrate and quality but also by the pixel rate. Because of that, MIV uses pruning to reduce the pixel rate and inter-view correlations, creating a mosaic of image pieces (patches). Decoder-side depth estimation (DSDE) has emerged as an alternative approach for improving the immersive video system by avoiding the transmission of depth maps and moving the depth estimation process to the decoder side. DSDE has previously been studied for the case of numerous fully transmitted views (without pruning). In this thesis, we demonstrate possible advances in immersive video coding, with an emphasis on pruning the input content. We go beyond DSDE and examine the distinct effect of patch-level depth restoration at the decoder side. We propose two approaches for incorporating DSDE on content pruned with MIV: the first excludes a subset of depth maps from the transmission, and the second uses the quality of depth patches estimated at the encoder side to distinguish between those that need to be transmitted and those that can be recovered at the decoder side. Our experiments show a 4.63 BD-rate gain for Y-PSNR on average. Furthermore, we explore the use of neural image-based rendering (IBR) techniques to enhance the quality of novel view synthesis and show that neural synthesis itself provides the information needed to prune the content. Our results show a good trade-off between pixel rate and synthesis quality, achieving view synthesis improvements of 3.6 dB on average.
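A toy version of the second, patch-level approach could look like the sketch below: the encoder runs the decoder-side depth estimator itself and keeps in the bitstream only the depth patches whose re-estimated quality falls under a threshold. The PSNR criterion, the threshold value, and the patch layout are assumptions made for illustration, not the exact mechanism of the thesis.

```python
# Illustrative encoder-side selection of depth patches for transmission.
# Each patch dict is assumed to carry the encoder's reference depth and the
# depth recovered by running decoder-side estimation (DSDE) at the encoder.
import numpy as np

def split_depth_patches(patches, threshold_db: float = 35.0):
    """Return (transmit, recover): patches whose DSDE-estimated depth is too
    poor are transmitted; the rest are left for the decoder to re-estimate."""
    transmit, recover = [], []
    for p in patches:
        diff = (p["depth_ref"].astype(np.float64)
                - p["depth_dsde"].astype(np.float64))
        mse = float(np.mean(diff ** 2))
        psnr = float("inf") if mse == 0.0 else 10.0 * np.log10(255.0 ** 2 / mse)
        (recover if psnr >= threshold_db else transmit).append(p)
    return transmit, recover
```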
Arrufat Batalla, Adrià. "Multiple transforms for video coding". Thesis, Rennes, INSA, 2015. http://www.theses.fr/2015ISAR0025/document.
State-of-the-art video codecs use transforms to ensure a compact signal representation. The transform stage is where compression takes place; however, little variety is observed in the transforms used by standardised video coding schemes: often a single transform is considered, usually a Discrete Cosine Transform (DCT). Recently, other transforms have started being considered in addition to the DCT. For instance, in the latest video coding standard, High Efficiency Video Coding (HEVC), 4x4 blocks can use the Discrete Sine Transform (DST), and it is also possible not to transform them at all. This reveals an increasing interest in considering a plurality of transforms to achieve higher compression rates. This thesis focuses on extending HEVC through the use of multiple transforms. After a general introduction to video compression and transform coding, two transform designs are studied in detail: the Karhunen-Loève Transform (KLT) and a rate-distortion optimised transform. The two methods are compared by replacing the transforms in HEVC, an experiment that validates the appropriateness of the designs. A coding scheme that incorporates and boosts the use of multiple transforms is then introduced: several transforms are made available to the encoder, which chooses the one providing the best rate-distortion trade-off. Consequently, a design method for building systems using multiple transforms is also described. With this coding scheme, significant bit-rate savings are achieved over HEVC, especially when using many complex transforms. However, these improvements come at the expense of increased complexity in terms of coding, decoding, and storage requirements. As a result, simplifications are considered while limiting the impact on bit-rate savings. A first approach uses incomplete transforms, which rely on a single base vector and are conceived to work as companions of the HEVC transforms. This technique provides significant complexity reductions over the previous system, although the bit-rate savings are modest. A systematic method, which specifically determines the best trade-offs between the number of transforms and bit-rate savings, is then designed. This method uses two different types of transforms, based on separable orthogonal transforms and Discrete Trigonometric Transforms (DTTs) in particular. Several designs are presented, allowing for different trade-offs between complexity and bit-rate savings. These systems reveal the interest of using multiple transforms for video coding.
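The multiple-transform competition can be pictured with the short sketch below, which lets two separable trigonometric transforms compete per block on a Lagrangian D + λR cost. The uniform quantizer, the nonzero-coefficient rate proxy, and the λ value are crude stand-ins for a real rate-distortion optimisation loop, chosen only to keep the sketch self-contained.

```python
# Illustrative per-block transform competition between two DTT candidates
# (DCT-II and DST-II), in the spirit of the scheme described above.
import numpy as np
from scipy.fft import dctn, idctn, dstn, idstn

CANDIDATES = {
    "dct2": (lambda b: dctn(b, type=2, norm="ortho"),
             lambda c: idctn(c, type=2, norm="ortho")),
    "dst2": (lambda b: dstn(b, type=2, norm="ortho"),
             lambda c: idstn(c, type=2, norm="ortho")),
}

def best_transform(block: np.ndarray, q_step: float = 16.0,
                   lam: float = 200.0, signal_bits: int = 1):
    """Code the block with every candidate and keep the lowest D + lam*R."""
    best = None
    for name, (fwd, inv) in CANDIDATES.items():
        q = np.round(fwd(block.astype(np.float64)) / q_step)  # quantize
        rec = inv(q * q_step)                                 # reconstruct
        dist = float(np.sum((block - rec) ** 2))              # SSE distortion
        rate = int(np.count_nonzero(q)) + signal_bits         # crude rate proxy
        cost = dist + lam * rate
        if best is None or cost < best[0]:
            best = (cost, name, rec)
    return best[1], best[2]

# Example: an 8x8 residual block with a horizontal ramp.
blk = np.tile(np.arange(8, dtype=np.float64), (8, 1))
name, rec = best_transform(blk)
print(name)
```

The signalling cost per block (here a token one bit) is exactly what limits how many transforms can usefully compete, which is the complexity trade-off the abstract discusses.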
Dufour, Sophie-Isabelle. "Imaginem video: L'image vidéo dans l'"histoire longue" des images". Paris 3, 2004. http://www.theses.fr/2004PA030054.
What do I see when I look at a video? Actually, I do not see a video but an image. My purpose is to study the status of the video image from the point of view of the so-called "long history" of images, dealing therefore with very ancient problems that occurred long before the technical invention of the medium. A distinction must be made between images and the very notion of image: one could say that the difficult notion of image can be specified only through the various media in which it embodies itself. In this study, video questions image itself. Art works will keep their privilege, because through them the status of the video image is best revealed; but my intention is to show that the powers of the image go far beyond aesthetics. The first problem will be the one raised by the myth of Narcissus, as a lover of image(s), because it is seminal. It leads, for instance, to the notion of fluidity, which will prove essential in my study of the "ghostliness" of the video image (as well as in my study of space in video). Last but not least, the relations between time and the video image should be specified with Bergson's help, and I shall try to prove how useful this philosopher's notion of time can be when one hopes to understand the singularity of the video image.
Books on the topic "Video"
European Commission. Directorate General Information, Communication, Culture, Audiovisual. Video-Katalog = Video catalogue = Catalogue vidéo. Luxembourg: Office for Official Publications of the European Communities = Office des publications officielles des Communautés européennes, 1997.
Goethe-Institut. Video Katalog: Videos for loan. London: Goethe-Institut, 1990.
European Commission. Directorate General X for Information, Communication, Culture, Audiovisual. Video-Katalog = Video catalogue = Catalogue video. Luxemburg: Amt für amtliche Veröffentlichungen der Europäischen Gemeinschaften, 1997.
Weir, Kathryn Elizabeth, Nicholas Chambers, and Queensland Art Gallery, eds. Video hits: Art & music video. South Brisbane: Queensland Art Gallery, 2004.
Educational Technology Publications, ed. Interactive video. Englewood Cliffs, N.J.: Educational Technology Publications, 1989.
Sokol, Erich. Video. Wien: Jugend & Volk, 1990.
Coulter, George. Video. Vero Beach, Fla.: Rourke Publications, 1996.
Biel, Jackie. Video. Tarrytown, N.Y.: Benchmark Books, 1996.
Lavery, Mike, and Mario Rinvolucri, eds. Video. Oxford: Oxford University Press, 1991.
Castro, Vicente González. Video. Vedado, Ciudad de La Habana: Editorial P. de la Torriente, 1987.
Znajdź pełny tekst źródłaCzęści książek na temat "Video"
Tassi, Laura, Valeria Mariani, Veronica Pelliccia, and Roberto Mai. "Video-Electroencephalography (Video-EEG)". In Clinical Electroencephalography, 305–17. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-04573-9_18.
Baumgardt, Michael. "Video". In Web Design kreativ!, 144–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-642-56961-6_10.
Baldwin, Dennis, Jamie Macdonald, Keith Peters, Jon Steer, David Tudury, Jerome Turner, Steve Webster, Alex White, and Todd Yard. "Video". In Flash MX Studio, 493–528. Berkeley, CA: Apress, 2002. http://dx.doi.org/10.1007/978-1-4302-5166-8_14.
Farkas, Bart, and Jeff Govier. "Video". In Use Your PC to Build an Incredible Home Theater System, 3–31. Berkeley, CA: Apress, 2003. http://dx.doi.org/10.1007/978-1-4302-5174-3_1.
Green, Tom, and Tiago Dias. "Video". In Foundation Flash CS5 for Designers, 527–99. Berkeley, CA: Apress, 2010. http://dx.doi.org/10.1007/978-1-4302-2995-7_10.
Green, Tom, and David Stiller. "Video". In Foundation Flash CS4 for Designers, 441–91. Berkeley, CA: Apress, 2009. http://dx.doi.org/10.1007/978-1-4302-1094-8_10.
Green, Tom, and Joseph Labrecque. "Video". In Beginning Adobe Animate CC, 477–518. Berkeley, CA: Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-2376-5_10.
Weik, Martin H. "video". In Computer Science and Communications Dictionary, 1889. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_20759.
Roach, J. W. "Video". In Human-Machine Interactive Systems, 185–97. Boston, MA: Springer US, 1991. http://dx.doi.org/10.1007/978-1-4684-5883-1_8.
Lemke, Inga. "Video". In Handbuch Populäre Kultur, 472–78. Stuttgart: J.B. Metzler, 2003. http://dx.doi.org/10.1007/978-3-476-05001-4_103.
Pełny tekst źródłaStreszczenia konferencji na temat "Video"
Han, Haochen, and Yu Sun. "A Video Note Taking System to Make Online Video Learning Easier". In 10th International Conference on Information Technology Convergence and Services (ITCSE 2021). AIRCC Publishing Corporation, 2021. http://dx.doi.org/10.5121/csit.2021.110917.
D., Minola Davids, and Seldev Christopher C. "Surveillance Video Summarization based on Target Object Detection". In The International Conference on scientific innovations in Science, Technology, and Management. International Journal of Advanced Trends in Engineering and Management, 2023. http://dx.doi.org/10.59544/jist4192/ngcesi23p94.
Yu, Yipeng, Xiao Chen, and Hui Zhan. "VideoMaster: A Multimodal Micro Game Video Recreator". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/844.
Liu, Ziling, Jinyu Yang, Mingqi Gao, and Feng Zheng. "Place Anything into Any Video". In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/1019.
Lu, Xinyuan, Shengyuan Huang, Li Niu, Wenyan Cong, and Liqing Zhang. "Deep Video Harmonization With Color Mapping Consistency". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/172.
Zvezdakova, Anastasia, Sergey Zvezdakov, Dmitriy Kulikov, and Dmitriy Vatolin. "Hacking VMAF with Video Color and Contrast Distortion". In 29th International Conference on Computer Graphics, Image Processing and Computer Vision, Visualization Systems and the Virtual Environment GraphiCon'2019. Bryansk State Technical University, 2019. http://dx.doi.org/10.30987/graphicon-2019-2-53-57.
Mendes, Paulo, and Sérgio Colcher. "Spatio-temporal Localization of Actors in Video/360-Video and its Applications". In Simpósio Brasileiro de Sistemas Multimídia e Web. Sociedade Brasileira de Computação - SBC, 2022. http://dx.doi.org/10.5753/webmedia_estendido.2022.224999.
Cai, Jia-Jia, Jun Tang, Qing-Guo Chen, Yao Hu, Xiaobo Wang, and Sheng-Jun Huang. "Multi-View Active Learning for Video Recommendation". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/284.
Rogers, Steven K. "Confessions of a video warrior". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.thn3.
Moreira, Daniel, Siome Goldenstein, and Anderson Rocha. "Sensitive-Video Analysis". In XXX Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação - SBC, 2017. http://dx.doi.org/10.5753/ctd.2017.3466.
Pełny tekst źródłaRaporty organizacyjne na temat "Video"
Zakhor, Avideh. Video Compression Algorithms for Transmission and Video. Fort Belvoir, VA: Defense Technical Information Center, May 1997. http://dx.doi.org/10.21236/ada327255.
Patel, Deep, Kenneth Graf, and David Fuller. Hip Surgical Preparation Educational Video. Rowan Digital Works, January 2021. http://dx.doi.org/10.31986/issn.2689-0690_rdw.oer.1022.
Patel, Deep, Catherine Fedorka, and David Fuller. Shoulder Surgical Preparation Educational Video. Rowan Digital Works, January 2021. http://dx.doi.org/10.31986/issn.2689-0690_rdw.oer.1023.
Patel, Deep, Julio Rodriguez, Vishal Khatri, and David Fuller. Spine Surgical Preparation Educational Video. Rowan Digital Works, January 2021. http://dx.doi.org/10.31986/issn.2689-0690_rdw.oer.1021.
Mascarenas, David Dennis Lee. Video Magic. Office of Scientific and Technical Information (OSTI), August 2019. http://dx.doi.org/10.2172/1557196.
Patel, Deep, Eric Freeland, and David Fuller. Foot and Ankle Surgical Preparation Educational Video. Rowan Digital Works, January 2021. http://dx.doi.org/10.31986/issn.2689-0690_rdw.oer.1020.
Patel, Deep, Alisina Shahi, and David Fuller. Hand and Wrist Surgical Preparation Educational Video. Rowan Digital Works, January 2021. http://dx.doi.org/10.31986/issn.2689-0690_rdw.oer.1019.
Blask, Steven. Airborne Video Surveillance. Fort Belvoir, VA: Defense Technical Information Center, February 2002. http://dx.doi.org/10.21236/ada402884.
Fletcher, Jonathan, David Doria, and David Druno. Android Video Streaming. Fort Belvoir, VA: Defense Technical Information Center, May 2014. http://dx.doi.org/10.21236/ada601489.
Kobla, Vikrant, David Doermann, and Azriel Rosenfeld. Compressed Video Segmentation. Fort Belvoir, VA: Defense Technical Information Center, September 1996. http://dx.doi.org/10.21236/ada458852.