Academic literature on the topic 'Video'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Video.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Video"
Yulianto, Agus, Sisworo Sisworo, and Erry Hidayanto. "Pembelajaran Matematika Berbantuan Video Pembelajaran untuk Meningkatkan Motivasi dan Hasil Belajar Peserta Didik." Mosharafa: Jurnal Pendidikan Matematika 11, no. 3 (September 30, 2022): 403–14. http://dx.doi.org/10.31980/mosharafa.v11i3.1396.
Yulianto, Agus, Sisworo Yulianto, and Erry Hidayanto. "Pembelajaran Matematika Berbantuan Video Pembelajaran untuk Meningkatkan Motivasi dan Hasil Belajar Peserta Didik." Mosharafa: Jurnal Pendidikan Matematika 11, no. 3 (September 30, 2022): 403–14. http://dx.doi.org/10.31980/mosharafa.v11i3.731.
S., Sankirti, and P. M. Kamade. "Video OCR for Video Indexing." International Journal of Engineering and Technology 3, no. 3 (2011): 287–89. http://dx.doi.org/10.7763/ijet.2011.v3.239.
Tafesse, Wondwesen. "YouTube marketing: how marketers' video optimization practices influence video views." Internet Research 30, no. 6 (July 3, 2020): 1689–707. http://dx.doi.org/10.1108/intr-10-2019-0406.
Song, Yaguang, Junyu Gao, Xiaoshan Yang, and Changsheng Xu. "Learning Hierarchical Video Graph Networks for One-Stop Video Delivery." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 1 (January 31, 2022): 1–23. http://dx.doi.org/10.1145/3466886.
Lin, Meihan. "Impacts of Short Video to Long Video and the Corresponding Countermeasures: Taking Tencent Video as an Example." Highlights in Science, Engineering and Technology 92 (April 10, 2024): 194–98. http://dx.doi.org/10.54097/rnxg6e63.
Handiani, Riana Ezra Savitry Is, and Surya Bintarti. "Pengaruh Conversation Dan Co-Creation Terhadap Customer Loyalty Dengan Mediasi Experience Quality Dan Moderasi Currency Pada Pengguna Layanan Vod Vidio Di Kabupaten Bekasi." Journal of Economic, Bussines and Accounting (COSTING) 7, no. 4 (June 24, 2024): 9159–70. http://dx.doi.org/10.31539/costing.v7i4.10365.
Rachmaniar, Rachmaniar, and Renata Anisa. "Video Inovasi Bisnis Kuliner di Youtube (Studi Etnografi Virtual tentang Keberadaan Video-video Inovasi Bisnis Kuliner di Youtube)." Proceeding of Community Development 1 (April 4, 2018): 89. http://dx.doi.org/10.30874/comdev.2017.14.
Ji, Wanting, and Ruili Wang. "A Multi-instance Multi-label Dual Learning Approach for Video Captioning." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 2s (June 10, 2021): 1–18. http://dx.doi.org/10.1145/3446792.
Nuratika, Sikin, Safra Apriani Zahraa, and M. I. Gunawan. "The Making of Profile Video about Tourism in Siak Regency." INOVISH JOURNAL 4, no. 1 (June 29, 2019): 102. http://dx.doi.org/10.35314/inovish.v4i1.958.
Full textDissertations / Theses on the topic "Video"
Sedlařík, Vladimír. "Informační strategie firmy." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2012. http://www.nusl.cz/ntk/nusl-223526.
Lindskog, Eric, and Jesper Wrang. "Design of video players for branched videos." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148592.
Salam, Sazilah. "VidIO: a model for personalized video information management." Thesis, University of Southampton, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242411.
Aklouf, Mourad. "Video for events: Compression and transport of the next generation video codec." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG029.
Full textThe acquisition and delivery of video content with minimal latency has become essential in several business areas such as sports broadcasting, video conferencing, telepresence, remote vehicle operation, or remote system control. The live streaming industry has grown in 2020 and it will expand further in the next few years with the emergence of new high-efficiency video codecs based on the Versatile Video Coding (VVC) standard and the fifth generation of mobile networks (5G).HTTP Adaptive Streaming (HAS) methods such as MPEG-DASH, using algorithms to adapt the transmission rate of compressed video, have proven to be very effective in improving the quality of experience (QoE) in a video-on-demand (VOD) context.Nevertheless, minimizing the delay between image acquisition and display at the receiver is essential in applications where latency is critical. Most rate adaptation algorithms are developed to optimize video transmission from a server situated in the core network to mobile clients. In applications requiring low-latency streaming, such as remote control of drones or broadcasting of sports events, the role of the server is played by a mobile terminal. The latter will acquire, compress, and transmit the video and transmit the compressed stream via a radio access channel to one or more clients. Therefore, client-driven rate adaptation approaches are unsuitable in this context because of the variability of the channel characteristics. In addition, HAS, for which the decision-making is done with a periodicity of the order of a second, are not sufficiently reactive when the server is moving, which may generate significant delays. It is therefore important to use a very fine adaptation granularity in order to reduce the end-to-end delay. The reduced size of the transmission and reception buffers (to minimize latency) makes it more difficult to adapt the throughput in our use case. 
When the bandwidth varies with a time constant smaller than the period with which the regulation is made, bad transmission rate decisions can induce a significant latency overhead.The aim of this thesis is to provide some answers to the problem of low-latency delivery of video acquired, compressed, and transmitted by mobile terminals. We first present a frame-by-frame rate adaptation algorithm for low latency broadcasting. A Model Predictive Control (MPC) approach is proposed to determine the coding rate of each frame to be transmitted. This approach uses information about the buffer level of the transmitter and about the characteristics of the transmission channel. Since the frames are coded live, a model relating the quantization parameter (QP) to the output rate of the video encoder is required. Hence, we have proposed a new model linking the rate to the QP of the current frame and to the distortion of the previous frame. This model provides much better results in the context of a frame-by-frame decision on the coding rate than the reference models in the literature.In addition to the above techniques, we have also proposed tools to reduce the complexity of video encoders such as VVC. The current version of the VVC encoder (VTM10) has an execution time nine times higher than that of the HEVC encoder. Therefore, the VVC encoder is not suitable for real-time encoding and streaming applications on currently available platforms. In this context, we present a systematic branch-and-prune method to identify a set of coding tools that can be disabled while satisfying a constraint on coding efficiency. This work contributes to the realization of a real-time VVC coder
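The frame-by-frame rate adaptation idea described in this abstract can be caricatured in a few lines. This is a hedged sketch, not the thesis implementation: the exponential rate model `R(QP) = a * 2**(-QP/6)` and every constant below are assumptions chosen only to show how such a controller raises the QP when the channel degrades so the transmission buffer (and hence latency) stays bounded.

```python
def predicted_rate(qp, a=8000.0):
    """Hypothetical bits-per-frame model: rate halves every 6 QP steps."""
    return a * 2 ** (-qp / 6)

def choose_qp(buffer_bits, channel_bits_per_frame, target_buffer,
              qp_min=10, qp_max=51):
    """Smallest QP (highest quality) keeping the predicted buffer on target."""
    for qp in range(qp_min, qp_max + 1):
        next_buffer = buffer_bits + predicted_rate(qp) - channel_bits_per_frame
        if next_buffer <= target_buffer:
            return qp
    return qp_max

# Simulate four frames over a channel that suddenly degrades:
# the controller raises the QP (shrinks the frames) to protect latency.
buffer_bits = 0.0
qps = []
for channel in (4000, 4000, 1000, 1000):
    qp = choose_qp(buffer_bits, channel, target_buffer=2000)
    buffer_bits = max(0.0, buffer_bits + predicted_rate(qp) - channel)
    qps.append(qp)
```

With these made-up numbers the QP stays at its minimum while the channel is good and jumps as soon as the buffer threatens to exceed the target; a real MPC controller would additionally optimize the decision over a prediction horizon.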
Le, Thuc Trinh. "Video inpainting and semi-supervised object removal." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT026/document.
Nowadays, the rapid increase of video creates a massive demand for video-based editing applications. In this dissertation, we solve several problems relating to video post-processing, focusing on the object removal application in video. We divide this task into two problems: (1) a video object segmentation step to select which objects to remove, and (2) a video inpainting step to fill in the damaged regions. For the video segmentation problem, we design a system suitable for object removal applications with different requirements in terms of accuracy and efficiency. Our approach relies on the combination of Convolutional Neural Networks (CNNs) for segmentation and a classical mask tracking method. In particular, we adopt segmentation networks designed for images and apply them to video by performing frame-by-frame segmentation. By exploiting both offline and online training with first-frame annotation only, the networks are able to produce highly accurate video object segmentation. In addition, we propose a mask tracking module to ensure temporal continuity and a mask linking module to ensure identity coherence across frames. Moreover, we introduce a simple way to learn the dilation layer in the mask, which helps us create suitable masks for the video object removal application. For the video inpainting problem, we divide our work into two categories based on the type of background. We present a simple motion-guided pixel propagation method to deal with static background cases, showing that the problem of object removal with a static background can be solved efficiently using a simple motion-based technique. To deal with dynamic backgrounds, we introduce a video inpainting method based on optimizing a global patch-based energy function. To increase the speed of the algorithm, we propose a parallel extension of the 3D PatchMatch algorithm.
To improve accuracy, we systematically incorporate the optical flow in the overall process. We end up with a video inpainting method that is able to reconstruct moving objects as well as reproduce dynamic textures while running in a reasonable time. Finally, we combine the video object segmentation and video inpainting methods into a unified system that removes undesired objects in videos. To the best of our knowledge, this is the first system of its kind. In our system, the user only needs to approximately delimit, in the first frame, the objects to be edited; this annotation process is facilitated by superpixels. These annotations are then refined and propagated through the video by the video object segmentation method. One or several objects can then be removed automatically using our video inpainting methods. The result is a flexible computational video editing tool with numerous potential applications, ranging from crowd suppression to the correction of unphysical scenes.
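As a toy illustration of the motion-guided pixel propagation idea for static backgrounds, masked pixels can be filled by fetching the pixel that a backward optical flow points to in the previous frame. This is a sketch under assumptions, not the dissertation's code: the flow is assumed given, out-of-frame look-ups are simply clamped, and with zero flow (static background) the fill reduces to a temporal copy.

```python
import numpy as np

def propagate(prev_frame, cur_frame, mask, flow):
    """Fill pixels where mask is True from prev_frame along the flow.

    flow[y, x] = (dy, dx): offset into prev_frame for the pixel at (y, x).
    """
    out = cur_frame.copy()
    h, w = cur_frame.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                dy, dx = flow[y, x]
                sy = min(max(y + dy, 0), h - 1)  # clamp to the frame
                sx = min(max(x + dx, 0), w - 1)
                out[y, x] = prev_frame[sy, sx]
    return out

# Tiny example: a 3x3 frame with one occluded pixel and zero flow.
prev = np.arange(9, dtype=float).reshape(3, 3)
cur = prev.copy()
cur[1, 1] = -1.0                       # damaged pixel left by object removal
mask = cur < 0
flow = np.zeros((3, 3, 2), dtype=int)  # static background: zero motion
filled = propagate(prev, cur, mask, flow)
```

A real system would chain such propagations across many frames along estimated flow and fall back to spatial inpainting where no temporal source exists.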
Lei, Zhijun. "Video transcoding techniques for wireless video communications." Thesis, University of Ottawa (Canada), 2004. http://hdl.handle.net/10393/29134.
Milovanovic, Marta. "Pruning and compression of multi-view content for immersive video coding." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT023.
This thesis addresses the problem of efficient compression of immersive video content represented in the Multiview Video plus Depth (MVD) format. The Moving Picture Experts Group (MPEG) standard for the transmission of MVD data, MPEG Immersive Video (MIV), utilizes 2D video codecs to compress the source texture and depth information. Compared to traditional video coding, immersive video coding is more complex and is constrained not only by the trade-off between bitrate and quality, but also by the pixel rate. Because of that, MIV uses pruning to reduce the pixel rate and inter-view correlations, and creates a mosaic of image pieces (patches). Decoder-side depth estimation (DSDE) has emerged as an alternative approach that improves the immersive video system by avoiding the transmission of depth maps and moving the depth estimation process to the decoder side. DSDE has been studied for the case of numerous fully transmitted views (without pruning). In this thesis, we demonstrate possible advances in immersive video coding, with emphasis on pruning the input content. We go beyond DSDE and examine the distinct effect of patch-level depth restoration at the decoder side. We propose two approaches to incorporate DSDE on content pruned with MIV: the first excludes a subset of depth maps from the transmission, and the second uses the quality of depth patches estimated at the encoder side to distinguish between those that need to be transmitted and those that can be recovered at the decoder side. Our experiments show a 4.63 BD-rate gain for Y-PSNR on average. Furthermore, we explore the use of neural image-based rendering (IBR) techniques to enhance the quality of novel view synthesis and show that neural synthesis itself provides the information needed to prune the content. Our results show a good trade-off between pixel rate and synthesis quality, achieving view synthesis improvements of 3.6 dB on average.
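The second approach described above, deciding at the encoder which depth patches must be transmitted, can be caricatured as a thresholding decision. Everything here is an assumption for illustration (the patch representation, the MSE criterion, and the threshold value are invented): patches whose depth the decoder could re-estimate accurately are dropped, and only poorly estimated ones are sent.

```python
def select_patches_to_transmit(patches, max_mse=4.0):
    """Return ids of patches whose decoder-side depth estimate is too poor.

    patches: {patch_id: (reference_depth, encoder_side_estimate)}, each a
    list of per-pixel depth values of equal length.
    """
    transmit = []
    for pid, (ref_depth, est_depth) in patches.items():
        mse = sum((r - e) ** 2 for r, e in zip(ref_depth, est_depth)) / len(ref_depth)
        if mse > max_mse:          # estimation failed: must transmit this patch
            transmit.append(pid)
    return sorted(transmit)

patches = {
    0: ([10, 10, 12], [10, 11, 12]),  # well estimated -> recover at decoder
    1: ([40, 42, 44], [20, 21, 22]),  # poorly estimated -> transmit
}
to_send = select_patches_to_transmit(patches)
```

Dropping patch 0 saves pixel rate at no quality cost, which is the trade-off the thesis quantifies with BD-rate measurements.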
Arrufat, Batalla Adrià. "Multiple transforms for video coding." Thesis, Rennes, INSA, 2015. http://www.theses.fr/2015ISAR0025/document.
State-of-the-art video codecs use transforms to ensure a compact signal representation. The transform stage is where compression takes place; however, little variety is observed in the types of transforms used in standardised video coding schemes: often a single transform is considered, usually a Discrete Cosine Transform (DCT). Recently, other transforms have started being considered in addition to the DCT. For instance, in the latest video coding standard, High Efficiency Video Coding (HEVC), 4x4 blocks can use the Discrete Sine Transform (DST), and it is also possible not to transform them at all. This reveals an increasing interest in considering a plurality of transforms to achieve higher compression rates. This thesis focuses on extending HEVC through the use of multiple transforms. After a general introduction to video compression and transform coding, two transform designs are studied in detail: the Karhunen-Loève Transform (KLT) and a Rate-Distortion Optimised Transform. These two methods are compared against each other by replacing the transforms in HEVC, an experiment that validates the appropriateness of the designs. A coding scheme that incorporates and boosts the use of multiple transforms is then introduced: several transforms are made available to the encoder, which chooses the one that provides the best rate-distortion trade-off. Consequently, a design method for building systems using multiple transforms is also described. With this coding scheme, significant bit-rate savings are achieved over HEVC, especially when using many complex transforms. However, these improvements come at the expense of increased complexity in terms of coding, decoding, and storage requirements. As a result, simplifications are considered while limiting the impact on bit-rate savings. A first approach is introduced, in which incomplete transforms are used.
This kind of transform uses a single basis vector and is conceived to work as a companion to the HEVC transforms. This technique provides significant complexity reductions over the previous system, although the bit-rate savings are modest. A systematic method, which specifically determines the best trade-offs between the number of transforms and bit-rate savings, is then designed. This method uses two different types of transforms, based on separable orthogonal transforms and Discrete Trigonometric Transforms (DTTs) in particular. Several designs are presented, allowing for different trade-offs between complexity and bit-rate savings. These systems reveal the interest of using multiple transforms for video coding.
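The encoder-side choice among multiple transforms can be sketched as a toy rate-distortion selection between an orthonormal DCT-II and transform skip (the two options the abstract mentions for HEVC). The cost model below is an assumption for illustration only: rate is approximated by the count of non-zero quantized coefficients, and the Lagrange multiplier is arbitrary.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (its inverse is the transpose)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    t = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    t[0] /= np.sqrt(2.0)
    return t

def rd_choose(block, transforms, qstep=2.0, lam=10.0):
    """Pick the transform minimizing J = distortion + lam * rate."""
    best_name, best_cost = None, float("inf")
    for name, T in transforms.items():
        coeffs = T @ block
        q = np.round(coeffs / qstep)
        rec = T.T @ (q * qstep)          # orthonormal: inverse = transpose
        dist = float(np.sum((block - rec) ** 2))
        rate = int(np.count_nonzero(q))  # crude rate proxy
        cost = dist + lam * rate
        if cost < best_cost:
            best_name, best_cost = name, cost
    return best_name

n = 4
transforms = {"dct": dct_matrix(n), "skip": np.eye(n)}
flat = np.full(n, 10.0)                 # smooth block: DCT compacts energy
spike = np.array([0.0, 0.0, 9.0, 0.0])  # isolated spike: skip is cheaper
choice_flat = rd_choose(flat, transforms)
choice_spike = rd_choose(spike, transforms)
```

The flat block is represented by a single DCT coefficient, while the spike stays sparse only in the pixel domain; a multi-transform codec generalizes exactly this competition to a larger dictionary of transforms, at the storage and search cost the abstract discusses.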
Dufour, Sophie-Isabelle. "Imaginem video : L'image vidéo dans l'"histoire longue" des images." Paris 3, 2004. http://www.theses.fr/2004PA030054.
What do I see when I look at a video? In fact, I do not see a video but an image. My purpose is to study the status of the video image from the point of view of the so-called "long history" of images, dealing therefore with very ancient problems that occurred long before the technical invention of the medium. A distinction must be made between images and the very notion of image: one could say that the difficult notion of image can be specified only through the various media in which it embodies itself. In this study, video questions the image itself. Art works keep their privilege, because through them the status of the video image is best revealed; but my intention is to show that the powers of the image go far beyond aesthetics. The first problem is the one raised by the myth of Narcissus, as a lover of image(s), because it is seminal. It leads, for instance, to the notion of fluidity, which proves essential in my study of the "ghostliness" of the video image (as well as in my study of space in video). Last but not least, the relations between time and the video image are specified with Bergson's help, and I try to show how useful this philosopher's notion of time can be for understanding the singularity of the video image.
Books on the topic "Video"
European Commission. Directorate General Information, Communication, Culture, Audiovisual. Video-Katalog =: Video catalogue = Catalogue vidéo. Luxembourg: Office for Official Publications of the European Communities = Office des publications officielles des Communautés européennes, 1997.
Goethe-Institut. Video Katalog: Videos for Loan. London: Goethe-Institut, 1990.
European Commission. Directorate General X for Information, Communication, Culture, Audiovisual. Video-Katalog =: Video catalogue = Catalogue vidéo. Luxemburg: Amt für amtliche Veröffentlichungen der Europäischen Gemeinschaften, 1997.
Weir, Kathryn Elizabeth, Nicholas Chambers, and Queensland Art Gallery, eds. Video Hits: Art & Music Video. South Brisbane: Queensland Art Gallery, 2004.
Educational Technology Publications, ed. Interactive Video. Englewood Cliffs, N.J.: Educational Technology Publications, 1989.
Sokol, Erich. Video. Wien: Jugend & Volk, 1990.
Coulter, George. Video. Vero Beach, Fla.: Rourke Publications, 1996.
Biel, Jackie. Video. Tarrytown, N.Y.: Benchmark Books, 1996.
Lavery, Mike, and Mario Rinvolucri, eds. Video. Oxford: Oxford University Press, 1991.
Castro, Vicente González. Video. Vedado, Ciudad de La Habana: Editorial P. de la Torriente, 1987.
Find full textBook chapters on the topic "Video"
Tassi, Laura, Valeria Mariani, Veronica Pelliccia, and Roberto Mai. "Video-Electroencephalography (Video-EEG)." In Clinical Electroencephalography, 305–17. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-04573-9_18.
Baumgardt, Michael. "Video." In Web Design kreativ!, 144–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-642-56961-6_10.
Baldwin, Dennis, Jamie Macdonald, Keith Peters, Jon Steer, David Tudury, Jerome Turner, Steve Webster, Alex White, and Todd Yard. "Video." In Flash MX Studio, 493–528. Berkeley, CA: Apress, 2002. http://dx.doi.org/10.1007/978-1-4302-5166-8_14.
Farkas, Bart, and Jeff Govier. "Video." In Use Your PC to Build an Incredible Home Theater System, 3–31. Berkeley, CA: Apress, 2003. http://dx.doi.org/10.1007/978-1-4302-5174-3_1.
Green, Tom, and Tiago Dias. "Video." In Foundation Flash CS5 for Designers, 527–99. Berkeley, CA: Apress, 2010. http://dx.doi.org/10.1007/978-1-4302-2995-7_10.
Green, Tom, and David Stiller. "Video." In Foundation Flash CS4 for Designers, 441–91. Berkeley, CA: Apress, 2009. http://dx.doi.org/10.1007/978-1-4302-1094-8_10.
Green, Tom, and Joseph Labrecque. "Video." In Beginning Adobe Animate CC, 477–518. Berkeley, CA: Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-2376-5_10.
Weik, Martin H. "video." In Computer Science and Communications Dictionary, 1889. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_20759.
Roach, J. W. "Video." In Human-Machine Interactive Systems, 185–97. Boston, MA: Springer US, 1991. http://dx.doi.org/10.1007/978-1-4684-5883-1_8.
Lemke, Inga. "Video." In Handbuch Populäre Kultur, 472–78. Stuttgart: J.B. Metzler, 2003. http://dx.doi.org/10.1007/978-3-476-05001-4_103.
Full textConference papers on the topic "Video"
Han, Haochen, and Yu Sun. "A Video Note Taking System to Make Online Video Learning Easier." In 10th International Conference on Information Technology Convergence and Services (ITCSE 2021). AIRCC Publishing Corporation, 2021. http://dx.doi.org/10.5121/csit.2021.110917.
Minola Davids, D., and C. Seldev Christopher. "Surveillance Video Summarization based on Target Object Detection." In The International Conference on Scientific Innovations in Science, Technology, and Management. International Journal of Advanced Trends in Engineering and Management, 2023. http://dx.doi.org/10.59544/jist4192/ngcesi23p94.
Yu, Yipeng, Xiao Chen, and Hui Zhan. "VideoMaster: A Multimodal Micro Game Video Recreator." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/844.
Liu, Ziling, Jinyu Yang, Mingqi Gao, and Feng Zheng. "Place Anything into Any Video." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/1019.
Lu, Xinyuan, Shengyuan Huang, Li Niu, Wenyan Cong, and Liqing Zhang. "Deep Video Harmonization With Color Mapping Consistency." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/172.
Zvezdakova, Anastasia, Sergey Zvezdakov, Dmitriy Kulikov, and Dmitriy Vatolin. "Hacking VMAF with Video Color and Contrast Distortion." In 29th International Conference on Computer Graphics, Image Processing and Computer Vision, Visualization Systems and the Virtual Environment GraphiCon'2019. Bryansk State Technical University, 2019. http://dx.doi.org/10.30987/graphicon-2019-2-53-57.
Mendes, Paulo, and Sérgio Colcher. "Spatio-temporal Localization of Actors in Video/360-Video and its Applications." In Simpósio Brasileiro de Sistemas Multimídia e Web. Sociedade Brasileira de Computação - SBC, 2022. http://dx.doi.org/10.5753/webmedia_estendido.2022.224999.
Cai, Jia-Jia, Jun Tang, Qing-Guo Chen, Yao Hu, Xiaobo Wang, and Sheng-Jun Huang. "Multi-View Active Learning for Video Recommendation." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/284.
Rogers, Steven K. "Confessions of a video warrior." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.thn3.
Moreira, Daniel, Siome Goldenstein, and Anderson Rocha. "Sensitive-Video Analysis." In XXX Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação - SBC, 2017. http://dx.doi.org/10.5753/ctd.2017.3466.
Full textReports on the topic "Video"
Zakhor, Avideh. Video Compression Algorithms for Transmission and Video. Fort Belvoir, VA: Defense Technical Information Center, May 1997. http://dx.doi.org/10.21236/ada327255.
Patel, Deep, Kenneth Graf, and David Fuller. Hip Surgical Preparation Educational Video. Rowan Digital Works, January 2021. http://dx.doi.org/10.31986/issn.2689-0690_rdw.oer.1022.
Patel, Deep, Catherine Fedorka, and David Fuller. Shoulder Surgical Preparation Educational Video. Rowan Digital Works, January 2021. http://dx.doi.org/10.31986/issn.2689-0690_rdw.oer.1023.
Patel, Deep, Julio Rodriguez, Vishal Khatri, and David Fuller. Spine Surgical Preparation Educational Video. Rowan Digital Works, January 2021. http://dx.doi.org/10.31986/issn.2689-0690_rdw.oer.1021.
Mascarenas, David Dennis Lee. Video Magic. Office of Scientific and Technical Information (OSTI), August 2019. http://dx.doi.org/10.2172/1557196.
Patel, Deep, Eric Freeland, and David Fuller. Foot and Ankle Surgical Preparation Educational Video. Rowan Digital Works, January 2021. http://dx.doi.org/10.31986/issn.2689-0690_rdw.oer.1020.
Patel, Deep, Alisina Shahi, and David Fuller. Hand and Wrist Surgical Preparation Educational Video. Rowan Digital Works, January 2021. http://dx.doi.org/10.31986/issn.2689-0690_rdw.oer.1019.
Blask, Steven. Airborne Video Surveillance. Fort Belvoir, VA: Defense Technical Information Center, February 2002. http://dx.doi.org/10.21236/ada402884.
Fletcher, Jonathan, David Doria, and David Druno. Android Video Streaming. Fort Belvoir, VA: Defense Technical Information Center, May 2014. http://dx.doi.org/10.21236/ada601489.
Kobla, Vikrant, David Doermann, and Azriel Rosenfeld. Compressed Video Segmentation. Fort Belvoir, VA: Defense Technical Information Center, September 1996. http://dx.doi.org/10.21236/ada458852.