Academic literature on the topic 'MULTI VIEW VIDEOS'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'MULTI VIEW VIDEOS.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "MULTI VIEW VIDEOS"
Luo, Lei, Rong Xin Jiang, Xiang Tian, and Yao Wu Chen. "Reference Viewpoints Selection for Multi-View Video Plus Depth Coding Based on the Network Bandwidth Constraint." Applied Mechanics and Materials 303-306 (February 2013): 2134–38. http://dx.doi.org/10.4028/www.scientific.net/amm.303-306.2134.
Chen, Jiawei, Zhenshi Zhang, and Xupeng Wen. "Target Identification via Multi-View Multi-Task Joint Sparse Representation." Applied Sciences 12, no. 21 (October 28, 2022): 10955. http://dx.doi.org/10.3390/app122110955.
Zhong, Chengzhang, Amy R. Reibman, Hansel A. Mina, and Amanda J. Deering. "Multi-View Hand-Hygiene Recognition for Food Safety." Journal of Imaging 6, no. 11 (November 7, 2020): 120. http://dx.doi.org/10.3390/jimaging6110120.
Kumar, Yaman, Rohit Jain, Khwaja Mohd Salik, Rajiv Ratn Shah, Yifang Yin, and Roger Zimmermann. "Lipper: Synthesizing Thy Speech Using Multi-View Lipreading." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2588–95. http://dx.doi.org/10.1609/aaai.v33i01.33012588.
Ata, Sezin Kircali, Yuan Fang, Min Wu, Jiaqi Shi, Chee Keong Kwoh, and Xiaoli Li. "Multi-View Collaborative Network Embedding." ACM Transactions on Knowledge Discovery from Data 15, no. 3 (April 12, 2021): 1–18. http://dx.doi.org/10.1145/3441450.
Pan, Yingwei, Yue Chen, Qian Bao, Ning Zhang, Ting Yao, Jingen Liu, and Tao Mei. "Smart Director: An Event-Driven Directing System for Live Broadcasting." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 4 (November 30, 2021): 1–18. http://dx.doi.org/10.1145/3448981.
Salik, Khwaja Mohd, Swati Aggarwal, Yaman Kumar, Rajiv Ratn Shah, Rohit Jain, and Roger Zimmermann. "Lipper: Speaker Independent Speech Synthesis Using Multi-View Lipreading." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 10023–24. http://dx.doi.org/10.1609/aaai.v33i01.330110023.
Obayashi, Mizuki, Shohei Mori, Hideo Saito, Hiroki Kajita, and Yoshifumi Takatsume. "Multi-View Surgical Camera Calibration with None-Feature-Rich Video Frames: Toward 3D Surgery Playback." Applied Sciences 13, no. 4 (February 14, 2023): 2447. http://dx.doi.org/10.3390/app13042447.
Du, Ming, Aswin C. Sankaranarayanan, and Rama Chellappa. "Robust Face Recognition From Multi-View Videos." IEEE Transactions on Image Processing 23, no. 3 (March 2014): 1105–17. http://dx.doi.org/10.1109/tip.2014.2300812.
Mallik, Bruhanth, Akbar Sheikh-Akbari, Pooneh Bagheri Zadeh, and Salah Al-Majeed. "HEVC Based Frame Interleaved Coding Technique for Stereo and Multi-View Videos." Information 13, no. 12 (November 25, 2022): 554. http://dx.doi.org/10.3390/info13120554.
Dissertations / Theses on the topic "MULTI VIEW VIDEOS"
Wang, Dongang. "Action Recognition in Multi-view Videos." Thesis, The University of Sydney, 2018. http://hdl.handle.net/2123/19740.
Full textCanavan, Shaun. "Face recognition by multi-frame fusion of rotating heads in videos /." Connect to resource online, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1210446052.
Canavan, Shaun J. "Face Recognition by Multi-Frame Fusion of Rotating Heads in Videos." Youngstown State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1210446052.
Balusu, Anusha. "Multi-Vehicle Detection and Tracking in Traffic Videos Obtained from UAVs." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1593266183551245.
Twinanda, Andru Putra. "Vision-based approaches for surgical activity recognition using laparoscopic and RGBD videos." Thesis, Strasbourg, 2017. http://www.theses.fr/2017STRAD005/document.
Full textThe main objective of this thesis is to address the problem of activity recognition in the operating room (OR). Activity recognition is an essential component in the development of context-aware systems, which will allow various applications, such as automated assistance during difficult procedures. Here, we focus on vision-based approaches since cameras are a common source of information to observe the OR without disrupting the surgical workflow. Specifically, we propose to use two complementary video types: laparoscopic and OR-scene RGBD videos. We investigate how state-of-the-art computer vision approaches perform on these videos and propose novel approaches, consisting of deep learning approaches, to carry out the tasks. To evaluate our proposed approaches, we generate large datasets of recordings of real surgeries. The results demonstrate that the proposed approaches outperform the state-of-the-art methods in performing surgical activity recognition on these new datasets
Ozcinar, Cagri. "Multi-view video communication." Thesis, University of Surrey, 2015. http://epubs.surrey.ac.uk/807807/.
Salvador, Marcos Jordi. "Surface reconstruction for multi-view video." Doctoral thesis, Universitat Politècnica de Catalunya, 2011. http://hdl.handle.net/10803/108907.
Full textAquesta tesi presenta diferents tècniques per a la definiciò d’una metodologia per obtenir una representaciò alternativa de les seqüències de vídeo capturades per sistemes multi-càmera calibrats en entorns controlats, amb fons de l’escena conegut. Com el títol de la tesi suggereix, aquesta representació consisteix en una descripció tridimensional de les superfícies dels objectes de primer pla. Aquesta aproximació per la representació de les dades multi-vista permet recuperar part de la informació tridimensional de l’escena original perduda en el procés de projecció que fa cada càmera. L’elecció del tipus de representació i el disseny de les tècniques per la reconstrucció de l’escena responen a tres requeriments que apareixen en entorns controlats del tipus smart room o estudis de gravació, en què les seqüències capturades pel sistema multi-càmera són utilitzades tant per aplicacions d’anàlisi com per diferents mètodes de visualització interactius. El primer requeriment és que el mètode de reconstrucció ha de ser ràpid, per tal de poder-ho utilitzar en aplicacions interactives. El segon és que la representació de les superfícies sigui eficient, de manera que en resulti una compressió de les dades multi-vista. El tercer requeriment és que aquesta representació sigui efectiva, és a dir, que pugui ser utilitzada en aplicacions d’anàlisi, així com per visualitació. Un cop separats els continguts de primer pla i de fons de cada vista –possible en entorns controlats amb fons conegut–, l’estratègia que es segueix en el desenvolupament de la tesi és la de dividir el procés de reconstrucció en dues etapes. La primera consisteix en obtenir un mostreig de les superfícies (incloent orientació i textura). La segona etapa proporciona superfícies tancades, contínues, a partir del conjunt de mostres, mitjançant un procés d’interpolació. 
El resultat de la primera etapa és un conjunt de punts orientats a l’espai 3D que representen localment la posició, orientació i textura de les superfícies visibles pel conjunt de càmeres. El procés de mostreig s’interpreta com un procés de cerca de posicions 3D que resulten en correspondències de característiques de la imatge entre diferents vistes. Aquest procés de cerca pot ser conduït mitjançant diferents mecanismes, els quals es presenten a la primera part d’aquesta tesi. La primera proposta és fer servir un mètode basat en les imatges que busca mostres de superfície al llarg de la semi-recta que comença al centre de projeccions de cada càmera i passa per un determinat punt de la imatge corresponent. Aquest mètode s’adapta correctament al cas de voler explotar foto-consistència en un escenari estàtic i presenta caracterìstiques favorables per la seva utilizació en GPUs–desitjable–, però no està orientat a explotar les redundàncies temporals existentsen seqüències multi-vista ni proporciona superfícies tancades. El segon mètode efectua la cerca a partir d’una superfície inicial mostrejada que tanca l’espai on es troben els objectes a reconstruir. La cerca en direcció inversa a les normals –apuntant a l’interior– permet obtenir superfícies tancades amb un algorisme que explota la correlació temporal de l’escena per a l’evolució de reconstruccions 3D successives al llarg del temps. Un inconvenient d’aquest mètode és el conjunt d’operacions topològiques sobre la superfície inicial, que en general no són aplicables eficientment en GPUs. La tercera estratègia de mostreig està orientada a la paral·lelització –GPU– i l’explotació de correlacions temporals i espacials en la cerca de mostres de superfície. Definint un espai inicial de cerca que inclou els objectes a reconstruir, es busquen aleatòriament unes quantes mostres llavor sobre la superfície dels objectes. 
A continuació, es continuen buscant noves mostres de superfície al voltant de cada llavor –procés d’expansió– fins que s’aconsegueix una densitat suficient. Per tal de millorar l’eficiència de la cerca inicial de llavors, es proposa reduir l’espai de cerca, explotant d’una banda correlacions temporals en seqüències multi-vista i de l’altra aplicant multi-resolució. A continuació es procedeix amb l’expansió, que explota la correlació espacial en la distribució de les mostres de superfície. A la segona part de la tesi es presenta un algorisme de mallat que permet interpolar la superfície entre les mostres. A partir d’un triangle inicial, que connecta tres punts coherentment orientats, es procedeix a una expansió iterativa de la superfície sobre el conjunt complet de mostres. En relació amb l’estat de l’art, el mètode proposat presenta una reconstrucció molt precisa (no modifica la posició de les mostres) i resulta en una topologia correcta. A més, és prou ràpid com per ser utilitzable en aplicacions interactives, a diferència de la majoria de mètodes disponibles. Els resultats finals, aplicant ambdues etapes –mostreig i interpolació–, demostren la validesa de la proposta. Les dades experimentals mostren com la metodologia presentada permet obtenir una representació ràpida, eficient –compressió– i efectiva –completa– dels elements de primer pla de l’escena.
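The first, image-based sampling strategy in the abstract above (searching for surface samples along each camera's viewing rays, keeping the depth at which the views agree) can be illustrated with a minimal sketch. Everything here is a synthetic stand-in, not code from the thesis: the pinhole ray, the depth range, and the toy per-view color model are all illustrative assumptions.

```python
import numpy as np

def ray_points(center, direction, depths):
    """Candidate 3D surface positions sampled along a camera's viewing ray."""
    direction = direction / np.linalg.norm(direction)
    return center + depths[:, None] * direction

def photo_consistency_cost(colors):
    """Variance of the colors observed across views; lower = more consistent."""
    return np.var(colors, axis=0).sum()

# Toy search along one ray from a camera at the origin looking down +Z.
center = np.zeros(3)
direction = np.array([0.0, 0.0, 1.0])
depths = np.linspace(0.5, 3.0, 26)          # step 0.1, includes 1.5 exactly
candidates = ray_points(center, direction, depths)

# Hypothetical per-view color model: the three views agree only where the
# candidate sits on the (synthetic) surface at depth 1.5.
def view_color(p, view_noise):
    return np.full(3, p[2]) + view_noise * abs(p[2] - 1.5)

best = min(
    range(len(candidates)),
    key=lambda i: photo_consistency_cost(
        np.stack([view_color(candidates[i], n) for n in (0.0, 0.3, -0.2)])
    ),
)
print(depths[best])  # depth of the most photo-consistent candidate
```

In the real method the per-view colors come from projecting each candidate into the calibrated cameras' images; the independent per-ray searches are what makes the approach GPU-friendly.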
Abdullah, Jan Mirza, and Mahmododfateh Ahsan. "Multi-View Video Transmission over the Internet." Thesis, Linköping University, Department of Electrical Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57903.
3D television using multi-view rendering is receiving increasing interest. In this technology, a number of video sequences are transmitted simultaneously to provide a larger view of the scene or a stereoscopic viewing experience. With two views, stereoscopic rendition is possible. 3D displays are now available that can show several views simultaneously, so that the user sees different views by moving their head.
The thesis work aims at implementing a demonstration system with a number of simultaneous views. The system includes two cameras, computers at both the transmitting and receiving ends, and a multi-view display. Besides setting up the hardware, the main task is to implement software so that the transmission can be done over an IP network.
This thesis report includes an overview of, and experiences with, similar published systems; the implementation of real-time video capture, compression, encoding, and transmission over the internet with the help of socket programming; and finally the multi-view display in 3D format. The report also describes in more detail the design considerations regarding video coding and network protocols.
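The socket-based transmission described in this abstract can be sketched with a minimal loopback example. The framing scheme below (a 1-byte view id plus a 4-byte length before each frame) is an illustrative assumption, not the thesis's actual protocol; it only shows how multiple simultaneous views can be multiplexed over one TCP connection.

```python
import socket
import struct

def send_frame(sock, view_id, frame):
    """Prefix each frame with a 1-byte view id and a 4-byte big-endian length
    so the receiver can demultiplex the simultaneously transmitted views."""
    sock.sendall(struct.pack("!BI", view_id, len(frame)) + frame)

def recv_exact(sock, n):
    """TCP is a byte stream, so read until exactly n bytes have arrived."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock):
    view_id, length = struct.unpack("!BI", recv_exact(sock, 5))
    return view_id, recv_exact(sock, length)

# Loopback demo standing in for the camera-PC -> display-PC link.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(1)

sender = socket.create_connection(server.getsockname())
receiver, _ = server.accept()

send_frame(sender, 0, b"left-view-frame")
send_frame(sender, 1, b"right-view-frame")
f0 = recv_frame(receiver)
f1 = recv_frame(receiver)
print(f0, f1)

for s in (sender, receiver, server):
    s.close()
```

A real system would put compressed video payloads in place of the byte strings and likely add timestamps for inter-view synchronization.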
Fecker, Ulrich. "Coding techniques for multi-view video signals /." Aachen : Shaker, 2009. http://d-nb.info/993283179/04.
Ozkalayci, Burak Oguz. "Multi-view Video Coding Via Dense Depth Field." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607517/index.pdf.
Books on the topic "MULTI VIEW VIDEOS"
Coelho, Alessandra Martins. Multimedia Networking and Coding: State-of-the Art Motion Estimation in the Context of 3D TV. Cyprus: INTECH, 2013.
Book chapters on the topic "MULTI VIEW VIDEOS"
Yashika, B. L., and Vinod B. Durdi. "Image Fusion in Multi-view Videos Using SURF Algorithm." In Information and Communication Technology for Competitive Strategies (ICTCS 2020), 1061–71. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-0882-7_96.
Hossain, Emdad, Girija Chetty, and Roland Goecke. "Multi-view Multi-modal Gait Based Human Identity Recognition from Surveillance Videos." In Lecture Notes in Computer Science, 88–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37081-6_11.
Hossain, Emdad, and Girija Chetty. "Gait Based Human Identity Recognition from Multi-view Surveillance Videos." In Algorithms and Architectures for Parallel Processing, 319–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33065-0_34.
Wang, Chao, Yunhong Wang, Zhaoxiang Zhang, and Yiding Wang. "Model-Based Multi-view Face Construction and Recognition in Videos." In Lecture Notes in Computer Science, 280–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31576-3_37.
Nosrati, Masoud S., Jean-Marc Peyrat, Julien Abinahed, Osama Al-Alao, Abdulla Al-Ansari, Rafeef Abugharbieh, and Ghassan Hamarneh. "Efficient Multi-organ Segmentation in Multi-view Endoscopic Videos Using Pre-operative Priors." In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2014, 324–31. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10470-6_41.
He, Ming, Yong Ge, Le Wu, Enhong Chen, and Chang Tan. "Predicting the Popularity of DanMu-enabled Videos: A Multi-factor View." In Database Systems for Advanced Applications, 351–66. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-32049-6_22.
Santhoshkumar, R., and M. Kalaiselvi Geetha. "Emotion Recognition on Multi View Static Action Videos Using Multi Blocks Maximum Intensity Code (MBMIC)." In New Trends in Computational Vision and Bio-inspired Computing, 1143–51. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41862-5_116.
Hossain, Emdad, and Girija Chetty. "Multi-view Gait Fusion for Large Scale Human Identification in Surveillance Videos." In Advanced Concepts for Intelligent Vision Systems, 527–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33140-4_46.
Guo, Lintao, Hunter Quant, Nikolas Lamb, Benjamin Lowit, Sean Banerjee, and Natasha Kholgade Banerjee. "Spatiotemporal 3D Models of Aging Fruit from Multi-view Time-Lapse Videos." In MultiMedia Modeling, 466–78. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73603-7_38.
Guo, Lintao, Hunter Quant, Nikolas Lamb, Benjamin Lowit, Natasha Kholgade Banerjee, and Sean Banerjee. "Multi-camera Microenvironment to Capture Multi-view Time-Lapse Videos for 3D Analysis of Aging Objects." In MultiMedia Modeling, 381–85. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73600-6_37.
Conference papers on the topic "MULTI VIEW VIDEOS"
Cai, Jia-Jia, Jun Tang, Qing-Guo Chen, Yao Hu, Xiaobo Wang, and Sheng-Jun Huang. "Multi-View Active Learning for Video Recommendation." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/284.
Lin, Xinyu, Vlado Kitanovski, Qianni Zhang, and Ebroul Izquierdo. "Enhanced multi-view dancing videos synchronisation." In 2012 13th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS). IEEE, 2012. http://dx.doi.org/10.1109/wiamis.2012.6226773.
Wang, Xueting. "Viewing support system for multi-view videos." In ICMI '16: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2993148.2997613.
Lee, Ji-Tang, De-Nian Yang, and Wanjiun Liao. "Efficient Caching for Multi-View 3D Videos." In GLOBECOM 2016 - 2016 IEEE Global Communications Conference. IEEE, 2016. http://dx.doi.org/10.1109/glocom.2016.7841773.
Shuai, Qing, Chen Geng, Qi Fang, Sida Peng, Wenhao Shen, Xiaowei Zhou, and Hujun Bao. "Novel View Synthesis of Human Interactions from Sparse Multi-view Videos." In SIGGRAPH '22: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3528233.3530704.
Panda, Rameswar, Abir Das, and Amit K. Roy-Chowdhury. "Embedded sparse coding for summarizing multi-view videos." In 2016 IEEE International Conference on Image Processing (ICIP). IEEE, 2016. http://dx.doi.org/10.1109/icip.2016.7532345.
Shimizu, Tomohiro, Kei Oishi, Hideo Saito, Hiroki Kajita, and Yoshifumi Takatsume. "Automatic Viewpoint Switching for Multi-view Surgical Videos." In 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019. http://dx.doi.org/10.1109/ismar-adjunct.2019.00037.
Kuntintara, Wichukorn, Kanokphan Lertniphonphan, and Punnarai Siricharoen. "Multi-class Vehicle Counting System for Multi-view Traffic Videos." In 2022 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2022. http://dx.doi.org/10.23919/apsipaasc55919.2022.9980202.
Wei, Chen-Hao, Chen-Kuo Chiang, and Shang-Hong Lai. "Iterative depth recovery for multi-view video synthesis from stereo videos." In 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA). IEEE, 2014. http://dx.doi.org/10.1109/apsipa.2014.7041695.
Davila, Daniel, Dawei Du, Bryon Lewis, Christopher Funk, Joseph Van Pelt, Roderic Collins, Kellie Corona, et al. "MEVID: Multi-view Extended Videos with Identities for Video Person Re-Identification." In 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2023. http://dx.doi.org/10.1109/wacv56688.2023.00168.
Reports on the topic "MULTI VIEW VIDEOS"
Tao, Yang, Amos Mizrach, Victor Alchanatis, Nachshon Shamir, and Tom Porter. Automated imaging broiler chick sexing for gender-specific and efficient production. United States Department of Agriculture, December 2014. http://dx.doi.org/10.32747/2014.7594391.bard.
Full textAnderson, Gerald L., and Kalman Peleg. Precision Cropping by Remotely Sensed Prorotype Plots and Calibration in the Complex Domain. United States Department of Agriculture, December 2002. http://dx.doi.org/10.32747/2002.7585193.bard.
Full text