Theses on the topic "Video processing"
Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles
See the top 50 dissertations (master's and doctoral theses) for your research on the topic "Video processing".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is present in the metadata.
Browse theses from many scientific areas and compile a correct bibliography.
Aggoun, Amar. "DPCM video signal/image processing". Thesis, University of Nottingham, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335792.
Full text
Zhao, Jin. "Video/Image Processing on FPGA". Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-theses/503.
Full text
Isaieva, O. A., and О. Г. Аврунін. "Image processing for video dermatoscopy". Thesis, Osaka, Japan, 2019. http://openarchive.nure.ua/handle/document/10347.
Full text
Chen, Juan. "Content-based Digital Video Processing. Digital Videos Segmentation, Retrieval and Interpretation". Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4256.
Full text
Haynes, Simon Dominic. "Reconfigurable architectures for video image processing". Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322797.
Full text
Fernando, Warnakulasuriya Anil Chandana. "Video processing in the compressed domain". Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326724.
Full text
Leonce, Andrew. "HDR video enhancement, processing and coding". Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/19639.
Full text
Wu, Hao-Yu M. Eng Massachusetts Institute of Technology. "Eulerian Video Processing and medical applications". Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77452.
Testo completoCataloged from PDF version of thesis.
Includes bibliographical references (p. 68-69).
Our goal is to reveal subtle yet informative signals in videos that are difficult or impossible to see with the naked eye. We can either display them in an indicative manner or analyze them to extract important measurements, such as vital signs. Our method, which we call Eulerian Video Processing, takes a standard video sequence as input and applies spatial decomposition followed by temporal filtering to the frames. The resulting signals can be visually amplified to reveal hidden information, a process we call Eulerian Video Magnification. Using Eulerian Video Magnification, we are able to visualize the flow of blood as it fills the face and to amplify and reveal small motions. Our technique can run in real time to instantly show phenomena occurring at the temporal frequencies selected by the user. These signals can also be used to extract vital signs without contact. We present a heart-rate extraction system that estimates the heart rate of newborns from videos recorded in a real nursery environment. Our system produces heart-rate measurements with clinical accuracy when the newborns make only mild motions and the videos are acquired in brightly lit environments.
by Hao-Yu Wu.
M.Eng. and S.B.
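The pipeline the abstract describes — temporal filtering of each pixel's time series, then amplification of the selected band — can be sketched per pixel with an ideal FFT band-pass. This is an illustrative simplification (it skips the spatial decomposition step, and `eulerian_magnify` is a hypothetical name, not the thesis's implementation):

```python
import numpy as np

def eulerian_magnify(frames, fps, f_lo, f_hi, alpha):
    """Amplify temporal variations in a selected frequency band.

    frames: (T, H, W) float array of grayscale frames in [0, 1].
    Each pixel's time series is band-passed with an ideal FFT filter
    between f_lo and f_hi (Hz); the band is scaled by alpha and
    added back to the input.
    """
    T = frames.shape[0]
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    spectrum = np.fft.rfft(frames, axis=0)
    keep = (freqs >= f_lo) & (freqs <= f_hi)
    # Zero out all temporal frequencies outside the band, invert.
    band = np.fft.irfft(spectrum * keep[:, None, None], n=T, axis=0)
    return frames + alpha * band
```

For heart-rate-style signals one would pick a band around typical pulse frequencies (e.g. 0.8–3 Hz) and observe the amplified color variation over skin regions.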
Tsoligkas, Nick A. "Video/Image Processing Algorithms for Video Compression and Image Stabilization Applications". Thesis, Teesside University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.517469.
Full text
Tsoi, Yau Chat. "Video cosmetics : digital removal of blemishes from video /". View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20TSOI.
Full text
Includes bibliographical references (leaves 83-86). Also available in electronic version. Access restricted to campus users.
Korpinen, K. P. (Kalle-Pekka). "Projektinhallinan video yliopisto-opetuksessa". Master's thesis, University of Oulu, 2014. http://urn.fi/URN:NBN:fi:oulu-201405241497.
Full text
Raihani, Nilgoun. "Respiration Pattern Using Amplified Video". Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case151272961173245.
Full text
Lazcano, Vanel. "Some problems in depth enhanced video processing". Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/373917.
Full text
This thesis addresses two problems: data interpolation in the context of disparity computation for both images and video, and the estimation of the apparent motion of objects in an image sequence. The first problem concerns the completion of depth data in an image or video region where the data have been lost due to occlusions, unreliable measurements, corruption, or loss during acquisition. The thesis approaches it in two ways. First, it proposes an energy based on non-local gradients that can (locally) complete planes; this model can be seen as an extension of the bilateral filter to the gradient domain. The model was successfully evaluated on completing synthetic data as well as incomplete depth maps from a Kinect sensor. The second approach is an experimental study of the biased AMLE (biased Absolutely Minimizing Lipschitz Extension) for anisotropic interpolation of depth data over large regions with no information. The AMLE operator is an interpolator of cones; the biased AMLE interpolates exponential cones, which makes it better adapted to depth maps of real scenes (which commonly contain convex, concave, and smooth surfaces). Moreover, the biased AMLE operator can expand depth data into large regions. By endowing the image domain with an anisotropic metric, the proposed method can take the underlying geometric information into account so as not to interpolate across the boundaries of objects at different depths. A numerical model based on the eikonal operator is proposed to compute the biased AMLE solution, and the numerical model has been extended to video sequences. Optical flow computation is one of the most challenging problems in computer vision.
Traditional models fail to estimate optical flow in the presence of occlusions or non-uniform illumination. To address this, a variational model is proposed that jointly estimates optical flow and occlusions. Moreover, the proposed model tolerates fast displacements of objects larger than the object's size in the scene, a traditional limitation of variational methods. Adding a term that balances gradients and intensities increases the model's robustness to illumination changes, and including additional correspondences (obtained by exhaustive search at specific locations) helps estimate large displacements.
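The AMLE interpolation described in the abstract above can be illustrated with a toy grid solver: a discrete infinity-harmonic value is the mean of the maximum and minimum of a pixel's four neighbours, and iterating this update fills the unknown region from the known data. This is a simplified, isotropic, unbiased sketch (the thesis's biased, eikonal-based anisotropic scheme is more involved), and `amle_inpaint` is a hypothetical name:

```python
import numpy as np

def amle_inpaint(depth, known, n_iter=3000):
    """Fill unknown depth values by iterating the discrete
    infinity-Laplace ("cone interpolant") update: each unknown pixel
    moves toward the mean of the max and min of its 4-neighbours,
    while known pixels stay fixed (Dirichlet data)."""
    u = depth.astype(float).copy()
    u[~known] = u[known].mean()  # crude initial guess
    for _ in range(n_iter):
        neigh = np.stack([np.roll(u, 1, 0), np.roll(u, -1, 0),
                          np.roll(u, 1, 1), np.roll(u, -1, 1)])
        new = 0.5 * (neigh.max(axis=0) + neigh.min(axis=0))
        u[~known] = new[~known]
    return u
```

Note that `np.roll` wraps around the image border, so this toy version assumes the known set includes the border pixels.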
Toivonen, T. (Tuukka). "Efficient methods for video coding and processing". Doctoral thesis, University of Oulu, 2008. http://urn.fi/urn:isbn:9789514286957.
Full text
Gause, Jörn. "Reconfigurable computing for shape-adaptive video processing". Thesis, Imperial College London, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.397539.
Full text
Javadi, Seyed Mahdi Sadreddinhajseyed. "Research into illumination variance in video processing". Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/17027.
Full text
Blount, Alan Wayne. "Display manager for a video processing system". Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/12951.
Full text
Includes bibliographical references (leaves 44-45).
by Alan Wayne Blount.
B.S.
Kourennyi, Dmitri Dmitrievich. "Customer Tracking Through Security Camera Video Processing". Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1290122439.
Full text
Wedge, Daniel John. "Video sequence synchronization". University of Western Australia. School of Computer Science and Software Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0084.
Full text
蔡固庭 and Koo-ting Choi. "Improved processing techniques for picture sequence coding". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31220642.
Full text
Choi, Koo-ting. "Improved processing techniques for picture sequence coding /". Hong Kong : University of Hong Kong, 1998. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20565550.
Full text
Kang, Jung Won. "Effective temporal video segmentation and content-based audio-visual video clustering". Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/13731.
Full text
Jiang, Xiaofeng. "Multipoint digital video communications". Thesis, University of Cambridge, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239548.
Full text
So, Wai-ki, and 蘇慧琪. "Shadow identification in traffic video sequences". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B32045967.
Full text
Biswas, Mainak. "Content adaptive video processing algorithms for digital TV /". Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2005. http://wwwlib.umi.com/cr/ucsd/fullcit?p3189792.
Full text
Haro, Antonio. "Example Based Processing For Image And Video Synthesis". Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/5283.
Full text
Li, Min. "Markov Random field edge-centric image/video processing". Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2007. http://wwwlib.umi.com/cr/ucsd/fullcit?p3274746.
Full text
Title from first page of PDF file (viewed October 8, 2007). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 117-125).
Coimbra, Miguel Tavares. "Compressed domain video processing with applications to surveillance". Thesis, Queen Mary, University of London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.413780.
Full text
Hamosfakidis, Anastasios. "MPEG-4 software video encoding using parallel processing". Thesis, Queen Mary, University of London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.436200.
Full text
Hu, Yongtao, and 胡永涛. "Multimodal speaker localization and identification for video processing". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/212633.
Full text
Sims, Oliver. "Efficient implementation of video processing algorithms on FPGA". Thesis, University of Glasgow, 2007. http://theses.gla.ac.uk/4119/.
Full text
Chen, Jiawen (Jiawen Kevin). "Efficient data structures for piecewise-smooth video processing". Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66003.
Testo completoCataloged from PDF version of thesis.
Includes bibliographical references (p. 95-102).
A number of useful image and video processing techniques, ranging from low-level operations such as denoising and detail enhancement to higher-level methods such as object manipulation and special effects, rely on piecewise-smooth functions computed from the input data. In this thesis, we present two computationally efficient data structures for representing piecewise-smooth visual information and demonstrate how they can dramatically simplify and accelerate a variety of video processing algorithms. We start by introducing the bilateral grid, an image representation that explicitly accounts for intensity edges. By interpreting brightness values as Euclidean coordinates, the bilateral grid enables simple expressions for edge-aware filters. Smooth functions defined on the bilateral grid are piecewise-smooth in image space. Within this framework, we derive efficient reinterpretations of a number of edge-aware filters commonly used in computational photography as operations on the bilateral grid, including the bilateral filter, edge-aware scattered data interpolation, and local histogram equalization. We also show how these techniques can be easily parallelized onto modern graphics hardware for real-time processing of high-definition video. The second data structure we introduce is the video mesh, designed as a flexible central data structure for general-purpose video editing. It represents objects in a video sequence as 2.5D "paper cutouts" and allows interactive editing of moving objects and modeling of depth, which enables 3D effects and post-exposure camera control. In our representation, we assume that motion and depth are piecewise-smooth, and encode them sparsely as a set of points tracked over time. The video mesh is a triangulation over this point set and per-pixel information is obtained by interpolation. To handle occlusions and detailed object boundaries, we rely on the user to rotoscope the scene at a sparse set of frames using spline curves.
We introduce an algorithm to robustly and automatically cut the mesh into local layers with proper occlusion topology, and propagate the splines to the remaining frames. Object boundaries are refined with per-pixel alpha mattes. At its core, the video mesh is a collection of texture-mapped triangles, which we can edit and render interactively using graphics hardware. We demonstrate the effectiveness of our representation with special effects such as 3D viewpoint changes, object insertion, depth-of-field manipulation, and 2D-to-3D video conversion.
by Jiawen Chen.
Ph.D.
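The splat/blur/slice structure of the bilateral grid described in the abstract above can be sketched for a grayscale image as follows. This is a nearest-neighbour variant with a box blur (the published method uses trilinear splatting/slicing and a Gaussian blur), so treat it as illustrative only:

```python
import numpy as np

def bilateral_grid_filter(img, s_space=8, s_range=0.1):
    """Edge-aware smoothing of a grayscale image in [0, 1] via a
    bilateral grid: splat, blur, slice (nearest-neighbour variant)."""
    H, W = img.shape
    gh, gw = H // s_space + 2, W // s_space + 2
    gd = int(round(1.0 / s_range)) + 2
    data = np.zeros((gh, gw, gd))
    weight = np.zeros((gh, gw, gd))
    ys, xs = np.mgrid[0:H, 0:W]
    gy = (ys / s_space).round().astype(int)
    gx = (xs / s_space).round().astype(int)
    gz = (img / s_range).round().astype(int)  # intensity -> 3rd axis
    # Splat: accumulate homogeneous (value, weight) pairs per cell.
    np.add.at(data, (gy, gx, gz), img)
    np.add.at(weight, (gy, gx, gz), 1.0)
    # Blur: small separable box filter over all three grid axes.
    for axis in range(3):
        data = data + np.roll(data, 1, axis) + np.roll(data, -1, axis)
        weight = weight + np.roll(weight, 1, axis) + np.roll(weight, -1, axis)
    # Slice: read back at each pixel's grid position and normalise.
    return data[gy, gx, gz] / weight[gy, gx, gz]
```

Because the blur never crosses distant intensity bins, smoothing happens within regions of similar brightness while strong edges are preserved, which is the property the thesis exploits for real-time edge-aware video filters.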
Grundmann, Matthias. "Computational video: post-processing methods for stabilization, retargeting and segmentation". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47596.
Full text
Bailey, Kira Marie. "Individual differences in video game experience: cognitive control, affective processing, and visuospatial processing". [Ames, Iowa : Iowa State University], 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1473177.
Full text
Gu, Lifang. "Video analysis in MPEG compressed domain". University of Western Australia. School of Computer Science and Software Engineering, 2003. http://theses.library.uwa.edu.au/adt-WU2003.0016.
Full text
Monaco, Joseph W. "Generalized motion models for video applications". Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/14926.
Full text
Pao, I.-Ming. "Improved standard-conforming video coding techniques /". Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/5936.
Full text
Huang, Jianzhong. "Motion estimation and compensation for video image sequences". Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/14950.
Full text
Saari, M. (Marko). "How usability is visible in video games". Bachelor's thesis, University of Oulu, 2017. http://urn.fi/URN:NBN:fi:oulu-201702231258.
Full text
Dworaczyk Wiltshire, Austin Aaron. "CUDA Enhanced Filtering in a Pipelined Video Processing Framework". DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/1072.
Full text
Grundmann, Matthias. "Real-time content aware resizing of video". Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26622.
Full text
Committee Chair: Essa, Irfan; Committee Member: Dellaert, Frank; Committee Member: Turk, Greg. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Jiang, Min. "Hardware architectures for high-performance image and video processing". Thesis, Queen's University Belfast, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492001.
Full text
Friedland, Gerald. "Adaptive audio and video processing for electronic chalkboard lectures". [S.l.] : [s.n.], 2006. http://www.diss.fu-berlin.de/2006/514/index.html.
Full text
Altilar, Deniz Turgay. "Data partitioning and scheduling for parallel digital video processing". Thesis, Queen Mary, University of London, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.399730.
Full text
Case, David Robert. "Real-time signal processing of multi-path video signals". Thesis, University of Salford, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.334170.
Full text
Dickinson, Keith William. "Traffic data capture and analysis using video image processing". Thesis, University of Sheffield, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306374.
Testo completoParimala, Anusha. "Video Enhancement: Video Stabilization". Thesis, 2018. http://ethesis.nitrkl.ac.in/9977/1/2018_MT_216EC6252_AParimala_Video.pdf.
Full text
Chang, Sheng-Hung, and 張盛紘. "Android Video Processing APP". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/61903393845787879753.
Full text
明新科技大學 (Minghsin University of Science and Technology)
電子工程研究所 (Institute of Electronic Engineering)
102
We implemented a vision-based image-processing application on an Android smartphone. The system uses only the phone's built-in camera and requires no additional equipment; because today's applications demand fast, real-time operation, we process live video rather than still images. In this thesis, we implement an Android application to study and apply color-space and skin-detection techniques in practice. First, the built-in camera is opened and the incoming video is decoded and processed at about five frames per second, which is sufficient for near-real-time operation. We then perform color detection: we begin with black-and-white detection in the basic RGB color space, and then move on to skin-color detection using the HSV, YIQ, and YCbCr color spaces. These techniques require converting each frame with the corresponding color-space formulas, some of which involve more complex trigonometric functions, but they also increase detection accuracy.
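The YCbCr skin-detection step described in the abstract above is commonly implemented by converting RGB to chrominance and thresholding. A sketch using widely quoted Cb/Cr bounds (not necessarily the thesis's exact thresholds; `skin_mask_ycbcr` is a hypothetical name):

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Return a boolean skin mask for an (H, W, 3) uint8 RGB image.

    Converts to YCbCr (ITU-R BT.601 coefficients, 8-bit offset form)
    and thresholds the chrominance channels. The Cb/Cr bounds below
    are commonly cited skin ranges, used here for illustration.
    """
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

Luminance (Y) is deliberately ignored, which is what makes chrominance-based skin detection fairly robust to brightness changes.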
Lin, Wei-Xian, and 林威憲. "Video Processing for an Object-Based Virtual Video Conferencing". Thesis, 2000. http://ndltd.ncl.edu.tw/handle/04899495861925372387.
Full text
國立清華大學 (National Tsing Hua University)
電機工程學系 (Department of Electrical Engineering)
88
In a general multipoint video conference, the participants focus on the other participants rather than on the background, and they pay more attention to the active conferees. We therefore propose a bit-allocation method that distributes the constrained bit rate among the objects according to their characteristics. From the rate and distortion models proposed in ITU-T TMN8, we derive an object-based rate-distortion-optimized bit-allocation equation that considers only spatial activity. We then introduce temporal activity and object size into the equation to derive a joint bit-allocation method. Simulation results show that the image quality of the objects of interest improves while the image quality of the less interesting objects degrades. This thesis also proposes two multipoint virtual-conferencing environments that provide a more realistic conference: conferencing with a two-dimensional virtual scene and conferencing with a three-dimensional virtual environment. In the two-dimensional case, the objects are resized and composited onto a pre-designed virtual scene. In the three-dimensional case, the objects are placed in a three-dimensional environment according to their specified locations. An object-segmentation procedure is also proposed to separate the objects from the video sequence using background subtraction, morphological operations, and a row-column scan method.
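The joint bit-allocation idea above — weighting each object's share of the constrained bit rate by its spatial activity, temporal activity, and size — can be illustrated with a deliberately simplified proportional scheme. The thesis derives its allocation from the TMN8 rate-distortion model; the multiplicative weighting and the `allocate_bits` helper below are illustrative assumptions only:

```python
def allocate_bits(total_bits, objects):
    """Split a frame's bit budget across video objects.

    objects: list of dicts with 'spatial', 'temporal', and 'size'
    activity measures. Each object's weight is the product of the
    three factors, and bits are allocated proportionally, so more
    active / larger objects (e.g. the speaking conferee) receive a
    larger share of the constrained bit rate.
    """
    weights = [o["spatial"] * o["temporal"] * o["size"] for o in objects]
    total_weight = sum(weights)
    return [total_bits * w / total_weight for w in weights]
```

With two objects where one is four times as spatially active and twice as temporally active, the active object receives eight times the bits of the static one, matching the intended behaviour: quality of interest improves at the expense of the less interesting regions.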