Academic literature on the topic 'Video tapes – editing – data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Video tapes – editing – data processing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Video tapes – editing – data processing"

1

Reddy, D. Akash, T. Venkat Raju, and V. Shashank. "Audio Assistant Based Image Captioning System Using RLSTM and CNN." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 1864–67. http://dx.doi.org/10.22214/ijraset.2022.44289.

Full text
Abstract:
Visually impaired or partially sighted people face many problems reading or identifying their local surroundings. To overcome this, we developed an audio-based image captioner that identifies the objects in an image, forms a meaningful sentence, and delivers the output in aural form. Image processing is a widely used method for developing many new applications, and the tools involved are open source, so developers can use them easily. We used natural language processing (NLP) to understand the description of an image and convert the text to speech. A combination of R-LSTM and CNN is used: a reference-based long short-term memory that matches different text data, takes it as a reference, and produces the output. Other applications of image captioning include social media platforms such as Instagram, virtual assistants, and video editing software.
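The encoder-decoder pipeline the abstract describes (CNN image features feeding an LSTM-style word-by-word decoder, with the sentence finally spoken aloud) can be sketched with stub components. Everything below is an illustrative stand-in, not the authors' RLSTM/CNN code: `encode_image` and `next_word_scores` replace trained networks, and a real system would end by handing the sentence to a TTS engine.

```python
# Hypothetical sketch of an encoder-decoder captioning loop with audio output.
# encode_image and next_word_scores stand in for a CNN encoder and an
# (R-)LSTM decoder; a real system would use trained networks and a TTS engine.

VOCAB = ["<start>", "a", "dog", "runs", "<end>"]

def encode_image(pixels):
    # Stub CNN: summarize the image as a single feature (mean intensity).
    return sum(pixels) / len(pixels)

def next_word_scores(feature, prev_word):
    # Stub decoder: a fixed transition table standing in for LSTM weights.
    table = {"<start>": "a", "a": "dog", "dog": "runs", "runs": "<end>"}
    return {w: (1.0 if w == table[prev_word] else 0.0) for w in VOCAB[1:]}

def caption(pixels, max_len=10):
    feature, words, prev = encode_image(pixels), [], "<start>"
    for _ in range(max_len):
        scores = next_word_scores(feature, prev)
        nxt = max(scores, key=scores.get)  # greedy decoding
        if nxt == "<end>":
            break
        words.append(nxt)
        prev = nxt
    return " ".join(words)

sentence = caption([0.2, 0.4, 0.6])
print(sentence)  # the sentence would then be sent to a text-to-speech engine
```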
APA, Harvard, Vancouver, ISO, and other styles
2

Lu, Hengyang, Jiabing Li, Melisa A. Martinez-Paniagua, Irfan N. Bandey, Amit Amritkar, Harjeet Singh, David Mayerich, Navin Varadarajan, and Badrinath Roysam. "TIMING 2.0: high-throughput single-cell profiling of dynamic cell–cell interactions by time-lapse imaging microscopy in nanowell grids." Bioinformatics 35, no. 4 (August 1, 2018): 706–8. http://dx.doi.org/10.1093/bioinformatics/bty676.

Full text
Abstract:
Motivation: Automated profiling of cell–cell interactions from high-throughput time-lapse imaging microscopy data of cells in nanowell grids (TIMING) has led to fundamental insights into cell–cell interactions in immunotherapy. This application note aims to enable widespread adoption of TIMING by (i) enabling the computations to occur on a desktop computer with a graphical processing unit instead of a server; (ii) enabling image acquisition and analysis to occur in the laboratory, avoiding network data transfers to/from a server; and (iii) providing a comprehensive graphical user interface.
Results: On a desktop computer, TIMING 2.0 takes 5 s/block/image frame, four times faster than our previous method on the same computer, and twice as fast as our previous method (TIMING) running on a Dell PowerEdge server. The cell segmentation accuracy (f-number = 0.993) is superior to our previous method (f-number = 0.821). A graphical user interface provides the ability to inspect the video analysis results, make corrective edits efficiently (one-click editing of an entire nanowell video sequence in 5–10 s) and display a summary of the cell killing efficacy measurements.
Availability and implementation: Open source Python software (GPL v3 license), instruction manual, sample data and sample results are included with the Supplement (https://github.com/RoysamLab/TIMING2).
Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
3

Soedarso, Nick. "Mengolah Data Video Analog Menjadi Video Digital Sederhana." Humaniora 1, no. 2 (October 31, 2010): 569. http://dx.doi.org/10.21512/humaniora.v1i2.2897.

Full text
Abstract:
Nowadays, editing technology has entered the digital age. Converting analog data into digital form has become simpler as editing technology has been integrated into all aspects of society. Understanding the technique of converting analog to digital data is important in producing a video. To use this technology, an introduction to the equipment is fundamental to understanding its features. The next phase is the capturing process, which supports the preparation for editing from scene to scene; the result is a watchable video.
APA, Harvard, Vancouver, ISO, and other styles
4

Kirk, R. L., E. Howington-Kraus, K. Edmundson, B. Redding, D. Galuszka, T. Hare, and K. Gwinner. "COMMUNITY TOOLS FOR CARTOGRAPHIC AND PHOTOGRAMMETRIC PROCESSING OF MARS EXPRESS HRSC IMAGES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W1 (July 25, 2017): 69–76. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w1-69-2017.

Full text
Abstract:
The High Resolution Stereo Camera (HRSC) on the Mars Express orbiter (Neukum et al. 2004) is a multi-line pushbroom scanner that can obtain stereo and color coverage of targets in a single overpass, with pixel scales as small as 10 m at periapsis. Since commencing operations in 2004 it has imaged ~77% of Mars at 20 m/pixel or better. The instrument team uses the Video Image Communication And Retrieval (VICAR) software to produce and archive a range of data products, from uncalibrated and radiometrically calibrated images to controlled digital topographic models (DTMs), orthoimages, and regional mosaics of DTM and orthophoto data (Gwinner et al. 2009; 2010b; 2016). Alternatives to this highly effective standard processing pipeline are nevertheless of interest to researchers who do not have access to the full VICAR suite and may wish to make topographic products or perform other (e.g., spectrophotometric) analyses prior to the release of the highest-level products. We have therefore developed software to ingest HRSC images and model their geometry in the USGS Integrated Software for Imagers and Spectrometers (ISIS3), which can be used for data preparation, geodetic control, and analysis, and in the commercial photogrammetric software SOCET SET (® BAE Systems; Miller and Walker 1993; 1995), which can be used for independent production of DTMs and orthoimages.

The initial implementation of this capability utilized the then-current ISIS2 system and the generic pushbroom sensor model of SOCET SET, and was described in the DTM comparison of independent photogrammetric processing by different elements of the HRSC team (Heipke et al. 2007). A major drawback of this prototype was that neither software system then allowed for pushbroom images in which the exposure time changes from line to line. Except at periapsis, HRSC makes such timing changes every few hundred lines to accommodate changes of altitude and velocity in its elliptical orbit. As a result, it was necessary to split observations into blocks of constant exposure time, greatly increasing the effort needed to control the images and collect DTMs.

Here, we describe a substantially improved HRSC processing capability that incorporates sensor models with varying line timing in the current ISIS3 system (Sides 2017) and SOCET SET. This enormously reduces the work effort for processing most images and eliminates the artifacts that arose from segmenting them. In addition, the software takes advantage of the continuously evolving capabilities of ISIS3 and the improved image matching module NGATE (Next Generation Automatic Terrain Extraction, incorporating area- and feature-based algorithms with multi-image and multi-direction matching) of SOCET SET, thus greatly reducing the need for manual editing of DTM errors. We have also developed a procedure for geodetically controlling the images to Mars Orbiter Laser Altimeter (MOLA) data by registering a preliminary stereo topographic model to MOLA with the point cloud alignment (pc_align) function of the NASA Ames Stereo Pipeline (ASP; Moratto et al. 2010). This effectively converts inter-image tiepoints into ground control points in the MOLA coordinate system. The result is improved absolute accuracy and a significant reduction in work effort relative to manual measurement of ground control. The ISIS and ASP software used are freely available; SOCET SET is a commercial product. By the end of 2017 we expect to have ported our SOCET SET HRSC sensor model to the Community Sensor Model (CSM; Community Sensor Model Working Group 2010; Hare and Kirk 2017) standard utilized by the successor photogrammetric system SOCET GXP that is currently offered by BAE. In early 2018, we are also working with BAE to release the CSM source code under a BSD or MIT open source license.

We illustrate current HRSC processing capabilities with three examples, of which the first two come from the DTM comparison of 2007. Candor Chasma (h1235_0001) was a near-periapsis observation with constant exposure time that could be processed relatively easily at that time. We show qualitative and quantitative improvements in DTM resolution and precision as well as a greatly reduced need for manual editing, and illustrate some of the photometric applications possible in ISIS. At the Nanedi Valles site we are now able to process all three long-arc orbits (h0894_0000, h0905_0000 and h0927_0000) without segmenting the images. Finally, processing image set h4235_0001, which covers the landing site of the Mars Science Laboratory (MSL) rover and its rugged science target of Aeolus Mons in Gale crater, provides a rare opportunity to evaluate DTM resolution and precision because extensive High Resolution Imaging Science Experiment (HiRISE) DTMs are available (Golombek et al. 2012). The HiRISE products have a ~50x smaller pixel scale, so discrepancies can mostly be attributed to HRSC. We use the HiRISE DTMs to compare the resolution and precision of our HRSC DTMs with the (evolving) standard products.

We find that the vertical precision of HRSC DTMs is comparable to the pixel scale, but the horizontal resolution may be 15–30 image pixels, depending on processing. This is significantly coarser than the lower limit of 3–5 pixels based on the minimum size of image patches to be matched. Stereo DTMs registered to MOLA altimetry by surface fitting typically deviate by 10 m or less in mean elevation. Estimates of the RMS deviation are strongly influenced by the sparse sampling of the altimetry, but range from <50 m in flat areas to ~100 m in rugged areas.
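In its simplest special case, the MOLA registration step described above (aligning a preliminary stereo DTM to sparse altimeter hits so that tiepoints become ground control) reduces to solving for a single elevation bias. The NumPy sketch below shows only that special case; the real pc_align tool solves a full 3D rigid alignment, and the tiny grid and elevations here are invented for illustration.

```python
# Simplified sketch of registering a stereo DTM to sparse altimetry, assuming
# the residual error is a pure vertical bias (pc_align solves the full 3D
# rigid alignment; this illustrates only the elevation-offset special case).
import numpy as np

def vertical_offset(dtm, altimeter_rows, altimeter_cols, altimeter_z):
    """Least-squares vertical shift that best aligns the DTM to altimeter hits."""
    sampled = dtm[altimeter_rows, altimeter_cols]
    # For a pure bias, the least-squares solution is the mean residual.
    return float(np.mean(altimeter_z - sampled))

dtm = np.array([[10.0, 12.0], [14.0, 16.0]])     # toy DTM elevations
rows, cols = np.array([0, 1]), np.array([1, 0])  # pixels hit by the altimeter
mola_z = np.array([15.0, 17.0])                  # altimeter elevations there
shift = vertical_offset(dtm, rows, cols, mola_z)
print(shift)  # 3.0
```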
APA, Harvard, Vancouver, ISO, and other styles
5

Gupta, Rajeev, Jon D. Fricker, and David P. Moffett. "Reduction of Video License Plate Data." Transportation Research Record: Journal of the Transportation Research Board 1804, no. 1 (January 2002): 31–38. http://dx.doi.org/10.3141/1804-05.

Full text
Abstract:
Video license plate surveys have been used for more than a decade in Indiana to help produce origin-destination tables in corridors and small areas. In video license plate surveys, license plate images are captured on videotape for data reduction at the analyst’s office. In most cases, the letters and numbers on a license plate are manually transcribed to a data file. This manual process is tedious, time-consuming, and expensive. Although automated license plate readers are being implemented with success elsewhere, their dependence on high-end equipment makes them too expensive for most applications in Indiana. Presented are the results of an attempt to use standard video cameras and tapes, readily available video processing equipment, and open-source software to minimize the human role in the data reduction process and thus reduce the expenses involved. The process of automatically transcribing video data can be divided into subprocesses. Analog video data are digitized and stored on a computer hard disk. The resulting digital images are further processed, by using image-processing algorithms, to locate and extract the license plate and time stamp information. Character recognition techniques can then be applied to read the license plate number into an electronic file for the desired analysis. The described video license plate data reduction (VLPDR) software can identify video frames that contain vehicles and discard the remaining frames. VLPDR can locate and read the time stamps in most of these frames. Although VLPDR cannot read the license plate numbers into a data file, this final step is made easier by a user-friendly graphical user interface. VLPDR saves a significant amount of manual data reduction. The amount of labor saved depends on the parameters chosen by the user.
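The first VLPDR subprocess, identifying which digitized frames contain vehicles and discarding the rest, can be sketched with simple background differencing. The threshold and toy frames below are illustrative only, not the paper's actual algorithm or parameters.

```python
# Minimal sketch of the VLPDR frame-screening step: discard frames whose
# difference from a background frame is too small to contain a vehicle.
# The threshold value is invented for illustration, not taken from the paper.
import numpy as np

def frames_with_vehicles(frames, background, threshold=10.0):
    """Return indices of frames whose mean absolute difference from the
    background exceeds the threshold."""
    keep = []
    for i, frame in enumerate(frames):
        diff = np.mean(np.abs(frame.astype(float) - background.astype(float)))
        if diff > threshold:
            keep.append(i)
    return keep

background = np.zeros((4, 4), dtype=np.uint8)
empty = background.copy()
vehicle = np.full((4, 4), 80, dtype=np.uint8)  # bright blob standing in for a car
print(frames_with_vehicles([empty, vehicle, empty], background))  # [1]
```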
APA, Harvard, Vancouver, ISO, and other styles
6

Hufri, Hufri. "IMPROVEMENT OF TEACHER CAPABILITY IN SOLOK SELATAN DISTRICT IN DEVELOPING MULTIMEDIA THROUGH TRAINING VIDEO EDITING." Pelita Eksakta 2, no. 1 (July 5, 2019): 41. http://dx.doi.org/10.24036/pelitaeksakta/vol2-iss1/63.

Full text
Abstract:
Education in South Solok District still lags behind in the South Sumatra region: its 2016 national examination (UN) ranking was 16th out of 19 cities/districts. One reason is that teachers make little use of video processing software, so many classroom phenomena cannot be used optimally. This community service program (PKM) aims to improve teachers' ability to develop multimedia through text and video editing. The method comprised stages of interaction and discussion, training, and development, followed by monitoring and, finally, data analysis and report preparation. The instruments were a test of multimedia and video editing skills and questionnaires on the implementation of the PKM activities. Data analysis gave a pretest average of 14.42 and a posttest average of 17.21. A paired t-test then yielded a significance value of 0.000 < 0.05, supporting the conclusion that this PKM activity improved the ability of the district's junior school teachers to develop learning videos. A questionnaire given at the end of the training showed that participants were very pleased with and engaged in learning at school.
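The paired t-test used above to compare pretest and posttest averages can be reproduced from its textbook formula, t = d̄ / (s_d / √n). The scores below are invented for illustration and are not the study's data.

```python
# Paired t-test from first principles, on illustrative (not the study's) data.
import math

def paired_t(pre, post):
    n = len(pre)
    d = [b - a for a, b in zip(pre, post)]            # per-subject differences
    d_bar = sum(d) / n                                # mean difference
    s_d = math.sqrt(sum((x - d_bar) ** 2 for x in d) / (n - 1))  # sample SD
    return d_bar / (s_d / math.sqrt(n))

pre  = [12, 14, 15, 13, 16]   # illustrative pretest scores
post = [15, 17, 18, 15, 19]   # illustrative posttest scores
t = paired_t(pre, post)
print(round(t, 3))  # 14.0
```

The statistic would then be compared against the t distribution with n-1 degrees of freedom to obtain the significance value.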
APA, Harvard, Vancouver, ISO, and other styles
7

Debski, R., O. Schmitt, P. Trenz, M. Reimann, J. Döllner, M. Trapp, A. Semmo, and S. Pasewaldt. "A Framework for Art-directed Augmentation of Human Motion in Videos on Mobile Devices." Journal of WSCG 31, no. 1-2 (July 2023): 80–90. http://dx.doi.org/10.24132/jwscg.2023.9.

Full text
Abstract:
This paper presents a framework and mobile video editing app for interactive artistic augmentation of human motion in videos. While creating motion effects with industry-standard software is time-intensive and requires expertise, and popular video effect apps have limited customization options, our approach enables a multitude of art-directable, highly customizable motion effects. We propose a graph-based video processing framework that uses mobile-optimized machine learning models for human segmentation and pose estimation to augment RGB video data, enabling the rendering and animation of content-adaptive graphical elements that highlight and emphasize motion. Our modular framework architecture enables effect designers to create diverse motion effects that include body pose-based effects such as glow stick or light trail effects, silhouette-based effects such as halos and outlines, and layer-based effects that provide depth perception and enable interaction with virtual objects.
APA, Harvard, Vancouver, ISO, and other styles
8

Ariyanti, Maya, and Yumna Tazkia. "UNDERSTANDING SERVICE QUALITY OF MOBILE VIDEO EDITING : MAPPING THE NEGATIVE IMPRESSION BY TEXT MINING APPROACH." Jurnal Ilmiah Manajemen, Ekonomi, & Akuntansi (MEA) 8, no. 2 (July 17, 2024): 1952–71. http://dx.doi.org/10.31955/mea.v8i2.4256.

Full text
Abstract:
KineMaster is a video editing application that supports the content creator industry; however, compared to its competitors, the app falls short in release year, download numbers, and ratings. This research aims to determine the service quality of the Android-based KineMaster application based on sentiment analysis and the classification of mobile app service quality (MASQ) dimensions. The data used are secondary data from 5,000 Google Play Store reviews, collected using Google Colab and processed using RapidMiner Studio version 10.2. Naïve Bayes and k-Nearest Neighbors (KNN) algorithms are applied to determine the better of the two. Negative-sentiment data from the worst-classified MASQ dimensions are then visualized as a word cloud using Google Colab to determine complaint priorities. The results show that positive sentiment dominates at 62.24%, with KNN as the best-performing algorithm in this research. Nevertheless, the 37.76% negative sentiment is not ignored. Based on the number of negative sentiments in each dimension, technical reliability is the worst dimension, valence the second worst, and performance the third. Prioritized complaints concern update reliability, watermarks, the app itself, feature downloads, inability to open the app, export capabilities, high price, and processing speed.
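One of the two algorithms compared, Naïve Bayes, can be sketched for review sentiment in a few lines of standard-library Python. The training reviews below are invented for illustration, and the study itself used RapidMiner rather than hand-written code.

```python
# Minimal multinomial Naive Bayes for review sentiment (illustrative only).
import math
from collections import Counter

def train(docs):
    counts = {"pos": Counter(), "neg": Counter()}
    labels = Counter()
    for text, label in docs:
        labels[label] += 1
        counts[label].update(text.split())
    return counts, labels

def classify(text, counts, labels):
    vocab = set(counts["pos"]) | set(counts["neg"])
    best, best_lp = None, -math.inf
    for label in labels:
        lp = math.log(labels[label] / sum(labels.values()))  # log prior
        total = sum(counts[label].values())
        for w in text.split():
            # Laplace smoothing over the joint vocabulary
            lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("great app love it", "pos"), ("easy export fast", "pos"),
        ("crash watermark bad", "neg"), ("update broke export", "neg")]
counts, labels = train(docs)
print(classify("love this fast app", counts, labels))  # pos
```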
APA, Harvard, Vancouver, ISO, and other styles
9

Ijemaru, Gerald K., Augustine O. Nwajana, Emmanuel U. Oleka, Richard I. Otuka, Isibor K. Ihianle, Solomon H. Ebenuwa, and Emenike Raymond Obi. "Image processing system using MATLAB-based analytics." Bulletin of Electrical Engineering and Informatics 10, no. 5 (October 1, 2021): 2566–77. http://dx.doi.org/10.11591/eei.v10i5.3160.

Full text
Abstract:
Owing to recent technological advancement, computers and other devices running image editing applications can be further exploited for digital image processing operations. This paper evaluates various image processing techniques using matrix laboratory (MATLAB)-based analytics. Compared to conventional techniques, MATLAB offers several advantages for image processing: easy debugging with extensive data analysis and visualization, and easy implementation and algorithmic testing without recompilation. Besides, MATLAB's computational codes can be enhanced and exploited to process and create simulations of both still and video images. Moreover, MATLAB code is much more concise than C++, making it easier to read and troubleshoot, and MATLAB can catch errors prior to execution while proposing various ways to make the code faster. The proposed technique enables advanced image processing operations such as image cropping/resizing, image denoising, blur removal, and image sharpening. The study aims at providing readers with the most recent MATLAB-based image processing application tools. We also provide an empirically based method using the two-dimensional discrete cosine transform (2D-DCT) and its coefficients. Using the most recent algorithms in the MATLAB toolbox, we performed simulations to evaluate the performance of the proposed technique. The results largely present MATLAB as a veritable approach for image processing operations.
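The two-dimensional DCT underlying the paper's empirical method (what MATLAB exposes as dct2) can be written directly from the orthonormal DCT-II definition. This is a generic textbook implementation in NumPy, not the authors' MATLAB code.

```python
# 2-D DCT-II via the orthonormal basis matrix: coeffs = C @ block @ C.T
import numpy as np

def dct2(block):
    n = block.shape[0]
    k = np.arange(n)
    # C[i, j] = sqrt(2/n) * cos(pi * (2j + 1) * i / (2n)), row 0 rescaled
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

flat = np.full((4, 4), 5.0)      # constant block: all energy goes to DC
coeffs = dct2(flat)
print(round(coeffs[0, 0], 6))    # DC term = n * value = 20.0
print(bool(np.abs(coeffs[1:, :]).max() < 1e-9))  # all AC rows vanish
```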
APA, Harvard, Vancouver, ISO, and other styles
10

Hagen, Christina S., Leila Bighash, Andrea B. Hollingshead, Sonia Jawaid Shaikh, and Kristen S. Alexander. "Why are you watching? Video surveillance in organizations." Corporate Communications: An International Journal 23, no. 2 (April 3, 2018): 274–91. http://dx.doi.org/10.1108/ccij-04-2017-0043.

Full text
Abstract:
Purpose: Organizations and their actors are increasingly using video surveillance to monitor organizational members, employees, clients, and customers. The use of such technologies in workplaces creates a virtual panopticon and increases uncertainty for those under surveillance. Video surveillance in organizations poses several concerns for the privacy of individuals and creates a security-privacy dilemma for organizations to address. The purpose of this paper is to offer a decision-making model that ties in ethical considerations of access, equality, and transparency at four stages of video surveillance use in organizations: deployment of cameras and equipment, capturing footage, processing and storing data, and editing and sharing video footage. At each stage, organizational actors should clearly identify the purpose for video surveillance, adopt a minimum capability necessary to achieve their goals, and communicate decisions made and actions taken that involve video surveillance in order to reduce uncertainty and address privacy concerns of those being surveilled.
Design/methodology/approach: The paper proposes a normative model for ethical video surveillance organizational decision making based on a review of relevant literature and recent events.
Findings: The paper provides several implications for the future of dealing with security-privacy dilemmas in organizations and offers structured considerations for corporation leaders and decision makers.
Practical implications: The paper includes implications for organizations to approach video surveillance with ethical considerations for stakeholder privacy while balancing security demands.
Originality/value: This paper offers a framework for decision-makers that also offers opportunities for further research around the concept of ethics in organizational video surveillance.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Video tapes – editing – data processing"

1

Mohapatra, Deepankar. "Automatic Removal of Complex Shadows From Indoor Videos." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804942/.

Full text
Abstract:
Shadows in indoor scenarios are usually characterized by multiple light sources that produce complex shadow patterns of a single object. Without shadow removal, the foreground object tends to be erroneously segmented. The inconsistent hue and intensity of shadows make automatic removal a challenging task. In this thesis, a dynamic thresholding and transfer learning-based method for removing shadows is proposed. The method suppresses light shadows with a dynamically computed threshold and removes dark shadows using an online learning strategy that is built upon a base classifier trained with manually annotated examples and refined with automatically identified examples in new videos. Experimental results demonstrate that, despite variation in lighting conditions across videos, the proposed method is able to adapt and remove shadows effectively. The sensitivity of shadow detection changes slightly with the confidence level used in example selection for classifier retraining; a high confidence level usually yields better performance with fewer retraining iterations.
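The dynamic-thresholding idea for suppressing light shadows can be illustrated with background differencing in which the threshold is computed from the statistics of the darkened pixels rather than fixed in advance. The particular rule and constants below are invented for illustration and are not the thesis's actual method.

```python
# Illustrative dynamic-threshold shadow suppression: pixels darker than the
# background by less than a data-driven threshold are treated as shadow,
# not foreground. The threshold rule here is a made-up stand-in.
import numpy as np

def foreground_mask(frame, background, k=0.5):
    diff = background.astype(float) - frame.astype(float)  # shadows dim the scene
    darkened = diff[diff > 0]
    # Dynamic threshold from the statistics of the darkened pixels
    thresh = darkened.mean() + k * darkened.std() if darkened.size else 0.0
    return diff > thresh

background = np.full((2, 3), 100.0)
frame = background.copy()
frame[0, 0] -= 90.0   # true foreground object (large intensity drop)
frame[0, 1] -= 10.0   # light shadow (small intensity drop)
mask = foreground_mask(frame, background)
print(mask)           # only the large drop survives as foreground
```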
APA, Harvard, Vancouver, ISO, and other styles
2

"Efficient image/video restyling and collage on GPU." 2013. http://library.cuhk.edu.hk/record=b5549735.

Full text
Abstract:
Image/video restyling, as an expressive way of producing user-customized appearances, has received much attention in creative media research. In interactive design it is powerful to re-render the stylized presentation of objects of interest virtually using computer-aided design tools for retexturing, especially in image space with a single image or video as input. Current retexturing methods mostly handle texture distortion by manipulating inter-pixel distances in image space, so the underlying texture distortion is often destroyed: existing methods either suffer improper distortion caused by manual mesh stretching or unavoidable texture splitting caused by texture synthesis. Image/video collage techniques were invented to allow parallel presentation of multiple objects and events on a display canvas. With the rapid development of digital video capture devices, a related issue is how to quickly review and brief such large visual media datasets to find interesting video material. It is a tedious task to investigate long, monotonous surveillance videos and grasp the essential information quickly. By applying key information and shortened video forms as vehicles for communication, video abstraction and summarization are the means to enhance browsing efficiency and easy understanding of visual media datasets.
In this thesis, we first focus our image/video restyling work on efficient retexturing and stylization. We present an interactive retexturing method that preserves similar texture distortion without knowing the underlying geometry and lighting environment. We utilize SIFT corner features to naturally discover the underlying texture distortion, and apply gradient depth recovery and wrinkle stress optimization to accomplish the distortion process. We facilitate interactive retexturing via real-time bilateral grids and feature-guided distortion optimization using GPU-CUDA parallelism. Video retexturing is achieved through a keyframe-based texture transferring strategy using accurate TV-L¹ optical flow with patch motion tracking in real time. Further, we work on GPU-based abstract stylization that preserves the fine structure of the original images using gradient optimization. We propose an image structure map to naturally distill the fine structure of the original images; gradient-based tangent generation and tangent-guided morphology are applied to build the structure map. We facilitate the final stylization via parallel bilateral grids and structure-aware stylizing in real time on GPU-CUDA. In the experiments, our proposed methods consistently demonstrate high-quality image/video abstract restyling in real time.
Currently, in video abstraction, video collages are mostly produced as static keyframe-based collage pictures, which contain limited information about the dynamic videos and greatly influence the understanding of visual media datasets. We present dynamic video collage, which effectively summarizes condensed dynamic activities in parallel on the canvas for easy browsing. We propose to utilize activity cuboids to reorganize and extract dynamic objects for further collaging, with video stabilization performed to generate stabilized activity cuboids, and spatial-temporal optimization carried out to optimize the positions of activity cuboids in the 3D collage space. We facilitate efficient dynamic collage via event similarity and moving-relationship optimization on the GPU, allowing multi-video inputs. Our video collage approach with kernel-reordering CUDA processing enables dynamic summaries for easy browsing of long videos, while saving huge memory space for storing and transmitting them. The experiments and user study have shown the efficiency and usefulness of our dynamic video collage, which can be widely applied to video briefing and summary applications. In the future, we will extend the interactive retexturing to more complicated general video applications with large motion and occluded scenes while avoiding texture flickering. We will also work on new approaches to make video retexturing more stable, drawing inspiration from the latest video processing techniques. Our future work on video collage includes investigating applications of dynamic collage in the surveillance industry, and handling moving-camera and general videos, which may contain large amounts of camera motion and different types of shot transitions.
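The keyframe-based texture transfer step can be illustrated at its core: given a per-pixel flow from the current frame back to the keyframe, the edited texture is pulled forward by backward warping. The nearest-neighbor sketch below is a toy version of that one step; the thesis itself uses TV-L¹ optical flow and GPU bilateral grids.

```python
# Toy backward warping for keyframe-based texture transfer (nearest neighbor).
import numpy as np

def warp_backward(texture, flow_r, flow_c):
    """texture: HxW keyframe; flow_r/flow_c give, for each output pixel,
    the (row, col) location to sample from in the keyframe."""
    h, w = texture.shape
    rows = np.clip(np.round(flow_r).astype(int), 0, h - 1)
    cols = np.clip(np.round(flow_c).astype(int), 0, w - 1)
    return texture[rows, cols]

tex = np.arange(9).reshape(3, 3)
rr, cc = np.meshgrid(np.arange(3), np.arange(3), indexing="ij")
shifted = warp_backward(tex, rr, cc - 1)  # whole texture moved one pixel right
print(shifted)
```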
Detailed summary in vernacular field only.
Li, Ping.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2013.
Includes bibliographical references (leaves 109-121).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts also in Chinese.
Abstract
Acknowledgements
Chapter 1: Introduction
1.1 Background
1.2 Main Contributions
1.3 Thesis Overview
Chapter 2: Efficient Image/video Retexturing
2.1 Introduction
2.2 Related Work
2.3 Image/video Retexturing on GPU
2.3.1 Wrinkle Stress Optimization
2.3.2 Efficient Video Retexturing
2.3.3 Interactive Parallel Retexturing
2.4 Results and Discussion
2.5 Chapter Summary
Chapter 3: Structure-Aware Image Stylization
3.1 Introduction
3.2 Related Work
3.3 Structure-Aware Stylization
3.3.1 Approach Overview
3.3.2 Gradient-Based Tangent Generation
3.3.3 Tangent-Guided Image Morphology
3.3.4 Structure-Aware Optimization
3.3.5 GPU-Accelerated Stylization
3.4 Results and Discussion
3.5 Chapter Summary
Chapter 4: Dynamic Video Collage
4.1 Introduction
4.2 Related Work
4.3 Dynamic Video Collage on GPU
4.3.1 Activity Cuboid Generation
4.3.2 Spatial-Temporal Optimization
4.3.3 GPU-Accelerated Parallel Collage
4.4 Results and Discussion
4.5 Chapter Summary
Chapter 5: Conclusion
5.1 Research Summary
5.2 Future Work
Appendix A: Publication List
Bibliography
APA, Harvard, Vancouver, ISO, and other styles
3

Kesireddy, Akitha. "A new adaptive trilateral filter for in-loop filtering." Thesis, 2014. http://hdl.handle.net/1805/5927.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
HEVC has achieved significant coding efficiency improvement beyond existing video coding standards by employing many new coding tools. The Deblocking Filter, Sample Adaptive Offset, and Adaptive Loop Filter are currently introduced for in-loop filtering in the HEVC standardization. However, these filters are implemented in the spatial domain despite the temporal correlation within video sequences. To reduce artifacts and better align object boundaries in video, a new in-loop filtering algorithm is proposed and implemented in the HM-11.0 software. The proposed algorithm achieves an average bitrate reduction of about 0.7% and improves the PSNR of the decoded frame by 0.05%, 0.30%, and 0.35% in luminance and chroma.
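As background for the proposed trilateral filter, which extends bilateral filtering with a temporal term, a plain spatial bilateral filter weighs each neighbor by both spatial closeness and range (value) similarity, smoothing flat regions while preserving edges. The 1-D NumPy sketch below is generic background, not the thesis's adaptive trilateral filter.

```python
# Plain 1-D bilateral filter: Gaussian spatial weight times Gaussian range weight.
import numpy as np

def bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=10.0):
    out = np.empty(len(signal), dtype=float)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))                 # spatial closeness
             * np.exp(-((signal[idx] - signal[i]) ** 2) / (2 * sigma_r ** 2)))  # range similarity
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

step = np.array([0.0, 0.0, 0.0, 100.0, 100.0, 100.0])
smooth = bilateral_1d(step)
print(smooth)  # the step edge between indices 2 and 3 is preserved
```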

Books on the topic "Video tapes – editing – data processing"

1

Cope, Peter. Digital video and PC editing. London: Teach Yourself, 2003.

Find full text
2

Grebler, Ron. Desktop digital video. Indianapolis, IN: Prompt Publications, 1997.

Find full text
3

Johnson, Nels. How to digitize video. New York: Wiley & Sons, 1994.

Find full text
4

Benford, Tom. Introducing desktop video. New York, NY: MIS Press, 1995.

Find full text
5

Ohanian, Thomas A. Digital nonlinear editing: Editing film and video on the desktop. 2nd ed. Boston: Focal Press, 1998.

Find full text
6

Ohanian, Thomas A. Digital nonlinear editing: New approaches to editing film and video. Boston: Focal Press, 1993.

Find full text
7

James, Jack. Fix it in post: Solutions for postproduction problems. Amsterdam: Focal Press, 2009.

Find full text
8

Wolsky, Tom. Final Cut Pro 5 Editing Essentials. Burlington: Elsevier, 2005.

Find full text
9

Rubin, Michael. Beginner's Final Cut Pro: Learn to edit digital video. Berkeley, CA: Peachpit Press, 2003.

Find full text
10

Wolsky, Tom. Final Cut Pro 3 editing workshop. 2nd ed. Lawrence, Kan.: CMP Books, 2002.

Find full text

Book chapters on the topic "Video tapes – editing – data processing"

1

Naskar, Ruchira, Pankaj Malviya, and Rajat Subhra Chakraborty. "Digital Forensics." In Biometrics, 1769–87. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-0983-7.ch074.

Full text
Abstract:
Digital forensics deals with cyber crime detection from digital multimedia data. In the present day, multimedia data such as images and videos are major sources of evidence in courts of law worldwide. However, the immense proliferation and easy availability of low-cost or free, user-friendly, and powerful image and video processing software pose the largest threat to today's digital world as well as the legal industry. This is because such software allows efficient image and video editing, manipulation, and synthesis with a few mouse clicks, even by a novice user. Such software also enables the creation of realistic computer-generated images. In this chapter, we discuss different types of digital image forgeries and state-of-the-art digital forensic techniques to detect them. Through these discussions, we also give an idea of the challenges and open problems in the field of digital forensics.
2

Naskar, Ruchira, Pankaj Malviya, and Rajat Subhra Chakraborty. "Digital Forensics." In Innovative Research in Attention Modeling and Computer Vision Applications, 260–78. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-8723-3.ch010.

Full text
Abstract:
Digital forensics deals with cyber crime detection from digital multimedia data. In the present day, multimedia data such as images and videos are major sources of evidence in courts of law worldwide. However, the immense proliferation and easy availability of low-cost or free, user-friendly, and powerful image and video processing software pose the largest threat to today's digital world as well as the legal industry. This is because such software allows efficient image and video editing, manipulation, and synthesis with a few mouse clicks, even by a novice user. Such software also enables the creation of realistic computer-generated images. In this chapter, we discuss different types of digital image forgeries and state-of-the-art digital forensic techniques to detect them. Through these discussions, we also give an idea of the challenges and open problems in the field of digital forensics.
3

Li, Qing, Yi Zhuang, Jun Yang, and Yueting Zhuang. "Multimedia Information Retrieval at a Crossroad." In Encyclopedia of Multimedia Technology and Networking, Second Edition, 986–94. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch134.

Full text
Abstract:
From late 1990s to early 2000s, the availability of powerful computing capability, large storage devices, high-speed networking, and especially the advent of the Internet, led to a phenomenal growth of digital multimedia content in terms of size, diversity, and impact. As suggested by its name, “multimedia” is a name given to a collection of data of multiple types, which include not only “traditional multimedia” such as images and videos, but also emerging media such as 3D graphics (like VRML objects) and Web animations (like Flash animations). Furthermore, relevant techniques have been developed for a growing number of applications, ranging from document editing software to digital libraries and many Web applications. For example, most people who have used Microsoft Word have tried to insert pictures and diagrams into their documents, and they have the experience of watching online video clips such as movie trailers from Web sites such as YouTube.com. Multimedia data have been available in every corner of the digital world. With the huge volume of multimedia data, finding and accessing the multimedia documents that satisfy people’s needs in an accurate and efficient manner becomes a nontrivial problem. This problem is referred to as multimedia information retrieval. The core of multimedia information retrieval is to compute the degree of relevance between users’ information needs and multimedia data. A user’s information need is expressed as a query, which can be in various forms such as a line of free text like “Find me the photos of George Washington,” a few keywords like “George Washington photo,” a media object like a sample picture of George Washington, or their combinations. On the other hand, multimedia data are represented using a certain form of summarization, typically called index, which is directly matched against queries. 
Similar to a query, the index can take a variety of forms, including keywords, visual features such as color histogram and motion vector, depending on the data and task characteristics. For textual documents, mature information retrieval (IR) technologies have been developed and successfully applied in commercial systems such as Web search engines. In comparison, the research on multimedia retrieval is still in its early stage. Unlike textual data, which can be well represented by term vectors that are descriptive of data semantics, multimedia data lack an effective, semantic-level representation that can be computed automatically, which makes multimedia retrieval a much harder research problem. On the other hand, the diversity and complexity of multimedia data offer new opportunities for the retrieval task to be leveraged by the techniques in other research areas. In fact, research on multimedia retrieval has been initiated and investigated by researchers from areas of multimedia database, computer vision, natural language processing, human-computer interaction, and so forth. Overall, it is currently a very active research area that has many interactions with other areas. In the coming sections, we will overview the techniques for multimedia information retrieval, followed by a review on the applications and challenges in this area. Then, the future trends will be discussed, and some important terms in this area are defined at the end of this chapter.

Conference papers on the topic "Video tapes – editing – data processing"

1

Xu, Jianfeng, Toshihiko Yamasaki, and Kiyoharu Aizawa. "Motion Editing in 3D Video Database." In Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06). IEEE, 2006. http://dx.doi.org/10.1109/3dpvt.2006.95.

Full text
2

Jensen, Bob. "Processing techniques for technical video production." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.tha1.

Full text
Abstract:
This presentation will focus on how a technical video production is assembled. Participants will discover the advantages and secrets of producing good technical videos. Various productions from the Maui Space Surveillance Site will be used to demonstrate how many separate elements are combined to achieve videos that meet complex technical objectives. Discussion of key production elements will include establishing objectives, choosing production values for a particular audience, script and storyboard writing, pre-production work, and techniques for production/post-production. Participants will learn about camera set-up, different camera shooting techniques, and the basic elements of lighting a scene. Effective use of audio will also be explored, including microphone types and applications, narration types and effects, and how to enhance productions through the use of appropriate music tracks. Basic editing will be covered, including aesthetics, transitions, movement, and shot variety. Effective presentation of data in technical productions will be demonstrated. Participants will learn how to use switchers and special effects, slow motion, still frames, and animation to provide scene emphasis and heighten viewer interest. Incorporating documentary footage and video from outside sources will be explained. Finally, effective methods of editing to upgrade and update older productions will also be presented.
3

Radescu, Radu, and Bogdan andrei Preda. "MULTIMEDIA APPLICATION DESIGNED TO CREATE E-LEARNING CONTENT FOR VIDEO COMPRESSION AND EDITING." In eLSE 2018. ADL Romania, 2018. http://dx.doi.org/10.12753/2066-026x-18-066.

Full text
Abstract:
This article aims to create theoretical and practical support for video engineering through an e-learning application for video editing and compression. The original contribution is a new perspective on learning the basic practical skills of video signal processing: representation, editing, and compression. The e-learning content application has a main menu with three categories and is divided into 10 work areas. The application is flexible, accepting additions and enhancements to the video formats or codecs. The concepts of format, wrapper, and container refer to the same thing: a software entity that contains one or more codecs inside it. Even though AVI has a number of shortcomings, and modern media formats without those shortcomings (Matroska, MP4, or OGG) have since come to market, it is still the most popular media format due to its ease of use and modification and its very good quality-to-resources ratio. The video editing case study in online learning proposed in this paper has important educational features, helping users learn video processing and acquire basic skills in video engineering, based on nonlinear video editing and modular construction. As for performance, the application provides good response times for cutting, black-and-white conversion, file merging, and webcam capture. Very good response times are also achieved in fast frame shuffling, snapshot capture, data compression, and video capture. The application interface is very simple and easy to use, with clear and explicit menus and intuitive, strategically placed buttons. It is studied in the Information Technology & Computer Systems Master program at the University "Politehnica" of Bucharest, within the laboratory classes of Multimedia Equipment and Technologies.
