A ready-made bibliography on the topic "Video frame"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles

See the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Video frame".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, provided the relevant details are available in the metadata.

Journal articles on the topic "Video frame"

1

Liu, Dianting, Mei-Ling Shyu, Chao Chen, and Shu-Ching Chen. "Within and Between Shot Information Utilisation in Video Key Frame Extraction". Journal of Information & Knowledge Management 10, no. 03 (September 2011): 247–59. http://dx.doi.org/10.1142/s0219649211002961.

Abstract:
In consequence of the popularity of family video recorders and the surge of Web 2.0, increasing amounts of videos have made the management and integration of the information in videos an urgent and important issue in video retrieval. Key frames, as a high-quality summary of videos, play an important role in the areas of video browsing, searching, categorisation, and indexing. An effective set of key frames should include major objects and events of the video sequence, and should contain minimum content redundancies. In this paper, an innovative key frame extraction method is proposed to select representative key frames for a video. By analysing the differences between frames and utilising the clustering technique, a set of key frame candidates (KFCs) is first selected at the shot level, and then the information within a video shot and between video shots is used to filter the candidate set to generate the final set of key frames. Experimental results on the TRECVID 2007 video dataset have demonstrated the effectiveness of our proposed key frame extraction method in terms of the percentage of the extracted key frames and the retrieval precision.
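As a rough illustration of the shot-level clustering idea summarised above, the sketch below clusters per-frame colour histograms and keeps the frame closest to each cluster centre as a key-frame candidate. It is a simplified stand-in for the paper's method, not the authors' implementation; OpenCV and scikit-learn are assumed, and the histogram features, cluster count, and function names are illustrative.

```python
# Sketch: pick key-frame candidates within a shot by clustering per-frame
# colour histograms (illustrative of the idea, not the authors' method).
import cv2
import numpy as np
from sklearn.cluster import KMeans

def frame_histogram(frame, bins=16):
    """Concatenated per-channel colour histogram, L1-normalised."""
    hists = [cv2.calcHist([frame], [c], None, [bins], [0, 256]) for c in range(3)]
    h = np.concatenate(hists).ravel()
    return h / (h.sum() + 1e-8)

def key_frame_candidates(video_path, n_clusters=5):
    cap = cv2.VideoCapture(video_path)
    feats = []
    ok, frame = cap.read()
    while ok:
        feats.append(frame_histogram(frame))
        ok, frame = cap.read()
    cap.release()
    feats = np.stack(feats)
    km = KMeans(n_clusters=min(n_clusters, len(feats)), n_init=10).fit(feats)
    candidates = []
    for c in range(km.n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(feats[members] - km.cluster_centers_[c], axis=1)
        candidates.append(int(members[np.argmin(dists)]))   # frame nearest the centre
    return sorted(candidates)
```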
2

Gong, Tao, Kai Chen, Xinjiang Wang, Qi Chu, Feng Zhu, Dahua Lin, Nenghai Yu, and Huamin Feng. "Temporal ROI Align for Video Object Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (18.05.2021): 1442–50. http://dx.doi.org/10.1609/aaai.v35i2.16234.

Abstract:
Video object detection is challenging in the presence of appearance deterioration in certain video frames. Therefore, it is a natural choice to aggregate temporal information from other frames of the same video into the current frame. However, ROI Align, as one of the most core procedures of video detectors, still remains extracting features from a single-frame feature map for proposals, making the extracted ROI features lack temporal information from videos. In this work, considering the features of the same object instance are highly similar among frames in a video, a novel Temporal ROI Align operator is proposed to extract features from other frames feature maps for current frame proposals by utilizing feature similarity. The proposed Temporal ROI Align operator can extract temporal information from the entire video for proposals. We integrate it into single-frame video detectors and other state-of-the-art video detectors, and conduct quantitative experiments to demonstrate that the proposed Temporal ROI Align operator can consistently and significantly boost the performance. Besides, the proposed Temporal ROI Align can also be applied into video instance segmentation.
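The central idea, enriching a current-frame proposal with features borrowed from other frames by feature similarity, can be sketched in PyTorch as follows. This is a simplified illustration (top-k cosine similarity, softmax weighting, and residual fusion are all assumptions), not the paper's exact Temporal ROI Align operator.

```python
# Sketch of similarity-guided temporal aggregation for one ROI (simplified,
# not the paper's exact Temporal ROI Align operator).
import torch
import torch.nn.functional as F

def temporal_roi_aggregate(roi_feat, support_maps, top_k=4):
    """roi_feat: (C, H, W) proposal features from the current frame.
    support_maps: (T, C, Hs, Ws) feature maps from other frames of the video."""
    C, H, W = roi_feat.shape
    q = F.normalize(roi_feat.reshape(C, -1), dim=0)          # (C, H*W)
    k = F.normalize(support_maps.reshape(support_maps.shape[0], C, -1), dim=1)
    aggregated = []
    for t in range(k.shape[0]):
        sim = q.t() @ k[t]                                    # cosine sim (H*W, Hs*Ws)
        weights, idx = sim.topk(top_k, dim=1)                 # most similar locations
        weights = weights.softmax(dim=1)                      # (H*W, top_k)
        values = k[t].t()[idx]                                # (H*W, top_k, C)
        aggregated.append((weights.unsqueeze(-1) * values).sum(dim=1))
    temporal = torch.stack(aggregated).mean(dim=0)            # (H*W, C)
    return roi_feat + temporal.t().reshape(C, H, W)           # residual fusion
```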
3

Alsrehin, Nawaf O., and Ahmad F. Klaib. "VMQ: an algorithm for measuring the Video Motion Quality". Bulletin of Electrical Engineering and Informatics 8, no. 1 (1.03.2019): 231–38. http://dx.doi.org/10.11591/eei.v8i1.1418.

Abstract:
This paper proposes a new full-reference algorithm, called Video Motion Quality (VMQ) that evaluates the relative motion quality of the distorted video generated from the reference video based on all the frames from both videos. VMQ uses any frame-based metric to compare frames from the original and distorted videos. It uses the time stamp for each frame to measure the intersection values. VMQ combines the comparison values with the intersection values in an aggregation function to produce the final result. To explore the efficiency of the VMQ, we used a set of raw, uncompressed videos to generate a new set of encoded videos. These encoded videos are then used to generate a new set of distorted videos which have the same video bit rate and frame size but with reduced frame rate. To evaluate the VMQ, we applied the VMQ by comparing the encoded videos with the distorted videos and recorded the results. The initial evaluation results showed compatible trends with most of subjective evaluation results.
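Read literally, the recipe above pairs each distorted frame with a reference frame by timestamp, scores the pair with any frame-based metric, and aggregates the scores using temporal intersection values. A rough sketch of such a metric follows, assuming PSNR as the frame-level comparison and a simple overlap weight; neither is necessarily the authors' exact choice.

```python
# Rough sketch of a VMQ-style full-reference score: per-frame PSNR between
# timestamp-matched frames, aggregated with a temporal-overlap weight.
import numpy as np

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 99.0 if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def vmq_like_score(ref_frames, ref_times, dist_frames, dist_times, dist_fps):
    """ref_times / dist_times: per-frame presentation timestamps in seconds."""
    ref_times = np.asarray(ref_times)
    frame_dur = 1.0 / dist_fps
    total, weight = 0.0, 0.0
    for frame, t in zip(dist_frames, dist_times):
        j = int(np.argmin(np.abs(ref_times - t)))             # reference frame shown at t
        overlap = max(0.0, frame_dur - abs(ref_times[j] - t))  # temporal intersection
        total += overlap * psnr(frame, ref_frames[j])
        weight += overlap
    return total / weight if weight > 0 else 0.0
```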
4

Park, Sunghyun, Kangyeol Kim, Junsoo Lee, Jaegul Choo, Joonseok Lee, Sookyung Kim, and Edward Choi. "Vid-ODE: Continuous-Time Video Generation with Neural Ordinary Differential Equation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (18.05.2021): 2412–22. http://dx.doi.org/10.1609/aaai.v35i3.16342.

Abstract:
Video generation models often operate under the assumption of fixed frame rates, which leads to suboptimal performance when it comes to handling flexible frame rates (e.g., increasing the frame rate of the more dynamic portion of the video as well as handling missing video frames). To resolve the restricted nature of existing video generation models' ability to handle arbitrary timesteps, we propose continuous-time video generation by combining neural ODE (Vid-ODE) with pixel-level video processing techniques. Using ODE-ConvGRU as an encoder, a convolutional version of the recently proposed neural ODE, which enables us to learn continuous-time dynamics, Vid-ODE can learn the spatio-temporal dynamics of input videos of flexible frame rates. The decoder integrates the learned dynamics function to synthesize video frames at any given timesteps, where the pixel-level composition technique is used to maintain the sharpness of individual frames. With extensive experiments on four real-world video datasets, we verify that the proposed Vid-ODE outperforms state-of-the-art approaches under various video generation settings, both within the trained time range (interpolation) and beyond the range (extrapolation). To the best of our knowledge, Vid-ODE is the first work successfully performing continuous-time video generation using real-world videos.
5

Chang, Yuchou, and Hong Lin. "Irrelevant frame removal for scene analysis using video hyperclique pattern and spectrum analysis". Journal of Advanced Computer Science & Technology 5, no. 1 (6.02.2016): 1. http://dx.doi.org/10.14419/jacst.v5i1.4035.

Abstract:
Videos often include frames that are irrelevant to the scenes for recording. These are mainly due to imperfect shooting, abrupt movements of camera, or unintended switching of scenes. The irrelevant frames should be removed before the semantic analysis of video scene is performed for video retrieval. An unsupervised approach for automatic removal of irrelevant frames is proposed in this paper. A novel log-spectral representation of color video frames based on Fibonacci lattice-quantization has been developed for better description of the global structures of video contents to measure similarity of video frames. Hyperclique pattern analysis, used to detect redundant data in textual analysis, is extended to extract relevant frame clusters in color videos. A new strategy using the k-nearest neighbor algorithm is developed for generating a video frame support measure and an h-confidence measure on this hyperclique pattern based analysis method. Evaluation of the proposed irrelevant video frame removal algorithm reveals promising results for datasets with irrelevant frames.
6

Li, WenLin, DeYu Qi, ChangJian Zhang, Jing Guo, and JiaJun Yao. "Video Summarization Based on Mutual Information and Entropy Sliding Window Method". Entropy 22, no. 11 (12.11.2020): 1285. http://dx.doi.org/10.3390/e22111285.

Abstract:
This paper proposes a video summarization algorithm called the Mutual Information and Entropy based adaptive Sliding Window (MIESW) method, which is specifically for the static summary of gesture videos. Considering that gesture videos usually have uncertain transition postures and unclear movement boundaries or inexplicable frames, we propose a three-step method where the first step involves browsing a video, the second step applies the MIESW method to select candidate key frames, and the third step removes most redundant key frames. In detail, the first step is to convert the video into a sequence of frames and adjust the size of the frames. In the second step, a key frame extraction algorithm named MIESW is executed. The inter-frame mutual information value is used as a metric to adaptively adjust the size of the sliding window to group similar content of the video. Then, based on the entropy value of the frame and the average mutual information value of the frame group, the threshold method is applied to optimize the grouping, and the key frames are extracted. In the third step, speeded up robust features (SURF) analysis is performed to eliminate redundant frames in these candidate key frames. The calculation of Precision, Recall, and Fmeasure are optimized from the perspective of practicality and feasibility. Experiments demonstrate that key frames extracted using our method provide high-quality video summaries and basically cover the main content of the gesture video.
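The two signals the adaptive window relies on, frame entropy and inter-frame mutual information, can be computed directly from intensity histograms. A minimal sketch follows, assuming grayscale frames as NumPy arrays and an illustrative bin count; the windowing and thresholding steps of the method are omitted.

```python
# Sketch of the two signals MIESW adapts on: frame entropy and inter-frame
# mutual information, both from intensity histograms (bin count illustrative).
import numpy as np

def frame_entropy(gray, bins=64):
    counts, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(gray_a, gray_b, bins=64):
    joint, _, _ = np.histogram2d(gray_a.ravel(), gray_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())
```

A sliding window of the kind described would keep growing while the mutual information between neighbouring frames stays high, and close when it drops.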
7

Li, Xin, QiLin Li, Dawei Yin, Lijun Zhang, and Dezhong Peng. "Unsupervised Video Summarization Based on An Encoder-Decoder Architecture". Journal of Physics: Conference Series 2258, no. 1 (1.04.2022): 012067. http://dx.doi.org/10.1088/1742-6596/2258/1/012067.

Abstract:
The purpose of video summarization is to facilitate large-scale video browsing. A video summary is a short and concise synopsis of the original video, usually composed of a set of representative video frames selected from it. This paper solves the problem of unsupervised video summarization by developing a Video Summarization Network (VSN) to summarize videos, which is formulated as selecting a sparse subset of video frames that best represents the input video. VSN predicts a probability for each video frame, which indicates the possibility of a frame being selected, and then takes actions to select frames according to the probability distribution to form a video summary. We designed a novel loss function which takes into account the diversity and representativeness of the generated summarization without labels or user interaction.
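A selection objective of the kind described, rewarding diversity among the chosen frames and representativeness with respect to the whole video, might be sketched as below. The terms, weighting, and names are illustrative assumptions rather than the paper's actual loss.

```python
# Sketch of a diversity + representativeness objective for frame selection
# (terms and weights are illustrative, not the paper's loss).
import torch
import torch.nn.functional as F

def summary_quality(features, picks):
    """features: (T, D) frame features; picks: indices of selected frames."""
    sel = features[picks]                                       # (K, D)
    sel_n = F.normalize(sel, dim=1)
    k = sel.shape[0]
    sim = sel_n @ sel_n.t()
    diversity = 1.0 - (sim.sum() - k) / (k * (k - 1) + 1e-8)    # dissimilar picks
    dists = torch.cdist(features, sel)                          # (T, K)
    representativeness = torch.exp(-dists.min(dim=1).values.mean())
    return diversity + representativeness                       # higher is better
```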
8

Mahum, Rabbia, Aun Irtaza, Saeed Ur Rehman, Talha Meraj, and Hafiz Tayyab Rauf. "A Player-Specific Framework for Cricket Highlights Generation Using Deep Convolutional Neural Networks". Electronics 12, no. 1 (24.12.2022): 65. http://dx.doi.org/10.3390/electronics12010065.

Abstract:
Automatic ways to generate video summarization is a key technique to manage huge video content nowadays. The aim of video summaries is to provide important information in less time to viewers. There exist some techniques for video summarization in the cricket domain, however, to the best of our knowledge our proposed model is the first one to deal with specific player summaries in cricket videos successfully. In this study, we provide a novel framework and a valuable technique for cricket video summarization and classification. For video summary specific to the player, the proposed technique exploits the fact i.e., presence of Score Caption (SC) in frames. In the first stage, optical character recognition (OCR) is applied to extract text summary from SC to find all frames of the specific player such as the Start Frame (SF) to the Last Frame (LF). In the second stage, various frames of cricket videos are used in the supervised AlexNet classifier for training along with class labels such as positive and negative for binary classification. A pre-trained network is trained for binary classification of those frames which are attained from the first phase exhibiting the performance of a specific player along with some additional scenes. In the third phase, the person identification technique is employed to recognize frames containing the specific player. Then, frames are cropped and SIFT features are extracted from identified person to further cluster these frames using the fuzzy c-means clustering method. The reason behind the third phase is to further optimize the video summaries as the frames attained in the second stage included the partner player’s frame as well. The proposed framework successfully utilizes the cricket video dataset. Additionally, the technique is very efficient and useful in broadcasting cricket video highlights of a specific player. The experimental results signify that our proposed method surpasses the previously stated results, improving the overall accuracy of up to 95%.
9

Wang, Yifan, Hao Wang, Kaijie Wang, and Wei Zhang. "Cloud Gaming Video Coding Optimization Based on Camera Motion-Guided Reference Frame Enhancement". Applied Sciences 12, no. 17 (25.08.2022): 8504. http://dx.doi.org/10.3390/app12178504.

Abstract:
Recent years have witnessed tremendous advances in cloud gaming. To alleviate the bandwidth pressure due to transmissions of high-quality cloud gaming videos, this paper optimized existing video codecs with deep learning networks to reduce the bitrate consumption of cloud gaming videos. Specifically, a camera motion-guided network, i.e., CMGNet, was proposed for the reference frame enhancement, leveraging the camera motion information of cloud gaming videos and the reconstructed frames in the reference frame list. The obtained high-quality reference frame was then added to the reference frame list to improve the compression efficiency. The decoder side performs the same operation to generate the reconstructed frames using the updated reference frame list. In the CMGNet, camera motions were used as guidance to estimate the frame motion and weight masks to achieve more accurate frame alignment and fusion, respectively. As a result, the quality of the reference frame was significantly enhanced and thus more suitable as a prediction candidate for the target frame. Experimental results demonstrate the effectiveness of the proposed algorithm, which achieves 4.91% BD-rate reduction on average. Moreover, a cloud gaming video dataset with camera motion data was made available to promote research on game video compression.
10

Kawin, Bruce. "Video Frame Enlargements". Film Quarterly 61, no. 3 (2008): 52–57. http://dx.doi.org/10.1525/fq.2008.61.3.52.

Abstract:
This essay discusses frame-enlargement technology, comparing digital and photographic alternatives and concluding, after the analysis of specific examples, that frames photographed from a 35mm print are much superior in quality.

Doctoral dissertations on the topic "Video frame"

1

Chau, Wing San. "Key frame selection for video transcoding /". View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202005%20CHAU.

2

Yoon, Kyongil. "Key-frame appearance analysis for video surveillance". College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2818.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
3

Czerepinski, Przemyslaw Jan. "Displaced frame difference coding for video compression". Thesis, University of Bristol, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267009.

4

SCHAPHORST, RICHARD A., and ALAN R. DEUTERMANN. "FRAME RATE REDUCTION IN VIDEO TELEMETRY SYSTEMS". International Foundation for Telemetering, 1989. http://hdl.handle.net/10150/614503.

Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1989 / Town & Country Hotel & Convention Center, San Diego, California
In video telemetry systems the transmitted picture rate, or temporal resolution, is a critical parameter in determining system performance as well as the transmitted bit rate. In many applications it is important to transmit every TV frame because the maximum temporal resolution must be maintained to analyze critical events such as an encounter between a missile and a target. Typical transmission bit rates for operation at these picture rates are 5.0 to 10.0 mbps. In other cases the frame rate can be reduced slightly to 15 or 7.5 frames/sec. without significantly reducing the value of the output video. At these frame rates it is still possible to sense the continuity of motion although some jerkiness may appear on rapidly moving objects. At these reduced frame rates the transmitted bit rate can go as low as 1.0 mbps. There is a third class of video telemetry applications where the scene is changing very slowly, and it is permissible to transmit a series of still pictures at very reduced rates. For example one picture can be transmitted every second at a transmission bit rate of 100 Kbps. The purpose of this paper is to examine operation of the standard video coding system (Range Commander Council Standard RCC 209) at conventional frame rates as well as a wide range of reduced frame rates. The following section describes the basic digital TV system which employs the standard codec. Two particular modes of operation are discussed: (1) those which reduce the frame rate by a fixed amount and vary the spatial resolution according to the complexity of the TV image; (2) those which maintain the spatial resolution at a fixed level and automatically vary the temporal resolution according to the complexity of the image. A tradeoff analysis is presented illustrating the interaction of spatial resolution, temporal resolution, and transmission bit rate. A video tape is described and presented illustrating system operation at a wide range of frame rates. Finally, conclusions are drawn.
5

Mackin, Alex. "High frame rate formats for immersive video". Thesis, University of Bristol, 2017. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.730841.

6

Arici, Tarik. "Single and multi-frame video quality enhancement". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29722.

Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Yucel Altunbasak; Committee Member: Brani Vidakovic; Committee Member: Ghassan AlRegib; Committee Member: James Hamblen; Committee Member: Russ Mersereau. Part of the SMARTech Electronic Thesis and Dissertation Collection.
7

Levy, Alfred K. "Object tracking in low frame-rate video sequences". Honors in the Major Thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/339.

Bachelors
Engineering
Computer Science
8

Sharon, C. M. (Colin Michael) Carleton University Dissertation Engineering Systems and Computer. "Compressed video in integrated services frame relay networks". Ottawa, 1994.

9

King, Donald V. "(Frame) /-bridge-\ !bang! ((spill)) *sparkle* (mapping Mogadore) /". [Kent, Ohio] : Kent State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=kent1216759724.

Abstract:
Thesis (M.F.A.)--Kent State University, 2008.
Title from PDF t.p. (viewed Oct. 19, 2009). Advisor: Paul O'Keeffe. Keywords: Sculpture, Installation Art, Video Art. Includes bibliographical references (p. 25).
10

Amin, A. M. "Geometrical analysis and rectification of thermal infrared video frame scanner imagery and its potential applications to topographic mapping". Thesis, University of Glasgow, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375444.

Books on the topic "Video frame"

1

Wiegand, Thomas, and Bernd Girod. Multi-Frame Motion-Compensated Prediction for Video Transmission. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4615-1487-9.
2

Wiegand, Thomas. Multi-Frame Motion-Compensated Prediction for Video Transmission. Boston, MA: Springer US, 2001.

3

Girod, Bernd, ed. Multi-frame motion-compensated prediction for video transmission. Boston: Kluwer Academic Publishers, 2001.
4

Schuster, Guido M. Rate-distortion based video compression: Optimal video frame compression and object boundary encoding. Boston: Kluwer Academic Publishers, 1997.

5

Schuster, Guido M. Rate-Distortion Based Video Compression: Optimal Video Frame Compression and Object Boundary Encoding. Boston, MA: Springer US, 1997.

6

Kempster, Kurt A. Frame rate effects on human spatial perception in video intelligence. Monterey, Calif: Naval Postgraduate School, 2000.

7

Fowler, Luke. Luke Fowler: Two-frame films 2006-2012. London]: MACK, 2014.

8

Wanner, Franz. Franz Wanner: Foes at the edge of the frame. Berlin: Distanz Verlag, 2020.

9

Hartz, William G. Data compression techniques applied to high resolution high frame rate video technology. Cleveland, Ohio: Analex Corp.; Lewis Research Center, 1989.
10

Hartz, William G. Data compression techniques applied to high resolution high frame rate video technology. [Washington, DC]: National Aeronautics and Space Administration, Office of Management, Scientific and Technical Information Division, 1990.


Book chapters on the topic "Video frame"

1

Kemp, Jonathan. "Frame rate". In Film on Video, 14–24. London; New York: Routledge, 2019. http://dx.doi.org/10.4324/9780429468872-2.
2

Zhang, Shilin, Heping Li, and Shuwu Zhang. "Video Frame Segmentation". In Lecture Notes in Electrical Engineering, 193–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27314-8_27.
3

Long, Chengjiang, Arslan Basharat, and Anthony Hoogs. "Video Frame Deletion and Duplication". In Multimedia Forensics, 333–62. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_13.

Abstract:
Videos can be manipulated in a number of different ways, including object addition or removal, deep fake videos, temporal removal or duplication of parts of the video, etc. In this chapter, we provide an overview of the previous work related to video frame deletion and duplication and dive into the details of two deep-learning-based approaches for detecting and localizing frame deletion (Chengjiang et al. 2017) and duplication (Chengjiang et al. 2019) manipulations.
4

Willett, Rebekah. "In the Frame: Mapping Camcorder Cultures". In Video Cultures, 1–22. London: Palgrave Macmillan UK, 2009. http://dx.doi.org/10.1057/9780230244696_1.
5

Feixas, Miquel, Anton Bardera, Jaume Rigau, Mateu Sbert, and Qing Xu. "Video Key Frame Selection". In Information Theory Tools for Image Processing, 75–95. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-031-79555-8_4.
6

Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "VVC Intra-frame Prediction". In Versatile Video Coding (VVC), 23–33. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_3.
7

Miyaji, Chikara. "Frame by frame playback on the Internet video". In Proceedings of the 10th International Symposium on Computer Science in Sports (ISCSS), 59–66. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24560-7_8.
8

Yu, Zhiyang, Yu Zhang, Xujie Xiang, Dongqing Zou, Xijun Chen, and Jimmy S. Ren. "Deep Bayesian Video Frame Interpolation". In Lecture Notes in Computer Science, 144–60. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19784-0_9.
9

Galshetwar, Vijay M., Prashant W. Patil, and Sachin Chaudhary. "Video Enhancement with Single Frame". In Communications in Computer and Information Science, 206–18. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11349-9_18.
10

Behseta, Sam, Charles Lam, Joseph E. Sutton, and Robert L. Webb. "A Time Series Intra-Video Collusion Attack on Frame-by-Frame Video Watermarking". In Digital Watermarking, 31–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04438-0_3.

Conference papers on the topic "Video frame"

1

Liu, Ruixin, Zhenyu Weng, Yuesheng Zhu, and Bairong Li. "Temporal Adaptive Alignment Network for Deep Video Inpainting". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/129.

Abstract:
Video inpainting aims to synthesize visually pleasant and temporally consistent content in missing regions of video. Due to a variety of motions across different frames, it is highly challenging to utilize effective temporal information to recover videos. Existing deep learning based methods usually estimate optical flow to align frames and thereby exploit useful information between frames. However, these methods tend to generate artifacts once the estimated optical flow is inaccurate. To alleviate above problem, we propose a novel end-to-end Temporal Adaptive Alignment Network(TAAN) for video inpainting. The TAAN aligns reference frames with target frame via implicit motion estimation at a feature level and then reconstruct target frame by taking the aggregated aligned reference frame features as input. In the proposed network, a Temporal Adaptive Alignment (TAA) module based on deformable convolutions is designed to perform temporal alignment in a local, dense and adaptive manner. Both quantitative and qualitative evaluation results show that our method significantly outperforms existing deep learning based methods.
2

Liu, Da, Debin Zhao, Siwei Ma, and Wen Gao. "Frame layer rate control for dual frame motion compensation". In 2010 18th International Packet Video Workshop (PV). IEEE, 2010. http://dx.doi.org/10.1109/pv.2010.5706823.
3

Deng, Kangle, Tianyi Fei, Xin Huang, and Yuxin Peng. "IRC-GAN: Introspective Recurrent Convolutional GAN for Text-to-video Generation". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/307.

Abstract:
Automatically generating videos according to the given text is a highly challenging task, where visual quality and semantic consistency with captions are two critical issues. In existing methods, when generating a specific frame, the information in those frames generated before is not fully exploited. And an effective way to measure the semantic accordance between videos and captions remains to be established. To address these issues, we present a novel Introspective Recurrent Convolutional GAN (IRC-GAN) approach. First, we propose a recurrent transconvolutional generator, where LSTM cells are integrated with 2D transconvolutional layers. As 2D transconvolutional layers put more emphasis on the details of each frame than 3D ones, our generator takes both the definition of each video frame and temporal coherence across the whole video into consideration, and thus can generate videos with better visual quality. Second, we propose mutual information introspection to semantically align the generated videos to text. Unlike other methods simply judging whether the video and the text match or not, we further take mutual information to concretely measure the semantic consistency. In this way, our model is able to introspect the semantic distance between the generated video and the corresponding text, and try to minimize it to boost the semantic consistency.We conduct experiments on 3 datasets and compare with state-of-the-art methods. Experimental results demonstrate the effectiveness of our IRC-GAN to generate plausible videos from given text.
4

Lu, Xinyuan, Shengyuan Huang, Li Niu, Wenyan Cong, and Liqing Zhang. "Deep Video Harmonization With Color Mapping Consistency". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/172.

Abstract:
Video harmonization aims to adjust the foreground of a composite video to make it compatible with the background. So far, video harmonization has only received limited attention and there is no public dataset for video harmonization. In this work, we construct a new video harmonization dataset HYouTube by adjusting the foreground of real videos to create synthetic composite videos. Moreover, we consider the temporal consistency in video harmonization task. Unlike previous works which establish the spatial correspondence, we design a novel framework based on the assumption of color mapping consistency, which leverages the color mapping of neighboring frames to refine the current frame. Extensive experiments on our HYouTube dataset prove the effectiveness of our proposed framework. Our dataset and code are available at https://github.com/bcmi/Video-Harmonization-Dataset-HYouTube.
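The colour-mapping-consistency assumption can be illustrated outside any network: estimate a per-channel lookup table from a neighbouring composite frame and its harmonized counterpart, then apply it to the current frame as a temporally consistent initial estimate. The sketch below (a mean-per-bin LUT over 8-bit images) is an assumed simplification, not the paper's learned refinement.

```python
# Sketch of colour-mapping consistency: learn a per-channel LUT from the
# previous composite frame and its harmonized result, then reuse it on the
# current frame (mean-per-bin LUT over 8-bit images; illustrative only).
import numpy as np

def fit_lut(src, dst, mask, bins=256):
    """Per-channel mapping src -> dst estimated inside the foreground mask."""
    lut = np.zeros((3, bins), dtype=np.float32)
    for c in range(3):
        s, d = src[..., c][mask], dst[..., c][mask]
        for b in range(bins):
            hit = d[s == b]
            lut[c, b] = hit.mean() if hit.size else b      # identity where unseen
    return lut

def apply_lut(img, lut, mask):
    out = img.astype(np.float32).copy()
    for c in range(3):
        channel = out[..., c]
        channel[mask] = lut[c, img[..., c][mask].astype(np.intp)]
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical usage on neighbouring frames:
# prev_lut = fit_lut(prev_composite, prev_harmonized, prev_mask)
# current_guess = apply_lut(curr_composite, prev_lut, curr_mask)
```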
5

Shen, Wang, Wenbo Bao, Guangtao Zhai, Li Chen, Xiongkuo Min, and Zhiyong Gao. "Blurry Video Frame Interpolation". In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.00516.
6

Shi, Zhihao, Xiangyu Xu, Xiaohong Liu, Jun Chen, and Ming-Hsuan Yang. "Video Frame Interpolation Transformer". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.01696.
7

Zheng, Jiping, and Ganfeng Lu. "k-SDPP: Fixed-Size Video Summarization via Sequential Determinantal Point Processes". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/108.

Abstract:
With the explosive growth of video data, video summarization which converts long-time videos to key frame sequences has become an important task in information retrieval and machine learning. Determinantal point processes (DPPs) which are elegant probabilistic models have been successfully applied to video summarization. However, existing DPP-based video summarization methods suffer from poor efficiency of outputting a specified size summary or neglecting inherent sequential nature of videos. In this paper, we propose a new model in the DPP lineage named k-SDPP in vein of sequential determinantal point processes but with fixed user specified size k. Our k-SDPP partitions sampled frames of a video into segments where each segment is with constant number of video frames. Moreover, an efficient branch and bound method (BB) considering sequential nature of the frames is provided to optimally select k frames delegating the summary from the divided segments. Experimental results show that our proposed BB method outperforms not only k-DPP and sequential DPP (seqDPP) but also the partition and Markovian assumption based methods.
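A fixed-size, segment-wise selection in the spirit of the abstract can be approximated greedily: split the sampled frames into k contiguous segments and, within each, keep the frame that maximises the determinant (diversity) of the summary chosen so far. This greedy sketch stands in for the paper's DPP model and branch-and-bound search, so it only illustrates the selection structure.

```python
# Greedy stand-in for fixed-size (k) summarization: one frame per segment,
# chosen to maximise the determinant (diversity) of the running selection.
# Replaces the paper's DPP model and branch-and-bound search; assumes T >= k.
import numpy as np

def fixed_size_summary(features, k):
    """features: (T, D) L2-normalised frame features; returns k frame indices."""
    T = features.shape[0]
    bounds = np.linspace(0, T, k + 1, dtype=int)       # k contiguous segments
    chosen = []
    for s in range(k):
        best_idx, best_score = bounds[s], -np.inf
        for i in range(bounds[s], bounds[s + 1]):
            cand = chosen + [i]
            gram = features[cand] @ features[cand].T    # similarity kernel
            score = np.linalg.det(gram + 1e-6 * np.eye(len(cand)))
            if score > best_score:
                best_idx, best_score = i, score
        chosen.append(int(best_idx))
    return chosen
```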
8

Chao, Sun, Liu Yukun, and Jiang Shouda. "A Video Frame Indexing Based Frame Select Method". In 2008 4th IEEE International Conference on Circuits and Systems for Communications (ICCSC 2008). IEEE, 2008. http://dx.doi.org/10.1109/iccsc.2008.135.
9

Mendes, Paulo, and Sérgio Colcher. "Spatio-temporal Localization of Actors in Video/360-Video and its Applications". In Simpósio Brasileiro de Sistemas Multimídia e Web. Sociedade Brasileira de Computação - SBC, 2022. http://dx.doi.org/10.5753/webmedia_estendido.2022.224999.

Abstract:
The popularity of platforms for storing and transmitting video content has created a substantial volume of video data. Given a set of actors present in a video, generating metadata with the temporal determination of the interval in which each actor is present and their spatial 2D localization in each frame in these intervals can facilitate video retrieval and recommendation. In this work, we investigate Video Face Clustering for this spatio-temporal localization of actors in videos. We first describe our method for Video Face Clustering in which we take advantage of face detection, embeddings, and clustering methods to group similar faces of actors in different frames and provide the spatio-temporal localization of them. Then, we explore, propose, and investigate innovative applications of this spatio-temporal localization in three different tasks: (i) Video Face Recognition, (ii) Educational Video Recommendation and (iii) Subtitles Positioning in 360-video.
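The clustering stage of this pipeline can be sketched as follows. Here detect_faces and embed_face are placeholders for whichever detector and embedding model are actually used (both are assumptions), and DBSCAN over cosine distance groups the detections into actor identities with their frame indices and 2D boxes.

```python
# Sketch of the clustering stage: detect_faces and embed_face are placeholders
# for the detector/embedder actually used; DBSCAN over cosine distance groups
# detections into actor identities with frame indices and 2D boxes.
from collections import defaultdict
import numpy as np
from sklearn.cluster import DBSCAN

def localize_actors(frames, detect_faces, embed_face, eps=0.5):
    records, embeddings = [], []
    for t, frame in enumerate(frames):
        for box in detect_faces(frame):                # box = (x, y, w, h)
            records.append((t, box))
            embeddings.append(embed_face(frame, box))
    if not records:
        return {}
    labels = DBSCAN(eps=eps, metric="cosine", min_samples=2).fit_predict(
        np.stack(embeddings))
    actors = defaultdict(list)
    for (t, box), label in zip(records, labels):
        if label != -1:                                # -1 = noise / unmatched face
            actors[int(label)].append((t, box))        # when and where the actor appears
    return dict(actors)
```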
10

Wu, Yue, Qiang Wen, and Qifeng Chen. "Optimizing Video Prediction via Video Frame Interpolation". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.01729.

Reports on the topic "Video frame"

1

Terry, P. L. An evaluation of solid state video frame recorders. Office of Scientific and Technical Information (OSTI), August 1994. http://dx.doi.org/10.2172/10181994.
2

Cooke, B., i A. Saucier. Correction of Line Interleaving Displacement in Frame Captured Aerial Video Imagery. New Orleans, LA: U.S. Department of Agriculture, Forest Service, Southern Forest Experiment Station, 1995. http://dx.doi.org/10.2737/so-rn-380.

3

Decleir, Cyril, Mohand-Saïd Hacid, and Jacques Kouloumdjian. A Database Approach for Modeling and Querying Video Data. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.90.

Abstract:
Indexing video data is essential for providing content based access. In this paper, we consider how database technology can offer an integrated framework for modeling and querying video data. As many concerns in video (e.g., modeling and querying) are also found in databases, databases provide an interesting angle to attack many of the problems. From a video applications perspective, database systems provide a nice basis for future video systems. More generally, database research will provide solutions to many video issues even if these are partial or fragmented. From a database perspective, video applications provide beautiful challenges. Next generation database systems will need to provide support for multimedia data (e.g., image, video, audio). These data types require new techniques for their management (i.e., storing, modeling, querying, etc.). Hence new solutions are significant. This paper develops a data model and a rule-based query language for video content based indexing and retrieval. The data model is designed around the object and constraint paradigms. A video sequence is split into a set of fragments. Each fragment can be analyzed to extract the information (symbolic descriptions) of interest that can be put into a database. This database can then be searched to find information of interest. Two types of information are considered: (1) the entities (objects) of interest in the domain of a video sequence, (2) video frames which contain these entities. To represent these information, our data model allows facts as well as objects and constraints. We present a declarative, rule-based, constraint query language that can be used to infer relationships about information represented in the model. The language has a clear declarative and operational semantics. This work is a major revision and a consolidation of [12, 13].
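The data model outlined above, a video split into fragments that are each annotated with the entities they show, can be mirrored with a few plain record types. The sketch below is an illustrative approximation in Python dataclasses, not the report's object-and-constraint formalism or its rule-based query language.

```python
# Illustrative record types for the fragment/entity annotation scheme the
# report describes (names and fields are assumptions, not the report's schema).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Entity:
    name: str                      # e.g. "anchor", "car", "goal"
    category: str                  # domain-specific type of the object

@dataclass
class Fragment:
    start_frame: int               # first frame of the fragment
    end_frame: int                 # last frame of the fragment
    entities: List[Entity] = field(default_factory=list)

@dataclass
class VideoDocument:
    video_id: str
    fragments: List[Fragment] = field(default_factory=list)

    def frames_with(self, entity_name: str) -> List[range]:
        """Query: the frame ranges in which a given entity appears."""
        return [range(f.start_frame, f.end_frame + 1)
                for f in self.fragments
                if any(e.name == entity_name for e in f.entities)]
```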
4

Bates, C. Richards, Melanie Chocholek, Clive Fox, John Howe, and Neil Jones. Scottish Inshore Fisheries Integrated Data System (SIFIDS): Work package (3) final report development of a novel, automated mechanism for the collection of scallop stock data. Edited by Mark James and Hannah Ladd-Jones. Marine Alliance for Science and Technology for Scotland (MASTS), 2019. http://dx.doi.org/10.15664/10023.23449.

Abstract:
[Extract from Executive Summary] This project, aimed at the development of a novel, automated mechanism for the collection of scallop stock data was a sub-part of the Scottish Inshore Fisheries Integrated Data Systems (SIFIDS) project. The project reviewed the state-of-the-art remote sensing (geophysical and camera-based) technologies available from industry and compared these to inexpensive, off-the -shelf equipment. Sea trials were conducted on scallop dredge sites and also hand-dived scallop sites. Data was analysed manually, and tests conducted with automated processing methods. It was concluded that geophysical acoustic technologies cannot presently detect individual scallop but the remote sensing technologies can be used for broad scale habitat mapping of scallop harvest areas. Further, the techniques allow for monitoring these areas in terms of scallop dredging impact. Camera (video and still) imagery is effective for scallop count and provide data that compares favourably with diver-based ground truth information for recording scallop density. Deployment of cameras is possible through inexpensive drop-down camera frames which it is recommended be deployed on a wide area basis for further trials. In addition, implementation of a ‘citizen science’ approach to wide area recording is suggested to increase the stock assessment across the widest possible variety of seafloor types around Scotland. Armed with such data a full, statistical analysis could be completed and data used with automated processing routines for future long-term monitoring of stock.
5

Tao, Yang, Amos Mizrach, Victor Alchanatis, Nachshon Shamir, and Tom Porter. Automated imaging broiler chicksexing for gender-specific and efficient production. United States Department of Agriculture, December 2014. http://dx.doi.org/10.32747/2014.7594391.bard.

Abstract:
Extending the previous two years of research results (Mizarch, et al, 2012, Tao, 2011, 2012), the third year’s efforts in both Maryland and Israel were directed towards the engineering of the system. The activities included the robust chick handling and its conveyor system development, optical system improvement, online dynamic motion imaging of chicks, multi-image sequence optimal feather extraction and detection, and pattern recognition. Mechanical System Engineering The third model of the mechanical chick handling system with high-speed imaging system was built as shown in Fig. 1. This system has the improved chick holding cups and motion mechanisms that enable chicks to open wings through the view section. The mechanical system has achieved the speed of 4 chicks per second which exceeds the design specs of 3 chicks per second. In the center of the conveyor, a high-speed camera with UV sensitive optical system, shown in Fig.2, was installed that captures chick images at multiple frames (45 images and system selectable) when the chick passing through the view area. Through intensive discussions and efforts, the PIs of Maryland and ARO have created the protocol of joint hardware and software that uses sequential images of chick in its fall motion to capture opening wings and extract the optimal opening positions. This approached enables the reliable feather feature extraction in dynamic motion and pattern recognition. Improving of Chick Wing Deployment The mechanical system for chick conveying and especially the section that cause chicks to deploy their wings wide open under the fast video camera and the UV light was investigated along the third study year. As a natural behavior, chicks tend to deploy their wings as a mean of balancing their body when a sudden change in the vertical movement was applied. In the latest two years, this was achieved by causing the chicks to move in a free fall, in the earth gravity (g) along short vertical distance. The chicks have always tended to deploy their wing but not always in wide horizontal open situation. Such position is requested in order to get successful image under the video camera. Besides, the cells with checks bumped suddenly at the end of the free falling path. That caused the chicks legs to collapse inside the cells and the image of wing become bluer. For improving the movement and preventing the chick legs from collapsing, a slowing down mechanism was design and tested. This was done by installing of plastic block, that was printed in a predesign variable slope (Fig. 3) at the end of the path of falling cells (Fig.4). The cells are moving down in variable velocity according the block slope and achieve zero velocity at the end of the path. The slop was design in a way that the deacceleration become 0.8g instead the free fall gravity (g) without presence of the block. The tests showed better deployment and wider chick's wing opening as well as better balance along the movement. Design of additional sizes of block slops is under investigation. Slops that create accelerations of 0.7g, 0.9g, and variable accelerations are designed for improving movement path and images.
