A ready-made bibliography on the topic "Video frame"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Choose the type of source:

See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Video frame."

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the annotation of the work online, if the relevant parameters are available in the metadata.

Journal articles on the topic "Video frame"

1

Liu, Dianting, Mei-Ling Shyu, Chao Chen, and Shu-Ching Chen. "Within and Between Shot Information Utilisation in Video Key Frame Extraction." Journal of Information & Knowledge Management 10, no. 03 (2011): 247–59. http://dx.doi.org/10.1142/s0219649211002961.

Abstract:
In consequence of the popularity of family video recorders and the surge of Web 2.0, increasing amounts of videos have made the management and integration of the information in videos an urgent and important issue in video retrieval. Key frames, as a high-quality summary of videos, play an important role in the areas of video browsing, searching, categorisation, and indexing. An effective set of key frames should include major objects and events of the video sequence, and should contain minimum content redundancies. In this paper, an innovative key frame extraction method is proposed to select
2

Gong, Tao, Kai Chen, Xinjiang Wang, et al. "Temporal ROI Align for Video Object Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (2021): 1442–50. http://dx.doi.org/10.1609/aaai.v35i2.16234.

Abstract:
Video object detection is challenging in the presence of appearance deterioration in certain video frames. Therefore, it is a natural choice to aggregate temporal information from other frames of the same video into the current frame. However, ROI Align, as one of the most core procedures of video detectors, still remains extracting features from a single-frame feature map for proposals, making the extracted ROI features lack temporal information from videos. In this work, considering the features of the same object instance are highly similar among frames in a video, a novel Temporal ROI Alig
3

Park, Sunghyun, Kangyeol Kim, Junsoo Lee, et al. "Vid-ODE: Continuous-Time Video Generation with Neural Ordinary Differential Equation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (2021): 2412–22. http://dx.doi.org/10.1609/aaai.v35i3.16342.

Abstract:
Video generation models often operate under the assumption of fixed frame rates, which leads to suboptimal performance when it comes to handling flexible frame rates (e.g., increasing the frame rate of the more dynamic portion of the video as well as handling missing video frames). To resolve the restricted nature of existing video generation models' ability to handle arbitrary timesteps, we propose continuous-time video generation by combining neural ODE (Vid-ODE) with pixel-level video processing techniques. Using ODE-ConvGRU as an encoder, a convolutional version of the recently proposed ne
4

Alsrehin, Nawaf O., and Ahmad F. Klaib. "VMQ: an algorithm for measuring the Video Motion Quality." Bulletin of Electrical Engineering and Informatics 8, no. 1 (2019): 231–38. http://dx.doi.org/10.11591/eei.v8i1.1418.

Abstract:
This paper proposes a new full-reference algorithm, called Video Motion Quality (VMQ) that evaluates the relative motion quality of the distorted video generated from the reference video based on all the frames from both videos. VMQ uses any frame-based metric to compare frames from the original and distorted videos. It uses the time stamp for each frame to measure the intersection values. VMQ combines the comparison values with the intersection values in an aggregation function to produce the final result. To explore the efficiency of the VMQ, we used a set of raw, uncompressed videos to gene
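The VMQ procedure outlined in the abstract above (a per-frame comparison metric, timestamp-intersection values, and an aggregation function) can be illustrated roughly as follows. This is a minimal sketch, not the authors' implementation: the names `psnr`, `timestamp_overlap`, and `vmq_like_score` are hypothetical, PSNR stands in for the unspecified "any frame-based metric", and a weighted mean stands in for the unspecified aggregation function.

```python
import numpy as np

def psnr(ref: np.ndarray, dist: np.ndarray, peak: float = 255.0) -> float:
    """Any frame-based metric can be plugged in; PSNR is used here."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def timestamp_overlap(ref_span, dist_span) -> float:
    """Fraction of the reference frame's display interval covered by the
    distorted frame's interval (a stand-in for an 'intersection value')."""
    start = max(ref_span[0], dist_span[0])
    end = min(ref_span[1], dist_span[1])
    ref_len = ref_span[1] - ref_span[0]
    return max(0.0, end - start) / ref_len if ref_len > 0 else 0.0

def vmq_like_score(ref_frames, dist_frames, ref_spans, dist_spans) -> float:
    """Aggregate per-frame quality, weighted by temporal intersection."""
    scores, weights = [], []
    for rf, df, rs, ds in zip(ref_frames, dist_frames, ref_spans, dist_spans):
        scores.append(psnr(rf, df))
        weights.append(timestamp_overlap(rs, ds))
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total if total else 0.0
```

Because the metric is a pluggable function, SSIM or any other full-reference measure could replace `psnr` without changing the aggregation step.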
6

Chang, Yuchou, and Hong Lin. "Irrelevant frame removal for scene analysis using video hyperclique pattern and spectrum analysis." Journal of Advanced Computer Science & Technology 5, no. 1 (2016): 1. http://dx.doi.org/10.14419/jacst.v5i1.4035.

Abstract:
Video often include frames that are irrelevant to the scenes for recording. These are mainly due to imperfect shooting, abrupt movements of camera, or unintended switching of scenes. The irrelevant frames should be removed before the semantic analysis of video scene is performed for video retrieval. An unsupervised approach for automatic removal of irrelevant frames is proposed in this paper. A novel log-spectral representation of color video frames based on Fibonacci lattice-quantization has been developed for better description of the global structures of video contents to measure s
7

Hadi Ali, Israa, and Talib T. Al-Fatlawi. "A Proposed Method for Key Frame Extraction." International Journal of Engineering & Technology 7, no. 4.19 (2018): 889–92. http://dx.doi.org/10.14419/ijet.v7i4.19.28063.

Abstract:
Video structure analysis can be considered a major step in many applications, such as video summarization, video browsing, content-based video indexing and retrieval, and so on. Video structure analysis aims to split the video into its major components (scenes, shots, key frames). A key frame is one of the fundamental components of video; it can be defined as a frame or set of frames that gives a good representation and summarization of the whole contents of a shot. It must contain most of the features of the shot it represents. In this paper, we propose an easy method for key frame
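A common baseline for the key frame extraction task this entry addresses is a simple inter-frame histogram test. The sketch below is a generic illustration under that assumption, not the paper's proposed method; the names `gray_histogram` and `extract_key_frames` are hypothetical.

```python
import numpy as np

def gray_histogram(frame: np.ndarray, bins: int = 32) -> np.ndarray:
    """Normalized intensity histogram of an 8-bit grayscale frame."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / max(1, frame.size)

def extract_key_frames(frames, threshold: float = 0.4):
    """Keep a frame as a key frame when its histogram differs enough
    (L1 distance) from the most recent key frame; frame 0 is always kept."""
    key_indices = [0]
    last_hist = gray_histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        hist = gray_histogram(frame)
        if np.abs(hist - last_hist).sum() > threshold:
            key_indices.append(i)
            last_hist = hist
    return key_indices
```

The threshold trades summary length against redundancy: a lower value keeps more frames, a higher value yields a sparser summary.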
8

Li, Li, Jianfeng Lu, Shanqing Zhang, Linda Mohaisen, and Mahmoud Emam. "Frame Duplication Forgery Detection in Surveillance Video Sequences Using Textural Features." Electronics 12, no. 22 (2023): 4597. http://dx.doi.org/10.3390/electronics12224597.

Abstract:
Frame duplication forgery is the most common inter-frame video forgery type to alter the contents of digital video sequences. It can be used for removing or duplicating some events within the same video sequences. Most of the existing frame duplication forgery detection methods fail to detect highly similar frames in the surveillance videos. In this paper, we propose a frame duplication forgery detection method based on textural feature analysis of video frames for digital video sequences. Firstly, we compute the single-level 2-D wavelet decomposition for each frame in the forged video sequenc
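At its simplest, the frame duplication detection problem described above reduces to finding frames whose content signatures recur at distant positions in a sequence. The sketch below is a naive illustration of that idea, not the paper's method: the wavelet-based textural features are replaced by a coarse block-mean signature, and all names are hypothetical.

```python
import numpy as np

def coarse_signature(frame: np.ndarray, grid: int = 4) -> tuple:
    """Average intensity over a grid of blocks: a compact content signature."""
    h, w = frame.shape[:2]
    sig = []
    for i in range(grid):
        for j in range(grid):
            block = frame[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            sig.append(int(block.mean()))
    return tuple(sig)

def find_duplicated_frames(frames, min_gap: int = 2):
    """Report (earlier, later) index pairs whose signatures match and that
    are far enough apart to suggest copied, not merely static, content."""
    seen = {}
    pairs = []
    for idx, frame in enumerate(frames):
        sig = coarse_signature(frame)
        if sig in seen and idx - seen[sig] >= min_gap:
            pairs.append((seen[sig], idx))
        else:
            seen.setdefault(sig, idx)  # remember the first occurrence only
    return pairs
```

A real detector would use a perceptual feature (such as the paper's textural features) rather than raw block means, precisely to catch the highly similar surveillance frames that exact signatures miss.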
9

Ragavan K., Venkatalakshmi K., and Vijayalakshmi K. "A Case Study of Key Frame Extraction in Video Processing." Perspectives in Communication, Embedded-systems and Signal-processing - PiCES 4, no. 4 (2020): 17–20. https://doi.org/10.5281/zenodo.3974504.

Abstract:
Video is an integral part of our everyday lives and of many fields such as content-based video browsing, compression, video analysis, etc. Video has a complex structure that includes scenes, shots, and frames. One of the fundamental techniques in content-based video browsing is key frame extraction. In general, to minimize redundancy, the key frame should be representative of the video content. A video can have more than one key frame. The use of a key frame extraction method speeds up the framework by choosing fundamental frames and thereby removing additional computation on redundant f
10

Li, WenLin, DeYu Qi, ChangJian Zhang, Jing Guo, and JiaJun Yao. "Video Summarization Based on Mutual Information and Entropy Sliding Window Method." Entropy 22, no. 11 (2020): 1285. http://dx.doi.org/10.3390/e22111285.

Abstract:
This paper proposes a video summarization algorithm called the Mutual Information and Entropy based adaptive Sliding Window (MIESW) method, which is specifically for the static summary of gesture videos. Considering that gesture videos usually have uncertain transition postures and unclear movement boundaries or inexplicable frames, we propose a three-step method where the first step involves browsing a video, the second step applies the MIESW method to select candidate key frames, and the third step removes most redundant key frames. In detail, the first step is to convert the video into a se
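The candidate-selection step of an entropy-driven summarizer such as the MIESW method described above can be illustrated with plain Shannon entropy over intensity histograms. This is a simplified stand-in, not the published algorithm: the mutual-information term is omitted, the window is fixed rather than adaptive, and all names are hypothetical.

```python
import numpy as np

def frame_entropy(frame: np.ndarray, bins: int = 64) -> float:
    """Shannon entropy (bits) of the frame's intensity distribution."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    p = hist / max(1, frame.size)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def candidate_key_frames(frames, window: int = 5):
    """Within each non-overlapping window, keep the index of the
    highest-entropy frame as a candidate key frame."""
    entropies = [frame_entropy(f) for f in frames]
    candidates = []
    for start in range(0, len(frames), window):
        block = entropies[start:start + window]
        candidates.append(start + int(np.argmax(block)))
    return candidates
```

A full pipeline in the spirit of the abstract would follow this with a redundancy-removal pass that drops candidates too similar to ones already kept.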

Doctoral dissertations on the topic "Video frame"

1

Chau, Wing San. "Key frame selection for video transcoding /." View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202005%20CHAU.

2

Yoon, Kyongil. "Key-frame appearance analysis for video surveillance." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2818.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2005. Thesis research directed by: Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
3

Czerepinski, Przemyslaw Jan. "Displaced frame difference coding for video compression." Thesis, University of Bristol, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267009.

4

SCHAPHORST, RICHARD A., and ALAN R. DEUTERMANN. "FRAME RATE REDUCTION IN VIDEO TELEMETRY SYSTEMS." International Foundation for Telemetering, 1989. http://hdl.handle.net/10150/614503.

Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1989 / Town & Country Hotel & Convention Center, San Diego, California. In video telemetry systems the transmitted picture rate, or temporal resolution, is a critical parameter in determining system performance as well as the transmitted bit rate. In many applications it is important to transmit every TV frame because the maximum temporal resolution must be maintained to analyze critical events such as an encounter between a missile and a target. Typical transmission bit rates for operation at these picture rates are
5

Mackin, Alex. "High frame rate formats for immersive video." Thesis, University of Bristol, 2017. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.730841.

6

Arici, Tarik. "Single and multi-frame video quality enhancement." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29722.

Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009. Committee Chair: Yucel Altunbasak; Committee Member: Brani Vidakovic; Committee Member: Ghassan AlRegib; Committee Member: James Hamblen; Committee Member: Russ Mersereau. Part of the SMARTech Electronic Thesis and Dissertation Collection.
7

Levy, Alfred K. "Object tracking in low frame-rate video sequences." Honors in the Major Thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/339.

Abstract:
This item is only available in print in the UCF Libraries. If this is your Honors Thesis, you can help us make it available online for use by researchers around the world by following the instructions on the distribution consent form at http://library.ucf.edu/Systems/DigitalInitiatives/DigitalCollections/InternetDistributionConsentAgreementForm.pdf You may also contact the project coordinator, Kerri Bottorff, at kerri.bottorff@ucf.edu for more information. Bachelors; Engineering; Computer Science.
8

Sharon, C. M. (Colin Michael). "Compressed video in integrated services frame relay networks." Dissertation, Systems and Computer Engineering, Carleton University, Ottawa, 1994.

9

Kanj, Hind. "Zero-Latency strategies for video transmission using frame extrapolation." Electronic Thesis or Diss., Valenciennes, Université Polytechnique Hauts-de-France, 2024. https://ged.uphf.fr/nuxeo/site/esupversions/53e0c0d3-296e-477f-9adc-2dbc315128f5.

Abstract:
The demand for uninterrupted delivery of high-quality video content with minimal latency is essential in applications such as sports broadcasting and the remote control of systems. However, video streaming remains exposed to challenges due to the variable characteristics of communication channels, which can affect the quality of experience in terms of video quality and end-to-end latency (the time between the acquisition of the video at the transmitter and its display at the receiver). The objective of this thesis is to address the problem of
10

King, Donald V. "(Frame) /-bridge-\ !bang! ((spill)) *sparkle* (mapping Mogadore) /." [Kent, Ohio] : Kent State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=kent1216759724.

Abstract:
Thesis (M.F.A.)--Kent State University, 2008. Title from PDF t.p. (viewed Oct. 19, 2009). Advisor: Paul O'Keeffe. Keywords: Sculpture, Installation Art, Video Art. Includes bibliographical references (p. 25).

Books on the topic "Video frame"

1

Wiegand, Thomas, and Bernd Girod. Multi-Frame Motion-Compensated Prediction for Video Transmission. Springer US, 2001. http://dx.doi.org/10.1007/978-1-4615-1487-9.

2

Bernd, Girod, ed. Multi-frame motion-compensated prediction for video transmission. Kluwer Academic Publishers, 2001.

3

Wiegand, Thomas. Multi-Frame Motion-Compensated Prediction for Video Transmission. Springer US, 2001.

4

Schuster, Guido M. Rate-distortion based video compression: Optimal video frame compression and object boundary encoding. Kluwer Academic Publishers, 1997.

5

Schuster, Guido M. Rate-Distortion Based Video Compression: Optimal Video Frame Compression and Object Boundary Encoding. Springer US, 1997.

6

Fowler, Luke. Luke Fowler: Two-frame films 2006-2012. MACK, 2014.

7

Kempster, Kurt A. Frame rate effects on human spatial perception in video intelligence. Naval Postgraduate School, 2000.

8

Wanner, Franz. Franz Wanner: Foes at the edge of the frame. Distanz Verlag, 2020.

9

Hartz, William G. Data compression techniques applied to high resolution high frame rate video technology. Analex Corp.; Lewis Research Center, 1989.

10

Hartz, William G. Data compression techniques applied to high resolution high frame rate video technology. National Aeronautics and Space Administration, Office of Management, Scientific and Technical Information Division, 1990.


Book chapters on the topic "Video frame"

1

Kemp, Jonathan. "Frame rate." In Film on Video. Routledge, 2019. http://dx.doi.org/10.4324/9780429468872-2.

2

Zhang, Shilin, Heping Li, and Shuwu Zhang. "Video Frame Segmentation." In Lecture Notes in Electrical Engineering. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27314-8_27.

3

Long, Chengjiang, Arslan Basharat, and Anthony Hoogs. "Video Frame Deletion and Duplication." In Multimedia Forensics. Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_13.

Abstract:
Videos can be manipulated in a number of different ways, including object addition or removal, deep fake videos, temporal removal or duplication of parts of the video, etc. In this chapter, we provide an overview of the previous work related to video frame deletion and duplication and dive into the details of two deep-learning-based approaches for detecting and localizing frame deletion (Chengjiang et al. 2017) and duplication (Chengjiang et al. 2019) manipulations.
4

Willett, Rebekah. "In the Frame: Mapping Camcorder Cultures." In Video Cultures. Palgrave Macmillan UK, 2009. http://dx.doi.org/10.1057/9780230244696_1.

5

Feixas, Miquel, Anton Bardera, Jaume Rigau, Mateu Sbert, and Qing Xu. "Video Key Frame Selection." In Information Theory Tools for Image Processing. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-031-79555-8_4.

6

Saldanha, Mário, Gustavo Sanchez, César Marcon, and Luciano Agostini. "VVC Intra-frame Prediction." In Versatile Video Coding (VVC). Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11640-7_3.

7

Miyaji, Chikara. "Frame by frame playback on the Internet video." In Proceedings of the 10th International Symposium on Computer Science in Sports (ISCSS). Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24560-7_8.

8

Yu, Zhiyang, Yu Zhang, Xujie Xiang, Dongqing Zou, Xijun Chen, and Jimmy S. Ren. "Deep Bayesian Video Frame Interpolation." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19784-0_9.

9

Galshetwar, Vijay M., Prashant W. Patil, and Sachin Chaudhary. "Video Enhancement with Single Frame." In Communications in Computer and Information Science. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11349-9_18.

10

Behseta, Sam, Charles Lam, Joseph E. Sutton, and Robert L. Webb. "A Time Series Intra-Video Collusion Attack on Frame-by-Frame Video Watermarking." In Digital Watermarking. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04438-0_3.


Conference abstracts on the topic "Video frame"

1

Pangestu, Muhamad Wisnu, and Arief Setyanto. "Reconstructing Missing Frame in Video Transmission with Frame Interpolation." In 2024 8th International Conference on Information Technology, Information Systems and Electrical Engineering (ICITISEE). IEEE, 2024. http://dx.doi.org/10.1109/icitisee63424.2024.10730211.

2

Rahman, Aimon, Malsha V. Perera, and Vishal M. Patel. "Frame by Familiar Frame: Understanding Replication in Video Diffusion Models." In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2025. https://doi.org/10.1109/wacv61041.2025.00274.

3

Liu, Ruixin, Zhenyu Weng, Yuesheng Zhu, and Bairong Li. "Temporal Adaptive Alignment Network for Deep Video Inpainting." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/129.

Abstract:
Video inpainting aims to synthesize visually pleasant and temporally consistent content in missing regions of video. Due to a variety of motions across different frames, it is highly challenging to utilize effective temporal information to recover videos. Existing deep learning based methods usually estimate optical flow to align frames and thereby exploit useful information between frames. However, these methods tend to generate artifacts once the estimated optical flow is inaccurate. To alleviate above problem, we propose a novel end-to-end Temporal Adaptive Alignment Network(TAAN) for video
4

Molnar, Benjamin, Toby Terpstra, and Tilo Voitel. "Accuracy of Timestamps in Digital and Network Video Recorders." In WCX SAE World Congress Experience. SAE International, 2025. https://doi.org/10.4271/2025-01-8690.

Abstract:
Video analysis plays a major role in many forensic fields. Many articles, publications, and presentations have covered the importance and difficulty in properly establishing frame timing. In many cases, the analyst is given video files that do not contain native metadata. In other cases, the files contain video recordings of the surveillance playback monitor which eliminates all original metadata from the video recording. These “video of video” recordings prevent an analyst from determining frame timing using metadata fr
5

Liu, Da, Debin Zhao, Siwei Ma, and Wen Gao. "Frame layer rate control for dual frame motion compensation." In 2010 18th International Packet Video Workshop (PV). IEEE, 2010. http://dx.doi.org/10.1109/pv.2010.5706823.

6

Deng, Kangle, Tianyi Fei, Xin Huang, and Yuxin Peng. "IRC-GAN: Introspective Recurrent Convolutional GAN for Text-to-video Generation." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/307.

Abstract:
Automatically generating videos according to the given text is a highly challenging task, where visual quality and semantic consistency with captions are two critical issues. In existing methods, when generating a specific frame, the information in those frames generated before is not fully exploited. And an effective way to measure the semantic accordance between videos and captions remains to be established. To address these issues, we present a novel Introspective Recurrent Convolutional GAN (IRC-GAN) approach. First, we propose a recurrent transconvolutional generator, where LSTM cells are
7

Lu, Xinyuan, Shengyuan Huang, Li Niu, Wenyan Cong, and Liqing Zhang. "Deep Video Harmonization With Color Mapping Consistency." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/172.

Abstract:
Video harmonization aims to adjust the foreground of a composite video to make it compatible with the background. So far, video harmonization has only received limited attention and there is no public dataset for video harmonization. In this work, we construct a new video harmonization dataset HYouTube by adjusting the foreground of real videos to create synthetic composite videos. Moreover, we consider the temporal consistency in video harmonization task. Unlike previous works which establish the spatial correspondence, we design a novel framework based on the assumption of color mapping cons
8

Ribeiro, Victor Nascimento, and Nina S. T. Hirata. "Efficient License Plate Recognition in Videos Using Visual Rhythm and Accumulative Line Analysis." In Anais Estendidos da Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação - SBC, 2024. https://doi.org/10.5753/sibgrapi.est.2024.31664.

Abstract:
Video-based Automatic License Plate Recognition (ALPR) involves extracting vehicle license plate text information from video captures. Traditional systems typically rely heavily on high-end computing resources and utilize multiple frames to recognize license plates, leading to increased computational overhead. In this paper, we propose two methods capable of efficiently extracting exactly one frame per vehicle and recognizing its license plate characters from this single image, thus significantly reducing computational demands. The first method uses Visual Rhythm (VR) to generate time-spatial
9

Zheng, Jiping, and Ganfeng Lu. "k-SDPP: Fixed-Size Video Summarization via Sequential Determinantal Point Processes." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/108.

Abstract:
With the explosive growth of video data, video summarization which converts long-time videos to key frame sequences has become an important task in information retrieval and machine learning. Determinantal point processes (DPPs) which are elegant probabilistic models have been successfully applied to video summarization. However, existing DPP-based video summarization methods suffer from poor efficiency of outputting a specified size summary or neglecting inherent sequential nature of videos. In this paper, we propose a new model in the DPP lineage named k-SDPP in vein of sequential determinan
10

Shen, Wang, Wenbo Bao, Guangtao Zhai, Li Chen, Xiongkuo Min, and Zhiyong Gao. "Blurry Video Frame Interpolation." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.00516.


Organizational reports on the topic "Video frame"

1

Zanaty, M., E. Berger, and S. Nandakumar. Video Frame Marking RTP Header Extension. RFC Editor, 2025. https://doi.org/10.17487/rfc9626.

2

Terry, P. L. An evaluation of solid state video frame recorders. Office of Scientific and Technical Information (OSTI), 1994. http://dx.doi.org/10.2172/10181994.

3

Wolf, Stephen, and Margaret Pinson. Video Quality Model for Variable Frame Delay (VQM_VFD). Institute for Telecommunication Sciences, 2011. https://doi.org/10.70220/4fbgs8f1.

4

Wolf, Stephen. Variable Frame Delay (VFD) Parameters for Video Quality Measurements. Institute for Telecommunication Sciences, 2011. https://doi.org/10.70220/nt342qqc.

5

Cooke, B., and A. Saucier. Correction of Line Interleaving Displacement in Frame Captured Aerial Video Imagery. U.S. Department of Agriculture, Forest Service, Southern Forest Experiment Station, 1995. http://dx.doi.org/10.2737/so-rn-380.

6

Wolf, Stephen. A No Reference (NR) and Reduced Reference (RR) Metric for Detecting Dropped Video Frames. Institute for Telecommunication Sciences, 2008. https://doi.org/10.70220/mkjlbtqp.

7

Decleir, Cyril, Mohand-Saïd Hacid, and Jacques Kouloumdjian. A Database Approach for Modeling and Querying Video Data. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.90.

Abstract:
Indexing video data is essential for providing content based access. In this paper, we consider how database technology can offer an integrated framework for modeling and querying video data. As many concerns in video (e.g., modeling and querying) are also found in databases, databases provide an interesting angle to attack many of the problems. From a video applications perspective, database systems provide a nice basis for future video systems. More generally, database research will provide solutions to many video issues even if these are partial or fragmented. From a database perspective, v
8

Debroux, Patrick. The Use of Adjacent Video Frames to Increase Convolutional Neural Network Classification Robustness in Stressed Environments. DEVCOM Analaysis Center, 2023. http://dx.doi.org/10.21236/ad1205367.

9

Hillenbrand, Tobias, Bruno Martorano, and Melissa Siegel. Not as innocent as it seems? The effects of "neutral" messaging on refugee attitudes. UNU-MERIT, 2025. https://doi.org/10.53330/ytid6699.

Abstract:
Immigration has become one of the most divisive political issues in Europe and around the world. In Germany, Europe’s largest refugee hosting country, public attitudes have reached a low point. Besides increased “real-life” exposure to immigrants, exposure to all sorts of messages centered around immigration and refugees may be behind this worrying trend. While prior research has investigated the effects of specific subjects of the immigration discourse, such as specific frames or statistical information, it remains unclear how “neutral” reporting on refugee migration impacts public attitudes.
10

Bates, C. Richards, Melanie Chocholek, Clive Fox, John Howe, and Neil Jones. Scottish Inshore Fisheries Integrated Data System (SIFIDS): Work package (3) final report development of a novel, automated mechanism for the collection of scallop stock data. Edited by Mark James and Hannah Ladd-Jones. Marine Alliance for Science and Technology for Scotland (MASTS), 2019. http://dx.doi.org/10.15664/10023.23449.

Abstract:
[Extract from Executive Summary] This project, aimed at the development of a novel, automated mechanism for the collection of scallop stock data was a sub-part of the Scottish Inshore Fisheries Integrated Data Systems (SIFIDS) project. The project reviewed the state-of-the-art remote sensing (geophysical and camera-based) technologies available from industry and compared these to inexpensive, off-the -shelf equipment. Sea trials were conducted on scallop dredge sites and also hand-dived scallop sites. Data was analysed manually, and tests conducted with automated processing methods. It was con