To see other types of publications on this topic, follow the link: Video processing.

Journal articles on the topic "Video processing"

Consult the top 50 journal articles for your research on the topic "Video processing".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen source in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its abstract online, where these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Jiang, Zhiying, Chong Guan, and Ivo L. de Haaij. "Congruity and processing fluency." Asia Pacific Journal of Marketing and Logistics 32, no. 5 (October 4, 2019): 1070–88. http://dx.doi.org/10.1108/apjml-03-2019-0128.

Abstract:
Purpose The purpose of this paper is to investigate the benefits of Ad-Video and Product-Video congruity for embedded online video advertising. A conceptual model is constructed to test how congruity between online advertisements, advertised products and online videos impact consumer post-viewing attitudes via processing fluency. Design/methodology/approach An online experiment with eight versions of mock video sections (with embedded online video advertisements) was conducted. The study is a 2 (type of appeal: informational vs emotional) × 2 (Ad-Video congruity: congruent vs incongruent) × 2 (Product-Video congruity: congruent vs incongruent) full-factorial between-subject design. A total of 252 valid responses were collected for data analysis. Findings Results show that congruity is related to the improvement of processing fluency only for informational ads/videos. The positive effect of Ad-Video congruity on processing fluency is only significant for informational appeals but not emotional appeal. Similarly, the positive effects of Product-Video congruity on processing fluency are only significant for informational appeals but not emotional appeal. Involvement has been found to be positively related to processing fluency too. Processing fluency has a positive impact on the attitudes toward the ads, advertised products and videos. Research limitations/implications The finding that congruity is related to the improvement of processing fluency only for informational ads/videos extends the existing literature by identifying the type of appeal as a boundary condition. Practical implications Both brand managers and online video platform owners should monitor and operationalize the content and appeal congruity, especially for informational ads on a large scale to improve consumers’ responses. Originality/value To the best of the authors’ knowledge, this is the first paper to examine the effects of Ad-Video and Product-Video congruity of embedded advertisements on video sharing platforms. The findings of this study add to the literature on congruity and processing fluency.
2

Coutts, Maurice D., and Dennis L. Matthies. "Video disc processing." Journal of the Acoustical Society of America 81, no. 5 (May 1987): 1659. http://dx.doi.org/10.1121/1.395033.

3

Uytterhoeven, G. "Digital video processing." Journal of Computational and Applied Mathematics 66, no. 1-2 (January 1996): N5–N6. http://dx.doi.org/10.1016/0377-0427(96)80474-2.

4

Baranwal, Ritwik. "Automatic Summarization of Cricket Highlights using Audio Processing." International Journal for Modern Trends in Science and Technology 7, no. 01 (January 4, 2021): 48–53. http://dx.doi.org/10.46501/ijmtst070111.

Abstract:
The problem of automatic excitement detection in cricket videos is considered and applied to highlight generation. This paper focuses on detecting exciting events in video using complementary information from the audio and video domains. First, a method for separating the audio and video elements is proposed. Thereafter, the "level of excitement" is measured using features such as amplitude and spectral centre of gravity extracted from the commentator's speech, which are used to set the detection threshold. Our experiments using actual cricket videos show that these features are well correlated with human assessment of excitability. Finally, the audio/video information is fused in time order across the scenes rated as exciting in order to generate the cricket highlights. The techniques described in this paper are generic and applicable to a variety of topics and video/acoustic domains.
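
A minimal sketch of the audio cue described in the abstract above: short-time loudness (RMS amplitude) and spectral centre of gravity of the commentary track, flagged against an adaptive threshold. The librosa library, the one-second window and the mean-plus-k·std rule are illustrative assumptions, not the paper's exact parameters.

```python
# Flag "exciting" commentary windows via loudness and spectral centroid.
import numpy as np
import librosa

def excited_windows(audio_path, win_s=1.0, k=1.0):
    y, sr = librosa.load(audio_path, sr=None, mono=True)
    hop = int(win_s * sr)                       # one analysis window per second
    rms = librosa.feature.rms(y=y, frame_length=hop, hop_length=hop)[0]
    cent = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]
    n = min(len(rms), len(cent))                # align the two feature tracks
    rms, cent = rms[:n], cent[:n]
    # Adaptive threshold: mean + k * std over the whole clip (assumption).
    excited = (rms > rms.mean() + k * rms.std()) & \
              (cent > cent.mean() + k * cent.std())
    return [(i * win_s, (i + 1) * win_s) for i in np.flatnonzero(excited)]
```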
5

Xu, Long, Yihua Yan, and Jun Cheng. "Guided filtering for solar image/video processing." Solar-Terrestrial Physics 3, no. 2 (August 9, 2017): 9–15. http://dx.doi.org/10.12737/stp-3220172.

Abstract:
A new image enhancement algorithm employing guided filtering is proposed in this work for the enhancement of solar images and videos, so that users can easily make out the important fine structures embedded in the recorded images/movies of solar observations. The proposed algorithm can efficiently remove image noise, including Gaussian and impulse noise. Meanwhile, it can further highlight fibrous structures on/beyond the solar disk. These fibrous structures can clearly demonstrate the progress of solar flares, prominence/coronal mass ejections, the magnetic field, and so on. The experimental results show that the proposed algorithm delivers a significant enhancement of visual quality over the original input and several classical image enhancement algorithms, thus facilitating easier identification of interesting solar burst activities in recorded images/movies.
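
A rough illustration of the enhancement pipeline the abstract describes: impulse-noise removal, an edge-preserving base layer from a guided filter, then amplification of the fine fibrous detail. The guided filter here comes from OpenCV's ximgproc contrib module; the file name, radius, eps and boost factor are assumptions.

```python
# Median filter for impulse noise, guided filter as base layer, detail boost.
import cv2
import numpy as np

img = cv2.imread("solar_disk.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
img = cv2.medianBlur(img, 3)                    # remove salt-and-pepper noise
img = img.astype(np.float32)
base = cv2.ximgproc.guidedFilter(guide=img, src=img, radius=8, eps=400.0)
detail = img - base                             # fine fibrous structures
out = np.clip(base + 2.5 * detail, 0, 255).astype(np.uint8)
cv2.imwrite("solar_disk_enhanced.png", out)
```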
6

Wira Widjanarko, Kresna, Krisna Aditya Herlambang, and Muhamad Abdul Karim. "Faster Video Processing Menggunakan Teknik Parallel Processing Dengan Library OpenCV." Jurnal Komunikasi, Sains dan Teknologi 1, no. 1 (June 30, 2022): 10–18. http://dx.doi.org/10.61098/jkst.v1i1.2.

Abstract:
Video playback often takes too long to process, especially in applications that require real-time processing, such as webcam video applications, so parallel processing is applied to speed up the video computation. This study discusses parallel processing of video so that the computation runs faster than it does without parallel processing. Testing was carried out on two types of data: a video stream from a laptop webcam and an .mp4 video file. The programming language used in the tests is Python with the OpenCV library. The study found a significant difference in video processing, for both the webcam source and the .mp4 video file, between the version without parallel processing and the multithreaded Video Show and Video Read variants, as well as the combination of the two.
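
The Video Read / Video Show split benchmarked in the study can be reproduced with a dedicated capture thread, roughly as below; this is a generic sketch of the pattern, not the authors' code.

```python
# A capture thread keeps reading frames so display never blocks on I/O.
import threading
import cv2

class VideoReader:
    def __init__(self, src=0):                  # 0 = webcam, or a .mp4 path
        self.cap = cv2.VideoCapture(src)
        self.ok, self.frame = self.cap.read()
        self.lock = threading.Lock()
        threading.Thread(target=self._update, daemon=True).start()

    def _update(self):                          # "Video Read" thread
        while self.ok:
            ok, frame = self.cap.read()
            with self.lock:
                self.ok, self.frame = ok, frame

    def read(self):
        with self.lock:
            return self.ok, self.frame

reader = VideoReader(0)
while True:                                     # "Video Show" in the main thread
    ok, frame = reader.read()
    if not ok:
        break
    cv2.imshow("video", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cv2.destroyAllWindows()
reader.cap.release()
```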
7

Megala, G., et al. "State-of-the-Art in Video Processing: Compression, Optimization and Retrieval." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (April 11, 2021): 1256–72. http://dx.doi.org/10.17762/turcomat.v12i5.1793.

Abstract:
Video compression plays a vital role in modern social media networking with its plethora of multimedia applications. It enables the transmission medium to transfer videos competently and enables resources to store the videos efficiently. Nowadays, high-resolution video data are transferred through communication channels with a high bit rate in order to send multiple compressed videos. There have been many advances in the transmission ability and efficient storage of compressed video, where compression is the primary task involved in multimedia services. This paper summarizes the compression standards and describes the main concepts involved in video coding. Video compression converts the large raw bits of a video sequence into a small, compact one, achieving a high compression ratio with good perceptual video quality. Removing redundant information is the main task in video sequence compression. A survey of various block-matching algorithms, quantization and entropy coding is presented. It is found that many of the methods have high computational complexity and need improvement through optimization.
8

Moon, Nazmun Nessa, Imrus Salehin, Masuma Parvin, Md Mehedi Hasan, Iftakhar Mohammad Talha, Susanta Chandra Debnath, Fernaz Narin Nur, and Mohd Saifuzzaman. "Natural language processing based advanced method of unnecessary video detection." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (December 1, 2021): 5411. http://dx.doi.org/10.11591/ijece.v11i6.pp5411-5419.

Abstract:
In this study we describe the process of identifying unnecessary video using an advanced combined method of natural language processing and machine learning. The system also includes a framework that contains analytics databases, which helps to find statistical accuracy and can detect and accept or reject unnecessary and unethical video content. In our video detection system, we extract text data from video content in two steps: first from video to MPEG-1 audio layer 3 (MP3), and then from MP3 to WAV format. We use the text part of natural language processing to analyze and prepare the data set, and we use both the Naive Bayes and logistic regression classification algorithms to determine the best accuracy for our system. In our research, the MP4 video data were converted to plain text using advanced Python library functions. This brief study discusses the identification of unauthorized, unsocial, unnecessary, unfinished, and malicious videos from their spoken audio records. By analyzing our data sets with this advanced model, we can decide which videos should be accepted or rejected for further action.
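
A minimal sketch of the final classification stage, mirroring the paper's Naive Bayes versus logistic regression comparison on transcribed text; the four-document corpus and its labels are placeholders standing in for the real transcripts.

```python
# Compare Naive Bayes and logistic regression on TF-IDF text features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["educational lecture on mathematics",      # placeholder transcripts
         "clickbait prank gone wrong",
         "documentary about wildlife",
         "spam giveaway scam video"]
labels = [0, 1, 0, 1]                               # 0 = accept, 1 = reject (toy)

X_tr, X_te, y_tr, y_te = train_test_split(
    texts, labels, test_size=0.5, stratify=labels, random_state=0)
for clf in (MultinomialNB(), LogisticRegression(max_iter=1000)):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(X_tr, y_tr)
    print(type(clf).__name__, model.score(X_te, y_te))
```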
9

Beric, Aleksandar, Jef van Meerbergen, Gerard de Haan, and Ramanathan Sethuraman. "Memory-centric video processing." IEEE Transactions on Circuits and Systems for Video Technology 18, no. 4 (April 2008): 439–52. http://dx.doi.org/10.1109/tcsvt.2008.918775.

10

Merigot, Alain, and Alfredo Petrosino. "Parallel processing for image and video processing." Parallel Computing 34, no. 12 (December 2008): 693. http://dx.doi.org/10.1016/j.parco.2008.09.001.

11

Yamashina, Masakazu. "Video Signal Processing LSI. Image Processing DSP." Journal of the Institute of Television Engineers of Japan 48, no. 1 (1994): 38–43. http://dx.doi.org/10.3169/itej1978.48.38.

12

Wang, Wei Hua. "The Design and Implementation of a Video Image Acquisition System Based on VFW." Applied Mechanics and Materials 380-384 (August 2013): 3787–90. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.3787.

Abstract:
Along with the development of computer, electronics and communication technologies, digital image acquisition and processing technology is used in more and more applications in computer and portable systems, such as videophones, digital cameras, digital television, video monitoring, camera phones and video conferencing. Digitized images can be transmitted with high quality and facilitate image retrieval, analysis, processing and storage. In applications such as video conferencing, capturing video is a crucial prerequisite. In this paper, we therefore introduce video capture technology that exploits the VFW video services library developed by Microsoft. Software based on VFW can directly capture digital video, or digitize traditional analog video and then clip it.
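
VFW itself is a legacy Win32 API, so as a modern stand-in here is the equivalent capture-preview-record loop using OpenCV's VideoCapture, which wraps the platform's capture backend (e.g. DirectShow/MSMF on Windows); the codec and frame rate are arbitrary choices.

```python
# Capture from the default camera, preview it, and record to an AVI file.
import cv2

cap = cv2.VideoCapture(0)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter("capture.avi", cv2.VideoWriter_fourcc(*"XVID"), 25.0, size)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)                       # record the digitized stream
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release(); out.release(); cv2.destroyAllWindows()
```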
13

Jeon, Eun-Seon, Dae-Il Kang, Chang-Bong Ban, and Seong-Yul Yang. "Implementation of Video Processing Module for Integrated Modular Avionics System." Journal of Korea Navigation Institute 18, no. 5 (October 30, 2014): 437–44. http://dx.doi.org/10.12673/jant.2014.18.5.437.

14

He, Wenjia, Ibrahim Sabek, Yuze Lou, and Michael Cafarella. "PAINE Demo: Optimizing Video Selection Queries with Commonsense Knowledge." Proceedings of the VLDB Endowment 16, no. 12 (August 2023): 3902–5. http://dx.doi.org/10.14778/3611540.3611581.

Abstract:
Because video is becoming more popular and constitutes a major part of data collection, we need to process video selection queries: selecting videos that contain target objects. However, a naïve scan of a video corpus without optimization would be extremely inefficient, due to applying complex detectors to irrelevant videos. This demo presents Paine, a video query system that employs a novel index mechanism to optimize video selection queries via commonsense knowledge. Paine samples video frames to build an inexpensive lossy index, then leverages probabilistic models based on existing commonsense knowledge sources to capture the semantic-level correlation among video frames, thereby allowing Paine to predict the content of unindexed video. These models can predict which videos are likely to satisfy selection predicates, keeping Paine from processing irrelevant videos. We will demonstrate a system prototype of Paine for accelerating the processing of video selection queries, allowing VLDB'23 participants to use the Paine interface to run queries and compare Paine with the baseline SCAN method.
15

Gambhir, Akshat, K. S. Boppaiah, M. Shruthi Subbaiah, Pooja M, and Kiran P. "Video Oculographic System using Real-Time Video Processing." International Journal of Computer Applications 119, no. 22 (June 20, 2015): 15–18. http://dx.doi.org/10.5120/21368-4400.

16

Patil, Kavitha S. "Digital Image and Video Processing: Algorithms and Applications." Journal of Electrical Systems 20, no. 3s (April 4, 2024): 1390–96. http://dx.doi.org/10.52783/jes.1516.

Abstract:
Many of the techniques used in digital image and video processing were developed in the 1960s at Bell Laboratories. These techniques have applications in a variety of fields, including medical imaging, videophone, character recognition, satellite imagery, and wire-photo standards conversion; additional applications include the enhancement of photographs and videos. The early stages of image and video processing were developed with the intention of enhancing the overall quality of the image or video for human viewing: the input is an image of poor quality, and the output is an image or video of higher quality. The primary purpose of this study is to investigate the algorithms and applications of digital image and video processing extensively; the methodology employed is a qualitative research technique. In accordance with the findings of this research, "image processing" refers to the process of analyzing images with the objective of identifying objects and determining their significance. Image analysts analyze remotely sensed data and attempt to detect, identify, classify, measure, and evaluate the significance of physical and cultural objects, as well as their patterns and spatial relationships. Video processing is a subcategory of signal processing distinguished by the fact that the input and output signals are video files or video streams. Television sets, videocassette recorders (VCRs), DVD players, and other devices all make use of video processing algorithms. The processing of images and videos is extremely useful in a variety of contexts.
17

Yamaguchi, Futao, Hiroshi Sugawara, and Tetsuya Senda. "Video Signal Processing LSI. Recent Trends in Camera and Video Signal Processing LSIs." Journal of the Institute of Television Engineers of Japan 48, no. 1 (1994): 20–24. http://dx.doi.org/10.3169/itej1978.48.20.

18

Shilpa, Bagade, Budati Anil Kumar, and L. Koteswara Rao. "Optimized Visual Internet of Things in Video Processing for Video Streaming." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 5s (June 2, 2023): 362–69. http://dx.doi.org/10.17762/ijritcc.v11i5s.7045.

Abstract:
The global expansion of the Visual Internet of Things (VIoT) has enabled various new applications during the last decade through the interconnection of a wide range of devices and sensors. Frame freezing and buffering are the major artefacts in the broad area of multimedia networking applications, occurring due to significant packet loss and network congestion. Numerous studies have been carried out to understand the impact of packet loss on QoE for a wide range of applications. This paper improves video streaming quality by using the proposed Lossy Video Transmission (LVT) framework to simulate the effect of network congestion on the performance of encrypted static images sent over wireless sensor networks. The simulations are intended for analysing video quality and determining packet-drop resilience during video conversations. Emerging trends in quality measurement, including picture preference, visual attention, and audio-visual quality, are also examined. To appropriately quantify the video quality loss caused by the encoding system, various encoders compress video sequences at various data rates. Simulation results for different QoE metrics on user-generated videos demonstrate that the proposed framework outperforms the existing metrics.
19

Espeland, Håvard, Håkon Kvale Stensland, Dag Haavi Finstad, and Pål Halvorsen. "Reducing Processing Demands for Multi-Rate Video Encoding." International Journal of Multimedia Data Engineering and Management 3, no. 2 (April 2012): 1–19. http://dx.doi.org/10.4018/jmdem.2012040101.

Abstract:
Segmented adaptive HTTP streaming has become the de facto standard for video delivery over the Internet for its ability to scale video quality to the available network resources. Here, each video is encoded in multiple qualities, i.e., running the expensive encoding process for each quality layer. However, these operations consume both a lot of time and resources, and in this paper, the authors propose a system for reusing redundant steps in a video encoder to improve the multi-layer encoding pipeline. The idea is to have multiple outputs for each of the target bitrates and qualities where the intermediate processing steps share and reuse the computational heavy analysis. A prototype has been implemented using the VP8 reference encoder, and their experimental results show that for both low- and high-resolution videos the proposed method can significantly reduce the processing demands and time when encoding the different quality layers.
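
The paper's reuse happens inside the encoder's analysis stage; a simpler, readily available form of sharing is to decode the source once and drive several encodes in a single ffmpeg process, which sketches the multi-rate pipeline. Bitrates and file names are illustrative, and only the decode (not the analysis) is shared here.

```python
# One decode, two H.264 encodes at different bitrates (video only;
# audio would need its own -map entries).
import subprocess

subprocess.run([
    "ffmpeg", "-y", "-i", "input.mp4",
    "-filter_complex", "split=2[hi][lo]",
    "-map", "[hi]", "-c:v", "libx264", "-b:v", "2M", "high.mp4",
    "-map", "[lo]", "-c:v", "libx264", "-b:v", "500k", "low.mp4",
], check=True)
```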
20

Dahake, R. P., et al. "Face Recognition from Video using Threshold based Clustering." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 1S (April 11, 2021): 272–85. http://dx.doi.org/10.17762/turcomat.v12i1s.1768.

Abstract:
Video processing has gained significant attention due to the rapid growth in video feeds collected from a variety of domains. Face recognition and summary generation are gaining attention within video data processing. Recognition includes face identification from video frames and face authentication, i.e., labelling the faces. Face recognition strategies used in image processing cannot be applied directly to video processing because of the bulk of data involved. Video processing techniques face multiple problems such as pose variation, expression variation, illumination variation, and varying camera angles. A lot of research work has been done on face authentication in terms of accuracy and efficiency improvement. The second important aspect is video summarization, on which very little work has been done owing to its complexity, computational overhead, and the lack of appropriate training data. Some existing work that analyses celebrity videos to find associations between name nodes and face nodes of a video dataset using a graphical representation needs scripts or dynamic caption details; moreover, since there can be multiple faces of the same person per frame, using K-means clustering for recognition requires the cluster count, i.e., the total number of persons in the video, to be known initially. The proposed system performs video face recognition and summary generation. It automatically identifies the frontal and profile faces of users. Similar faces are grouped together using threshold-based fixed-width clustering, which to the best of our knowledge is a novel approach in face recognition, and only the top k faces are used for authentication, which improves system efficiency. After face authentication, the occurrence count of each user is extracted and a visual co-occurrence graph is generated as the video summary. The system was tested on a video dataset of multiple persons appearing in different videos; a total of 20 videos containing multiple persons per frame were used for training and testing. In the accuracy evaluation, 80% of faces were correctly identified and authenticated from the video.
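
A sketch of the threshold-based fixed-width (leader-style) clustering idea: a face embedding joins the nearest existing cluster if it lies within the distance threshold, otherwise it starts a new cluster, so no cluster count is needed up front, unlike K-means. The embeddings and the threshold value are assumed inputs.

```python
# Fixed-width clustering of face embeddings; returns member indices per cluster.
import numpy as np

def fixed_width_clusters(embeddings, threshold=0.6):
    centroids, members = [], []
    for i, e in enumerate(embeddings):
        if centroids:
            d = np.linalg.norm(np.asarray(centroids) - e, axis=1)
            j = int(d.argmin())
            if d[j] < threshold:            # close enough: same person
                members[j].append(i)
                # keep the centroid representative with a running mean
                centroids[j] = np.mean([embeddings[k] for k in members[j]], axis=0)
                continue
        centroids.append(np.array(e, dtype=float))  # found a new person
        members.append([i])
    return members
```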
21

Vidan, Cristian, Gavril Alexandru, Razvan Mihai, and Florin Catargiu. "On-Board UAV Video Processing for Ground Target Tracking." Scientific Research and Education in the Air Force 20 (June 18, 2018): 281–84. http://dx.doi.org/10.19062/2247-3173.2018.20.36.

22

Tseng, Shu-Ming, Zhi-Ting Yeh, Chia-Yang Wu, Jia-Bin Chang, and Mehdi Norouzi. "Video Scene Detection Using Transformer Encoding Linker Network (TELNet)." Sensors 23, no. 16 (August 9, 2023): 7050. http://dx.doi.org/10.3390/s23167050.

Abstract:
This paper introduces a transformer encoding linker network (TELNet) for automatically identifying scene boundaries in videos without prior knowledge of their structure. Videos consist of sequences of semantically related shots or chapters, and recognizing scene boundaries is crucial for various video processing tasks, including video summarization. TELNet utilizes a rolling window to scan through video shots, encoding their features extracted from a fine-tuned 3D CNN model (transformer encoder). By establishing links between video shots based on these encoded features (linker), TELNet efficiently identifies scene boundaries where consecutive shots lack links. TELNet was trained on multiple video scene detection datasets and demonstrated results comparable to other state-of-the-art models in standard settings. Notably, in cross-dataset evaluations, TELNet demonstrated significantly improved results (F-score). Furthermore, TELNet’s computational complexity grows linearly with the number of shots, making it highly efficient in processing long videos.
23

Vinayak Naik, Tadvidi. "Video Processing Based Smart Helmet." International Journal for Research in Applied Science and Engineering Technology V, no. IV (March 25, 2017): 144–47. http://dx.doi.org/10.22214/ijraset.2017.4030.

24

Kamiya, Kiyoshi. "Image processing in video microscopy." Acta Histochemica et Cytochemica 24, no. 3 (1991): 353–56. http://dx.doi.org/10.1267/ahc.24.353.

25

Chen, Daozheng, M. Bilgic, L. Getoor, and D. Jacobs. "Dynamic Processing Allocation in Video." IEEE Transactions on Pattern Analysis and Machine Intelligence 33, no. 11 (November 2011): 2174–87. http://dx.doi.org/10.1109/tpami.2011.55.

26

Desor, H. J. "Single-chip video processing system." IEEE Transactions on Consumer Electronics 37, no. 3 (1991): 182–89. http://dx.doi.org/10.1109/30.85511.

27

Wadhwa, Neal, Michael Rubinstein, Frédo Durand, and William T. Freeman. "Phase-based video motion processing." ACM Transactions on Graphics 32, no. 4 (July 21, 2013): 1–10. http://dx.doi.org/10.1145/2461912.2461966.

28

Ahmad, Ishfaq, Yong He, and Ming L. Liou. "Video compression with parallel processing." Parallel Computing 28, no. 7-8 (August 2002): 1039–78. http://dx.doi.org/10.1016/s0167-8191(02)00100-x.

29

Sony Corporation. "Motion-dependent video signal processing." Displays 14, no. 1 (January 1993): 60. http://dx.doi.org/10.1016/0141-9382(93)90023-x.

30

Lin, Dennis, Xiaohuang Huang, Quang Nguyen, Joshua Blackburn, Christopher Rodrigues, Thomas Huang, Minh Do, Sanjay Patel, and Wen-Mei Hwu. "The parallelization of video processing." IEEE Signal Processing Magazine 26, no. 6 (November 2009): 103–12. http://dx.doi.org/10.1109/msp.2009.934116.

31

Faroudja, Y., and N. Balram. "Video Processing for Pixellized Displays." SID Symposium Digest of Technical Papers 30, no. 1 (1999): 48. http://dx.doi.org/10.1889/1.1834064.

32

Niwa, K., T. Araseki, and T. Nishitani. "Digital signal processing for video." IEEE Circuits and Devices Magazine 6, no. 1 (January 1990): 27–33. http://dx.doi.org/10.1109/101.47583.

33

Kinugasa, T., A. Nishizawa, K. Koshio, T. Iguchi, J. Kamimura, and H. Marumori. "A video pre/post-processing LSI for video capture." IEEE Transactions on Consumer Electronics 42, no. 3 (1996): 776–80. http://dx.doi.org/10.1109/30.536184.

34

Ozturk, Bilal A. "Independent Video Steganalysis Framework Perspective of Secure Video Processing." Journal of Electrical Systems 20, no. 6s (May 2, 2024): 2541–49. http://dx.doi.org/10.52783/jes.3241.

Abstract:
We have designed a cross-domain feature-set extraction algorithm for cross-domain steganalysis of video steganography in multiple domains. For the video steganography, we implemented recently proposed methods based on PMs, MVs, and IPMs, and the outputs of these techniques were fed as input to the proposed steganalysis approach. In the proposed cross-domain technique, a global feature set is first extracted according to common statistical properties; after the extraction of the global features, domain-specific features are extracted into a local feature set. Together, the global and local feature sets form the cross-domain steganalysis technique. Classification is performed using conventional machine learning classifiers, namely the Support Vector Machine (SVM) and an Artificial Neural Network (ANN).
35

Elgamml, Mohamed M., Fazly S. Abas, and H. Ann Goh. "Semantic Analysis in Soccer Videos Using Support Vector Machine." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 09 (December 20, 2019): 2055018. http://dx.doi.org/10.1142/s0218001420550186.

Abstract:
A tremendous increase in the video content uploaded on the internet has made it necessary to recognize videos automatically in order to analyze, moderate or categorize certain content so that it can be accessed easily later on. Video analysis requires the study of proficient methodologies at the semantic level in order to address issues such as occlusions, changes in illumination, noise, etc. This paper is aimed at the analysis of soccer videos and their semantic processing as an application in the field of video semantic analysis. The study proposes a framework for automatically generating and annotating the highlights from a soccer video. The proposed framework identifies the interesting clips containing possible scenes of interest, such as goals, penalty kicks, etc., by parsing and processing the audio/video components. The framework analyzes, separates and annotates the individual scenes inside the video clips and saves them using a kernel support vector machine. The results show that semantic analysis of videos using kernel support vector machines is a reliable method for separating and annotating events of interest in a soccer game.
36

Grois, Dan, Evgeny Kaminsky, and Ofer Hadar. "Efficient Real-Time Video-in-Video Insertion into a Pre-Encoded Video Stream." ISRN Signal Processing 2011 (February 14, 2011): 1–11. http://dx.doi.org/10.5402/2011/975462.

Abstract:
This work relates to developing and implementing an efficient method and system for fast real-time Video-in-Video (ViV) insertion, enabling a video sequence to be inserted efficiently into a predefined location within a pre-encoded video stream. The proposed method and system are based on dividing the video insertion process into two steps. The first step (the Video-in-Video Constrained Format (ViVCF) encoder) includes the modification of the conventional H.264/AVC video encoder to support the visual content insertion Constrained Format (CF), including the generation of isolated regions without using Flexible Macroblock Ordering (FMO) slicing, and to support the fast real-time insertion of overlays. Although the first step is computationally intensive, it has to be performed only once, even if different overlays have to be inserted (e.g., for different users). The second step, performing the ViV insertion (the ViVCF inserter), is relatively simple (operating mostly in the bit domain) and is performed separately for each different overlay. The performance of the presented method and system is demonstrated and compared with the H.264/AVC reference software (JM 12); according to our experimental results, the bit-rate overhead is significantly low, while there is substantially no degradation in PSNR quality.
37

Xiao, Jiangjian, Hui Cheng, Feng Han, and Harpreet Sawhney. "Geo-Based Aerial Surveillance Video Processing for Scene Understanding and Object Tracking." International Journal of Pattern Recognition and Artificial Intelligence 23, no. 07 (November 2009): 1285–307. http://dx.doi.org/10.1142/s0218001409007582.

Abstract:
This paper presents an approach to extract semantic layers from aerial surveillance videos for scene understanding and object tracking. The input videos are captured by low flying aerial platforms and typically consist of strong parallax from non-ground-plane structures as well as moving objects. Our approach leverages the geo-registration between video frames and reference images (such as those available from Terraserver and Google satellite imagery) to establish a unique geo-spatial coordinate system for pixels in the video. The geo-registration process enables Euclidean 3D reconstruction with absolute scale unlike traditional monocular structure from motion where continuous scale estimation over long periods of time is an issue. Geo-registration also enables correlation of video data to other stored information sources such as GIS (Geo-spatial Information System) databases. In addition to the geo-registration and 3D reconstruction aspects, the other key contributions of this paper also include: (1) providing a reliable geo-based solution to estimate camera pose for 3D reconstruction, (2) exploiting appearance and 3D shape constraints derived from geo-registered videos for labeling of structures such as buildings, foliage, and roads for scene understanding, and (3) elimination of moving object detection and tracking errors using 3D parallax constraints and semantic labels derived from geo-registered videos. Experimental results on extended time aerial video data demonstrates the qualitative and quantitative aspects of our work.
38

Mochurad, Lesia. "A NEW APPROACH FOR TEXT RECOGNITION ON A VIDEO CARD." Computer systems and information technologies, no. 3 (September 28, 2022): 22–30. http://dx.doi.org/10.31891/csit-2022-3-3.

Abstract:
An important task is to develop a computer system that can automatically read text content from images or videos with a complex background. Due to the large number of calculations involved, it is quite difficult to apply such methods in real time. The use of parallel and distributed computing in the development of real-time or near real-time systems is therefore relevant, especially in areas such as automated video recording of traffic violations, text recognition, machine vision, fingerprint recognition, speech recognition, and more. The paper proposes a new approach to text recognition on a video card. A parallel algorithm for processing a group of images and a video sequence was developed and tested, with parallelization on the video core provided by the OpenCL framework and CUDA technology. Without loss of generality, the problem considered is the processing of images containing vehicles, from which the text of the license plate is obtained. The developed system was tested on the processing speed of a group of images and videos, achieving an average processing speed of 207 frames per second. As for the execution time of the parallel algorithm, for 50 images and a 63-frame video, image preprocessing took 0.4 seconds, which is sufficient for real-time or near real-time systems. The maximum acceleration obtained is up to 8 times for image processing and up to 12 times for the video sequence. The general tendency of the acceleration to increase with the dimensionality of the processed image is preserved, which indicates the relevance of parallel calculations for this problem.
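
One low-effort way to get such offload from Python is OpenCV's transparent OpenCL API: wrapping frames in cv2.UMat routes supported operations to the video card when OpenCL is available. This is a generic sketch of a licence-plate pre-processing step, not the author's implementation.

```python
# Grayscale + blur + Otsu binarization executed via OpenCL where available.
import cv2

cv2.ocl.setUseOpenCL(True)
frame = cv2.imread("plate.jpg")             # hypothetical vehicle image
gpu = cv2.UMat(frame)                       # upload once, stay on-device
gray = cv2.cvtColor(gpu, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
_, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
result = binary.get()                       # download result to host memory
print("OpenCL available:", cv2.ocl.haveOpenCL(), "result:", result.shape)
```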
39

Shahadi, Haider Ismael, Zaid Jabbar Al-allaq, and Hayder Jawad Albattat. "Efficient denoising approach based eulerian videomagnification forcolour and motion variations." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 5 (October 1, 2020): 4701. http://dx.doi.org/10.11591/ijece.v10i5.pp4701-4711.

Abstract:
Digital video magnification is a computer-based microscope that is useful for detecting changes in recorded videos that are too subtle for human eyes. This technology can be employed in several areas, such as medical, biological, mechanical and physical applications. Eulerian is the most popular approach to video magnification. However, amplifying the subtle changes in a video also amplifies the subtle noise. This paper proposes an approach to reduce the amplified noise in magnified video for both types of change amplification, colour and motion. The proposed approach processes the video produced by the Eulerian algorithm, whether linear or phase-based, in order to cancel noise. The approach utilizes wavelet denoising to localize the frequencies of the noise distributed over the different frequency bands. Subsequently, the energy of the coefficients at the localized frequencies is attenuated by attenuating the amplitude of these coefficients. The experimental results show the superiority of the proposed approach over the conventional linear and phase-based Eulerian video magnification approaches in terms of the quality of the resulting magnified videos. This allows videos to be amplified by a larger amplification factor, so that several new applications can be added to the list of Eulerian video magnification users. Furthermore, the processing time does not increase significantly: the increment is less than 3% of the overall processing compared to conventional Eulerian video magnification.
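
A sketch of the core denoising step as described: decompose each magnified (grayscale) frame with a 2-D wavelet transform, soft-threshold the detail subbands where the amplified noise concentrates, and reconstruct. The db4 wavelet, decomposition level and MAD-based threshold rule are standard assumptions, not necessarily the paper's exact choices.

```python
# Wavelet soft-threshold denoising of a single 2-D (grayscale) frame.
import numpy as np
import pywt

def denoise_frame(frame, wavelet="db4", level=3, k=3.0):
    coeffs = pywt.wavedec2(frame.astype(np.float64), wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    # Noise estimate from the finest diagonal subband (median absolute deviation).
    sigma = np.median(np.abs(details[-1][2])) / 0.6745
    thr = k * sigma
    shrunk = [tuple(pywt.threshold(d, thr, mode="soft") for d in lvl)
              for lvl in details]
    return pywt.waverec2([approx] + shrunk, wavelet)
```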
40

Hari Krishna, Yaram, Kanagala Bharath Kumar, Dasari Maharshi, and J. Amudhavel. "Image Processing and Restriction of Video Downloads Using Cloud." International Journal of Engineering & Technology 7, no. 2.32 (May 31, 2018): 327. http://dx.doi.org/10.14419/ijet.v7i2.32.15705.

Abstract:
This work covers flower image classification using deep learning with a convolutional neural network (CNN) in TensorFlow. TensorFlow is used to implement the machine learning algorithms. The flower image processing is based on supervised learning, which detects the parameters of an image; the parameters are compared by decision algorithms, and the images are classified by the neurons of the convolutional neural network. Video processing based on machine learning is used to restrict video downloads by preventing a second response from the server, and to enable debugging of the video by removing the request from the user.
41

Raja Suguna, M., A. Kalaivani, and S. Anusuya. "The Detection of Video Shot Transitions Based on Primary Segments Using the Adaptive Threshold of Colour-Based Histogram Differences and Candidate Segments Using the SURF Feature Descriptor." Symmetry 14, no. 10 (September 30, 2022): 2041. http://dx.doi.org/10.3390/sym14102041.

Abstract:
Aim: Advancements in multimedia technology have facilitated the uploading and processing of videos with substantial content. Automated tools and techniques help to manage vast volumes of video content. Video shot segmentation is the basic symmetry step underlying video processing techniques such as video indexing, content-based video retrieval, video summarization, and intelligent surveillance. Video shot boundary detection segments a video into temporal segments called shots and identifies the video frame in which a shot change occurs. The two types of shot transitions are cut and gradual. Illumination changes, camera motion, and fast-moving objects in videos reduce the detection accuracy of cut and gradual transitions. Materials and Methods: In this paper, a novel symmetry shot boundary detection system is proposed to maximize detection accuracy by analysing the transition behaviour of a video, segmenting it initially into primary segments and candidate segments by using the colour feature and the local adaptive threshold of each segment. Thereafter, the cut and gradual transitions are fine-tuned from the candidate segment using Speeded-Up Robust Features (SURF) extracted from the boundary frames to reduce the algorithmic complexity. The proposed symmetry method is evaluated using the TRECVID 2001 video dataset, and the results show an increase in detection accuracy. Result: The F1 score obtained for the detection of cut and gradual transitions is 98.7% and 90.8%, respectively. Conclusions: The proposed symmetry method surpasses recent state-of-the-art SBD methods, demonstrating increased accuracy for both cut and gradual transitions in videos.
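
The first stage, colour-histogram differences with an adaptive threshold marking candidate boundaries, can be sketched as below. This version uses one global mean-plus-k·std threshold and omits the SURF refinement, so it only approximates the per-segment scheme of the paper.

```python
# Candidate shot boundaries from HSV histogram differences between frames.
import cv2
import numpy as np

def candidate_boundaries(path, k=3.0):
    cap, hists = cv2.VideoCapture(path), []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        hists.append(cv2.normalize(h, h).flatten())
    cap.release()
    diffs = np.array([cv2.compareHist(a, b, cv2.HISTCMP_BHATTACHARYYA)
                      for a, b in zip(hists, hists[1:])])
    thr = diffs.mean() + k * diffs.std()    # adaptive, clip-dependent threshold
    return [i + 1 for i, d in enumerate(diffs) if d > thr]
```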
42

Sakurai, Masaru. "Video Signal Processing LSI. Signal Processing LSIs for HDTV." Journal of the Institute of Television Engineers of Japan 48, no. 1 (1994): 25–30. http://dx.doi.org/10.3169/itej1978.48.25.

43

El rai, Marwa Chendeb, Muna Darweesh, and Mina Al-Saad. "Semi-Supervised Segmentation of Echocardiography Videos Using Graph Signal Processing." Electronics 11, no. 21 (October 26, 2022): 3462. http://dx.doi.org/10.3390/electronics11213462.

Abstract:
Machine learning and computer vision algorithms can provide a precise and automated interpretation of medical videos. The segmentation of the left ventricle in echocardiography videos plays an essential role in cardiology for carrying out clinical cardiac diagnosis and monitoring the patient's condition. Most of the deep learning algorithms developed for video segmentation require an enormous amount of labeled data to generate accurate results. Thus, there is a need to develop new semi-supervised segmentation methods, because labeled data are scarce and costly. In recent research, semi-supervised learning approaches based on graph signal processing have emerged in computer vision due to their ability to exploit the geometrical structure of data, and video object segmentation can be considered a node classification problem. In this paper, we propose a new approach called GraphECV, based on graph signal processing, for semi-supervised video object segmentation, applied to the segmentation of the left ventricle in echocardiography videos. GraphECV comprises instance segmentation; the extraction of temporal, texture and statistical features to represent the nodes; the construction of a graph using K-nearest neighbors; graph sampling to embed the graph with a small number of labeled nodes (graph signals); and finally a semi-supervised learning approach based on minimization of the Sobolev norm of graph signals. The new algorithm is evaluated on two publicly available echocardiography video datasets, EchoNet-Dynamic and CAMUS, and outperforms other state-of-the-art methods under challenging background conditions.
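
GraphECV's last step minimizes a Sobolev norm of graph signals; as a stand-in with the same overall shape (a K-NN graph over per-node features, a few labeled seeds propagated to all nodes), scikit-learn's LabelSpreading gives a compact sketch. Note that it uses a different smoothness criterion than the paper, and the features here are random placeholders.

```python
# Semi-supervised node classification over a K-NN graph.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

features = np.random.rand(200, 16)   # per-node texture/temporal features (placeholder)
labels = np.full(200, -1)            # -1 marks unlabeled nodes
labels[:10] = 1                      # a few "left ventricle" seed nodes
labels[10:20] = 0                    # a few background seed nodes

model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(features, labels)
print(model.transduction_[:30])      # inferred class for every node
```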
44

Oh, Hyungsuk, and Wonha Kim. "Video Processing for Human Perceptual Visual Quality-Oriented Video Coding." IEEE Transactions on Image Processing 22, no. 4 (April 2013): 1526–35. http://dx.doi.org/10.1109/tip.2012.2233485.

45

Matsuda, Kiichi, Fumitaka Asami, and Osamu Kawai. "Video Signal Processing LSI. LSIs for Highly Efficient Video Coding." Journal of the Institute of Television Engineers of Japan 48, no. 1 (1994): 31–37. http://dx.doi.org/10.3169/itej1978.48.31.

46

Bennett, Stephanie, Tarek Nasser El Harake, Rafik Goubran, and Frank Knoefel. "Adaptive Eulerian Video Processing of Thermal Video: An Experimental Analysis." IEEE Transactions on Instrumentation and Measurement 66, no. 10 (October 2017): 2516–24. http://dx.doi.org/10.1109/tim.2017.2684518.

47

Soedarso, Nick. "Mengolah Data Video Analog Menjadi Video Digital Sederhana." Humaniora 1, no. 2 (October 31, 2010): 569. http://dx.doi.org/10.21512/humaniora.v1i2.2897.

Abstract:
Nowadays, editing technology has entered the digital age. Processing analog data into digital data has become simpler as editing technology has been integrated into all aspects of society. Understanding the technique of converting analog data to digital is important in producing a video. To utilize this technology, an introduction to the equipment is fundamental to understanding its features. The next phase is the capturing process, which supports the preparation of scene-to-scene editing so that the result becomes a watchable video.
48

Rascioni, Giorgio, Susanna Spinsante, and Ennio Gambi. "An Optimized Dynamic Scene Change Detection Algorithm for H.264/AVC Encoded Video Sequences." International Journal of Digital Multimedia Broadcasting 2010 (2010): 1–9. http://dx.doi.org/10.1155/2010/864123.

Abstract:
Scene change detection plays an important role in a number of video applications, including video indexing, semantic feature extraction, and, in general, pre- and post-processing operations. This paper deals with the design and performance evaluation of a dynamic scene change detector optimized for H.264/AVC encoded video sequences. The detector is based on a dynamic threshold that adaptively tracks different features of the video sequence to increase the accuracy of the whole scheme in correctly locating true scene changes. The solution has been tested on suitable video sequences that resemble real-world videos thanks to a number of different motion features, and has provided good performance without requiring an increase in decoder complexity. This is a valuable property, considering the possible application of the proposed algorithm in post-processing operations, such as error concealment for video decoding in typical error-prone video transmission environments, such as wireless networks.
49

Al-Jarrah, Mohammad A., and Faruq A. Al-Omari. "Fast Video Shot Boundary Detection Technique based on Stochastic Model." International Journal of Computer Vision and Image Processing 6, no. 2 (July 2016): 1–17. http://dx.doi.org/10.4018/ijcvip.2016070101.

Abstract:
A video is composed of a set of shots, where a shot is defined as a sequence of consecutive frames captured by one camera without interruption. A shot transition can be abrupt (hard cut) or gradual (fade, dissolve, or wipe). Shot boundary detection is an essential component of video processing; the detected boundaries are utilized in many aspects of video processing, such as video indexing and video on demand. In this paper, the authors propose a new shot boundary detection algorithm that detects all types of shot boundaries with high accuracy. The algorithm is developed from a global stochastic model of the video stream, which utilizes the joint characteristic function, and consequently the joint moments, to model the stream. The proposed algorithm was implemented and tested against different categories of videos, and it detects cut, fade, dissolve, and wipe transitions. Experimental results show that the algorithm has high performance; the computed precision and recall rates validate its performance.
50

Leszczuk, Mikołaj, Lucjan Janowski, Jakub Nawała, and Atanas Boev. "Objective Video Quality Assessment Method for Face Recognition Tasks." Electronics 11, no. 8 (April 7, 2022): 1167. http://dx.doi.org/10.3390/electronics11081167.

Abstract:
Nowadays, there are many metrics for overall Quality of Experience (QoE), both those with Full Reference (FR), such as Peak Signal-to-Noise Ratio (PSNR) or Structural Similarity (SSIM), and those with No Reference (NR), such as Video Quality Indicators (VQI), which are successfully used in video processing systems to evaluate videos whose quality is degraded by different processing scenarios. However, they are not suitable for video sequences used for recognition tasks (Target Recognition Videos, TRV). Therefore, correctly estimating the performance of the video processing pipeline in both manual and Computer Vision (CV) recognition tasks is still a major research challenge. There is a need for objective methods to evaluate video quality for recognition tasks. In response to this need, we show in this paper that it is possible to develop the new concept of an objective model for evaluating video quality for face recognition tasks. The model is trained, tested and validated on a representative set of image sequences. The set of degradation scenarios is based on the model of a digital camera and how the luminous flux reflected from the scene eventually becomes a digital image. The resulting degraded images are evaluated using a CV library for face recognition as well as VQI. The measured accuracy of a model, expressed as the value of the F-measure parameter, is 0.87.
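
The Full-Reference metrics named above are directly available in scikit-image; a minimal scoring sketch with placeholder frames:

```python
# Score a degraded frame against its reference with PSNR and SSIM.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

ref = np.random.rand(240, 320)                                # reference frame
deg = np.clip(ref + 0.05 * np.random.randn(240, 320), 0, 1)   # degraded frame

print("PSNR:", peak_signal_noise_ratio(ref, deg, data_range=1.0))
print("SSIM:", structural_similarity(ref, deg, data_range=1.0))
```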