
Journal articles on the topic "Methods of video data processing"

Consult the top 50 journal articles for your research on the topic "Methods of video data processing".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Chavate, Shrikant, and Ravi Mishra. "Efficient Detection of Abrupt Transitions Using Statistical Methods." ECS Transactions 107, no. 1 (April 24, 2022): 6541–52. http://dx.doi.org/10.1149/10701.6541ecst.

Abstract:
The rapid increment of technological advancements in the field of multimedia streaming has piled up data production and consumption in cyberspace over the past two decades. It has prompted a swift increase in data transmission volume and repository size. Video is the most consumed data type on the internet, so retrieval of selected video clips from an extra-large database of videos is extremely complex. Video shot boundary detection (SBD) is used for the retrieval of desired video clips. It is also the fundamental step in video processing and is important for applications such as video browsing and indexing. SBD handles the identification of abrupt and gradual transitions in the video database. In this paper, statistical methods are implemented to detect abrupt cuts, which offers the advantage of low complexity. The experiments are performed on the TRECVID 2007 dataset, and the results show high accuracy at low computational cost.
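The statistical cut-detection idea in the abstract above can be illustrated with a minimal sketch: flag an abrupt cut when the histogram difference between consecutive frames spikes. This is an illustrative reconstruction, not the authors' exact method; the bin count and threshold are arbitrary choices.

```python
import numpy as np

def detect_abrupt_cuts(frames, threshold=0.5, bins=16):
    """Flag abrupt transitions where the normalized histogram
    difference between consecutive frames exceeds a threshold."""
    cuts = []
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()  # normalize to a probability distribution
        if prev_hist is not None:
            # L1 distance between consecutive histograms, in [0, 2]
            diff = np.abs(hist - prev_hist).sum()
            if diff > threshold:
                cuts.append(i)  # cut occurs between frame i-1 and frame i
        prev_hist = hist
    return cuts

# Synthetic clip: 5 dark frames, then an abrupt cut to 5 bright frames.
rng = np.random.default_rng(0)
dark = [rng.integers(0, 60, (32, 32)) for _ in range(5)]
bright = [rng.integers(200, 256, (32, 32)) for _ in range(5)]
print(detect_abrupt_cuts(dark + bright))  # → [5]
```

Real detectors must also distinguish cuts from flashes and gradual transitions, which is where the statistical modelling in the paper comes in.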
2

Gu, Chong, and Zhan Jun Si. "Applied Research of Assessment Methods on Video Quality." Applied Mechanics and Materials 262 (December 2012): 157–62. http://dx.doi.org/10.4028/www.scientific.net/amm.262.157.

Abstract:
With the rapid development of modern video technology, the range of video applications is increasing, such as online video conferencing, online classrooms, and online medical services. However, because the quantity of video data is large, video has to be compressed and encoded appropriately, and the encoding process may introduce distortions in video quality. Therefore, how to evaluate video quality efficiently and accurately is essential in the fields of video processing, video quality monitoring, and multimedia video applications. In this article, subjective and comprehensive evaluation methods of video quality were introduced, a video quality assessment system was completed, and four ITU-recommended videos were encoded in five different formats and evaluated by the Degradation Category Rating (DCR) and Structural Similarity (SSIM) methods. After that, comprehensive evaluations with weights were applied. Results show that the data of all three evaluations have good consistency; H.264 is the best encoding method, followed by Xvid and WMV8; and the higher the encoding bit rate, the better the evaluations, although compared to 1000 kbps, the subjective and objective evaluation scores at 1400 kbps did not improve noticeably. The whole process can also be used to evaluate new encoding methods, is applicable to high-definition video, and plays a significant role in promoting video quality evaluation and video encoding.
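SSIM, the objective metric used in this study, is a standard measure. A single-window sketch of the index follows; production implementations (and the paper's) average the statistic over local sliding windows rather than computing it once globally:

```python
import numpy as np

def ssim(x, y, L=255, k1=0.01, k2=0.03):
    """Global Structural Similarity index between two grayscale images
    (single-window variant; real SSIM averages over local windows)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2  # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (64, 64))
noisy = np.clip(ref + rng.normal(0, 20, ref.shape), 0, 255)
print(round(ssim(ref, ref), 4))  # identical frames → 1.0
print(ssim(ref, noisy) < 1.0)    # distortion lowers the score → True
```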
3

Megala, G., et al. "State-Of-The-Art In Video Processing: Compression, Optimization And Retrieval." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (April 11, 2021): 1256–72. http://dx.doi.org/10.17762/turcomat.v12i5.1793.

Abstract:
Video compression plays a vital role in modern social media networking, with a plethora of multimedia applications. It empowers the transmission medium to transfer videos competently and enables resources to store video efficiently. Nowadays, high-resolution video data are transferred through communication channels with high bit rates in order to send multiple compressed videos. There have been many advances in transmission capability and in efficient storage of compressed video, where compression is the primary task involved in multimedia services. This paper summarizes the compression standards and describes the main concepts involved in video coding. Video compression converts a large raw bit stream of a video sequence into a small, compact one, achieving a high compression ratio with good perceptual video quality. Removing redundant information is the main task in video sequence compression. The survey focuses on various block matching algorithms, quantization, and entropy coding. It is found that many of the methods have high computational complexity and need improvement through optimization.
4

Wei, Bo, Kai Li, Chengwen Luo, Weitao Xu, Jin Zhang, and Kuan Zhang. "No Need of Data Pre-processing." ACM Transactions on Internet of Things 2, no. 4 (November 30, 2021): 1–26. http://dx.doi.org/10.1145/3467980.

Abstract:
Device-free context awareness is important to many applications. There are two broadly used approaches for device-free context awareness, i.e., video-based and radio-based. Video-based approaches can deliver good performance, but privacy is a serious concern. Radio-based context awareness has drawn researchers' attention instead, because it does not violate privacy and radio signals can penetrate obstacles. Existing works design explicit methods for each radio-based application. Furthermore, they use an additional step to extract features before conducting classification and exploit deep learning as a classification tool. Although this feature extraction step helps explore patterns in raw signals, it generates unnecessary noise and information loss. Raw CSI signals without initial data processing, however, were considered to contain no usable patterns. In this article, we are the first to propose an innovative deep learning–based general framework for both signal processing and classification. The key novelty of this article is that the framework can be generalised to all radio-based context awareness applications using raw CSI. We also eliminate the extra work of extracting features from raw radio signals. We conduct extensive evaluations to show the superior performance of our proposed method and its generalisation.
5

Li, Hui, Yapeng Liu, Wenzhong Lin, Lingwei Xu, and Junyin Wang. "Data Association Methods via Video Signal Processing in Imperfect Tracking Scenarios: A Review and Evaluation." Mathematical Problems in Engineering 2020 (August 31, 2020): 1–26. http://dx.doi.org/10.1155/2020/7549816.

Abstract:
In 5G scenarios, there are a large number of video signals that need to be processed. Multiobject tracking is one of the main directions in video signal processing, and data association is a very important link in tracking algorithms: the complexity and efficiency of the association method have a direct impact on the performance of multiobject tracking. Breakthroughs have been made in data association methods based on deep learning, and their performance has greatly improved compared with traditional methods. However, there is a lack of overviews of data association methods. Therefore, this article first analyzes the characteristics and performance of three traditional data association methods and then focuses on data association methods based on deep learning, divided by deep network structure: SOT methods, end-to-end methods, and Wasserstein metric methods. The performance of each tracking method is compared and analyzed. Finally, it summarizes the current common datasets and evaluation criteria for multiobject tracking and discusses challenges and development trends of data association technology; data association methods that ensure robust, real-time tracking still need continuous improvement.
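As a toy illustration of the data association step this review surveys, the following sketch greedily matches track boxes to detection boxes by intersection-over-union. It stands in for, and is far simpler than, both the traditional and the deep-learning methods discussed; the `min_iou` gate and greedy (rather than optimal Hungarian) matching are arbitrary simplifications:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, min_iou=0.3):
    """Greedy matching: repeatedly pair the highest-IoU (track, detection)."""
    pairs = [(iou(t, d), ti, di)
             for ti, t in enumerate(tracks)
             for di, d in enumerate(detections)]
    pairs.sort(reverse=True)  # best overlaps first
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score >= min_iou and ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return sorted(matches)

tracks = [(0, 0, 10, 10), (50, 50, 60, 60)]
detections = [(52, 49, 61, 60), (1, 1, 11, 11)]
print(associate(tracks, detections))  # → [(0, 1), (1, 0)]
```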
6

Guo, Jianbang, Peng Sun, and Sang-Bing Tsai. "A Study on the Optimization Simulation of Big Data Video Image Keyframes in Motion Models." Wireless Communications and Mobile Computing 2022 (March 16, 2022): 1–12. http://dx.doi.org/10.1155/2022/2508174.

Abstract:
In this paper, the signal of athletic sports video image frames is processed and studied according to the technology of big data. The sports video image-multiprocessing technology achieves interference-free research and analysis of sports technology and can meet multiple visual needs of sports technology analysis and evaluation through key technologies such as split-screen synchronous comparison, superimposed synchronous comparison, and video trajectory tracking. The sports video image-processing technology realizes the rapid extraction of key technical parameters of the sports scene, the panoramic map technology of sports video images, the split-lane calibration technology, and the development of special video image analysis software that is innovative in the field of athletics research. An image-blending approach is proposed to alleviate the problem of simple and complex background data imbalance, while enhancing the generalization ability of the network trained using small-scale datasets. Local detail features of the target are introduced in the online-tracking process by an efficient block-filter network. Moreover, online hard-sample learning is utilized to avoid the interference of similar objects to the tracker, thus improving the overall tracking performance. For the feature extraction problem of fuzzy videos, this paper proposes a fuzzy kernel extraction scheme based on the low-rank theory. The scheme fuses multiple fuzzy kernels of keyframe images by low-rank decomposition and then deblurs the video. Next, a double-detection mechanism is used to detect tampering points on the blurred video frames. Finally, the video-tampering points are located, and the specific way of video tampering is determined. Experiments on two public video databases and self-recorded videos show that the method is robust in fuzzy video forgery detection, and the efficiency of fuzzy video detection is improved compared to traditional video forgery detection methods.
7

Li, Gang, Ainiwaer Aizimaiti, and Yan Liu. "Quaternion Model of Fast Video Quality Assessment Based on Structural Similarity Normalization." Applied Mechanics and Materials 380-384 (August 2013): 3982–85. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.3982.

Abstract:
Video quality evaluation methods have been widely studied because of an increasing need in a variety of video processing applications, such as compression, analysis, communication, enhancement, and restoration. Quaternion models are also widely used to measure image or video quality. In this paper, we propose a new quaternion model that mainly describes the contour feature, surface feature, and temporal information of the video. We use structural similarity comparison to normalize the four quaternion parts separately, because each part of the quaternion uses a different metric. Structural similarity comparison is also used to measure the difference between reference videos and distorted videos. Experimental results show that the new method correlates well with perceived video quality when tested on the Video Quality Experts Group (VQEG) Phase I FR-TV test data set.
8

Kandriasari, Annis, Robinson Situmorang, Suyitno Muslim, and Jhoni Lagun Siang. "HOW TO DEVELOP A BREAD PROCESSING VIDEO STORYBOARD." Asia Proceedings of Social Sciences 5, no. 2 (December 30, 2019): 137–41. http://dx.doi.org/10.31580/apss.v5i2.1132.

Abstract:
The purpose of this study was to produce a video storyboard guide for a bread processing practicum and to determine the feasibility level of such a storyboard in a Bread Processing practicum course. This research uses research and development methods, conducted through the development of practicum guide video storyboards. Data were collected through interviews and questionnaires and analyzed using quantitative and qualitative techniques. The procedure for developing the instructional media consisted of preparing an outline of the contents of the material, preparing a description of the material, and preparing a storyboard. The storyboard contents were then validated by material experts, media experts, and learning experts. The material experts rated the storyboard as feasible, the media experts rated it in the feasible category, and the learning experts rated it as very feasible to develop into a practicum guidance video.
9

Sabot, F., M. Naaim, F. Granada, E. Suriñach, P. Planet, and G. Furdada. "Study of avalanche dynamics by seismic methods, image-processing techniques and numerical models." Annals of Glaciology 26 (1998): 319–23. http://dx.doi.org/10.3189/1998aog26-1-319-323.

Abstract:
Seismic signals of avalanches, related video images and numerical models were compared to improve the characterization of avalanche phenomena. Seismic data and video images from two artificially released avalanches were analysed to obtain more information about the origin of the signals. Image processing was used to compare the evolution of one avalanche front and the corresponding seismic signals. A numerical model was also used to simulate an avalanche flow in order to obtain mean- and maximum-velocity profiles. Prior to this, the simulated avalanche was verified using video images. The results indicate that the seismic signals recorded correspond to changes in avalanche type and path slope, interaction with obstacles and to phenomena associated with the stopping stage of the avalanche, suggesting that only part of the avalanche was recorded. These results account for the seismic signals previously obtained automatically in a wide avalanche area.
11

Zheng, Bao Guo, Bing Xu, and Zhong Jin Shi. "The Researches and Applications Based on Video Statistical Analysis of Average Population Density Estimation Methods." Applied Mechanics and Materials 513-517 (February 2014): 4539–42. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.4539.

Abstract:
This article describes average population density estimation methods based on video statistical analysis, and mainly discusses their research and application in an energy-efficient air conditioning system for the subway. The distributed intelligent control system on the subway station platform captures video images from multiple camera sensors and processes them with computer image processing methods. The fuzzy neural network, which models the human nervous system, has unique advantages in fuzzy information processing. When processing the video files, the image boundary is set fuzzily; the system can reasonably segment the crowd using the intelligent image analysis data and, moreover, can help estimate the population density.
12

Yadav, Piyush, Dhaval Salwala, Dibya Prakash Das, and Edward Curry. "Knowledge Graph Driven Approach to Represent Video Streams for Spatiotemporal Event Pattern Matching in Complex Event Processing." International Journal of Semantic Computing 14, no. 03 (September 2020): 423–55. http://dx.doi.org/10.1142/s1793351x20500051.

Abstract:
Complex Event Processing (CEP) is an event processing paradigm to perform real-time analytics over streaming data and match high-level event patterns. Presently, CEP is limited to processing structured data streams. Video streams are complicated due to their unstructured data model and limit CEP systems' ability to perform matching over them. This work introduces a graph-based structure for continuously evolving video streams, which enables the CEP system to query complex video event patterns. We propose the Video Event Knowledge Graph (VEKG), a graph-driven representation of video data. VEKG models video objects as nodes and their relationship interactions as edges over time and space. It creates a semantic knowledge representation of video data derived from the detection of high-level semantic concepts from the video using an ensemble of deep learning models. A CEP-based state optimization, the VEKG-Time Aggregated Graph (VEKG-TAG), is proposed over the VEKG representation for faster event detection. VEKG-TAG is a spatiotemporal graph aggregation method that provides a summarized view of the VEKG graph over a given time length. We defined a set of nine event pattern rules for two domains (Activity Recognition and Traffic Management), which act as queries applied over VEKG graphs to discover complex event patterns. To show the efficacy of our approach, we performed extensive experiments over 801 video clips across 10 datasets. The proposed VEKG approach was compared with other state-of-the-art methods and was able to detect complex event patterns over videos with F-scores ranging from 0.44 to 0.90. In the given experiments, the optimized VEKG-TAG was able to reduce 99% and 93% of VEKG nodes and edges, respectively, with 5.19× faster search time, achieving sub-second median latency of 4–20 ms.
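The VEKG idea, modeling detected objects as nodes and their spatial relations as edges and then aggregating over time, can be caricatured in a few lines. The `near` centroid threshold, the single "near" relation, and the aggregation rule below are invented for illustration and are much simpler than the paper's ensemble-driven construction:

```python
from collections import defaultdict

def build_vekg_frame(objects, near=50):
    """One frame of a video knowledge graph: object labels become nodes,
    and a 'near' edge links objects whose centroids are within a threshold."""
    edges = set()
    for i, (la, (xa, ya)) in enumerate(objects):
        for lb, (xb, yb) in objects[i + 1:]:
            if (xa - xb) ** 2 + (ya - yb) ** 2 <= near ** 2:
                edges.add(tuple(sorted((la, lb))))
    return edges

def time_aggregated_graph(frame_graphs):
    """Summarize per-frame graphs: each edge is annotated with the
    frame indices where it held (a simplified VEKG-TAG view)."""
    tag = defaultdict(list)
    for t, edges in enumerate(frame_graphs):
        for e in edges:
            tag[e].append(t)
    return dict(tag)

frames = [
    [("person", (10, 10)), ("car", (40, 20))],   # person near car
    [("person", (10, 10)), ("car", (200, 20))],  # car drives away
]
graphs = [build_vekg_frame(f) for f in frames]
print(time_aggregated_graph(graphs))  # → {('car', 'person'): [0]}
```

An event query such as "person near car" then reduces to a lookup in the aggregated structure instead of a scan over every frame, which is the intuition behind the reported speed-up.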
13

Vishaka Gayathri, D., Shrutee Shree, Taru Jain, and K. Sornalakshmi. "Real Time System for Human Identification and Tracking from Surveillance Videos." International Journal of Engineering & Technology 7, no. 3.12 (July 20, 2018): 244. http://dx.doi.org/10.14419/ijet.v7i3.12.16034.

Abstract:
Security concerns have raised the need for intelligent surveillance systems: a viable system with automated methods for person identification that can detect, track, and recognize persons in real time is required. Traditional detection techniques have not been able to analyze the huge amount of live video generated in real time, so there is a need for live streaming video analytics, which includes processing and analyzing large-scale visual data such as images or videos to find content useful for interpretation. In this work, an automated surveillance system for real-time detection, recognition, and tracking of persons in video streams from multiple video inputs is presented. In addition, the current location of an individual can be searched with the provided toolbar. A model is proposed that uses a messaging queue to receive/transfer video feeds; the frames in the video are analyzed using image processing modules to identify and recognize persons against the training data sets. The main aim of this project is to overcome the challenges of integrating the open-source tools that make up the system for tagging and searching people.
14

Skalkin, Anton M., and Yuliya V. Stroeva. "THE METHOD OF DATA FLOW PROCESSING INCOMING FROM AN IP-CAMERA." Автоматизация процессов управления 4, no. 66 (2021): 39–45. http://dx.doi.org/10.35752/1991-2927-2021-4-66-39-45.

Abstract:
The article discusses methods for analyzing the video stream incoming from an IP camera using image analysis, machine learning, and knowledge engineering. For efficient data storage, a motion identification method was developed that selects the frames to keep using a neural network (NN) classification of a block representation of the frame difference. In addition, a domain knowledge base model was developed to analyze the events taking place in the image. The article proposes a method of event analysis based on logical inference, where the results of the YOLO neural network are used as input data. The YOLO NN detects entities in the image, such as a person, a table, a TV set, etc. To confirm the method's effectiveness, a software system comprising two modules was designed. The first module tests the effectiveness of the motion identification method and saves the smallest number of frames necessary to determine the events taking place. The second module tests the effectiveness of the event analysis method, producing information on the state of the monitored territories where video surveillance is performed. The input data for the second module are the frames saved by the first module.
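The block representation of the frame difference mentioned in this abstract can be sketched as follows. In the paper an NN classifies these block maps; this toy version simply thresholds the fraction of changed pixels per block, and the block size and thresholds are arbitrary:

```python
import numpy as np

def motion_blocks(prev, curr, block=8, pix_thresh=25, frac_thresh=0.5):
    """Block representation of the frame difference: a block counts as
    'moving' when enough of its pixels changed by more than pix_thresh."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > pix_thresh
    h, w = diff.shape
    return [
        (by, bx)
        for by in range(0, h, block)
        for bx in range(0, w, block)
        if diff[by:by + block, bx:bx + block].mean() >= frac_thresh
    ]

prev = np.zeros((32, 32), dtype=np.uint8)
curr = prev.copy()
curr[8:16, 8:16] = 200  # an object appears in exactly one 8x8 block
print(motion_blocks(prev, curr))  # → [(8, 8)]
```

A frame would then be stored only when its block map is non-empty (or, in the paper, when the classifier says the motion is significant), which is what keeps the number of saved frames small.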
15

Zhang, Pingting, and JianPeng Hou. "Physical Education Teaching Strategy under Internet of Things Data Computing Intelligence Analysis." Computational Intelligence and Neuroscience 2022 (April 11, 2022): 1–10. http://dx.doi.org/10.1155/2022/5299497.

Abstract:
Racket sports such as tennis are amongst the most popular recreational sports activities. Optimizing tennis teaching methods and improving teaching modes can effectively improve the teaching quality of tennis. In this study, a video and image action recognition system based on image processing techniques and the Internet of Things is developed to overcome the shortcomings of traditional tennis teaching methods. To validate its performance, the students of tennis courses are divided into an experimental group and a control group. The control group is taught using the traditional tennis teaching method, whereas the experimental group is taught using the IoT video and image recognition teaching system. Three factors, service throwing height, arm elbow angle, and knee bending angle, are measured for both groups and compared with those of world elite tennis players. The results show that the students' serving abilities in the experimental group are significantly improved using the video and image recognition system based on IoT, and they are better than those of the students in the control group. The proposed video and image processing technique can be applied in students' physical education and can provide a basis for the innovation of tennis teaching strategies in physical education.
16

Muhammad, Khan, Mohammad S. Obaidat, Tanveer Hussain, Javier Del Ser, Neeraj Kumar, Mohammad Tanveer, and Faiyaz Doctor. "Fuzzy Logic in Surveillance Big Video Data Analysis." ACM Computing Surveys 54, no. 3 (June 2021): 1–33. http://dx.doi.org/10.1145/3444693.

Abstract:
CCTV cameras installed for continuous surveillance generate enormous amounts of data daily, forging the term Big Video Data (BVD). The active practice of BVD includes intelligent surveillance and activity recognition, among other challenging tasks. To efficiently address these tasks, the computer vision research community has provided monitoring systems, activity recognition methods, and many other computationally complex solutions for the purposeful usage of BVD. Unfortunately, the limited capabilities of these methods, their higher computational complexity, and stringent installation requirements hinder their practical implementation in real-world scenarios, which still demand human operators sitting in front of cameras to monitor activities or make actionable decisions based on BVD. The usage of human-like logic, known as fuzzy logic, has increasingly been employed for various data science applications such as control systems, image processing, decision making, routing, and advanced safety-critical systems. This is due to its ability to handle various sources of real-world domain and data uncertainty, generating easily adaptable and explainable data-based models. Fuzzy logic can be effectively used for surveillance as a complement to huge artificial intelligence models and tiresome training procedures. In this article, we draw researchers' attention toward the usage of fuzzy logic for surveillance in the context of BVD. We carry out a comprehensive literature survey of methods for vision sensory data analytics that resort to fuzzy logic concepts. Our overview highlights the advantages, downsides, and challenges in existing video analysis methods based on fuzzy logic for surveillance applications. We enumerate and discuss the datasets used by these methods, and finally provide an outlook toward future research directions derived from our critical assessment of the efforts invested so far in this exciting field.
17

Koriashkina, L. S., and H. V. Symonets. "APPLICATION OF MACHINE LEARNING ALGORITHMS FOR PROCESSING COMMENTS FROM THE YOUTUBE VIDEO HOSTING UNDER TRAINING VIDEOS." Science and Transport Progress. Bulletin of Dnipropetrovsk National University of Railway Transport, no. 6(90) (April 8, 2021): 33–42. http://dx.doi.org/10.15802/stp2020/225264.

Abstract:
Purpose. Detecting toxic comments on the YouTube video hosting service under training videos by classifying unstructured text using a combination of machine learning methods. Methodology. To work with this type of data, machine learning methods were used for cleaning, normalizing, and presenting textual data in a form suitable for computer processing. To classify comments as "toxic", we used a logistic regression classifier, linear support vector classification with and without the stochastic gradient descent learning method, a random forest classifier, and a gradient boosting classifier. To assess the classifiers, we computed the confusion matrix, accuracy, recall, and F-measure; for a more general assessment, cross-validation was used. The Python programming language was used. Findings. Based on the assessment indicators, the most suitable methods were selected: support vector machine (Linear SVM), with and without training by stochastic gradient descent. The described technologies can be used to analyze the textual comments under any training videos to detect toxic reviews, and the approach can also be useful for identifying unwanted or even aggressive information on social networks or services where reviews are provided. Originality. It consists in a combination of methods for preprocessing a specific type of text, taking into account such features as the possible presence of timecodes, emoji, links, and the like, as well as in the adaptation of machine learning classification methods for the analysis of Russian-language comments. Practical value. It lies in optimizing (simplifying) the comment analysis process. The need for this processing is due to growing volumes of text data, especially in education, given quarantine conditions and the transition to distance learning. The volume of educational Internet content already requires automated processing and analysis of feedback, and over time this need will only grow.
18

Zhou, Ji He, and An Yang. "Comparative Study on Analytic Data Smoothing Method of High Speed Video." Applied Mechanics and Materials 703 (December 2014): 211–14. http://dx.doi.org/10.4028/www.scientific.net/amm.703.211.

Abstract:
Commonly used video data smoothing methods in analytic systems are interpolation and filtering. Sports biomechanics signals extracted from images often follow a quadratic function, as in projectile motion. In the past ten years, few scholars have conducted a comparative study of the effect of different smoothing methods on this type of signal data. We first present the mathematical principles of the various interpolation and filtering methods, derive their advantages, disadvantages, and scope of application through mathematical reasoning combined with the literature, and then validate the findings experimentally and draw conclusions.
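The comparison the authors describe, generic filtering versus smoothing that matches the quadratic signal model, can be reproduced on synthetic projectile-like data. The noise level, kernel size, and seed below are arbitrary choices for illustration:

```python
import numpy as np

t = np.linspace(0, 1, 51)
truth = 4.9 * t * (1 - t)  # parabolic (projectile-like) height profile
rng = np.random.default_rng(2)
noisy = truth + rng.normal(0, 0.05, t.size)

# Filtering approach: 5-point moving average (model-agnostic).
kernel = np.ones(5) / 5
filtered = np.convolve(noisy, kernel, mode="same")

# Fitting approach: least-squares quadratic, matching the signal model.
coeffs = np.polyfit(t, noisy, deg=2)
fitted = np.polyval(coeffs, t)

# RMSE against the ground truth, ignoring the edge samples where the
# moving average is distorted by zero padding.
rmse = lambda est: float(np.sqrt(np.mean((est[5:-5] - truth[5:-5]) ** 2)))
print(f"raw {rmse(noisy):.3f}  filtered {rmse(filtered):.3f}  fitted {rmse(fitted):.3f}")
```

With the signal genuinely quadratic, the model-matched fit recovers the most signal, which mirrors the paper's point that the right smoothing method depends on the signal model.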
19

Sun, Hujun. "Big Data Image Processing Based on Coefficient 3D Reconstruction Model." Mobile Information Systems 2022 (April 22, 2022): 1–10. http://dx.doi.org/10.1155/2022/2301795.

Abstract:
In the history of human society, information is an indispensable part of development, and extracting useful data from the huge amount of data can effectively solve some real-life problems. At the same time, with the continuous improvement of modern technology and computer hardware and equipment, the demand for information subjects is getting higher and higher. The field of human research is gradually moving in the direction of multiscientific exploration and deepening of related work. The related information disciplines have been studied by more scholars and experts in the field to a greater extent and put forward more demanding theories and methods. In the process of human social development, things are constantly changing and updating, which makes data mining technology more and more attention. This article first describes the relevant basic theoretical knowledge of 3D reconstruction, and secondly, it analyzes the big data platform technology, which mainly includes the analysis of the big data platform architecture, the description of the Hadoop distributed architecture, and the analysis of the HBase nonrelational database. Finally, it studies the video image feature extraction technology of video content and uses this to study the design of a big data image processing platform for 3D reconstruction models.
20

Гаврилов, Дмитро Сергійович, Сергій Степанович Бучік, Юрій Михайлович Бабенко, Сергій Сергійович Шульгін, and Олександр Васильович Слободянюк. "Метод обробки вiдеодaних з можливістю їх захисту після квaнтувaння" [A method of video data processing with the possibility of their protection after quantization]. RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 2 (June 2, 2021): 64–77. http://dx.doi.org/10.32620/reks.2021.2.06.

Abstract:
The subject of research in the article is the video processing processes based on the JPEG platform for data transmission in the information and telecommunication network. The aim is to build a method for processing a video image with the possibility of protecting it at the quantization stage with subsequent arithmetic coding. That will allow, while preserving the structural and statistical regularity, to ensure the necessary level of accessibility, reliability, and confidentiality when transmitting video data. Task: research of known methods of selective video image processing with the subsequent formalization of the video image processing procedure at the quantization stage and statistical coding of significant blocks based on the JPEG platform. The methods used are an algorithm based on the JPEG platform, methods for selecting significant informative blocks, arithmetic coding. The following results were obtained. A method for processing a video image with the possibility of its protection at the stage of quantization with subsequent arithmetic coding has been developed. This method will allow, while preserving the structural and statistical regularity, to fulfill the set requirements for an accessible, reliable, and confidential transmission of video data. Ensuring the required level of availability is associated with a 30% reduction in the video image volume compared to the original volume. Simultaneously, the provision of the required level of confidence is confirmed by an estimate of the peak signal-to-noise ratio for an authorized user, which is dB. Ensuring the required level of confidentiality is confirmed by an estimate of the peak signal-to-noise ratio in case of unauthorized access, which is equal to dB. Conclusions. The scientific novelty of the results obtained is as follows: for the first time, two methods of processing video images at the quantization stage have been proposed. 
The proposed technologies fulfill the assigned tasks of ensuring the required level of confidentiality at a given level of confidence. At the same time, the method of using encryption tables has a higher level of cryptographic stability than the method of using the key matrix, owing to a more complex mathematical apparatus, which in turn increases the data processing time. To fulfill the requirement of data availability, it is proposed to use arithmetic coding for informative blocks, which should be more efficient than the methods of code tables. So, the method of using the encryption tables has greater cryptographic stability, and the method of using the key matrix has higher performance. At the same time, the use of arithmetic coding will satisfy the need for accessibility by reducing the initial volume.
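The key-matrix idea can be illustrated with a minimal sketch. The paper's actual transform is not reproduced here; the XOR scrambling and all values below are illustrative assumptions showing how quantized coefficients can be protected reversibly for an authorized holder of the key matrix:

```python
def protect_block(quantized, key):
    """Scramble quantized coefficients with a key matrix (XOR is reversible)."""
    return [[q ^ k for q, k in zip(q_row, k_row)]
            for q_row, k_row in zip(quantized, key)]

def recover_block(protected, key):
    """An authorized user with the same key matrix undoes the scrambling."""
    return protect_block(protected, key)  # XOR is its own inverse

block = [[5, 3], [2, 7]]   # toy 2x2 block of quantized coefficients
key = [[12, 9], [4, 30]]   # hypothetical key matrix of the same size
scrambled = protect_block(block, key)
```

Because the scrambling happens after quantization, the statistical structure needed by the subsequent arithmetic coder is largely preserved, while an unauthorized decoder sees only the scrambled coefficients.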
21

Khosravi, Mohammad R., Sadegh Samadi, and Reza Mohseni. "Spatial Interpolators for Intra-Frame Resampling of SAR Videos: A Comparative Study Using Real-Time HD, Medical and Radar Data." Current Signal Transduction Therapy 15, no. 2 (December 1, 2020): 144–96. http://dx.doi.org/10.2174/2213275912666190618165125.

Abstract:
Background: Real-time video coding is a very interesting area of research with extensive applications in remote sensing and medical imaging. Many research works and multimedia standards have been developed for this purpose. Some processing ideas in the area are focused on second-step (additional) compression of videos coded by existing standards like MPEG 4.14. Materials and Methods: In this article, an evaluation of some techniques with different complexity orders for the video compression problem is performed. All compared techniques are based on interpolation algorithms in the spatial domain. In detail, the acquired data follow four different interpolators in terms of computational complexity, including the fixed weights quartered interpolation (FWQI) technique and the Nearest Neighbor (NN), Bi-Linear (BL) and Cubic Convolution (CC) interpolators. They are used for the compression of some HD color videos in real-time applications, real frames of video synthetic aperture radar (video SAR or ViSAR) and a high-resolution medical sample. Results: Comparative results are also described for three different metrics, including two reference-based Quality Assessment (QA) measures and an edge preservation factor, to achieve a general perception of the various dimensions of the mentioned problem. Conclusion: Comparisons show that there is a clear trade-off among video codecs in terms of greater similarity to a reference, preservation of high-frequency edge information, and low computational complexity.
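Of the four interpolators compared, the bilinear one is the easiest to sketch. The following is an illustrative pure-Python resampler for a grayscale image stored as a list of rows, not the authors' implementation:

```python
def bilinear_resample(img, new_h, new_w):
    """Resample a grayscale image (list of rows) with bilinear interpolation.
    Assumes the input has at least 2 rows and 2 columns."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(new_h):
        y = i * (h - 1) / max(new_h - 1, 1)
        y0 = min(int(y), h - 2)
        dy = y - y0
        row = []
        for j in range(new_w):
            x = j * (w - 1) / max(new_w - 1, 1)
            x0 = min(int(x), w - 2)
            dx = x - x0
            # weighted average of the four surrounding pixels
            row.append(img[y0][x0] * (1 - dy) * (1 - dx)
                       + img[y0][x0 + 1] * (1 - dy) * dx
                       + img[y0 + 1][x0] * dy * (1 - dx)
                       + img[y0 + 1][x0 + 1] * dy * dx)
        out.append(row)
    return out

up = bilinear_resample([[0, 2], [2, 4]], 3, 3)  # upsample a 2x2 tile to 3x3
```

NN interpolation would simply copy the nearest pixel (cheaper, blockier), while CC fits a cubic kernel over a 4×4 neighborhood (smoother, costlier), which is the complexity trade-off the article measures.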
22

Kataieva, Yevheniia, and Anton Rebrikov. "URGENCY OF USING HIDDEN DATA TRANSMISSION IN VIDEO FILES." Management of Development of Complex Systems, no. 46 (June 24, 2021): 48–54. http://dx.doi.org/10.32347/2412-9933.2021.46.48-54.

Abstract:
The article examines the problem of protecting information from unwanted access, which mankind has tried to solve throughout its existence. Nowadays, the widespread use of electronic means of communication, electronic eavesdropping and fraud, a variety of computer viruses, and other electronic hazards place high demands on the protection of information. Thus, the study of digital steganography is an urgent task. There are two main areas of hidden data transmission: cryptography and steganography. The purpose of cryptography is to restrict access to information by encrypting it. Unlike cryptography, steganography allows hiding the very fact of the presence of hidden data. The study examined the main methods of covert data transmission using computer steganography, namely: the method of using system-reserved areas of digital data formats; methods of hiding information by special formatting of text files, which include shifting words, sentences or paragraphs, selecting certain positions of letters in the text, or using properties of system fields that are not displayed on the screen; the method of using simulation functions; methods of using unused disk sectors; and the method of using redundant media files (audio, photo and video). Currently, due to the growth of information and the increasing bandwidth of communication channels, the issue of hiding information in video sequences is becoming increasingly important. The transmission of digital video has in recent years become a typical event and does not arouse suspicion. In the course of the research, the peculiarities of hiding information in video files are considered and existing algorithms of computer video steganography are compared. The task is to develop an original algorithm for embedding information in the blue color channel of video files. The object of research is the transfer of hidden data in digital media files.
The subject of research is the transmission of hidden data in the video stream. The purpose of the research is to review the subject area, examine the available methods of embedding information in media files in general and in video files specifically, identify the advantages and disadvantages of existing algorithms, and develop an original algorithm of video steganography based on previously obtained research results. Research methods: methods of information theory, probability theory and mathematical statistics; methods of digital processing of signals, static images and video files; methods of vector analysis. The results of the research: an overview of the features of hiding information in video files and a comparison of existing algorithms of computer video steganography.
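The blue-channel embedding the authors target can be sketched with the classic least-significant-bit approach on a single frame. The frame layout and function names below are illustrative assumptions, not the authors' algorithm:

```python
def embed_bits(frame, bits):
    """Hide message bits in the least significant bit of the blue channel.
    `frame` is a list of rows of [r, g, b] pixels."""
    coords = [(i, j) for i in range(len(frame)) for j in range(len(frame[0]))]
    assert len(bits) <= len(coords), "message too long for this frame"
    out = [[list(px) for px in row] for row in frame]
    for bit, (i, j) in zip(bits, coords):
        out[i][j][2] = (out[i][j][2] & ~1) | bit  # touch only the blue LSB
    return out

def extract_bits(frame, n):
    """Read the first n hidden bits back out of the blue channel."""
    coords = [(i, j) for i in range(len(frame)) for j in range(len(frame[0]))]
    return [frame[i][j][2] & 1 for (i, j) in coords[:n]]

frame = [[[10, 20, 30], [40, 50, 60]],
         [[70, 80, 90], [100, 110, 120]]]
stego = embed_bits(frame, [1, 0, 1])
```

The blue channel is a common carrier choice because the human eye is least sensitive to small blue variations, so a one-level change per pixel is visually negligible.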
23

Koch, Christopher G., and Ginny Ju. "Video Image Presentation Methods for High-Speed Mail Piece Encoding." Proceedings of the Human Factors Society Annual Meeting 32, no. 19 (October 1988): 1419–23. http://dx.doi.org/10.1177/154193128803201924.

Abstract:
A program of research was conducted to determine the design requirements for a prototype image processing system to provide high-resolution video images of mail pieces—irregular parcels and pieces, flats, and letters—and enable highspeed data entry of coding information by operators. Experiments were performed to determine effective image transition methods, pacing strategies, and image preview methods for entering numerals from ZIP Codes of mail piece addresses on a 10-key keyboard. Results showed performance advantages of response speed, throughput, and fewer misses for fade-out transition between images, combined operator and machine pacing, and image preview by early transition to the next image in queue.
24

Zhu, Li, and Yi Min Yang. "Real-Time Multitasking Video Encoding Processing System of Multicore." Applied Mechanics and Materials 66-68 (July 2011): 2074–79. http://dx.doi.org/10.4028/www.scientific.net/amm.66-68.2074.

Abstract:
This paper presents optimizations based on the processor series produced by NVIDIA, such as GeForce, Tegra and Nexus, and discusses the future development of video image processors. It surveys the most popular current DSP optimization techniques and objectives, and optimizes the designs of methods available in the existing literature. Based on NVIDIA's series of products, it specifically discusses the CUDA GPU architecture, presents the hardware and algorithms of currently popular video encoding equipment, and applies practical techniques to improve the transmission and encoding of multimedia data.
25

Silakov, Nikolai V., and Kirill L. Tassov. "OVERVIEW OF THE TEXT AREA DETECTION ALGORITHMS IN VIDEO STREAM FRAMES." RSUH/RGGU Bulletin. Series Information Science. Information Security. Mathematics, no. 2 (2020): 27–45. http://dx.doi.org/10.28995/2686-679x-2020-2-27-45.

Abstract:
The widespread development of video technologies, together with social platforms such as YouTube and TikTok, has led to a sharp increase in the volume of information. The processing and analysis of these data fall to human operators, and extracting useful text information from an entire video stream is especially difficult. That issue is particularly acute in areas such as state defense and law enforcement, where the volume of incoming video information is huge. A video stream can contain useful meta-information helpful for searching, classifying, and linking the data to specific video content. There are already systems for automatic recognition of state registration plates, vehicles, railway car numbers, street names, and video titles. However, the frames of a video stream may still contain much more meta-information, such as markings on various objects and technical products, building numbers, ads, signs, inscriptions, text on documents, and much more. Creating a system for automatic detection of any text in video content will significantly improve processing and increase the efficiency of information search engines. This article provides an overview of methods for detecting text regions in frames of a video stream. The methods are analyzed in terms of applicability and universality. A comparative analysis and evaluation of popular methods for detecting text areas are given.
26

Pal, Dibyendu, and Mallikarjuna Chunchu. "Smoothing of vehicular trajectories under heterogeneous traffic conditions to extract microscopic data." Canadian Journal of Civil Engineering 45, no. 6 (June 2018): 435–45. http://dx.doi.org/10.1139/cjce-2017-0452.

Abstract:
Trajectory data collected using video image processing techniques are prone to noise. Trajectory data extracted using commercially available video image processing software (TRAZER) contain the noise associated with false detection in addition to white noise. This paper proposes a method based on complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) to smooth such trajectory data. In this approach, trajectory data are decomposed into a finite number of intrinsic modes and a unique residue is computed to obtain each mode. This monotonic residue gives the smoothed trajectory. The instantaneous speeds of the vehicles are then estimated using continuous wavelet transforms, discrete wavelet transforms, and numerical differentiation. Internal consistency analyses show that the wavelet transform methods are effective in reducing the noise amplification of the speed profile. It was also observed that the corrections applied to trajectory data have a significant effect on macroscopic traffic relations.
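CEEMDAN itself requires a full empirical-mode-decomposition implementation, but the overall smooth-then-differentiate pipeline can be illustrated with a much simpler centered moving average standing in for the monotonic residue, followed by central-difference speed estimation:

```python
def moving_average(x, k):
    """Crude stand-in for the CEEMDAN residue: a centered moving average."""
    half = k // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def speeds(positions, dt):
    """Central-difference speed estimate (one-sided at the endpoints)."""
    n = len(positions)
    result = []
    for i in range(n):
        lo, hi = max(i - 1, 0), min(i + 1, n - 1)
        result.append((positions[hi] - positions[lo]) / (dt * (hi - lo)))
    return result

smooth = moving_average([0.0, 0.0, 3.0, 0.0, 0.0], 3)  # an impulse gets spread out
v = speeds([0.0, 1.0, 2.0, 3.0, 4.0], dt=1.0)          # uniform motion -> constant speed
```

The paper's point survives even this toy version: differentiating a raw noisy trajectory amplifies noise, so smoothing (by CEEMDAN residue, wavelets, or even this moving average) must come first.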
27

Betz-Stablein, Brigid, Martin L. Hazelton, and William H. Morgan. "Modelling retinal pulsatile blood flow from video data." Statistical Methods in Medical Research 27, no. 5 (September 1, 2016): 1575–84. http://dx.doi.org/10.1177/0962280216665504.

Abstract:
Modern-day datasets continue to increase in both size and diversity. One example of such 'big data' is video data. Within the medical arena, more disciplines are using video as a diagnostic tool. Given the large amount of data stored within a video image, it is one of the most time-consuming types of data to process and analyse. Therefore, it is desirable to have automated techniques to extract, process and analyse data from video images. While many methods have been developed for extracting and processing video data, statistical modelling to analyse the outputted data has rarely been employed. We develop a method to take a video sequence of periodic nature, extract the RGB data and model the changes occurring across the contiguous images. We employ harmonic regression to model periodicity, with autoregressive terms accounting for the error process associated with the time series nature of the data. A linear spline is included to account for movement between frames. We apply this model to video sequences of retinal vessel pulsation, which is the pulsatile component of blood flow. Slope and amplitude are calculated for the curves generated from the application of the harmonic model, providing clinical insight into the location of obstruction within the retinal vessels. The method can be applied to individual vessels, or to smaller segments such as 2 × 2 pixels, which can then be interpreted easily as a heat map.
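The core of the harmonic model (without the autoregressive error terms or the linear spline) can be sketched as an ordinary least-squares fit of a mean level plus one cosine/sine pair; the pulsation amplitude then follows from the two harmonic coefficients. This is a simplified sketch, not the authors' full model:

```python
import math

def fit_harmonic(t, y, freq):
    """Least-squares fit of y ~ a + b*cos(2*pi*f*t) + c*sin(2*pi*f*t)."""
    X = [[1.0, math.cos(2 * math.pi * freq * ti), math.sin(2 * math.pi * freq * ti)]
         for ti in t]
    # normal equations: (X^T X) beta = X^T y
    A = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    b = [sum(X[k][i] * y[k] for k in range(len(t))) for i in range(3)]
    for col in range(3):                      # Gaussian elimination with pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                       # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, 3))) / A[r][r]
    return beta

# synthetic "pixel intensity" pulsating at 1 Hz around a mean of 2
t = [i / 100 for i in range(100)]
y = [2.0 + 3.0 * math.cos(2 * math.pi * ti) for ti in t]
a, b_cos, c_sin = fit_harmonic(t, y, freq=1.0)
amplitude = math.sqrt(b_cos ** 2 + c_sin ** 2)  # pulsation amplitude
```

In the paper, amplitude and slope maps of such fits over small pixel blocks localize where pulsatile flow is attenuated by an obstruction.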
28

PRABAKARAN, S., R. SAHU, and S. VERMA. "A WAVELET APPROACH FOR CLASSIFICATION OF MICROARRAY DATA." International Journal of Wavelets, Multiresolution and Information Processing 06, no. 03 (May 2008): 375–89. http://dx.doi.org/10.1142/s0219691308002409.

Abstract:
Microarray technologies facilitate the generation of vast amounts of bio-signal or genomic signal data. The major challenge in processing these signals is the extraction of the global characteristics of the data due to their huge dimension and the complex relationships among various genes. Statistical methods are widely used in this domain, but various limitations, such as extensive preprocessing, noise sensitivity, the requirement of critical input parameters and the need for prior knowledge about the microarray dataset, emphasise the need for better exploratory techniques. Transform-oriented signal processing techniques are successful in many data processing applications, such as image and video processing, but the use of wavelets in analyzing microarray bio-signals has not been sufficiently explored. The aim of this paper is to propose a wavelet power spectrum based technique for dimensionality reduction through gene selection and for the classification problem of gene microarray data. The proposed method was administered on such datasets and the results are encouraging. The present method is robust to noise since no preprocessing has been applied. Also, it does not require any critical input parameters or any prior knowledge about the data, which is required in many existing methods.
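As a hedged illustration of a wavelet power spectrum, the sketch below computes the per-level power of Haar detail coefficients for a single expression profile; the paper's actual wavelet family and normalization may differ:

```python
def haar_power_spectrum(signal):
    """Per-level power of Haar detail coefficients.
    len(signal) must be a power of two."""
    powers = []
    x = list(signal)
    while len(x) > 1:
        approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
        detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
        powers.append(sum(d * d for d in detail))  # energy at this scale
        x = approx
    return powers

flat = haar_power_spectrum([1, 1, 1, 1])         # no variation -> no power anywhere
alternating = haar_power_spectrum([1, -1, 1, -1])  # all power at the finest scale
```

Profiles whose power concentrates at informative scales can then be kept as features, which is how a power spectrum supports gene selection without tuning parameters.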
29

Yanakova, E. S., G. T. Macharadze, L.G. Gagarina, and A. A. Shvachko. "Parallel-Pipelined Video Processing in Multicore Heterogeneous Systems on Chip." Proceedings of Universities. Electronics 26, no. 2 (April 2021): 172–83. http://dx.doi.org/10.24151/1561-5405-2021-26-2-172-183.

Abstract:
A turn from homogeneous to heterogeneous architectures makes it possible to achieve advantages in efficiency, size, weight and power consumption, which is especially important for embedded solutions. However, the development of parallel software for heterogeneous computer systems is a rather complex task due to the requirements of high efficiency, easy programming and ease of scaling. In the paper, the efficiency of parallel-pipelined processing of video information in multiprocessor heterogeneous systems on a chip (SoC), such as DSP, GPU, ISP, VDP, VPU and others, has been investigated. A typical scheme of parallel-pipelined processing of video data using various accelerators has been presented. A scheme of parallel-pipelined processing of video data on the heterogeneous SoC 1892VM248 has been developed. Methods of efficient parallel-pipelined processing of video data in heterogeneous SoCs, consisting of the operating system level, the programming technologies level and the application level, have been proposed. A comparative analysis of the most common programming technologies, such as OpenCL, OpenMP, MPI and OpenAMP, has been performed. The analysis has shown that, depending on the device's end purpose, two programming paradigms should be applied: one based on OpenCL technology (for embedded systems) and one based on MPI technology (for inter-cell and inter-processor interaction). The results obtained for parallel-pipelined processing within a face recognition framework have confirmed the effectiveness of the chosen solutions.
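The parallel-pipelined scheme can be mimicked in miniature with one worker thread per stage connected by FIFO queues, the threads standing in for the DSP/GPU/ISP accelerators of the SoC. This is an illustrative sketch, not the paper's implementation:

```python
import queue
import threading

def run_pipeline(frames, stages):
    """Each stage runs in its own thread; queues connect the stages,
    mimicking a parallel pipeline of hardware accelerators."""
    qs = [queue.Queue() for _ in range(len(stages) + 1)]

    def worker(stage, q_in, q_out):
        while True:
            item = q_in.get()
            if item is None:          # sentinel: end of stream
                q_out.put(None)
                break
            q_out.put(stage(item))

    threads = [threading.Thread(target=worker, args=(s, qs[i], qs[i + 1]))
               for i, s in enumerate(stages)]
    for th in threads:
        th.start()
    for frame in frames:
        qs[0].put(frame)
    qs[0].put(None)
    out = []
    while (item := qs[-1].get()) is not None:
        out.append(item)
    for th in threads:
        th.join()
    return out

# two toy stages ("denoise" then "scale") applied to three toy frames
result = run_pipeline([1, 2, 3], [lambda x: x + 1, lambda x: x * 2])
```

The throughput benefit comes from each stage working on a different frame at the same time, exactly as when ISP, DSP and GPU blocks process consecutive frames concurrently.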
30

Scollar, Irwin, Bernd Weidner, and Karel Segeth. "Display of archaeological magnetic data." GEOPHYSICS 51, no. 3 (March 1986): 623–33. http://dx.doi.org/10.1190/1.1442116.

Abstract:
Magnetic data from archaeological sites have traditionally been displayed by contour, isometric, and dot‐density plotting, or by simulated gray‐scale techniques using symbol overprinting. These methods do not show fine linear structures in the data which are of great interest to archaeologists. If true gray‐scale methods using a modern video display, followed by film recording for hard copy are employed, image processing techniques can be applied to enhance the geometric structures of archaeological interest. Interpolation techniques for enlarging data to full screen size, along with compression methods to keep data within gray‐scale capabilities, are needed. Such techniques would introduce minimum distortion and allow faint details to be seen in the vicinity of strong anomalies. Postprocessing methods based on rapid image spatial filtering and enhancement algorithms could then be applied in an interactive environment.
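One simple compression scheme of the kind described, assumed here purely for illustration, clips the field at low and high percentiles and scales the remainder linearly into the 0–255 gray range, so faint details near strong anomalies stay visible:

```python
def to_gray(values, lo_pct=2.0, hi_pct=98.0):
    """Clip at percentiles, then scale linearly into the 0..255 gray range."""
    ordered = sorted(values)
    def percentile(p):
        idx = round(p / 100 * (len(ordered) - 1))
        return ordered[idx]
    lo, hi = percentile(lo_pct), percentile(hi_pct)
    span = (hi - lo) or 1.0          # avoid division by zero on flat data
    return [max(0, min(255, round((v - lo) / span * 255))) for v in values]

gray = to_gray(list(range(100)))     # a toy ramp of magnetic readings
```

Clipping the extreme 2% of readings keeps a few strong iron anomalies from compressing the faint archaeological features into one or two gray levels.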
31

Lee, Yun-Gu. "Low Memory Access Video Stabilization for Low-Cost Camera SoC." Sensors 22, no. 6 (March 18, 2022): 2341. http://dx.doi.org/10.3390/s22062341.

Abstract:
Video stabilization is one of the most important features in consumer cameras. Even simple video stabilization algorithms may need to access the frames several times to generate a stabilized output image, which places a significant burden on the camera hardware. This high memory-access requirement makes it difficult to implement video stabilization in real time on a low-cost camera SoC, so reduction of memory usage is a critical issue in camera hardware. This paper presents a structure and layout method to efficiently implement video stabilization on low-end hardware devices in terms of the amount of shared memory access. The proposed method places sub-components of video stabilization in a parasitic form in other processing blocks, and the sub-components reuse data read by those blocks without directly accessing data in the shared memory. Although the proposed method is not superior to the state-of-the-art methods applied in post-processing in terms of video quality, it provides sufficient performance to lower the cost of camera hardware for the development of real-time devices. According to our analysis, the proposed method reduces the memory access amount by 21.1 times compared to the straightforward method.
32

Nguyen, Dien Van, and Jaehyuk Choi. "Toward Scalable Video Analytics Using Compressed-Domain Features at the Edge." Applied Sciences 10, no. 18 (September 14, 2020): 6391. http://dx.doi.org/10.3390/app10186391.

Abstract:
Intelligent video analytics systems have come to play an essential role in many fields, including public safety, transportation safety, and many other industrial areas, such as automated tools for data extraction and for analyzing huge datasets, such as multiple live video streams transmitted from a large number of cameras. A key characteristic of such systems is that it is critical to perform real-time analytics so as to provide timely actionable alerts on various tasks, activities, and conditions. Due to the computation-intensive and bandwidth-intensive nature of these operations, however, video analytics servers may not fulfill the requirements when serving a large number of cameras simultaneously. To handle these challenges, we present an edge computing-based system that minimizes the transfer of video data from the surveillance camera feeds to a cloud video analytics server. Based on a novel approach of utilizing the information from the encoded bitstream, the edge can achieve low processing complexity of object tracking in surveillance videos and filter non-motion frames from the list of data that will be forwarded to the cloud server. To demonstrate the effectiveness of our approach, we implemented a video surveillance prototype consisting of edge devices with low computational capacity and a GPU-enabled server. The evaluation results show that our method can efficiently catch the characteristics of the frame and is compatible with the edge-to-cloud platform in terms of accuracy and delay sensitivity. The average processing time of this method is approximately 39 ms/frame with high-definition resolution video, which outperforms most of the state-of-the-art methods. In addition to the scenario implementation of the proposed system, the method helps the cloud server reduce 49% of the load of the GPU, 49% that of the CPU, and 55% of the network traffic while maintaining the accuracy of video analytics event detection.
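The edge-side filtering of non-motion frames can be sketched as thresholding the mean magnitude of motion vectors already present in the encoded bitstream; the input format below is an illustrative assumption, not the paper's data structures:

```python
def moving_frames(frame_motion_vectors, threshold=0.5):
    """Return indices of frames worth forwarding to the cloud: those whose
    mean motion-vector magnitude (taken from the encoded bitstream) exceeds
    a threshold. Frames without vectors are treated as static and dropped."""
    keep = []
    for idx, vectors in enumerate(frame_motion_vectors):
        if not vectors:
            continue
        mean_mag = sum((dx * dx + dy * dy) ** 0.5
                       for dx, dy in vectors) / len(vectors)
        if mean_mag > threshold:
            keep.append(idx)
    return keep

# frame 0 is static, frame 1 has clear motion, frame 2 carries no vectors
selected = moving_frames([[(0, 0), (0, 0)], [(3, 4), (3, 4)], []], threshold=1.0)
```

Because the motion vectors come for free from the decoder, this filter costs far less than pixel-domain motion detection, which is what lets a weak edge device shield the cloud GPU from static frames.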
33

XU, CHENLIANG, RICHARD F. DOELL, STEPHEN JOSÉ HANSON, CATHERINE HANSON, and JASON J. CORSO. "A STUDY OF ACTOR AND ACTION SEMANTIC RETENTION IN VIDEO SUPERVOXEL SEGMENTATION." International Journal of Semantic Computing 07, no. 04 (December 2013): 353–75. http://dx.doi.org/10.1142/s1793351x13400114.

Abstract:
Existing methods in the semantic computer vision community seem unable to deal with the explosion and richness of modern, open-source and social video content. Although sophisticated methods such as object detection or bag-of-words models have been well studied, they typically operate on low level features and ultimately suffer from either scalability issues or a lack of semantic meaning. On the other hand, video supervoxel segmentation has recently been established and applied to large scale data processing, which potentially serves as an intermediate representation to high level video semantic extraction. The supervoxels are rich decompositions of the video content: they capture object shape and motion well. However, it is not yet known if the supervoxel segmentation retains the semantics of the underlying video content. In this paper, we conduct a systematic study of how well the actor and action semantics are retained in video supervoxel segmentation. Our study has human observers watching supervoxel segmentation videos and trying to discriminate both actor (human or animal) and action (one of eight everyday actions). We gather and analyze a large set of 640 human perceptions over 96 videos in 3 different supervoxel scales. Furthermore, we design a feature defined on supervoxel segmentation, called supervoxel shape context, which is inspired by the higher order processes in human perception. We conduct actor and action classification experiments with this new feature and compare to various traditional video features. Our ultimate findings suggest that a significant amount of semantics have been well retained in the video supervoxel segmentation and can be used for further video analysis.
34

He, Zhihong, Wenjie Jia, Erhua Sun, and Huilong Sun. "3D Video Image Processing Effect Optimization Method Based on Virtual Reality Technology." International Journal of Circuits, Systems and Signal Processing 16 (January 12, 2022): 385–90. http://dx.doi.org/10.46300/9106.2022.16.47.

Abstract:
Existing optimization methods suffer from image edge blur, which leads to a high degree of shadow residue. In order to address this problem and reduce the shadow residual degree, this paper designs a 3D video image processing effect optimization method supported by virtual reality technology. Coding is used to eliminate redundant data in the video, and median filtering is used to eliminate image noise. The virtual reality technology detects image edges and determines the motion offset between image frames. According to the motion parameters of the camera carrier obtained from motion estimation, a feature point matching algorithm constructs the video image motion model, and camera calibration technology sets the processing effect optimization mode, which is optimized by perspective projection transformation. Experimental results: the average shadow residual degrees of the proposed optimization method and two existing optimization methods are 3.108%, 6.167% and 6.396%, respectively, which proves that the optimization method combined with virtual reality technology has higher practical application value.
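The median filtering step mentioned above can be sketched directly; impulse noise is removed because an isolated outlier almost never survives as the median of its neighborhood:

```python
def median_filter(img, k=3):
    """Replace each pixel by the median of its k x k neighborhood
    (the window is clipped at the image borders)."""
    h, w, half = len(img), len(img[0]), k // 2
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            window = sorted(img[y][x]
                            for y in range(max(0, i - half), min(h, i + half + 1))
                            for x in range(max(0, j - half), min(w, j + half + 1)))
            out[i][j] = window[len(window) // 2]
    return out

noisy = [[0, 0, 0], [0, 100, 0], [0, 0, 0]]   # one impulse-noise pixel
clean = median_filter(noisy)
```

Unlike a mean filter, the median preserves edges reasonably well, which matters here because the subsequent step is edge detection.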
35

Zheng, Bao Guo, Bing Xu, Zhong Jin Shi, and Yi Huan Hui. "Average Population Density Estimation in Rail Transit Stations Based on Video Statistical Analysis." Applied Mechanics and Materials 513-517 (February 2014): 3617–20. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.3617.

Abstract:
This article describes the use of average population density estimation methods based on video statistical analysis, and mainly discusses the research and application of an energy-efficient air conditioning system in the subway. The distributed intelligent control system on the subway station platform captures video images with multiple camera sensors and processes them with computer image processing methods; for example, the fuzzy neural network, which models the human nervous system, has unique advantages in fuzzy information processing. This article uses an improved Meanshift algorithm based on pixel energy to track moving targets in the video. This method can reasonably partition the crowd using the data obtained from intelligent image analysis and, moreover, helps to obtain an estimate of the population density.
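A one-dimensional Meanshift iteration (without the paper's pixel-energy weighting) illustrates how the estimate climbs to a local density mode; the scenario below is purely illustrative:

```python
def mean_shift_1d(points, start, bandwidth, iters=50):
    """Shift toward the local mean of points within the bandwidth window
    until the estimate settles on a density mode."""
    x = start
    for _ in range(iters):
        neighbours = [p for p in points if abs(p - x) <= bandwidth]
        if not neighbours:
            break
        nxt = sum(neighbours) / len(neighbours)
        if abs(nxt - x) < 1e-12:   # converged on a mode
            break
        x = nxt
    return x

# two crowds along a platform axis; the tracker locks onto the nearer one
mode = mean_shift_1d([0.0, 0.1, 0.2, 5.0, 5.1], start=0.3, bandwidth=1.0)
```

In the paper, the same hill-climbing idea runs in the image plane with pixel-energy weights, letting each camera follow moving passengers so their counts feed the density estimate.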
36

OHNIEVYI, OLEKSANDR, and YURIY KHMELNYTSKY. "METHODS OF PROTECTION OF INFORMATION RESOURCES IN TELECOMMUNICATION SYSTEMS." Herald of Khmelnytskyi National University 301, no. 5 (October 2021): 27–31. http://dx.doi.org/10.31891/2307-5732-2021-301-5-27-31.

Abstract:
The article is devoted to the study of theoretical and methodological principles and practical recommendations for the use of information resource protection technology for the effective functioning of video communication systems. With the development of international relations in society and the introduction of information and telecommunication technologies, Ukraine is actively developing and implementing videoconferencing technology, creating and implementing virtual events, webinars, video conferences and online broadcasts. The use of such systems allows users to obtain a variety of information and to use textual and visual graphics. The threats to information security inherent in any information and telecommunication system are also relevant for videoconferencing systems. One of the main problems in organizing a reliable videoconferencing system is ensuring the optimal data rate at the maximum speed of audio and video processing. The main types of information are considered, and the directions of maintaining the information security of information and telecommunication systems in accordance with the legislation of Ukraine are determined. The main types of information security are related to the protection of confidentiality, integrity and accessibility, and to the confirmation of authorship. To ensure the reliability of the videoconferencing system, open information is protected from unauthorized actions that lead to its accidental or intentional modification or destruction, and user authorization is applied. User authorization manages access rights, collects statistics, and acts as an additional means of ensuring the reliability of the system. The protection of information resources is ensured through the use of means and methods of technical protection and the implementation of organizational and technical measures of a comprehensive information protection system.
Studies of these methods make it possible to distribute the load on all elements of the system in proportion to their resources and characteristics.
37

Abdallah, Mohamed, HyungWon Kim, Mohammad Ragab, and Elsayed Hemayed. "Zero-Shot Deep Learning for Media Mining: Person Spotting and Face Clustering in Video Big Data." Electronics 8, no. 12 (November 22, 2019): 1394. http://dx.doi.org/10.3390/electronics8121394.

Abstract:
The analysis of frame sequences in talk show videos, which is necessary for media mining and television production, requires significant manual efforts and is a very time-consuming process. Given the vast amount of unlabeled face frames from talk show videos, we address and propose a solution to the problem of recognizing and clustering faces. In this paper, we propose a TV media mining system that is based on a deep convolutional neural network approach, which has been trained with a triplet loss minimization method. The main function of the proposed system is the indexing and clustering of video data for achieving an effective media production analysis of individuals in talk show videos and rapidly identifying a specific individual in video data in real-time processing. Our system uses several face datasets from Labeled Faces in the Wild (LFW), which is a collection of unlabeled web face images, as well as YouTube Faces and talk show faces datasets. In the recognition (person spotting) task, our system achieves an F-measure of 0.996 for the collection of unlabeled web face images dataset and an F-measure of 0.972 for the talk show faces dataset. In the clustering task, our system achieves an F-measure of 0.764 and 0.935 for the YouTube Faces database and the LFW dataset, respectively, while achieving an F-measure of 0.832 for the talk show faces dataset, an improvement of 5.4%, 6.5%, and 8.2% over the previous methods.
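The triplet loss used to train the embedding network can be written in a few lines; the toy 2-D "embeddings" below are purely illustrative (real face embeddings are high-dimensional network outputs):

```python
def triplet_loss(anchor, positive, negative, margin=0.2):
    """Penalize embeddings where the anchor sits closer to the negative
    (a different person) than to the positive (the same person) plus a margin."""
    sq_dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

# anchor near its positive, far from its negative: no penalty
well_separated = triplet_loss((0.0, 0.0), (0.0, 0.1), (1.0, 0.0))
# anchor closer to the negative than to the positive: positive loss
violating = triplet_loss((0.0, 0.0), (1.0, 0.0), (0.5, 0.0))
```

Minimizing this loss over many (anchor, positive, negative) face triples is what makes simple distance thresholds usable afterwards for both spotting a specific person and clustering unlabeled faces.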
38

Pouyanfar, Samira, and Shu-Ching Chen. "Automatic Video Event Detection for Imbalance Data Using Enhanced Ensemble Deep Learning." International Journal of Semantic Computing 11, no. 01 (March 2017): 85–109. http://dx.doi.org/10.1142/s1793351x17400050.

Abstract:
With the explosion of multimedia data, semantic event detection from videos has become a demanding and challenging topic. In addition, when the data has a skewed data distribution, interesting event detection also needs to address the data imbalance problem. The recent proliferation of deep learning has made it an essential part of many Artificial Intelligence (AI) systems. Till now, various deep learning architectures have been proposed for numerous applications such as Natural Language Processing (NLP) and image processing. Nonetheless, it is still impracticable for a single model to work well for different applications. Hence, in this paper, a new ensemble deep learning framework is proposed which can be utilized in various scenarios and datasets. The proposed framework is able to handle the over-fitting issue as well as the information losses caused by single models. Moreover, it alleviates the imbalanced data problem in real-world multimedia data. The whole framework includes a suite of deep learning feature extractors integrated with an enhanced ensemble algorithm based on the performance metrics for the imbalanced data. The Support Vector Machine (SVM) classifier is utilized as the last layer of each deep learning component and also as the weak learners in the ensemble module. The framework is evaluated on two large-scale and imbalanced video datasets (namely, disaster and TRECVID). The extensive experimental results illustrate the advantage and effectiveness of the proposed framework. It also demonstrates that the proposed framework outperforms several well-known deep learning methods, as well as the conventional features integrated with different classifiers.
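The performance-weighted combination of weak learners can be sketched as a weighted vote; the labels and weights below are illustrative (in the paper the weights come from per-learner performance metrics on the imbalanced data):

```python
def weighted_vote(predictions, weights):
    """Combine weak-learner predictions, weighting each learner by its
    validation performance (e.g., a score on the minority class)."""
    scores = {}
    for label, w in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# two weak learners predict 'no_event', but the best-performing learner
# disagrees and carries enough weight to win
label = weighted_vote(['fire', 'no_event', 'no_event'], [0.9, 0.3, 0.3])
```

Weighting by minority-class performance rather than plain accuracy is what keeps rare events (the interesting ones in disaster footage) from being outvoted by majority-class-biased learners.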
APA, Harvard, Vancouver, ISO, and other styles
39

Barannik, Volodymyr V., Mykola V. Dvorsky, Valeriy V. Barannik, and Anton D. Sorokun. "METHOD OF EFFICIENT REPRESENTATION AND PROTECTION OF DYNAMIC OBJECTS IN VIDEO POTOTICS BASED ON THE TECHNOLOGY OF THEIR ROCKUM COMPENSATION." Cybersecurity: Education, Science, Technique, no. 2 (2018): 90–97. http://dx.doi.org/10.28925/2663-4023.2018.2.9097.

Full text of the source
Abstract:
Recently, special attention in implementing the necessary level of information security has been given to wireless technologies. Their use contributes to the growing demand for video information services. This is accompanied by an increase in the intensity of video streams and in the processing time of video information, pushing them far beyond the bandwidth of networks. Consequently, a contradiction arises: on the one hand, requirements for the quality of video information are growing; on the other hand, it is difficult to provide services of the given quality using wireless technologies. The article deals with issues related to the bit rate of a video stream as it depends on the required video quality, spatial resolution, and frame rate. The article concludes that, given the trend of increasing volumes of video information in troop-control complexes, coding methods must be improved. In order to increase the efficiency of management and operational activities, it is proposed to improve the existing methods of encoding dynamic video stream objects with motion-compensation algorithms for video conferencing in the troop control system. As a result, the article proposes a six-point search algorithm, which can increase efficiency and reduce the processing time of video information between subscribers. In the future, by improving the existing methods for encoding dynamic video stream objects with motion-compensation algorithms, this approach will improve the efficiency of videoconferencing, for example, in the troop control system.
APA, Harvard, Vancouver, ISO, and other styles
40

Auysakul, Jutamanee, He Xu, and Vishwanath Pooneeth. "A Hybrid Motion Estimation for Video Stabilization Based on an IMU Sensor." Sensors 18, no. 8 (August 17, 2018): 2708. http://dx.doi.org/10.3390/s18082708.

Full text of the source
Abstract:
Recorded video data must be clear to allow accurate and fast analysis during post-processing, which often requires video stabilization systems to remove undesired motion. In this paper, we proposed a hybrid method to estimate the motion and to stabilize videos using a switching function. This method switched the motion estimate between a Kanade–Lucas–Tomasi (KLT) tracker and an IMU-aided motion estimator, selecting the better-suited function to stabilize the video in real time, as each method has its own advantages in estimating motion. Specifically, we used the KLT tracker to correct the motion for low rotations and the IMU-aided motion estimator for high rotations, owing to the poor performance of the KLT tracker during larger movements. Furthermore, a Kalman filter was used to remove the undesired motion and hence smooth the trajectory. To increase the frame rate, a multi-threaded approach was applied to execute the algorithm in parallel. Experimental results from five moving-camera video sequences revealed that the proposed algorithm stabilized the video efficiently across the tested conditions.
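The trajectory-smoothing step can be illustrated with a minimal one-dimensional Kalman filter: the smoothed trajectory keeps the intentional camera path, and the residual between raw and smoothed positions is the jitter to compensate. The noise parameters `q` and `r` are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def kalman_smooth(trajectory, q=1e-4, r=1e-2):
    """Smooth a 1D camera-motion trajectory with a constant-position
    Kalman filter; q is process noise, r is measurement noise."""
    x, p = trajectory[0], 1.0
    smoothed = []
    for z in trajectory:
        p = p + q                    # predict step: inflate uncertainty
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with measurement z
        p = (1 - k) * p
        smoothed.append(x)
    return np.array(smoothed)
```

Applied to a jittery trajectory, the frame-to-frame variance of the smoothed path is markedly lower than that of the raw path.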
APA, Harvard, Vancouver, ISO, and other styles
41

Le, Huy D., Tuyen Ngoc Le, Jing-Wein Wang, and Yu-Shan Liang. "Singular Spectrum Analysis for Background Initialization with Spatio-Temporal RGB Color Channel Data." Entropy 23, no. 12 (December 7, 2021): 1644. http://dx.doi.org/10.3390/e23121644.

Full text of the source
Abstract:
In video processing, background initialization aims to obtain a scene without foreground objects. Recently, the background initialization problem has attracted the attention of researchers because of its real-world applications, such as video segmentation, computational photography, video surveillance, etc. However, the background initialization problem is still challenging because of the complex variations in illumination, intermittent motion, camera jitter, shadow, etc. This paper proposes a novel and effective background initialization method using singular spectrum analysis. Firstly, we extract the video’s color frames and split them into RGB color channels. Next, RGB color channels of the video are saved as color channel spatio-temporal data. After decomposing the color channel spatio-temporal data by singular spectrum analysis, we obtain the stable and dynamic components using different eigentriple groups. Our study indicates that the stable component contains a background image and the dynamic component includes the foreground image. Finally, the color background image is reconstructed by merging RGB color channel images obtained by reshaping the stable component data. Experimental results on the public scene background initialization databases show that our proposed method achieves a good color background image compared with state-of-the-art methods.
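The decomposition the abstract describes can be sketched for a single pixel's intensity time series: embed it into a trajectory (Hankel) matrix, keep the leading eigentriple(s) as the stable (background) component, and diagonally average back to a series. This is a generic SSA sketch under simplifying assumptions, not the authors' full RGB spatio-temporal pipeline.

```python
import numpy as np

def ssa_stable_component(series, window, rank=1):
    """Reconstruct the 'stable' part of a pixel's intensity time series
    from the leading eigentriples of its trajectory (Hankel) matrix."""
    n = len(series)
    k = n - window + 1
    # Embed: column j is the lagged window series[j : j+window]
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # low-rank approximation
    # Diagonal averaging (Hankelization) back to a 1D series
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(window):
        for j in range(k):
            out[i + j] += Xr[i, j]
            cnt[i + j] += 1
    return out / cnt
```

For a mostly constant background with a brief foreground occlusion, the rank-1 reconstruction stays close to the background level.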
APA, Harvard, Vancouver, ISO, and other styles
42

Fu, Ting, Joshua Stipancic, Sohail Zangenehpour, Luis Miranda-Moreno, and Nicolas Saunier. "Automatic Traffic Data Collection under Varying Lighting and Temperature Conditions in Multimodal Environments: Thermal versus Visible Spectrum Video-Based Systems." Journal of Advanced Transportation 2017 (2017): 1–15. http://dx.doi.org/10.1155/2017/5142732.

Full text of the source
Abstract:
Vision-based monitoring systems using visible spectrum (regular) video cameras can complement or substitute conventional sensors and provide rich positional and classification data. Although new camera technologies, including thermal video sensors, may improve the performance of digital video-based sensors, their performance under various conditions has rarely been evaluated at multimodal facilities. The purpose of this research is to integrate existing computer vision methods for automated data collection and evaluate the detection, classification, and speed measurement performance of thermal video sensors under varying lighting and temperature conditions. Thermal and regular video data was collected simultaneously under different conditions across multiple sites. Although the regular video sensor narrowly outperformed the thermal sensor during daytime, the performance of the thermal sensor is significantly better for low visibility and shadow conditions, particularly for pedestrians and cyclists. Retraining the algorithm on thermal data yielded an improvement in the global accuracy of 48%. Thermal speed measurements were consistently more accurate than for the regular video at daytime and nighttime. Thermal video is insensitive to lighting interference and pavement temperature, solves issues associated with visible light cameras for traffic data collection, and offers other benefits such as privacy, insensitivity to glare, storage space, and lower processing requirements.
APA, Harvard, Vancouver, ISO, and other styles
43

Chen, Lei, and Wei Guo Song. "New Method to Calculate Exit Flow Based on RFID Technology." Applied Mechanics and Materials 444-445 (October 2013): 1625–29. http://dx.doi.org/10.4028/www.scientific.net/amm.444-445.1625.

Full text of the source
Abstract:
Common methods of obtaining pedestrian movement parameters include video processing and manual statistics. Nevertheless, both approaches can be complicated and time consuming. In this paper, we propose a new method, based on RFID technology, to calculate the flow and count the number of pedestrians selecting each exit. The proposed method is validated by comparing the results obtained from the two different methods, the RFID method and video processing. It is found that the identification rate of the reader reaches 90% and about 90% of the errors of exit time are less than 0.075. Besides, all the errors of exit time are less than 0.1. About 80% of the errors of exit flow are less than 0.1 and 90% are less than 0.2. It is also verified that the new method is much easier than traditional video processing and more time-saving than the manual process. It is hoped that more pedestrian movement data can be easily and quickly extracted through the method proposed in this study.
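The flow computation itself reduces to binning the first read of each tag at an exit-mounted reader; a minimal sketch, where the interval length and the (tag, timestamp) data layout are assumptions rather than the paper's specification:

```python
from collections import defaultdict

def exit_flow(reads, interval=1.0):
    """Count unique tags per time bin from (tag_id, timestamp) reads
    at an exit reader; duplicate reads of the same tag are ignored."""
    first_seen = {}
    for tag, t in reads:
        if tag not in first_seen:        # keep the earliest read per tag
            first_seen[tag] = t
    flow = defaultdict(int)
    for t in first_seen.values():
        flow[int(t // interval)] += 1    # pedestrians per interval
    return dict(flow)
```

For instance, two tags first seen in the first second and one in the second second yield flows of 2 and 1 for those bins.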
APA, Harvard, Vancouver, ISO, and other styles
44

Хиггинботам, Джефф, Кайла Конуэй, and Антара Сатчидананд. "Recording and Transcribing Interactions of Individuals Using Augmentative Communication Technologies." Journal of Social Policy Studies 19, no. 4 (December 29, 2021): 601–18. http://dx.doi.org/10.17323/727-0634-2021-19-4-601-618.

Full text of the source
Abstract:
The purpose of this article is to provide the reader with tools and recommendations for collecting data and making microanalytic transcriptions of interaction involving people using Augmentative Communication Technologies (ACTs). This is of interest for clinicians, as well as anyone else engaged in video-based microanalysis of technology-mediated interaction in other contexts. The information presented here has particular relevance to young researchers developing their own methodologies and to experienced scientists interested in social-interaction research in ACTs as well as other digital communication technologies. Tools and methods are discussed for recording social interactions to support microanalysis by making unobtrusive recordings of naturally occurring or task-driven social interactions while minimizing recording-related distractions that could alter the authenticity of the interaction. Recommendations for the needed functionality of video and audio recording equipment are made, with tips for how to capture actions that are important to the research question as opposed to capturing 'generally usable' video. In addition, tips for processing and managing video data are outlined, including how to develop optimally functional naming conventions for stored videos, how and where to store video data (i.e., use of external hard drives, compressing videos for storage), and syncing multiple videos offering different views of a single interaction (i.e., syncing footage of the overall interaction with footage of the device display). Finally, tools and strategies for transcription are discussed, including a brief description of the role transcription plays in analysis, a suggested framework for how transcription might proceed through multiple passes, each focused on a different aspect of communication, and transcription software options along with discussion of specific features that aid transcription.
In addition, special issues that arise in transcribing interactions involving ACTs are addressed.
APA, Harvard, Vancouver, ISO, and other styles
45

Barannik, Vladimir, Andrii Krasnorutsky, Sergii Shulgin, Valerii Yeroshenko, Yevhenii Sidchenko, and Andrii Hordiienko. "Image compression based on classification coding of constant-pitched functions transformers." RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 3 (October 5, 2021): 48–62. http://dx.doi.org/10.32620/reks.2021.3.05.

Full text of the source
Abstract:
The subject of research in the article is the processing of video images using an orthogonal transformation for data transmission in information and telecommunication networks. The aim is to build a method of video image compression that maintains the efficiency of delivery at a given informative probability. This provides a gain in the delivery time of compressed video images and the necessary levels of availability and authenticity when transmitting video data, with strict statistical regulations preserved and controlled quality loss. Task: to study the known algorithms for selective processing of static video at the stages of approximation and statistical coding of the data based on the JPEG platform. The methods used are an algorithm based on the JPEG platform, methods of approximation by orthogonal transformation of information blocks, and arithmetic coding. The scientific task solved is the development of methods for reducing the computational complexity of transformations (compression and decompression) of static video images in equipment for processing visual information signals, which will increase the efficiency of information delivery. The following results were obtained. A method of video image compression that preserves the efficiency of delivery at the set informative probability was developed. It fulfills the set requirements while preserving structural-statistical economy, providing a gain in the delivery time of compressed images, relative to known methods, of up to 2 times on average.
This gain arises because, with only a slight difference in the compression ratio of highly saturated images compared to the JPEG-2000 method, the processing time of the developed method is at least 34% lower. Moreover, as the volume of transmitted images and the data transmission speed in the communication channel increase, the gain in delivery time for the developed method will grow. Here, the loss of quality of the compressed/restored image does not exceed 2% by RMS, or no worse than 45 dB by PSNR, which is unnoticeable to the human eye. Conclusions. The scientific novelty of the obtained results is as follows: for the first time, a method of classification (separate) coding (compression) of the high-frequency and low-frequency components of Walsh transformants of video images is offered and investigated, which makes it possible to account for their different dynamic ranges and to reduce statistical redundancy using arithmetic coding. This method ensures the necessary levels of availability and authenticity when transmitting video data while preserving strict statistical regulations. Note that the proposed method fulfills the set tasks of increasing the efficiency of information delivery. In addition, the method for reducing the time complexity of converting highly saturated video images using their representation by transformants of the discrete Walsh transform was further developed. It is substantiated that a promising direction for improving image compression methods is the application of orthogonal transformations based on integer piecewise-constant functions, together with methods of integer arithmetic coding of the transformant values.
It is substantiated that the joint use of the Walsh transform and arithmetic coding reduces the time of compression and recovery of images and reduces additional statistical redundancy. To further increase the degree of compression, a classification coding of the low-frequency and high-frequency components of Walsh transformants is developed. It is shown that an additional reduction of statistical redundancy in the arrays of low-frequency components of Walsh transformants is achieved through their difference-based representation. Recommendations are substantiated for the parameters of the compression method that provide the lowest total time of information delivery.
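The classification split of Walsh transformant components can be sketched on a single block: transform it, then code the low-frequency corner and the remaining high-frequency components separately. For simplicity this uses the naturally ordered (Sylvester) Hadamard matrix and treats the top-left corner as the low-frequency group; the paper's sequency ordering and the corner size are simplifying assumptions.

```python
import numpy as np

def hadamard(n):
    """Naturally ordered (Sylvester) Hadamard matrix of size n, n a power of 2."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def split_walsh_components(block, low_size=4):
    """2D Walsh-Hadamard transform of a square block, then a classification
    split: the low_size x low_size corner holds the low-frequency group,
    the rest are high-frequency components to be coded separately."""
    n = block.shape[0]
    H = hadamard(n)
    T = H @ block @ H.T / n               # forward 2D Walsh-Hadamard transform
    low = T[:low_size, :low_size].copy()
    high = T.copy()
    high[:low_size, :low_size] = 0.0      # zero out the low-frequency corner
    return low, high
```

Recombining the two groups and applying the transform again (the Sylvester Hadamard matrix is symmetric and orthogonal up to a factor of n) recovers the original block exactly, so the split is lossless.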
APA, Harvard, Vancouver, ISO, and other styles
46

Rocha Neto, Aluizio, Thiago P. Silva, Thais Batista, Flávia C. Delicato, Paulo F. Pires, and Frederico Lopes. "Leveraging Edge Intelligence for Video Analytics in Smart City Applications." Information 12, no. 1 (December 31, 2020): 14. http://dx.doi.org/10.3390/info12010014.

Full text of the source
Abstract:
In smart city scenarios, the huge proliferation of monitoring cameras scattered in public spaces has posed many challenges to network and processing infrastructure. A few dozen cameras are enough to saturate the city’s backbone. In addition, most smart city applications require a real-time response from the system in charge of processing such large-scale video streams. Finding a missing person using facial recognition technology is one of these applications that require immediate action on the place where that person is. In this paper, we tackle these challenges presenting a distributed system for video analytics designed to leverage edge computing capabilities. Our approach encompasses architecture, methods, and algorithms for: (i) dividing the burdensome processing of large-scale video streams into various machine learning tasks; and (ii) deploying these tasks as a workflow of data processing in edge devices equipped with hardware accelerators for neural networks. We also propose the reuse of nodes running tasks shared by multiple applications, e.g., facial recognition, thus improving the system’s processing throughput. Simulations showed that, with our algorithm to distribute the workload, the time to process a workflow is about 33% faster than a naive approach.
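The workload-distribution idea can be illustrated with a standard greedy longest-processing-time heuristic: the heaviest remaining task goes to the currently least-loaded edge node. This is a generic sketch, not the authors' algorithm, and the task names in the test are hypothetical.

```python
import heapq

def assign_tasks(task_costs, n_nodes):
    """Greedily assign analytics tasks (name -> cost) to edge nodes:
    heaviest task first, always onto the least-loaded node."""
    heap = [(0.0, i) for i in range(n_nodes)]   # (current load, node id)
    heapq.heapify(heap)
    placement = {}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, node = heapq.heappop(heap)        # least-loaded node
        placement[task] = node
        heapq.heappush(heap, (load + cost, node))
    return placement
```

With four tasks of costs 4, 3, 3, and 2 on two nodes, the heuristic balances both nodes to a load of 6.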
APA, Harvard, Vancouver, ISO, and other styles
47

Mohamadzadeh, Sajad, and Hassan Farsi. "CONTENT BASED VIDEO RETRIEVAL BASED ON HDWT AND SPARSE REPRESENTATION." Image Analysis & Stereology 35, no. 2 (April 14, 2016): 67. http://dx.doi.org/10.5566/ias.1346.

Full text of the source
Abstract:
Video retrieval has recently attracted a lot of research attention due to the exponential growth of video datasets and the internet. Content-based video retrieval (CBVR) systems are very useful for a wide range of applications with several types of data, such as visual, audio, and metadata. In this paper, we use only the visual information from the video. Shot boundary detection, key frame extraction, and video retrieval are three important parts of CBVR systems. In this paper, we have modified and proposed new methods for these three important parts of our CBVR system. Meanwhile, the local and global color, texture, and motion features of the video are extracted as features of key frames. To evaluate the applicability of the proposed technique against various methods, the P(1) metric and the CC_WEB_VIDEO dataset are used. The experimental results show that the proposed method provides better performance and less processing time compared to the other methods.
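The shot boundary detection stage can be sketched with a simple histogram-difference detector for abrupt cuts: a cut is flagged where the total-variation distance between consecutive frames' normalized intensity histograms exceeds a threshold. The bin count and threshold are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def detect_cuts(frames, bins=16, thresh=0.5):
    """Flag abrupt transitions between consecutive grayscale frames
    using normalized histogram distance (a simple SBD baseline)."""
    cuts = []
    prev = None
    for i, frame in enumerate(frames):
        h, _ = np.histogram(frame, bins=bins, range=(0, 256))
        h = h / h.sum()                              # normalize to a distribution
        if prev is not None and 0.5 * np.abs(h - prev).sum() > thresh:
            cuts.append(i)                           # cut starts at frame i
        prev = h
    return cuts
```

On a synthetic sequence of five dark frames followed by five bright frames, the only flagged cut is at the transition.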
APA, Harvard, Vancouver, ISO, and other styles
48

Yulita, Intan Nurma, Mohamad Ivan Fanany, and Aniati Murni Arymurthy. "Fuzzy Latent-Dynamic Conditional Neural Fields for Gesture Recognition in Video." International Journal on Information and Communication Technology (IJoICT) 2, no. 2 (July 25, 2017): 1. http://dx.doi.org/10.21108/ijoict.2016.22.124.

Full text of the source
Abstract:
With the explosion of data on the internet leading to the big data era, data processing is required to extract useful information. One of the challenges is gesture recognition in video processing. Therefore, this study proposes Latent-Dynamic Conditional Neural Fields and compares them with the other members of the Conditional Random Fields family. To improve accuracy, these methods are combined with Fuzzy Clustering. From the results, it can be concluded that the performance of Latent-Dynamic Conditional Neural Fields is lower than that of Conditional Neural Fields but higher than that of Conditional Random Fields and Latent-Dynamic Conditional Random Fields. Also, the combination of Latent-Dynamic Conditional Neural Fields and Fuzzy C-Means Clustering has the highest performance. This evaluation is tested on a temporal dataset of gesture phase segmentation.
APA, Harvard, Vancouver, ISO, and other styles
49

Tanseer, Iffrah, Nadia Kanwal, Mamoona Naveed Asghar, Ayesha Iqbal, Faryal Tanseer, and Martin Fleury. "Real-Time, Content-Based Communication Load Reduction in the Internet of Multimedia Things." Applied Sciences 10, no. 3 (February 8, 2020): 1152. http://dx.doi.org/10.3390/app10031152.

Full text of the source
Abstract:
There is an increasing number of devices available for the Internet of Multimedia Things (IoMT). The demands these ever-more complex devices make are also increasing in terms of energy efficiency, reliability, quality-of-service guarantees, higher data transfer rates, and general security. The IoMT itself faces challenges when processing and storing massive amounts of data, transmitting it over low bandwidths, bringing constrained resources to bear and keeping power consumption under check. This paper’s research focuses on an efficient video compression technique to reduce that communication load, potentially generated by diverse camera sensors, and also improve bit-rates, while ensuring accuracy of representation and completeness of video data. The proposed method applies a video content-based solution, which, depending on the motion present between consecutive frames, decides on whether to send only motion information or no frame information at all. The method is efficient in terms of limiting the data transmitted, potentially conserving device energy, and reducing latencies by means of negotiable processing overheads. Data are also encrypted in the interests of confidentiality. Video quality measurements, along with a good number of Quality-of-Service measurements demonstrated the value of the load reduction, as is also apparent from a comparison with other related methods.
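The content-based decision the abstract describes, sending nothing, motion data only, or a full frame depending on inter-frame motion, can be sketched with a mean-absolute-difference test. The thresholds and the three-way return values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def frame_decision(prev, curr, low=2.0, high=20.0):
    """Decide what to transmit for the current frame: below `low` mean
    absolute difference send nothing, between the thresholds send only
    motion (difference) data, above `high` send a full frame."""
    mad = float(np.mean(np.abs(curr.astype(float) - prev.astype(float))))
    if mad < low:
        return "skip"
    return "motion" if mad < high else "full"
```

An unchanged frame is skipped, a slightly changed frame sends only motion information, and a heavily changed frame triggers a full transmission.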
APA, Harvard, Vancouver, ISO, and other styles
50

Yang, Chao, Xiaolei Han, Mingxue Jin, Jianhui Xu, Yiren Wang, Yajun Zhang, Chonglong Xu, Yingshi Zhang, Enshi Jin, and Chengzhe Piao. "The Effect of Video Game–Based Interventions on Performance and Cognitive Function in Older Adults: Bayesian Network Meta-analysis." JMIR Serious Games 9, no. 4 (December 30, 2021): e27058. http://dx.doi.org/10.2196/27058.

Full text of the source
Abstract:
Background The decline in performance of older people includes balance function, physical function, and fear of falling and depression. General cognitive function decline is described in terms of processing speed, working memory, attention, and executive functioning, and video game interventions may be effective. Objective This study evaluates the effect of video game interventions on performance and cognitive function in older participants in terms of 6 indicators: balance function, executive function, general cognitive function, physical function, processing speed, and fear of falling and depression. Methods Electronic databases were searched for studies from inception to June 30, 2020. Randomized controlled trials and case-controlled trials comparing video game interventions versus nonvideo game control in terms of performance and cognitive function outcomes were incorporated into a Bayesian network meta-analysis. All data were continuous variables. Results In total, 47 studies (3244 participants) were included. In pairwise meta-analysis, compared with nonvideo game control, video game interventions improved processing speed, general cognitive function, and depression scores. In the Bayesian network meta-analysis, interventions with video games improved balance function time (standardized mean difference [SMD] –3.34, 95% credible interval [CrI] –5.54 to –2.56), the cognitive function score (SMD 1.23, 95% CrI 0.82-1.86), processing speed time (SMD –0.29, 95% CrI –0.49 to –0.08), and processing speed number (SMD 0.72, 95% CrI 0.36-1.09), similar to the pairwise meta-analysis. Interventions with video games with strong visual senses and good interactivity ranked first, and these might be more beneficial for the elderly. 
Conclusions Our comprehensive Bayesian network meta-analysis provides evidence that video game interventions could be considered for the elderly for improving performance and cognitive function, especially general cognitive scores and processing speed. Games with better interactivity and visual stimulation have better curative effects. Based on the available evidence, we recommend video game interventions for the elderly. Trial Registration PROSPERO CRD42020197158; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=197158
APA, Harvard, Vancouver, ISO, and other styles
