Academic literature on the topic 'Methods of video data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Methods of video data processing.'

Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Methods of video data processing"

1

Chavate, Shrikant, and Ravi Mishra. "Efficient Detection of Abrupt Transitions Using Statistical Methods." ECS Transactions 107, no. 1 (April 24, 2022): 6541–52. http://dx.doi.org/10.1149/10701.6541ecst.

Abstract:
The rapid pace of technological advancement in multimedia streaming has driven up data production and consumption in cyberspace over the past two decades, prompting a swift increase in transmission volume and repository size. Video is the most consumed data type on the internet, so retrieving selected video clips from an extremely large video database is highly complex. Video shot boundary detection (SBD) is used to retrieve the desired clips. It is also a fundamental step in video processing and is important for applications such as video browsing and indexing. SBD handles the identification of abrupt and gradual transitions in the video database. In this paper, statistical methods are implemented to detect abrupt cuts, which offers the advantage of lower complexity. Experiments performed on the TRECVID 2007 dataset yield high accuracy with low computational time.
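The histogram-based statistical comparison that underlies many abrupt-cut detectors can be sketched in a few lines. This is a generic illustration, not the paper's exact method; the grayscale frame format, bin count, and threshold are assumptions:

```python
import numpy as np

def detect_abrupt_cuts(frames, threshold=0.5, bins=64):
    """Flag abrupt transitions where the histogram difference between
    consecutive frames exceeds a threshold.

    `frames` is an iterable of 2-D grayscale arrays with values in [0, 255];
    the threshold and bin count are illustrative choices.
    """
    cuts = []
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()  # normalise so the metric is size-independent
        if prev_hist is not None:
            # L1 distance between normalised histograms lies in [0, 2]
            d = np.abs(hist - prev_hist).sum()
            if d > threshold:
                cuts.append(i)  # cut occurs between frame i-1 and frame i
        prev_hist = hist
    return cuts
```

Frames whose normalised-histogram distance from the previous frame exceeds the threshold are flagged as cut points; gradual transitions need a more elaborate statistic.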
2

Gu, Chong, and Zhan Jun Si. "Applied Research of Assessment Methods on Video Quality." Applied Mechanics and Materials 262 (December 2012): 157–62. http://dx.doi.org/10.4028/www.scientific.net/amm.262.157.

Abstract:
With the rapid development of modern video technology, the range of video applications is increasing, including online video conferencing, online classrooms, and online medical services. However, because the quantity of video data is large, video has to be compressed and encoded appropriately, and the encoding process may introduce distortions in video quality. Therefore, evaluating video quality efficiently and accurately is essential in video processing, video quality monitoring, and multimedia video applications. In this article, subjective, objective, and comprehensive methods of video quality evaluation are introduced, and a video quality assessment system is implemented. Four ITU-recommended videos were encoded in five different formats and evaluated by the Degradation Category Rating (DCR) and Structural Similarity (SSIM) methods; comprehensive weighted evaluations were then applied. Results show that the data from all three evaluations are highly consistent: H.264 is the best encoding method, followed by Xvid and WMV8, and the higher the encoding bit rate, the better the evaluations, although the subjective and objective scores at 1400 kbps do not improve noticeably over 1000 kbps. The process can also be used to evaluate new encoding methods, is applicable to high-definition video, and can play a significant role in advancing video quality evaluation and video encoding.
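The SSIM index used above compares luminance, contrast, and structure between a reference and a distorted frame. A minimal single-window version of the formula can illustrate it; the practical SSIM index averages this over local windows, and the constants K1 = 0.01 and K2 = 0.03 are the commonly used defaults, not values taken from this article:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM over whole frames.

    The standard index computes this in sliding local windows and averages;
    this global variant only illustrates the formula itself.
    """
    c1 = (0.01 * data_range) ** 2  # stabilises the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilises the contrast/structure term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical frames score 1.0, and any distortion pushes the score below 1, which is what makes the index usable for full-reference quality assessment.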
3

Megala, G., et al. "State-of-the-Art in Video Processing: Compression, Optimization and Retrieval." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (April 11, 2021): 1256–72. http://dx.doi.org/10.17762/turcomat.v12i5.1793.

Abstract:
Video compression plays a vital role in modern social media networking, with a plethora of multimedia applications. It enables the transmission medium to transfer videos competently and allows resources to store video efficiently. Nowadays, high-resolution video data are transferred through communication channels at a high bit rate in order to send multiple compressed videos. There have been many advances in transmission capability and in efficient ways of storing compressed video, where compression is the primary task in multimedia services. This paper summarizes the compression standards and describes the main concepts involved in video coding. Video compression converts a large stream of raw video bits into a small, compact one, achieving a high compression ratio with good perceptual video quality; removing redundant information is the main task in video sequence compression. The survey focuses on various block-matching algorithms, quantization, and entropy coding. It is found that many of the methods have high computational complexity and need improvement through optimization.
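Block matching, one of the building blocks the survey covers, estimates a motion vector per block by searching the reference frame for the best match. A minimal exhaustive full-search sketch using the sum of absolute differences (SAD); the block size and search range are assumptions, and real encoders use faster search patterns:

```python
import numpy as np

def best_match(ref, target_block, top, left, search=4):
    """Exhaustive full-search block matching.

    Slides the target block over a +/- `search` pixel window around
    (top, left) in the reference frame and returns the motion vector
    (dy, dx) minimising the sum of absolute differences (SAD).
    """
    n = target_block.shape[0]
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                continue  # candidate block would fall outside the frame
            sad = np.abs(ref[y:y + n, x:x + n] - target_block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad
```

The fast algorithms the paper surveys (three-step search, diamond search, etc.) reduce the number of SAD evaluations rather than change this basic criterion.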
4

Wei, Bo, Kai Li, Chengwen Luo, Weitao Xu, Jin Zhang, and Kuan Zhang. "No Need of Data Pre-processing." ACM Transactions on Internet of Things 2, no. 4 (November 30, 2021): 1–26. http://dx.doi.org/10.1145/3467980.

Abstract:
Device-free context awareness is important to many applications. There are two broadly used approaches: video-based and radio-based. Video-based approaches can deliver good performance, but privacy is a serious concern. Radio-based context awareness has therefore drawn researchers' attention, because it does not violate privacy and radio signals can penetrate obstacles. Existing works design explicit methods for each radio-based application, and they use an additional step to extract features before classification, exploiting deep learning as a classification tool. Although this feature-extraction step helps explore patterns in raw signals, it introduces unnecessary noise and information loss; raw CSI signals without initial processing were previously considered to contain no usable patterns. In this article, we are the first to propose an innovative deep learning–based general framework that handles both signal processing and classification. The key novelty of this article is that the framework can be generalised to all radio-based context awareness applications using raw CSI, eliminating the extra work of extracting features from raw radio signals. We conduct extensive evaluations to show the superior performance of our proposed method and its generalisation.
5

Li, Hui, Yapeng Liu, Wenzhong Lin, Lingwei Xu, and Junyin Wang. "Data Association Methods via Video Signal Processing in Imperfect Tracking Scenarios: A Review and Evaluation." Mathematical Problems in Engineering 2020 (August 31, 2020): 1–26. http://dx.doi.org/10.1155/2020/7549816.

Abstract:
In 5G scenarios, a large number of video signals need to be processed. Multi-object tracking is one of the main directions in video signal processing, and data association is a crucial link in tracking algorithms: the complexity and efficiency of the association method have a direct impact on tracking performance. Breakthroughs have been made in data association methods based on deep learning, whose performance has greatly improved compared with traditional methods, yet overviews of data association methods are lacking. Therefore, this article first analyzes the characteristics and performance of three traditional data association methods and then focuses on deep learning–based methods, divided by network structure into SOT-based, end-to-end, and Wasserstein-metric methods. The performance of each tracking method is compared and analyzed. Finally, the article summarizes the common datasets and evaluation criteria for multi-object tracking and discusses the challenges and development trends of data association technology, noting that association methods ensuring robustness and real-time operation still need continuous improvement.
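A traditional association step of the kind the review contrasts with deep learning can be illustrated by greedy intersection-over-union (IoU) matching between existing tracks and new detections. This is a generic sketch, not a method from the article; the (x1, y1, x2, y2) box format and the 0.3 threshold are assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def greedy_associate(tracks, detections, min_iou=0.3):
    """Greedily match tracks to detections in order of descending IoU."""
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < min_iou:
            break  # remaining pairs overlap too little to be matches
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches
```

A globally optimal variant would solve the same cost matrix with the Hungarian algorithm; the greedy pass just keeps the illustration short.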
6

Guo, Jianbang, Peng Sun, and Sang-Bing Tsai. "A Study on the Optimization Simulation of Big Data Video Image Keyframes in Motion Models." Wireless Communications and Mobile Computing 2022 (March 16, 2022): 1–12. http://dx.doi.org/10.1155/2022/2508174.

Abstract:
In this paper, the signals of athletic sports video image frames are processed and studied using big data technology. Sports video image multiprocessing achieves interference-free study and analysis of sports technique and can meet multiple visual needs of technique analysis and evaluation through key technologies such as split-screen synchronous comparison, superimposed synchronous comparison, and video trajectory tracking. Sports video image-processing technology enables the rapid extraction of key technical parameters of the sports scene, panoramic mapping of sports video images, split-lane calibration, and the development of special video image analysis software that is innovative in athletics research. An image-blending approach is proposed to alleviate the imbalance between simple and complex background data while enhancing the generalization ability of networks trained on small-scale datasets. Local detail features of the target are introduced into the online-tracking process by an efficient block-filter network, and online hard-sample learning is used to avoid interference from similar objects, improving overall tracking performance. For feature extraction from blurred videos, this paper proposes a blur-kernel extraction scheme based on low-rank theory: the scheme fuses multiple blur kernels of keyframe images by low-rank decomposition and then deblurs the video. Next, a double-detection mechanism is used to detect tampering points in the blurred video frames; finally, the tampering points are located and the specific mode of video tampering is determined. Experiments on two public video databases and self-recorded videos show that the method is robust in blurred-video forgery detection and more efficient than traditional video forgery detection methods.
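The low-rank decomposition that the blur-kernel fusion step builds on can be illustrated with a truncated SVD, which by the Eckart-Young theorem gives the best rank-r approximation in the Frobenius norm. This shows only the generic building block, not the paper's fusion procedure:

```python
import numpy as np

def low_rank_approx(M, r):
    """Best rank-r approximation of M in the Frobenius norm, via truncated SVD.

    Keeping only the r largest singular values/vectors discards the
    low-energy components, which is the core of low-rank fusion schemes.
    """
    u, s, vt = np.linalg.svd(M, full_matrices=False)
    return (u[:, :r] * s[:r]) @ vt[:r]
```

Stacking several estimated kernels as the columns of M and truncating to a small rank keeps their shared structure while suppressing per-kernel noise.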
7

Li, Gang, Ainiwaer Aizimaiti, and Yan Liu. "Quaternion Model of Fast Video Quality Assessment Based on Structural Similarity Normalization." Applied Mechanics and Materials 380-384 (August 2013): 3982–85. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.3982.

Abstract:
Video quality evaluation methods have been widely studied because of increasing needs in a variety of video processing applications, such as compression, analysis, communication, enhancement, and restoration. Quaternion models are also widely used to measure image or video quality. In this paper, we propose a new quaternion model that describes the contour features, surface features, and temporal information of the video. We use structural similarity comparison to normalize the four quaternion parts separately, because each part of the quaternion uses a different metric; structural similarity comparison is also used to measure the difference between reference and distorted videos. Experimental results show that the new method correlates well with perceived video quality when tested on the Video Quality Experts Group (VQEG) Phase I FR-TV test data set.
8

Kandriasari, Annis, Robinson Situmorang, Suyitno Muslim, and Jhoni Lagun Siang. "HOW TO DEVELOP A BREAD PROCESSING VIDEO STORYBOARD." Asia Proceedings of Social Sciences 5, no. 2 (December 30, 2019): 137–41. http://dx.doi.org/10.31580/apss.v5i2.1132.

Abstract:
The purpose of this study was to produce a video storyboard to guide the bread-processing practicum and to determine the feasibility of that storyboard for the Bread Processing practicum course. The study uses research-and-development methods, carried out through the development of practicum-guide video storyboards. Data were collected through interviews and questionnaires and analyzed with quantitative and qualitative techniques. The instructional media were developed by preparing an outline of the material, a description of the material, and the storyboard itself. The storyboard was then validated by material experts, media experts, and learning experts to assess the feasibility of its contents. The material experts judged the storyboard feasible, the media experts rated it in the feasible category, and the learning expert found it very feasible to develop into a practicum guidance video.
9

Sabot, F., M. Naaim, F. Granada, E. Suriñach, P. Planet, and G. Furdada. "Study of avalanche dynamics by seismic methods, image-processing techniques and numerical models." Annals of Glaciology 26 (1998): 319–23. http://dx.doi.org/10.3189/1998aog26-1-319-323.

Abstract:
Seismic signals of avalanches, related video images and numerical models were compared to improve the characterization of avalanche phenomena. Seismic data and video images from two artificially released avalanches were analysed to obtain more information about the origin of the signals. Image processing was used to compare the evolution of one avalanche front and the corresponding seismic signals. A numerical model was also used to simulate an avalanche flow in order to obtain mean- and maximum-velocity profiles. Prior to this, the simulated avalanche was verified using video images. The results indicate that the seismic signals recorded correspond to changes in avalanche type and path slope, interaction with obstacles and to phenomena associated with the stopping stage of the avalanche, suggesting that only part of the avalanche was recorded. These results account for the seismic signals previously obtained automatically in a wide avalanche area.

Dissertations / Theses on the topic "Methods of video data processing"

1

Karlsson, Linda S. "Spatio-Temporal Pre-Processing Methods for Region-of-Interest Video Coding." Licentiate thesis, Mid Sweden University, Department of Information Technology and Media, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-51.

Abstract:

In video transmission at low bit rates, the challenge is to compress the video with minimal reduction of the perceived quality. The compression can be adapted using knowledge of which regions in the video sequence are of most interest to the viewer. Region-of-interest (ROI) video coding uses this information to control the allocation of bits between the background and the ROI, aiming to increase the quality in the ROI at the expense of the quality in the background. To achieve this, the typical content of an ROI for a particular application is first determined, the actual detection is performed based on this information, and the allocation of bits is then controlled by the result of the detection.

In this licentiate thesis, existing methods to control bit allocation in ROI video coding are investigated, in particular pre-processing methods that are applied independently of the codec or standard, which makes it possible to apply them directly to the video sequence without modifying the codec. Three filters are proposed based on previous approaches: a spatial filter that modifies only the background within a single frame, a temporal filter that uses information from the previous frame, and a combination of the two, a spatio-temporal filter. The ability of these filters to reduce the number of bits necessary to encode the background and to successfully re-allocate these bits to the ROI is investigated, and the computational complexities of the algorithms are analysed.

The theoretical analysis is verified by quantitative tests. These include measuring the quality using both the PSNR of the ROI and the border of the background, as well as subjective tests with human test subjects and an analysis of motion vector statistics.

The quantitative analysis shows that the spatio-temporal filter has better coding efficiency than the other filters and successfully re-allocates bits from the background to the ROI. The spatio-temporal filter gives an improvement in average PSNR in the ROI of more than 1.32 dB, or a reduction in bit rate of 31%, compared to the encoding of the original sequence. This result is similar to or slightly better than that of the spatial filter; moreover, the spatio-temporal filter performs better overall, since its computational complexity is lower than that of the spatial filter.
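The spatial-filter idea, low-pass filtering the background so the encoder spends fewer bits there while the ROI stays untouched, can be sketched as follows. This is a stand-alone illustration of the concept, not the thesis's exact filter; the box-filter size is an assumption:

```python
import numpy as np

def blur_background(frame, roi_mask, k=5):
    """Spatial ROI pre-processing: box-filter the background, keep the ROI.

    `frame` is a 2-D grayscale array, `roi_mask` a boolean array of the same
    shape that is True inside the region of interest. Smoothing the background
    removes high frequencies there, so a subsequent encoder allocates it
    fewer bits.
    """
    pad = k // 2
    padded = np.pad(frame.astype(float), pad, mode='edge')
    blurred = np.zeros_like(frame, dtype=float)
    for dy in range(k):          # accumulate the k*k shifted copies
        for dx in range(k):
            blurred += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    blurred /= k * k
    # keep original pixels inside the ROI, blurred pixels elsewhere
    return np.where(roi_mask, frame, blurred)
```

A temporal variant would mix in the co-located pixels of the previous frame instead of (or in addition to) the spatial neighbourhood, which is how the spatio-temporal combination arises.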

2

Hamlet, Sean Michael. "COMPARING ACOUSTIC GLOTTAL FEATURE EXTRACTION METHODS WITH SIMULTANEOUSLY RECORDED HIGH-SPEED VIDEO FEATURES FOR CLINICALLY OBTAINED DATA." UKnowledge, 2012. http://uknowledge.uky.edu/ece_etds/12.

Abstract:
Accurate methods for glottal feature extraction include high-speed video imaging (HSVI). There have been previous attempts to extract these features from the acoustic recording, but none of those methods compare their results with an objective reference such as HSVI. This thesis tests the acoustic methods against a large, diverse population of 46 subjects. Two previously studied acoustic methods, as well as one introduced in this thesis, were compared against two video-based measures, area and displacement, for open quotient (OQ) estimation. The area comparison proved somewhat ambiguous and challenging due to thresholding effects, whereas the displacement comparison, based on glottal edge tracking, proved more robust. The first acoustic method's OQ estimate had a relatively small average error of 8.90%, and the second method a relatively large average error of -59.05%, compared to the displacement OQ; the newly proposed method had a relatively small error of -13.75%. Although the acoustic methods showed relatively high error in places, they may still be used to augment the features collected by HSVI for more accurate glottal feature estimation.
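The open quotient itself is simply the fraction of a glottal cycle during which the glottis is open. A toy per-cycle estimate from a glottal-area waveform illustrates the definition; the thesis derives OQ from tracked high-speed-video area and displacement features, and the zero threshold here is an assumption:

```python
import numpy as np

def open_quotient(area, threshold=0.0):
    """Open quotient of one glottal cycle from an area waveform.

    `area` holds one cycle of glottal-area samples; the OQ is the fraction
    of samples where the glottis is open (area above the threshold).
    """
    area = np.asarray(area, dtype=float)
    return float((area > threshold).sum()) / area.size
```

On real data the open phase would be delimited by onset/offset detection within each tracked cycle rather than a fixed threshold.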
3

Szolgay, Daniel. "Video event detection and visual data processing for multimedia applications." Thesis, Bordeaux 1, 2011. http://www.theses.fr/2011BOR14313/document.

Abstract:
This dissertation (i) describes an automatic procedure for estimating the stopping condition of non-regularized iterative deconvolution methods, based on an orthogonality criterion between the estimated signal and its gradient at a given iteration; (ii) presents a decomposition method that splits an image into geometric (or "cartoon") and texture parts using anisotropic diffusion with orthogonality-based parameter estimation and stopping condition, exploiting the fact that the cartoon and texture components of an image should be independent of each other; and (iii) describes a method for extracting moving foreground objects from sequences taken by a wearable camera with strong motion, in which camera-motion-compensated frame differencing is enhanced with a novel kernel-based estimation of the probability density function of the background pixels. The presented methods have been thoroughly tested and compared to similar state-of-the-art algorithms.
4

Ліпчанська, Оксана Валентинівна. "Методи обробки та передачі даних для підсистеми інформаційного забезпечення машиніста локомотива." Thesis, Національний технічний університет "Харківський політехнічний інститут", 2019. http://repository.kpi.kharkov.ua/handle/KhPI-Press/41022.

Abstract:
Thesis for the degree of Candidate of Technical Sciences in specialty 05.13.05 – computer systems and components. – National Technical University "Kharkiv Polytechnic Institute", Kharkiv, 2019. The thesis solves the problem of developing methods for processing and transmitting data for the locomotive driver's information support subsystem. Based on a study of modern methods and means of processing and transmitting data in railway transport, it is established that there is a contradiction between the increasing speed and intensity of railway traffic, the growing number of accident-prone sections, the increasing complexity of the railway transport management system, and the rising requirements for the quality of the locomotive driver's information support on the one hand, and the applied mathematical apparatus and coding/decoding technologies on the other. The choice of mathematical formalization based on queuing networks and neural networks is justified. A conceptual model of the wireless segment of the locomotive driver's information support subsystem has been developed; it is built on 4G technology and takes into account QPSK modulation and the frame-interleaving procedure to reduce the probability of erroneous reception while ensuring a high data rate. A mathematical model of video data transmission over a physical distributed uplink channel has been developed; it takes into account the peculiarities of transmission under the LTE standard in the dynamic conditions of a railway main line and makes it possible to estimate a wide range of probability-time characteristics of the video data transmission process in the subsystem.
A mathematical model of access control to a segment of the locomotive driver's information support subsystem has also been developed, likewise accounting for LTE transmission in the dynamic conditions of the railway line. The procedure for noise-resistant decoding of streaming video transmitted in the wireless 4G segment of the subsystem is improved; it increases the efficiency of data transfer, reduces the total number of computations during decoding, and frees up the computing resources of the computerized system. An intelligent system for monitoring the state of dangerous sections of the railway, based on an adapted convolutional neural network, has been developed; it increases the reliability of recognizing dangerous situations at railway crossings and informs the driver in the event of a critical situation. Practical recommendations for applying the developed methods are substantiated.
5

Ліпчанська, Оксана Валентинівна. "Методи обробки та передачі даних для підсистеми інформаційного забезпечення машиніста локомотива." Thesis, Національний технічний університет "Харківський політехнічний інститут", 2019. http://repository.kpi.kharkov.ua/handle/KhPI-Press/40900.

Abstract:
Дисертація на здобуття наукового ступеня кандидата технічних наук за спеціальністю 05.13.05 – комп'ютерні системи та компоненти. – Національний технічний університет "Харківський політехнічний інститут", Харків, 2019. У дисертації вирішена задача розробки методів обробки та передачі даних для підсистеми інформаційного забезпечення машиніста локомотива. На підставі вивчення сучасних методів і засобів обробки і передачі даних на залізничному транспорті встановлено, що існує суперечність між підвищенням швидкості та інтенсивності руху залізничного транспорту, збільшенням кількості аварійно-небезпечних ділянок, ускладненням системи управління залізничним транспортом, підвищенням вимог до якості інформаційного забезпечення машиніста локомотива та застосовуваними математичним апаратом і технологіями управління кодування/декодування. Обґрунтовано вибір технології математичної формалізації на основі мереж масового обслуговування та нейронних мереж. Розроблено концептуальну модель бездротового сегменту підсистеми інформаційного забезпечення машиніста локомотива, яка побудована з використанням технології 4G і враховує тип модуляції QPSK та процедуру перемежіння при передачі кадрів для зниження ймовірності помилкового прийому при забезпеченні високої швидкості передачі даних. Розроблено математичну модель процесу передачі даних по фізичному розподіленому uplink каналу, яка враховує особливості передачі даних відповідно до стандарту LTE в динамічних умовах магістралі залізниці і дозволяє оцінити ймовірносно-часові характеристики процесу передачі даних в підсистемі інформаційного забезпечення машиніста локомотива. Розроблено математичну модель управління доступом до сегменту комп'ютеризованої системи залізничного транспорту, яка враховує особливості передачі даних відповідно до стандарту LTE в динамічних умовах магістралі залізниці. 
Удосконалено процедуру завадостійкого декодування потокового відео, що передається в бездротовому сегменті 4G мережі підсистеми інформаційного забезпечення машиніста локомотива, що дозволяє підвищити оперативність при передачі даних, скоротити загальну кількість обчислень при декодуванні і звільнити обчислювальні ресурси комп'ютеризованої системи. Розроблено інтелектуальну систему контролю стану небезпечних ділянок залізничного шляху із застосуванням адаптованої згортальної нейронної мережі, що дозволяє підвищити достовірність розпізнавання небезпечних ситуацій на залізничних переїздах і інформувати машиніста в разі виникнення критичної ситуації. Обґрунтовано практичні рекомендації щодо застосування розроблених методів.
The thesis is in candidacy for a scientific degree of candidate of technical sciences in specialty 05.13.05 – computer systems and components. – National Technical University "Kharkiv Polytechnic Institute", Kharkiv, 2019. The thesis solves the problem of developing methods for processing and transmitting data for the locomotive driver's information support subsystem. Based on the study of modern methods and means of processing and transmitting data on railway transport, it is established that there is a contradiction between the increase in the speed and intensity of railway traffic, the increase in the number of emergency areas, the complexity of the railway transport management system, the increase in the requirements for railway transport safety and the applied mathematical apparatus and technologies of coding / decoding. The choice of technology of mathematical formalization based on queuing networks and neural networks is justified. A conceptual model of the wireless segment of the locomotive driver's information support subsystem has been developed, which is built using 4G technology and takes into account the QPSK modulation type and the interleaving procedure for frame transmission to reduce the probability of erroneous reception while ensuring a high data rate. A mathematical model of the process of transmitting video data over a physical distributed uplink channel that takes into account the peculiarities of video data transmission in accordance with the LTE standard under dynamic conditions of the railway main line is developed and allows to estimate a wide range of robabilitytemporal characteristics of the video data transmission process in the locomotive driver's information support subsystem. 
A mathematical model of access control to the segment of a locomotive driver's information support subsystem has been developed, which takes into account the peculiarities of video data transmission in accordance with the LTE standard in the dynamic conditions of the railway line. The procedure for noise-immune decoding of streaming video in the wireless 4G network segment of the locomotive driver's information support subsystem is improved. It makes it possible to increase the efficiency of data transfer, reduce the total number of calculations during decoding, and free the computing resources of the computerized system. An intelligent system for monitoring the state of dangerous sections of the railway using an adapted convolutional neural network has been developed, which makes it possible to increase the reliability of the recognition of dangerous situations at railway crossings and to inform the driver in the event of a critical situation. Practical recommendations for applying the developed methods are substantiated.
APA, Harvard, Vancouver, ISO, and other styles
6

Pinto, Allan da Silva 1984. "A countermeasure method for video-based face spoofing attacks : Detecção de tentativas de ataque com vídeos digitais em sistemas de biometria de face." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275616.

Full text
Abstract:
Advisor: Anderson de Rezende Rocha
Dissertation (Master's) – Universidade Estadual de Campinas, Instituto de Computação
Resumo: The abstract can be viewed in the full text of the digital thesis
Abstract: The complete abstract is available with the full electronic document
Master's
Computer Science
Master in Computer Science
APA, Harvard, Vancouver, ISO, and other styles
7

Toivonen, T. (Tuukka). "Efficient methods for video coding and processing." Doctoral thesis, University of Oulu, 2008. http://urn.fi/urn:isbn:9789514286957.

Full text
Abstract:
Abstract This thesis presents several novel improvements to video coding algorithms, including block-based motion estimation, quantization selection, and video filtering. Most of the presented improvements are fully compatible with the standards in general use, including MPEG-1, MPEG-2, MPEG-4, H.261, H.263, and H.264. For quantization selection, new methods are developed based on rate-distortion theory. The first method obtains a locally optimal frame-level quantization parameter, considering frame-wise dependencies. The method is applicable to generic optimization problems, including motion estimation. The second method, aimed at real-time performance, heuristically modulates the quantization parameter in sequential frames, significantly improving the rate-distortion performance. It also utilizes multiple reference frames when available, as in H.264. Finally, coding efficiency is improved by introducing a new matching criterion for motion estimation which can estimate the bit rate after transform coding more accurately, leading to better motion vectors. For fast motion estimation, several improvements on prior methods are proposed. First, fast matching, based on filtering and subsampling, is combined with a state-of-the-art search strategy to create a very quick and high-quality motion estimation method. The successive elimination algorithm (SEA) is also applied to the method, and its performance is improved by deriving a new, tighter lower bound and increasing it by a small constant, which eliminates a larger share of the candidate motion vectors while degrading quality only insignificantly. As an alternative, the multilevel SEA (MSEA) is applied to H.264-compatible motion estimation, efficiently utilizing the various block sizes available in the standard. Then, a new method is developed for refining the motion vector obtained from any fast and suboptimal motion estimation method.
The resulting algorithm can be easily adjusted to have the desired tradeoff between computational complexity and rate-distortion performance. For refining integer motion vectors into half-pixel resolution, a new very quick but accurate method is developed based on the mathematical properties of bilinear interpolation. Finally, novel number theoretic transforms are developed which are best suited for two-dimensional image filtering, including image restoration and enhancement, but methods are developed with a view to the use of the transforms also for very reliable motion estimation.
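The SEA bound mentioned in the abstract can be illustrated with a short, hypothetical sketch (not code from the thesis): for sum-of-absolute-differences matching, |sum(current block) − sum(candidate block)| is a lower bound on the SAD, so any candidate whose bound already exceeds the best cost found so far can be skipped without computing the full SAD.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences: the usual block-matching cost."""
    return int(np.abs(block_a.astype(np.int64) - block_b.astype(np.int64)).sum())

def sea_search(ref, cur_block, top, left, radius=4):
    """Full search accelerated by the SEA bound
    |sum(cur) - sum(cand)| <= SAD(cur, cand):
    candidates whose block sum differs from the current block's sum
    by at least the best SAD found so far cannot win and are skipped."""
    h, w = cur_block.shape
    cur_sum = int(cur_block.sum())
    best_cost, best_mv, tested = None, (0, 0), 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue
            cand = ref[y:y + h, x:x + w]
            bound = abs(cur_sum - int(cand.sum()))
            if best_cost is not None and bound >= best_cost:
                continue  # eliminated without computing the full SAD
            tested += 1
            cost = sad(cur_block, cand)
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost, tested
```

A production implementation would precompute candidate block sums with an integral image rather than calling `cand.sum()` per candidate; the elimination test inside the loop is the part the thesis tightens further.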
APA, Harvard, Vancouver, ISO, and other styles
8

Jones, Jonathan A. "Nuclear magnetic resonance data processing methods." Thesis, University of Oxford, 1992. http://ora.ox.ac.uk/objects/uuid:7df97c9a-4e65-4c10-83eb-dfaccfdccefe.

Full text
Abstract:
This thesis describes the application of a wide variety of data processing methods, in particular the Maximum Entropy Method (MEM), to data from Nuclear Magnetic Resonance (NMR) experiments. Chapter 1 provides a brief introduction to NMR and to data processing, which is developed in chapter 2. NMR is described in terms of the classical model due to Bloch, and the principles of conventional (Fourier transform) data processing developed. This is followed by a description of less conventional techniques. The MEM is derived on several grounds, and related to both Bayesian reasoning and Shannon information theory. Chapter 3 describes several methods of evaluating the quality of NMR spectra obtained by a variety of data processing techniques; the simple criterion of spectral appearance is shown to be completely unsatisfactory. A Monte Carlo method is described which allows several different techniques to be compared, and the relative advantages of Fourier transformation and the MEM are assessed. Chapter 4 describes in vivo NMR, particularly the application of the MEM to data from Phase Modulated Rotating Frame Imaging (PMRFI) experiments. In this case the conventional data processing is highly unsatisfactory, and MEM processing results in much clearer spectra. Chapter 5 describes the application of a range of techniques to the estimation and removal of splittings from NMR spectra. The various techniques are discussed using simple examples, and then applied to data from the amino acid iso-leucine. The thesis ends with five appendices which contain historical and philosophical notes, detailed calculations pertaining to PMRFI spectra, and a listing of the MEM computer program.
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, Jiawen (Jiawen Kevin). "Efficient data structures for piecewise-smooth video processing." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66003.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 95-102).
A number of useful image and video processing techniques, ranging from low level operations such as denoising and detail enhancement to higher level methods such as object manipulation and special effects, rely on piecewise-smooth functions computed from the input data. In this thesis, we present two computationally efficient data structures for representing piecewise-smooth visual information and demonstrate how they can dramatically simplify and accelerate a variety of video processing algorithms. We start by introducing the bilateral grid, an image representation that explicitly accounts for intensity edges. By interpreting brightness values as Euclidean coordinates, the bilateral grid enables simple expressions for edge-aware filters. Smooth functions defined on the bilateral grid are piecewise-smooth in image space. Within this framework, we derive efficient reinterpretations of a number of edge-aware filters commonly used in computational photography as operations on the bilateral grid, including the bilateral filter, edge-aware scattered data interpolation, and local histogram equalization. We also show how these techniques can be easily parallelized onto modern graphics hardware for real-time processing of high definition video. The second data structure we introduce is the video mesh, designed as a flexible central data structure for general-purpose video editing. It represents objects in a video sequence as 2.5D "paper cutouts" and allows interactive editing of moving objects and modeling of depth, which enables 3D effects and post-exposure camera control. In our representation, we assume that motion and depth are piecewise-smooth, and encode them sparsely as a set of points tracked over time. The video mesh is a triangulation over this point set and per-pixel information is obtained by interpolation. To handle occlusions and detailed object boundaries, we rely on the user to rotoscope the scene at a sparse set of frames using spline curves.
We introduce an algorithm to robustly and automatically cut the mesh into local layers with proper occlusion topology, and propagate the splines to the remaining frames. Object boundaries are refined with per-pixel alpha mattes. At its core, the video mesh is a collection of texture-mapped triangles, which we can edit and render interactively using graphics hardware. We demonstrate the effectiveness of our representation with special effects such as 3D viewpoint changes, object insertion, depth-of-field manipulation, and 2D to 3D video conversion.
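The bilateral grid described in the abstract can be sketched in a few lines of Python (an illustrative simplification, not the authors' GPU implementation: it uses nearest-neighbour splatting and slicing with a 3-tap box blur, where the real pipeline uses trilinear interpolation and Gaussian blurs):

```python
import numpy as np

def bilateral_grid_filter(img, sigma_s=4, sigma_r=0.1):
    """Minimal bilateral-grid filter for a grayscale image in [0, 1]:
    splat pixels into a downsampled (y, x, intensity) grid, blur the
    grid separably, then slice the blurred grid back at each pixel's
    grid position (nearest-neighbour variant of the scheme)."""
    h, w = img.shape
    gh, gw = h // sigma_s + 2, w // sigma_s + 2
    gz = int(1.0 / sigma_r) + 2
    grid = np.zeros((gh, gw, gz))     # accumulated intensities
    weight = np.zeros((gh, gw, gz))   # homogeneous weights

    ys, xs = np.mgrid[0:h, 0:w]
    gy = (ys // sigma_s).ravel()
    gx = (xs // sigma_s).ravel()
    gi = (img / sigma_r).astype(int).ravel()
    np.add.at(grid, (gy, gx, gi), img.ravel())   # splat values
    np.add.at(weight, (gy, gx, gi), 1.0)         # splat weights

    # separable 3-tap box blur along each grid axis
    kernel = np.ones(3) / 3.0
    for axis in range(3):
        grid = np.apply_along_axis(np.convolve, axis, grid, kernel, mode="same")
        weight = np.apply_along_axis(np.convolve, axis, weight, kernel, mode="same")

    # slice: normalized read-back of the blurred grid
    out = grid[gy, gx, gi] / np.maximum(weight[gy, gx, gi], 1e-8)
    return out.reshape(h, w)
```

Because pixels on opposite sides of a strong edge land in distant intensity slices of the grid, blurring the grid smooths each side without mixing them, which is exactly the piecewise-smooth, edge-aware behaviour the thesis exploits.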
by Jiawen Chen.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
10

Grundmann, Matthias. "Computational video: post-processing methods for stabilization, retargeting and segmentation." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47596.

Full text
Abstract:
In this thesis, we address a variety of challenges for analysis and enhancement of Computational Video. We present novel post-processing methods to bridge the difference between professional and casually shot videos mostly seen on online sites. Our research presents solutions to three well-defined problems: (1) Video stabilization and rolling shutter removal in casually-shot, uncalibrated videos; (2) Content-aware video retargeting; and (3) spatio-temporal video segmentation to enable efficient video annotation. We showcase several real-world applications building on these techniques. We start by proposing a novel algorithm for video stabilization that generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions. We compute camera paths that are optimally partitioned into constant, linear and parabolic segments mimicking the camera motions employed by professional cinematographers. To achieve this, we propose a linear programming framework to minimize the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond conventional filtering, which only suppresses high-frequency jitter. An additional challenge in videos shot from mobile phones is rolling shutter distortions. Modern CMOS cameras capture the frame one scanline at a time, which results in non-rigid image distortions such as shear and wobble. We propose a solution based on a novel mixture model of homographies parametrized by scanline blocks to correct these rolling shutter distortions. Our method does not rely on a-priori knowledge of the readout time nor requires prior camera calibration. Our novel video stabilization and calibration free rolling shutter removal have been deployed on YouTube where they have successfully stabilized millions of videos. We also discuss several extensions to the stabilization algorithm and present technical details behind the widely used YouTube Video Stabilizer.
We address the challenge of changing the aspect ratio of videos by proposing algorithms that retarget videos to fit the form factor of a given device without stretching or letter-boxing. Our approaches use all of the screen's pixels, while striving to deliver as much video-content of the original as possible. First, we introduce a new algorithm that uses discontinuous seam-carving in both space and time for resizing videos. Our algorithm relies on a novel appearance-based temporal coherence formulation that allows for frame-by-frame processing and results in temporally discontinuous seams, as opposed to geometrically smooth and continuous seams. Second, we present a technique that builds on the above-mentioned video stabilization approach. We effectively automate classical pan and scan techniques by smoothly guiding a virtual crop window via saliency constraints. Finally, we introduce an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. We begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a "region graph" over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high quality segmentations, and allows subsequent applications to choose from varying levels of granularity. We demonstrate the use of spatio-temporal segmentation as users interact with the video, enabling efficient annotation of objects within the video.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Methods of video data processing"

1

1947-, Trollip Stanley R., and Alessi Stephen M. 1951-, eds. Multimedia for learning: Methods and development. 3rd ed. Boston: Allyn and Bacon, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

1947-, Trollip Stanley R., ed. Computer-based instruction: Methods and development. Englewood Cliffs, N.J: Prentice-Hall, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

1947-, Trollip Stanley R., ed. Computer-based instruction: Methods and development. 2nd ed. Englewood Cliffs, N.J: Prentice Hall, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Comaniciu, Dorin, Rudolf Mester, Kenichi Kanatani, and David Suter, eds. Statistical Methods in Video Processing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/b104157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Data driven statistical methods. London: Chapman & Hall, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Grigorev, Anatoliy. Methods and algorithms of data processing. ru: INFRA-M Academic Publishing LLC., 2017. http://dx.doi.org/10.12737/22119.

Full text
Abstract:
This manual considers selected methods and algorithms of data processing and the sequence of steps for solving data processing and analysis problems in order to build a model of an object's behavior that takes into account all components of its mathematical model. It describes the types of software and hardware techniques used to solve tasks in this area. Algorithms for distributions and time-series regressions, and their transformation to obtain mathematical models and forecasts of the behavior of information and economic systems (objects), are considered. Conforms to the requirements of the latest-generation Federal State Educational Standard of Higher Education. Intended for students of economic specialties, practitioners, and graduate students.
APA, Harvard, Vancouver, ISO, and other styles
7

Grigor'ev, Anatoliy, and Evgeniy Isaev. Methods and algorithms of data processing. ru: INFRA-M Academic Publishing LLC., 2020. http://dx.doi.org/10.12737/1032305.

Full text
Abstract:
The tutorial deals with selected methods and algorithms of data processing and the sequence of steps for solving data processing and analysis problems to model an object's behavior, taking into account all components of its mathematical model. It describes the types of software and hardware techniques for solving problems in this area, along with algorithms for distributions and time-series regressions and their transformation to obtain mathematical models and predict the behavior of information and economic systems (objects). The second edition is supplemented with material on the correct use of clustering algorithms: elements of classification algorithms with their capabilities, strengths, and weaknesses; procedures for justifying and verifying the adequacy of cluster-analysis results; a comparison and evaluation of different clustering techniques; information on the visualization of multidimensional data; and examples of practical application of clustering algorithms. Meets the requirements of the latest-generation Federal State Educational Standards of Higher Education. For students of economic specialties, specialists, and graduate students.
APA, Harvard, Vancouver, ISO, and other styles
8

Kiselev, I︠U︡. V. (I︠U︡riĭ Vasilʹevich), ed. Statistical methods of geophysical data processing. Singapore: World Scientific, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

McQuillin, Lon B. Computers in video production. White Plains, NY: Knowledge Industry Publications, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Brezinski, Claude. Extrapolation methods: Theory and practice. Amsterdam: North-Holland, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Methods of video data processing"

1

Jinbo, Wu. "Semantic Marking Method of Video Scene Based on 3D Convolutional Neural Network." In Data Processing Techniques and Applications for Cyber-Physical Systems (DPTA 2019), 2019–26. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-1468-5_238.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Frejlichowski, Dariusz. "A Method for Data Extraction from Video Sequences for Automatic Identification of Football Players Based on Their Numbers." In Image Analysis and Processing – ICIAP 2011, 356–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24085-0_37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Müller, Karsten, Heiko Schwarz, Peter Eisert, and Thomas Wiegand. "Video Data Processing." In Digital Transformation, 43–62. Berlin, Heidelberg: Springer Berlin Heidelberg, 2019. http://dx.doi.org/10.1007/978-3-662-58134-6_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Töppe, Eno, Martin R. Oswald, Daniel Cremers, and Carsten Rother. "Silhouette-Based Variational Methods for Single View Reconstruction." In Video Processing and Computational Video, 104–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24870-2_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Braithwaite, A., and F. J. Smith. "Processing chromatographic data." In Chromatographic Methods, 399–425. Dordrecht: Springer Netherlands, 1999. http://dx.doi.org/10.1007/978-94-011-0599-6_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Furht, Borko, Stephen W. Smoliar, and HongJiang Zhang. "Video Processing Using Compressed Data." In Video and Image Processing in Multimedia Systems, 323–34. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4615-2277-5_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Singhal, Kanika, and Abhineet Anand. "Video Processing Using Data Mining." In Proceedings of International Conference in Mechanical and Energy Technology, 41–48. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-2647-3_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Salomon, David. "Video Compression." In A Guide to Data Compression Methods, 227–39. New York, NY: Springer New York, 2002. http://dx.doi.org/10.1007/978-0-387-21708-6_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Leal-Taixé, Laura, Matthias Heydt, Axel Rosenhahn, and Bodo Rosenhahn. "Understanding What we Cannot See: Automatic Analysis of 4D Digital In-Line Holographic Microscopy Data." In Video Processing and Computational Video, 52–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24870-2_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Braithwaite, A., and F. J. Smith. "Processing of Chromatographic Data." In Chromatographic Methods, 322–39. Dordrecht: Springer Netherlands, 1985. http://dx.doi.org/10.1007/978-94-009-4093-2_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Methods of video data processing"

1

He, Wei. "Research on mixed data processing methods in Raman spectrum." In 2018 International Conference on Image, Video Processing and Artificial Intelligence, edited by Ruidan Su. SPIE, 2018. http://dx.doi.org/10.1117/12.2513979.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Yifei, Yuzhe Wang, Wenhui Wang, and Minda Zhang. "Comparison on video object segmentation: methods and results." In International Conference on Computer Graphics, Artificial Intelligence, and Data Processing (ICCAID 2021), edited by Feng Wu, Jinping Liu, and Yanping Chen. SPIE, 2022. http://dx.doi.org/10.1117/12.2631435.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Yang, Shuang, Jiawei Ren, Xiuhua Jiang, and Hao Liu. "The New Improved Data Processing Methods in Video Quality Subjective Assessment." In 2016 8th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC). IEEE, 2016. http://dx.doi.org/10.1109/ihmsc.2016.162.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Hongtao, Xinhua Li, and Shusheng Wang. "A Video Data Processing Method for Space Application." In ICDSP 2020: 2020 4th International Conference on Digital Signal Processing. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3408127.3408147.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yang, Shuang, Yifan Wang, Fang Meng, Xiuhua Jiang, and Hao Liu. "The comparison and improvement of data processing methods in video quality subjective assessment." In 2014 7th International Congress on Image and Signal Processing (CISP). IEEE, 2014. http://dx.doi.org/10.1109/cisp.2014.7003852.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pyataeva, A. V., and M. S. Eliseeva. "Video based human smoking event detection method." In Spatial Data Processing for Monitoring of Natural and Anthropogenic Processes 2021. Crossref, 2021. http://dx.doi.org/10.25743/sdm.2021.75.39.041.

Full text
Abstract:
The paper proposes a method for detecting smoking events from visual data. The method uses a three-dimensional convolutional neural network (ResNet), which operates on spatio-temporal features extracted from the video.
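As a minimal illustration of the spatio-temporal convolutions such a network is built from (a toy NumPy sketch, not the adapted 3D ResNet of the paper), a single 3D kernel spanning two frames responds to temporal change, such as drifting smoke, while ignoring static background:

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid 3D convolution (really cross-correlation, as in CNNs) of a
    (T, H, W) video volume with a (t, h, w) spatio-temporal kernel."""
    T, H, W = volume.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i + t, j:j + h, k:k + w] * kernel)
    return out

# A hand-crafted temporal-difference kernel: responds to frame-to-frame
# change averaged over a 3x3 neighbourhood, stays silent on static scenes.
kernel = np.zeros((2, 3, 3))
kernel[0] = -1.0 / 9
kernel[1] = 1.0 / 9
```

In a learned 3D CNN the kernels are trained rather than hand-crafted, but this is the operation that lets such networks pick up motion cues a purely 2D, per-frame network cannot.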
APA, Harvard, Vancouver, ISO, and other styles
7

MaungMaung, Imdad, KokSheik Wong, and Kiyoshi Tanaka. "Reversible data hiding methods based on audio and video synchronization in MP4 container." In 2016 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS). IEEE, 2016. http://dx.doi.org/10.1109/ispacs.2016.7824699.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

MacHuchon, Keith R., Wehan J. Wessels, Chin H. Wu, and Paul C. Liu. "The Use of Streamed Digital Video Data and Binocular Stereoscopic Image System (BiSIS) Processing Methods to Analyze Ocean Wave Field Kinematics." In ASME 2009 28th International Conference on Ocean, Offshore and Arctic Engineering. ASMEDC, 2009. http://dx.doi.org/10.1115/omae2009-79853.

Full text
Abstract:
The kinematics of short crested steep and breaking waves in the ocean is a subject that is best studied spatially, in the time domain, to obtain a good understanding of the multi-directional spreading of energy, which is dependent on strongly non-linear wave interactions in the system. The paper will cover the collection, recording and processing of streamed sea surface image data, obtained simultaneously from multiple digital video cameras, for analysis using stereoscopic image processing methods to provide information on the kinematics of ocean wave fields. The data streaming architecture, which will be reviewed, incorporates an advanced laptop computer and two to three stand-alone digital video cameras which are all linked through a gigabit ethernet network connection with sufficient bandwidth to simultaneously transfer the image data from the cameras to hard drive storage. The modifications to the laptop computer comprise the provision of increased processing capacity to enable it to accept and process large IP frames simultaneously. The system has the capacity to continuously record images, at a rate of up to 60 frames per second, for periods of up to one hour. It includes an external triggering mechanism, which is synchronised to a micro-second, to ensure that stereo pairs of images are captured simultaneously. Calibration of the cameras, and their stereoscopic configuration, is a critical part of the overall process and we will discuss how ill-conditioned and singular matrices, which can prevent the determination of required intrinsic and extrinsic parameters, can be avoided. The paper will include examples of wave field image data which has been collected using streamed digital video data and Binocular Stereoscopic Image System (BiSIS) processing methods. It will also give examples of digital video images and dimensional wave field data which have been collected and processed using the Automated Trinocular Stereoscopic Imaging Systems (ATSIS) methods.
Both of these systems provide a valuable means of analysing irregular, non-linear, short crested waves, which leads to an improved understanding of ocean wave kinematics.
APA, Harvard, Vancouver, ISO, and other styles
9

Zhao, Shiwei, Li Zhuo, Zhu Xiao, and Lansun Shen. "A Data-Mining Based Video Shot Classification Method." In 2009 2nd International Congress on Image and Signal Processing (CISP). IEEE, 2009. http://dx.doi.org/10.1109/cisp.2009.5303957.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Momeni, Hamed, and Arvin Ebrahimkhanlou. "Applications of High-Dimensional Data Analytics in Structural Health Monitoring and Non-Destructive Evaluation: Thermal Videos Processing Using Tensor-Based Analysis." In ASME 2021 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/imece2021-71878.

Full text
Abstract:
Abstract This study reviews existing and potential applications of high-dimensional data analytics in the fields of structural health monitoring and non-destructive evaluation. Despite the high potential of these methods, their implemented applications in structural health monitoring and non-destructive evaluation are limited. In addition, with the ever-increasing development of measurement equipment, the necessity of using these methods is growing. In this paper, videos captured by different non-destructive evaluation techniques are studied as an example of high-dimensional data. Thermal videos are used for automatic damage detection and localization. Particularly, thermal cameras are employed to find delamination zones in composite plates, commonly used in aircraft wings. Due to the intrinsically high-dimensional nature of videos, using conventional statistical methods raises theoretical and practical challenges. One of the solutions to overcome these challenges is implementing tensor-based data analysis to analyze videos. Two tensor factorization methods are presented and employed to localize the damage automatically. The results show that the recorded video can be represented by a few vectors, which easily extract the time variation and extent of the damage.
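The tensor viewpoint can be illustrated on synthetic data (a hypothetical sketch: a plain truncated SVD of the unfolded video stands in for the paper's tensor factorization methods). A thermal video with a growing hot spot is well represented by a single temporal vector and a single spatial map:

```python
import numpy as np

# Synthetic "thermal video": T frames of H x W, with a localized hot spot
# whose amplitude grows linearly over time (the "damage"), plus sensor noise.
T, H, W = 20, 16, 16
rng = np.random.default_rng(1)
video = 0.01 * rng.standard_normal((T, H, W))
spot = np.zeros((H, W)); spot[5:9, 6:10] = 1.0
for t in range(T):
    video[t] += (t / (T - 1)) * spot

# Mode-1 unfolding: each frame becomes one row of a T x (H*W) matrix.
unfolded = video.reshape(T, H * W)

# Rank-1 truncated SVD: u[:, 0] carries the temporal signature of the
# damage, vt[0] its spatial map.
u, s, vt = np.linalg.svd(unfolded, full_matrices=False)
temporal = u[:, 0] * s[0]
spatial = vt[0].reshape(H, W)

# The dominant spatial singular vector localizes the hot spot.
peak = np.unravel_index(np.abs(spatial).argmax(), spatial.shape)
```

The few leading singular vectors play the role of the "few vectors" the abstract refers to: the spatial one localizes the delamination-like spot, while the temporal one tracks its growth over the frames.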
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Methods of video data processing"

1

Bates, C. Richards, Melanie Chocholek, Clive Fox, John Howe, and Neil Jones. Scottish Inshore Fisheries Integrated Data System (SIFIDS): Work package (3) final report development of a novel, automated mechanism for the collection of scallop stock data. Edited by Mark James and Hannah Ladd-Jones. Marine Alliance for Science and Technology for Scotland (MASTS), 2019. http://dx.doi.org/10.15664/10023.23449.

Full text
Abstract:
[Extract from Executive Summary] This project, aimed at the development of a novel, automated mechanism for the collection of scallop stock data, was a sub-part of the Scottish Inshore Fisheries Integrated Data Systems (SIFIDS) project. The project reviewed the state-of-the-art remote sensing (geophysical and camera-based) technologies available from industry and compared these to inexpensive, off-the-shelf equipment. Sea trials were conducted on scallop dredge sites and also hand-dived scallop sites. Data was analysed manually, and tests conducted with automated processing methods. It was concluded that geophysical acoustic technologies cannot presently detect individual scallops, but remote sensing technologies can be used for broad scale habitat mapping of scallop harvest areas. Further, the techniques allow for monitoring these areas in terms of scallop dredging impact. Camera (video and still) imagery is effective for scallop counts and provides data that compares favourably with diver-based ground truth information for recording scallop density. Deployment of cameras is possible through inexpensive drop-down camera frames which it is recommended be deployed on a wide area basis for further trials. In addition, implementation of a 'citizen science' approach to wide area recording is suggested to increase the stock assessment across the widest possible variety of seafloor types around Scotland. Armed with such data a full statistical analysis could be completed and the data used with automated processing routines for future long-term monitoring of stock.
APA, Harvard, Vancouver, ISO, and other styles
2

DeVore, Ronald A., Peter G. Binev, and Robert C. Sharpley. Advanced Mathematical Methods for Processing Large Data Sets. Fort Belvoir, VA: Defense Technical Information Center, October 2008. http://dx.doi.org/10.21236/ada499985.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Healey, Glenn. Advanced Methods for Representing and Processing Hyperspectral Image Data. Fort Belvoir, VA: Defense Technical Information Center, March 2012. http://dx.doi.org/10.21236/ada581465.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chouikha, Mohamed F. A Study of Inverse Methods for Processing of Radar Data. Fort Belvoir, VA: Defense Technical Information Center, October 2006. http://dx.doi.org/10.21236/ada462060.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Carter, R. J. Modification and Validation of an Automotive Data Processing Unit, Compressed Video System, and Communications Equipment. Office of Scientific and Technical Information (OSTI), April 1997. http://dx.doi.org/10.2172/2734.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Walsh, Jon. On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems. Office of Scientific and Technical Information (OSTI), August 2015. http://dx.doi.org/10.2172/1213517.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Boyd, Thomas J., and Richard B. Coffin. Isotope Ratio Spectrometry Data Processing Software: Multivariate Statistical Methods for Hydrocarbon Source Identification and Comparison. Fort Belvoir, VA: Defense Technical Information Center, April 2004. http://dx.doi.org/10.21236/ada422798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Selvaraju, Ragul, SHABARIRAJ SIDDESWARAN, and Hariharan Sankarasubramanian. The Validation of Auto Rickshaw Model for Frontal Crash Studies Using Video Capture Data. SAE International, September 2020. http://dx.doi.org/10.4271/2020-28-0490.

Abstract:
Although auto rickshaws are among the most important modes of public transportation in Asian countries, and especially in India, safety standards and regulations for them have not been established to the extent they have for the car segment. Crash simulations have evolved as a means of analyzing vehicle crashworthiness, since crash experiments are costly. This work validates an auto rickshaw model by comparing a frontal crash simulation against a recorded head-on crash video. A MATLAB video processing tool is used to process the crash video and obtain the impact velocity of the frontal crash. The vehicle, modelled in CATIA, is imported into the LS-DYNA simulation environment to perform a frontal crash simulation at the captured speed. The simulation is compared with the crash video at 5, 25, and 40 milliseconds. The comparison shows that the crash patterns of the simulation and the real crash video are similar in detail. The modelled auto rickshaw can thus be used in future work to validate real-world crashes, providing scope for improvement in three-wheeler safety.
9

Labonté, M. Description of computer methods and computer programs for correspondence analysis and use of the dendograph analysis as means of coal data processing. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1989. http://dx.doi.org/10.4095/126758.

