
Dissertations on the topic "Methods of video data processing"

Format your source citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations for your research on the topic "Methods of video data processing".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication in .pdf format and read its abstract online, when these are available in the record's metadata.

Browse dissertations across a wide range of disciplines and compile your bibliography correctly.

1

Karlsson, Linda S. "Spatio-Temporal Pre-Processing Methods for Region-of-Interest Video Coding." Licentiate thesis, Mid Sweden University, Department of Information Technology and Media, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-51.

Abstract:

In video transmission at low bit rates the challenge is to compress the video with a minimal reduction of the perceived quality. The compression can be adapted using knowledge of which regions in the video sequence are of most interest to the viewer. Region-of-interest (ROI) video coding uses this information to control the allocation of bits between the background and the ROI. The aim is to increase the quality in the ROI at the expense of the quality in the background. To achieve this, the typical content of an ROI for a particular application is first determined, the actual detection is performed based on this information, and the allocation of bits is then controlled by the result of the detection.

In this licentiate thesis, existing methods to control bit allocation in ROI video coding are investigated, in particular pre-processing methods that are applied independently of the codec or standard. This makes it possible to apply the method directly to the video sequence without modifications to the codec. Three filters are proposed in this thesis based on previous approaches: a spatial filter that modifies only the background within a single frame, a temporal filter that uses information from the previous frame, and a combination of the two, a spatio-temporal filter. The abilities of these filters to reduce the number of bits necessary to encode the background, and to successfully re-allocate these bits to the ROI, are investigated. In addition, the computational complexities of the algorithms are analysed.

The theoretical analysis is verified by quantitative tests. These include measuring the quality using both the PSNR of the ROI and the border of the background, as well as subjective tests with human test subjects and an analysis of motion vector statistics.

The quantitative analysis shows that the spatio-temporal filter has better coding efficiency than the other filters and successfully re-allocates bits from the background to the ROI. The spatio-temporal filter gives an improvement in average PSNR in the ROI of more than 1.32 dB, or a reduction in bitrate of 31%, compared to the encoding of the original sequence. This result is similar to or slightly better than that of the spatial filter; moreover, the spatio-temporal filter performs better overall, since its computational complexity is lower than that of the spatial filter.
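
These pre-processing filters run entirely outside the codec, so their core idea is easy to sketch: degrade the background before encoding so that a standard encoder spends fewer bits there. The following Python sketch is illustrative only and assumes a binary ROI mask and grayscale frames; the box filter and the blending weight are placeholder choices, not the filters evaluated in the thesis.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def spatio_temporal_filter(frame, roi_mask, prev_filtered,
                               blur_size=7, alpha=0.5):
        """Suppress background detail so an encoder allocates fewer bits there.

        frame         -- current grayscale frame as a float array
        roi_mask      -- boolean array, True inside the region of interest
        prev_filtered -- previously filtered frame, or None for the first frame
        """
        # Spatial part: a low-pass filter removes background detail.
        background = uniform_filter(frame, size=blur_size)
        # Temporal part: blending with the previous filtered frame suppresses
        # background motion, which would otherwise cost bits in inter coding.
        if prev_filtered is not None:
            background = alpha * background + (1.0 - alpha) * prev_filtered
        # The ROI is left untouched; only the background is modified.
        return np.where(roi_mask, frame, background)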

2

Hamlet, Sean Michael. "Comparing Acoustic Glottal Feature Extraction Methods with Simultaneously Recorded High-Speed Video Features for Clinically Obtained Data." UKnowledge, 2012. http://uknowledge.uky.edu/ece_etds/12.

Abstract:
Accurate methods for glottal feature extraction include the use of high-speed video imaging (HSVI). There have been previous attempts to extract these features from the acoustic recording alone, but none of these methods compared their results with an objective reference such as HSVI. This thesis tests these acoustic methods against a large, diverse population of 46 subjects. Two previously studied acoustic methods, as well as one introduced in this thesis, were compared against two video-based measures, glottal area and glottal edge displacement, for open quotient (OQ) estimation. The area comparison proved somewhat ambiguous and challenging due to thresholding effects. The displacement comparison, which is based on glottal edge tracking, proved to be a more robust reference than the area. The first acoustic method's OQ estimate had a relatively small average error of 8.90%, and the second method a relatively large average error of -59.05%, compared to the displacement OQ. The newly proposed method had a relatively small error of -13.75% compared to the displacement OQ. Although the acoustic methods exhibited relatively high error in places, they may be utilized to augment the features collected by HSVI for more accurate glottal feature estimation.
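
The reported percentages read as signed relative errors of each acoustic open-quotient estimate against the video displacement reference. A one-line sketch of that comparison (how errors are averaged across subjects is an assumption here, not stated in the abstract):

    def oq_percent_error(oq_acoustic, oq_video):
        # Signed percentage error of an acoustic OQ estimate relative to the
        # video (displacement-based) reference value.
        return 100.0 * (oq_acoustic - oq_video) / oq_video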
3

Szolgay, Daniel. "Video event detection and visual data processing for multimedia applications." Thesis, Bordeaux 1, 2011. http://www.theses.fr/2011BOR14313/document.

Abstract:
This dissertation (i) describes an automatic procedure for estimating the stopping condition of non-regularized iterative deconvolution methods, based on an orthogonality criterion between the estimated signal and its gradient at a given iteration; (ii) presents a decomposition method that splits the image into geometric (or cartoon) and texture parts using anisotropic diffusion with orthogonality-based parameter estimation and stopping condition, exploiting the principle that the cartoon and texture components of an image should be independent of each other; and (iii) describes a method for moving foreground object extraction in sequences taken by a wearable camera with strong motion, in which camera-motion-compensated frame differencing is enhanced with a novel kernel-based estimation of the probability density function of the background pixels. The presented methods have been thoroughly tested and compared to other similar algorithms from the state of the art.
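
The orthogonality-based stopping rule in (i) can be illustrated with a plain Landweber deconvolution loop that halts once the current estimate becomes nearly orthogonal to the residual-driven update direction. This is a minimal sketch under stated assumptions — a known, symmetric 1-D blur kernel (so convolution is self-adjoint) and a normalized correlation as a stand-in for the thesis's exact criterion:

    import numpy as np

    def landweber_with_orthogonality_stop(y, kernel, step=0.5,
                                          tol=1e-3, max_iter=500):
        """Iterative deconvolution stopped by an orthogonality criterion."""
        conv = lambda a: np.convolve(a, kernel, mode="same")
        x = np.zeros_like(y, dtype=float)
        for _ in range(max_iter):
            # Gradient direction of the data-fit term; the step size assumes
            # a normalized kernel so the iteration stays stable.
            update = conv(y - conv(x))
            denom = np.linalg.norm(x) * np.linalg.norm(update)
            # Near-zero correlation: the update no longer shares a component
            # with the estimated signal, so iterating further mostly adds noise.
            if denom > 0 and abs(np.dot(x, update)) / denom < tol:
                break
            x = x + step * update
        return x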
4

Ліпчанська, Оксана Валентинівна. "Methods of Data Processing and Transmission for the Locomotive Driver's Information Support Subsystem" [Методи обробки та передачі даних для підсистеми інформаційного забезпечення машиніста локомотива]. Thesis, National Technical University "Kharkiv Polytechnic Institute", 2019. http://repository.kpi.kharkov.ua/handle/KhPI-Press/41022.

Abstract:
The thesis is in candidacy for the scientific degree of candidate of technical sciences in specialty 05.13.05 – computer systems and components. – National Technical University "Kharkiv Polytechnic Institute", Kharkiv, 2019. The thesis solves the problem of developing methods for processing and transmitting data for the locomotive driver's information support subsystem. Based on a study of modern methods and means of processing and transmitting data on railway transport, it is established that there is a contradiction between the increase in the speed and intensity of railway traffic, the increase in the number of emergency areas, the growing complexity of the railway transport management system, and the increased requirements for railway transport safety on the one hand, and the applied mathematical apparatus and coding/decoding technologies on the other. The choice of mathematical formalization based on queuing networks and neural networks is justified.

A conceptual model of the wireless segment of the locomotive driver's information support subsystem has been developed; it is built using 4G technology and takes into account QPSK modulation and the interleaving procedure for frame transmission to reduce the probability of erroneous reception while ensuring a high data rate. A mathematical model of the process of transmitting video data over a physical distributed uplink channel is developed; it takes into account the peculiarities of video data transmission in accordance with the LTE standard under the dynamic conditions of the railway main line and allows estimation of a wide range of probabilistic-temporal characteristics of the video data transmission process in the locomotive driver's information support subsystem. A mathematical model of access control to the segment of the locomotive driver's information support subsystem has been developed, which likewise takes these LTE transmission peculiarities into account. The procedure for noise-resistant decoding of streaming video transmitted in the wireless 4G network segment of the locomotive driver's information support subsystem is improved; it increases the efficiency of data transfer, reduces the total number of calculations during decoding, and releases the computing resources of the computerized system. An intelligent system for monitoring the state of dangerous sections of the railway using an adapted convolutional neural network has been developed, which makes it possible to increase the reliability of recognition of dangerous situations at railway crossings and to inform the driver in the event of a critical situation. Practical recommendations for applying the developed methods are substantiated.
5

Ліпчанська, Оксана Валентинівна. "Methods of Data Processing and Transmission for the Locomotive Driver's Information Support Subsystem" [Методи обробки та передачі даних для підсистеми інформаційного забезпечення машиніста локомотива]. Thesis, National Technical University "Kharkiv Polytechnic Institute", 2019. http://repository.kpi.kharkov.ua/handle/KhPI-Press/40900.

6

Pinto, Allan da Silva 1984. "A countermeasure method for video-based face spoofing attacks : Detecção de tentativas de ataque com vídeos digitais em sistemas de biometria de face." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275616.

Abstract:
Advisor: Anderson de Rezende Rocha
Dissertation (Master's) – Universidade Estadual de Campinas, Instituto de Computação
Resumo: The abstract can be viewed in the full text of the digital thesis
Abstract: The complete abstract is available with the full electronic document
Master's
Computer Science
Master in Computer Science
7

Toivonen, T. (Tuukka). "Efficient methods for video coding and processing." Doctoral thesis, University of Oulu, 2008. http://urn.fi/urn:isbn:9789514286957.

Abstract:
This thesis presents several novel improvements to video coding algorithms, including block-based motion estimation, quantization selection, and video filtering. Most of the presented improvements are fully compatible with the standards in general use, including MPEG-1, MPEG-2, MPEG-4, H.261, H.263, and H.264.

For quantization selection, new methods are developed based on rate-distortion theory. The first method obtains a locally optimal frame-level quantization parameter considering frame-wise dependencies. The method is applicable to generic optimization problems, including motion estimation. The second method, aimed at real-time performance, heuristically modulates the quantization parameter in sequential frames, significantly improving rate-distortion performance. It also utilizes multiple reference frames when available, as in H.264. Finally, coding efficiency is improved by introducing a new matching criterion for motion estimation which can estimate the bit rate after transform coding more accurately, leading to better motion vectors.

For fast motion estimation, several improvements on prior methods are proposed. First, fast matching, based on filtering and subsampling, is combined with a state-of-the-art search strategy to create a very quick and high-quality motion estimation method. The successive elimination algorithm (SEA) is also applied to the method, and its performance is improved by deriving a new, tighter lower bound and increasing it with a small constant, which eliminates a larger part of the candidate motion vectors while degrading quality only insignificantly. As an alternative, the multilevel SEA (MSEA) is applied to H.264-compatible motion estimation, efficiently utilizing the various block sizes available in the standard. Then, a new method is developed for refining the motion vector obtained from any fast, suboptimal motion estimation method. The resulting algorithm can be easily adjusted to achieve the desired tradeoff between computational complexity and rate-distortion performance. For refining integer motion vectors to half-pixel resolution, a new, very quick but accurate method is developed based on the mathematical properties of bilinear interpolation.

Finally, novel number theoretic transforms are developed which are best suited for two-dimensional image filtering, including image restoration and enhancement, but the methods are developed with a view to using the transforms also for very reliable motion estimation.
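
The successive elimination algorithm mentioned above rests on a simple bound: the absolute difference of block sums is a lower bound on the SAD, so any candidate whose bound already exceeds the best SAD found so far can be skipped without computing the full SAD. A minimal sketch (without the thesis's tightened bound or its multilevel extension):

    import numpy as np

    def sea_motion_search(cur_block, ref, cx, cy, search_range=8):
        """Full-search block matching accelerated by successive elimination."""
        n = cur_block.shape[0]
        cur_sum = cur_block.sum()
        best_sad, best_mv = np.inf, (0, 0)
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                y, x = cy + dy, cx + dx
                if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                    continue
                cand = ref[y:y + n, x:x + n]
                # SEA bound: |sum(cur) - sum(cand)| <= SAD(cur, cand),
                # so this candidate cannot beat the current best.
                if abs(cur_sum - cand.sum()) >= best_sad:
                    continue
                sad = np.abs(cur_block - cand).sum()
                if sad < best_sad:
                    best_sad, best_mv = sad, (dx, dy)
        return best_mv, best_sad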
8

Jones, Jonathan A. "Nuclear magnetic resonance data processing methods." Thesis, University of Oxford, 1992. http://ora.ox.ac.uk/objects/uuid:7df97c9a-4e65-4c10-83eb-dfaccfdccefe.

Abstract:
This thesis describes the application of a wide variety of data processing methods, in particular the Maximum Entropy Method (MEM), to data from Nuclear Magnetic Resonance (NMR) experiments. Chapter 1 provides a brief introduction to NMR and to data processing, which is developed in chapter 2. NMR is described in terms of the classical model due to Bloch, and the principles of conventional (Fourier transform) data processing are developed. This is followed by a description of less conventional techniques. The MEM is derived on several grounds, and related to both Bayesian reasoning and Shannon information theory. Chapter 3 describes several methods of evaluating the quality of NMR spectra obtained by a variety of data processing techniques; the simple criterion of spectral appearance is shown to be completely unsatisfactory. A Monte Carlo method is described which allows several different techniques to be compared, and the relative advantages of Fourier transformation and the MEM are assessed. Chapter 4 describes in vivo NMR, particularly the application of the MEM to data from Phase Modulated Rotating Frame Imaging (PMRFI) experiments. In this case the conventional data processing is highly unsatisfactory, and MEM processing results in much clearer spectra. Chapter 5 describes the application of a range of techniques to the estimation and removal of splittings from NMR spectra. The various techniques are discussed using simple examples, and then applied to data from the amino acid isoleucine. The thesis ends with five appendices which contain historical and philosophical notes, detailed calculations pertaining to PMRFI spectra, and a listing of the MEM computer program.
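
The MEM reconstruction described here is usually posed as a constrained optimization: among all spectra consistent with the measured free-induction decay, choose the one of maximum entropy. In one common formulation (the notation below is generic, not necessarily the thesis's own):

    \hat{f} = \arg\max_{f \ge 0} \left[ S(f) - \lambda\, \chi^2(f) \right],
    \qquad
    S(f) = -\sum_i \frac{f_i}{F} \log \frac{f_i}{F},
    \qquad
    \chi^2(f) = \sum_k \frac{\left| d_k - (\mathcal{T} f)_k \right|^2}{\sigma_k^2},

where f is the trial spectrum, F = \sum_i f_i, d_k are the measured time-domain data with noise levels \sigma_k, \mathcal{T} maps the spectrum back to the time domain, and \lambda is chosen so that \chi^2 reaches its expected value.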
9

Chen, Jiawen (Jiawen Kevin). "Efficient data structures for piecewise-smooth video processing." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66003.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 95-102).
A number of useful image and video processing techniques, ranging from low-level operations such as denoising and detail enhancement to higher-level methods such as object manipulation and special effects, rely on piecewise-smooth functions computed from the input data. In this thesis, we present two computationally efficient data structures for representing piecewise-smooth visual information and demonstrate how they can dramatically simplify and accelerate a variety of video processing algorithms. We start by introducing the bilateral grid, an image representation that explicitly accounts for intensity edges. By interpreting brightness values as Euclidean coordinates, the bilateral grid enables simple expressions for edge-aware filters. Smooth functions defined on the bilateral grid are piecewise-smooth in image space. Within this framework, we derive efficient reinterpretations of a number of edge-aware filters commonly used in computational photography as operations on the bilateral grid, including the bilateral filter, edge-aware scattered data interpolation, and local histogram equalization. We also show how these techniques can be easily parallelized onto modern graphics hardware for real-time processing of high-definition video.

The second data structure we introduce is the video mesh, designed as a flexible central data structure for general-purpose video editing. It represents objects in a video sequence as 2.5D "paper cutouts" and allows interactive editing of moving objects and modeling of depth, which enables 3D effects and post-exposure camera control. In our representation, we assume that motion and depth are piecewise-smooth, and encode them sparsely as a set of points tracked over time. The video mesh is a triangulation over this point set, and per-pixel information is obtained by interpolation. To handle occlusions and detailed object boundaries, we rely on the user to rotoscope the scene at a sparse set of frames using spline curves. We introduce an algorithm to robustly and automatically cut the mesh into local layers with proper occlusion topology, and propagate the splines to the remaining frames. Object boundaries are refined with per-pixel alpha mattes. At its core, the video mesh is a collection of texture-mapped triangles, which we can edit and render interactively using graphics hardware. We demonstrate the effectiveness of our representation with special effects such as 3D viewpoint changes, object insertion, depth-of-field manipulation, and 2D-to-3D video conversion.
by Jiawen Chen.
Ph.D.
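
The bilateral grid lifts a 2-D image into a coarse 3-D space-intensity volume, where an ordinary Gaussian blur, followed by slicing, acts as an edge-aware filter in image space. Below is a compact grayscale sketch of the three stages (splat, blur, slice); the grid resolutions and blur width are illustrative choices, and scipy is assumed for the smoothing and interpolation:

    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def bilateral_grid_filter(img, sigma_s=16.0, sigma_r=0.1):
        """Approximate bilateral filtering of a [0, 1] grayscale image."""
        h, w = img.shape
        gh = int(h / sigma_s) + 2
        gw = int(w / sigma_s) + 2
        gr = int(1.0 / sigma_r) + 2
        data = np.zeros((gh, gw, gr))    # accumulated intensities
        weight = np.zeros((gh, gw, gr))  # homogeneous coordinate (pixel counts)
        ys, xs = np.mgrid[0:h, 0:w]
        gy = (ys / sigma_s).round().astype(int)
        gx = (xs / sigma_s).round().astype(int)
        gz = (img / sigma_r).round().astype(int)
        # Splat: accumulate every pixel into the coarse space-intensity grid.
        np.add.at(data, (gy, gx, gz), img)
        np.add.at(weight, (gy, gx, gz), 1.0)
        # Blur: Gaussian smoothing inside the grid is edge-aware in image
        # space, because pixels across a strong edge land in distant cells.
        data = gaussian_filter(data, sigma=1.0)
        weight = gaussian_filter(weight, sigma=1.0)
        # Slice: read the grid back at each pixel's (y, x, intensity) position.
        coords = np.stack([ys / sigma_s, xs / sigma_s, img / sigma_r])
        num = map_coordinates(data, coords.reshape(3, -1), order=1)
        den = map_coordinates(weight, coords.reshape(3, -1), order=1)
        return (num / np.maximum(den, 1e-8)).reshape(h, w)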
10

Grundmann, Matthias. "Computational video: post-processing methods for stabilization, retargeting and segmentation." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47596.

Abstract:
In this thesis, we address a variety of challenges for analysis and enhancement of Computational Video. We present novel post-processing methods to bridge the difference between professional and casually shot videos mostly seen on online sites. Our research presents solutions to three well-defined problems: (1) video stabilization and rolling shutter removal in casually shot, uncalibrated videos; (2) content-aware video retargeting; and (3) spatio-temporal video segmentation to enable efficient video annotation. We showcase several real-world applications building on these techniques.

We start by proposing a novel algorithm for video stabilization that generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions. We compute camera paths that are optimally partitioned into constant, linear and parabolic segments mimicking the camera motions employed by professional cinematographers. To achieve this, we propose a linear programming framework to minimize the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond conventional filtering, which only suppresses high-frequency jitter. An additional challenge in videos shot from mobile phones is rolling shutter distortion. Modern CMOS cameras capture the frame one scanline at a time, which results in non-rigid image distortions such as shear and wobble. We propose a solution based on a novel mixture model of homographies parametrized by scanline blocks to correct these rolling shutter distortions. Our method does not rely on a priori knowledge of the readout time, nor does it require prior camera calibration. Our novel video stabilization and calibration-free rolling shutter removal have been deployed on YouTube, where they have successfully stabilized millions of videos. We also discuss several extensions to the stabilization algorithm and present technical details behind the widely used YouTube Video Stabilizer.

We address the challenge of changing the aspect ratio of videos by proposing algorithms that retarget videos to fit the form factor of a given device without stretching or letter-boxing. Our approaches use all of the screen's pixels, while striving to deliver as much of the original video content as possible. First, we introduce a new algorithm that uses discontinuous seam-carving in both space and time for resizing videos. Our algorithm relies on a novel appearance-based temporal coherence formulation that allows for frame-by-frame processing and results in temporally discontinuous seams, as opposed to geometrically smooth and continuous seams. Second, we present a technique that builds on the above-mentioned video stabilization approach: we effectively automate classical pan-and-scan techniques by smoothly guiding a virtual crop window via saliency constraints.

Finally, we introduce an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. We begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a "region graph" over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high-quality segmentations and allows subsequent applications to choose from varying levels of granularity. We demonstrate the use of spatio-temporal segmentation as users interact with the video, enabling efficient annotation of objects within the video.
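
For a single motion parameter, the L1-optimal path idea reduces to a small linear program: find a new path that stays within a window of the original while minimizing the weighted L1 norms of its first three differences (constant, linear and parabolic segments emerge because L1 penalties drive most differences exactly to zero). A sketch for a 1-D path using scipy's linprog; the weights and window radius are illustrative, not the deployed system's values:

    import numpy as np
    from scipy.optimize import linprog

    def l1_stabilize(path, radius=10.0, w=(10.0, 1.0, 100.0)):
        """L1-optimal smoothing of a 1-D camera path (needs len(path) >= 4)."""
        n = len(path)
        D = [np.diff(np.eye(n), k, axis=0) for k in (1, 2, 3)]  # difference ops
        sizes = [d.shape[0] for d in D]
        nv = n + sum(sizes)                  # path values plus slack variables
        cost = np.concatenate([np.zeros(n)] +
                              [wk * np.ones(m) for wk, m in zip(w, sizes)])
        rows, rhs, offset = [], [], n
        for Dk, m in zip(D, sizes):
            P = np.zeros((m, nv)); P[:, :n] = Dk
            S = np.zeros((m, nv)); S[:, offset:offset + m] = np.eye(m)
            rows += [P - S, -P - S]          # -e <= Dk p <= e  (slack e >= 0)
            rhs += [np.zeros(m), np.zeros(m)]
            offset += m
        bounds = [(p - radius, p + radius) for p in path] + [(0, None)] * (nv - n)
        res = linprog(cost, A_ub=np.vstack(rows), b_ub=np.concatenate(rhs),
                      bounds=bounds, method="highs")
        return res.x[:n]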
11

Fookes, Gregory Peter Gwyn. "Interactive geophysical data processing with eigendecomposition methods." Thesis, Birkbeck (University of London), 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336344.

12

Varnavas, Andreas Soteriou. "Signal processing methods for EEG data classification." Thesis, Imperial College London, 2008. http://hdl.handle.net/10044/1/11943.

13

Altilar, Deniz Turgay. "Data partitioning and scheduling for parallel digital video processing." Thesis, Queen Mary, University of London, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.399730.

14

Dickinson, Keith William. "Traffic data capture and analysis using video image processing." Thesis, University of Sheffield, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306374.

15

Li, Wei. "Hierarchical Summarization of Video Data." University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1186941444.

16

Do, Quoc Bao. "Adaptive Post-processing Methods for Film and Video Quality Enhancement." Paris 13, 2011. http://www.theses.fr/2011PA132030.

Abstract:
The introduction of new digital processing and coding techniques for visual content in the film industry has allowed filmmakers to achieve great technological and commercial advances. Indeed, the automation of certain complex tasks has enabled productivity gains and advances in reliability and technical accuracy. Picture quality is one of the most important factors in the film industry. The main objective of this thesis is therefore to propose new methods for improving the quality of high-definition video in the context of digital cinema. Here we focus on some known annoying artifacts and distortions; a new and less studied artifact occurring during the color processing of the film is also analyzed. All the proposed solutions are developed in a highly constrained environment dictated by the cinema post-production framework. The performance of the developed methods is evaluated using objective measures and criteria. The obtained results show that the proposed methods can provide efficient solutions for improving HD film quality. Some perspectives for extending these solutions to other visual content are considered.
17

Gu, Lifang. "Video analysis in MPEG compressed domain." University of Western Australia. School of Computer Science and Software Engineering, 2003. http://theses.library.uwa.edu.au/adt-WU2003.0016.

Abstract:
The amount of digital video has been increasing dramatically due to technology advances in video capturing, storage, and compression. The usefulness of vast repositories of digital information is limited by the effectiveness of the access methods, as shown by the Web explosion. The key issues in addressing the access methods are those of content description and of information space navigation. While textual documents in digital form are somewhat self-describing (i.e., they provide explicit indices, such as words and sentences, that can be directly used to categorise and access them), digital video does not provide such an explicit content description. In order to access video material in an effective way, without looking at the material in its entirety, it is therefore necessary to analyse and annotate video sequences and provide an explicit content description targeted to the user's needs. Digital video is a very rich medium, and the characteristics in which users may be interested are quite diverse, ranging from the structure of the video to the identity of the people who appear in it, their movements and dialogues, and the accompanying music and audio effects. Indexing digital video based on its content can be carried out at several levels of abstraction, beginning with indices like the video program name and name of subject, down to much lower-level aspects of video like the location of edits and motion properties of video.

Manual video indexing requires the sequential examination of the entire video clip. This is a time-consuming, subjective, and expensive process. As a result, there is an urgent need for tools to automate the indexing process. In response to such needs, various video analysis techniques from the research fields of image processing and computer vision have been proposed to parse, index and annotate the massive amount of digital video data. However, most of these video analysis techniques have been developed for uncompressed video. Since most video data are stored in compressed formats for efficiency of storage and transmission, it is necessary to perform decompression on compressed video before such analysis techniques can be applied. Two consequences of having to first decompress before processing are the computation time incurred by decompression and the extra auxiliary storage required.

To save on the computational cost of decompression and lower the overall size of the data which must be processed, this study attempts to make use of features available in compressed video data and proposes several video processing techniques operating directly on compressed video data. Specifically, techniques for processing MPEG-1 and MPEG-2 compressed data have been developed to help automate the video indexing process. This includes the tasks of video segmentation (shot boundary detection), camera motion characterisation, and highlights extraction (detection of skin-colour regions, text regions, moving objects and replays) in MPEG compressed video sequences. The approach of performing analysis on the compressed data has the advantage of dealing with a much reduced data size and is therefore suitable for computationally intensive low-level operations. Experimental results show that most analysis tasks for video indexing can be carried out efficiently in the compressed domain. Once intermediate results, which are dramatically reduced in size, are obtained from the compressed-domain analysis, partial decompression can be applied to enable high-resolution processing to extract high-level semantic information.
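
One staple of compressed-domain analysis is cut detection from DC coefficients alone: each 8 × 8 intra block contributes one DC term, giving a thumbnail of the frame without full decoding. The sketch below assumes the DC images have already been parsed out of the MPEG bitstream and shows only one common comparison criterion (histogram intersection); the bin count and threshold are illustrative:

    import numpy as np

    def detect_cuts(dc_frames, bins=32, threshold=0.6):
        """Flag likely shot boundaries from per-frame DC-coefficient images.

        dc_frames -- iterable of 2-D arrays with DC values in [0, 255].
        """
        cuts, prev = [], None
        for t, dc in enumerate(dc_frames):
            hist, _ = np.histogram(dc, bins=bins, range=(0, 256))
            hist = hist / max(hist.sum(), 1)
            # Histogram intersection stays near 1 within a shot and drops
            # sharply at a cut.
            if prev is not None and np.minimum(hist, prev).sum() < threshold:
                cuts.append(t)
            prev = hist
        return cuts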
18

Jung, Agata. "Comparison of Video Quality Assessment Methods." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15062.

Abstract:
Context: The newest video coding standard, High Efficiency Video Coding (HEVC), needs appropriate quality assessment to fully exploit its potential. Many video quality assessment methods exist; they are necessary to establish the quality of a video. Objectives: This thesis compares video quality assessment methods. The objective is to find out which objective method agrees most closely with subjective assessment. The videos used in the tests are encoded in the H.265/HEVC standard. Methods: The MSE, PSNR, and SSIM methods were tested with purpose-written MATLAB software; the VQM method was tested with downloaded software. Results and conclusions: For videos watched on a mobile device, PSNR is the most similar to the subjective metric, whereas for videos watched on a television screen, VQM is the most similar to the subjective metric. Keywords: Video Quality Assessment, Video Quality Prediction, Video Compression, Video Quality Metrics
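
Of the metrics compared, MSE and PSNR are direct functions of the pixel differences and fit in a few lines (8-bit frames assumed); SSIM and VQM are more involved and are left to the dedicated tools mentioned above:

    import numpy as np

    def mse(a, b):
        return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

    def psnr(a, b, peak=255.0):
        # Per-frame PSNR; sequence scores are typically averaged over frames
        # before being correlated with subjective opinion scores.
        m = mse(a, b)
        return np.inf if m == 0 else 10.0 * np.log10(peak ** 2 / m)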
19

Azambuja, Marcello de Lima. "A Cloud Computing Architecture for Large Scale Video Data Processing." Pontifícia Universidade Católica do Rio de Janeiro, 2011. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28923@1.

Abstract:
The advent of the Internet poses great challenges to the design of public submission systems, as it eliminates traditional barriers such as geographical location and cost. With open global access, it is very hard to estimate the storage space and processing power required by this class of applications. In this thesis we explore cloud computing technology as an alternative solution. The main contribution of this work is a general architecture in which to build open-access, data-intensive public submission systems. A real-world scenario is analyzed using this architecture for video processing.
20

Vörös, Robert. "New processing methods for multicomponent point receiver data." [S.l.] : [s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=967685079.

21

Clapp, T. C. "Statistical methods for the processing of communications data." Thesis, University of Cambridge, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.597697.

Abstract:
This thesis describes the use of methods derived from Bayesian statistics for the problem of blind equalisation of communications channels, although much of this work is applicable to the more general problem of blind deconvolution. In order to allow general models to be incorporated, numerical methods are used; the focus is on Markov chain Monte Carlo (MCMC) methods for processing blocks of data and on particle filters for sequential processing. In order to obtain the best performance using MCMC, the choice of the Markov chain needs tailoring to the application at hand. Joint sampling of all the states (the transmitted data sequence) and reversible-jump moves to combat delay ambiguity are proposed. The use of particle filters is still in its infancy, and much of the focus is on the development of strategies to improve their applicability to real problems. It is well known that fixed-lag methods may be used to great effect on Markovian models where later observations can provide information about states in the recent past. Methods of performing fixed-lag simulation for incorporation into particle filters are described. The use of data windowing on fixed-parameter systems allows regeneration of the parameters at each time step without excessive storage requirements. In certain cases it is difficult to perform the updating when a new data point is received in a single step. The novel concept of introducing intermediate densities between time steps, in a manner akin to simulated annealing, is described. This improves robustness and provides a natural method for initialisation. All of these techniques are demonstrated in simulations based upon standard models of communications systems, along with favourable comparisons to more conventional techniques.
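
As background for the sequential part, a bootstrap particle filter in its plainest form fits in a few lines; this is only the generic skeleton (scalar state, Gaussian noise), not the fixed-lag, data-windowed, or annealed variants developed in the thesis:

    import numpy as np

    def bootstrap_particle_filter(ys, f, h, q_std, r_std, n=500, seed=0):
        """Minimal bootstrap particle filter for a scalar state-space model.

        ys    -- sequence of observations y_t
        f, h  -- vectorized state-transition and observation functions
        """
        rng = np.random.default_rng(seed)
        x = rng.normal(0.0, 1.0, n)                # initial particle cloud
        means = []
        for y in ys:
            x = f(x) + rng.normal(0.0, q_std, n)   # propagate through dynamics
            w = np.exp(-0.5 * ((y - h(x)) / r_std) ** 2)  # Gaussian likelihood
            w = w + 1e-300                         # guard against all-zero weights
            w /= w.sum()
            x = x[rng.choice(n, n, p=w)]           # resample by weight
            means.append(x.mean())                 # posterior-mean estimate
        return np.array(means)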
22

Bostanudin, Nurul Jihan Farhah. "Computational methods for processing ground penetrating radar data." Thesis, University of Portsmouth, 2013. https://researchportal.port.ac.uk/portal/en/theses/computational-methods-for-processing-ground-penetrating-radar-data(d519f94f-04eb-42af-a504-a4c4275d51ae).html.

Abstract:
The aim of this work was to investigate signal processing and analysis techniques for Ground Penetrating Radar (GPR) and its use in the civil engineering and construction industry. GPR is the general term applied to techniques which employ radio waves, typically in the megahertz and gigahertz range, to map structures and features buried in the ground or in man-made structures. GPR measurements can suffer from a large amount of noise. This is primarily caused by interference from other radio-wave-emitting devices (e.g., cell phones, radios, etc.) present in the surrounding area of the GPR system during data collection. In addition to noise, the presence of clutter – reflections from other non-target objects buried underground in the vicinity of the target – can make GPR measurements difficult to understand and interpret, even for skilled human GPR analysts. This thesis is concerned with the improvements and processes that can be applied to GPR data in order to enhance the target detection and characterisation process, particularly with multivariate signal processing techniques. These primarily include Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Both techniques have been investigated, implemented and compared regarding their ability to separate the target-originating signals from the noise and clutter-type signals present in the data. A combination of PCA and ICA (SVDPICA) and two-dimensional PCA (2DPCA) are the specific approaches adopted and further developed in this work. The ability of those methods to reduce the amount of clutter and unwanted signals present in GPR data has been investigated and is reported in this thesis, suggesting that their use in automated analysis of GPR images is a possibility. Further analysis carried out in this work concentrated on analysing the performance of the developed multivariate signal processing techniques while investigating the possibility of identifying and characterising the features of interest in pre-processed GPR images. The driving idea behind this part of the work was to extract the resonant modes present in the individual traces of each GPR image and to use the properties of those poles to characterise the target. Three related but different methods have been implemented and applied in this work – Extended Prony, Linear Prediction Singular Value Decomposition and Matrix Pencil methods. In addition to these approaches, the PCA technique has been used to reduce the dimensionality of extracted traces and to compare signals measured in various experimental setups. Performance analysis shows that Matrix Pencil offers the best results.
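
Of the multivariate techniques named here, PCA-based clutter removal is the most compact to illustrate: in a B-scan, flat-lying clutter (ground bounce, antenna ringing) is nearly identical across traces, so it concentrates in the leading singular vectors and can be subtracted. A minimal sketch, simplified relative to the SVDPICA and 2DPCA variants developed in the thesis:

    import numpy as np

    def pca_clutter_removal(bscan, n_remove=1):
        """Suppress clutter in a GPR B-scan (n_traces x n_samples) by zeroing
        the dominant principal components, which capture trace-invariant
        clutter rather than localized target reflections."""
        U, s, Vt = np.linalg.svd(bscan, full_matrices=False)
        s[:n_remove] = 0.0
        return (U * s) @ Vt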
23

Wang, Jue. "Foreground segmentation in images and video: methods, systems, and applications." Thesis, University of Washington, 2007. http://hdl.handle.net/1773/6130.

24

So, Wai-ki (蘇慧琪). "Shadow identification in traffic video sequences." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B32045967.

25

Hoya, Tetsuya. "Graph theoretic methods for data partitioning." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286542.

26

Coria, Mendoza Lino Evgueni. "Low-complexity methods for image and video watermarking." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/571.

Abstract:
For digital media, the risk of piracy is aggravated by the ease of copying and distributing content. Watermarking has become the technology of choice for discouraging people from creating illegal copies of digital content. Watermarking is the practice of imperceptibly altering the media content by embedding a message, which can be used to identify the owner of that content. A watermark message can also be a set of instructions for the display equipment, providing information about the content's usage restrictions. Several applications are considered and three watermarking solutions are provided.

First, applications such as owner identification, proof of ownership, and digital fingerprinting are considered, and a fast content-dependent image watermarking method is proposed. The scheme offers a high degree of robustness against distortions, mainly additive noise, scaling, low-pass filtering, and lossy compression, while requiring only a small amount of computation. The method generates a set of evenly distributed codewords constructed via an iterative algorithm. Every message bit is represented by one of these codewords and is then embedded in one of the image's 8 × 8 pixel blocks. The information in that particular block is used in the embedding so as to ensure robustness and image fidelity.

Two watermarking schemes designed to prevent theatre camcorder piracy are also presented. In these methods, the video is watermarked so that its display is not permitted if a compliant video player detects the watermark. A watermark that is robust to geometric distortions (rotation, scaling, cropping) and lossy compression is required in order to block access to media content that has been recorded with a camera inside a movie theatre. The proposed algorithms take advantage of the properties of the dual-tree complex wavelet transform (DT CWT). This transform offers the advantages of both the regular and the complex wavelets (perfect reconstruction, approximate shift invariance and good directional selectivity). Our methods use these characteristics to create watermarks that are robust to geometric distortions and lossy compression. The proposed schemes are simple to implement and outperform comparable methods when tested against geometric distortions.
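
Block-based embedding can be illustrated with the simplest additive scheme: one bit per 8 × 8 block, carried by a keyed zero-mean pattern and read back by correlation. This is a generic spread-spectrum sketch, not the codeword construction or the DT CWT scheme of the thesis:

    import numpy as np

    def embed_bit(block, bit, key, strength=2.0):
        """Embed one message bit into an 8x8 block using a keyed pattern."""
        rng = np.random.default_rng(key)
        pattern = rng.choice([-1.0, 1.0], size=(8, 8))  # zero-mean codeword
        return block + strength * (1.0 if bit else -1.0) * pattern

    def detect_bit(block, key):
        rng = np.random.default_rng(key)
        pattern = rng.choice([-1.0, 1.0], size=(8, 8))
        # The sign of the correlation recovers the bit; image content acts
        # as noise that averages out against the zero-mean pattern.
        return np.sum(block * pattern) > 0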
27

Kee, Eric. "Video Mosaicking Using Ancillary Data to Facilitate Size Estimation." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/32693.

Abstract:
This thesis describes a mosaicking system designed to generate image mosaics that facilitate size estimation of 3-dimensional objects by improving data obtained with a multi-sensor video camera. The multi-sensor camera is equipped with a pulse laser-rangefinder and internally mounted inclinometers that measure instrument orientation about three axes. Using orientation data and video data, mosaics are constructed to reduce orientation data errors by augmenting orientation data with image information. Mosaicking is modeled as a 7-step refinement process: 1) an initial mosaic is constructed using orientation information obtained from the camera's inclinometers; 2) mosaics are refined by using coarse-to-fine processing to minimize an energy metric and, consequently, align overlapping video frames; 3) pair-wise mosaicking errors are detected, and removed, using an energy-based confidence metric; 4) mosaic accuracy is refined via color analysis; 5) mosaic accuracy is refined by estimating an affine transformation to align overlapping frames; 6) affine transformation approximations between overlapping video frames are used to reduce image noise through super-resolution; 7) original orientation data are corrected given the refined orientations of images within the mosaic. The mosaicking system has been tested using objects of known size and orientation accuracy has been improved by 86% for these cases.
Master of Science
28

Prabhakar, Aditya 1978. "A data processing subsystem for the Holo-Chidi video concentrator card." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86838.

Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 75-76).
by Aditya Prabhakar.
M.Eng.
29

Murphy, John Frank. "Methods for Collection and Processing of Gene Expression Data." Diss., Pasadena, Calif.: California Institute of Technology, 2005. http://resolver.caltech.edu/CaltechETD:etd-06252004-101813.

30

Khondoker, Md Mizanur Rahman. "Statistical methods for pre-processing microarray gene expression data." Thesis, University of Edinburgh, 2006. http://hdl.handle.net/1842/12367.

Abstract:
A novel method is developed for combining multiple laser scans of microarrays to correct for "signal saturation" and "signal deterioration" effects in gene expression measurement. A multivariate nonlinear functional regression model with Cauchy-distributed errors having additive plus multiplicative scale is proposed as a model for combining multiple-scan data. The model has been found to describe flexibly the nonlinear relationship in multiple-scan data. The heavy-tailed Cauchy distribution with additive plus multiplicative scale provides a basis for objective and robust estimation of gene expression from multiple-scan data, adjusting for censoring and deterioration bias in the observed intensity. By combining multiple scans, the model reduces sampling variability in the gene expression estimates.

A unified approach for nonparametric location and scale normalisation of log-ratio data is considered. A Generalised Additive Model for Location, Scale and Shape (GAMLSS) is proposed. GAMLSS uses a nonparametric approach for modelling both location and scale of log-ratio data, in contrast to the general tendency of using a parametric transformation, such as arcsinh, for variance stabilisation. Simulation studies demonstrate GAMLSS to be more powerful than the parametric method when a GAMLSS location and scale model, fitted to real data, is assumed correct. GAMLSS has been found to be as powerful as the parametric approach even when the parametric model is appropriate.

Finally, we investigate the optimality of different estimation methods for analysing functional regression models. Alternative estimators are available in the literature to deal with the problems of identifiability and consistency. We investigated these estimators in terms of unbiasedness and efficiency for a specific case involving multiple laser scans of microarrays, and found that, in addition to being consistent, these methods are highly efficient and unbiased.
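
The multiple-scan model can be caricatured in a few lines: a robust fit with Cauchy errors whose scale has additive and multiplicative parts. The sketch below fits a single proportionality between two scans by minimizing the Cauchy negative log-likelihood; it is a one-parameter stand-in for the multivariate functional regression actually proposed:

    import numpy as np
    from scipy.optimize import minimize

    def fit_cauchy_scans(x, y):
        """Fit y ~ Cauchy(beta * x, a + b * |beta * x|) for two scan intensities."""
        def nll(params):
            beta, log_a, log_b = params
            mu = beta * x
            # Additive plus multiplicative scale, kept positive via log params.
            scale = np.exp(log_a) + np.exp(log_b) * np.abs(mu)
            z = (y - mu) / scale
            # Cauchy negative log-likelihood (constants dropped).
            return np.sum(np.log(scale) + np.log1p(z ** 2))
        res = minimize(nll, x0=[1.0, 0.0, -2.0], method="Nelder-Mead")
        beta, log_a, log_b = res.x
        return beta, np.exp(log_a), np.exp(log_b)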
31

Crolene, Robert. "Systems and Methods to Reduce Data Processing Turnaround Time." International Foundation for Telemetering, 1986. http://hdl.handle.net/10150/615397.

Abstract:
International Telemetering Conference Proceedings / October 13-16, 1986 / Riviera Hotel, Las Vegas, Nevada
Weapon system complexity and its data expression have become a central issue for the Range Directorate at the Pacific Missile Test Center (PMTC). Increasing data complexity and data product turnaround requirements have created a technological push-pull on traditional data processing methods. Several possible responses are discussed, including distributed front- and back-end processing relative to the large mainframes and increasing use of artificial intelligence techniques in the data reduction area. These methods are going through progressive steps of implementation at PMTC with some notable success.
32

Gan, Man-chuen Abel (顔文川). "Developable surfaces: flattening methods and applications." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B43893661.

33

Wu, Liming. "Spectral methods for post processing of airborne vector gravity data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/mq20889.pdf.

34

Gallay, Michal. "Assessing alternative methods for acquiring and processing digital elevation data." Thesis, Queen's University Belfast, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.534738.

35

DeMaio, William (William Aloysius). "Data processing and inference methods for zero knowledge nuclear disarmament." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106698.

Full text source
Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 63-64).
It is hoped that future nuclear arms control treaties will call for the dismantlement of stored nuclear warheads. To make the authenticated decommissioning of nuclear weapons agreeable, methods must be developed to validate the structure and composition of nuclear warheads without making it possible to gain knowledge of these attributes. Nuclear resonance fluorescence (NRF) imaging potentially enables the physically-encrypted verification of nuclear weapons in a manner that would meet treaty requirements. This thesis examines the physics behind NRF, develops tools for processing resonance data, establishes methodologies for simulating information gain during warhead verification, and tests potential inference processes. The influences of several inference parameters are characterized, and success is shown in predicting the properties of an encrypting foil and the thickness of a warhead in a one-dimensional verification scenario.
by William DeMaio.
S.B.
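As a toy illustration of one-dimensional inference from transmission data (not the thesis's NRF pipeline), the sketch below recovers an absorber thickness from Poisson counts under exponential attenuation; the attenuation coefficient and count rates are invented for the example.

```python
import numpy as np
from scipy.stats import poisson

MU = 0.15        # assumed attenuation coefficient, 1/mm (invented)
INCIDENT = 1e5   # incident photons per measurement (invented)

rng = np.random.default_rng(1)
true_thickness = 12.0  # mm, the quantity to be inferred
observed = rng.poisson(INCIDENT * np.exp(-MU * true_thickness))

# Grid-based posterior over thickness with a flat prior.
grid = np.linspace(0.0, 30.0, 3001)
log_post = poisson.logpmf(observed, INCIDENT * np.exp(-MU * grid))
log_post -= log_post.max()          # stabilise before exponentiating
post = np.exp(log_post)
post /= post.sum()                  # normalise on the grid

mean = float((grid * post).sum())
print(f"observed counts: {observed}, posterior mean thickness: {mean:.2f} mm")
```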
APA, Harvard, Vancouver, ISO, and other styles
36

Arshad, Norhashim Mohd. "Real-time data compression for machine vision measurement systems." Thesis, Liverpool John Moores University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285284.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
37

Iya, Nuruddeen Mohammed. "A multi-strategy approach for congestion-aware real-time video." Thesis, University of Aberdeen, 2015. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=228569.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
38

Aved, Alexander. "Scene Understanding for Real Time Processing of Queries over Big Data Streaming Video." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5597.

Full text source
Abstract:
With heightened security concerns across the globe and the increasing need to monitor, preserve and protect infrastructure and public spaces to ensure proper operation, quality assurance and safety, numerous video cameras have been deployed. Accordingly, these cameras also need to be monitored effectively and efficiently. However, relying on human operators to constantly monitor all the video streams is not scalable or cost-effective. Human operators can become subjective or fatigued, may exhibit bias, and find it difficult to maintain high levels of vigilance when capturing, searching and recognizing events that occur infrequently or in isolation. These limitations are addressed in the Live Video Database Management System (LVDBMS), a framework for managing and processing live motion imagery data. It enables rapid development of video surveillance software, much as traditional database applications are developed today. Video stream processing applications and ad hoc queries developed in this way are able to "reuse" advanced image processing techniques that have already been developed, which lowers software development and maintenance costs. Furthermore, the LVDBMS can be intensively tested to ensure consistent quality across all associated video database applications. Its intrinsic privacy framework facilitates a formalized approach to the specification and enforcement of verifiable privacy policies. This is an important step towards enabling a general privacy certification for video surveillance systems by leveraging a standardized privacy specification language. With the potential to impact many important fields, ranging from security and assembly-line monitoring to wildlife studies and the environment, the broader impact of this work is clear. The privacy framework protects the general public from abusive use of surveillance technology; success in addressing the "trust" issue will enable many new surveillance-related applications. Although this research focuses on video surveillance, the proposed framework has the potential to support many video-based analytical applications.
Ph.D.
Doctorate
Computer Science
Engineering and Computer Science
Computer Science
APA, Harvard, Vancouver, ISO, and other styles
39

Koriziz, Hariton. "Signal processing methods for the modelling and prediction of financial data." Thesis, Imperial College London, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.504921.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
40

Makarov, Alexander. "Application of finite state methods to shape coding and processing in object-based video." Thesis, Staffordshire University, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368316.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
41

Lane, Dallas W. "Signal processing methods for airborne lidar bathymetry." Title page, table of contents and abstract only, 2001. http://web4.library.adelaide.edu.au/theses/09ENS/09ensl265.pdf.

Full text source
Abstract:
“August 2001.” Includes bibliographical references (leaves 77-80). Through analysis of the detected laser return waveform data, the thesis examines the susceptibility of existing signal processing methods to errors and identifies other possible causes of depth error not accounted for by those methods. Methods to improve depth accuracy are investigated.
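A minimal sketch of the waveform-processing idea, assuming a digitised return with separate surface and bottom peaks; the sample rate, detection thresholds, and the simple two-peak rule are illustrative assumptions rather than the thesis's method.

```python
import numpy as np
from scipy.signal import find_peaks

SAMPLE_RATE = 1e9   # 1 GS/s digitiser (assumed)
C_WATER = 2.25e8    # speed of light in water, m/s

def depth_from_waveform(waveform):
    """Estimate water depth from the surface/bottom return separation."""
    peaks, props = find_peaks(waveform, height=0.05, distance=5)
    if len(peaks) < 2:
        return None  # no resolvable bottom return
    # Take the two strongest returns: earlier = surface, later = bottom.
    order = np.argsort(props["peak_heights"])[::-1][:2]
    t_surface, t_bottom = sorted(peaks[order])
    dt = (t_bottom - t_surface) / SAMPLE_RATE
    return C_WATER * dt / 2.0  # two-way travel time, no slant correction

# Synthetic waveform: surface return at sample 100, bottom at sample 160.
t = np.arange(400)
wave = np.exp(-((t - 100) / 4.0) ** 2) + 0.4 * np.exp(-((t - 160) / 6.0) ** 2)
print(f"depth = {depth_from_waveform(wave):.2f} m")
```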
APA, Harvard, Vancouver, ISO, and other styles
42

Begaint, Jean. "Towards novel inter-prediction methods for image and video compression." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S038/document.

Full text source
Abstract:
Due to the wide availability of video cameras and new social media practices, as well as the emergence of cloud services, images and videos constitute today a significant share of the total data transmitted over the internet: video streaming applications account for more than 70% of world internet bandwidth, billions of images are already stored in the cloud, and millions are uploaded every day. The ever-growing streaming and storage requirements of these media call for constant improvement of image and video coding tools. This thesis explores novel approaches for improving current inter-prediction methods. Such methods leverage redundancies between similar frames, and were originally developed in the context of video compression. In a first approach, novel global and local inter-prediction tools are combined to improve the efficiency of image-set compression schemes based on video codecs; by coupling a global geometric and photometric compensation with a locally linear prediction, significant improvements can be obtained. A second approach introduces a region-based inter-prediction scheme which improves coding performance over existing solutions by estimating and compensating geometric and photometric distortions at a semi-local level. This approach is then adapted and validated in the context of video compression, where bit-rate improvements are obtained, especially for sequences displaying complex real-world motions such as zooms and rotations. The last part of the thesis focuses on deep learning approaches for inter-prediction. Deep neural networks have shown striking results on a large number of computer vision tasks in recent years; deep-learning-based methods originally proposed for frame interpolation are studied here in the context of video compression. Coding performance improvements over traditional motion estimation and compensation methods highlight the potential of these deep architectures.
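As a hedged illustration of the global photometric step mentioned above, the sketch below fits a single gain/offset pair by least squares; the geometric warp and the locally linear prediction of the thesis are deliberately omitted, and the frame data are synthetic.

```python
import numpy as np

def photometric_compensation(reference, current):
    """Fit current ~= gain * reference + offset in the least-squares sense."""
    x = reference.astype(np.float64).ravel()
    y = current.astype(np.float64).ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)
    (gain, offset), *_ = np.linalg.lstsq(A, y, rcond=None)
    return gain * reference + offset

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
cur = 1.2 * ref - 15.0 + rng.normal(0, 2, size=ref.shape)  # exposure change

pred = photometric_compensation(ref, cur)
print("residual energy before:", float(((cur - ref) ** 2).mean()))
print("residual energy after: ", float(((cur - pred) ** 2).mean()))
```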
APA, Harvard, Vancouver, ISO, and other styles
43

Arts, Daniëlle Geertruida Theodora. "Information for intensive care evaluation methods to assess and improve data quality and data processing /." [S.l. : Amsterdam : s.n.] ; Universiteit van Amsterdam [Host], 2005. http://dare.uva.nl/document/79006.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
44

Dameh, Mustafa. "Insights into gene interactions using computational methods for literature and sequence resources." University of Otago. Department of Anatomy & Structural Biology, 2008. http://adt.otago.ac.nz./public/adt-NZDU20090109.095349.

Full text source
Abstract:
At the beginning of this century many sequencing projects were finalised. As a result, an overwhelming amount of literature and sequence data has become available to biologists via online bioinformatics databases. These data have led to a better understanding of many organisms and have helped identify genes. However, there is still much to learn about the functions and interactions of genes. This thesis is concerned with predicting gene interactions using two main online resources: biomedical literature and sequence data. The biomedical literature is used to explore and refine a text mining method, known as the "co-occurrence method", which is used to predict gene interactions. The sequence data are used in an analysis to predict an upper bound on the number of genes involved in gene interactions. The co-occurrence method of text mining was extensively explored in this thesis. The effects of certain computational parameters on the relevance of documents in which two genes co-occur were critically examined. The results showed that some computational parameters do have an impact on the outcome of the co-occurrence method and, if taken into consideration, can lead to better identification of documents that describe gene interactions. To explore the co-occurrence method, a prototype system was developed; as a result, it contains unique functions that are not present in currently available text mining systems. Sequence data were used to predict the upper bound of the number of genes involved in gene interactions within a tissue. A novel approach was undertaken that analysed SAGE and EST sequence libraries using ecological estimation methods. The approach shows that the species accumulation theory used in ecology can be applied to tag libraries (SAGE or EST) to predict an upper bound on the number of mRNA transcript species in a tissue. The novel computational analysis provided in this study can be used to extend the body of knowledge and insights relating to gene interactions and, hence, provide a better understanding of genes and their functions.
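As a hedged sketch of the ecological-estimation step, the snippet below applies the Chao1 richness estimator to a toy tag library; the abstract does not say which species-accumulation estimator the thesis uses, so the choice of Chao1 here is an assumption.

```python
from collections import Counter

def chao1(tag_counts):
    """Chao1 richness estimate from a tag -> count mapping."""
    s_obs = sum(1 for c in tag_counts.values() if c > 0)
    f1 = sum(1 for c in tag_counts.values() if c == 1)  # singletons
    f2 = sum(1 for c in tag_counts.values() if c == 2)  # doubletons
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0  # bias-corrected form
    return s_obs + f1 * f1 / (2.0 * f2)

# Toy SAGE-like library: tag identifiers with observed counts.
library = Counter({"TAG_A": 40, "TAG_B": 12, "TAG_C": 2,
                   "TAG_D": 1, "TAG_E": 1, "TAG_F": 1})
print("observed species:", sum(1 for c in library.values() if c > 0))
print(f"Chao1 estimate:   {chao1(library):.1f}")
```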
APA, Harvard, Vancouver, ISO, and other styles
45

Gao, Yang. "On the integration of qualitative and quantitative methods in data fusion." Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240463.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
46

Sibbald, David Bruce. "Development of data acquisition and analysis methods for chemical acoustic emission." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/28829.

Full text source
Abstract:
Acoustic Emission Analysis (AEA) is the study of the sonic (and ultrasonic) energy released by chemical systems in the form of transient waves, as the system attempts to (re)attain equilibrium. This area of chemistry, and chemical analysis, is ripe for fundamental studies since it has been little explored. The high potential of the technique as a non-invasive, non-destructive reaction monitoring scheme suggests that numerous applications will follow. In this work, an apparatus and software have been constructed to monitor acoustic emission (AE) and collect and process AE data. A broad-band piezoelectric transducer was used to convert the acoustic signals to electrical waveforms which could be captured by a digital storage oscilloscope. These waveforms were then stored on an IBM-compatible computer for further analysis. Analysis of the data was performed using pattern recognition techniques. The signals were characterized through the use of descriptors which can map each signal onto a multi-dimensional feature space. Visualization of the data structure in multidimensional space was accomplished using several methods. Hierarchical clustering was used to produce tree structures, known as dendrograms, which attempt to show clustering of the signals into various groups. Abstract factor analysis (AFA) - also called principal components analysis (PCA) - was used to project the data onto a two dimensional factor space to allow for direct viewing of structure in the multidimensional data. Sodium hydroxide dissolution, aluminum chloride hydration and heat activation of Intumescent Flame Retardants (IFR's) were used to test the assembled hardware and to provide data to submit to the pattern recognition algorithms coded as part of this work. The solid-solid phase transition of trimethylolethane (Trimet), and the liquid crystal phase transitions of two liquid crystals (α-ω-bis(4-n-decylaniline-benzilidene-4'-oxyhexane), and 4-n-pentyloxybenzylidene-4'-n-heptylaniline) were also monitored and the signals analyzed. The pattern recognition software was able to extract much information from the acoustically emitting samples - information which would not have been apparent by using standard (uni- and bi-variate) methods of analysis. Chemical acoustic emission, coupled with pattern recognition analysis, will be able to provide the chemist with knowledge (qualitative, quantitative, kinetic, etc.) about chemical systems which are often difficult or impossible to monitor and analyze by other means.
Science, Faculty of
Chemistry, Department of
Graduate
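A minimal sketch of the analysis chain described in the abstract above: per-signal descriptors, a two-dimensional principal-component projection, and hierarchical clustering. The three descriptors and the synthetic decaying-noise signals are illustrative assumptions, not the thesis's feature set.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def descriptors(signal):
    """Map one acoustic-emission waveform to a small feature vector."""
    env = np.abs(signal)
    return np.array([
        env.max(),                       # peak amplitude
        np.sqrt((signal ** 2).mean()),   # RMS energy
        (env > 0.1 * env.max()).sum(),   # crude duration above threshold
    ])

rng = np.random.default_rng(3)
signals = [rng.normal(0, s, 1024) * np.exp(-np.arange(1024) / tau)
           for s, tau in [(1.0, 50), (1.1, 60), (0.3, 300), (0.35, 280)]]
features = np.array([descriptors(s) for s in signals])

# PCA via SVD of the mean-centred feature matrix.
centred = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt[:2].T  # 2-D factor-space projection
print("2-D PCA scores:\n", scores)

# Hierarchical clustering of the same features (dendrogram structure).
print("linkage matrix:\n", linkage(features, method="average"))
```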
APA, Harvard, Vancouver, ISO, and other styles
47

Lazcano, Vanel. "Some problems in depth enhanced video processing." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/373917.

Full text source
Abstract:
In this thesis we tackle two problems, namely, the data interpolation problem in the context of depth computation both for images and for videos, and the problem of the estimation of the apparent movement of objects in image sequences. The first problem deals with completion of depth data in a region of an image or video where data are missing due to occlusions, unreliable data, damage, or loss of data during acquisition. In this thesis we tackle it in two ways. First, we propose a non-local gradient-based energy which is able to complete planes locally. We consider this model as an extension of the bilateral filter to the gradient domain. We have successfully evaluated our model to complete synthetic depth images and also incomplete depth maps provided by a Kinect sensor. The second approach to tackle the problem is an experimental study of the Biased Absolutely Minimizing Lipschitz Extension (biased AMLE in short) for anisotropic interpolation of depth data to big empty regions without information. The AMLE operator is a cone interpolator, but the biased AMLE is an exponential cone interpolator, which makes it more adapted to depth maps of real scenes that usually present soft convex or concave surfaces. Moreover, the biased AMLE operator is able to expand depth data to huge regions. By considering the image domain endowed with an anisotropic metric, the proposed method is able to take into account the underlying geometric information in order not to interpolate across the boundary of objects at different depths. We have proposed a numerical model to compute the solution of the biased AMLE which is based on the eikonal operators. Additionally, we have extended the proposed numerical model to video sequences. The second problem deals with the motion estimation of the objects in a video sequence. This problem is known as the optical flow computation. The optical flow problem is one of the most challenging problems in computer vision. Traditional models to estimate it fail in the presence of occlusions and non-uniform illumination. To tackle these problems we propose a variational model to jointly estimate optical flow and occlusion. Moreover, the proposed model is able to deal with the usual drawback of variational methods in dealing with fast displacements of objects in the scene which are larger than the object itself. The addition of a term that balances gradients and intensities increases the robustness of the proposed model to illumination changes. The inclusion of supplementary matches given by exhaustive search at specific locations helps to follow large displacements.
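A toy sketch of plain AMLE-style interpolation on a grid, iterating the discrete infinity-harmonic update u <- (max + min of neighbours) / 2 on unknown pixels; the biased, anisotropic-metric operator and the eikonal-based scheme in the thesis are substantially more involved, and the depth map here is synthetic.

```python
import numpy as np

def amle_inpaint(depth, known, iters=2000):
    """Fill unknown depth values; `known` is a boolean mask of samples."""
    u = np.where(known, depth, depth[known].mean()).astype(np.float64)
    for _ in range(iters):
        up    = np.roll(u,  1, axis=0)
        down  = np.roll(u, -1, axis=0)
        left  = np.roll(u,  1, axis=1)
        right = np.roll(u, -1, axis=1)
        stack = np.stack([up, down, left, right])
        update = 0.5 * (stack.max(axis=0) + stack.min(axis=0))
        u = np.where(known, depth, update)  # keep known samples fixed
    return u

# Toy depth map: two known plateaus with a missing band between them.
depth = np.zeros((32, 32))
depth[:, :8] = 1.0
depth[:, 24:] = 3.0
known = np.zeros_like(depth, dtype=bool)
known[:, :8] = known[:, 24:] = True

filled = amle_inpaint(depth, known)
print("interpolated mid-row values:", np.round(filled[16, 8:24], 2))
```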
APA, Harvard, Vancouver, ISO, and other styles
48

Shirani, Shahram. "Post-processing of DCT-based image and video decoded data in error-prone environments." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0016/NQ48712.pdf.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
49

Cedernaes, Erasmus. "Runway detection in LWIR video : Real time image processing and presentation of sensor data." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-300690.

Full text source
Abstract:
Runway detection in long-wavelength infrared (LWIR) video could potentially increase the number of successful landings by increasing the situational awareness of pilots and verifying a correct approach. A method for detecting runways in LWIR video was therefore proposed and evaluated for robustness, speed and suitability for FPGA acceleration. The proposed algorithm improves the detection probability by making assumptions about the runway's appearance during approach, as well as by using a modified Hough line transform and a symmetric search for peaks in the accumulator returned by the Hough line transform. A video chain was implemented on a Xilinx ZC702 development card with input and output via HDMI through an expansion card. The video frames were buffered to RAM and the detection algorithm ran on the CPU, which, however, did not meet the real-time requirement. Strategies were proposed that would improve the processing speed through either hardware acceleration or algorithmic changes.
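A minimal sketch of the detection idea, assuming OpenCV is available: Canny edges, a standard Hough line transform, and a simplified pairing of near-mirror-symmetric line angles as runway-edge candidates. The modified transform and accumulator search of the thesis are not reproduced, and the thresholds are invented.

```python
import cv2
import numpy as np

def runway_candidates(frame_gray, angle_tol_deg=5.0):
    """Return index pairs of Hough lines roughly mirror-symmetric about vertical."""
    edges = cv2.Canny(frame_gray, 50, 150)
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=80)
    if lines is None:
        return []
    thetas = lines[:, 0, 1]  # line angles in radians, in [0, pi)
    pairs = []
    for i in range(len(thetas)):
        for j in range(i + 1, len(thetas)):
            # Mirrored edges satisfy theta_i + theta_j ~= pi (simplified test).
            if abs((thetas[i] + thetas[j]) - np.pi) < np.deg2rad(angle_tol_deg):
                pairs.append((i, j))
    return pairs

# Synthetic frame with two converging runway edges.
frame = np.zeros((240, 320), dtype=np.uint8)
cv2.line(frame, (140, 239), (155, 60), 255, 2)  # left runway edge
cv2.line(frame, (180, 239), (165, 60), 255, 2)  # right runway edge
print("symmetric line pairs:", runway_candidates(frame))
```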
APA, Harvard, Vancouver, ISO, and other styles
50

Liu, Jiangchuan. "On bandwidth adaptation for multi-rate video multicast /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20LIU.

Full text source
Abstract:
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 132-137). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
