Dissertations on the topic "Depth data encoding algorithm"

To view other types of publications on this topic, follow the link: Depth data encoding algorithm.

Cite a source in APA, MLA, Chicago, Harvard, and other styles

Consult the top 16 dissertations for your research on the topic "Depth data encoding algorithm".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided the corresponding details are available in the record's metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Aldrovandi, Lorenzo. "Depth estimation algorithm for light field data." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Mansour, Moussa Reda. "Algoritmo para obtenção de planos de restabelecimento para sistemas de distribuição de grande porte." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-06052009-100440/.

Full text of the source
Abstract:
A fast method for elaborating energy restoration plans (ERP) is required to deal with permanent faults in radial distribution systems (RDS). After a faulted zone has been identified and isolated by the relays, a proper ERP must be elaborated to restore energy to the healthy out-of-service zones; ERPs are also needed during normal operation, when zones must be isolated for routine maintenance work. Among the objectives of an ERP are: (i) interrupting as few customers as possible (or none), and (ii) operating a minimal number of switches, while respecting the operational limits of the equipment. Consequently, service restoration in RDS is a multi-objective problem with partially conflicting objectives. The main methods developed for elaborating ERPs are based on evolutionary algorithms (EA). The limitation of most of these methods is the need for network simplifications in order to cope with large-scale RDS, which considerably restricts the possibility of obtaining an adequate ERP. This work proposes the development and computational implementation of an algorithm for obtaining ERPs that can deal with large-scale RDS without network simplifications, that is, considering a large part (or all) of the lines, buses, loads and switches of the system. The proposed algorithm is based on a multi-objective EA, on a graph-tree data structure called node-depth encoding (NDE), and on two genetic operators developed to efficiently manipulate graph trees stored in NDEs. Because it is based on a multi-objective EA, the proposed algorithm enables a broader exploration of the search space. By using the NDE and its operators to represent the RDS computationally, the efficiency of the search for adequate ERPs is significantly increased, since those operators generate only radial configurations in which all customers are supplied. The efficiency of the proposed algorithm is demonstrated through simulations on a real distribution system of a Brazilian utility with 3,860 buses, 635 switches, 3 substations and 23 feeders.
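As an illustration of the node-depth encoding (NDE) mentioned in this abstract, the sketch below shows one plausible way to store a rooted tree as (node, depth) pairs produced by a depth-first traversal, so that a whole subtree occupies a contiguous slice of the array. The function names and the toy feeder are assumptions made for the example, not code from the thesis.

```python
# Illustrative sketch of a node-depth encoding (NDE): a rooted tree is stored as
# the sequence of (node, depth) pairs visited by a depth-first traversal, so a
# whole subtree occupies one contiguous slice of the array.
def node_depth_encoding(adjacency, root):
    """Return a list of (node, depth) pairs for the tree rooted at `root`."""
    nde, stack, seen = [], [(root, 0)], {root}
    while stack:
        node, depth = stack.pop()
        nde.append((node, depth))
        for child in reversed(adjacency.get(node, [])):
            if child not in seen:
                seen.add(child)
                stack.append((child, depth + 1))
    return nde

def subtree_indices(nde, index):
    """Indices of the subtree rooted at nde[index]: the run of deeper entries."""
    _, d = nde[index]
    end = index + 1
    while end < len(nde) and nde[end][1] > d:
        end += 1
    return list(range(index, end))

# Tiny feeder: substation 0 supplying buses 1..4.
tree = {0: [1, 2], 1: [3], 2: [4]}
nde = node_depth_encoding(tree, 0)
print(nde)                        # [(0, 0), (1, 1), (3, 2), (2, 1), (4, 2)]
print(subtree_indices(nde, 1))    # subtree of bus 1 -> [1, 2]
```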
APA, Harvard, Vancouver, ISO, and other styles
3

Sengupta, Aritra. "Empirical Hierarchical Modeling and Predictive Inference for Big, Spatial, Discrete, and Continuous Data." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1350660056.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Oliveira, Marcos Antônio Almeida de. "Heurística aplicada ao problema árvore de Steiner Euclidiano com representação nó-profundidade-grau." Universidade Federal de Goiás, 2014. http://repositorio.bc.ufg.br/tede/handle/tede/4171.

Full text of the source
Abstract:
Fundação de Amparo à Pesquisa do Estado de Goiás - FAPEG
A variation of the Beasley (1992) algorithm for the Euclidean Steiner tree problem is presented. This variation uses the Node-Depth-Degree Encoding, which requires an average time of O(n) for the operations that generate and manipulate spanning forests. For spanning tree problems, this representation has linear time complexity when applied to network design problems with evolutionary algorithms. Computational results are given for test cases involving instances of up to 500 vertices. These results demonstrate the use of the Node-Depth-Degree Encoding in an exact heuristic, which suggests the possibility of using this representation in techniques other than evolutionary algorithms. An empirical comparison and a complexity analysis between the proposed algorithm and a conventional representation indicate the efficiency advantages of the solution found.
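To make the objective of the Euclidean Steiner tree problem concrete, here is a small illustrative helper, not taken from the dissertation, that evaluates the total Euclidean length of a candidate tree over terminals and Steiner points; the equilateral-triangle example shows how adding a Steiner point shortens the tree.

```python
import math

def tree_length(points, edges):
    """Total Euclidean length of a tree given 2D points and (i, j) edges."""
    return sum(math.dist(points[i], points[j]) for i, j in edges)

# Three terminals of a unit equilateral triangle plus its Fermat/Steiner point:
# connecting through the Steiner point is shorter than any spanning tree.
terminals = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
steiner = [(0.5, math.sqrt(3) / 6)]                   # centroid = Fermat point here
points = terminals + steiner
print(tree_length(points, [(0, 1), (1, 2)]))          # spanning tree ~ 2.0
print(tree_length(points, [(0, 3), (1, 3), (2, 3)]))  # Steiner tree ~ 1.732
```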
APA, Harvard, Vancouver, ISO, and other styles
5

Marques, Leandro Tolomeu. "Restabelecimento de energia em sistemas de distribuição considerando aspectos práticos." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-26072018-134924/.

Full text of the source
Abstract:
In the context of distribution system operation, service restoration is one of the problems that operators deal with frequently. It arises when a permanent fault occurs and is handled through switching operations in the primary grid. Since distribution systems usually operate with a radial topology, a fault can leave healthy customers out of service. The service restoration problem therefore consists of defining, within a short processing time, a minimum number of switches that must be operated in order to isolate the fault and restore as many healthy out-of-service customers as possible. Efforts to develop computational tools that provide solutions to this problem have intensified in recent years, mainly because of the enormous losses that outages cause to utilities and to society as a whole. The objective of this work is to obtain a method that supports the operators' work by providing adequate restoration plans within short time intervals. The distinguishing features of the proposed method are its ability to: deal with large-scale real networks with reduced computational effort; consider several levels of supply priority among customers (a hospital or a public safety centre, for instance, should have a higher supply priority than a large supermarket or residential units) and prioritize their restoration according to those levels; provide a switching sequence through which the faulted sectors are isolated and the largest possible number of healthy out-of-service customers is reconnected with a minimum number of switching operations, giving precedence to higher-priority customers; select lower-priority loads to remain out of service when no solution can restore all healthy out-of-service loads; and, additionally, prioritize the operation of remotely controlled switches, which, unlike manually controlled switches, can be operated faster and at lower cost. The proposed method combines an exhaustive search applied locally with a new multi-objective evolutionary algorithm in subpopulation tables that uses an efficient data structure named Node-Depth Encoding. To evaluate the relative performance of the proposed method, simulations were carried out on a small distribution system and the results were compared with those obtained by a mathematical programming method from the literature. Further experiments were performed for several fault situations in the real, large-scale distribution network of Londrina-PR and adjacent cities. The solutions provided proved adequate for handling these fault cases, as did the associated switching sequences, which prioritized the restoration of higher-priority customers according to their priority levels. Additional studies evaluated how the running time of the proposed method varies with the size of the distribution network and with the number of generations performed by the proposed multi-objective evolutionary algorithm (an input parameter), and the results were satisfactory for the needs of the problem. The proposed method therefore achieved the specified objectives, in particular the treatment of practical aspects of the problem. Besides the method itself, contributions of this research include a new multi-objective evolutionary algorithm in subpopulation tables and a new operator, aimed at the restoration problem, for manipulating graph forests stored in the Node-Depth Encoding.
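As a minimal sketch of how a candidate restoration plan could be scored against two of the stated objectives, number of switching operations and priority-weighted unserved load, the snippet below uses a data layout and priority weights that are assumptions made for the example, not the thesis's formulation.

```python
# Illustrative evaluation of a candidate restoration plan; field names and
# priority weights are assumptions for the example, not taken from the thesis.
def evaluate_plan(plan, loads, priority_weight):
    """Return (switch_operations, weighted_unserved_load) for a candidate plan.

    plan:  {"operations": [switch ids], "unserved": [load ids left off]}
    loads: {load_id: (kW, priority_level)}
    """
    switching = len(plan["operations"])
    unserved = sum(kw * priority_weight[level]
                   for lid in plan["unserved"]
                   for kw, level in [loads[lid]])
    return switching, unserved

loads = {"hospital": (200.0, "high"), "market": (500.0, "low")}
weights = {"high": 10.0, "low": 1.0}
plan_a = {"operations": ["sw1", "sw4"], "unserved": ["market"]}
plan_b = {"operations": ["sw2"], "unserved": ["hospital"]}
print(evaluate_plan(plan_a, loads, weights))  # (2, 500.0)
print(evaluate_plan(plan_b, loads, weights))  # (1, 2000.0) -> worse on priority
```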
APA, Harvard, Vancouver, ISO, and other styles
6

Hemanth, Kumar S. "On Applications of 3D-Warping and An Analysis of a RANSAC Heuristic." Thesis, 2018. http://etd.iisc.ac.in/handle/2005/4156.

Full text of the source
Abstract:
In recent years, communication of scene geometry has been gaining importance. With the development of technologies such as head-mounted displays and augmented reality (AR), the need for efficient 3D scene communication is becoming vital. Depth sensors are being incorporated into smartphones for large-scale deployment of AR applications. 3D communication requires synchronous capture of the scene from multiple viewpoints along with depth for each view, known as Multiview Plus Depth (MVD) data. The number of views required depends on the application. Traditionally, it has been assumed that devices are static, but for smartphones such an assumption is not valid. The availability of the depth modality opens up several possibilities for efficient MVD data compression. In this work we leverage depth for better RGB-D data compression and efficient depth estimation. Using the depth information, the motion of the RGB-D device can be accurately tracked. 3D-warping along with camera tracking can then be used to generate reference frames that improve the compression efficiency of motion vectors. The same mechanism can be used to predict depth in the stereo disparity estimation problem. For robust tracking of the motion of the camera array, we use the Random Sample Consensus (RANSAC) algorithm. RANSAC is an iterative algorithm for robust model parameter estimation. A common practice in implementations of RANSAC is to draw a few more samples than the minimum required for the estimation problem, but an analysis of the implications of this heuristic is lacking in the literature. We present a probabilistic analysis of this common heuristic. We also present a depth data coding algorithm that employs planar segmentation of depth. While all prior work based on this approach was restricted to images and noise-free conditions, we present an efficient solution for noisy depth videos.
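For reference, the sketch below shows the standard RANSAC iteration-count estimate together with a toy robust line fit; it illustrates why drawing more than the minimal number of samples per iteration raises the number of iterations needed, which is the trade-off the heuristic analysis concerns. It is a generic illustration, not the analysis from the thesis.

```python
import math
import random

def ransac_iterations(p_success, inlier_ratio, sample_size):
    """Standard estimate of how many random samples are needed so that, with
    probability p_success, at least one sample contains only inliers."""
    return math.ceil(math.log(1 - p_success) /
                     math.log(1 - inlier_ratio ** sample_size))

def ransac_line(points, sample_size=2, iters=100, tol=0.05):
    """Fit y = a*x + b robustly; sample_size > 2 mirrors the 'minimum plus a
    few extra samples' heuristic (2 points are the minimum for a line)."""
    best, best_inliers = None, -1
    for _ in range(iters):
        sample = random.sample(points, sample_size)
        xs, ys = zip(*sample)
        n = len(sample)
        denom = n * sum(x * x for x in xs) - sum(xs) ** 2
        if denom == 0:
            continue
        a = (n * sum(x * y for x, y in sample) - sum(xs) * sum(ys)) / denom
        b = (sum(ys) - a * sum(xs)) / n
        inliers = sum(abs(y - (a * x + b)) < tol for x, y in points)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best

random.seed(0)
points = [(x / 10, 0.5 * x / 10 + 1.0) for x in range(20)] + [(0.3, 5.0), (0.7, -4.0)]
print(ransac_line(points))              # roughly (0.5, 1.0)
print(ransac_iterations(0.99, 0.5, 2))  # 17 iterations with the minimal sample
print(ransac_iterations(0.99, 0.5, 4))  # 72 iterations when 2 extra points are drawn
```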
APA, Harvard, Vancouver, ISO, and other styles
7

Yu-Chih Chang and 張宇志. "A Fast Depth Map Compression Algorithm for H.264 Encoding Scheme." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/21078118850434017531.

Full text of the source
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering (Master's and PhD Program)
Academic year 100 (ROC calendar)
Due to the rapidly growing number of consumers of 3D and electronic products such as smartphones and 3DTV, the representation of 3D video has received much more attention in recent years. One of the best-known 3D video representations, "Advanced Three-Dimensional Television System Technologies (ATTEST)", uses a monoscopic video (color component) together with per-pixel depth information (depth component). After the two sequences are decoded, free-viewpoint views are synthesized by means of depth image-based rendering techniques. Depth Image-Based Rendering (DIBR) is a popular method for synthesizing 3D free-viewpoint virtual views: based on a color sequence and its corresponding depth sequence, it synthesizes views at different positions. However, in real-time video transmission, even when the representation above is used to reduce the amount of transferred data instead of directly encoding and decoding two full sequences, bandwidth is always limited by the wireless network environment and the power constraints of handheld devices. Therefore, data compression plays an important role in a 3D codec system. This thesis uses the H.264/AVC video encoding format and proposes a new motion estimation algorithm for color video, combined with a 3D search algorithm for depth map compression, which decreases the overall encoding time while preserving the quality of the depth map.
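A minimal sketch of block matching by sum of absolute differences, in which the motion vector already found for the colour frame seeds a small candidate set for the corresponding depth block. This only illustrates the idea of reusing colour motion information for depth coding; the exact 3D search algorithm of the thesis is not reproduced, and all names and values are assumptions.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum()

def best_vector(cur, ref, x, y, size, candidates):
    """Pick the candidate motion vector with the lowest SAD for one block."""
    block = cur[y:y + size, x:x + size]
    best_mv, best_cost = (0, 0), None
    for dx, dy in candidates:
        xr, yr = x + dx, y + dy
        if 0 <= xr <= ref.shape[1] - size and 0 <= yr <= ref.shape[0] - size:
            cost = sad(block, ref[yr:yr + size, xr:xr + size])
            if best_cost is None or cost < best_cost:
                best_mv, best_cost = (dx, dy), cost
    return best_mv

rng = np.random.default_rng(0)
ref = rng.integers(0, 255, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=3, axis=1)    # scene content moved 3 px to the right
color_mv = (-3, 0)                     # vector found for the colour frame: match lies 3 px left
# Re-check only the colour-frame vector and its close neighbours for the depth block.
candidates = [color_mv] + [(color_mv[0] + dx, color_mv[1] + dy)
                           for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
print(best_vector(cur, ref, 16, 16, 8, candidates))  # expect (-3, 0)
```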
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Wei-Jen, and 陳威任. "The Modified Encoding Algorithm of ECG Data Compression Based on Wavelet Transform." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/04519476568133918606.

Full text of the source
Abstract:
Master's thesis
National Taipei University of Technology
Department of Electrical Engineering (Master's Program)
Academic year 92 (ROC calendar)
Because ECG data are voluminous and require long recording times, ECG data compression has been investigated for several decades. The purpose of this thesis is to improve the performance of ECG data compression and to obtain a high compression rate with low reconstruction error. A transform-based coding structure is adopted. First, the input ECG signal is preprocessed into a suitable form. After preprocessing, the ECG signals are processed by the wavelet transform. A threshold value is determined according to the percentage of signal energy retained after compression; using this value, the less significant coefficients are discarded. After thresholding, run-length encoding and DPCM are employed to encode the resulting data. The total data length includes the encoded data and the reference data used for reconstruction. Finally, to verify the effectiveness of the proposed algorithm, computer programs were developed to compress the ECG signals in the MIT-BIH arrhythmia database. The simulation results show that a higher compression rate with lower reconstruction error is obtained. In addition, the compression rate and reconstruction error can be adjusted according to requirements.
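A toy version of the pipeline described above, assuming a one-level Haar transform, an energy-based threshold, a coarse uniform quantizer and run-length encoding; the DPCM stage is omitted, and the signal, threshold and quantizer are illustrative rather than those used in the thesis.

```python
import numpy as np

def haar_level1(signal):
    """One level of the Haar wavelet transform (approximation, detail)."""
    even, odd = signal[0::2], signal[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def threshold_by_energy(coeffs, keep=0.99):
    """Zero the smallest coefficients while retaining `keep` of the energy."""
    order = np.argsort(np.abs(coeffs))[::-1]
    energy = np.cumsum(coeffs[order] ** 2) / np.sum(coeffs ** 2)
    n_keep = int(np.searchsorted(energy, keep)) + 1
    kept = np.zeros_like(coeffs)
    kept[order[:n_keep]] = coeffs[order[:n_keep]]
    return kept

def run_length_encode(values):
    """(value, run) pairs; long zero runs are what make this pay off."""
    runs, prev, count = [], values[0], 1
    for v in values[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

t = np.linspace(0, 1, 256)
ecg_like = np.exp(-((t - 0.5) ** 2) / 0.001)           # a single QRS-like spike
approx, detail = haar_level1(ecg_like)
sparse_detail = threshold_by_energy(detail, keep=0.99)
quantized = np.round(sparse_detail * 100).astype(int)  # coarse uniform quantizer
print(run_length_encode(quantized.tolist())[:5])       # mostly long runs of zeros
```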
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, Hung-lung, and 陳宏隆. "Run length encoding-based algorithm for mining association rules in data stream." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/90489529147629035129.

Full text of the source
Abstract:
Master's thesis
Nanhua University
Graduate Institute of Information Management
Academic year 96 (ROC calendar)
Data stream mining is a newly developing research field, and association rule mining is one of the most important and practical techniques in data mining. The main purpose of association rules is to discover dependencies among items in large volumes of data. The usual approach is to scan the database to find all frequent itemsets and then use those frequent itemsets to derive all association rules. Because of the large number of transactions and of candidate itemsets, finding all frequent itemsets requires a great deal of computation. Consequently, reducing the amount of data that must be processed, reducing the number of candidate itemsets generated, and reducing the number of database scans all make association rule mining algorithms more efficient. This study uses run-length encoding (Run-Length Encoding, RLE) to reduce the amount of data that association rule mining must process over a dynamic database. Its main contribution is a new data handling method: the transaction database is encoded into a compact form, mining is performed directly on the encoded data held in memory, and the encoding can be updated efficiently as the data stream changes rapidly, which increases the speed of the algorithm and improves processing efficiency.
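The sketch below illustrates the general idea only, assuming each item's presence column is run-length encoded and itemset support is counted from the compressed columns; it is not the thesis's algorithm, and all names are illustrative.

```python
# Illustrative only: compress each item's presence column with run-length
# encoding and count itemset support from the compressed form.
def rle_encode(bits):
    """Encode a 0/1 list as (bit, run_length) pairs."""
    runs, prev, count = [], bits[0], 1
    for b in bits[1:]:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

def rle_decode(runs):
    return [bit for bit, count in runs for _ in range(count)]

def support(encoded_columns, itemset):
    """Fraction of transactions containing every item of `itemset`."""
    columns = [rle_decode(encoded_columns[item]) for item in itemset]
    hits = sum(all(col[i] for col in columns) for i in range(len(columns[0])))
    return hits / len(columns[0])

transactions = [{"a", "b"}, {"a"}, {"a", "b", "c"}, {"b"}, {"a", "b"}]
encoded = {item: rle_encode([1 if item in t else 0 for t in transactions])
           for item in {"a", "b", "c"}}
print(encoded["a"])                  # [(1, 3), (0, 1), (1, 1)]
print(support(encoded, {"a", "b"}))  # 0.6
```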
APA, Harvard, Vancouver, ISO, and other styles
10

Yu, Ang-Hsun, and 游昂勳. "An Improved Data Hiding Algorithm Based on the Method of Matrix Encoding." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/73459809838000223623.

Full text of the source
Abstract:
Master's thesis
National Chi Nan University
Department of Computer Science and Information Engineering
Academic year 100 (ROC calendar)
Data (information) hiding means embedding secret information into a cover host, such as an image. Usually the human eye cannot perceive any change when the image is modified slightly. Data hiding schemes are evaluated by the distortion (mean square error, MSE) and by the embedding rate (the average number of bits embedded per cover pixel). In this thesis, we propose an improved data hiding scheme that refines the matrix encoding-based data hiding algorithm with the Hamming+1 idea in order to further enhance the stego-image quality. The proposed scheme is verified to be correct through theoretical analysis and experiments.
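For context, this is the classic (1, 7, 3) matrix encoding that the thesis builds on: three message bits are hidden in the LSBs of seven cover pixels by changing at most one of them. The Hamming+1 refinement proposed in the thesis is not reproduced here, and the variable names are illustrative.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is j written in binary.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def embed(cover_lsbs, message_bits):
    """Matrix encoding: hide 3 bits in 7 LSBs by changing at most one of them."""
    syndrome = (H @ cover_lsbs) % 2
    diff = syndrome ^ message_bits
    position = int("".join(map(str, diff)), 2)   # 0 means nothing to change
    stego = cover_lsbs.copy()
    if position:
        stego[position - 1] ^= 1                 # flip the single indicated LSB
    return stego

def extract(stego_lsbs):
    return (H @ stego_lsbs) % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1])
message = np.array([1, 0, 1])
stego = embed(cover, message)
print(stego, int(np.sum(stego != cover)))        # at most one LSB changed
print(extract(stego))                            # [1 0 1]
```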
APA, Harvard, Vancouver, ISO, and other styles
11

Chen, I.-Kuei, and 陳奕魁. "Algorithm Design on Intelligent Vision for Objects using RGB and Depth Data." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/22090419671051833176.

Full text of the source
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electronics Engineering
Academic year 101 (ROC calendar)
In recent years, new applications like augmented reality, driverless cars, intelligent environmental surveillance, human action analysis, and in-home robotics are all becoming possible due to advances in computer vision (CV), and object recognition plays an essential role in all of these tasks. Objects are the basic meaningful units of the surrounding environment, and good performance of those CV applications can be achieved if all related objects can be successfully tracked, detected, segmented, recognized and analyzed. However, there are still limitations in current 2D CV methods. Color data combined with depth data are therefore considered in order to improve performance and solve problems encountered by traditional algorithms. Depth sensors are now cheap and readily available, and they bring new opportunities and challenges to many CV areas. In this thesis, a thorough system is designed to solve essential problems of object recognition with RGB and depth data (RGB-D data). The system aims to assist those emerging CV applications, helping people live more conveniently and have more fun. Essential algorithms of object recognition are developed and integrated to provide a thorough solution to problems encountered by previous works. First, 3D structure analysis is developed to segment the input RGB-D data into basic 3D structure elements. The indoor scene is parsed into planes and physically meaningful clusters using depth and surface normals. Further analysis and processing are then performed on those clusters as object candidates. A de-noising and accurate object segmentation algorithm is then proposed. Depth data is useful for segmenting raw clusters, but it is often noisy and broken at object edges. Color images and depth data are therefore combined to accurately segment out objects, taking advantage of the higher quality of passive color images. Detailed boundaries are preserved by color superpixels, and the 3D structure is then used to build foreground and background color models. Using superpixels and color models, accurate visual object segmentation is achieved automatically without any user input. 3D object tracking is also developed. The targeted object can be tracked in real time under large variations in size, translation, rotation, illumination, and appearance. Using RGB-D data, high performance can be achieved, and features such as positions, global color and normal models, and local 3D descriptors are processed for tracking. Compared to previous 2D tracking, no sliding window or particle filtering is needed, since 3D structure elements are available. Pixel-wise accurate video segmentation can also be achieved with the proposed segmentation method. Finally, a novel on-line learning method is proposed to train robust object detectors. By using the proposed object tracking, labeled training data can be generated automatically, and efficient on-line learning and detection methods are used for real-time performance. Combining object detectors with tracking, object recognition and tracking recovery can be achieved. In previous 2D CV learning tasks, training datasets often suffer from a lack of variability, cluttered backgrounds, and an inability to automatically locate and segment targets of interest. The proposed on-line learning addresses those problems. In summary, a highly integrated algorithm for object recognition is designed, and RGB-D data is used and studied for object segmentation, tracking, on-line learning, and detection.
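As a small, generic illustration of the kind of 3D structure analysis described above, the sketch below back-projects a depth map into a point cloud with pinhole intrinsics, a common first step before plane and cluster segmentation; the intrinsic values are assumed, Kinect-like numbers, not parameters from the thesis.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres) to an H*W x 3 array of 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Kinect-like intrinsics (assumed values) and a tiny synthetic depth map.
fx = fy = 525.0
cx, cy = 319.5, 239.5
depth = np.full((480, 640), 2.0)          # a flat wall 2 m away
points = depth_to_points(depth, fx, fy, cx, cy)
print(points.shape, points[0])            # (307200, 3), corner point of the wall
```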
APA, Harvard, Vancouver, ISO, and other styles
12

Cunha, Joel Antonio Teixeira. "Improved Depth Estimation Algorithm using Plenoptic Cameras." Master's thesis, 2015. http://hdl.handle.net/10316/40461.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
13

Awwad, Sari Moh'd Ismaeil. "Tracking and fine-grained activity recognition in depth videos." Thesis, 2016. http://hdl.handle.net/10453/90070.

Full text of the source
Abstract:
University of Technology Sydney. Faculty of Engineering and Information Technology.
Tracking and activity recognition in video are arguably two of the most active topics within the field of computer vision and pattern recognition. Historically, tracking and activity recognition have been performed over conventional video such as color or grey-level frames, either of which contains significant clues for the identification of targets. While this is often a desirable feature within the context of video surveillance, the use of video for activity recognition or for tracking in privacy-sensitive environments such as hospitals and care facilities is often perceived as intrusive. For this reason, this PhD research has focused on providing tracking and activity recognition solely from depth videos which offer a naturally privacy-preserving visual representation of the scene at hand. Depth videos can nowadays be acquired with inexpensive and highly-available commercial sensors such as Microsoft Kinect and Asus Xtion. The two main contributions of this research have been the design of a specialised tracking algorithm for tracking in depth data, and a fine-grained activity recognition approach for recognising activities in depth video. The proposed tracker is an extension of the popular Struck algorithm, an approach that leverages a structural support vector machine (SVM) for tracking. The main contributions of the proposed tracker include a dedicated depth feature based on local depth patterns, a heuristic for handling view occlusions in depth frames, and a technique for keeping the number of support vectors within a given budget, so as to limit computational costs. Conversely, the proposed fine-grained activity recognition approach leverages multi-scale depth measurements and a Fisher-consistent multi-class SVM. In addition to the novel approaches for tracking and activity recognition, in this thesis we have canvassed and developed a practical computer vision application for the detection of hand hygiene at a hospital. This application was developed in collaboration with clinical researchers from the Intensive Care Unit of Sydney’s Royal Prince Alfred Hospital. Experiments presented through the thesis confirm that the proposed approaches are effective, and either outperform the state of the art or significantly reduce the need for sensor instrumentation. The outcomes of the hand-hygiene detection were also positively received and assessed by the clinical research unit.
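The abstract does not spell out the local depth pattern feature, so the sketch below shows one plausible LBP-style variant on a 3x3 depth patch purely as an illustration; the threshold, layout and name are assumptions, not the descriptor used in the thesis.

```python
import numpy as np

def local_depth_pattern(patch, center_tol=10):
    """LBP-style code for a 3x3 depth patch: each neighbour is marked 1 if it
    lies more than `center_tol` mm farther away than the centre pixel."""
    center = patch[1, 1]
    neighbours = np.delete(patch.flatten(), 4)        # the 8 surrounding depths
    bits = (neighbours > center + center_tol).astype(int)
    return int("".join(map(str, bits)), 2)

patch = np.array([[1200, 1210, 1450],
                  [1195, 1205, 1460],
                  [1190, 1185, 1470]])                # depths in millimetres
print(local_depth_pattern(patch))                     # 41: the far-side neighbours set bits
```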
APA, Harvard, Vancouver, ISO, and other styles
14

Gupte, Ajit D. "Techniques For Low Power Motion Estimation In Video Encoders." Thesis, 2010. https://etd.iisc.ac.in/handle/2005/1918.

Full text of the source
Abstract:
This thesis looks at hardware algorithms that help reduce dynamic power dissipation in video encoder applications. Computational complexity of motion estimation and the data traffic between external memory and the video processing engine are two main reasons for large power dissipation in video encoders. While motion estimation may consume 50% to 70% of total video encoder power, the power dissipated in external memory such as the DDR SDRAM can be of the order of 40% of the total system power. Reducing power dissipation in video encoders is important in order to improve battery life of mobile devices such as the smart phones and digital camcorders. We propose hardware algorithms which extract only the important features in the video data to reduce the complexity of computations, communications and storage, thereby reducing average power dissipation. We apply this concept to design hardware algorithms for optimizing motion estimation matching complexity, and reference frame storage and access from the external memory. In addition, we also develop techniques to reduce searching complexity of motion estimation. First, we explore a set of adaptive algorithms that reduce average power dissipated due to motion estimation. We propose that by taking into account the macro-block level features in the video data, the average matching complexity of motion estimation in terms of number of computations in real-time hardwired video encoders can be significantly reduced when compared against traditional hardwired implementations, that are designed to handle most demanding data sets. Current macro-block features such as pixel variance and Hadamard transform coefficients are analyzed, and are used to adapt the matching complexity. The macro-block is partitioned based on these features to obtain sub-block sums, which are used for matching operations. Thus, simple macro-blocks, without many features can be matched with much less computations compared to the macro-blocks with complex features, leading to reduction in average power dissipation. Apart from optimizing the matching operation, optimizing the search operation is a powerful way to reduce motion estimation complexity. We propose novel search optimization techniques including (1) a center-biased search order and (2) skipping unlikely search positions, both applied in the context of real time hardware implementation. The proposed search optimization techniques take into account and are compatible with the reference data access pattern from the memory as required by the hardware algorithm. We demonstrate that the matching and searching optimization techniques together achieve nearly 65% reduction in power dissipation due to motion estimation, without any significant degradation in motion estimation quality. A key to low power dissipation in video encoders is minimizing the data traffic between the external memory devices such as DDR SDRAM and the video processor. External memory power can be as high as 50% of the total power budget in a multimedia system. Other than the power dissipation in external memory, the amount of data traffic is an important parameter that has significant impact on the system cost. Large memory traffic necessitates high speed external memories, high speed on-chip interconnect, and more parallel I/Os to increase the memory throughput. This leads to higher system cost. 
We explore a lossy, scalar quantization based reference frame compression technique that can be used to reduce the amount of reference data traffic from external memory devices significantly. In this scheme, the quantization is adapted based on the pixel range within each block being compressed. We show that the error introduced by the scalar quantization is bounded and can be represented by smaller number of bits compared to the original pixel. The proposed reference frame compression scheme uses this property to minimize the motion compensation related traffic, thereby improving the compression scheme efficiency. The scheme maintains a fixed compression ratio, and the size of the quantization error is also kept constant. This enables easy storage and retrieval of reference data. The impact of using lossy reference on the motion estimation quality is negligible. As a result of reduction in DDR traffic, the DDR power is reduced significantly. The power dissipation due to additional hardware required for reference frame compression is very small compared to the reduction in DDR power. 24% reduction in peak DDR bandwidth and 23% net reduction in average DDR power is achieved. For video sequences with larger motion, the amount of bandwidth reduction is even higher (close to 40%) and reduction in power is close to 30%.
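A minimal sketch of the general mechanism described above: range-adaptive scalar quantization of one block with a fixed bit budget, where storing (min, max) per block keeps the compression ratio fixed and bounds the error by about half a quantization step. Block size, bit budget and function names are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def compress_block(block, bits=4):
    """Range-adaptive scalar quantization of one block to a fixed bit budget.

    Stores (min, max, codes); the quantization error is bounded by roughly
    (max - min) / (2 * (2**bits - 1)) plus integer rounding.
    """
    lo, hi = int(block.min()), int(block.max())
    step = max(hi - lo, 1) / (2 ** bits - 1)
    codes = np.round((block - lo) / step).astype(np.uint8)
    return lo, hi, codes

def decompress_block(lo, hi, codes, bits=4):
    step = max(hi - lo, 1) / (2 ** bits - 1)
    return np.clip(np.round(lo + codes * step), 0, 255).astype(np.uint8)

rng = np.random.default_rng(1)
block = rng.integers(90, 170, (4, 4), dtype=np.uint8)   # a small block; 8x8 in practice
lo, hi, codes = compress_block(block)
restored = decompress_block(lo, hi, codes)
print(int(np.abs(block.astype(int) - restored.astype(int)).max()))  # small, bounded error
```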
APA, Harvard, Vancouver, ISO, and other styles
15

Gupte, Ajit D. "Techniques For Low Power Motion Estimation In Video Encoders." Thesis, 2010. http://etd.iisc.ernet.in/handle/2005/1918.

Full text of the source
Abstract:
This thesis looks at hardware algorithms that help reduce dynamic power dissipation in video encoder applications. Computational complexity of motion estimation and the data traffic between external memory and the video processing engine are two main reasons for large power dissipation in video encoders. While motion estimation may consume 50% to 70% of total video encoder power, the power dissipated in external memory such as the DDR SDRAM can be of the order of 40% of the total system power. Reducing power dissipation in video encoders is important in order to improve battery life of mobile devices such as the smart phones and digital camcorders. We propose hardware algorithms which extract only the important features in the video data to reduce the complexity of computations, communications and storage, thereby reducing average power dissipation. We apply this concept to design hardware algorithms for optimizing motion estimation matching complexity, and reference frame storage and access from the external memory. In addition, we also develop techniques to reduce searching complexity of motion estimation. First, we explore a set of adaptive algorithms that reduce average power dissipated due to motion estimation. We propose that by taking into account the macro-block level features in the video data, the average matching complexity of motion estimation in terms of number of computations in real-time hardwired video encoders can be significantly reduced when compared against traditional hardwired implementations, that are designed to handle most demanding data sets. Current macro-block features such as pixel variance and Hadamard transform coefficients are analyzed, and are used to adapt the matching complexity. The macro-block is partitioned based on these features to obtain sub-block sums, which are used for matching operations. Thus, simple macro-blocks, without many features can be matched with much less computations compared to the macro-blocks with complex features, leading to reduction in average power dissipation. Apart from optimizing the matching operation, optimizing the search operation is a powerful way to reduce motion estimation complexity. We propose novel search optimization techniques including (1) a center-biased search order and (2) skipping unlikely search positions, both applied in the context of real time hardware implementation. The proposed search optimization techniques take into account and are compatible with the reference data access pattern from the memory as required by the hardware algorithm. We demonstrate that the matching and searching optimization techniques together achieve nearly 65% reduction in power dissipation due to motion estimation, without any significant degradation in motion estimation quality. A key to low power dissipation in video encoders is minimizing the data traffic between the external memory devices such as DDR SDRAM and the video processor. External memory power can be as high as 50% of the total power budget in a multimedia system. Other than the power dissipation in external memory, the amount of data traffic is an important parameter that has significant impact on the system cost. Large memory traffic necessitates high speed external memories, high speed on-chip interconnect, and more parallel I/Os to increase the memory throughput. This leads to higher system cost. 
We explore a lossy, scalar quantization based reference frame compression technique that can be used to reduce the amount of reference data traffic from external memory devices significantly. In this scheme, the quantization is adapted based on the pixel range within each block being compressed. We show that the error introduced by the scalar quantization is bounded and can be represented by smaller number of bits compared to the original pixel. The proposed reference frame compression scheme uses this property to minimize the motion compensation related traffic, thereby improving the compression scheme efficiency. The scheme maintains a fixed compression ratio, and the size of the quantization error is also kept constant. This enables easy storage and retrieval of reference data. The impact of using lossy reference on the motion estimation quality is negligible. As a result of reduction in DDR traffic, the DDR power is reduced significantly. The power dissipation due to additional hardware required for reference frame compression is very small compared to the reduction in DDR power. 24% reduction in peak DDR bandwidth and 23% net reduction in average DDR power is achieved. For video sequences with larger motion, the amount of bandwidth reduction is even higher (close to 40%) and reduction in power is close to 30%.
APA, Harvard, Vancouver, ISO, and other styles
16

Fick, Machteld. "Neurale netwerke as moontlike woordafkappingstegniek vir Afrikaans." Diss., 2002. http://hdl.handle.net/10500/584.

Full text of the source
Abstract:
Text in Afrikaans
Summaries in Afrikaans and English
In Afrikaans, as in Dutch and German, compound words are written as one word; new words are therefore continually created by simply joining existing words. This complicates word hyphenation during typesetting, which nowadays is done by computer, because the source of reference changes all the time. Several hyphenation algorithms and techniques exist, but the results are unsatisfactory. Afrikaans words with correct syllabification were extracted from the electronic version of the Handwoordeboek van die Afrikaanse Taal (HAT). A neural network (feedforward backpropagation) was trained with about 5 000 of these words. The network was refined by heuristically finding a suitable training algorithm and transfer function for the problem, and by determining the optimal number of hidden layers and the number of neurons in each layer. The neural network was then tested with 5 000 words not in the training data; it classified 97.56% of the possible positions in these words correctly as either valid or invalid hyphenation points. Furthermore, 510 words from magazine articles were tested with the neural network, and 98.75% of the possible positions were classified correctly.
Computing
M.Sc. (Operasionele Navorsing)
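A toy sketch in the spirit of this dissertation: a small feedforward network classifying candidate hyphenation points from a window of surrounding letters. The training words, window size, feature encoding and network shape here are illustrative assumptions, not those used in the study.

```python
# Toy sketch: classify whether a position inside a word is a valid hyphenation
# point from a fixed window of surrounding letters.
from sklearn.neural_network import MLPClassifier

def windows(word, points, size=2):
    """Yield (features, label) pairs for every internal position of `word`."""
    padded = "_" * size + word + "_" * size
    for i in range(1, len(word)):
        window = padded[i:i + 2 * size]          # `size` letters on each side of the break
        yield [ord(c) for c in window], int(i in points)

# A tiny hand-made sample of words with their valid break positions.
training = {"woordafkapping": {5, 7, 10}, "neurale": {3, 5}, "netwerk": {3}}
X, y = [], []
for word, points in training.items():
    for features, label in windows(word, points):
        X.append(features)
        y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
candidates = [f for f, _ in windows("netwerk", set())]
print(clf.predict(candidates))                   # one 0/1 flag per candidate break
```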
APA, Harvard, Vancouver, ISO, and other styles