Doctoral dissertations on the topic "Depth data encoding algorithm"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the 16 best doctoral dissertations for your research on the topic "Depth data encoding algorithm".
An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these are available in the metadata.
Browse dissertations from many different fields and build a sound bibliography.
Aldrovandi, Lorenzo. "Depth estimation algorithm for light field data". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.
Mansour, Moussa Reda. "Algoritmo para obtenção de planos de restabelecimento para sistemas de distribuição de grande porte". Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-06052009-100440/.
An elaborate and fast energy restoration plan (ERP) is required to deal with steady faults in radial distribution systems (RDS). That is, after a faulted zone has been identified and isolated by the relays, a proper ERP must be elaborated to restore energy to that zone. Moreover, during normal system operation, it is frequently necessary to elaborate an ERP to isolate zones for routine network maintenance tasks. Some of the objectives of an ERP are: (i) interrupting very few customers (or none), and (ii) operating a minimal number of switches, while respecting security constraints. As a consequence, service restoration is a multiple-objective problem with some degree of conflict. The main methods developed for elaborating ERPs are based on evolutionary algorithms (EA). The limitation of the majority of these methods is the need for network simplifications to work with large-scale RDS. In general, these simplifications prevent an adequate ERP from being obtained. This work proposes the development and implementation of an algorithm for elaborating ERPs that can deal with large-scale RDS without requiring network simplifications, that is, considering a large number (or all) of the lines, buses, loads and switches of the system. The proposed algorithm is based on a multi-objective EA, on a new graph tree encoding called node-depth encoding (NDE), and on two genetic operators developed to efficiently manipulate graph trees stored in NDEs. Using a multi-objective EA, the proposed algorithm enables a better exploration of the search space. On the other hand, using the NDE and its operators increases the efficiency of the search when the proposed algorithm generates ERPs, because those operators generate only radial configurations in which all consumers are supplied. The efficiency of the proposed algorithm is shown using a Brazilian distribution system with 3,860 buses, 635 switches, 3 substations and 23 feeders.
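To make the node-depth encoding (NDE) idea concrete, here is a minimal sketch in Python, under the assumption that a feeder tree is stored as a preorder list of (node, depth) pairs; the node names and topology are invented for illustration and this is not the dissertation's implementation.

```python
# Minimal sketch of a node-depth encoding (NDE): a rooted tree is stored as a
# preorder list of (node, depth) pairs, so a subtree to be transferred during
# restoration is simply a contiguous slice of the array. Example data is made up.

def subtree_slice(nde, i):
    """Return (start, end) indices of the subtree rooted at position i."""
    d = nde[i][1]
    j = i + 1
    while j < len(nde) and nde[j][1] > d:
        j += 1
    return i, j

def prune(nde, i):
    """Remove the subtree rooted at position i; return (subtree, remainder)."""
    s, e = subtree_slice(nde, i)
    return nde[s:e], nde[:s] + nde[e:]

def graft(nde, subtree, k):
    """Re-insert a pruned subtree below the node at position k (new parent)."""
    s_depth = subtree[0][1]
    new_depth = nde[k][1] + 1
    shifted = [(n, d - s_depth + new_depth) for n, d in subtree]
    return nde[:k + 1] + shifted + nde[k + 1:]

if __name__ == "__main__":
    # Feeder rooted at substation "S", stored in preorder with depths.
    tree = [("S", 0), ("A", 1), ("B", 2), ("C", 2), ("D", 1)]
    sub, rest = prune(tree, 1)        # cut the subtree rooted at "A"
    print(graft(rest, sub, 1))        # reconnect it below "D"
```

Because a subtree is a contiguous slice, the prune-and-graft pair above runs in time linear in the subtree size, which is the property that makes NDE operators attractive for generating only radial, fully supplied configurations.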
Sengupta, Aritra. "Empirical Hierarchical Modeling and Predictive Inference for Big, Spatial, Discrete, and Continuous Data". The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1350660056.
Oliveira, Marcos Antônio Almeida de. "Heurística aplicada ao problema árvore de Steiner Euclidiano com representação nó-profundidade-grau". Universidade Federal de Goiás, 2014. http://repositorio.bc.ufg.br/tede/handle/tede/4171.
Pełny tekst źródłaApproved for entry into archive by Luciana Ferreira (lucgeral@gmail.com) on 2015-02-19T14:34:20Z (GMT) No. of bitstreams: 2 Dissertação - Marcos Antônio Almeida de Oliveira - 2014..pdf: 1092566 bytes, checksum: 55edbdaf5b3ac84fe3f6835682fe2a13 (MD5) license_rdf: 23148 bytes, checksum: 9da0b6dfac957114c6a7714714b86306 (MD5)
Made available in DSpace on 2015-02-19T14:34:20Z (GMT). No. of bitstreams: 2 Dissertação - Marcos Antônio Almeida de Oliveira - 2014..pdf: 1092566 bytes, checksum: 55edbdaf5b3ac84fe3f6835682fe2a13 (MD5) license_rdf: 23148 bytes, checksum: 9da0b6dfac957114c6a7714714b86306 (MD5) Previous issue date: 2014-09-03
Fundação de Amparo à Pesquisa do Estado de Goiás - FAPEG
A variation of the Beasley (1992) algorithm for the Euclidean Steiner tree problem is presented. This variation uses the Node-Depth-Degree Encoding, which requires an average time of O(n) for the operations that generate and manipulate spanning forests. For spanning tree problems, this representation has linear time complexity and has been applied to network design problems with evolutionary algorithms. Computational results are given for test cases involving instances of up to 500 vertices. These results demonstrate the use of the Node-Depth-Degree Encoding in an exact heuristic, which suggests the possibility of using this representation in techniques other than evolutionary algorithms. An empirical comparison and a complexity analysis between the proposed algorithm and a conventional representation indicate the efficiency advantages of the proposed solution.
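As a rough illustration of why depth-based tree encodings are cheap to evaluate, the sketch below (a simplification under my own assumptions, not the dissertation's Node-Depth-Degree implementation) recovers the edges of a tree from a preorder (node, depth) list with a stack and sums Euclidean edge lengths; a node-depth-degree encoding would additionally store each node's degree alongside.

```python
import math

# Illustrative sketch only: recover edges from a preorder (node, depth) list,
# then compute the Euclidean length of the encoded tree.
# Coordinates and the tiny example tree are hypothetical.

def edges_from_depth_encoding(nde):
    stack, edges = [], []
    for node, depth in nde:
        while len(stack) > depth:      # pop back to this node's parent level
            stack.pop()
        if stack:
            edges.append((stack[-1], node))
        stack.append(node)
    return edges

def tree_length(nde, coords):
    return sum(math.dist(coords[u], coords[v])
               for u, v in edges_from_depth_encoding(nde))

if __name__ == "__main__":
    nde = [("a", 0), ("b", 1), ("c", 2), ("d", 1)]
    coords = {"a": (0, 0), "b": (1, 0), "c": (1, 1), "d": (0, 1)}
    print(edges_from_depth_encoding(nde))   # [('a','b'), ('b','c'), ('a','d')]
    print(tree_length(nde, coords))         # 3.0
```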
Marques, Leandro Tolomeu. "Restabelecimento de energia em sistemas de distribuição considerando aspectos práticos". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-26072018-134924/.
In the context of distribution system operation, service restoration is one of the problems operators constantly deal with. It arises when a permanent fault occurs and is handled through switching operations in the primary grid. Since distribution systems are usually radial, a fault also disconnects healthy customers. Thereby, the service restoration problem consists in defining, in a short processing time, the minimum number of switches that must be operated to isolate the fault and reconnect the maximum number of healthy out-of-service customers. The effort devoted to developing computational tools for solving this problem has increased in recent years, in particular because of the enormous losses caused to utilities and to society as a whole. In this sense, the main objective of this research is to obtain a method able to support the distribution system operator's work by providing service restoration plans quickly. The distinguishing features of this research are its ability to: deal with large-scale grids at a reduced computational effort; consider customers of several priority levels (a hospital, for instance, has a higher supply priority than a big supermarket) and prioritize the higher-priority customers; provide a switching sequence able to isolate the fault and reconnect the maximum number of healthy out-of-service customers with the minimum number of switching actions; select lower-priority customers to keep out of service in order to reconnect higher-priority customers when it is not possible to restore all of them; and, additionally, prioritize operations on remotely controlled switches, whose operation is faster and cheaper than that of manually controlled switches. The proposed method combines a local exhaustive search with a new multi-objective evolutionary algorithm in subpopulation tables that uses a data structure named Node-Depth Encoding. To evaluate the relative performance of the proposed method, simulations were performed on small distribution systems and the performance was compared with that of a mathematical programming method from the literature. New experiments were then performed for several fault situations in the real, large-scale distribution system of Londrina-PR and adjacent cities. The solutions provided were appropriate for the treatment of such contingency situations, as were the switching sequences provided, which were able to prioritize the restoration of higher-priority customers. Additional studies evaluated how the running time varies with the size of the grid and with the value adopted for the maximum number of generations of the evolutionary algorithm (an input parameter). The results showed that the running time of the proposed method is suitable for the needs of the problem. Therefore, the proposed method achieved the specified objectives, in particular the treatment of practical aspects of the problem. Besides the proposed method, contributions of this research include a new multi-objective evolutionary algorithm in subpopulation tables and a new reproduction operator to manipulate graph forests computationally represented by the Node-Depth Encoding.
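To illustrate the multi-objective comparison at the heart of such methods, the following toy sketch applies Pareto dominance to candidate restoration plans; the two objectives and all figures are hypothetical, not taken from the dissertation.

```python
# Toy sketch of the multi-objective comparison behind service restoration:
# minimize (priority-weighted load left out of service, number of switch
# operations). Plans and numbers are hypothetical.

def dominates(a, b):
    """True if plan `a` is no worse than `b` in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(plans):
    return {name: obj for name, obj in plans.items()
            if not any(dominates(other, obj)
                       for o, other in plans.items() if o != name)}

if __name__ == "__main__":
    plans = {                      # (weighted out-of-service load [kW], switchings)
        "plan_A": (120.0, 4),
        "plan_B": (80.0, 7),
        "plan_C": (120.0, 6),      # dominated by plan_A
    }
    print(pareto_front(plans))     # plan_A and plan_B remain
```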
Hemanth, Kumar S. "On Applications of 3D-Warping and An Analysis of a RANSAC Heuristic". Thesis, 2018. http://etd.iisc.ac.in/handle/2005/4156.
Pełny tekst źródłaYu-ChihChang i 張宇志. "A Fast Depth Map Compression Algorithm for H.264 Encoding Scheme". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/21078118850434017531.
國立成功大學 (National Cheng Kung University)
電機工程學系碩博士班 (Department of Electrical Engineering, Master's and Doctoral Program)
Academic year: 100 (ROC calendar, 2011-2012)
Due to the rapidly growing number of consumers of 3D and electronic products such as smart phones and 3DTV, the representation of 3D video has received much attention in recent years. One of the best-known 3D video representations, developed in the Advanced Three-Dimensional Television System Technologies (ATTEST) project, uses a monoscopic video (color component) together with per-pixel depth information (depth component). After the two sequences are decoded, they are used to synthesize free-viewpoint views by means of depth image-based rendering techniques. Depth Image-Based Rendering (DIBR) is a popular method for synthesizing 3D free-viewpoint virtual views: it uses a color sequence and its corresponding depth sequence to synthesize views from different positions. However, for real-time video transmission, even when such representations are used to reduce the amount of data transferred instead of traditionally encoding and decoding two full video sequences, wireless network conditions and the power constraints of handheld devices mean that bandwidth is always limited. Therefore, data compression plays an important role in a 3D codec system. This thesis uses the H.264/AVC video coding format and proposes a new motion estimation algorithm for the color video, combined with a 3D search algorithm for depth map compression, to decrease the overall encoding time while preserving the quality of the depth map.
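For context, the baseline that fast motion-estimation schemes try to accelerate is exhaustive block matching; a plain full-search SAD matcher can be sketched as follows (a hedged illustration only, not the algorithm proposed in the thesis).

```python
import numpy as np

# Hedged illustration of block-matching motion estimation (full-search SAD),
# the baseline that fast depth-map motion-estimation schemes try to accelerate.

def motion_vector(ref, cur, bx, by, block=8, search=8):
    """Best (dy, dx) for the block of `cur` at (by, bx), searched in `ref`."""
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    cur = np.roll(ref, shift=(2, 3), axis=(0, 1))   # synthetic motion of (2, 3)
    print(motion_vector(ref, cur, bx=24, by=24))    # expect ((-2, -3), 0)
```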
Chen, Wei-Jen, and 陳威任. "The Modified Encoding Algorithm of ECG Data Compression Based on Wavelet Transform". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/04519476568133918606.
國立臺北科技大學 (National Taipei University of Technology)
電機工程系碩士班 (Department of Electrical Engineering, Master's Program)
Academic year: 92 (ROC calendar, 2003-2004)
Because ECG data are voluminous and require long recording times, data compression of the ECG has been investigated for several decades. The purpose of this thesis is to improve the performance of ECG data compression and to obtain a high compression rate with low reconstruction error. A transform-type encoding structure is adopted. First, the input ECG signal is preprocessed into a suitable form. After preprocessing, the ECG signal is processed by the wavelet transform. A threshold value is then determined from the fraction of signal energy retained after compression, and the less significant coefficients are discarded. After thresholding, run-length encoding and DPCM are employed to encode the resulting data. The total data length includes the encoded data and the reference data used for reconstruction. Finally, in order to test and verify the effectiveness of the proposed algorithm, computer programs were developed to compress the ECG signals of the MIT-BIH arrhythmia database. The simulation results show that a higher compression rate with lower reconstruction error is obtained. In addition, the compression rate and reconstruction error can be adjusted according to requirements.
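The pipeline described above (wavelet transform, energy-based thresholding, then run-length and DPCM coding) can be sketched roughly as follows; the wavelet choice, decomposition level, energy fraction and toy signal are assumptions for illustration, not the thesis' exact settings.

```python
import numpy as np
import pywt

# Rough sketch of the described pipeline: wavelet transform -> energy-based
# threshold -> run-length coding of zero runs + DPCM of the kept coefficients.
# Parameters (wavelet, level, energy fraction) are illustrative assumptions.

def compress(ecg, wavelet="db4", level=4, keep_energy=0.99):
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    flat = np.concatenate(coeffs)
    # Smallest magnitude threshold that still retains `keep_energy` of the energy.
    mags = np.sort(np.abs(flat))[::-1]
    cum = np.cumsum(mags ** 2) / np.sum(mags ** 2)
    thr = mags[np.searchsorted(cum, keep_energy)]
    kept = np.where(np.abs(flat) >= thr, flat, 0.0)

    # Zero-run-length coding plus DPCM of the (crudely quantized) nonzero values.
    stream, prev, run = [], 0, 0
    for q in np.round(kept).astype(int):
        if q == 0:
            run += 1
        else:
            stream.append(("Z", run)); run = 0
            stream.append(("D", q - prev)); prev = q
    if run:
        stream.append(("Z", run))
    return stream, [len(c) for c in coeffs]

if __name__ == "__main__":
    t = np.linspace(0, 1, 1024)
    ecg = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 50 * t)  # toy signal
    stream, layout = compress(ecg)
    print(len(stream), "symbols for", len(ecg), "samples")
```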
Chen, Hung-lung, and 陳宏隆. "Run length encoding-based algorithm for mining association rules in data stream". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/90489529147629035129.
南華大學 (Nanhua University)
資訊管理學研究所 (Graduate Institute of Information Management)
Academic year: 96 (ROC calendar, 2007-2008)
Mining data streams is a new and growing research field, and association rule mining is one of the most important and practical techniques in data mining. The main purpose of association rules is to find dependencies among items in a very large database; the usual approach is to scan the database for all frequent itemsets and then use those frequent itemsets to derive all association rules. Because a large amount of data and a large number of candidate itemsets are generated, finding all frequent itemsets is computationally expensive. Effectively reducing the amount of data processed, the number of candidate itemsets generated, and the number of database scans therefore makes association rule mining more efficient. This study uses run-length encoding (RLE) to reduce the amount of data that must be handled when mining association rules over a dynamic database. Its main contribution is a new data-handling method that encodes the transaction database into a compact form, mines the encoded data directly in memory, and can update the encoding efficiently when the data change rapidly, thereby speeding up the mining algorithm and improving processing efficiency.
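To see how run-length encoding can stand in for raw transaction data when counting itemset support, consider the following sketch; the transactions are invented and the run-merging scheme is a generic illustration, not the dissertation's algorithm.

```python
# Hedged sketch: run-length encode each item's occurrence bit-vector over the
# transactions, then compute single-item and pairwise itemset support directly
# from the runs. The transactions below are made up for illustration.

def rle(bits):
    runs, prev, length = [], bits[0], 0
    for b in bits:
        if b == prev:
            length += 1
        else:
            runs.append((prev, length)); prev, length = b, 1
    runs.append((prev, length))
    return runs

def support(runs):
    return sum(length for bit, length in runs if bit == 1)

def and_runs(a, b):
    """Run-length encoding of the bitwise AND of two RLE bit-vectors."""
    out, ai, bi, ar, br = [], 0, 0, a[0][1], b[0][1]
    while ai < len(a) and bi < len(b):
        step = min(ar, br)
        bit = a[ai][0] & b[bi][0]
        if out and out[-1][0] == bit:
            out[-1] = (bit, out[-1][1] + step)
        else:
            out.append((bit, step))
        ar -= step; br -= step
        if ar == 0:
            ai += 1; ar = a[ai][1] if ai < len(a) else 0
        if br == 0:
            bi += 1; br = b[bi][1] if bi < len(b) else 0
    return out

if __name__ == "__main__":
    bread = [1, 1, 0, 1, 0, 1, 1, 0]     # item occurrence per transaction
    milk  = [1, 0, 0, 1, 1, 1, 0, 0]
    rb, rm = rle(bread), rle(milk)
    print(support(rb), support(rm), support(and_runs(rb, rm)))   # 5 4 3
```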
Yu, Ang-Hsun, and 游昂勳. "An Improved Data Hiding Algorithm Based on the Method of Matrix Encoding". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/73459809838000223623.
國立暨南國際大學 (National Chi Nan University)
資訊工程學系 (Department of Computer Science and Information Engineering)
Academic year: 100 (ROC calendar, 2011-2012)
Data (information) hiding means embedding secret information into a cover host, such as an image. Usually, the naked eye cannot perceive any change when the image is modified only slightly. Data hiding schemes are evaluated by the distortion they introduce (measured as the mean squared error, MSE) and by the embedding rate (the average number of bits embedded per cover pixel). In this work, we propose an improved data hiding scheme that extends the matrix encoding-based data hiding algorithm with the idea of Hamming+1 to further enhance stego-image quality. The proposed scheme is verified through theoretical analysis and experiments.
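The matrix-encoding step that the thesis builds on can be illustrated with the classic Hamming-based scheme, which embeds three message bits into seven cover LSBs by flipping at most one of them; the sketch below shows that core step only, not the proposed Hamming+1 refinement.

```python
import numpy as np

# Generic matrix-embedding sketch: hide k=3 message bits in n=7 cover LSBs by
# changing at most one cover bit. H's columns are the binary representations of
# 1..7, so the syndrome directly names the position to flip.

H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(2, -1, -1)])

def embed(cover_bits, msg_bits):
    x = np.array(cover_bits)
    syndrome = (H @ x) % 2 ^ np.array(msg_bits)
    pos = int("".join(map(str, syndrome)), 2)   # 0 means "no change needed"
    if pos:
        x[pos - 1] ^= 1                          # flip at most one bit
    return x

def extract(stego_bits):
    return (H @ np.array(stego_bits)) % 2

if __name__ == "__main__":
    cover = [1, 0, 1, 1, 0, 0, 1]                # LSBs of 7 cover pixels (made up)
    msg = [1, 0, 1]
    stego = embed(cover, msg)
    assert list(extract(stego)) == msg
    print(cover, "->", list(stego))
```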
Chen, I.-Kuei, and 陳奕魁. "Algorithm Design on Intelligent Vision for Objects using RGB and Depth Data". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/22090419671051833176.
國立臺灣大學 (National Taiwan University)
電子工程學研究所 (Graduate Institute of Electronics Engineering)
Academic year: 101 (ROC calendar, 2012-2013)
In recent years, new applications such as augmented reality, driverless cars, intelligent environmental surveillance, human action analysis, and in-home robotics have become possible thanks to advances in computer vision (CV), and object recognition plays an essential role in all of these tasks. Objects are the basic meaningful units of the surrounding environment, and good performance in these CV applications can be achieved if all related objects can be successfully tracked, detected, segmented, recognized and analyzed. However, there are still limitations in current 2D CV methods. Color data combined with depth data are therefore considered in order to improve performance and solve problems encountered by traditional algorithms. Depth sensors are now cheap and readily available, and they bring new opportunities and challenges to many CV areas. In this thesis, a complete system is designed to solve essential problems of object recognition with RGB and depth data (RGB-D data). The system aims to assist emerging CV applications that help people live more conveniently and have more fun. Essential algorithms for object recognition are developed and integrated to provide a thorough solution to the problems encountered by previous works. First, 3D structure analysis is developed to segment the input RGB-D data into basic 3D structure elements. The indoor scene is parsed into planes and physically meaningful clusters using depth and surface normals. Further analysis and processing are then performed on those clusters as object candidates. A de-noising and accurate object segmentation algorithm is then proposed. Depth data are useful for segmenting raw clusters, but they are often noisy and broken at object edges. Color images and depth data are therefore combined to segment objects accurately, exploiting the higher quality of passive color images. Detailed boundaries are preserved by color superpixels, and the 3D structure is used to build foreground and background color models. Using superpixels and color models, accurate object segmentation is achieved automatically without any user input. 3D object tracking is also developed. The targeted object can be tracked in real time under large variations in size, translation, rotation, illumination, and appearance. Using RGB-D data, high performance can be achieved, and features such as positions, global color and normal models, and local 3D descriptors are processed for tracking. Compared to previous 2D tracking, no sliding window or particle filtering is needed, since 3D structure elements are available. Pixel-wise accurate video segmentation can also be achieved with the proposed segmentation method. Finally, a novel on-line learning method is proposed to train robust object detectors. Using the proposed object tracking, training data with labels can be generated automatically, and efficient on-line learning and detection methods are used for real-time performance. Combining object detectors with tracking, object recognition and tracking recovery can be achieved. In previous 2D CV learning tasks, training datasets often suffer from a lack of variability, cluttered backgrounds, and the inability to automatically locate and segment targets of interest. The proposed on-line learning addresses these problems. In summary, a highly integrated algorithm for object recognition is designed, and RGB-D data are used and studied for object segmentation, tracking, on-line learning, and detection.
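One small piece of such a pipeline, estimating per-pixel surface normals from a depth map, can be sketched as follows; the pinhole intrinsics and the synthetic depth image are assumptions made for illustration, not values from the thesis.

```python
import numpy as np

# Sketch: back-project a depth map to 3D with assumed pinhole intrinsics, then
# estimate per-pixel surface normals from the cross product of the horizontal
# and vertical tangent vectors. Intrinsics and depth values are made up.

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # assumed Kinect-like intrinsics

def backproject(depth):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.dstack([x, y, depth])            # HxWx3 point cloud

def normals(depth):
    pts = backproject(depth)
    du = np.gradient(pts, axis=1)              # tangent along image columns
    dv = np.gradient(pts, axis=0)              # tangent along image rows
    n = np.cross(du, dv)
    n /= np.linalg.norm(n, axis=2, keepdims=True) + 1e-9
    return n

if __name__ == "__main__":
    depth = np.full((480, 640), 2.0)           # synthetic flat wall at 2 m
    n = normals(depth)
    print(n[240, 320])                          # ~ [0, 0, 1]: fronto-parallel plane
```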
Cunha, Joel Antonio Teixeira. "Improved Depth Estimation Algorithm using Plenoptic Cameras". Master's thesis, 2015. http://hdl.handle.net/10316/40461.
Awwad, Sari Moh'd Ismaeil. "Tracking and fine-grained activity recognition in depth videos". Thesis, 2016. http://hdl.handle.net/10453/90070.
Tracking and activity recognition in video are arguably two of the most active topics within the field of computer vision and pattern recognition. Historically, tracking and activity recognition have been performed on conventional video such as color or grey-level frames, either of which contains significant clues for the identification of targets. While this is often a desirable feature in the context of video surveillance, the use of such video for activity recognition or tracking in privacy-sensitive environments such as hospitals and care facilities is often perceived as intrusive. For this reason, this PhD research has focused on providing tracking and activity recognition solely from depth videos, which offer a naturally privacy-preserving visual representation of the scene at hand. Depth videos can nowadays be acquired with inexpensive and widely available commercial sensors such as Microsoft Kinect and Asus Xtion. The two main contributions of this research are the design of a specialised tracking algorithm for depth data and a fine-grained activity recognition approach for recognising activities in depth video. The proposed tracker is an extension of the popular Struck algorithm, an approach that leverages a structural support vector machine (SVM) for tracking. The main contributions of the proposed tracker include a dedicated depth feature based on local depth patterns, a heuristic for handling view occlusions in depth frames, and a technique for keeping the number of support vectors within a given budget, so as to limit computational costs. The proposed fine-grained activity recognition approach, in turn, leverages multi-scale depth measurements and a Fisher-consistent multi-class SVM. In addition to the novel approaches for tracking and activity recognition, this thesis presents a practical computer vision application for the detection of hand hygiene at a hospital, developed in collaboration with clinical researchers from the Intensive Care Unit of Sydney's Royal Prince Alfred Hospital. Experiments presented throughout the thesis confirm that the proposed approaches are effective and either outperform the state of the art or significantly reduce the need for sensor instrumentation. The outcomes of the hand-hygiene detection were also positively received and assessed by the clinical research unit.
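A generic, simplified take on a local depth-pattern feature, comparing cell means of a depth patch against the patch-centre depth in LBP style, is sketched below; this is only meant to convey the flavour of such descriptors and is not the thesis' exact definition.

```python
import numpy as np

# Illustrative only: an LBP-style "local depth pattern" over a square patch.
# The patch is split into a grid of cells; each cell contributes one bit that
# says whether its mean depth is farther than the patch-centre depth.

def local_depth_pattern(patch, grid=4):
    h, w = patch.shape
    centre = patch[h // 2, w // 2]
    bits = []
    for i in range(grid):
        for j in range(grid):
            cell = patch[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            bits.append(int(np.nanmean(cell) > centre))
    return np.array(bits)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    patch = 2.0 + 0.05 * rng.standard_normal((32, 32))   # fake depth patch (metres)
    patch[20:, 20:] = 3.0                                 # background further away
    print(local_depth_pattern(patch))
```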
Gupte, Ajit D. "Techniques For Low Power Motion Estimation In Video Encoders". Thesis, 2010. https://etd.iisc.ac.in/handle/2005/1918.
Fick, Machteld. "Neurale netwerke as moontlike woordafkappingstegniek vir Afrikaans". Diss., 2002. http://hdl.handle.net/10500/584.
Summaries in Afrikaans and English.
In Afrikaans, as in Dutch and German, compound words are written as one word. New words are therefore created simply by joining existing words. Word hyphenation during typesetting by computer is a problem, because the source of reference changes all the time. Several algorithms and techniques for hyphenation exist, but the results are not satisfactory. Afrikaans words with correct syllabification were extracted from the electronic version of the Handwoordeboek van die Afrikaanse Taal (HAT). A neural network (feedforward backpropagation) was trained with about 5 000 of these words. The neural network was refined by heuristically finding a suitable training algorithm and transfer function for the problem, as well as by determining the optimal number of layers and the number of neurons in each layer. The neural network was tested with 5 000 words not in the training data; it classified 97,56% of the possible points in these words correctly as either valid or invalid hyphenation points. Furthermore, 510 words from magazine articles were tested with the neural network, and 98,75% of the possible positions were classified correctly.
Computing
M.Sc. (Operations Research)
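A toy version of the classifier described in the summary above can be put together with scikit-learn: character windows around each candidate split point are one-hot encoded and fed to a small feedforward network. The example words, window size, and network size below are illustrative assumptions, not the thesis' configuration or data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy sketch of hyphenation-point classification: a fixed window of characters
# around each candidate split point is one-hot encoded and classified as a
# valid/invalid hyphenation position by a small feedforward network.

ALPHABET = "abcdefghijklmnopqrstuvwxyz_"        # "_" pads word boundaries
WIN = 4                                          # characters on each side

def window_features(word, pos):
    padded = "_" * WIN + word + "_" * WIN
    window = padded[pos:pos + 2 * WIN]           # WIN chars left + WIN right of split
    vec = np.zeros(len(window) * len(ALPHABET))
    for i, ch in enumerate(window):
        vec[i * len(ALPHABET) + ALPHABET.index(ch)] = 1.0
    return vec

def examples(word, split_points):
    """One (features, label) pair per internal split position of `word`."""
    for pos in range(1, len(word)):
        yield window_features(word, pos), int(pos in split_points)

if __name__ == "__main__":
    # Hypothetical training words with their valid hyphenation positions.
    data = [("water", {2}), ("tafel", {2}), ("rekenaar", {2, 4}), ("netwerk", {3})]
    X, y = zip(*(ex for w, s in data for ex in examples(w, s)))
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(np.array(X), np.array(y))
    print(clf.predict([window_features("water", 2)]))    # expect [1]
```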