Academic literature on the topic 'Depth data encoding algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Depth data encoding algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Depth data encoding algorithm"

1

Gois, Marcilyanne M., Paulo Matias, André B. Perina, Vanderlei Bonato, and Alexandre C. B. Delbem. "A Parallel Hardware Architecture based on Node-Depth Encoding to Solve Network Design Problems." International Journal of Natural Computing Research 4, no. 1 (January 2014): 54–75. http://dx.doi.org/10.4018/ijncr.2014010105.

Abstract:
Many problems involving network design can be found in the real world, such as electric power circuit planning, telecommunications and phylogenetic trees. In general, solutions for these problems are modeled as forests represented by a graph manipulating thousands or millions of input variables, making it hard to obtain the solutions in a reasonable time. To overcome this restriction, Evolutionary Algorithms (EAs) with dynamic data structures (encodings) have been widely investigated to increase the performance of EAs for Network Design Problems (NDPs). In this context, this paper proposes a parallelization of the node-depth encoding (NDE), a data structure especially designed for NDPs. Based on the NDE the authors have developed a parallel algorithm and a hardware architecture implemented on FPGA (Field-Programmable Gate Array), denominated Hardware Parallelized NDE (HP-NDE). The running times obtained in a general purpose processor (GPP) and the HP-NDE are compared. The results show a significant speedup in relation to the GPP solution, solving NDP in a time limited by a constant. Such time upper bound can be satisfied for any size of network until the hardware resources available on the FPGA are depleted. The authors evaluated the HP-NDE on a Stratix IV FPGA with networks containing up to 2048 nodes.
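The node-depth encoding that the HP-NDE parallelizes has a compact software illustration: a tree is stored as a preorder list of (node, depth) pairs, so every subtree is a contiguous run of pairs. A minimal Python sketch of that idea, following the general NDE literature rather than the authors' FPGA design:

```python
# Minimal node-depth encoding (NDE) sketch: a tree as a preorder
# list of (node, depth) pairs. Illustrative only; not the paper's
# hardware implementation.

def subtree_span(nde, i):
    """Slice of `nde` holding the subtree rooted at index i.

    A subtree occupies a contiguous run of pairs whose depth exceeds
    the root's depth, which is what makes pruning and grafting cheap.
    """
    root_depth = nde[i][1]
    j = i + 1
    while j < len(nde) and nde[j][1] > root_depth:
        j += 1
    return nde[i:j]

# Node 0 at depth 0 with children 1 and 4; node 1 has children 2, 3.
tree = [(0, 0), (1, 1), (2, 2), (3, 2), (4, 1)]
print(subtree_span(tree, 1))   # [(1, 1), (2, 2), (3, 2)]
```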
2

He, Shuqian, Zhengjie Deng, and Chun Shi. "Fast Decision Algorithm of CU Size for HEVC Intra-Prediction Based on a Kernel Fuzzy SVM Classifier." Electronics 11, no. 17 (September 5, 2022): 2791. http://dx.doi.org/10.3390/electronics11172791.

Abstract:
High Efficiency Video Coding (HEVC) achieves a significant improvement in compression efficiency at the cost of extremely high computational complexity. Therefore, large-scale and wide deployment applications, especially mobile real-time video applications under low-latency and power-constrained conditions, are more challenging. In order to solve the above problems, a fast decision method for intra-coding unit size based on a new fuzzy support vector machine classifier is proposed in this paper. The relationship between the depth levels of coding units is accurately expressed by defining the cost evaluation criteria of texture and non-texture rate-distortion cost. The fuzzy support vector machine is improved by using the information entropy measure to solve the negative impact of data noise and the outliers problem. The proposed method includes three stages: the optimal coded depth level “0” early decision, coding unit depth early skip, and optimal coding unit early terminate. In order to further improve the rate-distortion complexity optimization performance, more feature vectors are introduced, including features such as space complexity, the relationship between coding unit depths, and rate-distortion cost. The experimental results showed that, compared with the HEVC reference test model HM16.5, the proposed algorithm can reduce the encoding time of various test video sequences by more than 53.24% on average, while the Bjontegaard Delta Bit Rate (BDBR) only increases by 0.82%. In addition, the proposed algorithm is better than the existing algorithms in terms of comprehensively reducing the computational complexity and maintaining the rate-distortion performance.
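The decision structure such a classifier enables can be shown with a toy sketch. Here scikit-learn's plain SVC and two synthetic features stand in for the paper's kernel fuzzy SVM and its texture/rate-distortion features; only the early-termination control flow is the point:

```python
# Toy sketch of classifier-driven early CU-depth decisions.
# Features and training data are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                    # e.g., (variance, RD-cost proxy)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # 1 = "split further"
svm = SVC(kernel="rbf").fit(X, y)

def decide_depth(features_per_depth, max_depth=3):
    """Descend while the classifier votes 'split'; terminate early otherwise."""
    depth = 0
    while depth < max_depth and svm.predict([features_per_depth[depth]])[0] == 1:
        depth += 1
    return depth

cu = rng.normal(size=(4, 2))   # one feature vector per candidate depth level
print("chosen CU depth:", decide_depth(cu))
```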
3

Lilhore, Umesh Kumar, Osamah Ibrahim Khalaf, Sarita Simaiya, Carlos Andrés Tavera Romero, Ghaida Muttashar Abdulsahib, Poongodi M, and Dinesh Kumar. "A depth-controlled and energy-efficient routing protocol for underwater wireless sensor networks." International Journal of Distributed Sensor Networks 18, no. 9 (September 2022): 155013292211171. http://dx.doi.org/10.1177/15501329221117118.

Abstract:
Underwater wireless sensor network attracted massive attention from researchers. In underwater wireless sensor network, many sensor nodes are distributed at different depths in the sea. Due to its complex nature, updating their location or adding new devices is pretty challenging. Due to the constraints on energy storage of underwater wireless sensor network end devices and the complexity of repairing or recharging the device underwater, this is highly significant to strengthen the energy performance of underwater wireless sensor network. An imbalance in power consumption can cause poor performance and a limited network lifetime. To overcome these issues, we propose a depth controlled with energy-balanced routing protocol, which will be able to adjust the depth of lower energy nodes and be able to swap the lower energy nodes with higher energy nodes to ensure consistent energy utilization. The proposed energy-efficient routing protocol is based on an enhanced genetic algorithm and data fusion technique. In the proposed energy-efficient routing protocol, an existing genetic algorithm is enhanced by adding an encoding strategy, a crossover procedure, and an improved mutation operation that helps determine the nodes. The proposed model also utilized an enhanced back propagation neural network for data fusion operation, which is based on multi-hop system and also operates a highly optimized momentum technique, which helps to choose only optimum energy nodes and avoid duplicate selections that help to improve the overall energy and further reduce the quantity of data transmission. In the proposed energy-efficient routing protocol, an enhanced cluster head node is used to select a strategy that can analyze the remaining energy and directions of each participating node. In the simulation, the proposed model achieves 86.7% packet delivery ratio, 12.6% energy consumption, and 10.5% packet drop ratio over existing depth-based routing and energy-efficient depth-based routing methods for underwater wireless sensor network.
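As a rough illustration of the genetic-algorithm step, the sketch below evolves a bitstring that selects forwarding nodes, rewarding residual energy and penalizing depth. The encoding, fitness weights, and data are illustrative assumptions, not the protocol's actual operators:

```python
# Toy GA for relay-node selection: bitstring encoding, one-point
# crossover, bit-flip mutation. All values are synthetic.
import random
random.seed(1)

N = 12
energy = [random.uniform(0.2, 1.0) for _ in range(N)]   # residual energy
depth = [random.uniform(0.0, 1.0) for _ in range(N)]    # normalized depth

def fitness(bits):
    return sum(b * (energy[i] - 0.5 * depth[i]) for i, b in enumerate(bits))

def crossover(a, b):
    cut = random.randrange(1, N)
    return a[:cut] + b[cut:]

def mutate(bits, p=0.05):
    return [1 - b if random.random() < p else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(30)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                    # keep the fittest selections
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(20)]
print("best relay set:", max(pop, key=fitness))
```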
4

Petrova, Natalia, and Natalia Mokshina. "Using FIBexDB for In-Depth Analysis of Flax Lectin Gene Expression in Response to Fusarium oxysporum Infection." Plants 11, no. 2 (January 7, 2022): 163. http://dx.doi.org/10.3390/plants11020163.

Abstract:
Plant proteins with lectin domains play an essential role in plant immunity modulation, but among a plurality of lectins recruited by plants, only a few members have been functionally characterized. For the analysis of flax lectin gene expression, we used FIBexDB, which includes an efficient algorithm for flax gene expression analysis combining gene clustering and coexpression network analysis. We analyzed the lectin gene expression in various flax tissues, including root tips infected with Fusarium oxysporum. Two pools of lectin genes were revealed: downregulated and upregulated during the infection. Lectins with suppressed gene expression are associated with protein biosynthesis (Calreticulin family), cell wall biosynthesis (galactose-binding lectin family) and cytoskeleton functioning (Malectin family). Among the upregulated lectin genes were those encoding lectins from the Hevein, Nictaba, and GNA families. The main participants from each group are discussed. A list of lectin genes, the expression of which can determine the resistance of flax, is proposed, for example, the genes encoding amaranthins. We demonstrate that FIBexDB is an efficient tool both for the visualization of data, and for searching for the general patterns of lectin genes that may play an essential role in normal plant development and defense.
5

Tăbuş, Ioan, and Emre Can Kaya. "Information Theoretic Modeling of High Precision Disparity Data for Lossy Compression and Object Segmentation." Entropy 21, no. 11 (November 13, 2019): 1113. http://dx.doi.org/10.3390/e21111113.

Abstract:
In this paper, we study the geometry data associated with disparity map or depth map images in order to extract easy-to-compress polynomial surface models at different bitrates, proposing an efficient mining strategy for geometry information. The segmentation, or partition of the image pixels, is viewed as a model structure selection problem, where the decisions are based on the implementable codelength of the model, akin to minimum description length for lossy representations. The intended usage of the extracted disparity map is to provide to the decoder the geometry information at a very small fraction of what is required for a lossless compressed version, and secondly, to convey to the decoder a segmentation describing the contours of the objects from the scene. We first propose an algorithm for constructing a hierarchical segmentation based on the persistence of the contours of regions in an iterative re-estimation algorithm. Then, we propose a second algorithm for constructing a new sequence of segmentations, by selecting the order in which the persistent contours are included in the model, driven by decisions based on the descriptive codelength. We consider real disparity datasets which have the geometry information at a high precision, in floating point format, but for which encoding of the raw information, at about 32 bits per pixel, is too expensive, and we then demonstrate good approximations preserving the object structure of the scene, achieved for rates below 0.2 bits per pixel.
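The flavor of such codelength-driven model selection fits in a few lines: pick the model whose parameter cost plus residual cost is smallest. The sketch below chooses a polynomial order for a 1-D disparity profile; the 32-bit parameter cost and Gaussian residual model are simplifying assumptions, not the paper's coder:

```python
# Two-part-codelength (MDL-style) selection of a polynomial order
# for a synthetic disparity profile.
import numpy as np

x = np.linspace(0, 1, 200)
rng = np.random.default_rng(0)
disparity = 3.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0, 0.01, x.size)

def codelength(order):
    coeffs = np.polyfit(x, disparity, order)
    resid = disparity - np.polyval(coeffs, x)
    param_bits = 32 * (order + 1)          # crude cost of the model
    # Differential-entropy proxy for the residual cost; only the
    # differences between orders matter for the comparison.
    data_bits = 0.5 * x.size * np.log2(np.var(resid) + 1e-12)
    return param_bits + data_bits

best = min(range(6), key=codelength)
print("selected polynomial order:", best)  # lands at the true order, 2
```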
6

Jiang, Ming-xin, Xian-xian Luo, Tao Hai, Hai-yan Wang, Song Yang, and Ahmed N. Abdalla. "Visual Object Tracking in RGB-D Data via Genetic Feature Learning." Complexity 2019 (May 2, 2019): 1–8. http://dx.doi.org/10.1155/2019/4539410.

Abstract:
Visual object tracking is a fundamental component in many computer vision applications. Extracting robust features of the object is one of the most important steps in tracking. As trackers formulated only on RGB data are usually affected by occlusions and appearance or illumination variations, we propose a novel RGB-D tracking method based on genetic feature learning in this paper. Our approach addresses feature learning as an optimization problem. Owing to the advantage of parallel computing, the genetic algorithm (GA) has a fast convergence speed and excellent global optimization performance. At the same time, unlike handcrafted features and deep learning methods, the GA can be employed to solve the feature representation problem without prior knowledge, and it does not require a large number of parameters to be learned. The candidate solution in the RGB or depth modality is represented as an encoding of an image in the GA, and genetic features are learned through population initialization, fitness evaluation, selection, crossover, and mutation. The proposed RGB-D tracker is evaluated on a popular benchmark dataset, and experimental results indicate that our method achieves higher accuracy and faster tracking speed.
7

Han, Lei, Xiaohua Huang, Zhan Shi, and Shengnan Zheng. "Depth Estimation from Light Field Geometry Using Convolutional Neural Networks." Sensors 21, no. 18 (September 10, 2021): 6061. http://dx.doi.org/10.3390/s21186061.

Abstract:
Depth estimation based on light field imaging is a new methodology that has succeeded the traditional binocular stereo matching and depth from monocular images. Significant progress has been made in light-field depth estimation. Nevertheless, the balance between computational time and the accuracy of depth estimation is still worth exploring. The geometry in light field imaging is the basis of depth estimation, and the abundant light-field data provides convenience for applying deep learning algorithms. The Epipolar Plane Image (EPI) generated from the light-field data has a line texture containing geometric information. The slope of the line is proportional to the depth of the corresponding object. Considering light field depth estimation as a spatially dense prediction task, we design a convolutional neural network (ESTNet) to estimate accurate depth quickly. Inspired by the strong image feature extraction ability of convolutional neural networks, especially for texture images, we propose to generate EPI synthetic images from light field data as the input of ESTNet to improve the effect of feature extraction and depth estimation. The architecture of ESTNet is characterized by three input streams, an encoding-decoding structure, and skip-connections. The three input streams receive the horizontal EPI synthetic image (EPIh), the vertical EPI synthetic image (EPIv), and the central view image (CV), respectively. EPIh and EPIv contain rich texture and depth cues, while CV provides pixel position association information. ESTNet consists of two stages: encoding and decoding. The encoding stage includes several convolution modules, and correspondingly, the decoding stage embodies some transposed convolution modules. In addition to the forward propagation of the network, some skip-connections are added between each convolution module and the corresponding transposed convolution module to fuse the shallow local and deep semantic features. ESTNet is trained on one part of a synthetic light-field dataset and then tested on another part of the synthetic light-field dataset and a real light-field dataset. Ablation experiments show that our ESTNet structure is reasonable. Experiments on the synthetic and real light-field datasets show that our ESTNet can balance the accuracy of depth estimation and computational time.
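The geometric relation the network learns is compact enough for a worked example: a scene point traces a line across the EPI whose inclination encodes its disparity, and depth follows from Z = f·B/d. A short sketch with illustrative camera numbers:

```python
# EPI line inclination -> depth, with made-up (but plausible) camera numbers.
f, B = 50e-3, 1e-3        # focal length (m), baseline between views (m)
shift = 2.0               # measured pixel shift of the EPI line per view step
pixel_pitch = 10e-6       # sensor pixel size (m)

disparity = shift * pixel_pitch           # metric disparity per view step
depth = f * B / disparity                 # larger shift per view => closer point
print(f"estimated depth: {depth:.2f} m")  # 2.50 m for these values
```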
8

Yi, Luying, Xiangyu Guo, Liqun Sun, and Bo Hou. "Structural and Functional Sensing of Bio-Tissues Based on Compressive Sensing Spectral Domain Optical Coherence Tomography." Sensors 19, no. 19 (September 27, 2019): 4208. http://dx.doi.org/10.3390/s19194208.

Abstract:
In this paper, a full depth 2D CS-SDOCT approach is proposed, which combines two-dimensional (2D) compressive sensing spectral-domain optical coherence tomography (CS-SDOCT) and dispersion encoding (ED) technologies, and its applications in structural imaging and functional sensing of bio-tissues are studied. Specifically, by introducing a large dispersion mismatch between the reference arm and sample arm in SD-OCT system, the reconstruction of the under-sampled A-scan data and the removal of the conjugated images can be achieved simultaneously by only two iterations. The under-sampled B-scan data is then reconstructed using the classic CS reconstruction algorithm. For a 5 mm × 3.2 mm fish-eye image, the conjugated image was reduced by 31.4 dB using 50% × 50% sampled data (250 depth scans and 480 spectral sampling points per depth scan), and all A-scan data was reconstructed in only 1.2 s. In addition, we analyze the application performance of the CS-SDOCT in functional sensing of locally homogeneous tissue. Simulation and experimental results show that this method can correctly reconstruct the extinction coefficient spectrum under reasonable iteration times. When 8 iterations were used to reconstruct the A-scan data in the imaging experiment of fisheye, the extinction coefficient spectrum calculated using 50% × 50% data was approximately consistent with that obtained with 100% data.
9

Huang, Zedong, Jinan Gu, Jing Li, Shuwei Li, and Junjie Hu. "Depth Estimation of Monocular PCB Image Based on Self-Supervised Convolution Network." Electronics 11, no. 12 (June 7, 2022): 1812. http://dx.doi.org/10.3390/electronics11121812.

Abstract:
To improve the accuracy of using deep neural networks to predict the depth information of a single image, we proposed an unsupervised convolutional neural network for single-image depth estimation. Firstly, the network is improved by introducing a dense residual module into the encoding and decoding structure. Secondly, the optimized hybrid attention module is introduced into the network. Finally, stereo image is used as the training data of the network to realize the end-to-end single-image depth estimation. The experimental results on KITTI and Cityscapes data sets show that compared with some classical algorithms, our proposed method can obtain better accuracy and lower error. In addition, we train our models on PCB data sets in industrial environments. Experiments in several scenarios verify the generalization ability of the proposed method and the excellent performance of the model.
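The self-supervised signal behind such training can be shown in miniature: warp one view into the other with a candidate disparity and score the photometric error, which bottoms out at the true disparity. The nearest-neighbor warp and synthetic stereo pair below are simplifications (real trainers use bilinear sampling plus SSIM terms):

```python
# Miniature photometric-loss check on a synthetic stereo pair.
import numpy as np

rng = np.random.default_rng(0)
left = rng.random((4, 8))
true_disp = 2
right = np.roll(left, -true_disp, axis=1)      # fake rectified right view

def photometric_loss(disp, max_disp=3):
    # Warp `right` back to the left view (nearest-neighbor) and
    # compare, skipping the first columns to avoid border effects.
    cols = np.arange(max_disp, left.shape[1]) - disp
    warped = right[:, cols]
    return float(np.abs(left[:, max_disp:] - warped).mean())

# The loss is minimized at the true disparity of 2.
print({d: round(photometric_loss(d), 3) for d in range(4)})
```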
10

Lim, Olivier, Stéphane Mancini, and Mauro Dalla Mura. "Feasibility of a Real-Time Embedded Hyperspectral Compressive Sensing Imaging System." Sensors 22, no. 24 (December 13, 2022): 9793. http://dx.doi.org/10.3390/s22249793.

Abstract:
Hyperspectral imaging has been attracting considerable interest as it provides spectrally rich acquisitions useful in several applications, such as remote sensing, agriculture, astronomy, geology and medicine. Hyperspectral devices based on compressive acquisitions have appeared recently as an alternative to conventional hyperspectral imaging systems and allow for data-sampling with fewer acquisitions than classical imaging techniques, even under the Nyquist rate. However, compressive hyperspectral imaging requires a reconstruction algorithm in order to recover all the data from the raw compressed acquisition. The reconstruction process is one of the limiting factors for the spread of these devices, as it is generally time-consuming and comes with a high computational burden. Algorithmic and material acceleration with embedded and parallel architectures (e.g., GPUs and FPGAs) can considerably speed up image reconstruction, making hyperspectral compressive systems suitable for real-time applications. This paper provides an in-depth analysis of the required performance in terms of computing power, data memory and bandwidth considering a compressive hyperspectral imaging system and a state-of-the-art reconstruction algorithm as an example. The results of the analysis show that real-time application is possible by combining several approaches, namely, exploitation of system matrix sparsity and bandwidth reduction by appropriately tuning data value encoding.

Dissertations / Theses on the topic "Depth data encoding algorithm"

1

Aldrovandi, Lorenzo. "Depth estimation algorithm for light field data." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

2

Mansour, Moussa Reda. "Algoritmo para obtenção de planos de restabelecimento para sistemas de distribuição de grande porte." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-06052009-100440/.

Abstract:
An elaborated and fast energy restoration plan (ERP) is required to deal with permanent faults in radial distribution systems (RDS). That is, after a faulted zone has been identified and isolated by the relays, a proper ERP must be elaborated to restore energy to that zone. Moreover, during normal system operation, it is frequently necessary to elaborate an ERP to isolate zones for routine network maintenance. Among the objectives of an ERP are: (i) very few interrupted customers (or none), and (ii) operating a minimal number of switches, while at the same time respecting security constraints. As a consequence, service restoration is a multiple-objective problem, with some degree of conflict. The main methods developed for the elaboration of ERP are based on evolutionary algorithms (EA). The limitation of the majority of these methods is the necessity of network simplifications to work with large-scale RDS; in general, these simplifications restrict the achievement of an adequate ERP. This work proposes the development and implementation of an algorithm for the elaboration of ERP that can deal with large-scale RDS without requiring network simplifications, that is, considering a large number (or all) of the lines, buses, loads and switches of the system. The proposed algorithm is based on a multi-objective EA, on a graph tree encoding called node-depth encoding (NDE), and on two genetic operators developed to efficiently manipulate graph trees stored in NDEs. Using a multi-objective EA, the proposed algorithm enables a better exploration of the search space. On the other hand, using the NDE and its operators increases the efficiency of the search for proper ERP, because those operators generate only radial configurations in which all consumers are supplied. The efficiency of the proposed algorithm is shown using a real distribution system of a Brazilian company, with 3,860 buses, 635 switches, 3 substations and 23 feeders.
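The node-depth encoding operators underlying this method amount to pruning a contiguous depth-run from one tree and grafting it into another with a depth shift, so every offspring stays radial. A simplified Python illustration of that mechanic, not the thesis's exact operators:

```python
# Prune-and-graft on node-depth encoded trees: subtrees are
# contiguous runs of (node, depth) pairs. Illustrative data.

def prune(nde, i):
    """Cut the subtree rooted at index i; return (subtree, rest)."""
    d = nde[i][1]
    j = i + 1
    while j < len(nde) and nde[j][1] > d:
        j += 1
    return nde[i:j], nde[:i] + nde[j:]

def graft(nde, subtree, k):
    """Reinsert `subtree` as a child of the node at index k."""
    shift = nde[k][1] + 1 - subtree[0][1]
    shifted = [(n, d + shift) for n, d in subtree]
    return nde[:k + 1] + shifted + nde[k + 1:]

feeder_a = [(0, 0), (1, 1), (2, 2), (3, 1)]
feeder_b = [(9, 0), (8, 1)]
sub, feeder_a = prune(feeder_a, 1)   # disconnect the branch at node 1
feeder_b = graft(feeder_b, sub, 1)   # re-energize it from node 8
print(feeder_a, feeder_b)            # both stay radial by construction
```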
3

Sengupta, Aritra. "Empirical Hierarchical Modeling and Predictive Inference for Big, Spatial, Discrete, and Continuous Data." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1350660056.

4

Oliveira, Marcos Antônio Almeida de. "Heurística aplicada ao problema árvore de Steiner Euclidiano com representação nó-profundidade-grau." Universidade Federal de Goiás, 2014. http://repositorio.bc.ufg.br/tede/handle/tede/4171.

Abstract:
Fundação de Amparo à Pesquisa do Estado de Goiás - FAPEG
A variation of the Beasley (1992) algorithm for the Euclidean Steiner tree problem is presented. This variation uses the Node-Depth-Degree Encoding, which requires an average time of O(n) in the operations that generate and manipulate spanning forests. For spanning tree problems, this representation has linear time complexity when applied to network design problems with evolutionary algorithms. Computational results are given for test cases involving instances of up to 500 vertices. These results demonstrate the use of the Node-Depth-Degree Encoding in an exact heuristic, which suggests the possibility of using this representation in techniques other than evolutionary algorithms. An empirical comparison and complexity analysis between the proposed algorithm and a conventional representation indicate the efficiency advantages of the solution found.
5

Marques, Leandro Tolomeu. "Restabelecimento de energia em sistemas de distribuição considerando aspectos práticos." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-26072018-134924/.

Abstract:
In the context of distribution systems operation, service restoration is one of the problems with which operators constantly deal. It arises when a permanent fault occurs and is treated through switching operations in the primary grid. Since distribution systems usually operate radially, a fault turns off healthy customers. Thereby, the service restoration problem consists in defining, in a short processing time, the minimum number of switches that must be operated to isolate the fault and reconnect the maximum number of healthy out-of-service customers. Efforts to develop computational tools for obtaining solutions to this problem have increased in recent years, in particular because of the enormous losses that outages cause to utilities and to society as a whole. In this sense, the main objective of this research is a method able to help distribution system operators by providing adequate service restoration plans quickly. The differentials of this method are its ability to: deal with large-scale real grids with reduced computational effort; consider customers of several priority levels (a hospital or a public security center, for instance, has a higher supply priority than a big supermarket or residential units) and prioritize their restoration accordingly; provide a switching sequence able to isolate the faulted sectors and reconnect the maximum number of healthy out-of-service customers with the minimum number of switching actions; select lower-priority loads to remain out of service when no solution can restore all healthy out-of-service loads; and, additionally, prioritize operations on remotely controlled switches, which can be operated faster and at lower cost than manually controlled ones. The proposed method combines a locally applied exhaustive search with a new multi-objective evolutionary algorithm in subpopulation tables that uses an efficient data structure named Node-Depth Encoding. To evaluate the relative performance of the proposed method, simulations were performed on a small distribution system and the results were compared with those of a Mathematical Programming method from the literature. New experiments were then performed for several fault situations in the real, large-scale distribution system of Londrina-PR and adjacent cities. The solutions provided were appropriate for the treatment of such contingency situations, as were their associated switching sequences, which were able to prioritize the restoration of higher-priority customers according to their priority levels. Additional studies evaluated the variation of the running time with the size of the grid and with the maximum number of generations of the multi-objective evolutionary algorithm (an input parameter), and the results were satisfactory for the needs of the problem. Therefore, the proposed method achieved the specified objectives, in particular the treatment of practical aspects of the problem. Besides the method itself, contributions of this research include a new multi-objective evolutionary algorithm in subpopulation tables and a new reproduction operator, aimed at the restoration problem, that manipulates graph forests stored in Node-Depth Encoding.
6

Hemanth, Kumar S. "On Applications of 3D-Warping and An Analysis of a RANSAC Heuristic." Thesis, 2018. http://etd.iisc.ac.in/handle/2005/4156.

Abstract:
In recent years, communication of scene geometry has been gaining importance. With the development of technologies such as head-mounted displays and Augmented Reality (AR), the need for efficient 3D scene communication is becoming vital. Depth sensors are being incorporated into smartphones for large-scale deployment of AR applications. 3D communication requires synchronous capture of the scene from multiple viewpoints along with depth for each view, known as Multiview Plus Depth (MVD) data; the number of views required depends on the application. Traditionally, it has been assumed that devices are static, but for smartphones such an assumption is not valid. The availability of the depth modality opens up several possibilities for efficient MVD data compression. In this work we leverage depth for better RGB-D data compression and efficient depth estimation. Using the depth information, the motion of the RGB-D device can be accurately tracked. 3D-warping along with camera tracking can then be used to generate reference frames that improve the compression efficiency of motion vectors. The same mechanism can be used to predict depth in the stereo disparity estimation problem. For robust tracking of the motion of the camera array, we use the Random Sample Consensus (RANSAC) algorithm, an iterative algorithm for robust model parameter estimation. A common practice among RANSAC implementations is to take a few more samples than the minimum required for the estimation problem, but an analysis of the implications of this heuristic is lacking in the literature. We present a probabilistic analysis of this common heuristic. We also present a depth data coding algorithm employing planar segmentation of depth. While all prior work based on this approach remained restricted to images under noise-free conditions, we present an efficient solution for noisy depth videos.
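The object of that probabilistic analysis is the standard RANSAC iteration bound: a sample of size s is all-inlier with probability w^s, so the iterations needed grow quickly when extra samples are taken. A small numeric sketch with an assumed inlier ratio:

```python
# Standard RANSAC iteration count N = log(1-p) / log(1 - w**s),
# evaluated as the sample size s grows past the minimal size.
from math import ceil, log

def ransac_iters(w, s, p=0.99):
    """Iterations for probability p of drawing one all-inlier sample."""
    return ceil(log(1 - p) / log(1 - w**s))

w = 0.6                            # assumed inlier ratio
for s in (3, 4, 5, 6):             # minimal sample of 3, plus extras
    print(f"s={s}: {ransac_iters(w, s)} iterations")
# s=3: 19, s=4: 34, s=5: 57, s=6: 97 -- extra samples are not free.
```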
7

Chang, Yu-Chih (張宇志). "A Fast Depth Map Compression Algorithm for H.264 Encoding Scheme." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/21078118850434017531.

Abstract:
Master's thesis, National Cheng Kung University, Department of Electrical Engineering, academic year 100 (2011–2012).
Due to the rapidly growing consumption of 3D and electronic products such as smartphones and 3DTV, the representation of 3D video has received much more attention in recent years. One of the best-known 3D video representations, Advanced Three-Dimensional Television System Technologies (ATTEST), uses a monoscopic video (color component) and per-pixel depth information (depth component). After the two sequences are decoded, free-viewpoint views are synthesized by means of depth image-based rendering techniques. Depth Image-Based Rendering (DIBR) is a popular method for synthesizing 3D free-viewpoint virtual views: based on a color sequence and its corresponding depth sequence, it synthesizes views at different positions. However, in real-time video transmission, even when the methods mentioned above are used to reduce the amount of transferred data instead of directly encoding and decoding the two sequences, bandwidth remains limited in wireless networks and on power-constrained handheld devices. Therefore, data compression plays an important role in a 3D codec system. This thesis adopts the H.264/AVC video encoding format, proposes a new algorithm for color video motion estimation, and combines it with a 3D search algorithm for depth map compression to decrease the overall encoding time while simultaneously maintaining depth map quality.
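The DIBR step described here reduces, for rectified cameras, to a per-pixel horizontal shift by the disparity f·b/Z. A minimal sketch with illustrative parameters; hole filling, occlusion handling, and z-buffering are omitted:

```python
# Minimal depth-image-based rendering: shift each pixel by its
# disparity. Parameters and data are illustrative.
import numpy as np

def warp_view(color, depth, f=500.0, baseline=0.05):
    h, w = depth.shape
    virtual = np.zeros_like(color)
    disparity = np.round(f * baseline / depth).astype(int)   # pixels
    for v in range(h):
        for u in range(w):
            u2 = u - disparity[v, u]
            if 0 <= u2 < w:
                virtual[v, u2] = color[v, u]   # last write wins; no z-test
    return virtual

rng = np.random.default_rng(0)
color = rng.random((4, 6))
depth = np.full((4, 6), 10.0)     # flat scene 10 m away -> uniform shift
print(warp_view(color, depth))
```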
8

Chen, Wei-Jen (陳威任). "The Modified Encoding Algorithm of ECG Data Compression Based on Wavelet Transform." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/04519476568133918606.

Abstract:
Master's thesis, National Taipei University of Technology, Department of Electrical Engineering, academic year 92 (2003–2004).
Because ECG data are voluminous and require long recording times, ECG data compression has been investigated for several decades. The purpose of this thesis is to improve the performance of ECG data compression, obtaining a high compression rate and a low reconstruction error. A transform-type encoding structure is adopted. First, the input ECG signal is preprocessed into a suitable form. After preprocessing, the ECG signals are processed by the wavelet transform. A threshold value is determined from the ratio of signal energy before and after compression, and the less significant coefficients are discarded using this value. After thresholding, run-length encoding and DPCM are employed to encode the resulting data. The total data length includes the encoded data and the reference data used for reconstruction. Finally, to verify the effectiveness of the proposed algorithm, computer programs are developed to compress the ECG signals in the MIT-BIH arrhythmia database. The simulation results show that a higher compression rate with a lower reconstruction error is obtained. In addition, the compression rate and reconstruction error are adjustable according to requirements.
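The shape of that pipeline (wavelet transform, energy-based thresholding, then run-length coding of the sparse result) fits in a short sketch. PyWavelets, a synthetic signal, and a 99% energy target stand in for the thesis's MIT-BIH setup; the DPCM stage is omitted:

```python
# Wavelet -> energy threshold -> run-length coding, in miniature.
import numpy as np
import pywt

t = np.linspace(0, 1, 512)
ecg = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 25 * t)

coeffs, _ = pywt.coeffs_to_array(pywt.wavedec(ecg, "db4", level=4))
order = np.argsort(np.abs(coeffs))[::-1]            # largest first
cum = np.cumsum(coeffs[order] ** 2) / np.sum(coeffs ** 2)
keep = order[: np.searchsorted(cum, 0.99) + 1]      # 99% of the energy
sparse = np.zeros_like(coeffs)
sparse[keep] = coeffs[keep]

def run_length(mask):
    """(value, run) pairs for the zero/non-zero pattern."""
    runs, count = [], 1
    for a, b in zip(mask[:-1], mask[1:]):
        if a == b:
            count += 1
        else:
            runs.append((int(a), count))
            count = 1
    runs.append((int(mask[-1]), count))
    return runs

print("kept", keep.size, "of", coeffs.size, "coefficients")
print(run_length(sparse == 0)[:5])
```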
9

Chen, Hung-lung (陳宏隆). "Run length encoding-based algorithm for mining association rules in data stream." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/90489529147629035129.

Abstract:
Master's thesis, Nanhua University, Graduate Institute of Information Management, academic year 96 (2007–2008).
Data stream mining is a newly developing research field, and association rule mining is one of its most important and practical techniques. The main purpose of association rules is to find dependencies among items in very large data sets: the database is scanned to find all high-frequency itemsets, and all association rules are then mined from them. Because a large number of candidate itemsets are generated, finding all high-frequency itemsets requires considerable computation. Reducing the amount of data processed, the number of candidate itemsets produced, and the number of database scans therefore makes association rule mining more efficient. This study uses run-length encoding (RLE) to reduce the amount of data that must be processed when mining association rules in a dynamic database. Its main contribution is a new data handling method: the transaction database is encoded into a small amount of data, mining is then performed directly on the encoded data held in main memory, and the encoding can be updated efficiently when the data change rapidly, so the mining algorithm runs faster and processing efficiency improves.
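The core mechanism, as far as it can be recovered from the abstract, is to run-length encode each item's transaction bitmap and count support directly on the compressed form. A minimal sketch under that reading:

```python
# Support counting on run-length-encoded transaction bitmaps.

def rle(bits):
    runs, count = [], 1
    for a, b in zip(bits[:-1], bits[1:]):
        if a == b:
            count += 1
        else:
            runs.append((a, count))
            count = 1
    runs.append((bits[-1], count))
    return runs

def support(runs):
    """Number of transactions containing the item, no decompression."""
    return sum(length for bit, length in runs if bit == 1)

item_bitmap = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0]   # presence per transaction
runs = rle(item_bitmap)
print(runs, "-> support:", support(runs))      # support = 5
```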
10

Yu, Ang-Hsun (游昂勳). "An Improved Data Hiding Algorithm Based on the Method of Matrix Encoding." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/73459809838000223623.

Abstract:
Master's thesis, National Chi Nan University, Department of Computer Science and Information Engineering, academic year 100 (2011–2012).
The purpose of data (information) hiding is to embed secret information into a cover host, such as an image. Usually, the human eye cannot perceive any change when the image is modified slightly. Data hiding schemes are evaluated by their distortion (measured as the mean square error, MSE) and their embedding rate (the average number of bits embedded per cover pixel). In this thesis, we propose an improved data hiding scheme that enhances the matrix encoding-based data hiding algorithm with the idea of Hamming+1 to further improve the stego-image quality. The proposed scheme is verified through theoretical analysis and experiments.
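The baseline the thesis improves on, classic matrix encoding, is easy to demonstrate: with a Hamming-style parity-check matrix, 3 message bits are embedded into 7 cover LSBs by flipping at most one of them. The sketch shows only this base scheme, not the proposed Hamming+1 refinement:

```python
# Matrix encoding with a Hamming (7,4)-style parity-check matrix:
# embed 3 bits into 7 LSBs with at most one bit flip.
import numpy as np

H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])   # column i is i+1 in binary

def embed(cover_lsbs, msg):
    syndrome = (H @ cover_lsbs + msg) % 2
    pos = int("".join(map(str, syndrome)), 2)   # 0 means nothing to flip
    stego = cover_lsbs.copy()
    if pos:
        stego[pos - 1] ^= 1                     # single-bit change
    return stego

def extract(stego_lsbs):
    return (H @ stego_lsbs) % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1])
msg = np.array([1, 0, 1])
stego = embed(cover, msg)
assert (extract(stego) == msg).all()
print("cover:", cover, "stego:", stego)
```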

Books on the topic "Depth data encoding algorithm"

1

Nakov, Svetlin. Fundamentals of Computer Programming with C#: The Bulgarian C# Book. Sofia, Bulgaria: Svetlin Nakov, 2013.


Book chapters on the topic "Depth data encoding algorithm"

1

Baidari, Ishwar, and Channamma Patil. "K-Data Depth Based Clustering Algorithm." In Computational Intelligence: Theories, Applications and Future Directions - Volume I, 13–24. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1132-1_2.

2

Song, Bin, Limin Xiao, Guangjun Qin, Li Ruan, and Shida Qiu. "A Deduplication Algorithm Based on Data Similarity and Delta Encoding." In Communications in Computer and Information Science, 245–53. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-3969-0_28.

3

Mou, Xingang, Chang Liu, and Xiao Zhou. "Radar Echo Image Prediction Algorithm Based on Multi-scale Encoding-Decoding Network." In Intelligent Data Engineering and Automated Learning – IDEAL 2021, 637–46. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-91608-4_61.

4

Du, Jun, Emin Erkan Korkmaz, Reda Alhajj, and Ken Barker. "Alternative Clustering by Utilizing Multi-objective Genetic Algorithm with Linked-List Based Chromosome Encoding." In Machine Learning and Data Mining in Pattern Recognition, 346–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11510888_34.

5

Ye, Nan, Jianfeng Di, Jiapeng Zhang, Rui Zhang, Xiaopei Xu, and Ying Yan. "LDPC Encoding and Decoding Algorithm of LEO Satellite Communication Based on Big Data." In Lecture Notes in Electrical Engineering, 84–91. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-3387-5_10.

6

Wyss, Catharine, Chris Giannella, and Edward Robertson. "FastFDs: A Heuristic-Driven, Depth-First Algorithm for Mining Functional Dependencies from Relation Instances Extended Abstract." In Data Warehousing and Knowledge Discovery, 101–10. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44801-2_11.

7

Gois, Marcilyanne Moreira, Danilo Sipoli Sanches, Jean Martins, João Bosco A. London Junior, and Alexandre Cláudio Botazzo Delbem. "Multi-Objective Evolutionary Algorithm with Node-Depth Encoding and Strength Pareto for Service Restoration in Large-Scale Distribution Systems." In Lecture Notes in Computer Science, 771–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37140-0_57.

8

Singha, Souvik, and Debarshi Saha. "Encoding-Based Algorithm for Minimization of Inductive Cross-Talk Based on Off-Chip Data Transmission." In Lecture Notes in Electrical Engineering, 923–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21747-0_119.

9

Kathirvalavakumar, T., and R. Palaniappan. "Modified Run-Length Encoding Method and Distance Algorithm to Classify Run-Length Encoded Binary Data." In Communications in Computer and Information Science, 271–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19263-0_33.

10

Chen, J., D. Q. Chen, and S. H. Meng. "A Novel Region Selection Algorithm for Auto-focusing Method Based on Depth from Focus." In Proceedings of the Fourth Euro-China Conference on Intelligent Data Analysis and Applications, 101–8. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68527-4_11.


Conference papers on the topic "Depth data encoding algorithm"

1

Zhou, Hao, Fen Chen, Zongju Peng, Gangyi Jiang, Mei Yu, and Wei Bi. "Encoding oriented depth video spatial processing algorithm." In 3rd International Conference on Green Communications and Networks. Southampton, UK: WIT Press, 2014. http://dx.doi.org/10.2495/gcn131142.

2

Wali, Ibtissem, Amina Kessentini, Mohamed Ali Ben Ayed, and Nouri Masmoudi. "Early depth partitioning determination algorithm for SHVC encoding standard." In 2017 18th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA). IEEE, 2017. http://dx.doi.org/10.1109/sta.2017.8314923.

3

Zongju Peng, Mei Yu, Gangyi Jiang, Yuehou Si, and Fen Chen. "Virtual view synthesis oriented fast depth video encoding algorithm." In 2010 2nd International Conference on Industrial and Information Systems (IIS 2010). IEEE, 2010. http://dx.doi.org/10.1109/indusis.2010.5565875.

4

Kum, Sang-Uok, and Ketan Mayer-Patel. "Reference Stream Selection for Multiple Depth Stream Encoding." In Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06). IEEE, 2006. http://dx.doi.org/10.1109/3dpvt.2006.117.

5

Guo, Mingsong, Fen Chen, Chengkai Sheng, Zongju Peng, and Gangyi Jiang. "A depth video processing algorithm for high encoding and rendering performance." In SPIE/COS Photonics Asia, edited by Qionghai Dai and Tsutomu Shimura. SPIE, 2014. http://dx.doi.org/10.1117/12.2073478.

6

Santos, Augusto Cesar dos, Alexandre C. B. Delbem, and Newton Geraldo Bretas. "A Multiobjective Evolutionary Algorithm with Node-Depth Encoding for Energy Restoration." In 2008 Fourth International Conference on Natural Computation. IEEE, 2008. http://dx.doi.org/10.1109/icnc.2008.843.

7

Huang, D., and P. A. Versnel. "Depth estimation algorithm applied to FTG data." In SEG Technical Program Expanded Abstracts 2000. Society of Exploration Geophysicists, 2000. http://dx.doi.org/10.1190/1.1816076.

8

AlTarawneh, Ragaad, Shah Rukh Humayoun, and Achim Ebert. "Utilization of variation in stereoscopic depth for encoding aspects of non-spatial data." In 2015 IEEE Symposium on 3D User Interfaces (3DUI). IEEE, 2015. http://dx.doi.org/10.1109/3dui.2015.7131741.

9

Kaewmanee, Jutanon, and Somporn Sirisumrannukul. "Multiobjective service restoration in distribution system using fuzzy decision algorithm and node-depth encoding." In 2011 8th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON 2011). IEEE, 2011. http://dx.doi.org/10.1109/ecticon.2011.5947984.

10

Santos, Augusto, Alexandre Delbem, Joao Bosco London, and Newton Bretas. "Node-depth encoding and multiobjective evolutionary algorithm applied to large-scale distribution system reconfiguration." In 2011 IEEE Power & Energy Society General Meeting. IEEE, 2011. http://dx.doi.org/10.1109/pes.2011.6039211.


Reports on the topic "Depth data encoding algorithm"

1

Chapman, Ray, Phu Luong, Sung-Chan Kim, and Earl Hayter. Development of three-dimensional wetting and drying algorithm for the Geophysical Scale Transport Multi-Block Hydrodynamic Sediment and Water Quality Transport Modeling System (GSMB). Engineer Research and Development Center (U.S.), July 2021. http://dx.doi.org/10.21079/11681/41085.

Abstract:
The Environmental Laboratory (EL) and the Coastal and Hydraulics Laboratory (CHL) have jointly completed a number of large-scale hydrodynamic, sediment and water quality transport studies. EL and CHL have successfully executed these studies utilizing the Geophysical Scale Transport Modeling System (GSMB). The model framework of GSMB is composed of multiple process models as shown in Figure 1. Figure 1 shows that the United States Army Corps of Engineers (USACE) accepted wave, hydrodynamic, sediment and water quality transport models are directly and indirectly linked within the GSMB framework. The components of GSMB are the two-dimensional (2D) deep-water wave action model (WAM) (Komen et al. 1994, Jensen et al. 2012), data from meteorological model (MET) (e.g., Saha et al. 2010 - http://journals.ametsoc.org/doi/pdf/10.1175/2010BAMS3001.1), shallow water wave models (STWAVE) (Smith et al. 1999), Coastal Modeling System wave (CMS-WAVE) (Lin et al. 2008), the large-scale, unstructured two-dimensional Advanced Circulation (2D ADCIRC) hydrodynamic model (http://www.adcirc.org), and the regional scale models, Curvilinear Hydrodynamics in three dimensions-Multi-Block (CH3D-MB) (Luong and Chapman 2009), which is the multi-block (MB) version of Curvilinear Hydrodynamics in three-dimensions-Waterways Experiments Station (CH3D-WES) (Chapman et al. 1996, Chapman et al. 2009), MB CH3D-SEDZLJ sediment transport model (Hayter et al. 2012), and CE-QUAL Management - ICM water quality model (Bunch et al. 2003, Cerco and Cole 1994). Task 1 of the DOER project, “Modeling Transport in Wetting/Drying and Vegetated Regions,” is to implement and test three-dimensional (3D) wetting and drying (W/D) within GSMB. This technical note describes the methods and results of Task 1. The original W/D routines were restricted to a single vertical layer or depth-averaged simulations. In order to retain the required 3D or multi-layer capability of MB-CH3D, a multi-block version with variable block layers was developed (Chapman and Luong 2009). This approach requires a combination of grid decomposition, MB, and Message Passing Interface (MPI) communication (Snir et al. 1998). The MB single layer W/D has demonstrated itself as an effective tool in hyper-tide environments, such as Cook Inlet, Alaska (Hayter et al. 2012). The code modifications, implementation, and testing of a fully 3D W/D are described in the following sections of this technical note.
2

O'Neill, Francis, Kristofer Lasko, and Elena Sava. Snow-covered region improvements to a support vector machine-based semi-automated land cover mapping decision support tool. Engineer Research and Development Center (U.S.), November 2022. http://dx.doi.org/10.21079/11681/45842.

Abstract:
This work builds on the original semi-automated land cover mapping algorithm and quantifies improvements to class accuracy, analyzes the results, and conducts a more in-depth accuracy assessment in conjunction with test sites and the National Land Cover Database (NLCD). This algorithm uses support vector machines trained on data collected across the continental United States to generate a pre-trained model for inclusion into a decision support tool within ArcGIS Pro. Version 2 includes an additional snow cover class and accounts for snow cover effects within the other land cover classes. Overall accuracy across the continental United States for Version 2 is 75% on snow-covered pixels and 69% on snow-free pixels, versus 16% and 66% for Version 1. However, combining the “crop” and “low vegetation” classes improves these values to 86% for snow and 83% for snow-free, compared to 19% and 83% for Version 1. This merging is justified by their spectral similarity, the difference between crop and low vegetation falling closer to land use than land cover. The Version 2 tool is built into a Python-based ArcGIS toolbox, allowing users to leverage the pre-trained model—along with image splitting and parallel processing techniques—for their land cover type map generation needs.
3

Striuk, Andrii, Olena Rybalchenko, and Svitlana Bilashenko. Development and Using of a Virtual Laboratory to Study the Graph Algorithms for Bachelors of Software Engineering. [s.n.], November 2020. http://dx.doi.org/10.31812/123456789/4462.

Abstract:
The paper presents an analysis of the importance of studying graph algorithms, the reasons for the need to implement this project, and its subsequent use. An analysis of existing analogues is carried out, from which a list of advantages and disadvantages is formed and taken into account in developing the virtual laboratory. A web application is created that clearly illustrates the work of graph algorithms, such as Depth-First Search, Dijkstra's Shortest Path, Floyd-Warshall, and Kruskal's Minimum Cost Spanning Tree Algorithm. A simple and user-friendly interface is developed, supported by all popular browsers. The software product provides user registration and authorization functions, chat communication, personal cabinet editing, and viewing of statistics on web-application use. An additional condition is taken into account at the design stage, namely the flexibility of the architecture, which allows easy expansion of the existing functionality. The virtual laboratory is used at Kryvyi Rih National University to train students of specialty 121 Software Engineering in the disciplines "Algorithms and Data Structures" and "Discrete Structures".
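Of the algorithms the laboratory animates, depth-first search is the most compact to sketch; a minimal iterative version over an example adjacency list:

```python
# Iterative depth-first search over an adjacency-list graph.

def dfs(graph, start):
    visited, stack, order = set(), [start], []
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push neighbors reversed so they are expanded in listed order.
        stack.extend(reversed(graph.get(node, [])))
    return order

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dfs(g, "A"))   # ['A', 'B', 'D', 'C']
```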
4

Chamovitz, Daniel A., and Xing-Wang Deng. Developmental Regulation and Light Signal Transduction in Plants: The Fus5 Subunit of the Cop9 Signalosome. United States Department of Agriculture, September 2003. http://dx.doi.org/10.32747/2003.7586531.bard.

Abstract:
Plants adjust their growth and development in a manner optimal for the prevailing light conditions. The molecular mechanisms by which light signals are transduced and integrated with other environmental and developmental signals are an area of intense research (Batschauer, 1999; Quail, 2002). One paradigm emerging from this work is the interconnectedness of discrete physiological responses at the biochemical level, for instance, between auxin and light signaling (Colon-Carmona et al., 2000; Schwechheimer and Deng, 2001; Tian and Reed, 1999) and between light signaling and plant pathogen interactions (Azevedo et al., 2002; Liu et al., 2002). The COP9 signalosome (CSN) protein complex has a central role in the light control of plant development. Arabidopsis mutants that lack this complex develop photomorphogenically even in the absence of light signals (reviewed in Karniol and Chamovitz, 2000; Schwechheimer and Deng, 2001). Thus the CSN was hypothesized to be a master repressor of photomorphogenesis in darkness, and light acts to bypass or eliminate this repression. However, the CSN regulates more than just photomorphogenesis, as all mutants lacking this complex die near the end of seedling development. Moreover, an essentially identical complex was subsequently discovered in animals and yeast, organisms whose development is not light responsive, exemplifying how plant science can lead the way to exciting discoveries in biomedical model species (Chamovitz and Deng, 1995; Freilich et al., 1999; Maytal-Kivity et al., 2002; Mundt et al., 1999; Seeger et al., 1998; Wei et al., 1998). Our long-term objective is to determine mechanistically how the CSN controls plant development. We previously showed that this complex contains eight subunits (Karniol et al., 1998; Serino et al., 1999) and that the 27 kDa subunit is encoded by the FUS5/CSN7 locus (Karniol et al., 1999). The CSN7 subunit also has a role extraneous to the COP9 signalosome, and differential kinase activity has been implicated in regulating CSN7 and the COP9 signalosome (Karniol et al., 1999). In the present research, we further analyzed CSN7, both in terms of interacting proteins and in terms of kinases that act on CSN7. Furthermore, we completed our analysis of the CSN in Arabidopsis by analyzing the remaining subunits. Outline of Original Objectives and Subsequent Modifications: The general goal of the proposed research was to study the CSN7 (FUS5) subunit of the COP9 signalosome. To this end we specifically intended to: 1. Identify the residues of CSN7 that are phosphorylated. 2. Monitor the phosphorylation of CSN7 under different environmental conditions and under different genetic backgrounds. 3. Generate transgenic plants with altered CSN7 phosphorylation sites. 4. Purify CSN7 kinase from cauliflower. 5. Clone the Arabidopsis cDNA encoding CSN7 kinase. 6. Isolate and characterize additional CSN7 interacting proteins. 7. Characterize the interaction of CSN7 and the COP9 signalosome with the HY5-COP1 transcriptional complex. Throughout the course of the research, emphasis shifted from studying CSN7 phosphorylation (Goals 1-3) to studying the CSN7 kinase (Goals 4 and 5), an in-depth analysis of CSN7 interactions (Goal 6), and the study of additional CSN subunits. Goal 7 was also abandoned as no data was found to support this interaction.