
Journal articles on the topic "Depth data encoding algorithm"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Check out the top 50 scholarly journal articles on the topic "Depth data encoding algorithm".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, provided the relevant details are available in the metadata.

Browse journal articles from a wide range of disciplines and compile an accurate bibliography.

1

Gois, Marcilyanne M., Paulo Matias, André B. Perina, Vanderlei Bonato, and Alexandre C. B. Delbem. "A Parallel Hardware Architecture based on Node-Depth Encoding to Solve Network Design Problems". International Journal of Natural Computing Research 4, no. 1 (January 2014): 54–75. http://dx.doi.org/10.4018/ijncr.2014010105.

Full text
Abstract:
Many problems involving network design can be found in the real world, such as electric power circuit planning, telecommunications and phylogenetic trees. In general, solutions for these problems are modeled as forests represented by a graph manipulating thousands or millions of input variables, making it hard to obtain the solutions in a reasonable time. To overcome this restriction, Evolutionary Algorithms (EAs) with dynamic data structures (encodings) have been widely investigated to increase the performance of EAs for Network Design Problems (NDPs). In this context, this paper proposes a parallelization of the node-depth encoding (NDE), a data structure especially designed for NDPs. Based on the NDE the authors have developed a parallel algorithm and a hardware architecture implemented on FPGA (Field-Programmable Gate Array), denominated Hardware Parallelized NDE (HP-NDE). The running times obtained in a general purpose processor (GPP) and the HP-NDE are compared. The results show a significant speedup in relation to the GPP solution, solving NDP in a time limited by a constant. Such time upper bound can be satisfied for any size of network until the hardware resources available on the FPGA are depleted. The authors evaluated the HP-NDE on a Stratix IV FPGA with networks containing up to 2048 nodes.
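For readers unfamiliar with node-depth encoding, the Python sketch below (illustrative only; the example tree and function names are not from the paper) shows the core idea: each tree of the forest is stored as a list of (node, depth) pairs in depth-first order, and a subtree corresponds to a contiguous segment of that list, which is what makes the structure convenient to manipulate and, in the paper, to parallelize in hardware:

# Minimal sketch of node-depth encoding (NDE): a tree (or each tree of a
# forest) is stored as a list of (node, depth) pairs from a depth-first walk.
def node_depth_encoding(tree, root):
    """tree: dict node -> list of children; returns list of (node, depth)."""
    encoding = []
    stack = [(root, 0)]
    while stack:
        node, depth = stack.pop()
        encoding.append((node, depth))
        # push children so they appear right after their parent (preorder)
        for child in reversed(tree.get(node, [])):
            stack.append((child, depth + 1))
    return encoding

def subtree_segment(encoding, i):
    """The subtree rooted at position i is the contiguous run of entries
    whose depth stays greater than encoding[i][1]."""
    root_depth = encoding[i][1]
    j = i + 1
    while j < len(encoding) and encoding[j][1] > root_depth:
        j += 1
    return encoding[i:j]

tree = {0: [1, 4], 1: [2, 3], 4: [5]}
nde = node_depth_encoding(tree, 0)
print(nde)                      # [(0,0), (1,1), (2,2), (3,2), (4,1), (5,2)]
print(subtree_segment(nde, 1))  # subtree rooted at node 1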
2

He, Shuqian, Zhengjie Deng, and Chun Shi. "Fast Decision Algorithm of CU Size for HEVC Intra-Prediction Based on a Kernel Fuzzy SVM Classifier". Electronics 11, no. 17 (September 5, 2022): 2791. http://dx.doi.org/10.3390/electronics11172791.

Full text
Abstract:
High Efficiency Video Coding (HEVC) achieves a significant improvement in compression efficiency at the cost of extremely high computational complexity. Therefore, large-scale and wide deployment applications, especially mobile real-time video applications under low-latency and power-constrained conditions, are more challenging. In order to solve the above problems, a fast decision method for intra-coding unit size based on a new fuzzy support vector machine classifier is proposed in this paper. The relationship between the depth levels of coding units is accurately expressed by defining the cost evaluation criteria of texture and non-texture rate-distortion cost. The fuzzy support vector machine is improved by using the information entropy measure to solve the negative impact of data noise and the outliers problem. The proposed method includes three stages: the optimal coded depth level “0” early decision, coding unit depth early skip, and optimal coding unit early terminate. In order to further improve the rate-distortion complexity optimization performance, more feature vectors are introduced, including features such as space complexity, the relationship between coding unit depths, and rate-distortion cost. The experimental results showed that, compared with the HEVC reference test model HM16.5, the proposed algorithm can reduce the encoding time of various test video sequences by more than 53.24% on average, while the Bjontegaard Delta Bit Rate (BDBR) only increases by 0.82%. In addition, the proposed algorithm is better than the existing algorithms in terms of comprehensively reducing the computational complexity and maintaining the rate-distortion performance.
3

Lilhore, Umesh Kumar, Osamah Ibrahim Khalaf, Sarita Simaiya, Carlos Andrés Tavera Romero, Ghaida Muttashar Abdulsahib, Poongodi M, and Dinesh Kumar. "A depth-controlled and energy-efficient routing protocol for underwater wireless sensor networks". International Journal of Distributed Sensor Networks 18, no. 9 (September 2022): 155013292211171. http://dx.doi.org/10.1177/15501329221117118.

Full text
Abstract:
Underwater wireless sensor network attracted massive attention from researchers. In underwater wireless sensor network, many sensor nodes are distributed at different depths in the sea. Due to its complex nature, updating their location or adding new devices is pretty challenging. Due to the constraints on energy storage of underwater wireless sensor network end devices and the complexity of repairing or recharging the device underwater, this is highly significant to strengthen the energy performance of underwater wireless sensor network. An imbalance in power consumption can cause poor performance and a limited network lifetime. To overcome these issues, we propose a depth controlled with energy-balanced routing protocol, which will be able to adjust the depth of lower energy nodes and be able to swap the lower energy nodes with higher energy nodes to ensure consistent energy utilization. The proposed energy-efficient routing protocol is based on an enhanced genetic algorithm and data fusion technique. In the proposed energy-efficient routing protocol, an existing genetic algorithm is enhanced by adding an encoding strategy, a crossover procedure, and an improved mutation operation that helps determine the nodes. The proposed model also utilized an enhanced back propagation neural network for data fusion operation, which is based on multi-hop system and also operates a highly optimized momentum technique, which helps to choose only optimum energy nodes and avoid duplicate selections that help to improve the overall energy and further reduce the quantity of data transmission. In the proposed energy-efficient routing protocol, an enhanced cluster head node is used to select a strategy that can analyze the remaining energy and directions of each participating node. In the simulation, the proposed model achieves 86.7% packet delivery ratio, 12.6% energy consumption, and 10.5% packet drop ratio over existing depth-based routing and energy-efficient depth-based routing methods for underwater wireless sensor network.
4

Petrova, Natalia, and Natalia Mokshina. "Using FIBexDB for In-Depth Analysis of Flax Lectin Gene Expression in Response to Fusarium oxysporum Infection". Plants 11, no. 2 (January 7, 2022): 163. http://dx.doi.org/10.3390/plants11020163.

Full text
Abstract:
Plant proteins with lectin domains play an essential role in plant immunity modulation, but among a plurality of lectins recruited by plants, only a few members have been functionally characterized. For the analysis of flax lectin gene expression, we used FIBexDB, which includes an efficient algorithm for flax gene expression analysis combining gene clustering and coexpression network analysis. We analyzed the lectin gene expression in various flax tissues, including root tips infected with Fusarium oxysporum. Two pools of lectin genes were revealed: downregulated and upregulated during the infection. Lectins with suppressed gene expression are associated with protein biosynthesis (Calreticulin family), cell wall biosynthesis (galactose-binding lectin family) and cytoskeleton functioning (Malectin family). Among the upregulated lectin genes were those encoding lectins from the Hevein, Nictaba, and GNA families. The main participants from each group are discussed. A list of lectin genes, the expression of which can determine the resistance of flax, is proposed, for example, the genes encoding amaranthins. We demonstrate that FIBexDB is an efficient tool both for the visualization of data, and for searching for the general patterns of lectin genes that may play an essential role in normal plant development and defense.
5

Tăbuş and Can Kaya. "Information Theoretic Modeling of High Precision Disparity Data for Lossy Compression and Object Segmentation". Entropy 21, no. 11 (November 13, 2019): 1113. http://dx.doi.org/10.3390/e21111113.

Full text
Abstract:
In this paper, we study the geometry data associated with disparity map or depth map images in order to extract easy to compress polynomial surface models at different bitrates, proposing an efficient mining strategy for geometry information. The segmentation, or partition of the image pixels, is viewed as a model structure selection problem, where the decisions are based on the implementable codelength of the model, akin to minimum description length for lossy representations. The intended usage of the extracted disparity map is to provide to the decoder the geometry information at a very small fraction from what is required for a lossless compressed version, and secondly, to convey to the decoder a segmentation describing the contours of the objects from the scene. We propose first an algorithm for constructing a hierarchical segmentation based on the persistency of the contours of regions in an iterative re-estimation algorithm. Then, we propose a second algorithm for constructing a new sequence of segmentations, by selecting the order in which the persistent contours are included in the model, driven by decisions based on the descriptive codelength. We consider real disparity datasets which have the geometry information at a high precision, in floating point format, but for which encoding of the raw information, in about 32 bits per pixels, is too expensive, and we then demonstrate good approximations preserving the object structure of the scene, achieved for rates below 0.2 bits per pixels.
6

Jiang, Ming-xin, Xian-xian Luo, Tao Hai, Hai-yan Wang, Song Yang, and Ahmed N. Abdalla. "Visual Object Tracking in RGB-D Data via Genetic Feature Learning". Complexity 2019 (May 2, 2019): 1–8. http://dx.doi.org/10.1155/2019/4539410.

Full text
Abstract:
Visual object tracking is a fundamental component in many computer vision applications. Extracting robust features of object is one of the most important steps in tracking. As trackers, only formulated on RGB data, are usually affected by occlusions, appearance, or illumination variations, we propose a novel RGB-D tracking method based on genetic feature learning in this paper. Our approach addresses feature learning as an optimization problem. As owning the advantage of parallel computing, genetic algorithm (GA) has fast speed of convergence and excellent global optimization performance. At the same time, unlike handcrafted feature and deep learning methods, GA can be employed to solve the problem of feature representation without prior knowledge, and it has no use for a large number of parameters to be learned. The candidate solution in RGB or depth modality is represented as an encoding of an image in GA, and genetic feature is learned through population initialization, fitness evaluation, selection, crossover, and mutation. The proposed RGB-D tracker is evaluated on popular benchmark dataset, and experimental results indicate that our method achieves higher accuracy and faster tracking speed.
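As a rough illustration of the evolutionary loop named in the abstract (population initialization, fitness evaluation, selection, crossover, mutation), here is a generic bit-string GA sketch in Python; the encoding length, operators, and the toy fitness function are assumptions, not the authors' tracking-specific design:

import numpy as np

rng = np.random.default_rng(42)

def evolve(fitness, n_bits=32, pop_size=40, generations=50, p_mut=0.02):
    pop = rng.integers(0, 2, size=(pop_size, n_bits))   # population initialization
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])  # fitness evaluation
        # tournament selection of parents
        parents = pop[[max(rng.choice(pop_size, 3), key=lambda i: scores[i])
                       for _ in range(pop_size)]]
        # single-point crossover on consecutive parent pairs
        cut = rng.integers(1, n_bits, size=pop_size // 2)
        children = parents.copy()
        for k, c in enumerate(cut):
            a, b = 2 * k, 2 * k + 1
            children[a, c:], children[b, c:] = parents[b, c:], parents[a, c:]
        # bit-flip mutation
        flip = rng.random(children.shape) < p_mut
        pop = np.where(flip, 1 - children, children)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()]

best = evolve(lambda ind: ind.sum())   # toy fitness: number of ones in the encoding
print(best.sum())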
7

Han, Lei, Xiaohua Huang, Zhan Shi, and Shengnan Zheng. "Depth Estimation from Light Field Geometry Using Convolutional Neural Networks". Sensors 21, no. 18 (September 10, 2021): 6061. http://dx.doi.org/10.3390/s21186061.

Full text
Abstract:
Depth estimation based on light field imaging is a new methodology that has succeeded the traditional binocular stereo matching and depth from monocular images. Significant progress has been made in light-field depth estimation. Nevertheless, the balance between computational time and the accuracy of depth estimation is still worth exploring. The geometry in light field imaging is the basis of depth estimation, and the abundant light-field data provides convenience for applying deep learning algorithms. The Epipolar Plane Image (EPI) generated from the light-field data has a line texture containing geometric information. The slope of the line is proportional to the depth of the corresponding object. Considering the light field depth estimation as a spatial density prediction task, we design a convolutional neural network (ESTNet) to estimate the accurate depth quickly. Inspired by the strong image feature extraction ability of convolutional neural networks, especially for texture images, we propose to generate EPI synthetic images from light field data as the input of ESTNet to improve the effect of feature extraction and depth estimation. The architecture of ESTNet is characterized by three input streams, encoding-decoding structure, and skip-connections. The three input streams receive horizontal EPI synthetic image (EPIh), vertical EPI synthetic image (EPIv), and central view image (CV), respectively. EPIh and EPIv contain rich texture and depth cues, while CV provides pixel position association information. ESTNet consists of two stages: encoding and decoding. The encoding stage includes several convolution modules, and correspondingly, the decoding stage embodies some transposed convolution modules. In addition to the forward propagation of the network ESTNet, some skip-connections are added between the convolution module and the corresponding transposed convolution module to fuse the shallow local and deep semantic features. ESTNet is trained on one part of a synthetic light-field dataset and then tested on another part of the synthetic light-field dataset and real light-field dataset. Ablation experiments show that our ESTNet structure is reasonable. Experiments on the synthetic light-field dataset and real light-field dataset show that our ESTNet can balance the accuracy of depth estimation and computational time.
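The depth cue the abstract relies on, namely that the slope of an EPI line relates to the depth of the corresponding scene point, can be illustrated with a small least-squares fit; the proportionality constant below is a placeholder for the camera-dependent calibration and is not taken from the paper:

import numpy as np

def epi_line_slope(view_indices, pixel_positions):
    """Fit u = slope * s + b through (view index s, pixel position u) samples."""
    slope, _ = np.polyfit(np.asarray(view_indices, float),
                          np.asarray(pixel_positions, float), deg=1)
    return slope

def depth_from_slope(slope, scale=1.0):
    # Depth taken as proportional to the EPI line slope (see abstract);
    # `scale` bundles focal length and baseline and is purely hypothetical.
    return scale * slope

views = [0, 1, 2, 3, 4]
positions = [10.0, 11.9, 14.1, 16.0, 18.0]   # synthetic EPI line samples
s = epi_line_slope(views, positions)
print(s, depth_from_slope(s, scale=0.5))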
8

Yi, Luying, Xiangyu Guo, Liqun Sun, and Bo Hou. "Structural and Functional Sensing of Bio-Tissues Based on Compressive Sensing Spectral Domain Optical Coherence Tomography". Sensors 19, no. 19 (September 27, 2019): 4208. http://dx.doi.org/10.3390/s19194208.

Full text
Abstract:
In this paper, a full depth 2D CS-SDOCT approach is proposed, which combines two-dimensional (2D) compressive sensing spectral-domain optical coherence tomography (CS-SDOCT) and dispersion encoding (ED) technologies, and its applications in structural imaging and functional sensing of bio-tissues are studied. Specifically, by introducing a large dispersion mismatch between the reference arm and sample arm in SD-OCT system, the reconstruction of the under-sampled A-scan data and the removal of the conjugated images can be achieved simultaneously by only two iterations. The under-sampled B-scan data is then reconstructed using the classic CS reconstruction algorithm. For a 5 mm × 3.2 mm fish-eye image, the conjugated image was reduced by 31.4 dB using 50% × 50% sampled data (250 depth scans and 480 spectral sampling points per depth scan), and all A-scan data was reconstructed in only 1.2 s. In addition, we analyze the application performance of the CS-SDOCT in functional sensing of locally homogeneous tissue. Simulation and experimental results show that this method can correctly reconstruct the extinction coefficient spectrum under reasonable iteration times. When 8 iterations were used to reconstruct the A-scan data in the imaging experiment of fisheye, the extinction coefficient spectrum calculated using 50% × 50% data was approximately consistent with that obtained with 100% data.
9

Huang, Zedong, Jinan Gu, Jing Li, Shuwei Li, and Junjie Hu. "Depth Estimation of Monocular PCB Image Based on Self-Supervised Convolution Network". Electronics 11, no. 12 (June 7, 2022): 1812. http://dx.doi.org/10.3390/electronics11121812.

Full text
Abstract:
To improve the accuracy of using deep neural networks to predict the depth information of a single image, we proposed an unsupervised convolutional neural network for single-image depth estimation. Firstly, the network is improved by introducing a dense residual module into the encoding and decoding structure. Secondly, the optimized hybrid attention module is introduced into the network. Finally, stereo image is used as the training data of the network to realize the end-to-end single-image depth estimation. The experimental results on KITTI and Cityscapes data sets show that compared with some classical algorithms, our proposed method can obtain better accuracy and lower error. In addition, we train our models on PCB data sets in industrial environments. Experiments in several scenarios verify the generalization ability of the proposed method and the excellent performance of the model.
10

Lim, Olivier, Stéphane Mancini, and Mauro Dalla Mura. "Feasibility of a Real-Time Embedded Hyperspectral Compressive Sensing Imaging System". Sensors 22, no. 24 (December 13, 2022): 9793. http://dx.doi.org/10.3390/s22249793.

Full text
Abstract:
Hyperspectral imaging has been attracting considerable interest as it provides spectrally rich acquisitions useful in several applications, such as remote sensing, agriculture, astronomy, geology and medicine. Hyperspectral devices based on compressive acquisitions have appeared recently as an alternative to conventional hyperspectral imaging systems and allow for data-sampling with fewer acquisitions than classical imaging techniques, even under the Nyquist rate. However, compressive hyperspectral imaging requires a reconstruction algorithm in order to recover all the data from the raw compressed acquisition. The reconstruction process is one of the limiting factors for the spread of these devices, as it is generally time-consuming and comes with a high computational burden. Algorithmic and material acceleration with embedded and parallel architectures (e.g., GPUs and FPGAs) can considerably speed up image reconstruction, making hyperspectral compressive systems suitable for real-time applications. This paper provides an in-depth analysis of the required performance in terms of computing power, data memory and bandwidth considering a compressive hyperspectral imaging system and a state-of-the-art reconstruction algorithm as an example. The results of the analysis show that real-time application is possible by combining several approaches, namely, exploitation of system matrix sparsity and bandwidth reduction by appropriately tuning data value encoding.
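The abstract does not name the reconstruction algorithm, so as a generic stand-in the sketch below uses plain iterative soft thresholding (ISTA) with a sparse system matrix, which illustrates why exploiting matrix sparsity lowers the per-iteration cost discussed above; the sizes, step size, and regularization weight are illustrative assumptions:

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import norm as spnorm

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.05, iters=300):
    """Minimise 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = spnorm(A) ** 2          # conservative step: Frobenius norm bounds the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)            # cheap when A is sparse
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = sparse.random(128, 512, density=0.02, random_state=0, format="csr")
x_true = np.zeros(512)
x_true[rng.choice(512, 8, replace=False)] = 1.0
y = A @ x_true
print(np.count_nonzero(np.abs(ista(A, y)) > 1e-3))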
11

Wang, Jihua, and Huayu Wang. "A study of 3D model similarity based on surface bipartite graph matching". Engineering Computations 34, no. 1 (March 6, 2017): 174–88. http://dx.doi.org/10.1108/ec-10-2015-0315.

Full text
Abstract:
Purpose: This study aims to compute 3D model similarity by extracting and comparing shape features from the neutral files. Design/methodology/approach: In this work, the clear text encoding document STEP (Standard for The Exchange of Product model data) of 3D models was analysed, and the models were characterized by two-depth trees consisting of both surface and shell nodes. All surfaces in the STEP files can be subdivided into three kinds, namely, free, analytical and loop surfaces. Surface similarity is defined by the variation coefficients of distances between data points on two surfaces, and subsequently, the shell similarity and 3D model similarity are determined using an optimal algorithm for bipartite graph matching. Findings: This approach is used to experimentally verify the effectiveness of the 3D model similarity algorithm. Originality/value: The novelty of this study lies in the computation of 3D model similarity by comparison of all surfaces. In addition, the study makes several key observations: surfaces reflect the most information concerning the functions and attributes of a 3D model and so the similarity between surfaces generates more comprehensive content (both external and internal); semantic-based 3D retrieval can be obtained under the premise of comparison of surface semantics; and more accurate similarity of 3D models can be obtained using the optimal algorithm of bipartite graph matching for all surfaces.
12

Matin, Amir, and Xu Wang. "Compressive Coded Rotating Mirror Camera for High-Speed Imaging". Photonics 8, no. 2 (January 30, 2021): 34. http://dx.doi.org/10.3390/photonics8020034.

Full text
Abstract:
We develop a novel compressive coded rotating mirror (CCRM) camera to capture events at high frame rates in passive mode with a compact instrument design at a fraction of the cost compared to other high-speed imaging cameras. Operation of the CCRM camera is based on amplitude optical encoding (grey scale) and a continuous frame sweep across a low-cost detector using a motorized rotating mirror system which can achieve single pixel shift between adjacent frames. Amplitude encoding and continuous frame overlapping enable the CCRM camera to achieve a high number of captured frames and high temporal resolution without making sacrifices in the spatial resolution. Two sets of dynamic scenes have been captured at up to a 120 Kfps frame rate in both monochrome and colored scales in the experimental demonstrations. The obtained heavily compressed data from the experiment are reconstructed using the optimization algorithm under the compressive sensing (CS) paradigm and the highest sequence depth of 1400 captured frames in a single exposure has been achieved with the highest compression ratio of 368 compared to other CS-based high-speed imaging technologies. Under similar conditions the CCRM camera is 700× faster than conventional rotating mirror based imaging devices and could reach a frame rate of up to 20 Gfps.
13

Nguyen, T. V., and A. G. Kravets. "Evaluation and Prediction of Trends in the Development of Scientific Research Based on Bibliometric Analysis of Publications". INFORMACIONNYE TEHNOLOGII 27, no. 4 (April 10, 2021): 195–201. http://dx.doi.org/10.17587/it.27.195-201.

Full text
Abstract:
The article proposes an approach to analyzing and predicting the thematic evolution of research by identifying an upward trend in keywords. Statistical analysis of the vocabulary of publications allows us to trace the depth of penetration of new ideas and methods, which can be set by the frequency of occurrence of words encoding whole concepts. The article presents a developed method for analyzing research trends and an article ranking algorithm based on the structure of a direct citation network. Data for the study was extracted from the Web of Science Core Collection, 6696 publications were collected for the experiment over the period 2005—2016 in the field of artificial intelligence. To evaluate the proposed method, 3211 publications were collected from 2017 to 2019. As a result, the method was evaluated by checking the presence of predicted keywords in the set of the most frequent terms for the period 2017—2019 and provided an accuracy of 73.33 %.
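A minimal version of the trend signal described above, counting how often a keyword occurs per publication year and flagging an upward trend from the fitted slope of the yearly counts, might look as follows; the data and positive-slope criterion are illustrative and this is not the authors' ranking algorithm:

import numpy as np
from collections import Counter

def keyword_trend(publications, keyword):
    """publications: iterable of (year, list_of_keywords); returns fitted slope."""
    counts = Counter(year for year, kws in publications if keyword in kws)
    years = np.array(sorted(counts))
    if len(years) < 2:
        return 0.0
    freqs = np.array([counts[y] for y in years], dtype=float)
    slope, _ = np.polyfit(years, freqs, deg=1)
    return slope

pubs = [(2014, ["svm"]), (2015, ["deep learning"]), (2016, ["deep learning", "svm"]),
        (2016, ["deep learning"]), (2017, ["deep learning"]), (2017, ["deep learning", "cnn"])]
print(keyword_trend(pubs, "deep learning") > 0)   # True: an upward-trending keyword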
14

Phermphoonphiphat, Ekasit, Tomohiko Tomita, Takashi Morita, Masayuki Numao, and Ken-Ichi Fukui. "Soft Periodic Convolutional Recurrent Network for Spatiotemporal Climate Forecast". Applied Sciences 11, no. 20 (October 18, 2021): 9728. http://dx.doi.org/10.3390/app11209728.

Full text
Abstract:
Many machine-learning applications and methods are emerging to solve problems associated with spatiotemporal climate forecasting; however, a prediction algorithm that considers only short-range sequential information may not be adequate to deal with periodic patterns such as seasonality. In this paper, we adopt a Periodic Convolutional Recurrent Network (Periodic-CRN) model to employ the periodicity component in our proposals of the periodic representation dictionary (PRD). Phase shifts and non-stationarity of periodicity are the key components in the model to support. Specifically, we propose a Soft Periodic-CRN (SP-CRN) with three proposals of utilizing periodicity components: nearby-time (PRD-1), periodic-depth (PRD-2), and periodic-depth differencing (PRD-3) representation to improve climate forecasting accuracy. We experimented on geopotential height at 300 hPa (ZH300) and sea surface temperature (SST) datasets of ERA-Interim. The results showed the superiority of PRD-1 plus or minus one month of a prior cycle to capture the phase shift. In addition, PRD-3 considered only the depth of one differencing periodic cycle (i.e., the previous year) can significantly improve the prediction accuracy of ZH300 and SST. The mixed method of PRD-1, and PRD-3 (SP-CRN-1+3) showed a competitive or slight improvement over their base models. By adding the metadata component to indicate the month with one-hot encoding to SP-CRN-1+3, the prediction result was a drastic improvement. The results showed that the proposed method could learn four years of periodicity from the data, which may relate to the El Niño–Southern Oscillation (ENSO) cycle.
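The month metadata mentioned at the end of the abstract is a standard one-hot encoding; a minimal sketch (purely illustrative) is:

import numpy as np

def one_hot_month(month):
    """month: 1-12; returns a 12-dimensional indicator vector to concatenate to the input."""
    vec = np.zeros(12)
    vec[month - 1] = 1.0
    return vec

print(one_hot_month(3))   # March -> [0, 0, 1, 0, ..., 0]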
15

Liao, Qing, Haoyu Tan, Wuman Luo, and Ye Ding. "Diverse Mobile System for Location-Based Mobile Data". Wireless Communications and Mobile Computing 2018 (August 1, 2018): 1–17. http://dx.doi.org/10.1155/2018/4217432.

Full text
Abstract:
The value of large amount of location-based mobile data has received wide attention in many research fields including human behavior analysis, urban transportation planning, and various location-based services. Nowadays, both scientific and industrial communities are encouraged to collect as much location-based mobile data as possible, which brings two challenges: (1) how to efficiently process the queries of big location-based mobile data and (2) how to reduce the cost of storage services, because it is too expensive to store several exact data replicas for fault-tolerance. So far, several dedicated storage systems have been proposed to address these issues. However, they do not work well when the ranges of queries vary widely. In this work, we design a storage system based on diverse replica scheme which not only can improve the query processing efficiency but also can reduce the cost of storage space. To the best of our knowledge, this is the first work to investigate the data storage and processing in the context of big location-based mobile data. Specifically, we conduct in-depth theoretical and empirical analysis of the trade-offs between different spatial-temporal partitioning and data encoding schemes. Moreover, we propose an effective approach to select an appropriate set of diverse replicas, which is optimized for the expected query loads while conforming to the given storage space budget. The experiment results show that using diverse replicas can significantly improve the overall query performance and the proposed algorithms for the replica selection problem are both effective and efficient.
16

Tasnim, Nusrat, and Joong-Hwan Baek. "Deep Learning-Based Human Action Recognition with Key-Frames Sampling Using Ranking Methods". Applied Sciences 12, no. 9 (April 20, 2022): 4165. http://dx.doi.org/10.3390/app12094165.

Full text
Abstract:
Nowadays, the demand for human–machine or object interaction is growing tremendously owing to its diverse applications. The massive advancement in modern technology has greatly influenced researchers to adopt deep learning models in the fields of computer vision and image-processing, particularly human action recognition. Many methods have been developed to recognize human activity, which is limited to effectiveness, efficiency, and use of data modalities. Very few methods have used depth sequences in which they have introduced different encoding techniques to represent an action sequence into the spatial format called dynamic image. Then, they have used a 2D convolutional neural network (CNN) or traditional machine learning algorithms for action recognition. These methods are completely dependent on the effectiveness of the spatial representation. In this article, we propose a novel ranking-based approach to select key frames and adopt a 3D-CNN model for action classification. We directly use the raw sequence instead of generating the dynamic image. We investigate the recognition results with various levels of sampling to show the competency and robustness of the proposed system. We also examine the universality of the proposed method on three benchmark human action datasets: DHA (depth-included human action), MSR-Action3D (Microsoft Action 3D), and UTD-MHAD (University of Texas at Dallas Multimodal Human Action Dataset). The proposed method secures better performance than state-of-the-art techniques using depth sequences.
17

Gatti, Giancarlo, Daniel Huerga, Enrique Solano, and Mikel Sanz. "Random access codes via quantum contextual redundancy". Quantum 7 (January 13, 2023): 895. http://dx.doi.org/10.22331/q-2023-01-13-895.

Full text
Abstract:
We propose a protocol to encode classical bits in the measurement statistics of many-body Pauli observables, leveraging quantum correlations for a random access code. Measurement contexts built with these observables yield outcomes with intrinsic redundancy, something we exploit by encoding the data into a set of convenient context eigenstates. This allows to randomly access the encoded data with few resources. The eigenstates used are highly entangled and can be generated by a discretely-parametrized quantum circuit of low depth. Applications of this protocol include algorithms requiring large-data storage with only partial retrieval, as is the case of decision trees. Using n-qubit states, this Quantum Random Access Code has greater success probability than its classical counterpart for n≥14 and than previous Quantum Random Access Codes for n≥16. Furthermore, for n≥18, it can be amplified into a nearly-lossless compression protocol with success probability 0.999 and compression ratio O(n2/2n). The data it can store is equal to Google-Drive server capacity for n=44, and to a brute-force solution for chess (what to do on any board configuration) for n=100.
18

Хромушин, Oleg Khromushin, Хромушин, Viktor Khromushin, Китанина, and K. Kitanina. "About the use of the recognition algorithm of the text in database". Journal of New Medical Technologies. eJournal 10, no. 1 (May 19, 2016): 0. http://dx.doi.org/10.12737/18445.

Full text
Abstract:
This article presents the features of using a text recognition algorithm based on the “sliding widening window” method for coding multiple causes of death. The algorithm dynamically “adjusts” the degree of coincidence and finds the most similar variant, and it can recognize text containing grammatical errors or a missing word in the wording of the cause of death. The authors propose three variants of implementing the text recognition algorithm that increase its speed. The first variant eliminates one of the replacement cycles by simultaneously evaluating windows of different sizes (1 to 16). The second variant uses a pre-filter, for example by scanning three letters, and an intermediate database that receives the filtered information; this option reduces the amount of data to be sorted and thereby increases performance. The third variant, based on filtering, sorts the information in a query executed on the result of a previous filtering query. The advantages and disadvantages of each option are identified, and the results are evaluated in terms of speed and recognition accuracy. In this framework, 8472 wordings were used for encoding multiple causes of death. The described analysis is useful in developing a software module for mortality registration. The third, filter-based option is recommended for implementation in Visual C++.
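The second variant, a coarse three-letter pre-filter followed by a similarity comparison, can be sketched as follows; the similarity measure, the candidate wordings, and the overlap threshold are assumptions rather than the authors' exact matching rule:

from difflib import SequenceMatcher

def trigrams(s):
    s = s.lower()
    return {s[i:i + 3] for i in range(len(s) - 2)}

def best_match(query, dictionary, prefilter_overlap=1):
    q3 = trigrams(query)
    # pre-filter: keep only wordings sharing at least one trigram with the query
    candidates = [w for w in dictionary if len(q3 & trigrams(w)) >= prefilter_overlap]
    if not candidates:
        candidates = list(dictionary)
    # full comparison only on the filtered candidates
    return max(candidates,
               key=lambda w: SequenceMatcher(None, query.lower(), w.lower()).ratio())

causes = ["acute myocardial infarction", "chronic ischaemic heart disease", "cerebral infarction"]
print(best_match("acute myocardal infraction", causes))   # tolerates spelling errors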
19

Zhang, Xiaolong, and Wei Wu. "Wireless Communication Physical Layer Sensing Antenna Array Construction and Information Security Analysis". Journal of Sensors 2021 (October 25, 2021): 1–11. http://dx.doi.org/10.1155/2021/9007071.

Full text
Abstract:
Due to the complexity of wireless communication networks and the open nature of wireless links, complex upper layer network encryption cryptographic algorithms are also difficult to implement effectively in complex mobile wireless communication and interconnection networks, and traditional cryptography-based security policies are gradually not well able to meet the security management needs of today’s mobile Internet information era. In this paper, the physical characteristics of the channel in the wireless channel are extracted and used to generate keys, and then, the keys are negotiated so that the keys generated by the two communicating parties are identical. Then, the generated keys are used by the communicating parties to design the interleaving matrix for encryption of the message. The false bit rate of the system is investigated for the case of an interleaving matrix generated using different quantization methods of keys. The problem of characterizing the encoding and encryption techniques for interleaving keys for the physical layer sensing antenna arrays of wireless channels is studied in depth. The physical layer wireless channel interleaving technique and the wireless channel physical layer encryption technique are organically combined, and a joint interleaving encryption method based on the physical layer key of the wireless channel is designed and used to encrypt and randomize the physical layer data information of the OFDM (Orthogonal Frequency Division Multiplexing) system, which improves the security and reliability of the wireless channel information transmission. The effect of physical layer keys under different physical layer quantization methods on the performance of the wireless channel interleaving encryption algorithm is studied, and the quantization methods of pure amplitude, pure phase, and joint amplitude phase are investigated for the characteristics of wireless physical layer channels.
20

Mezher, Mohammad A., Almothana Altamimi, and Ruhaifa Altamimi. "An enhanced Genetic Folding algorithm for prostate and breast cancer detection". PeerJ Computer Science 8 (June 21, 2022): e1015. http://dx.doi.org/10.7717/peerj-cs.1015.

Full text
Abstract:
Cancer’s genomic complexity is gradually increasing as we learn more about it. Genomic classification of various cancers is crucial in providing oncologists with vital information for targeted therapy. Thus, it becomes more pertinent to address issues of patient genomic classification. Prostate cancer is a cancer subtype that exhibits extreme heterogeneity. Prostate cancer contributes to 7.3% of new cancer cases worldwide, with a high prevalence in males. Breast cancer is the most common type of cancer in women and the second most significant cause of death from cancer in women. Breast cancer is caused by abnormal cell growth in the breast tissue, generally referred to as a tumour. Tumours are not synonymous with cancer; they can be benign (noncancerous), pre-malignant (pre-cancerous), or malignant (cancerous). Fine-needle aspiration (FNA) tests are used to biopsy the breast to diagnose breast cancer. Artificial Intelligence (AI) and machine learning (ML) models are used to diagnose with varying accuracy. In light of this, we used the Genetic Folding (GF) algorithm to predict prostate cancer status in a given dataset. An accuracy of 96% was obtained, thus being the current highest accuracy in prostate cancer diagnosis. The model was also used in breast cancer classification with a proposed pipeline that used exploratory data analysis (EDA), label encoding, feature standardization, feature decomposition, log transformation, detect and remove the outliers with Z-score, and the BAGGINGSVM approach attained a 95.96% accuracy. The accuracy of this model was then assessed using the rate of change of PSA, age, BMI, and filtration by race. We discovered that integrating the rate of change of PSA and age in our model raised the model’s area under the curve (AUC) by 6.8%, whereas BMI and race had no effect. As for breast cancer classification, no features were removed.
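Two steps of the breast-cancer pipeline named above, feature standardization and Z-score outlier removal, are simple to illustrate; the threshold and data below are assumptions, not values from the paper:

import numpy as np

def standardize(X):
    return (X - X.mean(axis=0)) / X.std(axis=0)

def drop_outliers_zscore(X, y, threshold=3.0):
    z = np.abs(standardize(X))
    keep = (z < threshold).all(axis=1)   # keep rows with no extreme feature value
    return X[keep], y[keep]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
X[0, 0] = 50.0                           # one injected outlier
y = rng.integers(0, 2, size=200)
Xc, yc = drop_outliers_zscore(X, y)
print(X.shape, Xc.shape)                 # the outlying row is removed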
21

Jia, Yin, Balakrishnan Ramalingam, Rajesh Elara Mohan, Zhenyuan Yang, Zimou Zeng, and Prabakaran Veerajagadheswar. "Deep-Learning-Based Context-Aware Multi-Level Information Fusion Systems for Indoor Mobile Robots Safe Navigation". Sensors 23, no. 4 (February 20, 2023): 2337. http://dx.doi.org/10.3390/s23042337.

Full text
Abstract:
Hazardous object detection (escalators, stairs, glass doors, etc.) and avoidance are critical functional safety modules for autonomous mobile cleaning robots. Conventional object detectors have less accuracy for detecting low-feature hazardous objects and have miss detection, and the false classification ratio is high when the object is under occlusion. Miss detection or false classification of hazardous objects poses an operational safety issue for mobile robots. This work presents a deep-learning-based context-aware multi-level information fusion framework for autonomous mobile cleaning robots to detect and avoid hazardous objects with a higher confidence level, even if the object is under occlusion. First, the image-level-contextual-encoding module was proposed and incorporated with the Faster RCNN ResNet 50 object detector model to improve the low-featured and occluded hazardous object detection in an indoor environment. Further, a safe-distance-estimation function was proposed to avoid hazardous objects. It computes the distance of the hazardous object from the robot’s position and steers the robot into a safer zone using detection results and object depth data. The proposed framework was trained with a custom image dataset using fine-tuning techniques and tested in real-time with an in-house-developed mobile cleaning robot, BELUGA. The experimental results show that the proposed algorithm detected the low-featured and occluded hazardous object with a higher confidence level than the conventional object detector and scored an average detection accuracy of 88.71%.
22

Yu, Xianjia, Sahar Salimpour, Jorge Peña Queralta, and Tomi Westerlund. "General-Purpose Deep Learning Detection and Segmentation Models for Images from a Lidar-Based Camera Sensor". Sensors 23, no. 6 (March 8, 2023): 2936. http://dx.doi.org/10.3390/s23062936.

Full text
Abstract:
Over the last decade, robotic perception algorithms have significantly benefited from the rapid advances in deep learning (DL). Indeed, a significant amount of the autonomy stack of different commercial and research platforms relies on DL for situational awareness, especially vision sensors. This work explored the potential of general-purpose DL perception algorithms, specifically detection and segmentation neural networks, for processing image-like outputs of advanced lidar sensors. Rather than processing the three-dimensional point cloud data, this is, to the best of our knowledge, the first work to focus on low-resolution images with a 360° field of view obtained with lidar sensors by encoding either depth, reflectivity, or near-infrared light in the image pixels. We showed that with adequate preprocessing, general-purpose DL models can process these images, opening the door to their usage in environmental conditions where vision sensors present inherent limitations. We provided both a qualitative and quantitative analysis of the performance of a variety of neural network architectures. We believe that using DL models built for visual cameras offers significant advantages due to their much wider availability and maturity compared to point cloud-based perception.
23

Hasanujjaman, Arnab Banerjee, Utpal Biswas, and Mrinal K. Naskar. "Design and Development of a Hardware Efficient Image Compression Improvement Framework". Micro and Nanosystems 12, no. 3 (December 1, 2020): 217–25. http://dx.doi.org/10.2174/1876402912666200128125733.

Full text
Abstract:
Background: In the field of image processing, many methods have already adopted the idea of data-science optimization, with numerous researchers working to reduce the compression ratio and increase the PSNR. These efforts also split into hardware and processing directions, which helps produce more promising research outcomes. In this paper, a concept for image segmentation is developed that splits the image into two halves, termed atomic images. The separation is done on the basis of the even and odd pixels of the original image in the spatial domain. Splitting the original image into atomic images yields efficient results in the experimental data. The compression and decompression times of the original image with both Quadtree and Huffman coding are also measured, giving the improved results reported in the results section. The superiority of the proposed scheme is further demonstrated by comparing its performance with the conventional Quadtree decomposition process. Objective: The objective of this work is to find the minimum resources required to reconstruct the image after compression. Method: The popular method of quadtree decomposition with Huffman encoding is used for image compression. Results: The proposed algorithm was implemented on six types of images and achieved a maximum PSNR of 30.12 dB for the Lena image and a maximum compression ratio of 25.96 for the MRI image. Conclusion: Different types of images were tested, and a high compression ratio with acceptable PSNR was obtained.
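The even/odd pixel separation into two "atomic" half-images can be sketched with array slicing; splitting along columns is an assumption, since the abstract only states that the split is based on even and odd pixel positions:

import numpy as np

def split_atomic(image):
    even = image[:, 0::2]   # even-indexed columns
    odd = image[:, 1::2]    # odd-indexed columns
    return even, odd

def merge_atomic(even, odd):
    h, w = even.shape[0], even.shape[1] + odd.shape[1]
    out = np.empty((h, w), dtype=even.dtype)
    out[:, 0::2], out[:, 1::2] = even, odd
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
e, o = split_atomic(img)
assert np.array_equal(merge_atomic(e, o), img)   # the split is lossless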
24

Murale, C., M. Sundarambal, and R. Nedunchezhian. "Analysis on Ensemble Methods for the Prediction of Cardiovascular Disease". Journal of Medical Imaging and Health Informatics 11, no. 10 (October 1, 2021): 2529–37. http://dx.doi.org/10.1166/jmihi.2021.3839.

Full text
Abstract:
Coronary Heart disease is one of the dominant sources of death and morbidity for the people worldwide. The identification of cardiac disease in the clinical review is considered one of the main problems. As the amount of data grows increasingly, interpretation and retrieval become even more complex. In addition, the Ensemble learning prediction model seems to be an important fact in this area of study. The prime aim of this paper is also to forecast CHD accurately. This paper is intended to offer a modern paradigm for prediction of cardiovascular diseases with the use of such processes such as pre-processing, detection of features, feature selection and classification. The pre-processing will initially be performed using the ordinal encoding technique, and the statistical and the features of higher order are extracted using the Fisher algorithm. Later, the minimization of record and attribute is performed, in which principle component analysis performs its extensive part in figuring out the “curse of dimensionality.” Lastly, the process of prediction is carried out by the different Ensemble models (SVM, Gaussian Naïve Bayes, Random forest, K-nearest neighbor, Logistic regression, decision tree and Multilayer perceptron that intake the features with reduced dimensions. Finally, in comparison to such success metrics the reliability of the proposal work is compared and its superiority has been confirmed. From the analysis, Naïve bayes with regards to accuracy is 98.4% better than other Ensemble algorithms.
25

Zhu, Yafei, Yuhai Liu, Yu Chen, and Lei Li. "ResSUMO: A Deep Learning Architecture Based on Residual Structure for Prediction of Lysine SUMOylation Sites". Cells 11, no. 17 (August 25, 2022): 2646. http://dx.doi.org/10.3390/cells11172646.

Full text
Abstract:
Lysine SUMOylation plays an essential role in various biological functions. Several approaches integrating various algorithms have been developed for predicting SUMOylation sites based on a limited dataset. Recently, the number of identified SUMOylation sites has significantly increased due to investigation at the proteomics scale. We collected modification data and found the reported approaches had poor performance using our collected data. Therefore, it is essential to explore the characteristics of this modification and construct prediction models with improved performance based on an enlarged dataset. In this study, we constructed and compared 16 classifiers by integrating four different algorithms and four encoding features selected from 11 sequence-based or physicochemical features. We found that the convolution neural network (CNN) model integrated with residue structure, dubbed ResSUMO, performed favorably when compared with the traditional machine learning and CNN models in both cross-validation and independent tests. The area under the receiver operating characteristic (ROC) curve for ResSUMO was around 0.80, superior to that of the reported predictors. We also found that increasing the depth of neural networks in the CNN models did not improve prediction performance due to the degradation problem, but the residual structure could be included to optimize the neural networks and improve performance. This indicates that residual neural networks have the potential to be broadly applied in the prediction of other types of modification sites with great effectiveness and robustness. Furthermore, the online ResSUMO service is freely accessible.
26

Chen, Zhen, Pei Zhao, Fuyi Li, Yanan Wang, A. Ian Smith, Geoffrey I. Webb, Tatsuya Akutsu, Abdelkader Baggag, Halima Bensmail, and Jiangning Song. "Comprehensive review and assessment of computational methods for predicting RNA post-transcriptional modification sites from RNA sequences". Briefings in Bioinformatics 21, no. 5 (November 11, 2019): 1676–96. http://dx.doi.org/10.1093/bib/bbz112.

Full text
Abstract:
Abstract RNA post-transcriptional modifications play a crucial role in a myriad of biological processes and cellular functions. To date, more than 160 RNA modifications have been discovered; therefore, accurate identification of RNA-modification sites is fundamental for a better understanding of RNA-mediated biological functions and mechanisms. However, due to limitations in experimental methods, systematic identification of different types of RNA-modification sites remains a major challenge. Recently, more than 20 computational methods have been developed to identify RNA-modification sites in tandem with high-throughput experimental methods, with most of these capable of predicting only single types of RNA-modification sites. These methods show high diversity in their dataset size, data quality, core algorithms, features extracted and feature selection techniques and evaluation strategies. Therefore, there is an urgent need to revisit these methods and summarize their methodologies, in order to improve and further develop computational techniques to identify and characterize RNA-modification sites from the large amounts of sequence data. With this goal in mind, first, we provide a comprehensive survey on a large collection of 27 state-of-the-art approaches for predicting N1-methyladenosine and N6-methyladenosine sites. We cover a variety of important aspects that are crucial for the development of successful predictors, including the dataset quality, operating algorithms, sequence and genomic features, feature selection, model performance evaluation and software utility. In addition, we also provide our thoughts on potential strategies to improve the model performance. Second, we propose a computational approach called DeepPromise based on deep learning techniques for simultaneous prediction of N1-methyladenosine and N6-methyladenosine. To extract the sequence context surrounding the modification sites, three feature encodings, including enhanced nucleic acid composition, one-hot encoding, and RNA embedding, were used as the input to seven consecutive layers of convolutional neural networks (CNNs), respectively. Moreover, DeepPromise further combined the prediction score of the CNN-based models and achieved around 43% higher area under receiver-operating curve (AUROC) for m1A site prediction and 2–6% higher AUROC for m6A site prediction, respectively, when compared with several existing state-of-the-art approaches on the independent test. In-depth analyses of characteristic sequence motifs identified from the convolution-layer filters indicated that nucleotide presentation at proximal positions surrounding the modification sites contributed most to the classification, whereas those at distal positions also affected classification but to different extents. To maximize user convenience, a web server was developed as an implementation of DeepPromise and made publicly available at http://DeepPromise.erc.monash.edu/, with the server accepting both RNA sequences and genomic sequences to allow prediction of two types of putative RNA-modification sites.
27

Green, Dylan, Anne Gelb, and Geoffrey P. Luke. "Sparsity-Based Recovery of Three-Dimensional Photoacoustic Images from Compressed Single-Shot Optical Detection". Journal of Imaging 7, no. 10 (October 2, 2021): 201. http://dx.doi.org/10.3390/jimaging7100201.

Full text
Abstract:
Photoacoustic (PA) imaging combines optical excitation with ultrasonic detection to achieve high-resolution imaging of biological samples. A high-energy pulsed laser is often used for imaging at multi-centimeter depths in tissue. These lasers typically have a low pulse repetition rate, so to acquire images in real-time, only one pulse of the laser can be used per image. This single pulse necessitates the use of many individual detectors and receive electronics to adequately record the resulting acoustic waves and form an image. Such requirements make many PA imaging systems both costly and complex. This investigation proposes and models a method of volumetric PA imaging using a state-of-the-art compressed sensing approach to achieve real-time acquisition of the initial pressure distribution (IPD) at a reduced level of cost and complexity. In particular, a single exposure of an optical image sensor is used to capture an entire Fabry–Pérot interferometric acoustic sensor. Time resolved encoding as achieved through spatial sweeping with a galvanometer. This optical system further makes use of a random binary mask to set a predetermined subset of pixels to zero, thus enabling recovery of the time-resolved signals. The Two-Step Iterative Shrinking and Thresholding algorithm is used to reconstruct the IPD, harnessing the sparsity naturally occurring in the IPD as well as the additional structure provided by the binary mask. We conduct experiments on simulated data and analyze the performance of our new approach.
28

Tian, Tao, Zong Ju Peng, Gang Yi Jiang, Mei Yu, and Fen Chen. "High Efficient Depth Video Coding Based on Static Region Skipping". Applied Mechanics and Materials 321-324 (June 2013): 989–93. http://dx.doi.org/10.4028/www.scientific.net/amm.321-324.989.

Full text
Abstract:
Depth video has attracted great attention recently with the trend of MVD (Multiview plus depth) being the main 3D representation format. In order to improve depth video coding efficiency, we propose a novel depth video coding algorithm based on static region skipping. Firstly, based on the corresponding color video, depth video is segmented into two regions: motion regions and static regions. The blocks in static regions are skipped without encoding. The skipped blocks are predicted from the neighboring depth images. Experimental results show that the proposed algorithm not only saves the bitrate up to 56.83% but also improves virtual view quality and saves the encoding time.
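A minimal sketch of the skip decision described above: a depth block is treated as static when the collocated block of the colour video barely changes between consecutive frames, and is then copied from the previous depth frame instead of being encoded; the block size and SAD threshold are illustrative assumptions, not the paper's exact values:

import numpy as np

def static_mask(color_prev, color_cur, block=16, thresh=2.0):
    """Mark a block static when the mean absolute difference of the collocated
    colour blocks falls below a threshold."""
    h, w = color_cur.shape[:2]
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            sl = np.s_[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            sad = np.abs(color_cur[sl].astype(float) - color_prev[sl].astype(float)).mean()
            mask[by, bx] = sad < thresh
    return mask

def skip_static_blocks(depth_prev, depth_cur, mask, block=16):
    out = depth_cur.copy()
    for by, bx in zip(*np.nonzero(mask)):
        sl = np.s_[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
        out[sl] = depth_prev[sl]   # skipped block: predicted from the previous depth frame
    return out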
29

Schidler, Andre, and Stefan Szeider. "SAT-based Decision Tree Learning for Large Data Sets". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (May 18, 2021): 3904–12. http://dx.doi.org/10.1609/aaai.v35i5.16509.

Full text
Abstract:
Decision trees of low depth are beneficial for understanding and interpreting the data they represent. Unfortunately, finding a decision tree of lowest depth that correctly represents given data is NP-hard. Hence known algorithms either (i) utilize heuristics that do not optimize the depth or (ii) are exact but scale only to small or medium-sized instances. We propose a new hybrid approach to decision tree learning, combining heuristic and exact methods in a novel way. More specifically, we employ SAT encodings repeatedly to local parts of a decision tree provided by a standard heuristic, leading to a global depth improvement. This allows us to scale the power of exact SAT-based methods to almost arbitrarily large data sets. We evaluate our new approach experimentally on a range of real-world instances that contain up to several thousand samples. In almost all cases, our method successfully decreases the depth of the initial decision tree; often, the decrease is significant.
30

Anbarasi, A., D. Sathyasrinivas, and K. Vivekanandan. "Transaction Encoding Algorithm (TEA) for Distributed Data". International Journal of Computer Applications 16, no. 8 (February 28, 2011): 43–49. http://dx.doi.org/10.5120/2030-2580.

Full text
31

Joy, Helen K., and Manjunath R. Kounte. "DECISION ALGORITHM FOR INTRA PREDICTION IN HIGH-EFFICIENCY VIDEO CODING (HEVC)". Journal of Southwest Jiaotong University 57, no. 5 (October 30, 2022): 180–93. http://dx.doi.org/10.35741/issn.0258-2724.57.5.15.

Full text
Abstract:
Prediction in HEVC exploits the redundant information in the frame to improve compression efficiency. The computational complexity of prediction is comparatively high as it recursively calculates the depth by comparing the rate-distortion optimization cost (RDO) exhaustively. The deep learning technology has shown a good mark in this area compared to traditional signal processing because of its content-based analysis and learning ability. This paper proposes a deep depth decision algorithm to predict the depth of the coding tree unit (CTU) and store it as a 16-element vector, and this model is pipelined to the HEVC encoder to compare the time taken and bit rate of encoding. The comparison chart clearly shows the reduction in computational time and enhancement in bitrate while encoding. The dataset used here is generated for the model with 110,000 frames of various resolutions, split into test, training, and validation, and trained on a depth decision model. The trained model interfaced with the HEVC encoder is compared with the normal encoder. The quality of the proposed model is evaluated with BD-PSNR and BD-Bitrate, showing a dip of 0.6 in BD-PSNR and an increase of 6.7 in BD-Bitrate. When pipelined with the original HEVC, the RDO cost shows an improvement over existing techniques. The average encoding time is reduced by about 72% by the pipelined deep depth decision algorithm, which points to the reduction in computational complexity. An average time saving of 88.49% is achieved with a deep depth decision algorithm-based encoder compared to the existing techniques.
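The 16-element CTU depth label mentioned in the abstract can be pictured as one partition depth per 16x16 block of a 64x64 CTU; taking the per-block maximum depth below is an assumption about how that vector is formed, made only to illustrate the representation:

import numpy as np

def ctu_depth_vector(depth_map_64x64):
    """Collapse a 64x64 per-sample depth map (values 0-3) to 16 per-block depths."""
    d = np.asarray(depth_map_64x64)
    blocks = d.reshape(4, 16, 4, 16)               # (block row, y, block col, x)
    return blocks.max(axis=(1, 3)).reshape(16)     # one depth per 16x16 block

dm = np.zeros((64, 64), dtype=int)
dm[:32, :32] = 1           # e.g. the top-left 32x32 region is split one level deeper
print(ctu_depth_vector(dm))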
Style APA, Harvard, Vancouver, ISO itp.
32

Yao, Quan Zhu, Bing Tian i Wang Yun He. "XML Keyword Search Algorithm Based on Level-Traverse Encoding". Applied Mechanics and Materials 263-266 (grudzień 2012): 1553–58. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.1553.

Pełny tekst źródła
Streszczenie:
For XML documents, existing keyword retrieval methods encode each node with Dewey encoding, so comparing Dewey labels component by component is necessary in LCA computation. When the depth of the XML document is large, the many LCA computations degrade the performance of keyword search. In this paper we propose a novel labeling method called Level-TRaverse (LTR) encoding, combine it with a result-set definition based on the Exclusive Lowest Common Ancestor (ELCA), and design a Bottom-Up Level Algorithm (BULA) for query processing. The experiments demonstrate that this method improves both the efficiency and the accuracy of XML keyword retrieval.
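For reference, the baseline cost that LTR encoding is designed to avoid looks like the following: with Dewey labels, the LCA of two nodes is their longest common label prefix, so each computation compares components one by one and grows with document depth (a generic illustration, not the paper's code).

    def dewey_lca(a, b):
        # a, b are Dewey labels such as "1.2.3.1"; the LCA is the longest common prefix.
        prefix = []
        for x, y in zip(a.split('.'), b.split('.')):
            if x != y:
                break
            prefix.append(x)
        return '.'.join(prefix)

    print(dewey_lca("1.2.3.1", "1.2.5"))   # -> "1.2"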
Style APA, Harvard, Vancouver, ISO itp.
33

M’Balé, Kenneth M., i Darsana Josyula. "Encoding seasonal patterns using the Kasai algorithm". Artificial Intelligence Research 6, nr 2 (14.06.2017): 80. http://dx.doi.org/10.5430/air.v6n2p80.

Pełny tekst źródła
Streszczenie:
Data series that contain patterns are the expression of a set of rules that specify the pattern. In cases where the data series is known but the rules are not, the Kasai algorithm can analyze the data pattern and produce the complete set of rules that describe the pattern observed to date.
Style APA, Harvard, Vancouver, ISO itp.
34

Jiang, Xiantao, Jie Feng, Tian Song i Takafumi Katayama. "Low-Complexity and Hardware-Friendly H.265/HEVC Encoder for Vehicular Ad-Hoc Networks". Sensors 19, nr 8 (24.04.2019): 1927. http://dx.doi.org/10.3390/s19081927.

Pełny tekst źródła
Streszczenie:
Real-time video streaming over vehicular ad-hoc networks (VANETs) has been considered a critical challenge for road safety applications. The purpose of this paper is to reduce the computational complexity of the high efficiency video coding (HEVC) encoder for VANETs. Firstly, based on a novel spatiotemporal neighborhood set, a coding tree unit depth decision algorithm is presented that controls the depth search range. Secondly, a Bayesian classifier is used for the prediction unit decision in inter-prediction, with the prior probability calculated from a Gibbs random field model. Simulation results show that the overall algorithm can significantly reduce encoding time with a reasonably low loss in coding efficiency. Compared to the HEVC reference software HM16.0, the encoding time is reduced by up to 63.96%, while the Bjontegaard delta bit-rate increases by only 0.76–0.80% on average. Moreover, the proposed HEVC encoder is low-complexity and hardware-friendly for video codecs that reside on mobile vehicles in VANETs.
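The depth-range control can be sketched as follows: the CU depth interval searched for the current CTU is bounded by the depths chosen by its spatial and temporal neighbours. The neighbour set and the ±1 margin below are assumptions made for illustration.

    # Illustrative depth-range restriction from a spatiotemporal neighbourhood: the
    # encoder only searches CU depths between the minimum and maximum depths used
    # by the left, upper and co-located CTUs (with a margin), instead of 0..3.
    def depth_search_range(left_depths, upper_depths, colocated_depths, margin=1):
        neighbours = left_depths + upper_depths + colocated_depths
        if not neighbours:
            return 0, 3                          # no context available: full search
        lo = max(0, min(neighbours) - margin)
        hi = min(3, max(neighbours) + margin)
        return lo, hi

    print(depth_search_range([1, 1], [1], [1]))   # -> (0, 2): depth 3 is never tried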
Style APA, Harvard, Vancouver, ISO itp.
35

Sinurat, Santana, i Maranatha Pasaribu. "Text Encoding Using Cipher Block Chaining Algorithm". Jurnal Info Sains : Informatika dan Sains 11, nr 2 (1.09.2021): 13–17. http://dx.doi.org/10.54209/infosains.v11i2.42.

Pełny tekst źródła
Streszczenie:
Data confidentiality and security are critical in data communication, both for shared security and for individual privacy. Computer users who want to keep their data from unauthorized parties continually look for ways to secure the information that will be communicated or stored. One way to protect data confidentiality is to apply cryptography. In Cipher Block Chaining (CBC) mode, a feedback mechanism operates on each block: the result of the previous block's encryption is fed back into the encryption of the current block. The current plaintext block is first XORed with the ciphertext block of the previous encryption, and the result of this XOR goes into the encryption function. With CBC mode, each ciphertext block therefore depends not only on its own plaintext block but also on all preceding plaintext blocks. The authors apply the Cipher Block Chaining (CBC) cryptographic method to text encoding in order to secure the data.
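A minimal sketch of the CBC chaining rule follows, using a toy byte-wise "block cipher" purely as a placeholder for a real cipher such as AES; it illustrates only the XOR-and-chain structure and must not be used for actual security.

    import os

    BLOCK = 8

    def toy_block_encrypt(block, key):
        # Placeholder cipher: keyed byte-wise substitution (NOT secure).
        return bytes((b + k) % 256 for b, k in zip(block, key))

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def pad(data):
        n = BLOCK - len(data) % BLOCK
        return data + bytes([n]) * n             # PKCS#7-style padding

    def cbc_encrypt(plaintext, key, iv):
        data = pad(plaintext)
        prev = iv
        out = []
        for i in range(0, len(data), BLOCK):
            block = xor_bytes(data[i:i + BLOCK], prev)   # XOR with previous ciphertext
            prev = toy_block_encrypt(block, key)         # feed the result into the cipher
            out.append(prev)
        return b"".join(out)

    key, iv = os.urandom(BLOCK), os.urandom(BLOCK)
    print(cbc_encrypt(b"Data confidentiality matters", key, iv).hex())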
Style APA, Harvard, Vancouver, ISO itp.
36

Zhang, Qiuwen, Xiao Wang, Xinpeng Huang, Rijian Su i Yong Gan. "Fast mode decision algorithm for 3D-HEVC encoding optimization based on depth information". Digital Signal Processing 44 (wrzesień 2015): 37–46. http://dx.doi.org/10.1016/j.dsp.2015.06.005.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
37

Mathias-Neto, Waldemar Pereira, i José Roberto Sanches Mantovani. "A Node-Depth Encoding-Based Tabu Search Algorithm for Power Distribution System Restoration". Journal of Control, Automation and Electrical Systems 27, nr 3 (26.02.2016): 317–27. http://dx.doi.org/10.1007/s40313-016-0234-6.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
38

Jiang, Dexia, i Leilei Li. "Node Selection Algorithm for Network Coding in the Mobile Wireless Network". Symmetry 13, nr 5 (10.05.2021): 842. http://dx.doi.org/10.3390/sym13050842.

Pełny tekst źródła
Streszczenie:
In a multicast network, network coding has proven to be an effective technique for approaching the maximum flow capacity. Although network coding improves performance, encoding nodes increase cost and delay in wireless networks. Therefore, minimizing the number of encoding nodes is of great significance for improving the practical performance of a network under a maximum multicast flow. This paper makes partial improvements to the existing selection algorithm for encoding nodes in wireless networks. Firstly, the article gives the condition for an intermediate node to be an encoding node. Secondly, a maximum flow algorithm based on depth-first search is proposed that shortens the search time by selecting the larger augmenting flow at each step. Finally, we construct a random graph model to simulate the wireless network and the maximum multicast flow algorithm, and analyze the statistical characteristics of the encoding nodes. The optimization aims to find the minimal number of required coding nodes, which corresponds to minimal energy consumption. The simulations indicate that the distribution of coding nodes tends toward a geometric distribution, and that the curve of the maximum flow tends to become symmetric as the network scale and the node covering radius increase.
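The augmenting-path idea can be illustrated with a small DFS-based max-flow sketch that prefers the edge with the larger residual capacity at every step; this illustrates the strategy described above and is not the paper's code.

    def max_flow(capacity, source, sink):
        n = len(capacity)
        flow = [[0] * n for _ in range(n)]

        def dfs(u, limit, visited):
            if u == sink:
                return limit
            visited.add(u)
            # Explore larger residual capacities first.
            order = sorted(range(n), key=lambda v: capacity[u][v] - flow[u][v], reverse=True)
            for v in order:
                residual = capacity[u][v] - flow[u][v]
                if residual > 0 and v not in visited:
                    pushed = dfs(v, min(limit, residual), visited)
                    if pushed > 0:
                        flow[u][v] += pushed
                        flow[v][u] -= pushed
                        return pushed
            return 0

        total = 0
        while True:
            pushed = dfs(source, float('inf'), set())
            if pushed == 0:
                return total
            total += pushed

    caps = [[0, 3, 2, 0], [0, 0, 1, 3], [0, 0, 0, 2], [0, 0, 0, 0]]
    print(max_flow(caps, 0, 3))   # -> 5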
Style APA, Harvard, Vancouver, ISO itp.
39

Chen, Yihang, Zening Cao, Jinxin Wang, Yan Shi i Zilong Qin. "Encoding Conversion Algorithm of Quaternary Triangular Mesh". ISPRS International Journal of Geo-Information 11, nr 1 (31.12.2021): 33. http://dx.doi.org/10.3390/ijgi11010033.

Pełny tekst źródła
Streszczenie:
In the process of global information construction, different fields have built their own discrete global grid systems (DGGS). With the development of big data technology, data exchange, integration, and updating have gradually become a trend, as has the integration of different DGGS. Due to the heterogeneity of DGGS and their different encoding rules, building encoding conversion rules and data mapping relationships for the same object across different DGGS is a key technology for achieving DGGS interoperability. As a multipurpose DGGS, the quaternary triangular mesh (QTM) has become an effective spatial framework for constructing the digital earth because of its simple structure. There are currently many QTM encoding schemes; they have played a key role in the development of QTM, but they also make it difficult to exchange and integrate QTM data held under different encodings. To solve this problem, we explore the characteristics of QTM encoding and put forward three conversion algorithms: a resampling conversion algorithm, a hierarchical conversion algorithm, and a row–column conversion algorithm.
Style APA, Harvard, Vancouver, ISO itp.
40

Ibrahem, Hatem, Ahmed Salem i Hyun-Soo Kang. "DTS-Depth: Real-Time Single-Image Depth Estimation Using Depth-to-Space Image Construction". Sensors 22, nr 5 (1.03.2022): 1914. http://dx.doi.org/10.3390/s22051914.

Pełny tekst źródła
Streszczenie:
As most recent high-resolution depth-estimation algorithms are computationally too expensive to run in real time, the common solution is to use a low-resolution input image to reduce the computational complexity. We propose a different approach: an efficient, real-time convolutional neural network-based depth-estimation algorithm that uses a single high-resolution image as input. The proposed method efficiently constructs a high-resolution depth map using a small encoding architecture and eliminates the need for a decoder, which is typically used in the encoder–decoder architectures employed for depth estimation. The proposed algorithm adopts a modified MobileNetV2 architecture, which is lightweight, to estimate depth through depth-to-space image construction, an operation generally employed in image super-resolution. As a result, it achieves fast frame processing and can predict high-accuracy depth in real time. We train and test our method on the challenging KITTI, Cityscapes, and NYUV2 depth datasets. The proposed method achieves low relative absolute error (0.028 for KITTI, 0.167 for Cityscapes, and 0.069 for NYUV2) while running at speeds reaching 48 frames per second on a GPU and 20 frames per second on a CPU for high-resolution test images. We compare our method with state-of-the-art depth-estimation methods and show that it outperforms them, while its architecture is less complex and works in real time.
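The depth-to-space construction itself is a simple rearrangement; the sketch below shows how a low-resolution map with r² channels becomes a single-channel map r times larger per side. NumPy is used here only for illustration, and the resolutions are arbitrary.

    import numpy as np

    def depth_to_space(x, r):
        """x: (H, W, r*r) array -> (H*r, W*r) array."""
        h, w, c = x.shape
        assert c == r * r
        x = x.reshape(h, w, r, r)            # split channels into an r x r sub-grid
        x = x.transpose(0, 2, 1, 3)          # interleave rows and columns: (H, r, W, r)
        return x.reshape(h * r, w * r)

    low_res = np.random.rand(120, 160, 16)   # e.g. r = 4
    print(depth_to_space(low_res, 4).shape)  # (480, 640)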
Style APA, Harvard, Vancouver, ISO itp.
41

Wang, Peiren, Yong Wang, Wei Nie, Zhengyang Li, Zilong Li i Xue Han. "P‐4.6: Efficient Synthetic Encoding Algorithm based on Depth Offset Mapping for Tabletop Integral Imaging Display". SID Symposium Digest of Technical Papers 54, S1 (kwiecień 2023): 644–47. http://dx.doi.org/10.1002/sdtp.16374.

Pełny tekst źródła
Streszczenie:
An efficient three-dimensional image rendering algorithm based on depth offset mapping is proposed for a tabletop integral imaging display system. A two-dimensional color image is used as the reference image, and the corresponding gray depth image provides the depth information. The offsets of pixels in each elemental image array are calculated along the horizontal and vertical directions. According to the geometric relationship between the integral imaging display panel and the viewing position, the synthesized three-dimensional image can be rendered directly. Compared with the traditional method based on virtual viewpoint generation, the proposed algorithm has fast computational speed and high rendering efficiency. The effectiveness of this method is verified by experiment, and it can provide a high-quality three-dimensional display effect.
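A very rough sketch of depth-offset warping for a single elemental image is given below; the gain factor, the backward-warping scheme, and the lens-offset parameterisation are assumptions, not the paper's exact mapping.

    import numpy as np

    def warp_elemental(color, depth, lens_dx, lens_dy, gain=0.05):
        # Sample the reference image at positions shifted in proportion to depth
        # and to the lens' offset from the array centre (backward mapping, no holes).
        h, w = depth.shape
        ys, xs = np.mgrid[0:h, 0:w]
        sx = np.clip((xs + gain * lens_dx * depth).astype(int), 0, w - 1)
        sy = np.clip((ys + gain * lens_dy * depth).astype(int), 0, h - 1)
        return color[sy, sx]

    color = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
    depth = np.random.rand(128, 128) * 10
    elemental = warp_elemental(color, depth, lens_dx=3, lens_dy=-2)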
Style APA, Harvard, Vancouver, ISO itp.
42

Xu, Dong-wei, Yong-dong Wang, Li-min Jia, Gui-jun Zhang i Hai-feng Guo. "Compression Algorithm of Road Traffic Spatial Data Based on LZW Encoding". Journal of Advanced Transportation 2017 (2017): 1–13. http://dx.doi.org/10.1155/2017/8182690.

Pełny tekst źródła
Streszczenie:
Wide-ranging applications of road traffic detection technology in road traffic state data acquisition have introduced new challenges for the transmission and storage of road traffic big data. In this paper, a compression method for road traffic spatial data based on LZW encoding is proposed. First, the spatial correlation of road segments is analyzed by principal component analysis. Then, road traffic spatial data compression based on LZW encoding is presented, and parameter determination is discussed. Finally, six typical road segments in Beijing are adopted for case studies. The results prove that the road traffic spatial data compression method based on LZW encoding is feasible and that the reconstructed data achieve high accuracy.
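For context, the LZW step itself is the textbook dictionary coder sketched below, shown here on an arbitrary byte string as a generic illustration rather than the authors' traffic-data pipeline.

    def lzw_encode(data):
        table = {bytes([i]): i for i in range(256)}
        next_code = 256
        w = b""
        codes = []
        for byte in data:
            wc = w + bytes([byte])
            if wc in table:
                w = wc
            else:
                codes.append(table[w])
                table[wc] = next_code       # grow the dictionary with the new phrase
                next_code += 1
                w = bytes([byte])
        if w:
            codes.append(table[w])
        return codes

    print(lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT"))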
Style APA, Harvard, Vancouver, ISO itp.
43

Finley, Matthew G., i Tyler Bell. "Variable Precision Depth Encoding for 3D Range Geometry Compression". Electronic Imaging 2020, nr 17 (26.01.2020): 34–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.17.3dmp-034.

Pełny tekst źródła
Streszczenie:
This paper presents a novel method for accurately encoding 3D range geometry within the color channels of a 2D RGB image that allows the encoding frequency—and therefore the encoding precision—to be uniquely determined for each coordinate. The proposed method can thus be used to balance between encoding precision and file size by encoding geometry along a normal distribution; encoding more precisely where the density of data is high and less precisely where the density is low. Alternative distributions may be followed to produce encodings optimized for specific applications. In general, the nature of the proposed encoding method is such that the precision of each point can be freely controlled or derived from an arbitrary distribution, ideally enabling this method for use within a wide range of applications.
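The density-driven precision idea can be sketched as a per-coordinate bit allocation: depths near the mean of a fitted normal distribution receive more quantisation levels, rare depths fewer. The mapping from density to bits below is an assumption made for illustration and ignores the paper's actual RGB-channel encoding.

    import numpy as np

    def variable_precision_quantise(depth, min_bits=4, max_bits=12):
        mu, sigma = depth.mean(), depth.std() + 1e-9
        density = np.exp(-0.5 * ((depth - mu) / sigma) ** 2)    # unnormalised normal pdf
        bits = np.round(min_bits + (max_bits - min_bits) * density).astype(int)
        levels = 2 ** bits                                      # per-coordinate precision
        zmin, zmax = depth.min(), depth.max()
        q = np.round((depth - zmin) / (zmax - zmin) * (levels - 1))
        recon = q / (levels - 1) * (zmax - zmin) + zmin
        return recon, bits

    depth = np.random.normal(1.5, 0.2, size=(480, 640)).clip(0.5, 3.0)
    recon, bits = variable_precision_quantise(depth)
    print(float(np.abs(recon - depth).max()))    # worst-case error, largest for rare depths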
Style APA, Harvard, Vancouver, ISO itp.
44

Lee, Han Soo. "Fast encoding algorithm based on depth of coding-unit for high efficiency video coding". Optical Engineering 51, nr 6 (8.06.2012): 067402. http://dx.doi.org/10.1117/1.oe.51.6.067402.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
45

Santos, A. C., A. C. B. Delbem, J. B. A. London i N. G. Bretas. "Node-Depth Encoding and Multiobjective Evolutionary Algorithm Applied to Large-Scale Distribution System Reconfiguration". IEEE Transactions on Power Systems 25, nr 3 (sierpień 2010): 1254–65. http://dx.doi.org/10.1109/tpwrs.2010.2041475.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
46

UbaidurRahman, Noorul Hussain, Chithralekha Balamurugan i Rajapandian Mariappan. "A Novel String Matrix Data Structure for DNA Encoding Algorithm". Procedia Computer Science 46 (2015): 820–32. http://dx.doi.org/10.1016/j.procs.2015.02.151.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
47

Hwang, Wen-Jyi, i Bung-Yau Chen. "Fast vector quantisation encoding algorithm using zero-tree data structure". Electronics Letters 33, nr 15 (1997): 1290. http://dx.doi.org/10.1049/el:19970880.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
48

M, Anand, i V. Mathivananr. "Quantization Encoding Algorithm Based Satellite Image Compression". Indonesian Journal of Electrical Engineering and Computer Science 8, nr 3 (1.12.2017): 740. http://dx.doi.org/10.11591/ijeecs.v8.i3.pp740-742.

Pełny tekst źródła
Streszczenie:
In the field of digital data there is a growing demand for bandwidth to transmit videos and images all over the world, so there is a need for image compression that reduces storage space and transmission bandwidth in imaging applications. In this paper we propose a new technique for compressing satellite images using a Region of Interest (ROI) approach based on a lossy quantization encoding algorithm. The performance of our method is evaluated by analyzing the PSNR values of the output images.
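The ROI idea reduces to quantising background pixels with a coarser step than pixels inside the region of interest, as in the following sketch; the step sizes and the rectangular ROI are assumptions made for the example.

    import numpy as np

    def roi_quantise(img, roi, fine_step=4, coarse_step=32):
        # roi = (y0, y1, x0, x1): pixels inside keep fine steps, background is coarse.
        y0, y1, x0, x1 = roi
        step = np.full(img.shape, coarse_step, dtype=np.float32)
        step[y0:y1, x0:x1] = fine_step
        return np.clip(np.round(img / step) * step, 0, 255).astype(img.dtype)

    img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
    compressed = roi_quantise(img, roi=(64, 192, 64, 192))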
Style APA, Harvard, Vancouver, ISO itp.
49

David, Shibin, i G. Jaspher W. Kathrine. "Digital Signature Algorithm for M-Payment Applications Using Arithmetic Encoding and Factorization Algorithm". Journal of Cases on Information Technology 23, nr 3 (lipiec 2021): 11–26. http://dx.doi.org/10.4018/jcit.20210701.oa2.

Pełny tekst źródła
Streszczenie:
Mobile communication systems employ an encoding-encryption model to ensure secure data transmission over the network. This model encodes the data and encrypts it before transmitting it to the receiver's end. A non-trivial operation is performed to generate a strong secret key with which the data is encrypted. To strengthen the security of this model, arithmetic encoding is applied to the data before encryption. The encrypted data is hashed using a lightweight hashing algorithm to generate a small, fixed-length hash digest, keeping the overhead low before the data is communicated to the other end. A signature plays an important role in authorizing the message sent from sender to receiver; to avoid forgery through proxy signatures, blind signatures, and the like, a hybrid scheme is proposed in this article. The mobile communication system is enhanced by introducing a hybrid encode-encrypt-hash mechanism that is secure against plaintext attacks and mathematical attacks, increases the confidentiality of the data and the security of the key, and thereby enhances the security of the system. In this paper, the design is applied to a mobile payment system, one of the most widely used mobile services, and it is shown that the designed security model can ensure swift transactions in a secure way.
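The encode–encrypt–hash flow can be sketched as a three-stage pipeline. In the sketch below zlib stands in for the arithmetic encoder, a keyed XOR stream stands in for the real cipher, and SHA-256 stands in for the lightweight hash; all three substitutions are assumptions made only to keep the example self-contained.

    import hashlib, itertools, zlib

    def keystream(key):
        return itertools.cycle(hashlib.sha256(key).digest())

    def encrypt(data, key):
        # Toy stream cipher used purely as a placeholder for the real encryption step.
        return bytes(b ^ k for b, k in zip(data, keystream(key)))

    def protect(message, key):
        encoded = zlib.compress(message)                    # 1. source encoding
        ciphertext = encrypt(encoded, key)                  # 2. encryption
        digest = hashlib.sha256(ciphertext).hexdigest()     # 3. fixed-length digest
        return ciphertext, digest

    ct, tag = protect(b"pay 10.00 to merchant 42", b"shared-secret")
    print(len(ct), tag[:16])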
Style APA, Harvard, Vancouver, ISO itp.
50

Zheng, Jian Yun, Li Li i Yi Sheng Zhu. "The Performance of Double-Binary Turbo Codes in the HAP-Based Communication System". Applied Mechanics and Materials 713-715 (styczeń 2015): 1141–44. http://dx.doi.org/10.4028/www.scientific.net/amm.713-715.1141.

Pełny tekst źródła
Streszczenie:
The small-scale fading model for the HAP-based communication channel is analyzed, and the structure and algorithms of double-binary Turbo encoding and decoding are studied. Numerical results show the BER performance of double-binary Turbo codes with different interleaving depths and interleaving lengths, using QPSK modulation over a Rayleigh fading channel.
Style APA, Harvard, Vancouver, ISO itp.
