Journal articles on the topic 'Run-Length encoding method'


Consult the top 50 journal articles for your research on the topic 'Run-Length encoding method.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Suwardiman, Suwardiman, and Fitri Bimantoro. "Implementasi Modifikasi Kompresi Run-Length Encoding pada Steganografi." Journal of Computer Science and Informatics Engineering (J-Cosine) 4, no. 2 (December 31, 2020): 100–109. http://dx.doi.org/10.29303/jcosine.v4i2.109.

Abstract:
RLE is one method of compressing data, but it has the disadvantage that the compressed data may grow to twice the original size. In this research, RLE is therefore modified to address that problem. The experiment tested three file formats: JPG, PNG and BMP. The tests on JPG and PNG show that the conventional RLE method cannot compress any of the images, yielding negative compression ratios with an average of about -98.2%, whereas the modified RLE compresses all of the images with an average compression ratio of about 0.17%. The tests on BMP show that conventional RLE successfully compresses 5 of the 11 tested images with an average compression ratio of about -34.7%, whereas the modified RLE compresses all tested images with an average compression ratio of about 18.8%.
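As background to the worst-case expansion mentioned in this abstract, the following is a minimal sketch (an illustration only, not the authors' modified RLE) of a byte-oriented RLE in which every run is stored as a (count, value) pair, so input with no repeated neighbours doubles in size; the compression ratio below follows the common 1 - compressed/original convention, which goes negative on expansion.

def rle_encode(data: bytes) -> bytes:
    # Naive byte-level RLE: each run becomes a (count, value) pair.
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def compression_ratio(original: bytes, compressed: bytes) -> float:
    # Positive: output smaller than input; negative: expansion.
    return 1.0 - len(compressed) / len(original)

print(compression_ratio(b"AAAAAAAA", rle_encode(b"AAAAAAAA")))  # 0.75
print(compression_ratio(b"ABCDEFGH", rle_encode(b"ABCDEFGH")))  # -1.0 (doubled)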
2

Shan, Yanhu, Yongfeng Ren, Guoyong Zhen, and Kaiqun Wang. "An Enhanced Run-Length Encoding Compression Method for Telemetry Data." Metrology and Measurement Systems 24, no. 3 (September 1, 2017): 551–62. http://dx.doi.org/10.1515/mms-2017-0039.

Abstract:
The telemetry data are essential in evaluating the performance of aircraft and diagnosing its failures. This work combines oversampling technology with a run-length encoding compression algorithm that includes an error factor to further enhance the compression performance of telemetry data in a multichannel acquisition system. Compression of telemetry data is carried out with the use of FPGAs. Pulse signals and vibration signals are used in the experiments. The proposed method is compared with two existing methods. The experimental results indicate that the compression ratio, precision, and distortion degree of the telemetry data are improved significantly compared with those obtained by the existing methods. The implementation and measurement of the proposed telemetry data compression method show its effectiveness when used in a high-precision, high-capacity multichannel acquisition system.
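The abstract does not spell out how the error factor enters the encoder, so the sketch below only illustrates the general idea it suggests: consecutive samples whose values stay within a tolerance eps of the run's first value are folded into one run, trading a bounded distortion for longer runs (function names and values here are assumptions for illustration, not the paper's algorithm).

def rle_with_error_factor(samples, eps):
    # Lossy run-length encoding: samples within +/- eps of the run's
    # first value are merged into a single (count, value) pair.
    runs = []
    i = 0
    while i < len(samples):
        base, run = samples[i], 1
        while i + run < len(samples) and abs(samples[i + run] - base) <= eps:
            run += 1
        runs.append((run, base))
        i += run
    return runs

print(rle_with_error_factor([10, 10, 11, 10, 50, 51, 49], eps=1))
# [(4, 10), (3, 50)]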
3

Agulhari, Cristiano M., Ivanil S. Bonatti, and Pedro L. D. Peres. "An Adaptive Run Length Encoding method for the compression of electrocardiograms." Medical Engineering & Physics 35, no. 2 (February 2013): 145–53. http://dx.doi.org/10.1016/j.medengphy.2010.03.003.

4

Chouakri, Sid Ahmed, and Fatiha Meskine. "Three States QRLE (Quantized Run Length Encoding) Based JPEG Image Compression Method." Eurasia Proceedings of Science Technology Engineering and Mathematics 21 (December 31, 2022): 173–81. http://dx.doi.org/10.55549/epstem.1225063.

5

Liu, Hanlin, Jingju Liu, Xuehu Yan, Pengfei Xue, and Dingwei Tan. "An Audio Steganography Based on Run Length Encoding and Integer Wavelet Transform." International Journal of Digital Crime and Forensics 13, no. 2 (March 2021): 16–34. http://dx.doi.org/10.4018/ijdcf.2021030102.

Abstract:
This paper proposes an audio steganography method based on run-length encoding and the integer wavelet transform, which can be used to hide a secret message in digital audio. The major contribution of the proposed scheme is a high-capacity audio steganography in which the secret information is compressed by run-length encoding. In the intended scenario, the main purpose is to hide as much information as possible in the cover audio files. First, the secret information is scrambled using a chaotic map; the scrambled result is then run-length encoded, and finally the encoded information is embedded into the integer wavelet coefficients. The experimental results and a comparison with an existing technique show that, by exploiting the lossless compression of run-length encoding and the attack resistance of the wavelet domain, the proposed method improves capacity, preserves good audio quality, and achieves blind extraction while maintaining imperceptibility and strong robustness.
6

Girishwaingankar, Poorva, and Sangeeta Milind Joshi. "The PHY-NGSC-Based ORT Run Length Encoding Scheme for Video Compression." International Journal of Image and Graphics 20, no. 02 (April 2020): 2050007. http://dx.doi.org/10.1142/s0219467820500072.

Abstract:
This paper proposes a compression algorithm using an octonary repetition tree (ORT) based on run-length encoding (RLE). RLE is a lossless data compression method whose major issue is duplication caused by the use of a code word or flag. Hence, ORT is offered in place of a flag or code word to overcome this issue. The method gives better performance in terms of compression ratio, i.e., 99.75%, but ORT performs poorly in terms of compression speed. For that reason, physical next-generation secure computing (PHY-NGSC) is hybridized with ORT to raise the compression speed. It applies an MPI-OpenMP programming paradigm to ORT to improve the compression speed of the encoder, achieving multiple levels of parallelism within an image: MPI across the group-of-pictures level and OpenMP at the slice level. At the same time, the proposed method can compress a wide range of data, such as multimedia, executable files and documents. The performance of the proposed work is compared with other methods such as accordion RLE, context-adaptive variable-length coding (CAVLC) and context-based arithmetic coding (CBAC) through implementation on the Matlab working platform.
7

Dudhagara, Chetan R., and Hasamukh B. Patel. "Performance Analysis of Data Compression using Lossless Run Length Encoding." Oriental Journal of Computer Science and Technology 10, no. 3 (August 10, 2017): 703–7. http://dx.doi.org/10.13005/ojcst/10.03.22.

Abstract:
In the era of modern technology, there are many problems in the storage, retrieval and transmission of data. Data compression is necessary due to the rapid growth of digital media and the consequent need to reduce storage size and transmit data effectively and efficiently over networks; it also reduces transmission traffic on the Internet. Data compression tries to reduce the number of bits required to store data digitally. Various data and image compression algorithms are widely used to reduce the original data to a smaller number of bits. Lossless data and image compression is a special class of data compression that reduces the number of bits by identifying and eliminating statistical redundancy in the input data. Run-length encoding is a very simple and effective method of this kind: it provides good lossless compression and is useful for data that contain many consecutive runs of the same value. This paper presents an implementation of run-length encoding for data compression.
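For reference, a minimal run-length encoder and decoder of the kind described here (an illustrative sketch, not the paper's implementation) looks like this:

def rle_encode(text: str):
    # Collapse each run of identical characters into a (char, count) pair.
    runs = []
    for ch in text:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs) -> str:
    # Expand the (char, count) pairs back into the original string.
    return "".join(ch * count for ch, count in runs)

encoded = rle_encode("WWWWBBBWWWW")
print(encoded)                                # [('W', 4), ('B', 3), ('W', 4)]
print(rle_decode(encoded) == "WWWWBBBWWWW")   # True (lossless)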
8

Nechta, Ivan V. "New method of steganalysis for text data obtained by synonym run-length encoding." Bezopasnost informacionnyh tehnology 25, no. 2 (May 2018): 114–20. http://dx.doi.org/10.26583/bit.2018.2.10.

9

Sharmila, K., and K. Kuppu samy. "An Efficient Image Compression Method using DCT, Fractal and Run Length Encoding Techniques." International Journal of Engineering Trends and Technology 13, no. 6 (July 25, 2014): 287–90. http://dx.doi.org/10.14445/22315381/ijett-v13p257.

10

Wang, Helong, Dingtao Shen, Wenlong Chen, Yiheng Liu, Yueping Xu, and Debao Tan. "Run-Length-Based River Skeleton Line Extraction from High-Resolution Remote Sensed Image." Remote Sensing 14, no. 22 (November 18, 2022): 5852. http://dx.doi.org/10.3390/rs14225852.

Abstract:
Automatic extraction of the skeleton lines of river systems from high-resolution remote-sensing images has great significance for surveying and managing water resources. A large number of existing methods for the automatic extraction of skeleton lines from raster images are primarily used for simple graphs and images (e.g., fingerprint, text, and character recognition). These methods generally are memory intensive and have low computational efficiency. These shortcomings preclude their direct use in the extraction of skeleton lines from large volumes of high-resolution remote-sensing images. In this study, we developed a method to extract river skeleton lines based entirely on run-length encoding. This method attempts to replace direct raster encoding with run-length encoding for storing river data, which can considerably compress raster data. A run-length boundary tracing strategy is used instead of complete raster matrix traversal to quickly determine redundant pixels, thereby significantly improving the computational efficiency. An experiment was performed using a 0.5 m-resolution remote-sensing image of Yiwu city in the Chinese province of Zhejiang. Raster data for the rivers in Yiwu were obtained using both the DeepLabv3+ deep learning model and the conventional visual interpretation method. Subsequently, the proposed method was used to extract the skeleton lines of the rivers in Yiwu. To compare the proposed method with the classical raster-based skeleton line extraction algorithm developed by Zhang and Suen in terms of memory consumption and computational efficiency, the visually interpreted river data were used to generate skeleton lines at different raster resolutions. The results showed that the proposed method consumed less than 1% of the memory consumed by the classical method and was over 10 times more computationally efficient. This finding suggests that the proposed method has the potential for river skeleton line extraction from terabyte-scale remote-sensing image data on personal computers.
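To make the storage idea concrete, here is a small sketch (assumed conventions, not the authors' implementation) that converts each row of a binary river mask into (start_column, length) runs, the representation on which run-length boundary tracing can then operate:

def rows_to_runs(mask):
    # mask: list of rows of 0/1 values; each row becomes a list of
    # (start_column, length) runs covering the 1-valued (river) pixels.
    encoded = []
    for row in mask:
        runs, col = [], 0
        while col < len(row):
            if row[col] == 1:
                start = col
                while col < len(row) and row[col] == 1:
                    col += 1
                runs.append((start, col - start))
            else:
                col += 1
        encoded.append(runs)
    return encoded

print(rows_to_runs([[0, 1, 1, 1, 0, 0, 1],
                    [1, 1, 0, 0, 0, 1, 1]]))
# [[(1, 3), (6, 1)], [(0, 2), (5, 2)]]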
11

S, Sivanantham, Aravind Babu S, Babu Ramki, and Mallick P.S. "Test Data Compression with Alternating Equal-Run-Length Coding." International Journal of Engineering & Technology 7, no. 4.10 (October 2, 2018): 1089. http://dx.doi.org/10.14419/ijet.v7i4.10.27925.

Abstract:
This paper presents a new X-filling algorithm for test power reduction and a novel encoding technique for test data compression in scan-based VLSI testing. The proposed encoding technique focuses on replacing redundant runs of the equal-run-length vector with a shorter codeword. The effectiveness of this compression method depends on the number of repeated runs occurring in the fully specified test set. In order to maximize the repeated runs with equal run length, the unspecified bits in the test cubes are filled with the proposed technique, called alternating equal-run-length (AERL) filling. The resulting test data are compressed using the proposed alternating equal-run-length coding to reduce the test data volume. An efficient decompression architecture is also presented to decode the original data with low area overhead and power. Experimental results obtained from the larger ISCAS'89 benchmark circuits show the efficiency of the proposed work. AERL achieves compression ratios of up to 82.05%, together with reductions of up to 39.81% and 93.20% in peak and average power transitions in scan-in mode during IC testing.
12

Abu-Taieh, Evon, and Issam AlHadid. "CRUSH: A New Lossless Compression Algorithm." Modern Applied Science 12, no. 11 (October 29, 2018): 387. http://dx.doi.org/10.5539/mas.v12n11p387.

Abstract:
Multimedia is a highly competitive world, and one of the properties this is reflected in is the speed of download and upload of multimedia elements: text, sound, pictures, animation. This paper presents the CRUSH algorithm, a lossless compression algorithm that can be used to compress files. The CRUSH method is fast and simple, with time complexity O(n), where n is the number of elements being compressed. Furthermore, the compressed file is independent of the algorithm and of any unnecessary data structures. The paper compares CRUSH with other compression algorithms such as Shannon–Fano coding, Huffman coding, Run-Length Encoding (RLE), Arithmetic Coding, Lempel-Ziv-Welch (LZW), the Burrows-Wheeler Transform, the Move-to-Front (MTF) Transform, Haar and wavelet-tree methods, Delta Encoding, Rice and Golomb Coding, Tunstall coding, the DEFLATE algorithm, and Run-Length Golomb-Rice (RLGR).
13

Abu-Taieh, Evon, and Issam AlHadid. "CRUSH: A New Lossless Compression Algorithm." Modern Applied Science 12, no. 11 (October 29, 2018): 406. http://dx.doi.org/10.5539/mas.v12n11p406.

Abstract:
Multimedia is a highly competitive world, and one of the properties this is reflected in is the speed of download and upload of multimedia elements: text, sound, pictures, animation. This paper presents the CRUSH algorithm, a lossless compression algorithm that can be used to compress files. The CRUSH method is fast and simple, with time complexity O(n), where n is the number of elements being compressed. Furthermore, the compressed file is independent of the algorithm and of any unnecessary data structures. The paper compares CRUSH with other compression algorithms such as Shannon–Fano coding, Huffman coding, Run-Length Encoding (RLE), Arithmetic Coding, Lempel-Ziv-Welch (LZW), the Burrows-Wheeler Transform, the Move-to-Front (MTF) Transform, Haar and wavelet-tree methods, Delta Encoding, Rice and Golomb Coding, Tunstall coding, the DEFLATE algorithm, and Run-Length Golomb-Rice (RLGR).
14

Nagasaka, Akio, and Takafumi Miyatake. "Real-time Scene Identification Using Run-length Encoding of Video Feature Sequences." Journal of Robotics and Mechatronics 11, no. 2 (April 20, 1999): 98–103. http://dx.doi.org/10.20965/jrm.1999.p0098.

Abstract:
We propose real-time video scene identification that detects all scenes in stored video that are the same as the latest free-length scene, compressing the video feature sequence to an average of less than 20 bytes per second so that features can be stored for a long time. On a typical personal computer it takes less than 30 ms on average to process one newly input frame image, even when more than 24 hours of video features are stored. Experiments with TV broadcasts showed that this method finds correct pairs of identical scenes in real time without error. This becomes the basis for active video recording based on a user's TV viewing history and for new robotic machine vision for surveillance.
15

Huang, Xue Mei, and Jin Chuan Wang. "Rapid Extraction and Compression of DICOM Data for Medical Image Geometric Modeling." Advanced Materials Research 433-440 (January 2012): 7511–15. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.7511.

Abstract:
This paper presents a method of extracting and compressing the required data from DCM files for medical image geometric modeling. Based on the characteristics of DICOM data, it combines the idea of run-length coding with block coding to achieve rapid data compression and storage in RAM. Compared with other coding methods, the encoding approach for DICOM data in this paper not only saves memory space and improves transmission efficiency, but also allows a required single pixel, or part of the pixel data, to be read conveniently from the compressed data.
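A brief sketch of the kind of random access the abstract mentions, reading one pixel straight from run-length-compressed row data without decompressing it (generic layout assumed here, not the paper's format):

from bisect import bisect_right

def compress_row(row):
    # Store each run as (start_index, value); run starts are kept so a
    # pixel's run can later be found by binary search.
    starts, values, i = [], [], 0
    while i < len(row):
        starts.append(i)
        values.append(row[i])
        while i < len(row) and row[i] == values[-1]:
            i += 1
    return starts, values

def read_pixel(starts, values, index):
    # Locate the run containing 'index' without expanding the row.
    return values[bisect_right(starts, index) - 1]

starts, values = compress_row([7, 7, 7, 3, 3, 9])
print(read_pixel(starts, values, 4))  # 3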
16

Rajasekhar, H., and B. Prabhakara Rao. "An Efficient Video Compression Technique Using Watershed Algorithm and JPEG-LS Encoding." Journal of Computational and Theoretical Nanoscience 13, no. 10 (October 1, 2016): 6671–79. http://dx.doi.org/10.1166/jctn.2016.5613.

Abstract:
In a previous video compression method, videos were segmented using a novel motion estimation algorithm with the aid of the watershed method, but the compression ratio (CR) obtained with that motion estimation algorithm was not adequate. Moreover, the method's performance needed to be improved in the encoding and decoding processes, because most video compression methods rely on encoding techniques such as JPEG, run-length, Huffman and LSK encoding, and improving the encoding stage improves the overall compression result. Hence, to overcome these drawbacks, we propose a new video compression method with a well-known encoding technique. In the proposed method, the motion vectors of the input video frames are estimated by applying the watershed and ARS-ST (Adaptive Rood Search with Spatio-Temporal) algorithms. After that, the vector blocks with high difference values are encoded using the JPEG-LS encoder. JPEG-LS has excellent coding and computational efficiency, outperforming JPEG2000 and many other image compression methods; it has relatively low complexity and low storage requirements, and its compression capability is sufficient. To obtain the compressed video, the encoded blocks are subsequently decoded by JPEG-LS. The implementation results show the effectiveness of the proposed method in compressing a large number of videos. The performance of the proposed video compression method is evaluated by comparing its results with existing video compression techniques; the comparison shows that the proposed method achieves higher compression ratios and PSNR on the tested videos than the existing techniques.
17

Baghdad Science Journal. "Combined DWT and DCT Image Compression Using Sliding RLE Technique." Baghdad Science Journal 8, no. 3 (September 4, 2011): 832–39. http://dx.doi.org/10.21123/bsj.8.3.832-839.

Abstract:
A number of compression schemes were put forward to achieve high compression factors with high image quality at a low computational time. In this paper, a combined transform coding scheme is proposed which is based on discrete wavelet (DWT) and discrete cosine (DCT) transforms with an added new enhancement method, which is the sliding run length encoding (SRLE) technique, to further improve compression. The advantages of the wavelet and the discrete cosine transforms were utilized to encode the image. This first step involves transforming the color components of the image from RGB to YUV planes to acquire the advantage of the existing spectral correlation and consequently gaining more compression. DWT is then applied to the Y, U and V color space information giving the approximate and the detail coefficients. The detail coefficients are quantized, coded using run length encoding (RLE) and SRLE. The approximate coefficients were coded using DCT, since DCT has superior compression performance when image information has poor power concentration in high frequency areas. This output is also quantized, coded using RLE and SRLE. Test results showed that the proposed DWT DCT SRLE system proved to have encouraging results in terms of Peak Signal-to-Noise Ratio (PSNR), Compression Factor (CF) and execution time when compared with some DWT based image compressions.
18

Rozenberg, Liat, Sagi Lotan, and Dan Feldman. "Finding Patterns in Signals Using Lossy Text Compression." Algorithms 12, no. 12 (December 11, 2019): 267. http://dx.doi.org/10.3390/a12120267.

Abstract:
Whether the source is an autonomous car, a robotic vacuum cleaner, or a quadcopter, signals from sensors tend to have some hidden patterns that repeat themselves. For example, typical GPS traces from a smartphone contain periodic trajectories such as “home, work, home, work, ⋯”. Our goal in this study was to automatically reverse engineer such signals, identify their periodicity, and then use it to compress and de-noise these signals. To do so, we present a novel method of using algorithms from the field of pattern matching and text compression to represent the “language” in such signals. Common text compression algorithms are less tailored to handle such strings; moreover, they are lossless and cannot be used to recover noisy signals. To this end, we define the recursive run-length encoding (RRLE) method, which is a generalization of the well-known run-length encoding (RLE) method. Then, we suggest lossy and lossless algorithms to compress and de-noise such signals. Unlike previous results, running time and optimality guarantees are proved for each algorithm. Experimental results on synthetic and real data sets are provided. We demonstrate our system by showing how it can be used to turn commercial micro air-vehicles into autonomous robots. This is done by reverse engineering their unpublished communication protocols and using a laptop or on-board micro-computer to control them. Our open source code may be useful for both the community of millions of toy robot users, as well as for researchers who may extend it for further protocols.
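The recursive run-length encoding (RRLE) defined in the paper is not reproduced here; the sketch below only illustrates, under assumed conventions, the underlying idea of compressing runs of repeated blocks (such as a periodic "home, work" trace) rather than runs of single symbols:

def block_runs(seq):
    # Greedy sketch of "runs of blocks": at each position, look for the
    # block length whose immediate repetitions cover the most symbols,
    # then emit (block, repeat_count) and skip past the repetitions.
    out, i = [], 0
    while i < len(seq):
        best_p, best_reps = 1, 1
        for p in range(1, (len(seq) - i) // 2 + 1):
            block = seq[i:i + p]
            reps = 1
            while seq[i + reps * p: i + (reps + 1) * p] == block:
                reps += 1
            if reps >= 2 and reps * p > best_reps * best_p:
                best_p, best_reps = p, reps
        out.append((seq[i:i + best_p], best_reps))
        i += best_p * best_reps
    return out

trace = ["home", "work"] * 3 + ["gym"]
print(block_runs(trace))
# [(['home', 'work'], 3), (['gym'], 1)]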
19

Rijab, Khalida Shaaban, and Mohammed Abdul Redha Hussien. "Efficient electrocardiogram signal compression algorithm using dual encoding technique." Indonesian Journal of Electrical Engineering and Computer Science 25, no. 3 (March 1, 2022): 1529. http://dx.doi.org/10.11591/ijeecs.v25.i3.pp1529-1538.

Abstract:
In medical practice, the storage space of electrocardiogram (ECG) records is a major concern. These records can contain hours of recording, necessitating a large amount of storage space. This problem is commonly addressed by compressing the ECG signal. The proposed work deals with an ECG signal compression method based on the discrete wavelet transform (DWT). The DWT is a powerful tool for compacting signals and represents a signal in another, time-frequency, domain; it is very appropriate for the elimination and removal of redundancy. The ECG signals are decomposed using the DWT, after which the resulting coefficients are thresholded according to the energy packing efficiency (EPE) of the signal. Compression is achieved by quantization and dual encoding (run-length encoding and Huffman encoding); the dual encoding technique compresses the data significantly. The results of the proposed method show good performance, with high compression ratios and good-quality reconstructed signals. For example, the compression ratio (CR) = 20.6, 10.7 and 11.1 with percent root-mean-square difference (PRD) = 1%, 0.9% and 1% for the different wavelets used (Haar, db2 and FK4), respectively.
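The figures quoted here follow the standard definitions of compression ratio and percent root-mean-square difference; a small sketch with those assumed formulas (the numeric arguments below are made-up example values, not taken from the paper) is:

import math

def compression_ratio(original_bits: int, compressed_bits: int) -> float:
    # CR = size before compression / size after compression.
    return original_bits / compressed_bits

def prd(x, x_rec) -> float:
    # Percent root-mean-square difference between the original signal x
    # and the reconstructed signal x_rec.
    num = sum((a - b) ** 2 for a, b in zip(x, x_rec))
    den = sum(a ** 2 for a in x)
    return 100.0 * math.sqrt(num / den)

print(round(compression_ratio(11040, 536), 1))                     # 20.6
print(round(prd([1.0, 2.0, 3.0, 4.0], [1.0, 2.1, 2.9, 4.0]), 2))   # 2.58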
20

Song, Wei, and Wen Bing Fan. "A New SPIHT Image Coder Based on Fast Lifting Wavelet Transform." Applied Mechanics and Materials 462-463 (November 2013): 288–93. http://dx.doi.org/10.4028/www.scientific.net/amm.462-463.288.

Abstract:
To improve PSNR and coding efficiency, the paper proposes an improved image compression algorithm that guarantees real-time image transmission and achieves a high compression ratio while preserving image quality. The improved SPIHT image coding algorithm uses the fast lifting wavelet transform to speed up the transform stage and, because many consecutive zeros appear in SPIHT quantization coding, adopts simultaneous entropy and SPIHT encoding; the entropy coding uses variable run-length coding. Experiments show that the method achieves the expected purpose and can be applied to image data transmission and storage in remote image surveillance systems.
21

Bent, Brinnae, Baiying Lu, Juseong Kim, and Jessilyn P. Dunn. "Biosignal Compression Toolbox for Digital Biomarker Discovery." Sensors 21, no. 2 (January 13, 2021): 516. http://dx.doi.org/10.3390/s21020516.

Abstract:
A critical challenge to using longitudinal wearable sensor biosignal data for healthcare applications and digital biomarker development is the exacerbation of the healthcare “data deluge,” leading to new data storage and organization challenges and costs. Data aggregation, sampling rate minimization, and effective data compression are all methods for consolidating wearable sensor data to reduce data volumes. There has been limited research on appropriate, effective, and efficient data compression methods for biosignal data. Here, we examine the application of different data compression pipelines built using combinations of algorithmic- and encoding-based methods to biosignal data from wearable sensors and explore how these implementations affect data recoverability and storage footprint. Algorithmic methods tested include singular value decomposition, the discrete cosine transform, and the biorthogonal discrete wavelet transform. Encoding methods tested include run-length encoding and Huffman encoding. We apply these methods to common wearable sensor data, including electrocardiogram (ECG), photoplethysmography (PPG), accelerometry, electrodermal activity (EDA), and skin temperature measurements. Of the methods examined in this study, and in line with the characteristics of the different data types, we recommend direct data compression with Huffman encoding for ECG and PPG, singular value decomposition with Huffman encoding for EDA and accelerometry, and the biorthogonal discrete wavelet transform with Huffman encoding for skin temperature to maximize data recoverability after compression. We also report the best methods for maximizing the compression ratio. Finally, we develop and document open-source code and data for each compression method tested here, which can be accessed through the Digital Biomarker Discovery Pipeline as the “Biosignal Data Compression Toolbox,” an open-source, accessible software platform for compressing biosignal data.
22

Mishra, Mukesh, Gourab Sen Gupta, and Xiang Gui. "Investigation of Energy Cost of Data Compression Algorithms in WSN for IoT Applications." Sensors 22, no. 19 (October 10, 2022): 7685. http://dx.doi.org/10.3390/s22197685.

Abstract:
The exponential growth in remote sensing, coupled with advancements in integrated circuit (IC) design and fabrication technology for communication, has prompted the progress of Wireless Sensor Networks (WSN). A WSN comprises sensor nodes and hubs capable of sensing, processing, and communicating wirelessly. Sensor nodes have limited resources such as memory, energy and computation capabilities, restricting their ability to process the large volume of data that is generated. Compressing the data before transmission helps alleviate the problem. Many data compression methods have been proposed, but mainly for image processing, and a vast majority of them are not suitable for sensor nodes because of memory limitations, energy consumption and processing speed. To overcome this issue, the authors of this research have chosen the Run-Length Encoding (RLE) and Adaptive Huffman Encoding (AHE) data compression techniques, as they can be executed on sensor nodes. Both RLE and AHE are capable of balancing compression ratio and energy utilization. In this paper, a hybrid method comprising RLE and AHE, named H-RLEAHE, is proposed and further investigated for sensor nodes. In order to verify the efficacy of the data compression algorithms, simulations were run and the results compared for compression with RLE, AHE, H-RLEAHE, and without any compression, across five distinct scenarios. The results demonstrate RLE's efficiency, as it surpasses the alternative data compression methods in terms of energy efficiency, network speed, packet delivery rate, and residual energy throughout all iterations.
23

Shen, D., J. Wang, X. Cheng, Y. Rui, and S. Ye. "Integration of 2-D hydraulic model and high-resolution LiDAR-derived DEM for floodplain flow modeling." Hydrology and Earth System Sciences Discussions 12, no. 2 (February 13, 2015): 2011–46. http://dx.doi.org/10.5194/hessd-12-2011-2015.

Abstract:
The rapid progress of Light Detection And Ranging (LiDAR) technology has made the acquisition and application of high-resolution digital elevation model (DEM) data increasingly popular, especially with regard to the study of floodplain flow modeling. However, high-resolution DEM data include many redundant interpolation points, require a large amount of calculation, and do not match the size of the computational mesh. These disadvantages are a common problem for floodplain flow modeling studies. Two-dimensional (2-D) hydraulic modeling, a popular method of analyzing floodplain flow, offers high-precision elevation parameterization for the computational mesh while ignoring much of the micro-topographic information of the DEM data itself. We offer a flood simulation method that integrates 2-D hydraulic model results and high-resolution DEM data, enabling the calculation of flood water levels in DEM grid cells through local inverse distance weighted interpolation. To get rid of the false inundation areas during interpolation, it employs the run-length encoding method to mark the inundated DEM grid cells and determine the real inundation areas through the run-length boundary tracing technique, which solves the complicated problem of the connectivity between DEM grid cells. We constructed a 2-D hydraulic model for the Gongshuangcha polder, a flood storage area of Dongting Lake, using our integrated method to simulate the floodplain flow. The results demonstrate that this method can solve DEM-associated problems efficiently and simulate flooding processes with greater accuracy than DEM-only simulations.
24

Shen, D., J. Wang, X. Cheng, Y. Rui, and S. Ye. "Integration of 2-D hydraulic model and high-resolution lidar-derived DEM for floodplain flow modeling." Hydrology and Earth System Sciences 19, no. 8 (August 18, 2015): 3605–16. http://dx.doi.org/10.5194/hess-19-3605-2015.

Abstract:
Abstract. The rapid progress of lidar technology has made the acquirement and application of high-resolution digital elevation model (DEM) data increasingly popular, especially in regards to the study of floodplain flow. However, high-resolution DEM data pose several disadvantages for floodplain modeling studies; e.g., the data sets contain many redundant interpolation points, large numbers of calculations are required to work with data, and the data do not match the size of the computational mesh. Two-dimensional (2-D) hydraulic modeling, which is a popular method for analyzing floodplain flow, offers highly precise elevation parameterization for computational mesh while ignoring much of the micro-topographic information of the DEM data itself. We offer a flood simulation method that integrates 2-D hydraulic model results and high-resolution DEM data, thus enabling the calculation of flood water levels in DEM grid cells through local inverse distance-weighted interpolation. To get rid of the false inundation areas during interpolation, it employs the run-length encoding method to mark the inundated DEM grid cells and determine the real inundation areas through the run-length boundary tracing technique, which solves the complicated problem of connectivity between DEM grid cells. We constructed a 2-D hydraulic model for the Gongshuangcha detention basin, which is a flood storage area of Dongting Lake in China, by using our integrated method to simulate the floodplain flow. The results demonstrate that this method can solve DEM associated problems efficiently and simulate flooding processes with greater accuracy than simulations only with DEM.
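A simplified sketch of the run-length marking step described here (data layout assumed for illustration; the run-length boundary tracing that resolves connectivity is more involved than the adjacency test shown):

def inundated_runs(water_level, ground_elev):
    # Mark DEM cells whose interpolated water level exceeds ground elevation,
    # storing each row as (start_col, end_col) runs instead of a full mask.
    rows = []
    for wl_row, gr_row in zip(water_level, ground_elev):
        runs, c = [], 0
        while c < len(wl_row):
            if wl_row[c] > gr_row[c]:
                start = c
                while c < len(wl_row) and wl_row[c] > gr_row[c]:
                    c += 1
                runs.append((start, c - 1))
            else:
                c += 1
        rows.append(runs)
    return rows

def runs_touch(a, b):
    # 4-connectivity test between runs (start, end) in adjacent rows,
    # the basic check used when tracing connected inundation areas.
    return a[0] <= b[1] and b[0] <= a[1]

print(inundated_runs([[1.0, 2.0, 2.0, 0.5]], [[1.5, 1.0, 1.0, 1.0]]))
# [[(1, 2)]]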
25

Hu, Gongzhu, and Ze-Nian Li. "An X-Crossing Preserving Skeletonization Algorithm." International Journal of Pattern Recognition and Artificial Intelligence 07, no. 05 (October 1993): 1031–53. http://dx.doi.org/10.1142/s0218001493000522.

Abstract:
For an image consisting of wire-like patterns, skeletonization (thinning) is often necessary as the first step towards feature extraction. A serious problem, however, is that the intersections of the lines (X-crossings) are elongated when a thinning algorithm is applied to the image; that is, X-crossings are usually difficult to preserve as a result of thinning. In this paper, we present a non-iterative line thinning method that preserves X-crossings of the lines in the image. The skeleton is formed from the mid-points of the run-length encoding of the patterns. Line intersection areas are identified via a histogram analysis of the lengths of runs, and intersections are detected at locations where the sequences of runs merge or split.
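A toy sketch of the core idea of forming the skeleton from the mid-points of each scanline's run-length encoding (conventions assumed here; the histogram-based detection of crossing areas is not reproduced):

def skeleton_midpoints(mask):
    # For each scanline, every run of foreground pixels contributes
    # its mid-point (row, column) as a skeleton point.
    points = []
    for r, row in enumerate(mask):
        c = 0
        while c < len(row):
            if row[c]:
                start = c
                while c < len(row) and row[c]:
                    c += 1
                points.append((r, (start + c - 1) // 2))
            else:
                c += 1
    return points

print(skeleton_midpoints([[0, 1, 1, 1, 0],
                          [0, 0, 1, 0, 0]]))
# [(0, 2), (1, 2)]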
26

Koch, Alexander, Michael Schrempp, and Michael Kirsten. "Card-Based Cryptography Meets Formal Verification." New Generation Computing 39, no. 1 (April 2021): 115–58. http://dx.doi.org/10.1007/s00354-020-00120-0.

Abstract:
Card-based cryptography provides simple and practicable protocols for performing secure multi-party computation with just a deck of cards. For the sake of simplicity, this is often done using cards with only two symbols, e.g., ♣ and ♡. Within this paper, we also target the setting where all cards carry distinct symbols, catering for use-cases with commonly available standard decks and a weaker indistinguishability assumption. As of yet, the literature provides for only three protocols and no proofs for non-trivial lower bounds on the number of cards. As such complex proofs (handling very large combinatorial state spaces) tend to be involved and error-prone, we propose using formal verification for finding protocols and proving lower bounds. In this paper, we employ the technique of software bounded model checking (SBMC), which reduces the problem to a bounded state space, which is automatically searched exhaustively using a SAT solver as a backend. Our contribution is threefold: (a) we identify two protocols for converting between different bit encodings with overlapping bases, and then show them to be card-minimal. This completes the picture of tight lower bounds on the number of cards with respect to runtime behavior and shuffle properties of conversion protocols. For computing AND, we show that there is no protocol with finite runtime using four cards with distinguishable symbols and fixed output encoding, and give a four-card protocol with an expected finite runtime using only random cuts. (b) We provide a general translation of proofs for lower bounds to a bounded model checking framework for automatically finding card- and run-minimal (i.e., the protocol has a run of minimal length) protocols and to give additional confidence in lower bounds. We apply this to validate our method and, as an example, confirm our new AND protocol to have its shortest run for protocols using this number of cards. (c) We extend our method to also handle the case of decks on symbols ♣ and ♡, where we show run-minimality for two AND protocols from the literature.
27

Manikandan, V. M., Kandala Sree Rama Murthy, Bhavana Siddineni, Nancy Victor, Praveen Kumar Reddy Maddikunta, and Saqib Hakak. "A High-Capacity Reversible Data-Hiding Scheme for Medical Image Transmission Using Modified Elias Gamma Encoding." Electronics 11, no. 19 (September 28, 2022): 3101. http://dx.doi.org/10.3390/electronics11193101.

Abstract:
Reversible data hiding (RDH) is a recently emerged research domain in the field of information security, with broad applications in medical images and meta-data handling in the cloud. The amount of data that the healthcare sector must handle has increased exponentially due to population growth. Medical images and various reports such as discharge summaries and diagnosis reports are the most common data in the healthcare sector. RDH schemes are widely explored to embed the medical reports in the medical image instead of sending them as separate files; the receiver can extract the clinical reports and recover the original medical image for further diagnosis. This manuscript proposes a new lossless compression-based RDH scheme that creates vacant room for data hiding. The proposed scheme uses run-length encoding and a modified Elias gamma encoding scheme on higher-order bit planes for lossless compression. The conventional Elias gamma encoding process is modified in the proposed method to embed some additional data bits during the encoding process itself. The revised approach ensures a high embedding rate and lossless recovery of medical images at the receiver side. The experimental study is conducted on both natural images and medical images. The average embedding rate of the proposed scheme for the medical images is 0.75 bits per pixel, and the scheme achieved a 0 bit error rate during image recovery and data extraction. The experimental study shows that the newly introduced scheme performs better when compared with existing RDH schemes.
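For context, standard Elias gamma coding writes a positive integer n as floor(log2 n) zero bits followed by the binary form of n; a small sketch of that standard code is given below (the paper's modified version, which embeds extra data bits during encoding, is not reproduced here):

def elias_gamma(n: int) -> str:
    # Standard Elias gamma code for a positive integer n:
    # a unary prefix of len(bin(n)) - 1 zeros, then n in binary.
    assert n >= 1
    binary = bin(n)[2:]
    return "0" * (len(binary) - 1) + binary

for n in (1, 2, 5, 9):
    print(n, elias_gamma(n))
# 1 -> 1, 2 -> 010, 5 -> 00101, 9 -> 0001001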
28

Mahmood, Sawsan D., Maha A. Hutaihit, Tamara A. Abdulrazaq, Azmi Shawkat Abdulbaqi, and Nada Nasih Tawfeeq. "A Telemedicine based on EEG Signal Compression and Transmission." Webology 18, SI05 (October 30, 2021): 894–913. http://dx.doi.org/10.14704/web/v18si05/web18270.

Abstract:
As a result of RLE and DWT, an effective technique for compressing and transmitting EEG signals was developed in this study. With low percent root-mean-square difference (PRD) values, this algorithm's compression ratio (CR) is high. The life database had 50 EEG patient records. In clinical and research contexts, EEG signals are often recorded at sample rates between 250 and 2000 Hz. New EEG data-collection devices, on the other hand, may record at sampling rates exceeding 20,000 Hz. Time domain (TD) and frequency domain (FD) analysis of EEG data utilizing DWT retains the essential and major features of EEG signals. The thresholding and quantization of EEG signal coefficients are the next steps in implementing this suggested technique, followed by encoding the signals utilizing RLE, which improves CR substantially. A stable method for compressing EEG signals and transmission based on DWT (discrete wavelet transform) and RLE (run length encoding) is presented in this paper in order to improve and increase the compression of the EEG signals. According to the proposed model, CR, PRD, PRDN (normalized percentage root mean square difference), QS (quality score), and SNR (signal to noise ratio) are averaged over 50 records of EEG data and range from 44.0% to 0.36 percent to 5.87 percent to 143 percent to 3.53 percent to 59 percent, respectively.
29

Berger, Sebastian, Andrii Kravtsiv, Gerhard Schneider, and Denis Jordan. "Teaching Ordinal Patterns to a Computer: Efficient Encoding Algorithms Based on the Lehmer Code." Entropy 21, no. 10 (October 21, 2019): 1023. http://dx.doi.org/10.3390/e21101023.

Abstract:
Ordinal patterns are the common basis of various techniques used in the study of dynamical systems and nonlinear time series analysis. The present article focusses on the computational problem of turning time series into sequences of ordinal patterns. In a first step, a numerical encoding scheme for ordinal patterns is proposed. Utilising the classical Lehmer code, it enumerates ordinal patterns by consecutive non-negative integers, starting from zero. This compact representation considerably simplifies working with ordinal patterns in the digital domain. Subsequently, three algorithms for the efficient extraction of ordinal patterns from time series are discussed, including previously published approaches that can be adapted to the Lehmer code. The respective strengths and weaknesses of those algorithms are discussed, and further substantiated by benchmark results. One of the algorithms stands out in terms of scalability: its run-time increases linearly with both the pattern order and the sequence length, while its memory footprint is practically negligible. These properties enable the study of high-dimensional pattern spaces at low computational cost. In summary, the tools described herein may improve the efficiency of virtually any ordinal pattern-based analysis method, among them quantitative measures like permutation entropy and symbolic transfer entropy, but also techniques like forbidden pattern identification. Moreover, the concepts presented may allow for putting ideas into practice that up to now had been hindered by computational burden. To enable smooth evaluation, a function library written in the C programming language, as well as language bindings and native implementations for various numerical computation environments are provided in the supplements.
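A compact sketch of enumerating an ordinal pattern by a Lehmer code, as the abstract describes (this is a straightforward O(m^2) version with an assumed digit convention; the paper discusses faster variants and its own exact enumeration):

def ordinal_pattern_lehmer(window):
    # Map a window of m values to an integer in [0, m!) via the Lehmer code
    # of its rank permutation: digit i counts later elements smaller than
    # element i, and the digits are combined in the factorial number system.
    m = len(window)
    code = 0
    for i in range(m):
        smaller_after = sum(1 for j in range(i + 1, m) if window[j] < window[i])
        code = code * (m - i) + smaller_after
    return code

print(ordinal_pattern_lehmer([1.2, 3.4, 2.5]))  # 1
print(ordinal_pattern_lehmer([3.4, 2.5, 1.2]))  # 5 (fully decreasing, the largest code for m = 3)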
30

Bruni, Vittoria, Michela Tartaglione, and Domenico Vitulano. "A Signal Complexity-Based Approach for AM–FM Signal Modes Counting." Mathematics 8, no. 12 (December 4, 2020): 2170. http://dx.doi.org/10.3390/math8122170.

Abstract:
Frequency modulated signals appear in many applied disciplines, including geology, communication, biology and acoustics. They are naturally multicomponent, i.e., they consist of multiple waveforms, with specific time-dependent frequency (instantaneous frequency). In most practical applications, the number of modes—which is unknown—is needed for correctly analyzing a signal; for instance for separating each individual component and for estimating its instantaneous frequency. Detecting the number of components is a challenging problem, especially in the case of interfering modes. The Rényi Entropy-based approach has proven to be suitable for signal modes counting, but it is limited to well separated components. This paper addresses this issue by introducing a new notion of signal complexity. Specifically, the spectrogram of a multicomponent signal is seen as a non-stationary process where interference alternates with non-interference. Complexity concerning the transition between consecutive spectrogram sections is evaluated by means of a modified Run Length Encoding. Based on a spectrogram time-frequency evolution law, complexity variations are studied for accurately estimating the number of components. The presented method is suitable for multicomponent signals with non-separable modes, as well as time-varying amplitudes, showing robustness to noise.
31

Zhu, Jia-Ying, Ning Zhao, and Bin Yang. "Global Transcriptional Analysis of Olfactory Genes in the Head of Pine Shoot Beetle,Tomicus yunnanensis." Comparative and Functional Genomics 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/491748.

Abstract:
The most important proteins involved in olfaction include odorant binding protein (OBP), chemosensory protein (CSP), olfactory receptor (OR), and gustatory receptor (GR). Although exhaustive genomic analysis has revealed a large number of olfactory genes in a number of model insects, they remain poorly understood for most non-model species, mostly because the small antennae are challenging to collect, and traditional methods can generally isolate only one or a few genes at a time. Here, we present large-scale identification of members of the main olfactory gene families from the head of Tomicus yunnanensis using Illumina sequencing. In a single run, we obtained over 51.8 million raw reads. These reads were assembled into 57,142 unigenes, nearly 29,384 of which were functionally annotated in the NCBI non-redundant database. By in-depth analysis of the data, 11 OBPs, 8 CSPs, 18 ORs, and 8 GRs were retrieved. Sequences encoding full-length proteins were further characterised for one OBP and two CSPs. The obtained olfactory genes provide a major resource for further unraveling the molecular mechanisms of T. yunnanensis chemoperception. This study indicates that next-generation sequencing is an attractive approach for the efficient identification of olfactory genes from insects for which the genome sequence is unavailable.
32

Tseng, Chwan-Lu, Chun-Chieh Hsiao, I.-Chi Chou, Chia-Jung Hsu, Yi-Ju Chang, and Ren-Guey Lee. "DESIGN AND IMPLEMENTATION OF ECG COMPRESSION ALGORITHM WITH CONTROLLABLE PERCENT ROOT-MEAN-SQUARE DIFFERENCE." Biomedical Engineering: Applications, Basis and Communications 19, no. 04 (August 2007): 259–68. http://dx.doi.org/10.4015/s1016237207000343.

Abstract:
In this paper, the orthogonality of the coefficient matrices of wavelet filters is utilized to derive the energy equation relating a time-domain signal to its corresponding wavelet coefficients. Using the energy equation, the relationship between the wavelet coefficient error and the reconstruction error is obtained. The errors considered in this paper include the truncation error and the quantization error. This not only helps to control the reconstruction quality but also brings two advantages: (1) it is not necessary to perform the inverse transform to obtain the distortion caused by compression using the wavelet transform, which reduces computation effort; (2) by using the energy equation, we can search for a threshold value that attains a better compression ratio within the range of a pre-specified percent root-mean-square difference (PRD) value. A compression algorithm with run-length encoding is proposed based on the energy equation. Finally, the Matlab software and the MIT-BIH database are adopted to perform simulations verifying the feasibility of the proposed method. The algorithm is also implemented on a DSP chip to examine its practicality and suitability. The required computation time for an ECG segment is less than 0.0786 s, which is fast enough to process real-time signals. As a result, the proposed algorithm is applicable for implementation on mobile ECG recording devices.
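The "energy equation" referred to here is, for an orthonormal wavelet transform, a Parseval-type identity relating coefficient error to reconstruction error; in generic notation (which may differ from the paper's) it reads:

\| x - \hat{x} \|_2^2 \;=\; \sum_k \left( c_k - \hat{c}_k \right)^2 ,

where the c_k are the wavelet coefficients of the signal x and the \hat{c}_k are their truncated or quantized versions used to reconstruct \hat{x}. This is why the distortion caused by discarding or quantizing coefficients can be evaluated without performing the inverse transform.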
33

He, Hongyang, Yue Gao, Yong Zheng, and Yining Liu. "Intelligent Power Grid Video Surveillance Technology Based on Efficient Compression Algorithm Using Robust Particle Swarm Optimization." Wireless Power Transfer 2021 (December 30, 2021): 1–12. http://dx.doi.org/10.1155/2021/8192582.

Abstract:
Companies that produce energy transmit it to households via a power grid, a regulated power transmission hub that acts as a middleman. When a power grid fails, the whole area it serves is blacked out, so a power grid monitoring system is required to ensure smooth and effective functioning. Computer vision is among the most commonly utilized and active research applications in the world of video surveillance. Though a lot has been accomplished in the field of power grid surveillance, a more effective compression method is still required for large quantities of grid surveillance video data to be archived compactly and sent efficiently. Video compression has become increasingly essential with the advent of contemporary video processing algorithms, and an algorithm's efficacy in a power grid monitoring system depends on the rate at which video data is sent. A novel compression technique for video inputs from power grid monitoring equipment is described in this study. Due to a lack of redundancy handling in the visual input, traditional techniques are unable to fulfill current demand standards, and as a result the volume of data that needs to be saved and handled in real time grows. The proposed technique overcomes these problems through a Robust Particle Swarm Optimization (RPSO)-based run-length coding approach that encodes frames and reduces duplication in surveillance video using texture-information similarity. Based on experimental findings and assessments of different surveillance video sequences using varied parameters, our solution surpasses other current and relevant algorithms. A massive collection of surveillance videos was compressed at a 50% higher rate using the suggested approach than with existing methods.
34

Oe, Shunichiro. "Special Issue on Vision." Journal of Robotics and Mechatronics 11, no. 2 (April 20, 1999): 87. http://dx.doi.org/10.20965/jrm.1999.p0087.

Abstract:
The widely used term Computer Vision applies to when computers are substituted for human visual information processing. Since real-world objects, except for characters, symbols, figures and photographs created by people, are 3-dimensional (3-D), their two-dimensional (2-D) images obtained by a camera are produced by compressing 3-D information to 2-D. Many methods of 2-D image processing and pattern recognition have been developed and widely applied to industrial and medical processing, etc. Research work enabling computers to recognize 3-D objects by 3-D information extracted from 2-D images has been carried out in artificial intelligent robotics. Many techniques have been developed and some applied practically in scene analysis or 3-D measurement. These practical applications are based on image sensing, image processing, pattern recognition, image measurement, extraction of 3-D information, and image understanding. New techniques are constantly appearing. The title of this special issue is Vision, and it features 8 papers from basic computer vision theory to industrial applications. These papers include the following: Kohji Kamejima proposes a method to detect self-similarity in random image fields - the basis of human visual processing. Akio Nagasaka et al. developed a way to identify a real scene in real time using run-length encoding of video feature sequences. This technique will become a basis for active video recording and new robotic machine vision. Toshifumi Honda presents a method for visual inspection of solder joints by 3-D image analysis - a very important issue in the inspection of printed circuit boards. Saburo Okada et al. contribute a new technique on simultaneous measurement of shape and normal vector for specular objects. These methods are all useful for obtaining 3-D information. Masato Nakajima presents a human face identification method for security monitoring using 3-D gray-level information. Kenji Terada et al. propose a method of automatically counting passing people using image sensing. These two technologies are very useful in access control. Yoji Ogawa presents a new image processing method for automatic welding in turbid water under a non-preparatory environment. Liu Wei et al. develop a method for detection and management of cutting-tool wear using visual sensors. We are certain that all of these papers will contribute greatly to the development of vision systems in robotics and mechatronics.
35

Xin, Rui, and Tinghua Ai. "Run length coding and efficient compression of hexagonal raster data based on Gosper curve." Abstracts of the ICA 1 (July 15, 2019): 1–2. http://dx.doi.org/10.5194/ica-abs-1-411-2019.

Abstract:
Compared with the regular quadrilateral grid, the regular hexagonal grid is isotropic and has higher cell compactness and sampling density. This gives the regular hexagonal grid advantages in visual display, spatial analysis, and many other respects. However, studies of raster data, and the various encoding methods, mainly focus on the regular quadrilateral grid; research on hexagonal raster data is comparatively scarce. In this paper, encoding and compression for the regular hexagonal grid are studied. By introducing the Gosper curve, which has good spatial aggregation and takes the morphological structure of the regular hexagonal grid into account, a bidirectional correspondence between the Gosper curve and the regular hexagonal grid is established. A new encoding framework is then built to determine the Gosper code of each grid unit, and lossless compression is completed by performing run-length coding on adjacent code sets in the target region.
36

Sarhan, Ahmad. "Run length encoding based wavelet features for COVID-19 detection in X-rays." BJR|Open 3, no. 1 (January 2021): 20200028. http://dx.doi.org/10.1259/bjro.20200028.

Abstract:
Objectives: Introduced in this paper is a novel approach for the recognition of COVID-19 cases in chest X-rays. Methods: The discrete wavelet transform (DWT) is employed in the proposed system to obtain highly discriminative features from the input chest X-ray image. The selected features are then classified by a support vector machine (SVM) classifier as either normal or COVID-19 cases. The DWT is well known for its energy compression power. The proposed system uses the DWT to decompose the chest X-ray image into a group of approximation coefficients that contain a small number of high-energy (high-magnitude) coefficients. The proposed system introduces a novel coefficient selection scheme that employs hard thresholding combined with run-length encoding to extract only high-magnitude wavelet approximation coefficients. These coefficients are utilized as features representing the chest X-ray input image. After applying zero-padding to unify their lengths, the feature vectors are introduced to an SVM which classifies them as either normal or COVID-19 cases. Results: The proposed system yields promising results in terms of classification accuracy, which justifies further work in this direction. Conclusion: The DWT can produce a few features that are highly discriminative. By reducing the dimensionality of the feature space, the proposed system is able to reduce the number of required training images and diminish the space and time complexities of the system. Advances in knowledge: Exploiting and reshaping the approximation coefficients can produce discriminative features representing the input image.
37

Kim, Seung-Cheol, and Eun-Soo Kim. "Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods." Applied Optics 48, no. 6 (February 11, 2009): 1030. http://dx.doi.org/10.1364/ao.48.001030.

38

Goulahsen, Abdelaziz, Julien Saadé, and Frédéric Pétrot. "Line coding methods for high speed serial links." International Symposium on Microelectronics 2015, no. 1 (October 1, 2015): 000318–23. http://dx.doi.org/10.4071/isom-2015-wa63.

Abstract:
A line coding for high-speed serial transmission is defined by two major characteristics: the maximum guaranteed run length (RL), which is the number of consecutive identical bits, and the running disparity (RD, or DC balance), which is the difference between the number of 'zeros' and 'ones' in a frame. Both should be bounded: RL to ensure reliable clock recovery and RD to limit baseline wander. Another important parameter is the overhead predictability. This parameter may be critical for applications that need regular synchronization, but for other applications, especially if a variable transfer rate is handled by the upper-layer protocol, a statistical value of this parameter is good enough. In this paper, we propose two programmable line codings which bound RL and RD with fixed or variable overhead. The resulting overhead for the line coding we propose is shown to be the lowest among the existing methods, up to 10 times lower than well-known encoding methods. The fixed-overhead line coding is based on a generalization of the polarity bit approach and can be dynamically adapted to link quality and the environment. First we propose a line coding which bounds the RL, and then we propose another which bounds the RD. Finally, we combine both methods to build a DC-balanced and run-length-limited line coding.
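The two characteristics named here are easy to compute for a candidate frame; a small sketch using the definitions given in the abstract (helper names are illustrative, not from the paper):

def max_run_length(bits: str) -> int:
    # Longest run of consecutive identical bits in the frame.
    best = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def running_disparity(bits: str) -> int:
    # Difference between the number of ones and zeros (DC balance).
    return bits.count("1") - bits.count("0")

frame = "1100011110"
print(max_run_length(frame))     # 4
print(running_disparity(frame))  # 2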
APA, Harvard, Vancouver, ISO, and other styles
39

Domshlak, C., J. Hoffmann, and A. Sabharwal. "Friends or Foes? On Planning as Satisfiability and Abstract CNF Encodings." Journal of Artificial Intelligence Research 36 (December 21, 2009): 415–69. http://dx.doi.org/10.1613/jair.2817.

Full text
Abstract:
Planning as satisfiability, as implemented in, for instance, the SATPLAN tool, is a highly competitive method for finding parallel step-optimal plans. A bottleneck in this approach is to *prove the absence* of plans of a certain length. Specifically, if the optimal plan has N steps, then it is typically very costly to prove that there is no plan of length N-1. We pursue the idea of leading this proof within solution length preserving abstractions (over-approximations) of the original planning task. This is promising because the abstraction may have a much smaller state space; related methods are highly successful in model checking. In particular, we design a novel abstraction technique based on which one can, in several widely used planning benchmarks, construct abstractions that have exponentially smaller state spaces while preserving the length of an optimal plan. Surprisingly, the idea turns out to appear quite hopeless in the context of planning as satisfiability. Evaluating our idea empirically, we run experiments on almost all benchmarks of the international planning competitions up to IPC 2004, and find that even hand-made abstractions do not tend to improve the performance of SATPLAN. Exploring these findings from a theoretical point of view, we identify an interesting phenomenon that may cause this behavior. We compare various planning-graph based CNF encodings F of the original planning task with the CNF encodings F_abs of the abstracted planning task. We prove that, in many cases, the shortest resolution refutation for F_abs can never be shorter than that for F. This suggests a fundamental weakness of the approach, and motivates further investigation of the interplay between declarative transition-systems, over-approximating abstractions, and SAT encodings.
APA, Harvard, Vancouver, ISO, and other styles
40

Arizanovic, Boban, and Vladan Vuckovic. "Efficient image compression and decompression algorithms for OCR systems." Facta universitatis - series: Electronics and Energetics 31, no. 3 (2018): 461–85. http://dx.doi.org/10.2298/fuee1803461a.

Full text
Abstract:
This paper presents efficient new image compression and decompression methods for document images, intended for use in the pre-processing stage of an OCR system designed for the needs of the "Nikola Tesla Museum" in Belgrade. The proposed image compression methods exploit the Run-Length Encoding (RLE) algorithm and an algorithm based on document character contour extraction, while an iterative scanline fill algorithm is used for image decompression. The image compression and decompression methods are compared with the JBIG2 and JPEG2000 image compression standards. Segmentation accuracy results for ground-truth documents are obtained in order to evaluate the proposed methods. Results show that the proposed methods outperform JBIG2 compression in time complexity, providing up to 25 times lower processing time at the expense of worse compression ratio, and outperform the JPEG2000 image compression standard, providing up to 4-fold improvement in compression ratio. Finally, time complexity results show that the presented methods are sufficiently fast for a real-time character segmentation system.
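The RLE building block mentioned here for bi-level document images can be sketched as follows; the alternating-run convention (white run first) is an assumption for illustration and may differ from the paper's exact format.

```python
# Each binary row is stored as alternating run lengths, starting with the
# length of the initial white (0) run, which may be zero.
import numpy as np

def rle_encode_row(row):
    runs, value, count = [], 0, 0
    for pixel in row:
        if pixel == value:
            count += 1
        else:
            runs.append(count)
            value, count = pixel, 1
    runs.append(count)
    return runs

page = np.array([[0, 0, 1, 1, 1, 0],
                 [1, 1, 1, 0, 0, 0]])
print([rle_encode_row(r) for r in page])   # [[2, 3, 1], [0, 3, 3]]
```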
APA, Harvard, Vancouver, ISO, and other styles
41

J.S. Al-janabi, Rana, Shroouq J.S. Al-janabi, and Zinah Hussein Toman. "New method for Increasing watermarked image quality and security." Journal of Al-Qadisiyah for computer science and mathematics 9, no. 2 (November 20, 2017). http://dx.doi.org/10.29304/jqcm.2017.9.2.316.

Full text
Abstract:
Recently, information has been transmitted electronically; it can be stored and manipulated easily by computer, so it should be transferred in a secure way. Watermarking is a technique that can be used to secure data. In this paper, a new watermarking algorithm is proposed that relies on the integer wavelet transform, arithmetic encoding, and adaptive run-length encoding. Adaptive run-length encoding is used to compress the watermark before the embedding process. Arithmetic encoding is used to compress the data of the original image that must be retrieved from the watermarked image, so that the original image can be recovered without any differences after the watermark extraction process. To avoid the problem of the traditional run-length encoding technique, which may produce a file larger than the original, adaptive run-length encoding is suggested for compressing the watermark in the cover image. The adaptive compression can be performed several times on the watermark using different numbers of bits to represent the run counts, producing several run vectors; the results are compared and the smallest vector is selected. A smaller vector means that the watermarked image will be of higher quality, the capacity will be better, and the adaptive selection ensures that the embedding process will be more secure.
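A minimal sketch of the adaptive idea, assuming the scheme simply re-encodes the run counts with several candidate bit widths and keeps the smallest bit stream; the function names and the splitting of over-long runs are assumptions made for illustration.

```python
# Try several bit widths for the run counts and keep the smallest encoding.
def rle_runs(data):
    runs, value, count = [], data[0], 0
    for item in data:
        if item == value:
            count += 1
        else:
            runs.append((value, count))
            value, count = item, 1
    runs.append((value, count))
    return runs

def encoded_size_bits(runs, count_bits, symbol_bits=8):
    max_run = (1 << count_bits) - 1
    pairs = 0
    for _, count in runs:
        pairs += -(-count // max_run)   # runs longer than max_run are split
    return pairs * (symbol_bits + count_bits)

data = [7] * 300 + [3, 3, 9] + [0] * 40
runs = rle_runs(data)
best = min(range(2, 9), key=lambda b: encoded_size_bits(runs, b))
print(best, encoded_size_bits(runs, best))   # 8 80
```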
APA, Harvard, Vancouver, ISO, and other styles
42

Husseen, A. H., S. Sh Mahmud, and R. J. Mohammed. "Image Compression Using Proposed Enhanced Run Length Encoding Algorithm." Ibn AL- Haitham Journal For Pure and Applied Sciences 24, no. 1 (May 17, 2017). http://dx.doi.org/10.30526/24.1.803.

Full text
Abstract:
In this paper, we present a proposed enhancement of image compression using the RLE algorithm. The proposal decreases the size of the compressed image, whereas the original method, used primarily for compressing binary images [1], mostly increases the size of the original image when applied to colour images. The enhanced algorithm is tested on a sample of ten BMP 24-bit true-colour images; an application built in Visual Basic 6.0 shows the file size before and after compression and computes the compression ratio for both the RLE and the enhanced RLE algorithms.
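For reference, the compression ratio reported in such tests can be computed as below; the exact definition used by the authors may differ (here it is the percentage reduction in size).

```python
# Percentage reduction in size; negative values mean the "compressed" file grew.
def compression_ratio(original_bytes, compressed_bytes):
    return (1 - compressed_bytes / original_bytes) * 100.0

print(round(compression_ratio(921_654, 512_300), 1))   # 44.4
```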
APA, Harvard, Vancouver, ISO, and other styles
43

Lehmann, Gaetan, and David Legland. "Efficient N-Dimensional surface estimation using Crofton formula and run-length encoding." Insight Journal, February 23, 2012. http://dx.doi.org/10.54294/wdu86d.

Full text
Abstract:
Unlike the area in 2D or the volume in 3D, the perimeter and the surface area are not easily measurable in a discretized image. In this article we describe a method based on the Crofton formula to measure these two parameters in a discretized image. The accuracy of the method is discussed and tested on several known objects. An algorithm based on the run-length encoding of binary objects is presented and compared to other approaches. An implementation is provided and integrated into the LabelObject/LabelMap framework contributed earlier by the authors.
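A rough Python sketch of the connection between the Crofton formula and run-length encoding, for the 2D perimeter with only two scan directions and unit pixel spacing; the paper's implementation is N-dimensional and uses more careful direction weights, so treat this as an assumption-laden illustration.

```python
# The perimeter of a binary object can be estimated from the number of
# object/background crossings along horizontal and vertical scan lines,
# which is exactly twice the number of runs found by RLE of each line.
import numpy as np

def count_runs(line):
    padded = np.concatenate(([0], line, [0]))
    return int(np.count_nonzero(np.diff(padded) == 1))

def crofton_perimeter(binary_image):
    runs_rows = sum(count_runs(row) for row in binary_image)
    runs_cols = sum(count_runs(col) for col in binary_image.T)
    # each run contributes two crossings; weight pi/4 per crossing
    return (np.pi / 4.0) * 2.0 * (runs_rows + runs_cols)

disk = np.zeros((101, 101), dtype=np.uint8)
yy, xx = np.mgrid[:101, :101]
disk[(yy - 50) ** 2 + (xx - 50) ** 2 <= 40 ** 2] = 1
print(crofton_perimeter(disk))   # about 254, close to 2*pi*40 = 251.3
```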
APA, Harvard, Vancouver, ISO, and other styles
44

Prayoga, Eka, and Kristien Margi Suryaningrum. "IMPLEMENTASI ALGORITMA HUFFMAN DAN RUN LENGTH ENCODING PADA APLIKASI KOMPRESI BERBASIS WEB." Jurnal Ilmiah Teknologi Infomasi Terapan 4, no. 2 (April 30, 2018). http://dx.doi.org/10.33197/jitter.vol4.iss2.2018.154.

Full text
Abstract:
The increasing use of digital media in everyday life indirectly increases the need for data storage, so a method is needed to handle it; one option is data compression. Compression is a technique for packing data to save the storage media used; it can also serve other needs such as data backup, data transmission, and data security. Compression is generally performed on a computer, because each symbol displayed is represented by different bits. The authors use the Huffman and Run Length Encoding algorithms in the compression process, where the input is a TXT file. The purpose of this study is to examine the implementation of the combination of the two algorithms and to determine the ratio of file sizes between the original file and the compressed file. The system is implemented as a web-based application to make its features easy to use; it covers both the compression and decompression processes. The compression stage performs the size reduction, and the decompression stage restores the file to its original form and size. The study was conducted on 5 test files and showed that the decompressed file size differs from the original because the compression process is lossy. Keywords: Compression, TXT, Decompression, Huffman, Run Length Encoding
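The combination of the two algorithms can be sketched as follows; this is an illustration of chaining RLE with Huffman coding on text, not the authors' implementation.

```python
# The text is run-length encoded and the resulting (symbol, count) pairs
# are then Huffman coded into a bit stream.
import heapq
from collections import Counter

def rle(text):
    pairs, prev, count = [], text[0], 0
    for ch in text:
        if ch == prev:
            count += 1
        else:
            pairs.append((prev, count))
            prev, count = ch, 1
    pairs.append((prev, count))
    return pairs

def huffman_codes(symbols):
    # standard heap-based Huffman construction over the symbol frequencies
    heap = [[freq, i, [sym, ""]] for i, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {heap[0][2][0]: "0"}
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], *lo[2:], *hi[2:]])
    return dict(heap[0][2:])

pairs = rle("aaaabbbccccccd")
codes = huffman_codes(pairs)
bitstream = "".join(codes[p] for p in pairs)
print(pairs, codes, bitstream)
```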
APA, Harvard, Vancouver, ISO, and other styles
45

Chutke, Sravanthi, Nandhitha N.M., and Praveen Kumar Lendale. "Video compression based on zig-zag 3D DCT and run-length encoding for multimedia communication systems." International Journal of Pervasive Computing and Communications, July 25, 2022. http://dx.doi.org/10.1108/ijpcc-01-2022-0012.

Full text
Abstract:
Purpose With the advent of technology, a huge amount of data is transmitted and received over the internet. Large bandwidth and storage are required for exchanging and storing this data, so compression of the data to be transmitted over the channel is unavoidable. The main purpose of the proposed system is to use the bandwidth effectively: the videos are compressed at the transmitter's end and reconstructed at the receiver's end. Compression techniques also reduce storage requirements. Design/methodology/approach The paper proposes a novel compression technique for three-dimensional (3D) videos using a zig-zag 3D discrete cosine transform. The method applies a 3D discrete cosine transform to the videos, followed by a zig-zag scanning process. Finally, a run-length encoding technique converts the data into a single bit stream for transmission. The videos are reconstructed using the inverse 3D discrete cosine transform, inverse zig-zag scanning (quantization) and inverse run-length coding techniques. The proposed method is simple and reduces the complexity of conventional techniques. Findings Coding reduction, code word reduction, peak signal to noise ratio (PSNR), mean square error, compression percent and compression ratio values are calculated, and the superiority of the proposed method over conventional methods is shown. Originality/value With zig-zag quantization and run-length encoding using the 3D discrete cosine transform for 3D video compression, compression of up to 90% is achieved with a PSNR of 41.98 dB. The proposed method can be used in multimedia applications where bandwidth, storage and data expenses are the major issues.
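The zig-zag scan that precedes the run-length encoding step can be sketched for a 2D block as below (the paper extends the idea to three dimensions); the traversal order shown is the usual JPEG-style convention and is an assumption here.

```python
# Zig-zag scanning linearises a transform block so that the trailing,
# mostly-zero high-frequency coefficients form long runs for RLE.
import numpy as np

def zigzag(block):
    h, w = block.shape
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda p: (p[0] + p[1],
                                  p[1] if (p[0] + p[1]) % 2 == 0 else p[0]))
    return np.array([block[i, j] for i, j in order])

block = np.arange(16).reshape(4, 4)
print(zigzag(block))   # starts 0, 1, 4, 8, 5, 2, ...
```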
APA, Harvard, Vancouver, ISO, and other styles
46

Feng, Xiu-Fang, Shi-Xian Nan, Rui-Qing Ma, and Hao Zhang. "A Lossless Compression and Encryption Method for Remote Sensing Image Using LWT, Rubik’s Cube and 2D-CCM." International Journal of Bifurcation and Chaos 32, no. 10 (August 2022). http://dx.doi.org/10.1142/s0218127422501498.

Full text
Abstract:
This paper proposes a lossless encryption-compression algorithm for large-scale remote sensing images. First, the red, green and blue components of the color image are compressed by lossless predictive encoding. Then, the lifting wavelet transform (LWT) is used to decompose the encoding results, and a new Rubik's cube transformation is introduced to scramble the decomposed coefficients, using the chaotic sequence generated by a 2D Cubic-Chebyshev map (2D-CCM). The initial values of the 2D-CCM are obtained from the chi-square test values of the three components, which makes the algorithm dependent on the plaintext image. After that, the scrambled coefficients are thresholded, and the position sequences generated in the process are encrypted and compressed by the proposed encrypted run-length encoding (E-RLE). The processed coefficients are further compressed by Huffman encoding. Finally, the result is obtained by a novel helix diffusion driven by the chaotic sequence. Experimental results show that this algorithm achieves a higher lossless compression ratio with lower time complexity, and that the encryption scheme has higher security.
APA, Harvard, Vancouver, ISO, and other styles
47

"Lossless Compression of Medical Images based on the Differential Probability of Images." International Journal of Computers 14 (April 30, 2020). http://dx.doi.org/10.46300/9108.2020.14.1.

Full text
Abstract:
Lossless compression is crucial for the remote transmission of large-scale medical images and for the retention of complete medical diagnostic information. A lossless compression method for medical images based on the differential probability of the image is proposed in this study. The DICOM-format medical image is decorrelated by the differential method, and the difference matrix is optimally coded by Huffman coding to obtain the best compression. Experimental results obtained using the new method were compared with those using the Lempel–Ziv–Welch, modified run-length encoding, and block-bit allocation methods to verify its effectiveness. For 2-D medical images, the lossless compression of the proposed method is best when the object region occupies more than 20% of the image. For 3-D medical images, the proposed method has the highest compression ratio among the compared methods. The proposed method can be used directly for lossless compression of DICOM images.
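The differential (predictive) decorrelation step described above can be sketched as follows; the simple horizontal predictor is an assumption and the paper's predictor may differ, but the principle of producing a near-zero difference matrix for Huffman coding is the same.

```python
# Replace each pixel by its difference from the previous pixel in the row,
# which concentrates the histogram around zero for the entropy coder; the
# cumulative sum inverts it exactly, so the scheme is lossless.
import numpy as np

def differential_encode(image):
    diff = image.astype(np.int16)
    diff[:, 1:] -= image[:, :-1].astype(np.int16)   # row-wise differences
    return diff

def differential_decode(diff):
    return np.cumsum(diff, axis=1).astype(np.uint8)

img = np.array([[100, 101, 103, 103],
                [ 99, 100, 100, 102]], dtype=np.uint8)
d = differential_encode(img)
assert np.array_equal(differential_decode(d), img)
print(d)
```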
APA, Harvard, Vancouver, ISO, and other styles
48

"An Efficient DWT and Tucker Decomposition with H.264 Video Compression for Multimedia Applications." International Journal of Engineering and Advanced Technology 8, no. 6S (September 6, 2019): 495–500. http://dx.doi.org/10.35940/ijeat.f1101.0886s19.

Full text
Abstract:
Over the last thirty years, intensive research has been carried out on video compression techniques, which have now matured and are used in a large number of applications. In this paper, we present video compression using H.264 compression with Tucker decomposition. The Tucker decomposition of the tensor yields the largest Kn sub-tensors and their eigenvectors, which are used with run-length encoding to compress the frames of the video. The DWT is used to separate each frame into sub-images, and TD is applied to the DWT coefficients to compact the energy of the sub-images. The experimental results show that the proposed method yields a higher compression ratio with good PSNR.
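For reference, the PSNR figure quoted in such evaluations is computed as below for 8-bit frames.

```python
# Peak signal-to-noise ratio in dB between an original frame and its
# reconstruction, assuming 8-bit data (peak value 255).
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```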
APA, Harvard, Vancouver, ISO, and other styles
49

Lusiana, Veronica, Imam Husni Al Amin, and Felix Andreas Sutanto. "Pengaruh Peningkatan Kualitas Citra Menggunakan Modifikasi Kontras Pada Kompresi Data RLE." Building of Informatics, Technology and Science (BITS) 4, no. 1 (July 1, 2022). http://dx.doi.org/10.47065/bits.v4i1.1646.

Full text
Abstract:
Data compression is needed so that storage requirements and data transfer times become more efficient. This study compresses image data using the Run-Length Encoding (RLE) method. The test data are the original (grey-scale) images and images whose quality has been improved (image enhancement) through contrast modification, using the contrast-stretching method. The experiments aim to determine the extent to which the RLE method becomes less effective for images with complex colour intensity. The contrast-modified image has a more complex colour intensity, i.e. more varied pixel values. The number of RLE pairs (p, q) obtained for the contrast-modified image is smaller than for the original image, with the (p, q) pair ratio ranging from 0.64% to 1.59%. Thus, although this image has more varied pixel values than its original, it can still yield a reduction in the number of RLE pairs (p, q).
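The contrast-stretching enhancement referred to above can be sketched as a linear remapping of grey levels; the min-max form shown is an assumption about the exact variant used.

```python
# Linearly remap grey levels so the darkest pixel becomes 0 and the
# brightest 255, widening the spread of pixel values before RLE.
import numpy as np

def contrast_stretch(gray):
    lo, hi = float(gray.min()), float(gray.max())
    if hi == lo:
        return gray.copy()
    return ((gray.astype(np.float64) - lo) * 255.0 / (hi - lo)).astype(np.uint8)
```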
APA, Harvard, Vancouver, ISO, and other styles
50

"A Novel Bit-plane Compression based Reversible Data Hiding Scheme with Arnold Transform." International Journal of Engineering and Advanced Technology 9, no. 5 (June 30, 2020): 417–23. http://dx.doi.org/10.35940/ijeat.e9517.069520.

Full text
Abstract:
Reversible data hiding (RDH) is an active research area in the field of information security. An RDH scheme allows the transmission of a secret message by embedding it into a cover image, while the receiver can recover the original cover image along with extracting the secret message. In this paper, we propose a bit-plane compression based RDH scheme to hide a sequence of secret message bits in a grayscale image. In the proposed method, a selected bit plane of the cover image is compressed using the run-length encoding (RLE) scheme. The RLE sequence is then efficiently encoded as a binary sequence using the Elias gamma encoding method. The Elias gamma encoded bit sequence, concatenated with the secret message bits, replaces the selected bit plane after a sequence of Arnold transforms. The Arnold transform helps find a scrambled version of the bit plane that is very close to the original bit plane, which preserves the visual quality of the stego image. Since RLE is a lossless compression technique, the receiver can recover the original image. An experimental study of the proposed scheme on images from a standard image dataset (the USC-SIPI image dataset) shows that it outperforms the existing scheme in terms of the visual quality of the stego image without compromising the data embedding rate.
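The Elias gamma code used to serialise the RLE run lengths is a standard construction and can be sketched as follows.

```python
# Elias gamma code: a positive integer n is written as floor(log2 n) zero
# bits followed by the binary representation of n.
def elias_gamma(n):
    assert n >= 1
    binary = bin(n)[2:]
    return "0" * (len(binary) - 1) + binary

print([elias_gamma(n) for n in (1, 2, 5, 9)])   # ['1', '010', '00101', '0001001']
```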
APA, Harvard, Vancouver, ISO, and other styles
