Journal articles on the topic 'Sawtooth Compressed Row Storage'

Consult the top 26 journal articles for your research on the topic 'Sawtooth Compressed Row Storage.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Bani-Ismail, Basel, and Ghassan Kanaan. "Comparing Different Sparse Matrix Storage Structures as Index Structure for Arabic Text Collection." International Journal of Information Retrieval Research 2, no. 2 (April 2012): 52–67. http://dx.doi.org/10.4018/ijirr.2012040105.

Abstract:
In this study, the authors evaluate and compare the storage efficiency of different sparse matrix storage structures used as an index structure for an Arabic text collection, together with their corresponding sparse matrix-vector multiplication algorithms for performing query processing in an Information Retrieval (IR) system. The study covers six sparse matrix storage structures: Coordinate Storage (COO), Compressed Sparse Row (CSR), Compressed Sparse Column (CSC), Block Coordinate (BCO), Block Sparse Row (BSR), and Block Sparse Column (BSC). The evaluation depends on the storage space requirements of each storage structure and on the efficiency of the query processing algorithm. The experimental results demonstrate that CSR is more efficient in terms of storage space requirements and query processing time than the other sparse matrix storage structures. The results also show that CSR requires the least amount of disk space and performs best in terms of query processing time among the point entry storage structures (COO, CSC), while BSR requires the least disk space and performs best in terms of query processing time among the block entry storage structures (BCO, BSC).
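The storage trade-off measured here is easy to see on a toy term-document matrix. Below is a minimal, illustrative sketch (not code from the paper) that builds COO and CSR representations in plain Python and counts how many numbers each one stores.

```python
# Illustrative sketch: COO vs. CSR storage for a tiny term-document matrix.
# Rows = terms, columns = documents; entries are term frequencies.
dense = [
    [0, 2, 0, 1],
    [3, 0, 0, 0],
    [0, 0, 4, 5],
]

# COO: one (row, col, value) triple per nonzero entry.
coo = [(i, j, v) for i, row in enumerate(dense) for j, v in enumerate(row) if v != 0]

# CSR: values and column indices in row order, plus a row-pointer array.
values, col_idx, row_ptr = [], [], [0]
for row in dense:
    for j, v in enumerate(row):
        if v != 0:
            values.append(v)
            col_idx.append(j)
    row_ptr.append(len(values))

print("COO stores", 3 * len(coo), "numbers")                     # 3 per nonzero
print("CSR stores", 2 * len(values) + len(row_ptr), "numbers")   # 2 per nonzero + (rows + 1)
```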
2

Mohammed, Saira Banu Jamal, M. Rajasekhara Babu, and Sumithra Sriram. "GPU Implementation of Image Convolution Using Sparse Model with Efficient Storage Format." International Journal of Grid and High Performance Computing 10, no. 1 (January 2018): 54–70. http://dx.doi.org/10.4018/ijghpc.2018010104.

Abstract:
With the growth of data parallel computing, the role of GPU computing in non-graphics applications such as image processing has become a research focus. Convolution is an integral operation in filtering, smoothing, and edge detection. In this article, the process of convolution is realized as a sparse linear system and is solved using Sparse Matrix Vector Multiplication (SpMV). The Compressed Sparse Row (CSR) format of SpMV shows better CPU performance compared to normal convolution. To overcome the stalling of threads for short rows in the GPU implementation of CSR SpMV, a more efficient model is proposed, which uses the Adaptive-Compressed Row Storage (A-CSR) format to implement the same. Using CSR in the convolution process achieves a 1.45x and a 1.159x increase in speed compared to the normal convolution of image smoothing and edge detection operations, respectively. An average speedup of 2.05x is achieved for the image smoothing technique and 1.58x for the edge detection technique on the GPU platform using the adaptive CSR format.
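The CSR SpMV kernel this work builds on assigns work row by row, which is why very short rows leave GPU threads idle. A minimal CPU-side sketch of the row-wise kernel (illustrative only, not the authors' GPU code):

```python
def csr_spmv(row_ptr, col_idx, values, x):
    """y = A @ x for a matrix stored in CSR; each row is an independent dot product."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        acc = 0.0
        # Rows with few nonzeros finish this inner loop almost immediately,
        # which on a GPU translates into stalled threads within a warp.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# Tiny example matrix in CSR form.
row_ptr = [0, 2, 3, 5]
col_idx = [1, 3, 0, 2, 3]
values  = [2.0, 1.0, 3.0, 4.0, 5.0]
print(csr_spmv(row_ptr, col_idx, values, [1.0, 1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```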
3

Tanaka, Teruo, Ryo Otsuka, Akihiro Fujii, Takahiro Katagiri, and Toshiyuki Imamura. "Implementation of D-Spline-Based Incremental Performance Parameter Estimation Method with ppOpen-AT." Scientific Programming 22, no. 4 (2014): 299–307. http://dx.doi.org/10.1155/2014/310879.

Abstract:
In automatic performance tuning (AT), a primary aim is to optimize performance parameters that are suitable for certain computational environments in ordinary mathematical libraries. For AT, an important issue is to reduce the estimation time required for optimizing performance parameters. To reduce the estimation time, we previously proposed the Incremental Performance Parameter Estimation method (IPPE method). This method estimates optimal performance parameters by inserting suitable sampling points based on computational results for a fitting function. As the fitting function, we introduced d-Spline, which is highly adaptable and requires little estimation time. In this paper, we report the implementation of the IPPE method with ppOpen-AT, a scripting language (set of directives) with features that reduce the workload of developers of mathematical libraries that have AT features. To confirm the effectiveness of the IPPE method for runtime-phase AT, we applied the method to sparse matrix-vector multiplication (SpMV), in which the block size of the blocked compressed row storage (BCRS) sparse matrix structure was used as the performance parameter. The results from the experiment show that the cost of AT using the IPPE method in the runtime phase was negligibly small. Moreover, using the obtained optimal value, the execution time for the mathematical library SpMV was reduced by 44% when comparing compressed row storage with BCRS (block size 8).
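As a rough illustration of the tuning problem only: the sketch below measures a candidate block size by timing one SpMV run per candidate and keeping the fastest, which is an exhaustive search rather than the incremental d-Spline-based sampling the IPPE method uses. The callable passed in is a stand-in for a real BCRS SpMV kernel, not anything from ppOpen-AT.

```python
import time

def pick_block_size(run_spmv_with_block_size, candidates=(1, 2, 4, 8, 16)):
    """Return the candidate BCRS block size with the shortest measured SpMV time.
    `run_spmv_with_block_size` is any callable that executes one SpMV for a given block size."""
    best_size, best_time = None, float("inf")
    for b in candidates:
        start = time.perf_counter()
        run_spmv_with_block_size(b)          # stand-in for a blocked SpMV kernel call
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_size, best_time = b, elapsed
    return best_size

# Usage with a dummy workload standing in for the real kernel:
print(pick_block_size(lambda b: sum(range(100_000 // b))))
```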
4

Yan, Kaizhuang, Yongxian Wang, and Wenbin Xiao. "A New Compression and Storage Method for High-Resolution SSP Data Based-on Dictionary Learning." Journal of Marine Science and Engineering 10, no. 8 (August 10, 2022): 1095. http://dx.doi.org/10.3390/jmse10081095.

Abstract:
The sound speed profile data of seawater provide an important basis for underwater acoustic modeling and analysis, sonar performance evaluation, and underwater acoustic assisted decision-making. The data volume of a high-resolution sound speed profile is vast, and the demand for data storage space is high, which severely limits the analysis and application of high-resolution sound speed profile data in the field of marine acoustics. This paper uses the dictionary learning method to achieve sparse coding of the high-resolution sound speed profile and uses a compressed sparse row method to compress and store the resulting sparse data matrix. The influence of related parameters on the compression rate and the recovery error is analyzed and discussed, as are differences in compression processing methods across different scenarios. Through comparative experiments, the average error of the compressed sound speed profile data is less than 0.5 m/s, the maximum error is less than 3 m/s, and the data volume is about 10% to 15% of the original data volume. This method significantly reduces the storage requirements of high-resolution sound speed profile data while ensuring its accuracy, providing technical support for efficient and convenient access to high-resolution sound speed profiles.
5

Knopp, T., and A. Weber. "Local System Matrix Compression for Efficient Reconstruction in Magnetic Particle Imaging." Advances in Mathematical Physics 2015 (2015): 1–7. http://dx.doi.org/10.1155/2015/472818.

Abstract:
Magnetic particle imaging (MPI) is a quantitative method for determining the spatial distribution of magnetic nanoparticles, which can be used as tracers for cardiovascular imaging. For reconstructing a spatial map of the particle distribution, the system matrix describing the magnetic particle imaging equation has to be known. Due to the complex dynamic behavior of the magnetic particles, the system matrix is commonly measured in a calibration procedure. In order to speed up the reconstruction process, recently, a matrix compression technique has been proposed that makes use of a basis transformation in order to compress the MPI system matrix. By thresholding the resulting matrix and storing the remaining entries in compressed row storage format, only a fraction of the data has to be processed when reconstructing the particle distribution. In the present work, it is shown that the image quality of the algorithm can be considerably improved by using a local threshold for each matrix row instead of a global threshold for the entire system matrix.
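The key idea, keeping a separate threshold for each matrix row before storing the surviving entries in compressed row storage, can be sketched in a few lines. This is an illustration of the concept, not the authors' implementation, and the per-row threshold rule (a fraction of the row maximum) is an assumption for the example.

```python
def threshold_rows_to_csr(matrix, keep_fraction=0.1):
    """Zero out small entries using a per-row (local) threshold, then store the rest in CSR.
    Each row's threshold is keep_fraction times that row's largest magnitude (assumed rule)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in matrix:
        local_thresh = keep_fraction * max(abs(v) for v in row)   # local, per-row threshold
        for j, v in enumerate(row):
            if v != 0 and abs(v) >= local_thresh:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

# Rows with very different magnitudes each keep a sensible number of entries,
# which is the advantage over one global threshold for the whole system matrix.
example = [[100.0, 5.0, 0.1], [0.002, 0.001, 0.0005], [10.0, 0.2, 9.0]]
print(threshold_rows_to_csr(example))
```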
6

AlAhmadi, Sarah, Thaha Mohammed, Aiiad Albeshri, Iyad Katib, and Rashid Mehmood. "Performance Analysis of Sparse Matrix-Vector Multiplication (SpMV) on Graphics Processing Units (GPUs)." Electronics 9, no. 10 (October 13, 2020): 1675. http://dx.doi.org/10.3390/electronics9101675.

Abstract:
Graphics processing units (GPUs) have delivered a remarkable performance for a variety of high performance computing (HPC) applications through massive parallelism. One such application is sparse matrix-vector (SpMV) computations, which are central to many scientific, engineering, and other applications including machine learning. No single SpMV storage or computation scheme provides consistent and sufficiently high performance for all matrices due to their varying sparsity patterns. An extensive literature review reveals that the performance of SpMV techniques on GPUs has not been studied in sufficient detail. In this paper, we provide a detailed performance analysis of SpMV performance on GPUs using four notable sparse matrix storage schemes (compressed sparse row (CSR), ELLPACK (ELL), hybrid ELL/COO (HYB), and compressed sparse row 5 (CSR5)), five performance metrics (execution time, giga floating point operations per second (GFLOPS), achieved occupancy, instructions per warp, and warp execution efficiency), five matrix sparsity features (nnz, anpr, nprvariance, maxnpr, and distavg), and 17 sparse matrices from 10 application domains (chemical simulations, computational fluid dynamics (CFD), electromagnetics, linear programming, economics, etc.). Subsequently, based on the deeper insights gained through the detailed performance analysis, we propose a technique called the heterogeneous CPU-GPU Hybrid (HCGHYB) scheme. It utilizes both the CPU and GPU in parallel and provides better performance over the HYB format by an average speedup of 1.7x. Heterogeneous computing is an important direction for SpMV and other application areas. Moreover, to the best of our knowledge, this is the first work where the SpMV performance on GPUs has been discussed in such depth. We believe that this work on SpMV performance analysis and the heterogeneous scheme will open up many new directions and improvements for the SpMV computing field in the future.
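Several of the sparsity features used in this kind of analysis can be read directly off the CSR row-pointer array. A small sketch (mine, not the paper's code) computing nnz, the average and maximum nonzeros per row, and the per-row variance:

```python
def sparsity_features(row_ptr):
    """Compute nnz, average nonzeros per row (anpr), per-row variance and max from CSR row_ptr."""
    per_row = [row_ptr[i + 1] - row_ptr[i] for i in range(len(row_ptr) - 1)]
    nnz = row_ptr[-1]
    anpr = nnz / len(per_row)
    npr_variance = sum((n - anpr) ** 2 for n in per_row) / len(per_row)
    return {"nnz": nnz, "anpr": anpr, "nprvariance": npr_variance, "maxnpr": max(per_row)}

print(sparsity_features([0, 2, 3, 5, 9]))   # 4 rows with 2, 1, 2 and 4 nonzeros
```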
7

Christnatalis, Christnatalis, Bachtiar Bachtiar, and Rony Rony. "Comparative Compression of Wavelet Haar Transformation with Discrete Wavelet Transform on Colored Image Compression." JOURNAL OF INFORMATICS AND TELECOMMUNICATION ENGINEERING 3, no. 2 (January 20, 2020): 202–9. http://dx.doi.org/10.31289/jite.v3i2.3154.

Abstract:
In this research, the algorithms used to compress images are the Haar wavelet transformation method and the discrete wavelet transform algorithm. Image compression based on the Haar wavelet transform uses a calculation scheme of decomposition in the row direction followed by decomposition in the column direction. In discrete wavelet transform-based image compression, the size of the compressed image produced is closer to optimal because information that is of little use and barely perceived or seen by humans is eliminated, so that the data are still considered usable even though they are compressed. The data used were collected directly, and the test results show that digital image compression based on the Haar wavelet transformation achieves a compression ratio of 41%, while the discrete wavelet transform reaches 29.5%. Based on the research problem regarding the efficiency of storage media, it can be concluded that the right algorithm to choose is the Haar wavelet transformation algorithm. To improve compression results, it is recommended to use wavelet transforms other than Haar, such as Daubechies, Symlets, and so on.
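One decomposition level of the Haar transform is just pairwise averages and differences, applied along rows and then along columns, as the abstract describes. A minimal sketch for even-sized images (illustrative only; practical codecs use the orthonormal scaling and add quantisation and entropy coding):

```python
def haar_1d(signal):
    """One Haar level: pairwise averages (approximation) followed by pairwise differences (detail)."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diff = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg + diff

def haar_2d_one_level(image):
    """Apply the 1-D Haar step to every row, then to every column of the result."""
    rows_done = [haar_1d(row) for row in image]
    cols_done = [haar_1d([rows_done[i][j] for i in range(len(rows_done))])
                 for j in range(len(rows_done[0]))]
    # cols_done is column-major; transpose it back to row-major order.
    return [list(t) for t in zip(*cols_done)]

img = [[52, 55, 61, 66], [70, 61, 64, 73], [63, 59, 55, 90], [67, 61, 68, 62]]
print(haar_2d_one_level(img))
```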
8

Zhang, Xi Xi, Yu Jing Jia, and Guang Zhen Cheng. "The Water Sump Cleaning Machine by Vacuum Suction." Applied Mechanics and Materials 201-202 (October 2012): 785–88. http://dx.doi.org/10.4028/www.scientific.net/amm.201-202.785.

Abstract:
This article describes a vacuum water sump cleaning machine used to clean up coal mine water sumps. The cleaning machine is composed of a mechanical structure and electrical control devices. Its parts include a travelling flatbed, a mud storage tank, vacuum pumps, a suction pipe, mud tubes, swing devices, control valves, and a pressure line. When working, under the action of vacuum pumping, the cleaning machine evacuates the mud storage tank through the vacuum air feeder. As the vacuum level in the tank increases, the outside atmospheric pressure pushes the mud into the tank along the suction tube. When the mud storage tank is full, the vacuum pump automatically shuts down. After the vacuum valve is closed and the pressure valve is opened, the slime in the tank is driven by compressed air into the mine car through the mud discharge tube. The layout of this cleaning machine is reasonable, and it is flexible and convenient to operate, so it significantly reduces labor intensity and improves the efficiency of the clearance work.
9

Ji, Guo Liang, Yang De Feng, Wen Kai Cui, and Liang Gang Lu. "Implementation Procedures of Parallel Preconditioning with Sparse Matrix Based on FEM." Applied Mechanics and Materials 166-169 (May 2012): 3166–73. http://dx.doi.org/10.4028/www.scientific.net/amm.166-169.3166.

Abstract:
A technique to assemble the global stiffness matrix stored in a sparse storage format and two parallel solvers for sparse linear systems based on FEM are presented. The assembly method uses a data structure named the associated node at intermediate stages to finally arrive at the Compressed Sparse Row (CSR) format. The associated nodes record information about the connection of nodes in the mesh. The technique can greatly reduce memory usage because it only stores the nonzero elements of the global stiffness matrix. This method is simple and effective. The solvers are restarted GMRES iterative solvers with Jacobi and sparse approximate inverse (SPAI) preconditioning, respectively. Some numerical experiments show that both preconditioners can improve the convergence of the iterative method, and SPAI is more powerful than Jacobi in the sense of reducing the number of iterations and improving parallel efficiency. Both solvers can be used to solve large sparse linear systems.
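The assembly idea, accumulating element contributions and only then fixing the CSR layout, can be sketched as follows. This is a simplification (plain (row, column, value) accumulation) rather than the associated-node scheme described in the abstract.

```python
from collections import defaultdict

def assemble_to_csr(n, element_contributions):
    """Accumulate duplicate (i, j) contributions, then lay the result out in CSR order."""
    accum = defaultdict(float)
    for i, j, v in element_contributions:       # e.g. entries of local element stiffness matrices
        accum[(i, j)] += v
    values, col_idx, row_ptr = [], [], [0]
    for i in range(n):
        for j in sorted(c for (r, c) in accum if r == i):
            values.append(accum[(i, j)])
            col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

# Two overlapping "elements" contributing to a 3x3 global stiffness matrix.
contribs = [(0, 0, 2.0), (0, 1, -1.0), (1, 0, -1.0), (1, 1, 2.0),
            (1, 1, 2.0), (1, 2, -1.0), (2, 1, -1.0), (2, 2, 2.0)]
print(assemble_to_csr(3, contribs))
```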
10

Mahmoud, Mohammed, Mark Hoffmann, and Hassan Reza. "Developing a New Storage Format and a Warp-Based SpMV Kernel for Configuration Interaction Sparse Matrices on the GPU." Computation 6, no. 3 (August 24, 2018): 45. http://dx.doi.org/10.3390/computation6030045.

Abstract:
Sparse matrix-vector multiplication (SpMV) can be used to solve diverse-scaled linear systems and eigenvalue problems that exist in numerous and varied scientific applications. One of the scientific applications that SpMV is involved in is known as Configuration Interaction (CI). CI is a linear method for solving the nonrelativistic Schrödinger equation for quantum chemical multi-electron systems, and it can deal with the ground state as well as multiple excited states. In this paper, we have developed a hybrid approach in order to deal with CI sparse matrices. The proposed model includes a newly developed hybrid format for storing CI sparse matrices on the Graphics Processing Unit (GPU). In addition to the newly developed format, the proposed model includes an SpMV kernel for multiplying the CI matrix (in the proposed format) by a vector using the C language and the Compute Unified Device Architecture (CUDA) platform. The proposed SpMV kernel is a vector kernel that uses the warp approach. We gauged the newly developed model in terms of two primary factors, memory usage and performance. Our proposed kernel was compared to the cuSPARSE library and the CSR5 (Compressed Sparse Row 5) format and outperformed both.
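A "vector kernel using the warp approach" means that a group of (typically 32) GPU threads shares one matrix row, each thread accumulating a strided slice before a reduction. A CPU-side sketch of that access pattern (illustrative only; the paper's kernel is CUDA C and uses its own hybrid format):

```python
def warp_row_spmv(row_ptr, col_idx, values, x, warp_size=32):
    """Simulate a warp-per-row CSR SpMV: each 'lane' sums a strided slice of the row,
    then the partial sums are reduced, mirroring a GPU vector kernel's cooperative access."""
    y = []
    for i in range(len(row_ptr) - 1):
        start, end = row_ptr[i], row_ptr[i + 1]
        partial = [0.0] * warp_size
        for lane in range(warp_size):
            for k in range(start + lane, end, warp_size):   # strided access per lane
                partial[lane] += values[k] * x[col_idx[k]]
        y.append(sum(partial))                               # warp-level reduction
    return y

row_ptr = [0, 3, 4]
col_idx = [0, 1, 2, 1]
values  = [1.0, 2.0, 3.0, 4.0]
print(warp_row_spmv(row_ptr, col_idx, values, [1.0, 1.0, 1.0]))   # [6.0, 4.0]
```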
11

Saad, Aouatif, Adil Echchelh, Mohammed Hattabi, and Ganaoui El. "An improved computational method for non isothermal resin transfer moulding simulation." Thermal Science 15, suppl. 2 (2011): 275–89. http://dx.doi.org/10.2298/tsci100928016s.

Abstract:
Optimizing the simulation time of the non-isothermal filling process without losing effectiveness remains a challenge in resin transfer moulding process simulation. In this work, we are interested in developing an improved computational approach based on the finite element method coupled with a control volume approach. Simulations can predict the position of the resin flow front and the pressure and temperature distributions at each time step. Our optimization approach is based first on a modification of the conventional control volume/finite element method, and then on adapting the iterative conjugate gradient algorithm to the Compressed Sparse Row (CSR) storage scheme. The approach has been validated by comparison with available results. The proposed method yielded smoother flow fronts and reduced the error in the pressure and temperature patterns that plagued conventional fixed grid methods. The solution accuracy was considerably higher than that of the conventional method, since we could proceed with mesh refinement without a significant increase in computation time. Various thermal engineering situations can be simulated using the developed code.
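Adapting conjugate gradient to CSR only changes the matrix-vector product inside the iteration; everything else operates on dense vectors. A compact sketch of that coupling (a generic textbook CG, not the authors' CVFEM code):

```python
def csr_matvec(row_ptr, col_idx, values, x):
    return [sum(values[k] * x[col_idx[k]] for k in range(row_ptr[i], row_ptr[i + 1]))
            for i in range(len(row_ptr) - 1)]

def conjugate_gradient(row_ptr, col_idx, values, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite A stored in CSR."""
    x = [0.0] * len(b)
    r = b[:]                     # residual b - A x with x = 0
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = csr_matvec(row_ptr, col_idx, values, p)
        alpha = rs_old / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# 1-D Laplacian-like SPD matrix [[2,-1,0],[-1,2,-1],[0,-1,2]] in CSR; the solution is [1, 1, 1].
row_ptr, col_idx = [0, 2, 5, 7], [0, 1, 0, 1, 2, 1, 2]
values = [2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0]
print(conjugate_gradient(row_ptr, col_idx, values, [1.0, 0.0, 1.0]))
```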
12

Sadineni, Praveen Kumar. "Compression Methodologies for Columnar Database Optimization." Journal of Computational Science and Intelligent Technologies 3, no. 1 (2022): 24–32. http://dx.doi.org/10.53409/mnaa/jcsit/e202203012432.

Abstract:
Today’s life is completely dependent on data. Conventional relational databases take longer to respond to queries because they are built for row-wise data storage and retrieval. Due to their efficient read and write operations to and from hard discs, which reduce the time it takes for queries to produce results, columnar databases have recently overtaken traditional databases. To execute Business Intelligence and create decision-making systems, vast amounts of data gathered from various sources are required in data warehouses, where columnar databases are primarily created. Since the data are stacked closely together, and the seek time is reduced, columnar databases perform queries more quickly. With aggregation queries to remove unnecessary data, they allow several compression techniques for faster data access. To optimise the efficiency of columnar databases, various compression approaches, including NULL Suppression, Dictionary Encoding, Run Length Encoding, Bit Vector Encoding, and Lempel Ziv Encoding, are discussed in this work. Database operations are conducted on the compressed data to demonstrate the decrease in memory needs and speed improvements.
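Two of the listed techniques are easy to illustrate on a single column: run-length encoding collapses repeated values, and dictionary encoding replaces wide values with small integer codes. A minimal sketch (illustrative, not tied to any particular columnar engine):

```python
def run_length_encode(column):
    """Collapse runs of equal adjacent values into (value, run_length) pairs."""
    runs = []
    for v in column:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

def dictionary_encode(column):
    """Map each distinct value to a small integer code; the dictionary is stored once."""
    codes, encoded = {}, []
    for v in column:
        encoded.append(codes.setdefault(v, len(codes)))
    return codes, encoded

city = ["Oslo", "Oslo", "Oslo", "Bergen", "Bergen", "Oslo"]
print(run_length_encode(city))    # [('Oslo', 3), ('Bergen', 2), ('Oslo', 1)]
print(dictionary_encode(city))    # ({'Oslo': 0, 'Bergen': 1}, [0, 0, 0, 1, 1, 0])
```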
13

Ahmed, Muhammad, Sardar Usman, Nehad Ali Shah, M. Usman Ashraf, Ahmed Mohammed Alghamdi, Adel A. Bahadded, and Khalid Ali Almarhabi. "AAQAL: A Machine Learning-Based Tool for Performance Optimization of Parallel SPMV Computations Using Block CSR." Applied Sciences 12, no. 14 (July 13, 2022): 7073. http://dx.doi.org/10.3390/app12147073.

Abstract:
The sparse matrix-vector product (SpMV), considered one of the seven dwarfs (numerical methods of significance), is essential in high-performance real-world scientific and analytical applications requiring the solution of large sparse linear equation systems, where SpMV is a key computing operation. As the sparsity patterns of sparse matrices are unknown before runtime, we used machine learning-based performance optimization of the SpMV kernel by exploiting the structure of the sparse matrices using the Block Compressed Sparse Row (BCSR) storage format. As the structure of sparse matrices varies across application domains, optimizing the block size is important for reducing the overall execution time. Manual allocation of block sizes is error prone and time consuming. Thus, we propose AAQAL, a data-driven, machine learning-based tool that automates the process of data distribution and selection of near-optimal block sizes based on the structure of the matrix. We trained and tested the tool using different machine learning methods (decision tree, random forest, gradient boosting, ridge regressor, and AdaBoost) and nearly 700 real-world matrices from 43 application domains, including computer vision, robotics, and computational fluid dynamics. AAQAL achieved 93.47% of the maximum attainable performance, a substantial improvement compared to the manual or random selection of block sizes used in practice. This is the first attempt to exploit matrix structure using BCSR to select optimal block sizes for SpMV computations using machine learning techniques.
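Why the block size matters can be seen by counting how much explicit zero fill-in each candidate block size introduces: larger blocks reduce index overhead but pad more zeros. A rough sketch of that trade-off (my illustration, not part of AAQAL):

```python
def bcsr_fill_in(nonzeros, block):
    """For a square block size, count nonzero blocks and the zero padding BCSR would store.
    `nonzeros` is a set of (row, col) positions of nonzero entries."""
    blocks = {(i // block, j // block) for i, j in nonzeros}
    stored = len(blocks) * block * block           # every touched block is stored densely
    return len(blocks), stored - len(nonzeros)     # (number of blocks, padded zeros)

nz = {(0, 0), (0, 1), (1, 0), (1, 1), (4, 5), (5, 4), (7, 7)}
for b in (1, 2, 4):
    n_blocks, padding = bcsr_fill_in(nz, b)
    print(f"block {b}x{b}: {n_blocks} blocks stored, {padding} padded zeros")
```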
14

Zeng, Guangsen, and Yi Zou. "Leveraging Memory Copy Overlap for Efficient Sparse Matrix-Vector Multiplication on GPUs." Electronics 12, no. 17 (August 31, 2023): 3687. http://dx.doi.org/10.3390/electronics12173687.

Abstract:
Sparse matrix-vector multiplication (SpMV) is central to many scientific, engineering, and other applications, including machine learning. Compressed Sparse Row (CSR) is a widely used sparse matrix storage format. SpMV using the CSR format on GPU computing platforms is widely studied, where the access behavior of the GPU is often the performance bottleneck. NVIDIA's recent Ampere GPU architecture provides a new asynchronous memory copy instruction, memcpy_async, for more efficient data movement into shared memory. Leveraging the capability of this new memcpy_async instruction, we first propose the CSR-Partial-Overlap to carefully overlap the data copy from global memory to shared memory with computation, allowing us to take full advantage of the data transfer time. In addition, we design the dynamic batch partition and the dynamic threads distribution to achieve effective load balancing, avoid the overhead of fixing up partial sums, and improve thread utilization. Furthermore, we propose the CSR-Full-Overlap, based on the CSR-Partial-Overlap, which also takes into account the overlap of data transfer from host to device with SpMV kernel execution. The CSR-Full-Overlap unifies the two major overlaps in SpMV and hides the computation as much as possible behind the two important access behaviors of the GPU. This allows CSR-Full-Overlap to achieve the best performance gains from both overlaps. As far as we know, this paper is the first in-depth study of how memcpy_async can potentially be applied to help accelerate SpMV computation on GPU platforms. We compare CSR-Full-Overlap to the current state-of-the-art cuSPARSE, where our experimental results show an average 2.03x performance gain and up to a 2.67x performance gain.
15

Wang, Zhi, and Sinan Fang. "Three-Dimensional Inversion of Borehole-Surface Resistivity Method Based on the Unstructured Finite Element." International Journal of Antennas and Propagation 2021 (September 24, 2021): 1–13. http://dx.doi.org/10.1155/2021/5154985.

Abstract:
The electromagnetic wave signal from an electromagnetic field source generates induction signals after reaching the target geological body through the underground medium. The temporal and spatial distributions of the artificial or natural electromagnetic fields are obtained for exploring subsurface mineral resources and determining the subsurface geological structure in order to solve geological problems. The goals of electromagnetic data processing are to suppress noise, improve the signal-to-noise ratio, and invert the resistivity data. Inversion has always been a focus of research in the field of electromagnetic methods. In this paper, the three-dimensional borehole-surface resistivity method is explored based on the principle of geometric sounding, and a three-dimensional inversion algorithm for the borehole-surface resistivity method under arbitrary surface topography is proposed. The forward simulation starts from the partial differential equation and the boundary conditions satisfied by the total potential of the three-dimensional point current source field. Unstructured tetrahedral grids are then used to discretize the calculation area, as they can fit the complex subsurface structure and undulating surface topography well. The accuracy of the numerical solution is low near the point current source because of the rapid attenuation of the electric field and the sharply varying potential gradients there. Therefore, the mesh density is increased locally, in the vicinity of the source electrode and the measuring electrodes. This mesh refinement can effectively reduce the influence of the source point and its vicinity and improve the accuracy of the numerical solution. The stiffness matrix is stored in the Compressed Row Storage (CSR) format, and the final large linear system is solved using the Symmetric Successive Over-Relaxation Preconditioned Conjugate Gradient (SSOR-PCG) method. The limited-memory quasi-Newton method (L-BFGS) is used to optimize the objective function in the inversion calculation, and a double-loop recursive method is used to solve the normal equation obtained at each iteration, in order to avoid computing and storing the sensitivity matrix explicitly and to reduce the amount of calculation. The combined application of the above methods makes the 3D inversion algorithm efficient, accurate, and stable. A three-dimensional inversion test is performed on synthetic data from multiple theoretical geoelectric models with topography (a single anomaly beneath a valley and a single anomaly beneath a mountain) to verify the effectiveness of the proposed algorithm.
16

Ren, Zhi-guo, and Li Guo. "The Row Priority Single Vector Compressed Storage of Upper Trapezoidal Matrix." DEStech Transactions on Computer Science and Engineering, icicee (December 20, 2017). http://dx.doi.org/10.12783/dtcse/icicee2017/17192.

17

Xing, Longyue, Zhaoshun Wang, Zhezhao Ding, Genshen Chu, Lingyu Dong, and Nan Xiao. "An efficient sparse stiffness matrix vector multiplication using compressed sparse row storage format on AMD GPU." Concurrency and Computation: Practice and Experience, July 20, 2022. http://dx.doi.org/10.1002/cpe.7186.

18

Nhường, Lê Đắc, Lê Đăng Nguyên, and Lê Trọng Vĩnh. "Tối ưu không gian trạng thái của thuật toán AhoCorasick sử dụng kỹ thuật nén dòng và bảng chỉ số" [Optimizing the state space of the Aho-Corasick algorithm using row compression and index tables]. Các công trình nghiên cứu, phát triển và ứng dụng Công nghệ Thông tin và Truyền thông, September 12, 2014. http://dx.doi.org/10.32913/mic-ict-research-vn.v1.n29.117.

Abstract:
Pattern matching algorithms play an important role in most applications of information technology. For example, a Network Intrusion Detection System looks for evidence of malicious behavior by matching packet contents against known patterns. The study of pattern-matching algorithms is therefore a hot topic that many researchers are interested in. In this paper, we propose a new method to optimize the state storage of the Aho-Corasick pattern matching algorithm by using compressed row and index table techniques. Experimental results comparing the original Aho-Corasick algorithm with the improved algorithm installed in Snort show that our method achieves better results.
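The compression applied to the Aho-Corasick goto table mirrors compressed row storage: instead of a full states-by-alphabet matrix, each state's defined transitions are kept in two concatenated arrays plus a per-state index (row pointer). A minimal sketch of that layout (my illustration of the idea, not the paper's implementation; real Aho-Corasick matching also follows failure links, which are omitted here):

```python
def compress_goto_table(goto_rows, fail_state=0):
    """goto_rows[s] is a dict {symbol: next_state} holding only the defined transitions of state s.
    Returns a step function backed by CSR-like arrays: symbols/targets plus a per-state row pointer."""
    symbols, targets, row_ptr = [], [], [0]
    for transitions in goto_rows:
        for sym in sorted(transitions):
            symbols.append(sym)
            targets.append(transitions[sym])
        row_ptr.append(len(symbols))

    def step(state, sym):
        for k in range(row_ptr[state], row_ptr[state + 1]):
            if symbols[k] == sym:
                return targets[k]
        return fail_state            # undefined transition: fall back to the root in this sketch

    return step

# Goto table (trie) for the patterns {"he", "she"}, with states numbered by hand for the example.
goto = [{"h": 1, "s": 3}, {"e": 2}, {}, {"h": 4}, {"e": 5}, {}]
step = compress_goto_table(goto)
print(step(0, "s"), step(3, "h"), step(4, "e"), step(2, "x"))   # 3 4 5 0
```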
19

Tigadi, Gayatri B., and Manjuladevi T. H. "Lossless and Lossy Image Compression Based on Data Folding." International Journal of Electronics and Electrical Engineering, July 2014, 45–49. http://dx.doi.org/10.47893/ijeee.2014.1125.

Abstract:
Image compression plays a very important role in image processing, especially when images have to be sent over the internet. Since imaging techniques produce prohibitive amounts of data, compression is necessary for storage and communication purposes. Many current compression schemes provide very high compression rates but with considerable loss of quality. On the other hand, in some areas of medicine, it may be sufficient to maintain high image quality only in diagnostically important regions, called regions of interest. In the proposed work, images are compressed using a data folding technique, which uses the property of adjacent neighbour redundancy for prediction. In this method, column folding is applied first, followed by row folding, iteratively, until the image size reduces to a predefined value; arithmetic encoding is then applied, which yields the compressed image before the data are transmitted. In this paper, lossless compression is achieved only in the region of interest, and the method is mainly suitable for medical images.
20

Ding, Youde, Yuan Liao, Ji He, Jianfeng Ma, Xu Wei, Xuemei Liu, Guiying Zhang, and Jing Wang. "Enhancing genomic mutation data storage optimization based on the compression of asymmetry of sparsity." Frontiers in Genetics 14 (June 1, 2023). http://dx.doi.org/10.3389/fgene.2023.1213907.

Abstract:
Background: With the rapid development of high-throughput sequencing technology and the explosive growth of genomic data, storing, transmitting and processing massive amounts of data has become a new challenge. Achieving fast lossless compression and decompression according to the characteristics of the data, in order to speed up data transmission and processing, requires research on relevant compression algorithms. Methods: In this paper, a compression algorithm for sparse asymmetric gene mutations (CA_SAGM), based on the characteristics of sparse genomic mutation data, was proposed. The data were first sorted on a row-first basis so that neighboring non-zero elements were as close as possible to each other. The data were then renumbered using the reverse Cuthill-McKee sorting technique. Finally, the data were compressed into the compressed sparse row (CSR) format and stored. We analyzed and compared the results of the CA_SAGM, coordinate format (COO) and compressed sparse column (CSC) algorithms for sparse asymmetric genomic data. Nine types of single-nucleotide variation (SNV) data and six types of copy number variation (CNV) data from the TCGA database were used as the subjects of this study. Compression and decompression time, compression and decompression rate, compression memory and compression ratio were used as evaluation metrics. The correlation between each metric and the basic characteristics of the original data was further investigated. Results: The experimental results showed that the COO method had the shortest compression time, the fastest compression rate and the largest compression ratio, and thus the best compression performance. CSC compression performance was the worst, and CA_SAGM compression performance was between the two. When decompressing the data, CA_SAGM performed best, with the shortest decompression time and the fastest decompression rate. COO decompression performance was the worst. With increasing sparsity, the COO, CSC and CA_SAGM algorithms all exhibited longer compression and decompression times, lower compression and decompression rates, larger compression memory and lower compression ratios. When the sparsity was large, the compression memory and compression ratio of the three algorithms showed no differences, but the remaining indexes still differed. Conclusion: CA_SAGM is an efficient compression algorithm that combines good compression and decompression performance for sparse genomic mutation data.
21

Du, Yang, Guoqiang Long, Donghua Jiang, Xiuli Chai, and Junhe Han. "Optical image encryption algorithm based on a new four-dimensional memristive hyperchaotic system and compressed sensing." Chinese Physics B, August 11, 2023. http://dx.doi.org/10.1088/1674-1056/acef08.

Abstract:
Some existing image encryption schemes use simple low-dimensional chaotic systems, which makes the algorithms insecure and vulnerable to brute-force attacks and cracking. Some algorithms have issues such as weak correlation with plaintext images, poor image reconstruction quality, and low efficiency in transmission and storage. To solve these issues, this paper proposes an optical image encryption algorithm based on a new four-dimensional memristive hyperchaotic system (4-D MHS) and compressed sensing (CS). Firstly, this paper proposes a new 4-D MHS, which has a larger key space, richer dynamic behavior and more complex hyperchaotic characteristics. The introduction of CS can reduce the image size and the transmission burden on hardware devices. The introduction of double random phase encoding (DRPE) gives this algorithm the ability to process data in parallel and a multi-dimensional coding space, and the hyperchaotic characteristics of the 4-D MHS make up for the nonlinear deficiency of DRPE. Secondly, a construction method for the deterministic chaotic measurement matrix (DCMM) is proposed. Using the DCMM can not only save a lot of transmission bandwidth and storage space, but also ensure good quality of the reconstructed images. Thirdly, the proposed confusion and diffusion methods are related to the plaintext images, requiring both the four hyperchaotic sequences of the 4-D MHS and row and column keys based on the plaintext images. The generation process of the hyperchaotic sequences is closely related to the hash value of the plaintext images. Therefore, this algorithm has high sensitivity to plaintext images. The experimental testing and comparative analysis results show that the proposed algorithm has good security and effectiveness.
22

Siddika, Aiasha. "Study and Performance Analysis of Different Techniques for Computing Data Cubes." Global Journal of Computer Science and Technology, December 9, 2019, 33–42. http://dx.doi.org/10.34257/gjcstcvol19is3pg33.

Abstract:
Data is an integrated form of observable and recordable facts in operational or transactional systems in the data warehouse. Usually, a data warehouse stores aggregated and historical data in multi-dimensional schemas. Data only have value to end-users when formulated and represented as information, and information is a composed collection of facts for decision making. Cube computation is the most efficient way to answer these decision-making queries and retrieve information from data. Online Analytical Processing (OLAP) is used for this purpose of cube computation. There are two types of OLAP: Relational Online Analytical Processing (ROLAP) and Multidimensional Online Analytical Processing (MOLAP). This research worked on ROLAP and MOLAP and then compared both methods to find out how computation time varies with data volume. Generally, a large data warehouse produces extensive output, and it occupies a larger space with a huge number of empty data cells. To solve this problem, data compression is inevitable. Therefore, Compressed Row Storage (CRS) is applied to reduce the empty cell overhead.
23

Wang, Hu, Enying Li, and Guangyao Li. "A Parallel Reanalysis Method Based on Approximate Inverse Matrix for Complex Engineering Problems." Journal of Mechanical Design 135, no. 8 (May 30, 2013). http://dx.doi.org/10.1115/1.4024368.

Abstract:
The combined approximations (CA) method is an effective reanalysis approach providing high quality results. The CA method is suitable for a wide range of structural optimization problems including linear reanalysis, nonlinear reanalysis and eigenvalue reanalysis. However, with increasing complexity and scale of engineering problems, the efficiency of the CA method might not be guaranteed. A major bottleneck of the CA is how to obtain reduced basis vectors efficiently. Therefore, a modified CA method, based on approximation of the inverse matrix, is suggested. Based on the symmetric successive over-relaxation (SSOR) and compressed sparse row (CSR), the efficiency of CA method is shown to be much improved and corresponding storage space markedly reduced. In order to further improve the efficiency, the suggested strategy is implemented on a graphic processing unit (GPU) platform. To verify the performance of the suggested method, several case studies are undertaken. Compared with the popular serial CA method, the results demonstrate that the suggested GPU-based CA method is an order of magnitude faster for the same level of accuracy.
24

Xu, Long, and Xiao Tao Hao. "Data Migration System Over Distributed Cloud Environment with Internet of Things for Smart Communications." Journal of Computing in Engineering, July 5, 2020, 19–29. http://dx.doi.org/10.46532/jce.20200703.

Abstract:
Data Migration (DM) is a significant event for an organization in the revolution of the Internet of Things (IoT) and the Smart City. The number of service providers has increased dramatically, since the data generated through IoT in the cloud is enormous. Furthermore, the cloud is becoming the tool of choice for additional cloud storage services alongside IoT for the Smart City. Nevertheless, as more private information/data is transferred to the cloud via social media sites like Baidu WangPan, DropBox, etc., data security and privacy issues are questioned. To overcome such drawbacks, this paper proposes a security enhancement for the DM system over the cloud environment for the Smart City. This work is partitioned into two phases, where Phase I is a storage phase and Phase II is a retrieval phase. In the storage phase, the data is compressed using MHA and then encrypted utilizing the MHECC algorithm to secure data for the Smart City. Subsequently, some of the data-related features are extracted. After feature extraction (FE), the distributed cloud server is selected using MGWO. Then, DM is performed dynamically on various cloud servers (CS) using ANFIS. The retrieval phase is the reverse of the storage phase: the data is decrypted using the same MHECC algorithm and then decompressed utilizing the same MHA algorithm. After that, row transposition is performed on both component A and component B. Finally, the components are summed up, and the original data is obtained. Experimental results contrast the proposed and the prevailing systems in terms of key generation time, encryption time, decryption time, security, and threshold.
25

Zhang, Qian, and Jianguo Wang. "Incremental association rules update algorithm based on the sort compression matrix." Journal of Intelligent & Fuzzy Systems, May 25, 2023, 1–12. http://dx.doi.org/10.3233/jifs-231252.

Abstract:
Association rule algorithms have always been a research hotspot in the field of data mining. In the context of today's big data era, in order to obtain association rules efficiently and update them effectively, an incremental association rule update algorithm based on a sorted compression matrix (FBSCM) is proposed on the basis of the original fast update pruning (FUP) algorithm, to address the shortcoming of repeatedly scanning the transaction dataset. Firstly, the algorithm maps the transaction dataset to a Boolean matrix and changes the storage mode of the matrix (that is, adding two columns and a row vector). Secondly, the matrix is compressed many times during the generation of frequent k-itemsets. After that, the items in the matrix are sorted incrementally according to the support of the itemsets. Finally, the original string comparison operation is replaced by the vector product of the matrix columns. Experimental results and analysis show that the FBSCM algorithm achieves better time performance than the traditional FUP algorithm for different incremental dataset sizes, different minimum support thresholds and different feature datasets, especially when the incremental transaction volume is large or the minimum support is small.
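The core data structure, a Boolean transaction-by-item matrix in which itemset support is obtained by combining item columns, can be sketched as follows. This is illustrative only; the FBSCM algorithm adds the extra count rows/columns, matrix compression and support-ordered sorting described above on top of this basic idea.

```python
def to_boolean_matrix(transactions, items):
    """Rows = transactions, columns = items; an entry is 1 if the item occurs in the transaction."""
    return [[1 if item in t else 0 for item in items] for t in transactions]

def support(matrix, item_indices):
    """Support of an itemset = number of rows whose selected columns are all 1
    (equivalently, the sum of the element-wise product of those columns)."""
    return sum(1 for row in matrix if all(row[j] for j in item_indices))

transactions = [{"milk", "bread"}, {"bread", "butter"}, {"milk", "bread", "butter"}, {"milk"}]
items = ["milk", "bread", "butter"]
m = to_boolean_matrix(transactions, items)
print(support(m, [0]))       # support({milk})        -> 3
print(support(m, [0, 1]))    # support({milk, bread}) -> 2
```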
26

Aliaga, José I., Hartwig Anzt, Enrique S. Quintana‐Ortí, and Andrés E. Tomás. "Sparse matrix‐vector and matrix‐multivector products for the truncated SVD on graphics processors." Concurrency and Computation: Practice and Experience, August 4, 2023. http://dx.doi.org/10.1002/cpe.7871.

Abstract:
Many practical algorithms for numerical rank computations implement an iterative procedure that involves repeated multiplications of a vector, or a collection of vectors, with both a sparse matrix and its transpose. Unfortunately, the realization of these sparse products in current high performance libraries often delivers much lower arithmetic throughput when the matrix involved in the product is transposed. In this work, we propose a hybrid sparse matrix layout, named CSRC, that combines the flexibility of some well-known sparse formats to offer a number of appealing properties: (1) CSRC can be obtained at low cost from the popular CSR (compressed sparse row) format; (2) CSRC has similar storage requirements as CSR; and especially, (3) the implementation of the sparse product kernels delivers high performance for both the direct product and its transposed variant on modern graphics accelerators, thanks to a significant reduction of atomic operations compared to a conventional implementation based on CSR. This solution thus renders considerably higher performance when integrated into an iterative algorithm for the truncated singular value decomposition (SVD), such as the randomized SVD or, as demonstrated in the experimental results, the block Golub–Kahan–Lanczos algorithm.
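The asymmetry targeted here is visible even in scalar code: the direct CSR product gathers along each row, while the transposed product has to scatter into the output vector, which is what forces atomic updates in a parallel GPU version. A plain-Python sketch of the two access patterns (not the CSRC kernels themselves):

```python
def csr_spmv(row_ptr, col_idx, values, x):
    """y = A @ x: each output entry is a private accumulation over one row (gather pattern)."""
    return [sum(values[k] * x[col_idx[k]] for k in range(row_ptr[i], row_ptr[i + 1]))
            for i in range(len(row_ptr) - 1)]

def csr_spmv_transposed(row_ptr, col_idx, values, x, n_cols):
    """y = A^T @ x: every row scatters contributions into arbitrary output positions,
    so parallel versions need atomics (or a different layout, such as CSRC)."""
    y = [0.0] * n_cols
    for i in range(len(row_ptr) - 1):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[col_idx[k]] += values[k] * x[i]
    return y

row_ptr, col_idx = [0, 2, 3], [0, 2, 1]
values = [1.0, 2.0, 3.0]              # A = [[1, 0, 2], [0, 3, 0]]
print(csr_spmv(row_ptr, col_idx, values, [1.0, 1.0, 1.0]))          # [3.0, 3.0]
print(csr_spmv_transposed(row_ptr, col_idx, values, [1.0, 1.0], 3)) # [1.0, 3.0, 2.0]
```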