Journal articles on the topic "Compressed Row Storage"

To see the other types of publications on this topic, follow the link: Compressed Row Storage.

Consult the top 50 journal articles for your research on the topic "Compressed Row Storage".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen source in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the online abstract of the work, if these details are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Bani-Ismail, Basel, and Ghassan Kanaan. "Comparing Different Sparse Matrix Storage Structures as Index Structure for Arabic Text Collection." International Journal of Information Retrieval Research 2, no. 2 (April 2012): 52–67. http://dx.doi.org/10.4018/ijirr.2012040105.

Abstract:
In the authors’ study they evaluate and compare the storage efficiency of different sparse matrix storage structures as index structure for Arabic text collection and their corresponding sparse matrix-vector multiplication algorithms to perform query processing in any Information Retrieval (IR) system. The study covers six sparse matrix storage structures including the Coordinate Storage (COO), Compressed Sparse Row (CSR), Compressed Sparse Column (CSC), Block Coordinate (BCO), Block Sparse Row (BSR), and Block Sparse Column (BSC). Evaluation depends on the storage space requirements for each storage structure and the efficiency of the query processing algorithm. The experimental results demonstrate that CSR is more efficient in terms of storage space requirements and query processing time than the other sparse matrix storage structures. The results also show that CSR requires the least amount of disk space and performs the best in terms of query processing time compared with the other point entry storage structures (COO, CSC). The results demonstrate that BSR requires the least amount of disk space and performs the best in terms of query processing time compared with the other block entry storage structures (BCO, BSC).
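To make the storage-space comparison above concrete, here is a small sketch (not from the paper, whose index is built from an Arabic text collection) that counts the bytes held by the value and index arrays of COO, CSR, and CSC representations of the same random matrix using SciPy; the matrix size and density are invented for illustration.

```python
import numpy as np
from scipy import sparse

# A term-document-like matrix: 1000 "terms" x 200 "documents", ~1% nonzero term frequencies.
A = sparse.random(1000, 200, density=0.01, format="coo", random_state=0)

def storage_bytes(m):
    """Bytes held by the value and index arrays of a SciPy sparse matrix."""
    if isinstance(m, sparse.coo_matrix):
        return m.data.nbytes + m.row.nbytes + m.col.nbytes
    return m.data.nbytes + m.indices.nbytes + m.indptr.nbytes   # CSR / CSC layout

for fmt in ("coo", "csr", "csc"):
    m = A.asformat(fmt)
    print(f"{fmt.upper()}: {storage_bytes(m):7d} bytes for {m.nnz} nonzeros")
```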
2

Mohammed, Saira Banu Jamal, M. Rajasekhara Babu, and Sumithra Sriram. "GPU Implementation of Image Convolution Using Sparse Model with Efficient Storage Format." International Journal of Grid and High Performance Computing 10, no. 1 (January 2018): 54–70. http://dx.doi.org/10.4018/ijghpc.2018010104.

Abstract:
With the growth of data-parallel computing, the role of GPU computing in non-graphics applications such as image processing has become a focus of research. Convolution is an integral operation in filtering, smoothing and edge detection. In this article, the process of convolution is realized as a sparse linear system and is solved using Sparse Matrix Vector Multiplication (SpMV). The Compressed Sparse Row (CSR) format of SpMV shows better CPU performance compared to normal convolution. To overcome the stalling of threads on short rows in the GPU implementation of CSR SpMV, a more efficient model is proposed, which uses the Adaptive-Compressed Row Storage (A-CSR) format to implement the same. Using CSR in the convolution process achieves a 1.45x and a 1.159x increase in speed compared to the normal convolution for image smoothing and edge detection operations, respectively. An average speedup of 2.05x is achieved for the image smoothing technique and 1.58x for the edge detection technique on the GPU platform using the adaptive CSR format.
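For reference, a minimal CPU-side CSR SpMV loop (not the article's GPU kernel) makes the short-row problem visible: each output row only touches the nonzeros between consecutive row-pointer entries, so a nearly empty row leaves its thread with almost no work.

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """y = A @ x for a matrix stored in CSR (row pointers, column indices, values)."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        # Row i owns the nonzeros data[indptr[i]:indptr[i+1]];
        # a short row means a near-empty slice and little work for its thread.
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# 4x4 example:  [[10, 0, 0, 2],
#                [ 0, 3, 0, 0],
#                [ 0, 0, 0, 0],
#                [ 5, 0, 6, 0]]
indptr  = np.array([0, 2, 3, 3, 5])
indices = np.array([0, 3, 1, 0, 2])
data    = np.array([10.0, 2.0, 3.0, 5.0, 6.0])
x = np.array([1.0, 2.0, 3.0, 4.0])
print(csr_spmv(indptr, indices, data, x))   # -> [18.  6.  0. 23.]
```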
3

Yan, Kaizhuang, Yongxian Wang, and Wenbin Xiao. "A New Compression and Storage Method for High-Resolution SSP Data Based-on Dictionary Learning." Journal of Marine Science and Engineering 10, no. 8 (August 10, 2022): 1095. http://dx.doi.org/10.3390/jmse10081095.

Abstract:
The sound speed profile data of seawater provide an important basis for carrying out underwater acoustic modeling and analysis, sonar performance evaluation, and underwater acoustic assistant decision-making. The data volume of the high-resolution sound speed profile is vast, and the demand for data storage space is high, which severely limits the analysis and application of the high-resolution sound speed profile data in the field of marine acoustics. This paper uses the dictionary learning method to achieve sparse coding of the high-resolution sound speed profile and uses a compressed sparse row method to compress and store the sparse characteristics of the data matrix. The influence of related parameters on the compression rate and recovery data error is analyzed and discussed, as are different scenarios and the difference in compression processing methods. Through comparative experiments, the average error of the sound speed profile data compressed is less than 0.5 m/s, the maximum error is less than 3 m/s, and the data volume is about 10% to 15% of the original data volume. This method significantly reduces the storage capacity of high-resolution sound speed profile data and ensures the accuracy of the data, providing technical support for efficient and convenient access to high-resolution sound speed profiles.
4

Knopp, T., and A. Weber. "Local System Matrix Compression for Efficient Reconstruction in Magnetic Particle Imaging." Advances in Mathematical Physics 2015 (2015): 1–7. http://dx.doi.org/10.1155/2015/472818.

Abstract:
Magnetic particle imaging (MPI) is a quantitative method for determining the spatial distribution of magnetic nanoparticles, which can be used as tracers for cardiovascular imaging. For reconstructing a spatial map of the particle distribution, the system matrix describing the magnetic particle imaging equation has to be known. Due to the complex dynamic behavior of the magnetic particles, the system matrix is commonly measured in a calibration procedure. In order to speed up the reconstruction process, recently, a matrix compression technique has been proposed that makes use of a basis transformation in order to compress the MPI system matrix. By thresholding the resulting matrix and storing the remaining entries in compressed row storage format, only a fraction of the data has to be processed when reconstructing the particle distribution. In the present work, it is shown that the image quality of the algorithm can be considerably improved by using a local threshold for each matrix row instead of a global threshold for the entire system matrix.
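A toy sketch of the idea, assuming a generic dense stand-in for the transformed system matrix (this is not the authors' MPI data): a global threshold is one cut-off for the whole matrix, while the local rule scales the cut-off per row, so rows with small overall magnitude are not wiped out before the matrix is stored in compressed row storage.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(1)
# Stand-in for a transformed system matrix: rows with very different magnitudes.
S = rng.standard_normal((6, 200)) * rng.uniform(0.01, 1.0, size=(6, 1))

def threshold_global(S, frac=0.05):
    t = frac * np.abs(S).max()                        # one threshold for the whole matrix
    return sparse.csr_matrix(np.where(np.abs(S) >= t, S, 0.0))

def threshold_local(S, frac=0.05):
    t = frac * np.abs(S).max(axis=1, keepdims=True)   # one threshold per row
    return sparse.csr_matrix(np.where(np.abs(S) >= t, S, 0.0))

for name, M in [("global", threshold_global(S)), ("local", threshold_local(S))]:
    kept_per_row = np.diff(M.indptr)                  # nonzeros stored per CSR row
    print(f"{name:6s} threshold: nonzeros per row = {kept_per_row}")
```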
5

Christnatalis, Christnatalis, Bachtiar Bachtiar, and Rony Rony. "Comparative Compression of Wavelet Haar Transformation with Discrete Wavelet Transform on Colored Image Compression." JOURNAL OF INFORMATICS AND TELECOMMUNICATION ENGINEERING 3, no. 2 (January 20, 2020): 202–9. http://dx.doi.org/10.31289/jite.v3i2.3154.

Abstract:
In this research, the algorithms used to compress images are the Haar wavelet transformation method and the discrete wavelet transform algorithm. Image compression based on the Haar wavelet transform uses a calculation scheme with decomposition in the row direction and decomposition in the column direction. In discrete wavelet transform-based image compression, the size of the compressed image produced is more optimal, because some information that is not very useful, perceptible, or visible to humans is eliminated, so that humans still consider the data usable even though it is compressed. The data used were taken directly, and the test results show that digital image compression based on the Haar wavelet transformation achieves a compression ratio of 41%, while the discrete wavelet transform reaches 29.5%. Based on the research problem regarding the efficiency of storage media, it can be concluded that the right algorithm to choose is the Haar wavelet transformation algorithm. To improve compression results, it is recommended to use wavelet transforms other than Haar, such as Daubechies, Symlets, and so on.
6

Tanaka, Teruo, Ryo Otsuka, Akihiro Fujii, Takahiro Katagiri, and Toshiyuki Imamura. "Implementation of D-Spline-Based Incremental Performance Parameter Estimation Method with ppOpen-AT." Scientific Programming 22, no. 4 (2014): 299–307. http://dx.doi.org/10.1155/2014/310879.

Abstract:
In automatic performance tuning (AT), a primary aim is to optimize performance parameters that are suitable for certain computational environments in ordinary mathematical libraries. For AT, an important issue is to reduce the estimation time required for optimizing performance parameters. To reduce the estimation time, we previously proposed the Incremental Performance Parameter Estimation method (IPPE method). This method estimates optimal performance parameters by inserting suitable sampling points that are based on computational results for a fitting function. As the fitting function, we introduced d-Spline, which is highly adaptable and requires little estimation time. In this paper, we report the implementation of the IPPE method with ppOpen-AT, which is a scripting language (set of directives) with features that reduce the workload of the developers of mathematical libraries that have AT features. To confirm the effectiveness of the IPPE method for the runtime phase AT, we applied the method to sparse matrix–vector multiplication (SpMV), in which the block size of the sparse matrix structure blocked compressed row storage (BCRS) was used for the performance parameter. The results from the experiment show that the cost was negligibly small for AT using the IPPE method in the runtime phase. Moreover, using the obtained optimal value, the execution time for the mathematical library SpMV was reduced by 44% on comparing the compressed row storage and BCRS (block size 8).
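The tuned performance parameter above, the BCRS block size, can be explored directly with SciPy's BSR format. The sketch below (illustrative only; the synthetic block-structured matrix and the 8×8 block size are assumptions, not the paper's ppOpen-AT setup) times SpMV for plain CRS and for blocked CRS.

```python
import time
import numpy as np
from scipy import sparse

# Synthetic block-structured matrix: 512 x 512 grid of dense 8 x 8 blocks, ~2% block density.
B = sparse.random(512, 512, density=0.02, format="coo", random_state=2)
A = sparse.kron(B, np.ones((8, 8)), format="csr")          # 4096 x 4096
x = np.random.default_rng(2).standard_normal(A.shape[1])

def bench(M, reps=50):
    """Average wall-clock time of one SpMV with the given stored matrix."""
    t0 = time.perf_counter()
    for _ in range(reps):
        M @ x
    return (time.perf_counter() - t0) / reps

csr = A.tocsr()
bcrs = A.tobsr(blocksize=(8, 8))                           # blocked CRS with 8x8 blocks
for name, M in [("CRS ", csr), ("BCRS", bcrs)]:
    arr_bytes = M.data.nbytes + M.indices.nbytes + M.indptr.nbytes
    print(f"{name}: {bench(M) * 1e3:.3f} ms per SpMV, {arr_bytes} bytes of index/value arrays")
```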
7

AlAhmadi, Sarah, Thaha Mohammed, Aiiad Albeshri, Iyad Katib, and Rashid Mehmood. "Performance Analysis of Sparse Matrix-Vector Multiplication (SpMV) on Graphics Processing Units (GPUs)." Electronics 9, no. 10 (October 13, 2020): 1675. http://dx.doi.org/10.3390/electronics9101675.

Abstract:
Graphics processing units (GPUs) have delivered a remarkable performance for a variety of high performance computing (HPC) applications through massive parallelism. One such application is sparse matrix-vector (SpMV) computations, which is central to many scientific, engineering, and other applications including machine learning. No single SpMV storage or computation scheme provides consistent and sufficiently high performance for all matrices due to their varying sparsity patterns. An extensive literature review reveals that the performance of SpMV techniques on GPUs has not been studied in sufficient detail. In this paper, we provide a detailed performance analysis of SpMV performance on GPUs using four notable sparse matrix storage schemes (compressed sparse row (CSR), ELLPACK (ELL), hybrid ELL/COO (HYB), and compressed sparse row 5 (CSR5)), five performance metrics (execution time, giga floating point operations per second (GFLOPS), achieved occupancy, instructions per warp, and warp execution efficiency), five matrix sparsity features (nnz, anpr, nprvariance, maxnpr, and distavg), and 17 sparse matrices from 10 application domains (chemical simulations, computational fluid dynamics (CFD), electromagnetics, linear programming, economics, etc.). Subsequently, based on the deeper insights gained through the detailed performance analysis, we propose a technique called the heterogeneous CPU–GPU Hybrid (HCGHYB) scheme. It utilizes both the CPU and GPU in parallel and provides better performance over the HYB format by an average speedup of 1.7x. Heterogeneous computing is an important direction for SpMV and other application areas. Moreover, to the best of our knowledge, this is the first work where the SpMV performance on GPUs has been discussed in such depth. We believe that this work on SpMV performance analysis and the heterogeneous scheme will open up many new directions and improvements for the SpMV computing field in the future.
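The row-structure features used in the analysis are cheap to compute from the CSR row-pointer array. The helper below is a sketch of such feature extraction; the definition of distavg is an assumption (mean distance of nonzeros from the diagonal), since the abstract does not reproduce the paper's exact formula.

```python
import numpy as np
from scipy import sparse

def sparsity_features(A):
    """Row-structure features of a CSR matrix, as used to characterize SpMV inputs."""
    A = A.tocsr()
    npr = np.diff(A.indptr)                 # nonzeros per row
    rows = np.repeat(np.arange(A.shape[0]), npr)
    return {
        "nnz": int(A.nnz),
        "anpr": float(npr.mean()),          # average nonzeros per row
        "nprvariance": float(npr.var()),
        "maxnpr": int(npr.max()),
        # Assumed stand-in for distavg: mean |column - row| over all nonzeros.
        "distavg": float(np.abs(A.indices - rows).mean()),
    }

A = sparse.random(2000, 2000, density=0.002, format="csr", random_state=0)
print(sparsity_features(A))
```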
8

Zhang, Xi Xi, Yu Jing Jia, and Guang Zhen Cheng. "The Water Sump Cleaning Machine by Vacuum Suction." Applied Mechanics and Materials 201-202 (October 2012): 785–88. http://dx.doi.org/10.4028/www.scientific.net/amm.201-202.785.

Abstract:
This article describes a vacuum water sump cleaning machine which is used to clean up coal mine water sumps. The cleaning machine is composed of a mechanical structure and electrical control devices. Its parts include a walking flatbed, a mud storage tank, vacuum pumps, a suction pipe, mud tubes, swing devices, control valves, and a compressed-air pipe. When working, under the action of vacuum pumping, the cleaning machine evacuates the mud storage tank through the vacuum air feeder. As the vacuum level in the tank increases, the outside atmospheric pressure pushes the mud into the tank along the suction tube. When the mud storage tank is full, the vacuum pump automatically shuts down. When the vacuum valve is closed and the pressure valve is opened, compressed air drives the slime in the tank into the mine car through the mud discharge tube. The layout of this cleaning machine is reasonable, and it is flexible and convenient to operate, so it significantly reduces labor intensity and improves the efficiency of the clean-up.
9

Ji, Guo Liang, Yang De Feng, Wen Kai Cui, and Liang Gang Lu. "Implementation Procedures of Parallel Preconditioning with Sparse Matrix Based on FEM." Applied Mechanics and Materials 166-169 (May 2012): 3166–73. http://dx.doi.org/10.4028/www.scientific.net/amm.166-169.3166.

Abstract:
A technique to assemble the global stiffness matrix stored in a sparse storage format and two parallel solvers for sparse linear systems based on FEM are presented. The assembly method uses a data structure named the associated node at intermediate stages to finally arrive at the Compressed Sparse Row (CSR) format. The associated nodes record the information about the connection of nodes in the mesh. The technique can save a large amount of memory because it only stores the nonzero elements of the global stiffness matrix. This method is simple and effective. The solvers are restarted GMRES iterative solvers with Jacobi and sparse approximate inverse (SPAI) preconditioning, respectively. Some numerical experiments show that both preconditioners can improve the convergence of the iterative method, and SPAI is more powerful than Jacobi in the sense of reducing the number of iterations and improving parallel efficiency. Both solvers can be used to solve large sparse linear systems.
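The assembly pattern can be illustrated generically by accumulating element contributions as COO triplets and converting to CSR, which sums the duplicate entries produced at shared nodes. This textbook sketch stands in for, but is not, the authors' associated-node data structure.

```python
import numpy as np
from scipy import sparse

# Two 1D linear elements over nodes 0-1-2, each with the same 2x2 local stiffness.
elements = [(0, 1), (1, 2)]
k_local = np.array([[ 1.0, -1.0],
                    [-1.0,  1.0]])

rows, cols, vals = [], [], []
for nodes in elements:
    for a, i in enumerate(nodes):
        for b, j in enumerate(nodes):
            rows.append(i); cols.append(j); vals.append(k_local[a, b])

# COO -> CSR: duplicate (i, j) contributions from shared nodes are summed,
# and only the nonzero entries of the global stiffness matrix are stored.
K = sparse.coo_matrix((vals, (rows, cols)), shape=(3, 3)).tocsr()
print(K.toarray())
# [[ 1. -1.  0.]
#  [-1.  2. -1.]
#  [ 0. -1.  1.]]
```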
10

Mahmoud, Mohammed, Mark Hoffmann, and Hassan Reza. "Developing a New Storage Format and a Warp-Based SpMV Kernel for Configuration Interaction Sparse Matrices on the GPU." Computation 6, no. 3 (August 24, 2018): 45. http://dx.doi.org/10.3390/computation6030045.

Abstract:
Sparse matrix-vector multiplication (SpMV) can be used to solve diverse-scaled linear systems and eigenvalue problems that exist in numerous, and varying scientific applications. One of the scientific applications that SpMV is involved in is known as Configuration Interaction (CI). CI is a linear method for solving the nonrelativistic Schrödinger equation for quantum chemical multi-electron systems, and it can deal with the ground state as well as multiple excited states. In this paper, we have developed a hybrid approach in order to deal with CI sparse matrices. The proposed model includes a newly-developed hybrid format for storing CI sparse matrices on the Graphics Processing Unit (GPU). In addition to the new developed format, the proposed model includes the SpMV kernel for multiplying the CI matrix (proposed format) by a vector using the C language and the Compute Unified Device Architecture (CUDA) platform. The proposed SpMV kernel is a vector kernel that uses the warp approach. We have gauged the newly developed model in terms of two primary factors, memory usage and performance. Our proposed kernel was compared to the cuSPARSE library and the CSR5 (Compressed Sparse Row 5) format and already outperformed both.
11

Saad, Aouatif, Adil Echchelh, Mohammed Hattabi, and Ganaoui El. "An improved computational method for non isothermal resin transfer moulding simulation." Thermal Science 15, suppl. 2 (2011): 275–89. http://dx.doi.org/10.2298/tsci100928016s.

Abstract:
The optimization in the simulation time of non-isothermal filling process without losing effectiveness remains a challenge in the resin transfer moulding process simulation. We are interested in this work on developing an improved computational approach based on finite element method coupled with control volume approach. Simulations can predict the position of the front of resin flow, pressure and temperature distribution at each time step. Our optimization approach is first based on the modification of conventional control volume/finite element method, then on the adaptation of the iterative algorithm of conjugate gradient to Compressed Sparse Row (CSR) storage scheme. The approach has been validated by comparison with available results. The proposed method yielded smoother flow fronts and reduced the error in the pressure and temperature pattern that plagued the conventional fixed grid methods. The solution accuracy was considerably higher than that of the conventional method since we could proceed in the mesh refinement without a significant increase in the computation time. Various thermal engineering situations can be simulated by using the developed code.
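The CSR-adapted conjugate gradient mentioned above boils down to a standard CG loop whose only matrix operation is a CSR matrix-vector product; the sketch below is a generic textbook version on a 1D Laplacian, not the authors' control volume/finite element code.

```python
import numpy as np
from scipy import sparse

def cg_csr(A, b, tol=1e-10, maxiter=1000):
    """Conjugate gradient for an SPD matrix stored in CSR; each iteration costs one SpMV."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p                      # the CSR SpMV
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# SPD test matrix: 1D Laplacian assembled directly in CSR.
n = 100
A = sparse.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = cg_csr(A, b)
print("residual:", np.linalg.norm(A @ x - b))
```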
12

Sadineni, Praveen Kumar. "Compression Methodologies for Columnar Database Optimization." Journal of Computational Science and Intelligent Technologies 3, no. 1 (2022): 24–32. http://dx.doi.org/10.53409/mnaa/jcsit/e202203012432.

Abstract:
Today’s life is completely dependent on data. Conventional relational databases take longer to respond to queries because they are built for row-wise data storage and retrieval. Due to their efficient read and write operations to and from hard discs, which reduce the time it takes for queries to produce results, columnar databases have recently overtaken traditional databases. To execute Business Intelligence and create decision-making systems, vast amounts of data gathered from various sources are required in data warehouses, where columnar databases are primarily created. Since the data are stacked closely together, and the seek time is reduced, columnar databases perform queries more quickly. With aggregation queries to remove unnecessary data, they allow several compression techniques for faster data access. To optimise the efficiency of columnar databases, various compression approaches, including NULL Suppression, Dictionary Encoding, Run Length Encoding, Bit Vector Encoding, and Lempel Ziv Encoding, are discussed in this work. Database operations are conducted on the compressed data to demonstrate the decrease in memory needs and speed improvements.
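Two of the column-compression schemes named above are simple enough to sketch directly. The snippet encodes a low-cardinality column with dictionary encoding and then run-length encodes the resulting codes, the way a columnar store typically would; the column values are invented for illustration.

```python
def dictionary_encode(column):
    """Replace repeated string values by small integer codes plus a lookup table."""
    table = {}
    codes = []
    for v in column:
        codes.append(table.setdefault(v, len(table)))
    return codes, [v for v, _ in sorted(table.items(), key=lambda kv: kv[1])]

def run_length_encode(codes):
    """Collapse runs of identical codes into [value, run_length] pairs."""
    runs = []
    for c in codes:
        if runs and runs[-1][0] == c:
            runs[-1][1] += 1
        else:
            runs.append([c, 1])
    return runs

column = ["DE", "DE", "DE", "FR", "FR", "UA", "UA", "UA", "UA"]
codes, table = dictionary_encode(column)
print(table)                    # ['DE', 'FR', 'UA']
print(run_length_encode(codes)) # [[0, 3], [1, 2], [2, 4]]
```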
13

Ahmed, Muhammad, Sardar Usman, Nehad Ali Shah, M. Usman Ashraf, Ahmed Mohammed Alghamdi, Adel A. Bahadded, and Khalid Ali Almarhabi. "AAQAL: A Machine Learning-Based Tool for Performance Optimization of Parallel SPMV Computations Using Block CSR." Applied Sciences 12, no. 14 (July 13, 2022): 7073. http://dx.doi.org/10.3390/app12147073.

Abstract:
The sparse matrix–vector product (SpMV), considered one of the seven dwarfs (numerical methods of significance), is essential in high-performance real-world scientific and analytical applications requiring solution of large sparse linear equation systems, where SpMV is a key computing operation. As the sparsity patterns of sparse matrices are unknown before runtime, we used machine learning-based performance optimization of the SpMV kernel by exploiting the structure of the sparse matrices using the Block Compressed Sparse Row (BCSR) storage format. As the structure of sparse matrices varies across application domains, optimizing the block size is important for reducing the overall execution time. Manual allocation of block sizes is error prone and time consuming. Thus, we propose AAQAL, a data-driven, machine learning-based tool that automates the process of data distribution and selection of near-optimal block sizes based on the structure of the matrix. We trained and tested the tool using different machine learning methods—decision tree, random forest, gradient boosting, ridge regressor, and AdaBoost—and nearly 700 real-world matrices from 43 application domains, including computer vision, robotics, and computational fluid dynamics. AAQAL achieved 93.47% of the maximum attainable performance with a substantial difference compared to in practice manual or random selection of block sizes. This is the first attempt at exploiting matrix structure using BCSR, to select optimal block sizes for the SpMV computations using machine learning techniques.
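To make the block-size selection task concrete, the sketch below trains a decision-tree regressor on purely synthetic (feature, best-block-size) pairs. Everything here, the features, the labelling rule, and the candidate block sizes, is invented for illustration and merely stands in for the paper's training set of roughly 700 real matrices and five learning methods.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)

# Synthetic training set: per-matrix features -> block size that (hypothetically) ran fastest.
# Features: [avg nonzeros per row, row-length variance, fraction of dense 8x8 blocks]
X = rng.uniform([1, 0, 0], [200, 50, 1], size=(300, 3))
# Invented labelling rule: denser block structure favours larger blocks.
y = np.select([X[:, 2] > 0.6, X[:, 2] > 0.3], [8, 4], default=2)

model = DecisionTreeRegressor(max_depth=4).fit(X, y)

new_matrix = np.array([[35.0, 12.0, 0.7]])     # hypothetical unseen matrix features
print("suggested BCSR block size:", int(round(model.predict(new_matrix)[0])))
```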
14

Zeng, Guangsen, and Yi Zou. "Leveraging Memory Copy Overlap for Efficient Sparse Matrix-Vector Multiplication on GPUs." Electronics 12, no. 17 (August 31, 2023): 3687. http://dx.doi.org/10.3390/electronics12173687.

Abstract:
Sparse matrix-vector multiplication (SpMV) is central to many scientific, engineering, and other applications, including machine learning. Compressed Sparse Row (CSR) is a widely used sparse matrix storage format. SpMV using the CSR format on GPU computing platforms is widely studied, where the access behavior of GPU is often the performance bottleneck. The Ampere GPU architecture recently from NVIDIA provides a new asynchronous memory copy instruction, memcpy_async, for more efficient data movement in shared memory. Leveraging the capability of this new memcpy_async instruction, we first propose the CSR-Partial-Overlap to carefully overlap the data copy from global memory to shared memory and computation, allowing us to take full advantage of the data transfer time. In addition, we design the dynamic batch partition and the dynamic threads distribution to achieve effective load balancing, avoid the overhead of fixing up partial sums, and improve thread utilization. Furthermore, we propose the CSR-Full-Overlap based on the CSR-Partial-Overlap, which takes the overlap of data transfer from host to device and SpMV kernel execution into account as well. The CSR-Full-Overlap unifies the two major overlaps in SpMV and hides the computation as much as possible in the two important access behaviors of the GPU. This allows CSR-Full-Overlap to achieve the best performance gains from both overlaps. As far as we know, this paper is the first in-depth study of how memcpy_async can be potentially applied to help accelerate SpMV computation in GPU platforms. We compare CSR-Full-Overlap to the current state-of-the-art cuSPARSE, where our experimental results show an average 2.03x performance gain and up to 2.67x performance gain.
15

Wang, Zhi, and Sinan Fang. "Three-Dimensional Inversion of Borehole-Surface Resistivity Method Based on the Unstructured Finite Element." International Journal of Antennas and Propagation 2021 (September 24, 2021): 1–13. http://dx.doi.org/10.1155/2021/5154985.

Abstract:
The electromagnetic wave signal from the electromagnetic field source generates induction signals after reaching the target geological body through the underground medium. The time and spatial distribution rules of the artificial or the natural electromagnetic fields are obtained for the exploration of mineral resources of the subsurface and determining the geological structure of the subsurface to solve the geological problems. The goal of electromagnetic data processing is to suppress the noise and improve the signal-to-noise ratio and the inversion of resistivity data. Inversion has always been the focus of research in the field of electromagnetic methods. In this paper, the three-dimensional borehole-surface resistivity method is explored based on the principle of geometric sounding, and the three-dimensional inversion algorithm of the borehole-surface resistivity method in arbitrary surface topography is proposed. The forward simulation and calculation start from the partial differential equation, and the boundary conditions of the total potential of the three-dimensional point current source field are satisfied. Then unstructured tetrahedral grids are used to discretize the calculation area, as they can fit the complex subsurface structure and undulating surface topography well. The accuracy of the numerical solution is low due to the rapid attenuation of the electric field at the point current source and the nearby positions and sharply varying potential gradients. Therefore, the mesh density is increased locally, that is, in the vicinity of the source electrode and the measuring electrode. The mesh refinement can effectively reduce the influence of the source point and its vicinity and improve the accuracy of the numerical solution. The stiffness matrix is stored in the Compressed Row Storage (CSR) format, and the final large linear equations are solved using the symmetric successive over-relaxation preconditioned conjugate gradient (SSOR-PCG) method. The limited-memory quasi-Newton method (L-BFGS) is used to optimize the objective function in the inversion calculation, and a double-loop recursive method is used to solve the normal equation obtained at each iteration in order to avoid computing and storing the sensitivity matrix explicitly and reduce the amount of calculation. The comprehensive application of the above methods makes the 3D inversion algorithm efficient, accurate, and stable. The three-dimensional inversion test is performed on the synthetic data of multiple theoretical geoelectric models with topography (a single anomaly model under a valley and a single anomaly model under a mountain) to verify the effectiveness of the proposed algorithm.
16

Sreenivasulu, P., and S. Varadarajan. "An Efficient Lossless ROI Image Compression Using Wavelet-Based Modified Region Growing Algorithm." Journal of Intelligent Systems 29, no. 1 (November 14, 2018): 1063–78. http://dx.doi.org/10.1515/jisys-2018-0180.

Abstract:
Nowadays, medical imaging and telemedicine are increasingly being utilized on a huge scale. The expanding interest in storing and sending medical images brings a lack of adequate memory spaces and transmission bandwidth. To resolve these issues, compression was introduced. The main aim of lossless image compression is to improve accuracy, reduce the bit rate, and improve the compression efficiency for the storage and transmission of medical images while maintaining an acceptable image quality for diagnosis purposes. In this paper, we propose lossless medical image compression using wavelet transform and an encoding method. Basically, the proposed image compression system comprises three modules: (i) segmentation, (ii) image compression, and (iii) image decompression. First, the input medical image is segmented into region of interest (ROI) and non-ROI using a modified region growing algorithm. Subsequently, the ROI is compressed by discrete cosine transform and the set partitioning in hierarchical trees encoding method, and the non-ROI is compressed by discrete wavelet transform and a merging-based Huffman encoding method. Finally, the compressed image, a combination of the compressed ROI and non-ROI, is obtained. Then, in the decompression stage, the original medical image is extracted using the reverse procedure. The experimentation was carried out using different medical images, and the proposed method obtained better results compared to different other methods.
17

Mysior, Marek, Paweł Stępień, and Sebastian Koziołek. "Modeling and Experimental Validation of Compression and Storage of Raw Biogas." Processes 8, no. 12 (November 27, 2020): 1556. http://dx.doi.org/10.3390/pr8121556.

Abstract:
A significant challenge in sustainability and development of energy systems is connected with limited diversity and availability of fuels, especially in rural areas. A potential solution to this problem is compression, transport, and storage of raw biogas, that would increase diversity and availability of energy sources in remote areas. The aim of this study was to perform experimental research on raw biogas compression concerning biogas volume that can be stored in a cylinder under the pressure of 20 MPa and to compare obtained results with numerical models used to describe the state of gas at given conditions. Results were used to determine the theoretical energy content of raw biogas, assuming its usage in CHP systems. In the study, six compression test runs were conducted on-site in an agricultural biogas plant. Compression time, pressure as well as gas volume, and temperature rise were measured for raw biogas supplied directly from the digester. Obtained results were used to evaluate raw biogas compressibility factor Z and were compared with several equations of state and numerical methods for calculating the Z-factor. For experimental compression cycles, a theoretical energy balance was calculated based on experimental results published elsewhere. As a result, gas compressibility factor Z for storage pressure of 20 MPa and a temperature of 319.9 K was obtained and compared with 6 numerical models used for similar gases. It was shown that widely known numerical models can predict the volume of compressed gas with AARE% as low as 4.81%. It was shown that raw biogas supplied directly from the digester can be successfully compressed and stored in composite cylinders under pressure up to 20 MPa. This proposes a new method to utilize raw biogas in remote areas, increasing the diversity of energy sources and increasing the share of renewable fuels worldwide.
18

Mostert, Clemens, Berit Ostrander, Stefan Bringezu, and Tanja Kneiske. "Comparing Electrical Energy Storage Technologies Regarding Their Material and Carbon Footprint." Energies 11, no. 12 (December 3, 2018): 3386. http://dx.doi.org/10.3390/en11123386.

Abstract:
The need for electrical energy storage technologies (EEST) in a future energy system, based on volatile renewable energy sources is widely accepted. The still open question is which technology should be used, in particular in such applications where the implementation of different storage technologies would be possible. In this study, eight different EEST were analysed. The comparative life cycle assessment focused on the storage of electrical excess energy from a renewable energy power plant. The considered EEST were lead-acid, lithium-ion, sodium-sulphur, vanadium redox flow and stationary second-life batteries. In addition, two power-to-gas plants storing synthetic natural gas and hydrogen in the gas grid and a new underwater compressed air energy storage were analysed. The material footprint was determined by calculating the raw material input RMI and the total material requirement TMR and the carbon footprint by calculating the global warming impact GWI. All indicators were normalised per energy fed-out based on a unified energy fed-in. The results show that the second-life battery has the lowest greenhouse gas (GHG) emissions and material use, followed by the lithium-ion battery and the underwater compressed air energy storage. Therefore, these three technologies are preferred options compared to the remaining five technologies with respect to the underlying assumptions of the study. The production phase accounts for the highest share of GHG emissions and material use for nearly all EEST. The results of a sensitivity analysis show that lifetime and storage capacity have a comparable high influence on the footprints. The GHG emissions and the material use of the power-to-gas technologies, the vanadium redox flow battery as well as the underwater compressed air energy storage decline strongly with increased storage capacity.
19

Hasugian, Paska Marto, and Rizki Manullang. "File Compression Application Design Using Half Byte Algorithm." Jurnal Info Sains : Informatika dan Sains 11, no. 2 (September 1, 2021): 18–21. http://dx.doi.org/10.54209/infosains.v11i2.43.

Abstract:
The need for large data storage capacity seems increasingly important. This need is caused by the ever-growing amount of data that must be stored. The storage is not allocated in one place only: data or files are also stored elsewhere as backups. How much storage capacity must be provided to hold all the data? The Half Byte algorithm is one of the data compression algorithms. It exploits the fact that the four left bits of consecutive characters are often identical, especially in text files. When the same first four bits are received seven or more times in a row, the algorithm compresses the data with marker bits: the first character of the run is kept, and each subsequent character in the run contributes only its last four bits.
20

Du, Jiang, Bin Lang Chen, and Zeng Qin. "The Application of Compound Document Storage Technology in Forensic Analysis System." Applied Mechanics and Materials 610 (August 2014): 756–59. http://dx.doi.org/10.4028/www.scientific.net/amm.610.756.

Abstract:
Due to the special nature of electronic data, we need to create a complete copy of the raw disks before computer forensics. There are two ways to create a copy: disk-to-disk copying and disk mirroring. In the former, the capacity of the copy disk is fixed, while in the latter the image file is already highly compressed. Neither of them allows forensic analysis evidence to be added; the copy is handed over to the court as a whole set of evidence. This affects the completeness and admissibility of the evidence. In this thesis, we combine disk mirroring and storage technology with Compound Document storage technology; in this way we can add the evidence to the evidence copy, which produces an "evidence gathered" effect. At the same time, the original evidence can be highly compressed, saving capacity and ensuring the safety of the data.
21

Chlopkowski, Marek, Maciej Antczak, Michal Slusarczyk, Aleksander Wdowinski, Michal Zajaczkowski, and Marta Kasprzak. "High-order statistical compressor for long-term storage of DNA sequencing data." RAIRO - Operations Research 50, no. 2 (March 24, 2016): 351–61. http://dx.doi.org/10.1051/ro/2015039.

22

Shin, Hyun Kyu, and Sung Kyu Ha. "A Review on the Cost Analysis of Hydrogen Gas Storage Tanks for Fuel Cell Vehicles." Energies 16, no. 13 (July 7, 2023): 5233. http://dx.doi.org/10.3390/en16135233.

Abstract:
The most practical way of storing hydrogen gas for fuel cell vehicles is to use a composite overwrapped pressure vessel. Depending on the driving distance range and power requirement of the vehicles, there can be various operational pressure and volume capacity of the tanks, ranging from passenger vehicles to heavy-duty trucks. The current commercial hydrogen storage method for vehicles involves storing compressed hydrogen gas in high-pressure tanks at pressures of 700 bar for passenger vehicles and 350 bar to 700 bar for heavy-duty trucks. In particular, hydrogen is stored in rapidly refillable onboard tanks, meeting the driving range needs of heavy-duty applications, such as regional and line-haul trucking. One of the most important factors for fuel cell vehicles to be successful is their cost-effectiveness. So, in this review, the cost analysis including the process analysis, raw materials, and manufacturing processes is reviewed. It aims to contribute to the optimization of both the cost and performance of compressed hydrogen storage tanks for various applications.
23

Pope, Gunnar C., and Ryan J. Halter. "Design and Implementation of an Ultra-Low Resource Electrodermal Activity Sensor for Wearable Applications ‡." Sensors 19, no. 11 (May 29, 2019): 2450. http://dx.doi.org/10.3390/s19112450.

Abstract:
While modern low-power microcontrollers are a cornerstone of wearable physiological sensors, their limited on-chip storage typically makes peripheral storage devices a requirement for long-term physiological sensing—significantly increasing both size and power consumption. Here, a wearable biosensor system capable of long-term recording of physiological signals using a single, 64 kB microcontroller to minimize sensor size and improve energy performance is described. Electrodermal (EDA) signals were sampled and compressed using a multiresolution wavelet transformation to achieve long-term storage within the limited memory of a 16-bit microcontroller. The distortion of the compressed signal and errors in extracting common EDA features is evaluated across 253 independent EDA signals acquired from human volunteers. At a compression ratio (CR) of 23.3×, the root mean square error (RMSErr) is below 0.016 μ S and the percent root-mean-square difference (PRD) is below 1%. Tonic EDA features are preserved at a CR = 23.3× while phasic EDA features are more prone to reconstruction errors at CRs > 8.8×. This compression method is shown to be competitive with other compressive sensing-based approaches for EDA measurement while enabling on-board access to raw EDA data and efficient signal reconstructions. The system and compression method provided improves the functionality of low-resource microcontrollers by limiting the need for external memory devices and wireless connectivity to advance the miniaturization of wearable biosensors for mobile applications.
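The three figures of merit quoted above (compression ratio, RMSE, and percent root-mean-square difference) follow standard definitions; the helper below computes them for a toy signal and is a generic sketch, not the authors' firmware.

```python
import numpy as np

def compression_metrics(original, reconstructed, stored_bytes):
    """Compression ratio, RMSE and PRD between a signal and its reconstruction."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    err = original - reconstructed
    cr = original.nbytes / stored_bytes
    rmse = np.sqrt(np.mean(err ** 2))
    prd = 100.0 * np.sqrt(np.sum(err ** 2) / np.sum(original ** 2))
    return cr, rmse, prd

t = np.linspace(0, 10, 2000)
eda = 2.0 + 0.3 * np.sin(0.5 * t)                 # toy tonic EDA-like signal (microsiemens)
approx = eda + 0.005 * np.random.default_rng(4).standard_normal(t.size)
print(compression_metrics(eda, approx, stored_bytes=eda.nbytes // 23))
```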
24

Li, Daoyuan, Tegawende F. Bissyande, Jacques Klein, and Yves Le Traon. "Time Series Classification with Discrete Wavelet Transformed Data." International Journal of Software Engineering and Knowledge Engineering 26, no. 09n10 (November 2016): 1361–77. http://dx.doi.org/10.1142/s0218194016400088.

Abstract:
Time series mining has become essential for extracting knowledge from the abundant data that flows out from many application domains. To overcome storage and processing challenges in time series mining, compression techniques are being used. In this paper, we investigate the loss/gain of performance of time series classification approaches when fed with lossy-compressed data. This extended empirical study is essential for reassuring practitioners, but also for providing more insights on how compression techniques can even be effective in smoothing and reducing noise in time series data. From a knowledge engineering perspective, we show that time series may be compressed by 90% using discrete wavelet transforms and still achieve remarkable classification accuracy, and that residual details left by popular wavelet compression techniques can sometimes even help to achieve higher classification accuracy than the raw time series data, as they better capture essential local features.
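The 90% compression mentioned above corresponds to keeping only the largest 10% of DWT coefficients. The sketch below does that with the PyWavelets package on a toy signal; the wavelet choice, decomposition level, and series are assumptions for illustration, not the paper's datasets.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

rng = np.random.default_rng(5)
signal = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.2 * rng.standard_normal(1024)

# Multi-level DWT, then zero out all but the largest 10% of coefficients.
coeffs = pywt.wavedec(signal, "db4", level=5)
flat = np.concatenate(coeffs)
threshold = np.quantile(np.abs(flat), 0.90)
kept = [np.where(np.abs(c) >= threshold, c, 0.0) for c in coeffs]

reconstructed = pywt.waverec(kept, "db4")[: len(signal)]
kept_frac = sum(int(np.count_nonzero(c)) for c in kept) / flat.size
print(f"kept {kept_frac:.1%} of coefficients, "
      f"RMSE = {np.sqrt(np.mean((signal - reconstructed) ** 2)):.4f}")
```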
25

Liu, Jing, Xiaoqing Tian, Jiayuan Jiang, and Kaiyu Huang. "Distributed Compressed Sensing Based Ground Moving Target Indication for Dual-Channel SAR System." Sensors 18, no. 7 (July 21, 2018): 2377. http://dx.doi.org/10.3390/s18072377.

Abstract:
The dual-channel synthetic aperture radar (SAR) system is widely applied in the field of ground moving-target indication (GMTI). With the increase of the imaging resolution, the resulting substantial raw data samples increase the transmission and storage burden. We tackle the problem by adopting the joint sparsity model 1 (JSM-1) in distributed compressed sensing (DCS) to exploit the correlation between the two channels of the dual-channel SAR system. We propose a novel algorithm, namely the hierarchical variational Bayesian based distributed compressed sensing (HVB-DCS) algorithm for the JSM-1 model, which decouples the common component from the innovation components by applying variational Bayesian approximation. Using the proposed HVB-DCS algorithm in the dual-channel SAR based GMTI (SAR-GMTI) system, we can jointly reconstruct the dual-channel signals, and simultaneously detect the moving targets and stationary clutter, which enables sampling at a further lower rate in azimuth as well as improves the reconstruction accuracy. The simulation and experimental results show that the proposed HVB-DCS algorithm is capable of detecting multiple moving targets while suppressing the clutter at a much lower data rate in azimuth compared with the compressed sensing (CS) and range-Doppler (RD) algorithms.
26

Rohr, David. "Usage of GPUs in ALICE Online and Offline processing during LHC Run 3." EPJ Web of Conferences 251 (2021): 04026. http://dx.doi.org/10.1051/epjconf/202125104026.

Abstract:
ALICE will significantly increase its Pb–Pb data taking rate from the 1 kHz of triggered readout in Run 2 to 50 kHz of continuous readout for LHC Run 3. Updated tracking detectors are installed for Run 3 and a new two-phase computing strategy is employed. In the first synchronous phase during the data taking, the raw data is compressed for storage to an on-site disk buffer and the required data for the detector calibration is collected. In the second asynchronous phase the compressed raw data is reprocessed using the final calibration to produce the final reconstruction output. Traditional CPUs are unable to cope with the huge data rate and processing demands of the synchronous phase, therefore ALICE employs GPUs to speed up the processing. Since the online computing farm performs a part of the asynchronous processing when there is no beam in the LHC, ALICE plans to use the GPUs also for this second phase. This paper gives an overview of the GPU processing in the synchronous phase, the full system test to validate the reference GPU architecture, and the prospects for the GPU usage in the asynchronous phase.
27

Megala, G., et al. "State-Of-The-Art In Video Processing: Compression, Optimization And Retrieval." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (April 11, 2021): 1256–72. http://dx.doi.org/10.17762/turcomat.v12i5.1793.

Abstract:
Video compression plays a vital role in modern social media networking, with a plethora of multimedia applications. It empowers the transmission medium to transfer videos competently and enables resources to store the video efficiently. Nowadays, high-resolution video data are transferred through communication channels with a high bit rate in order to send multiple compressed videos. There are many advances in transmission ability and in efficient ways of storing these compressed videos, where compression is the primary task involved in multimedia services. This paper summarizes the compression standards and describes the main concepts involved in video coding. Video compression converts a large raw bit stream of a video sequence into a small compact one, achieving a high compression ratio with good perceptual video quality. Removing redundant information is the main task in video sequence compression. A survey of various block matching algorithms, quantization, and entropy coding is presented. It is found that many of the methods have high computational complexity and need improvement through optimization.
28

Vijaykumar, Padmaja, and Jeevan K. Mani. "Face Recognition with Frame size reduction and DCT compression using PCA algorithm." Indonesian Journal of Electrical Engineering and Computer Science 22, no. 1 (April 1, 2021): 168. http://dx.doi.org/10.11591/ijeecs.v22.i1.pp168-178.

Abstract:
Face recognition has become a very important area of research because it has a variety of applications in fields such as human-computer interaction and pattern recognition (PR). A successful face recognition procedure, be it mathematical or numerical, depends on the particular choice of the features used by the classifier. Feature selection in pattern recognition consists of the derivation of salient features present in the raw input data in order to reduce the amount of data used for classification. For successful face recognition, the database images must contain sufficient information so that recognition is possible when the probe image is presented. In most cases there is excess information in the database images, which leads to higher storage requirements; hence images of optimum size need to be stored in the database for good performance. The images are therefore compressed by reducing the frame size and then applying DCT compression.
29

Crosa, G., F. Pittaluga, A. Trucco, F. Beltrami, A. Torelli, and F. Traverso. "Heavy-Duty Gas Turbine Plant Aerothermodynamic Simulation Using Simulink." Journal of Engineering for Gas Turbines and Power 120, no. 3 (July 1, 1998): 550–56. http://dx.doi.org/10.1115/1.2818182.

Abstract:
This paper presents a physical simulator for predicting the off-design and dynamic behavior of a single shaft heavy-duty gas turbine plant, suitable for gas-steam combined cycles. The mathematical model, which is nonlinear and based on the lumped parameter approach, is described by a set of first-order differential and algebraic equations. The plant components are described adding to their steady-state characteristics the dynamic equations of mass, momentum, and energy balances. The state variables are mass flow rates, static pressures, static temperatures of the fluid, wall temperatures, and shaft rotational speed. The analysis has been applied to a 65 MW heavy-duty gas turbine plant with two off-board, silo-type combustion chambers. To model the compressor, equipped with variable inlet guide vanes, a subdivision into five partial compressors is adopted, in serial arrangement, separated by dynamic blocks. The turbine is described using a one-dimensional, row-by-row mathematical model, that takes into account both the air bleed cooling effect and the mass storage among the stages. The simulation model considers also the air bleed transformations from the compressor down to the turbine. Both combustion chambers have been modeled utilizing a sequence of several sub-volumes, to simulate primary and secondary zones in presence of three hybrid burners. A code has been created in Simulink environment. Some dynamic responses of the simulated plant, equipped with a proportional-integral speed regulator, are presented.
30

Pankiraj, Jeya Bright, Vishnuvarthanan Govindaraj, Yudong Zhang, Pallikonda Rajasekaran Murugan, and Anisha Milton. "Development of Scalable Coding of Encrypted Images Using Enhanced Block Truncation Code." Webology 19, no. 1 (January 20, 2022): 1620–39. http://dx.doi.org/10.14704/web/v19i1/web19109.

Abstract:
Only a few studies have been reported on scalable coding of encrypted images, and it is an important area of research. In this paper, a novel method of scalable coding of encrypted images using the Enhanced Block Truncation Code (EBTC) is proposed. The raw image is compressed using EBTC and then encrypted using a pseudo-random number (PSRN) at the transmitter, and the key is disseminated to the receiver. The transmitted image is decrypted at the receiver by using the PSRN key. Finally, the output image is constructed using EBTC, scaled by a factor of 2 with the bilinear interpolation technique. The proposed system gives a better PSNR, compression ratio, and storage requirement than existing techniques such as Hadamard, DMMBTC, and BTC.
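For readers unfamiliar with the base scheme that the paper enhances, the sketch below implements classic block truncation coding of a single image block: the block is reduced to a 1-bit map plus two reconstruction levels chosen to preserve its mean and variance. The enhancement, scaling, and encryption steps of the proposed method are not reproduced here.

```python
import numpy as np

def btc_encode_block(block):
    """Classic BTC: keep a bitmap plus two levels preserving block mean and variance."""
    m, s = block.mean(), block.std()
    bitmap = block >= m
    q = int(bitmap.sum())              # pixels coded as the high level
    p = block.size - q                 # pixels coded as the low level
    if q == 0 or p == 0:               # flat block: both levels equal the mean
        return bitmap, m, m
    low = m - s * np.sqrt(q / p)
    high = m + s * np.sqrt(p / q)
    return bitmap, low, high

def btc_decode_block(bitmap, low, high):
    return np.where(bitmap, high, low)

block = np.array([[121, 114,  56,  47],
                  [ 37, 200, 247, 255],
                  [ 16,   0,  12, 169],
                  [ 43,   5,   7, 251]], dtype=float)
bitmap, low, high = btc_encode_block(block)
print(np.round(btc_decode_block(bitmap, low, high)))
```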
31

Huang, Y. S., G. Q. Zhou, T. Yue, H. B. Yan, W. X. Zhang, X. Bao, Q. Y. Pan, and J. S. Ni. "VECTOR AND RASTER DATA LAYERED FUSION AND 3D VISUALIZATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W10 (February 8, 2020): 1127–34. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w10-1127-2020.

Abstract:
Although contemporary geospatial science has made great progress, spatial data fusion of vector and raster data is still a problem in the geoinformation science environment. In order to solve the problem, this paper proposes a method which merges vector and raster data. Firstly, the row and column numbers of the raster data and the X, Y values of the vector data are represented by Morton codes in the C++ environment, respectively. Secondly, we establish the raster data table and the vector data table in the Oracle database to store the vector data and the raster data. Thirdly, this paper uses the minimum selection bounding box method to extract the top data of the building model. Finally, we divide the vector and raster data fusion into four steps to obtain the fusion data table, and we call the fusion data in the database for 3D visualization. This method compresses the size of the original data and simultaneously divides the data into three levels, which not only solves the problem of duplicated and unorganized data storage, but also makes it possible to store vector data and raster data in the same database at the same time. Thus, the fused original orthophoto data contain the gray values of building roofs and the elevation data, which can improve the availability of vector data and raster data in 3D visualization applications.
32

Hussain, D. Mansoor, D. Surendran, and A. Benazir Begum. "Feature Extraction in JPEG domain along with SVM for Content Based Image Retrieval." International Journal of Engineering & Technology 7, no. 2.19 (April 17, 2018): 1. http://dx.doi.org/10.14419/ijet.v7i2.19.11656.

Abstract:
Content Based Image Retrieval (CBIR) applies computer vision methods to retrieve images from databases. It is mainly based on the user query, which is in visual form rather than the traditional text form. CBIR is applied in different fields extending from surveillance to remote sensing, e-purchase, medical image processing, security systems, historical research, and many others. JPEG, a very commonly used method of lossy compression, is used to reduce the size of the image before it is stored or transmitted. Almost every digital camera on the market stores the captured images in JPEG format. The storage industry has seen many major transformations in the past decades, while retrieval technologies are still developing. Though some breakthroughs have happened in text retrieval, the same is not true for image and other multimedia retrieval. Specifically, image retrieval has witnessed many algorithms in the spatial or raw domain, but since the majority of images are stored in the JPEG format, it takes time to decode the compressed image before extracting features and retrieving. Hence, in this research work, we focus on extracting the features from the compressed domain itself and then utilize support vector machines (SVM) to improve the retrieval results. Our proof of concept shows that the features extracted in the compressed domain help retrieve the images 43% faster than the same set of images in the spatial domain, and the accuracy is improved to 93.4% through an SVM-based feedback mechanism.
33

Huang, Lan, Jia Zeng, Shiqi Sun, Wencong Wang, Yan Wang, and Kangping Wang. "Coarse-Grained Pruning of Neural Network Models Based on Blocky Sparse Structure." Entropy 23, no. 8 (August 13, 2021): 1042. http://dx.doi.org/10.3390/e23081042.

Abstract:
Deep neural networks may achieve excellent performance in many research fields. However, many deep neural network models are over-parameterized. The computation of weight matrices often consumes a lot of time, which requires plenty of computing resources. In order to solve these problems, a novel block-based division method and a special coarse-grained block pruning strategy are proposed in this paper to simplify and compress the fully connected structure, and the pruned weight matrices with a blocky structure are then stored in the format of Block Sparse Row (BSR) to accelerate the calculation of the weight matrices. First, the weight matrices are divided into square sub-blocks based on spatial aggregation. Second, a coarse-grained block pruning procedure is utilized to scale down the model parameters. Finally, the BSR storage format, which is much more friendly to block sparse matrix storage and computation, is employed to store these pruned dense weight blocks to speed up the calculation. In the following experiments on MNIST and Fashion-MNIST datasets, the trend of accuracies with different pruning granularities and different sparsity is explored in order to analyze our method. The experimental results show that our coarse-grained block pruning method can compress the network and can reduce the computational cost without greatly degrading the classification accuracy. The experiment on the CIFAR-10 dataset shows that our block pruning strategy can combine well with the convolutional networks.
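The block-pruning-plus-BSR pipeline described above can be sketched in a few lines: partition a dense weight matrix into square sub-blocks, zero the blocks with the smallest norms, and hand the result to a Block Sparse Row container. The matrix, block size, and pruning ratio below are made up for illustration; this is not the paper's training procedure.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(6)
W = rng.standard_normal((64, 64))          # a small fully connected weight matrix
BS = 8                                     # block size; must divide both dimensions
PRUNE_FRAC = 0.75                          # fraction of blocks to remove

# Norm of every BS x BS sub-block.
blocks = W.reshape(64 // BS, BS, 64 // BS, BS).transpose(0, 2, 1, 3)
norms = np.linalg.norm(blocks, axis=(2, 3))

# Zero out the weakest blocks (coarse-grained pruning).
cutoff = np.quantile(norms, PRUNE_FRAC)
mask = norms >= cutoff
pruned_blocks = blocks * mask[:, :, None, None]
W_pruned = pruned_blocks.transpose(0, 2, 1, 3).reshape(64, 64)

# Store the surviving dense blocks in Block Sparse Row format.
W_bsr = sparse.bsr_matrix(W_pruned, blocksize=(BS, BS))
print("surviving blocks:", int(mask.sum()), "of", mask.size)
print("BSR data array shape:", W_bsr.data.shape)   # (n_blocks, BS, BS)
```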
34

Weihe, Thomas, Robert Wagner, Uta Schnabel, Mathias Andrasch, Yukun Su, Jörg Stachowiak, Heinz Jörg Noll, and Jörg Ehlbeck. "Microbial Control of Raw and Cold-Smoked Atlantic Salmon (Salmo salar) through a Microwave Plasma Treatment." Foods 11, no. 21 (October 25, 2022): 3356. http://dx.doi.org/10.3390/foods11213356.

Abstract:
The control of the pathogenic load on foodstuffs is a key element in food safety. Particularly, seafood such as cold-smoked salmon is threatened by pathogens such as Salmonella sp. or Listeria monocytogenes. Despite strict existing hygiene procedures, the production industry constantly demands novel, reliable methods for microbial decontamination. Against that background, a microwave plasma-based decontamination technique via plasma-processed air (PPA) is presented. Thereby, the samples undergo two treatment steps, a pre-treatment step where PPA is produced when compressed air flows over a plasma torch, and a post-treatment step where the PPA acts on the samples. This publication embraces experiments that compare the total viable count (tvc) of bacteria found on PPA-treated raw (rs) and cold-smoked salmon (css) samples and their references. The tvc over the storage time is evaluated using a logistic growth model that reveals a PPA sensitivity for raw salmon (rs). A shelf-life prolongation of two days is determined. When cold-smoked salmon (css) is PPA-treated, the treatment reveals no further impact. When PPA-treated raw salmon (rs) is compared with PPA-untreated cold-smoked salmon (css), the PPA treatment appears as reliable as the cold-smoking process and retards the growth of cultivable bacteria in the same manner. The experiments are flanked by quality measurements such as color and texture measurements before and after the PPA treatment. Salmon samples, which undergo an overtreatment, solely show light changes such as a whitish surface flocculation. A relatively mild treatment as applied in the storage experiments has no further detected impact on the fish matrix.
35

Shan, Nanliang, Xinghua Xu, Xianqiang Bao, and Shaohua Qiu. "Fast Fault Diagnosis in Industrial Embedded Systems Based on Compressed Sensing and Deep Kernel Extreme Learning Machines." Sensors 22, no. 11 (May 25, 2022): 3997. http://dx.doi.org/10.3390/s22113997.

Abstract:
With the complexity and refinement of industrial systems, fast fault diagnosis is crucial to ensuring the stable operation of industrial equipment. The main limitation of the current fault diagnosis methods is the lack of real-time performance in resource-constrained industrial embedded systems. Rapid online detection can help deal with equipment failures in time to prevent equipment damage. Inspired by the ideas of compressed sensing (CS) and deep extreme learning machines (DELM), a data-driven general method is proposed for fast fault diagnosis. The method contains two modules: data sampling and fast fault diagnosis. The data sampling module non-linearly projects the intensive raw monitoring data into low-dimensional sampling space, which effectively reduces the pressure of transmission, storage and calculation. The fast fault diagnosis module introduces the kernel function into DELM to accommodate sparse signals and then digs into the inner connection between the compressed sampled signal and the fault types to achieve fast fault diagnosis. This work takes full advantage of the sparsity of the signal to enable fast fault diagnosis online. It is a general method in industrial embedded systems under data-driven conditions. The results on the CWRU dataset and real platforms show that our method not only has a significant speed advantage but also maintains a high accuracy, which verifies the practical application value in industrial embedded systems.
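
The sampling module is described only at a high level here; below is a minimal sketch of the general compressed-sensing idea (a random projection to a low-dimensional measurement vector), assuming a linear Gaussian measurement matrix rather than the paper's own non-linear projection, and leaving out the DKELM classifier entirely.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2048, 256                      # raw signal length, number of measurements
t = np.arange(n) / n
# toy vibration-like monitoring signal
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                                      # compressed samples (m << n)
print(x.shape, y.shape)                          # (2048,) (256,)
# y (rather than x) would be transmitted/stored and fed to the diagnosis model.
```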
36

Miquel, Jonathan, Laurent Latorre, and Simon Chamaillé-Jammes. "Energy-Efficient Audio Processing at the Edge for Biologging Applications." Journal of Low Power Electronics and Applications 13, no. 2 (April 27, 2023): 30. http://dx.doi.org/10.3390/jlpea13020030.

Abstract:
Biologging refers to the use of animal-borne recording devices to study wildlife behavior. In the case of audio recording, such devices generate large amounts of data over several months, and thus require some level of processing automation for the raw data collected. Academics have widely adopted offline deep-learning-classification algorithms to extract meaningful information from large datasets, mainly using time-frequency signal representations such as spectrograms. Because of the high deployment costs of animal-borne devices, the autonomy/weight ratio remains by far the fundamental concern. Basically, power consumption is addressed using onboard mass storage (no wireless transmission), yet the energy cost associated with data storage activity is far from negligible. In this paper, we evaluate various strategies to reduce the amount of stored data, making the fair assumption that audio will be categorized using a deep-learning classifier at some point of the process. This assumption opens up several scenarios, from straightforward raw audio storage paired with further offline classification on one side, to a fully embedded AI engine on the other side, with embedded audio compression or feature extraction in between. This paper investigates three approaches focusing on data-dimension reduction: (i) traditional inline audio compression, namely ADPCM and MP3, (ii) full deep-learning classification at the edge, and (iii) embedded pre-processing that only computes and stores spectrograms for later offline classification. We characterized each approach in terms of total (sensor + CPU + mass-storage) edge power consumption (i.e., recorder autonomy) and classification accuracy. Our results demonstrate that ADPCM encoding brings 17.6% energy savings compared to the baseline system (i.e., uncompressed raw audio samples). Using such compressed data, a state-of-the-art spectrogram-based classification model still achieves 91.25% accuracy on open speech datasets. Performing inline data-preparation can significantly reduce the amount of stored data allowing for a 19.8% energy saving compared to the baseline system, while still achieving 89% accuracy during classification. These results show that while massive data reduction can be achieved through the use of inline computation of spectrograms, it translates to little benefit on device autonomy when compared to ADPCM encoding, with the added downside of losing original audio information.
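
A minimal sketch of the third approach (inline spectrogram computation, so that only the time-frequency representation is stored); the sampling rate, window length and float16 storage are assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16_000                                                # assumed sampling rate
audio = np.random.default_rng(2).standard_normal(fs * 10)  # 10 s of toy audio

# Keep only the spectrogram instead of the raw samples.
f, t, Sxx = spectrogram(audio, fs=fs, nperseg=1024, noverlap=512)
print(audio.nbytes, Sxx.astype(np.float16).nbytes)         # stored bytes: raw vs. spectrogram
```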
37

Gudodagi, Raveendra, and R. Venkata Siva Reddy. "Security Provisioning and Compression of Diverse Genomic Data based on Advanced Encryption Standard (AES) Algorithm." International Journal of Biology and Biomedical Engineering 15 (May 14, 2021): 104–12. http://dx.doi.org/10.46300/91011.2021.15.14.

Abstract:
Compression of genomic data has gained enormous momentum in recent years because of advances in technology, exponentially growing health concerns, and government funding for research. Such advances have driven us to personalize public health and medical care. These pose a considerable challenge for ubiquitous computing in data storage. One of the main issues faced by genomic laboratories is the 'cost of storage' due to the large data file of the human genome (ranging from 30 GB to 200 GB). Data protection is a set of actions meant to protect data from unauthorized access or changes. There are several methods used to protect data, and encryption is one of them. Protecting genomic data is a critical concern in genomics as it includes personal data. We suggest a secure encryption and decryption technique for diverse genomic data (FASTA/FASTQ format) in this article. Since the sequenced data is massive in bulk, the raw sequenced file is broken into sections and compressed. The Advanced Encryption Standard (AES) algorithm is used for encryption, and the Galois/Counter Mode (GCM) is used to decrypt the encrypted data. This approach reduces the amount of storage space used while preserving the data, and it calls for a modern data compression strategy that not only reduces storage but also improves process efficiency by using a k-th order Markov chain. No prior efforts have addressed this problem from both the hardware and software realms together. In this analysis, we support the need for a tailor-made hardware and software ecosystem that will take full advantage of the current stand-alone solutions. The paper discusses sequenced DNA, which may take the form of raw data obtained from sequencing. Inappropriate use of genomic data presents unique risks because it can be used to identify any individual; thus, the study focuses on the security provisioning and compression of diverse genomic data using the Advanced Encryption Standard (AES) algorithm.
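
A hedged sketch of the encryption/decryption step using AES-GCM from the Python cryptography package; the chunk contents, nonce handling and associated data are illustrative, and the paper's compression and Markov-chain modelling are not reproduced here.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

chunk = b">seq1\nACGTACGTACGT\n"        # one (compressed) section of the genome file
nonce = os.urandom(12)                  # unique 96-bit nonce per chunk
ciphertext = aesgcm.encrypt(nonce, chunk, b"section-0")   # authenticated encryption
plaintext = aesgcm.decrypt(nonce, ciphertext, b"section-0")
assert plaintext == chunk
```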
38

Pathak, Sudipta, and Sanguthevar Rajasekaran. "RETRACTED: LFQC: a lossless compression algorithm for FASTQ files." Bioinformatics 35, no. 9 (October 24, 2014): e1-e7. http://dx.doi.org/10.1093/bioinformatics/btu701.

Abstract:
Abstract Motivation Next-generation sequencing (NGS) technologies have revolutionized genomic research by reducing the cost of whole-genome sequencing. One of the biggest challenges posed by modern sequencing technology is economic storage of NGS data. Storing raw data is infeasible because of its enormous size and high redundancy. In this article, we address the problem of storage and transmission of large Fastq files using innovative compression techniques. Results We introduce a new lossless non-reference-based fastq compression algorithm named lossless FastQ compressor. We have compared our algorithm with other state of the art big data compression algorithms namely gzip, bzip2, fastqz, fqzcomp, G-SQZ, SCALCE, Quip, DSRC, DSRC-LZ etc. This comparison reveals that our algorithm achieves better compression ratios. The improvement obtained is up to 225%. For example, on one of the datasets (SRR065390_1), the average improvement (over all the algorithms compared) is 74.62%. Availability and implementation The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/FastqPrograms.zip.
39

Wang, Rongjie, Junyi Li, Yang Bai, Tianyi Zang, and Yadong Wang. "BdBG: a bucket-based method for compressing genome sequencing data with dynamic de Bruijn graphs." PeerJ 6 (October 19, 2018): e5611. http://dx.doi.org/10.7717/peerj.5611.

Abstract:
Dramatic increases in data produced by next-generation sequencing (NGS) technologies demand data compression tools for saving storage space. However, effective and efficient data compression for genome sequencing data has remained an unresolved challenge in NGS data studies. In this paper, we propose a novel alignment-free and reference-free compression method, BdBG, which is the first to compress genome sequencing data with dynamic de Bruijn graphs based on the data after bucketing. Compared with existing de Bruijn graph methods, BdBG stores only a list of bucket indexes and bifurcations for the raw read sequences, which effectively reduces storage space. Experimental results on several genome sequencing datasets show the effectiveness of BdBG over three state-of-the-art methods. BdBG is written in Python and is open-source software distributed under the MIT license, available for download at https://github.com/rongjiewang/BdBG.
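
For orientation, a generic (not BdBG-specific) de Bruijn graph construction from reads is sketched below; bucketing and the dynamic-graph encoding of the paper are out of scope, and the read strings and k are toy values chosen so that one bifurcation appears.

```python
from collections import defaultdict

def de_bruijn(reads, k=4):
    """Map each k-mer to the set of k-mers that follow it in some read."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k):
            graph[read[i:i+k]].add(read[i+1:i+1+k])
    return graph

g = de_bruijn(["ACGTACGGT", "ACGTACGAT"], k=4)
# Nodes with more than one successor are the branch points (bifurcations) to record.
bifurcations = {node: succ for node, succ in g.items() if len(succ) > 1}
print(len(g), bifurcations)
```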
40

Kullaa, Jyrki. "Damage Detection and Localization under Variable Environmental Conditions Using Compressed and Reconstructed Bayesian Virtual Sensor Data." Sensors 22, no. 1 (December 31, 2021): 306. http://dx.doi.org/10.3390/s22010306.

Abstract:
Structural health monitoring (SHM) with a dense sensor network and repeated vibration measurements produces lots of data that have to be stored. If the sensor network is redundant, data compression is possible by storing the signals of selected Bayesian virtual sensors only, from which the omitted signals can be reconstructed with higher accuracy than the actual measurement. The selection of the virtual sensors for storage is done individually for each measurement based on the reconstruction accuracy. Data compression and reconstruction for SHM is the main novelty of this paper. The stored and reconstructed signals are used for damage detection and localization in the time domain using spatial or spatiotemporal correlation. Whitening transformation is applied to the training data to take the environmental or operational influences into account. The first principal component of the residuals is used to localize damage and also to design the extreme value statistics control chart for damage detection. The proposed method was studied with a numerical model of a frame structure with a dense accelerometer or strain sensor network. Only five acceleration or three strain signals out of the total 59 signals were stored. The stored and reconstructed data outperformed the raw measurement data in damage detection and localization.
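
As a simplified illustration of the reconstruction idea (estimating omitted channels from the few stored ones using correlations learned from training data), the sketch below uses a plain linear MMSE estimator on synthetic, approximately zero-mean sensor data; the Bayesian virtual-sensor formulation of the paper is richer, and all sizes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_train = 12, 5000
mixing = rng.standard_normal((n_sensors, 4))                     # redundant network, 4 latent modes
train = (mixing @ rng.standard_normal((4, n_train))).T
train += 0.01 * rng.standard_normal(train.shape)                 # measurement noise

keep = [0, 4, 8]                                                 # channels actually stored
drop = [i for i in range(n_sensors) if i not in keep]

# Linear MMSE estimator of dropped channels given kept ones (zero-mean data).
C = np.cov(train, rowvar=False)
A = C[np.ix_(drop, keep)] @ np.linalg.inv(C[np.ix_(keep, keep)])

test = (mixing @ rng.standard_normal((4, 100))).T
test += 0.01 * rng.standard_normal(test.shape)
reconstructed = test[:, keep] @ A.T
print(np.sqrt(np.mean((reconstructed - test[:, drop]) ** 2)))    # reconstruction RMSE
```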
41

Giada, Giuffrida, Rosa Caponetto, and Francesco Nocera. "Hygrothermal Properties of Raw Earth Materials: A Literature Review." Sustainability 11, no. 19 (September 27, 2019): 5342. http://dx.doi.org/10.3390/su11195342.

Abstract:
Raw earth historic and contemporary architectures are renowned for their good environmental properties of recyclability and low embodied energy along the production process. Massive earth walls are universally known to be able to regulate indoor thermal and hygroscopic conditions while containing energy consumption, creating comfortable interior spaces with a low carbon footprint. Therefore, earth buildings are de facto green buildings. As a result of this, some earthen technologies have been rediscovered and implemented to be adapted to the contemporary building production sector. Nevertheless, the diffusion of contemporary earthen architecture is decelerated by the lack of broadly accepted standards on its anti-seismic and thermal performance. Indeed, the former issue has been solved using high-tensile materials inside the walls or surface reinforcements on their sides to improve their flexural strength. The latter issue is related to the penalization of the thermal behavior of earth walls in current regulations, which tend to evaluate only the steady-state performance of building components, neglecting the benefit of heat storage and the hygrothermal buffering effect provided by massive and porous envelopes such as raw earth ones. In this paper, we show the results of a literature review concerning the hygrothermal performance of earthen materials for contemporary housing: great attention is given to the base materials used (inorganic soils, natural fibers, mineral or recycled aggregates, chemical stabilizers), manufacturing procedures (when described), performed tests and final performances. Different earth techniques (adobe, cob, extruded bricks, rammed earth, compressed earth blocks, light earth) have been considered in order to highlight that earth can act both as a conductive and as an insulating material depending on how it is implemented, adapting to several climate contexts. The paper aims to summarize current progress in the improvement of the thermal performance of traditional raw earth mixes, discuss the suitability of existing measurement protocols for hygroscopic and natural materials, and provide guidance for further research.
42

Wilke, Daniel N., Paul W. Cleary, and Nicolin Govender. "From discrete element simulation data to process insights." EPJ Web of Conferences 249 (2021): 15001. http://dx.doi.org/10.1051/epjconf/202124915001.

Abstract:
Industrial-scale discrete element simulations typically generate gigabytes of data per time step, which implies that even opening a single file may require 5-15 minutes on conventional magnetic storage devices. Data science's inherent multi-disciplinary nature makes the extraction of useful information challenging, often leading to undiscovered details or new insights. This study explores the potential of statistical learning to identify potential regions of interest for large-scale discrete element simulations. We demonstrate that our in-house knowledge discovery and data mining system (KDS) can i) decompose large datasets into regions of potential interest to the analyst, ii) produce multiple decompositions that highlight different aspects of the data, iii) simplify interpretation of DEM-generated data by focusing attention on the interpretation of automatically decomposed regions, and iv) streamline the analysis of raw DEM data by letting the analyst control the number of decompositions and the way they are performed. Multiple decompositions can be automated in parallel and compressed, enabling agile engagement with the processed data. This study focuses on spatial, not temporal, inferences.
43

Smith, H. J., S. J. Bakke, B. Smevik, J. K. Hald, G. Moen, B. Rudenhed, and A. Abildgaard. "Comparison of 12-Bit and 8-Bit Gray Scale Resolution in Mr Imaging of the CNS." Acta Radiologica 33, no. 6 (November 1992): 505–11. http://dx.doi.org/10.1177/028418519203300601.

Abstract:
A reduction in gray scale resolution of digital images from 12 to 8 bits per pixel usually means halving the storage space needed for the images. Theoretically, important diagnostic information may be lost in the process. We compared the sensitivity and specificity achieved by 4 radiologists in reading laser-printed films of original 12-bit MR images and cathode ray tube displays of the same images which had been compressed to 8 bits per pixel using a specially developed computer program. Receiver operating characteristic (ROC) curves showed no significant differences between film reading and screen reading. A paired 2-tailed t-test, applied on the data for actually positive cases, showed that the combined, average performance of the reviewers was significantly better at screen reading than at film reading. No such differences were found for actually negative cases. Some individual differences were found, but it is concluded that gray scale resolution of MR images may be reduced from 12 to 8 bits per pixel without any significant reduction in diagnostic information.
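
The gray-scale reduction being compared can be illustrated with the simplest possible requantisation, a right shift from 12 to 8 bits per pixel; the study's own conversion program was more elaborate, so this is only a sketch with assumed image sizes.

```python
import numpy as np

# Toy 12-bit MR image (values 0-4095) stored in 16-bit words.
img12 = np.random.default_rng(4).integers(0, 4096, size=(256, 256), dtype=np.uint16)

img8 = (img12 >> 4).astype(np.uint8)      # 4096 gray levels -> 256 gray levels
print(img12.nbytes, img8.nbytes)          # storage halves: 131072 -> 65536 bytes
```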
44

Santoyo-Garcia, Hector, Eduardo Fragoso-Navarro, Rogelio Reyes-Reyes, Clara Cruz-Ramos, and Mariko Nakano-Miyatake. "Visible Watermarking Technique Based on Human Visual System for Single Sensor Digital Cameras." Security and Communication Networks 2017 (2017): 1–18. http://dx.doi.org/10.1155/2017/7903198.

Abstract:
In this paper we propose a visible watermarking algorithm, in which a visible watermark is embedded into the Bayer Colour Filter Array (CFA) domain. The Bayer CFA is the most common raw image representation for images captured by the single-sensor digital cameras equipped in almost all mobile devices. In the proposed scheme, the captured image is watermarked before it is compressed and stored in the storage system. This method thus enforces the rightful ownership of the watermarked image, since there is no other version of the image than the watermarked one. We also take the Human Visual System (HVS) into consideration, so that the proposed technique provides the desired characteristics of a visible watermarking scheme: the embedded watermark is sufficiently perceptible and at the same time not obtrusive in colour and grey-scale images. Unlike other Bayer CFA domain visible watermarking algorithms, in which only binary watermark patterns are supported, the proposed watermarking algorithm allows grey-scale and colour images as watermark patterns. It is suitable for advertisement purposes, such as digital libraries and e-commerce, besides copyright protection.
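
A much-simplified sketch of visible watermarking in the CFA domain: alpha-blend a gray-scale watermark onto the single-channel Bayer mosaic before compression. The constant alpha stands in for the paper's HVS-adaptive strength, and all sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
cfa = rng.integers(0, 256, size=(512, 512)).astype(np.float32)   # raw Bayer mosaic (toy)
watermark = np.zeros_like(cfa)
watermark[200:312, 200:312] = 255.0                              # toy logo block

alpha = 0.15                                                     # visibility strength (assumed constant)
watermarked = np.clip((1 - alpha) * cfa + alpha * watermark, 0, 255).astype(np.uint8)
```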
45

Silitonga, Parasian D. P., and Irene Sri Morina. "Compression and Decompression of Audio Files Using the Arithmetic Coding Method." Scientific Journal of Informatics 6, no. 1 (May 24, 2019): 73–81. http://dx.doi.org/10.15294/sji.v6i1.17839.

Abstract:
Audio files are relatively large compared to files in text format. Large files can cause various obstacles in the form of large storage space requirements and long transmission times. File compression is one solution that can be applied to overcome the problem of large file sizes. Arithmetic coding is one algorithm that can be used to compress audio files. The arithmetic coding algorithm encodes the audio file by replacing a sequence of input symbols with a single floating-point number, producing as the output of the encoding a value greater than 0 and smaller than 1. The process of compression and decompression of audio files in this study is performed on several wave files. Wave files are a standard audio file format developed by Microsoft and IBM that is stored using PCM (Pulse Code Modulation) coding. The wave file compression ratio obtained in this study was 16.12 percent, with an average compression time of 45.89 seconds and an average decompression time of 0.32 seconds.
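
A textbook float-based version of the interval narrowing described above, mapping a short symbol sequence to a single number in [0, 1); real audio coders use integer renormalisation and adaptive models, and the symbol probabilities here are assumed.

```python
def arithmetic_encode(symbols, probs):
    # Cumulative probability range per symbol.
    cum, start = {}, 0.0
    for s, p in probs.items():
        cum[s] = (start, start + p)
        start += p
    # Narrow the interval [low, high) once per input symbol.
    low, high = 0.0, 1.0
    for s in symbols:
        span = high - low
        lo_s, hi_s = cum[s]
        low, high = low + span * lo_s, low + span * hi_s
    return (low + high) / 2          # any value in [low, high) identifies the message

probs = {'a': 0.5, 'b': 0.3, 'c': 0.2}
code = arithmetic_encode("abac", probs)
print(0.0 < code < 1.0, code)        # True 0.3175
```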
46

Akoguz, A., S. Bozkurt, A. A. Gozutok, G. Alp, E. G. Turan, M. Bogaz, and S. Kent. "COMPARISON OF OPEN SOURCE COMPRESSION ALGORITHMS ON VHR REMOTE SENSING IMAGES FOR EFFICIENT STORAGE HIERARCHY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B4 (June 10, 2016): 3–9. http://dx.doi.org/10.5194/isprs-archives-xli-b4-3-2016.

Abstract:
The high resolution of satellite imagery comes with a fundamental problem: a large amount of telemetry data must be stored after the downlink operation. Moreover, after the post-processing and image enhancement steps that follow image acquisition, file sizes increase even more, making the data harder to store and more time-consuming to transmit from one source to another; hence, compressing both the raw and the various levels of processed data is a necessity for archiving stations to save space. The lossless data compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. With this objective, well-known open-source programs supporting the algorithms Lempel-Ziv-Welch (LZW), Lempel-Ziv-Markov chain Algorithm (LZMA & LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate & Deflate64, Prediction by Partial Matching (PPMd or PPM2) and Burrows-Wheeler Transform (BWT) were applied to processed GeoTIFF images of Airbus Defence & Space's SPOT 6 & 7 satellites with 1.5 m GSD, acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS), in order to observe the compression performance of these algorithms over sample datasets in terms of how much the image data can be compressed while ensuring lossless compression.
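
A rough stand-in for such a benchmark, using only Python's standard modules (zlib for Deflate, bz2 for a BWT-based coder, lzma for LZMA2) on one synthetic 12-bit image buffer; the study itself ran full open-source tools on SPOT 6/7 GeoTIFF products.

```python
import bz2
import lzma
import zlib

import numpy as np

# Synthetic 12-bit single-band image stored as 16-bit words (stand-in for a GeoTIFF band).
raw = np.random.default_rng(6).integers(0, 4096, size=(1024, 1024), dtype=np.uint16)
buf = raw.tobytes()

for name, compress in (("deflate", zlib.compress), ("bzip2", bz2.compress), ("lzma", lzma.compress)):
    comp = compress(buf)
    print(f"{name}: {len(buf) / len(comp):.2f}x")   # lossless compression ratio
```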
47

Zhang, Yang, Yuandong Liu, and Lee D. Han. "Real-Time Piecewise Regression." Transportation Research Record: Journal of the Transportation Research Board 2643, no. 1 (January 2017): 9–18. http://dx.doi.org/10.3141/2643-02.

Abstract:
Ubiquitous sensing technologies make big data a trendy topic and a favored approach in transportation studies and applications, but the increasing volumes of data sets present remarkable challenges to data collection, storage, transfer, visualization, and processing. Fundamental aspects of big data in transportation are discussed, including how much data to collect and how to collect data effectively and economically. The focus is GPS trajectory data, which are used widely in this domain. An incremental piecewise regression algorithm is used to evaluate and compress GPS locations as they are produced. Row-wise QR decomposition and singular value decomposition are shown to be valid numerical algorithms for incremental regression. Sliding window–based piecewise regression can subsample the GPS streaming data instantaneously to preserve only the points of interest. Algorithm performance is evaluated in terms of both accuracy and compression power. A procedure is presented for users to choose the best parameter value for their GPS devices. Results of experiments with real-world trajectory data indicate that when the proper parameter value is selected, the proposed method achieves significant compression power (more than 10 times), maintains acceptable accuracy (less than 5 m), and always outperforms the fixed-rate sampling approach.
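
A minimal sketch of sliding-window piecewise linear regression for trajectory subsampling: grow a window while a straight line fits within a tolerance, and keep only the breakpoints. The tolerance, the synthetic track and the use of np.polyfit (instead of the paper's incremental QR/SVD updates) are assumptions.

```python
import numpy as np

def piecewise_compress(t, x, tol=5.0):
    """Return indices of points to keep so each kept segment is near-linear."""
    keep, start = [0], 0
    for i in range(2, len(t)):
        a, b = np.polyfit(t[start:i + 1], x[start:i + 1], 1)
        err = np.max(np.abs(a * t[start:i + 1] + b - x[start:i + 1]))
        if err > tol:                 # current point breaks the linear fit
            keep.append(i - 1)
            start = i - 1
    keep.append(len(t) - 1)
    return keep

t = np.arange(200.0)
x = np.concatenate([2.0 * t[:100], 200.0 + 0.5 * (t[100:] - 100)])  # two linear pieces
x += np.random.default_rng(7).normal(0, 1.0, size=t.size)           # GPS-like noise
idx = piecewise_compress(t, x, tol=5.0)
print(f"kept {len(idx)} of {t.size} points")
```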
48

Dritsas, Elias, Andreas Kanavos, Maria Trigka, Spyros Sioutas, and Athanasios Tsakalidis. "Storage Efficient Trajectory Clustering and k-NN for Robust Privacy Preserving Spatio-Temporal Databases." Algorithms 12, no. 12 (December 11, 2019): 266. http://dx.doi.org/10.3390/a12120266.

Abstract:
Storing massive volumes of spatio-temporal data has become a difficult task as GPS capabilities and wireless communication technologies have become prevalent in modern mobile devices. As a result, massive trajectory data are produced, incurring high costs for storage, transmission and query processing. A number of algorithms for compressing trajectory data have been proposed in order to overcome these difficulties. These algorithms try to reduce the size of trajectory data while preserving the quality of the information. In the context of this research work, we focus on both the privacy preservation and the storage problem of spatio-temporal databases. To alleviate this issue, we propose an efficient framework for trajectory representation, entitled DUST (DUal-based Spatio-temporal Trajectory), in which a raw trajectory is split into a number of linear sub-trajectories that are subjected to a dual transformation formulating the representative of each linear component of the initial trajectory; thus, the compressed trajectory achieves a compression ratio of M:1. To our knowledge, we are the first to study and address k-NN queries on nonlinear moving object trajectories that are represented in dual dimensional space. Additionally, the proposed approach is expected to reinforce the privacy protection of such data. Specifically, even if an intruder has access to the dual points of the trajectory data and tries to reproduce the native points that fit a specific component of the initial trajectory, the identity of the mobile object will remain secure with high probability. In this way, the privacy of the k-anonymity method is reinforced. Through experiments on real spatial datasets, we evaluate the robustness of the new approach and compare it with the one studied in our previous work.
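
A small sketch of the dual transformation idea, assuming each linear sub-trajectory is summarized by the dual point (slope, intercept) of its fitted line, so that M samples collapse to one 2-D point; segment detection and the k-NN query machinery of DUST are not reproduced.

```python
import numpy as np

def to_dual(sub_t, sub_x):
    a, b = np.polyfit(sub_t, sub_x, 1)    # primal line x = a*t + b
    return (a, b)                         # its dual-space representative

t = np.arange(0.0, 30.0)
# Piecewise-linear toy trajectory: three linear components.
x = np.piecewise(t, [t < 10, (t >= 10) & (t < 20), t >= 20],
                 [lambda s: 1.0 * s,
                  lambda s: 10.0 + 0.2 * (s - 10),
                  lambda s: 12.0 - 0.5 * (s - 20)])

segments = [(t[:10], x[:10]), (t[10:20], x[10:20]), (t[20:], x[20:])]
dual_points = [to_dual(st, sx) for st, sx in segments]
print(dual_points)        # 30 raw samples -> 3 dual points (10:1 compression)
```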
49

Lee, Dohyeon, and Giltae Song. "FastqCLS: a FASTQ compressor for long-read sequencing via read reordering using a novel scoring model." Bioinformatics 38, no. 2 (October 8, 2021): 351–56. http://dx.doi.org/10.1093/bioinformatics/btab696.

Abstract:
Abstract Motivation Over the past decades, vast amounts of genome sequencing data have been produced, requiring an enormous level of storage capacity. The time and resources needed to store and transfer such data cause bottlenecks in genome sequencing analysis. To resolve this issue, various compression techniques have been proposed to reduce the size of original FASTQ raw sequencing data, but these remain suboptimal. Long-read sequencing has become dominant in genomics, whereas most existing compression methods focus on short-read sequencing only. Results We designed a compression algorithm based on read reordering using a novel scoring model for reducing FASTQ file size with no information loss. We integrated all data processing steps into a software package called FastqCLS and provided it as a Docker image for ease of installation and execution. We compared our method with existing major FASTQ compression tools using benchmark datasets. We also included new long-read sequencing data in this validation. As a result, FastqCLS outperformed the other tools in terms of compression ratio for storing long-read sequencing data. Availability and implementation FastqCLS can be downloaded from https://github.com/krlucete/FastqCLS. Supplementary information Supplementary data are available at Bioinformatics online.
50

Rekha, Bhanu, and Ravi Kumar AV. "High Definition Video Compression Using Saliency Features." Indonesian Journal of Electrical Engineering and Computer Science 7, no. 3 (September 1, 2017): 708. http://dx.doi.org/10.11591/ijeecs.v7.i3.pp708-717.

Abstract:
High Definition (HD) devices require HD videos to be used effectively. However, HD video brings issues such as high storage requirements, limited battery power of HD devices, long encoding times, and high computational complexity when it comes to transmission, broadcasting and internet traffic. Many existing techniques suffer from these issues. Therefore, there is a need for an efficient technique that reduces unnecessary storage, provides a high compression rate and requires low bandwidth. In this paper we introduce an efficient video compression technique, a modified HEVC coding based on saliency features, to counter these drawbacks. We first extract saliency features from the raw data and then compress it heavily. This makes our model powerful and provides effective compression performance. Our experimental results show that our model provides better efficiency in terms of average PSNR, MSE and bitrate, and outperforms all existing techniques in terms of saliency map detection, AUC, NSS, KLD and JSD. The average AUC, NSS and KLD values of our proposed method are 0.846, 1.702 and 0.532 respectively, which is high compared to other existing techniques.
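
For reference, the quality metrics reported above can be computed as follows (8-bit frames assumed); the saliency-guided HEVC coding itself is not shown.

```python
import numpy as np

def mse(a, b):
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

rng = np.random.default_rng(8)
frame = rng.integers(0, 256, size=(720, 1280), dtype=np.uint8)               # original frame
decoded = np.clip(frame.astype(np.int16) +                                   # toy coding error
                  rng.integers(-3, 4, size=frame.shape), 0, 255).astype(np.uint8)
print(mse(frame, decoded), psnr(frame, decoded))
```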