Academic literature on the topic 'Compressed Row Storage'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Compressed Row Storage.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Compressed Row Storage"

1

Bani-Ismail, Basel, and Ghassan Kanaan. "Comparing Different Sparse Matrix Storage Structures as Index Structure for Arabic Text Collection." International Journal of Information Retrieval Research 2, no. 2 (April 2012): 52–67. http://dx.doi.org/10.4018/ijirr.2012040105.

Abstract:
The authors evaluate and compare the storage efficiency of different sparse matrix storage structures used as an index structure for an Arabic text collection, together with their corresponding sparse matrix-vector multiplication algorithms for query processing in an Information Retrieval (IR) system. The study covers six sparse matrix storage structures: Coordinate Storage (COO), Compressed Sparse Row (CSR), Compressed Sparse Column (CSC), Block Coordinate (BCO), Block Sparse Row (BSR), and Block Sparse Column (BSC). The evaluation considers the storage space requirements of each structure and the efficiency of the query processing algorithm. The experimental results demonstrate that CSR is more efficient in storage space and query processing time than the other sparse matrix storage structures: among the point-entry structures (COO, CSC), CSR requires the least disk space and processes queries fastest, and among the block-entry structures (BCO, BSC), BSR does.
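To make the point-entry formats compared above concrete, here is a minimal sketch in C of the three CSR arrays for a small example matrix; the array names are illustrative and not taken from the paper.

    #include <stdio.h>

    /* CSR stores a sparse matrix in three arrays: the nonzero values in
     * row-major order, their column indices, and a row-pointer array whose
     * entry i marks where row i begins in the other two arrays.
     *
     * Example 4x4 matrix with 6 nonzeros:
     *   [ 5 0 0 1 ]
     *   [ 0 8 0 0 ]
     *   [ 0 0 3 0 ]
     *   [ 4 0 0 6 ]
     */
    static const double val[]     = {5, 1, 8, 3, 4, 6};  /* nonzero values */
    static const int    col_idx[] = {0, 3, 1, 2, 0, 3};  /* their columns  */
    static const int    row_ptr[] = {0, 2, 3, 4, 6};     /* row boundaries */

    int main(void) {
        /* Reconstruct the matrix entries row by row from the CSR arrays. */
        for (int i = 0; i < 4; i++)
            for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
                printf("A[%d][%d] = %g\n", i, col_idx[k], val[k]);
        return 0;
    }

COO, by contrast, would store six explicit row indices here; CSR needs only the five row-pointer entries, which is where its space advantage comes from.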
2

Mohammed, Saira Banu Jamal, M. Rajasekhara Babu, and Sumithra Sriram. "GPU Implementation of Image Convolution Using Sparse Model with Efficient Storage Format." International Journal of Grid and High Performance Computing 10, no. 1 (January 2018): 54–70. http://dx.doi.org/10.4018/ijghpc.2018010104.

Abstract:
With the growth of data-parallel computing, the role of GPU computing in non-graphics applications such as image processing has become a research focus. Convolution is an integral operation in filtering, smoothing, and edge detection. In this article, the process of convolution is realized as a sparse linear system and is solved using Sparse Matrix-Vector Multiplication (SpMV). The Compressed Sparse Row (CSR) format of SpMV shows better CPU performance than normal convolution. To overcome the stalling of threads on short rows in the GPU implementation of CSR SpMV, a more efficient model is proposed that uses the Adaptive-Compressed Row Storage (A-CSR) format. Using CSR in the convolution process achieves a 1.45x speedup for image smoothing and a 1.159x speedup for edge detection compared with normal convolution. On the GPU, the adaptive CSR format achieves an average speedup of 2.05x for image smoothing and 1.58x for edge detection.
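The CPU baseline that such GPU work builds on is the standard row-wise CSR SpMV loop. The following is a minimal serial sketch in C, assuming the array layout from the example above; the paper's Adaptive-CSR GPU kernel is not reproduced here.

    /* y = A * x for an n-row matrix in CSR form: the standard serial
     * kernel (the paper's Adaptive-CSR GPU variant is not shown).
     * row_ptr has n + 1 entries. */
    void spmv_csr(int n, const int *row_ptr, const int *col_idx,
                  const double *val, const double *x, double *y) {
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
                sum += val[k] * x[col_idx[k]];  /* gather from x */
            y[i] = sum;
        }
    }

On a GPU, mapping one thread or warp per row makes the inner loop's trip count depend on the row length, which is exactly the short-row stalling problem that adaptive formats target.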
3

Yan, Kaizhuang, Yongxian Wang, and Wenbin Xiao. "A New Compression and Storage Method for High-Resolution SSP Data Based-on Dictionary Learning." Journal of Marine Science and Engineering 10, no. 8 (August 10, 2022): 1095. http://dx.doi.org/10.3390/jmse10081095.

Abstract:
The sound speed profile data of seawater provide an important basis for underwater acoustic modeling and analysis, sonar performance evaluation, and underwater acoustic decision support. The data volume of a high-resolution sound speed profile is vast, and its demand for storage space is high, which severely limits the analysis and application of high-resolution sound speed profile data in marine acoustics. This paper uses dictionary learning to achieve sparse coding of the high-resolution sound speed profile and uses a compressed sparse row method to compress and store the sparse representation of the data matrix. The influence of related parameters on the compression ratio and recovery error is analyzed and discussed, as are different application scenarios and compression processing methods. Comparative experiments show that the average error of the compressed sound speed profile data is less than 0.5 m/s, the maximum error is less than 3 m/s, and the data volume is about 10% to 15% of the original. This method significantly reduces the storage requirements of high-resolution sound speed profile data while ensuring its accuracy, providing technical support for efficient and convenient access to high-resolution sound speed profiles.
4

Knopp, T., and A. Weber. "Local System Matrix Compression for Efficient Reconstruction in Magnetic Particle Imaging." Advances in Mathematical Physics 2015 (2015): 1–7. http://dx.doi.org/10.1155/2015/472818.

Abstract:
Magnetic particle imaging (MPI) is a quantitative method for determining the spatial distribution of magnetic nanoparticles, which can be used as tracers for cardiovascular imaging. For reconstructing a spatial map of the particle distribution, the system matrix describing the magnetic particle imaging equation has to be known. Due to the complex dynamic behavior of the magnetic particles, the system matrix is commonly measured in a calibration procedure. In order to speed up the reconstruction process, recently, a matrix compression technique has been proposed that makes use of a basis transformation in order to compress the MPI system matrix. By thresholding the resulting matrix and storing the remaining entries in compressed row storage format, only a fraction of the data has to be processed when reconstructing the particle distribution. In the present work, it is shown that the image quality of the algorithm can be considerably improved by using a local threshold for each matrix row instead of a global threshold for the entire system matrix.
5

Christnatalis, Christnatalis, Bachtiar Bachtiar, and Rony Rony. "Comparative Compression of Wavelet Haar Transformation with Discrete Wavelet Transform on Colored Image Compression." JOURNAL OF INFORMATICS AND TELECOMMUNICATION ENGINEERING 3, no. 2 (January 20, 2020): 202–9. http://dx.doi.org/10.31289/jite.v3i2.3154.

Abstract:
In this research, images are compressed using the Haar wavelet transformation method and the discrete wavelet transform algorithm. Image compression based on the Haar wavelet transform uses decomposition along the row direction followed by decomposition along the column direction. In discrete wavelet transform-based image compression, the compressed image can be smaller because information that is barely perceptible to humans is eliminated, so the data remains usable even though it is compressed. The data used were collected directly, and the tests show that digital image compression based on the Haar wavelet transformation achieves a compression ratio of 41%, while the discrete wavelet transform reaches 29.5%. Regarding the efficiency of storage media, it can be concluded that the right algorithm to choose is the Haar wavelet transformation algorithm. To improve compression results, it is recommended to use wavelet transforms other than Haar, such as Daubechies, symlets, and so on.
6

Tanaka, Teruo, Ryo Otsuka, Akihiro Fujii, Takahiro Katagiri, and Toshiyuki Imamura. "Implementation of D-Spline-Based Incremental Performance Parameter Estimation Method with ppOpen-AT." Scientific Programming 22, no. 4 (2014): 299–307. http://dx.doi.org/10.1155/2014/310879.

Abstract:
In automatic performance tuning (AT), a primary aim is to optimize performance parameters for particular computational environments in ordinary mathematical libraries. An important issue for AT is reducing the time required to estimate optimal performance parameters. To reduce this estimation time, we previously proposed the Incremental Performance Parameter Estimation (IPPE) method, which estimates optimal performance parameters by inserting suitable sampling points based on computational results for a fitting function. As the fitting function, we introduced d-Spline, which is highly adaptable and requires little estimation time. In this paper, we report the implementation of the IPPE method with ppOpen-AT, a scripting language (set of directives) with features that reduce the workload of developers of mathematical libraries with AT features. To confirm the effectiveness of the IPPE method for runtime-phase AT, we applied it to sparse matrix-vector multiplication (SpMV), using the block size of the blocked compressed row storage (BCRS) sparse matrix structure as the performance parameter. The experimental results show that the cost of AT using the IPPE method in the runtime phase was negligibly small. Moreover, using the obtained optimal value, the execution time of the SpMV routine was reduced by 44% when comparing plain compressed row storage with BCRS (block size 8).
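BCRS applies the CSR idea to fixed-size dense blocks rather than to individual entries, which is why the block size acts as a tunable performance parameter. Below is a hedged sketch in C of SpMV for square b-by-b blocks; the function and variable names are illustrative, and the output vector y is assumed to be zero-initialized by the caller.

    /* y = A * x for a matrix in blocked compressed row storage (BCRS/BSR)
     * with square b-by-b blocks stored densely in row-major order.
     * nb is the number of block rows; row_ptr has nb + 1 entries and
     * col_idx holds one block-column index per block, so the index
     * arrays shrink by roughly a factor of b*b relative to plain CSR. */
    void spmv_bcrs(int nb, int b, const int *row_ptr, const int *col_idx,
                   const double *val, const double *x, double *y) {
        for (int ib = 0; ib < nb; ib++)                     /* block row */
            for (int k = row_ptr[ib]; k < row_ptr[ib + 1]; k++) {
                const double *blk = val + (long)k * b * b;  /* dense block */
                for (int r = 0; r < b; r++)
                    for (int c = 0; c < b; c++)
                        y[ib * b + r] += blk[r * b + c] * x[col_idx[k] * b + c];
            }
    }

Larger blocks amortize index storage but pad the matrix with explicit zeros wherever blocks are not fully dense, which is the trade-off that block-size tuning navigates.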
7

AlAhmadi, Sarah, Thaha Mohammed, Aiiad Albeshri, Iyad Katib, and Rashid Mehmood. "Performance Analysis of Sparse Matrix-Vector Multiplication (SpMV) on Graphics Processing Units (GPUs)." Electronics 9, no. 10 (October 13, 2020): 1675. http://dx.doi.org/10.3390/electronics9101675.

Abstract:
Graphics processing units (GPUs) have delivered remarkable performance for a variety of high performance computing (HPC) applications through massive parallelism. One such application is sparse matrix-vector (SpMV) computation, which is central to many scientific, engineering, and other applications, including machine learning. No single SpMV storage or computation scheme provides consistent and sufficiently high performance for all matrices due to their varying sparsity patterns. An extensive literature review reveals that the performance of SpMV techniques on GPUs has not been studied in sufficient detail. In this paper, we provide a detailed performance analysis of SpMV on GPUs using four notable sparse matrix storage schemes (compressed sparse row (CSR), ELLPACK (ELL), hybrid ELL/COO (HYB), and compressed sparse row 5 (CSR5)), five performance metrics (execution time, giga floating point operations per second (GFLOPS), achieved occupancy, instructions per warp, and warp execution efficiency), five matrix sparsity features (nnz, anpr, nprvariance, maxnpr, and distavg), and 17 sparse matrices from 10 application domains (chemical simulations, computational fluid dynamics (CFD), electromagnetics, linear programming, economics, etc.). Subsequently, based on the deeper insights gained through the detailed performance analysis, we propose a technique called the heterogeneous CPU-GPU Hybrid (HCGHYB) scheme. It utilizes both the CPU and GPU in parallel and provides better performance than the HYB format, with an average speedup of 1.7x. Heterogeneous computing is an important direction for SpMV and other application areas. Moreover, to the best of our knowledge, this is the first work in which SpMV performance on GPUs has been discussed in such depth. We believe that this work on SpMV performance analysis and the heterogeneous scheme will open up many new directions and improvements for the SpMV computing field in the future.
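Of the storage schemes compared in this study, ELLPACK trades padding for regular memory access. A minimal sketch in C, assuming column-major storage with rows padded to the longest row and unused slots marked by a column index of -1 (one common convention, not necessarily the paper's):

    /* y = A * x for an n-row matrix in ELLPACK (ELL) format: every row is
     * padded to max_nnz entries and the arrays are stored column-major,
     * so consecutive rows (threads, on a GPU) touch consecutive addresses.
     * Padded slots are marked with a column index of -1. */
    void spmv_ell(int n, int max_nnz, const int *col_idx,
                  const double *val, const double *x, double *y) {
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int k = 0; k < max_nnz; k++) {
                int c = col_idx[k * n + i];   /* column-major indexing */
                if (c >= 0)
                    sum += val[k * n + i] * x[c];
            }
            y[i] = sum;
        }
    }

The HYB format exists precisely because one long row inflates max_nnz and thus the padding for every other row; HYB keeps the regular part in ELL and spills the excess entries to COO.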
8

Zhang, Xi Xi, Yu Jing Jia, and Guang Zhen Cheng. "The Water Sump Cleaning Machine by Vacuum Suction." Applied Mechanics and Materials 201-202 (October 2012): 785–88. http://dx.doi.org/10.4028/www.scientific.net/amm.201-202.785.

Abstract:
This article describes a vacuum water sump cleaning machine used to clean up coal mine water sumps. The cleaning machine is composed of a mechanical structure and electrical control devices. Its parts include a walking flatbed, a mud storage tank, vacuum pumps, suction pipes, mud tubes, swing devices, control valves, and pressure lines. In operation, the cleaning machine evacuates the mud storage tank through the vacuum air feeder. As the vacuum level in the tank increases, outside atmospheric pressure pushes the mud into the reservoir along the suction tube. When the mud storage tank is full, the vacuum pump automatically shuts down. With the vacuum valve closed and the pressure valve opened, compressed air forces the slime in the tank into the mine car through the mud discharge tube. The layout of this cleaning machine is reasonable; moreover, it is flexible and convenient to operate, so it significantly reduces labor intensity and improves clearance efficiency.
9

Ji, Guo Liang, Yang De Feng, Wen Kai Cui, and Liang Gang Lu. "Implementation Procedures of Parallel Preconditioning with Sparse Matrix Based on FEM." Applied Mechanics and Materials 166-169 (May 2012): 3166–73. http://dx.doi.org/10.4028/www.scientific.net/amm.166-169.3166.

Abstract:
A technique to assemble the global stiffness matrix in a sparse storage format, together with two parallel solvers for FEM-based sparse linear systems, is presented. The assembly method uses a data structure named the associated node at intermediate stages to finally arrive at the Compressed Sparse Row (CSR) format. The associated nodes record information about the connection of nodes in the mesh. The technique can save a large amount of memory because it stores only the nonzero elements of the global stiffness matrix. This method is simple and effective. The solvers are restarted GMRES iterative solvers with Jacobi and sparse approximate inverse (SPAI) preconditioning, respectively. Numerical experiments show that both preconditioners can improve the convergence of the iterative method, and SPAI is more powerful than Jacobi in terms of reducing the number of iterations and improving parallel efficiency. Both solvers can be used to solve large sparse linear systems.
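As a concrete illustration of why CSR pairs naturally with Jacobi preconditioning, the sketch below extracts the diagonal from a CSR matrix and applies M^{-1} = diag(A)^{-1} to a residual. This is a generic sketch, not the authors' code, and it assumes every row has a nonzero diagonal entry.

    /* Jacobi preconditioning with a CSR matrix: M = diag(A), so applying
     * M^{-1} to a residual r is an elementwise divide by the diagonal.
     * A generic sketch; it assumes each row has a nonzero diagonal. */
    void jacobi_setup(int n, const int *row_ptr, const int *col_idx,
                      const double *val, double *diag) {
        for (int i = 0; i < n; i++)
            for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
                if (col_idx[k] == i) { diag[i] = val[k]; break; }
    }

    void jacobi_apply(int n, const double *diag, const double *r, double *z) {
        for (int i = 0; i < n; i++)
            z[i] = r[i] / diag[i];  /* z = M^{-1} r */
    }

Each application is a single pass over n values with perfect parallelism, which is why Jacobi is the cheapest baseline against which preconditioners such as SPAI are compared.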
10

Mahmoud, Mohammed, Mark Hoffmann, and Hassan Reza. "Developing a New Storage Format and a Warp-Based SpMV Kernel for Configuration Interaction Sparse Matrices on the GPU." Computation 6, no. 3 (August 24, 2018): 45. http://dx.doi.org/10.3390/computation6030045.

Abstract:
Sparse matrix-vector multiplication (SpMV) can be used to solve diversely scaled linear systems and eigenvalue problems that exist in numerous and varied scientific applications. One of the scientific applications that SpMV is involved in is known as Configuration Interaction (CI). CI is a linear method for solving the nonrelativistic Schrödinger equation for quantum chemical multi-electron systems, and it can deal with the ground state as well as multiple excited states. In this paper, we have developed a hybrid approach for dealing with CI sparse matrices. The proposed model includes a newly developed hybrid format for storing CI sparse matrices on the Graphics Processing Unit (GPU). In addition, the proposed model includes an SpMV kernel for multiplying the CI matrix (in the proposed format) by a vector, using the C language and the Compute Unified Device Architecture (CUDA) platform. The proposed SpMV kernel is a vector kernel that uses the warp approach. We have gauged the newly developed model in terms of two primary factors, memory usage and performance. Our proposed kernel was compared with the cuSPARSE library and the CSR5 (Compressed Sparse Row 5) format and outperformed both.

Dissertations / Theses on the topic "Compressed Row Storage"

1

Ramesh, Chinthala. "Hardware-Software Co-Design Accelerators for Sparse BLAS." Thesis, 2017. http://etd.iisc.ac.in/handle/2005/4276.

Abstract:
Sparse Basic Linear Algebra Subroutines (Sparse BLAS) is an important library comprising three levels of subroutines. Level 1 Sparse BLAS routines compute over a sparse vector and a sparse/dense vector, Level 2 deals with sparse matrix and vector operations, and Level 3 deals with sparse matrix and dense matrix operations. On General Purpose Processors (GPPs), these routines not only underutilize hardware resources but also take more compute time than the workload warrants, owing to the poor data locality of sparse vector/matrix storage formats. In the literature, tremendous software effort has been put into improving the performance of these routines on GPPs. GPPs are best suited to applications with high data locality, whereas Sparse BLAS routines operate on data with little locality, so GPP performance is poor. Various custom function units (hardware accelerators) proposed in the literature have proved more efficient than software attempts to accelerate Sparse BLAS subroutines. Although existing hardware accelerators improve Sparse BLAS performance compared with software routines, there is still considerable scope to improve them. This thesis describes the existing software and hardware/software co-designs (HW/SW co-designs) and identifies the limitations of these existing solutions. We propose a new sparse data representation called Sawtooth Compressed Row Storage (SCRS) and corresponding SpMV and SpMM algorithms. SCRS-based SpMV and SpMM perform better than existing software solutions but still do not reach theoretical peak performance. The knowledge gained from studying the limitations of existing solutions, including the proposed SCRS-based SpMV and SpMM, is used to propose new HW/SW co-designs. Software accelerators are limited by the hardware properties of GPPs and GPUs themselves; hence, we propose HW/SW co-designs to accelerate a few basic Sparse BLAS operations (SpVV and SpMV). Our proposed parallel Sparse BLAS HW/SW co-design achieves near theoretical peak performance with reasonable hardware resources.

Book chapters on the topic "Compressed Row Storage"

1

D’Azevedo, Eduardo F., Mark R. Fahey, and Richard T. Mills. "Vectorized Sparse Matrix Multiply for Compressed Row Storage Format." In Lecture Notes in Computer Science, 99–106. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11428831_13.

2

Jensen, Søren Kejser, Christian Thomsen, and Torben Bach Pedersen. "ModelarDB: Integrated Model-Based Management of Time Series from Edge to Cloud." In Transactions on Large-Scale Data- and Knowledge-Centered Systems LIII, 1–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2023. http://dx.doi.org/10.1007/978-3-662-66863-4_1.

Abstract:
To ensure critical infrastructure is operating as expected, high-quality sensors are increasingly installed. However, due to the enormous amounts of high-frequency time series they produce, it is impossible or infeasible to transfer or even store these time series in the cloud when using state-of-the-practice compression methods. Thus, simple aggregates, e.g., 1-10-minute averages, are stored instead of the raw time series. However, by only storing these simple aggregates, informative outliers and fluctuations are lost. Many Time Series Management Systems (TSMSs) have been proposed to efficiently manage time series, but they are generally designed for either the edge or the cloud. In this paper, we describe a new version of the open-source model-based TSMS ModelarDB. The system is designed to be modular, and the same binary can be efficiently deployed on the edge and in the cloud. It also supports continuously transferring high-frequency time series compressed using models from the edge to the cloud. We first provide an overview of ModelarDB, analyze the requirements and limitations of the edge, and evaluate existing query engines and data stores for use on the edge. Then, we describe how ModelarDB has been extended to efficiently manage time series on the edge, a novel file-based data store, how ModelarDB's compression has been improved by not storing time series that can be derived from base time series, and how ModelarDB transfers high-frequency time series from the edge to the cloud. As the work that led to ModelarDB began in 2015, we also reflect on the lessons learned while developing it.
3

Shah, Tawheed Jan, and M. Tariq Banday. "Empirical Performance Analysis of Wavelet Transform Coding-Based Image Compression Techniques." In Examining Fractal Image Processing and Analysis, 57–99. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0066-8.ch004.

Abstract:
In this chapter, the performance of the wavelet transform-based EZW and SPIHT coding techniques has been evaluated and compared in terms of CR, PSNR, and MSE by applying them to similar color images at two standard resolutions. Applying these techniques to an entire color image, such as a passport-size photograph in which the region containing the face is more significant than other regions, results in a uniform loss of information and a lower compression ratio. To achieve high CRs and distribute image quality unevenly, this chapter proposes an ROI coding technique: the ROI portion is compressed using the discrete wavelet transform with Huffman coding, while the NROI is compressed with Huffman, EZW, or SPIHT coding, yielding effective compression with nearly no loss of quality in the ROI portion of the photograph. Further, higher CR and PSNR with lower MSE have been found for high-resolution photographs, permitting reduced storage space, faster transmission over low-bandwidth channels, and faster processing.
4

Rahman, Hakikur. "Interactive Multimedia Technologies for Distance Education Systems." In Encyclopedia of Multimedia Technology and Networking, Second Edition, 742–48. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch100.

Abstract:
Information is typically stored, manipulated, delivered, and retrieved using a plethora of existing and emerging technologies. Businesses and organizations must adopt these emerging technologies to remain competitive. However, the evolution and progress of the technology (object orientation, high-speed networking, Internet, and so on) has been so rapid that organizations are constantly facing new challenges in end-user training programs. These new technologies are impacting the whole organization, creating a paradigm shift which, in turn, enables them to do business in ways never possible before (Chatterjee & Jin, 1997). Information systems based on hypertext can be extended to include a wide range of data types, resulting in hypermedia, providing a new approach to information access with data storage devices such as magnetic media, video disk, and compact disk. Along with alphanumeric data, today's computer systems can handle text, graphics, and images, thus bringing audio and video into everyday use. The DETF Report (2000) notes that technology can be classified into noninteractive and time-delayed interactive systems, and interactive distance learning systems. Noninteractive and time-delayed interactive systems include printed materials, correspondence, one-way radio, and television broadcasting. Interactive distance learning systems can be termed "live interactive" or "stored interactive," and range from satellite and compressed videoconferencing to standalone computer-assisted instruction with two or more participants linked together but situated in locations that are separated by time and/or place. Different types of telecommunications technology are available for the delivery of educational programs to single and multiple sites throughout disunited areas and locations. Diaz (1999) indicated that there are numerous multimedia technologies that can facilitate self-directed, practice-centered learning and meet the challenges of educational delivery to the adult learner. However, delivering content via the WWW has been hampered by unreliable and inconsistent information transfer, resulting in unacceptable delays and the inability to effectively deliver complex multimedia elements, including audio, video, and graphics. A CD/Web hybrid, a Web site on a compact disc (CD) combining the strengths of the CD-ROM and the WWW, can facilitate the delivery of multimedia elements by preserving connectivity, even at constricted bandwidth. Compressing a Web site onto a CD-ROM can reduce the amount of time that students spend interacting with a given technology and can increase the amount of time they spend learning. University teaching and learning experiences are being replicated independently of time and place via appropriate technology-mediated learning processes, like the Internet, the Web, CD-ROM, and so on. However, it is possible to increase the educational gains possible by using the Internet while continuing to optimize the integration of other learning media and resources through interactive multimedia communications. Among other conventional interactive teaching methods, Interactive Multimedia Methods (IMMs) seem to be emerging as another mainstream approach within distance learning systems.

Conference papers on the topic "Compressed Row Storage"

1

Blumenthal, Rob, Brad Hutchinson, and Laith Zori. "Investigation of Transient CFD Methods Applied to a Transonic Compressor Stage." In ASME 2011 Turbo Expo: Turbine Technical Conference and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/gt2011-46635.

Abstract:
Understanding unsteady flow phenomena in compressor stages often requires the use of time-accurate CFD simulations. Due to the inherent differences in blade pitch between adjacent blade rows, the flow conditions at any given instant in adjacent blade rows differ. Simplified computation of the stage represented by a single blade in each row and simple periodic boundary conditions is therefore not possible. Depending on the blade counts, it may be necessary to model the entire annulus of the stage; however, this requires considerable computational time and memory resources. Several methods for modeling the transient flow in turbomachinery stages which require a minimal number of blade passages per row, and therefore reduced computational demands, have been presented in the literature. Recently, some of these methods have become available in commercial CFD solvers. This paper provides a brief description of the methods used, and how they are applied to a transonic compressor stage. The methods are evaluated and compared in terms of computational efficiency and storage requirements, and comparison is made to steady stage simulations. Comparisons to overall performance data and two-dimensional LDV measurements are used to assess the predictive capabilities of the methods. Computed flow features are examined, and compared with reported measurements.
2

Imaino, W., H. Rosen, K. Rubin, T. Strand, and M. Best. "Extending the Compact Disk Format to High Capacity for Video Applications." In Optical Data Storage. Washington, D.C.: Optica Publishing Group, 1994. http://dx.doi.org/10.1364/ods.1994.wa4.

Abstract:
Optical storage disks with multiple data layers [1] offer large potential increases in capacity over standard single-layer disks. In this approach, individual data layers are spaced far enough apart that any significant crosstalk is avoided, and they are accessed by moving the objective lens in the focus direction. The large numerical aperture of the objective allows data layer separations that are small enough to permit access by standard focus actuators in existing optical storage mechanisms. This scheme requires no change in the CD format, is fully backward compatible with CD Audio, CD-ROM, and CD-R disks, and provides a favorable development path for the delivery of compressed video. Twice the current CD capacity would enable playback of MPEG-1 compressed full-length movies from a single disk, while higher capacities would permit improvements in video quality.
3

Li, Bing, Jiaxin Chen, Dongming Zhang, Xiuguo Bao, and Di Huang. "Representation Learning for Compressed Video Action Recognition via Attentive Cross-modal Interaction with Motion Enhancement." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/148.

Abstract:
Compressed video action recognition has recently drawn growing attention, since it remarkably reduces the storage and computational cost via replacing raw videos by sparsely sampled RGB frames and compressed motion cues (e.g., motion vectors and residuals). However, this task severely suffers from the coarse and noisy dynamics and the insufficient fusion of the heterogeneous RGB and motion modalities. To address the two issues above, this paper proposes a novel framework, namely Attentive Cross-modal Interaction Network with Motion Enhancement (MEACI-Net). It follows the two-stream architecture, i.e. one for the RGB modality and the other for the motion modality. Particularly, the motion stream employs a multi-scale block embedded with a denoising module to enhance representation learning. The interaction between the two streams is then strengthened by introducing the Selective Motion Complement (SMC) and Cross-Modality Augment (CMA) modules, where SMC complements the RGB modality with spatio-temporally attentive local motion features and CMA further combines the two modalities with selective feature augmentation. Extensive experiments on the UCF-101, HMDB-51 and Kinetics-400 benchmarks demonstrate the effectiveness and efficiency of MEACI-Net.
4

Crosa, G., F. Pittaluga, A. Trucco Martinengo, F. Beltrami, A. Torelli, and F. Traverso. "Heavy-Duty Gas Turbine Plant Aerothermodynamic Simulation Using Simulink." In ASME 1996 Turbo Asia Conference. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/96-ta-022.

Abstract:
This paper presents a physical simulator for predicting the off-design and dynamic behaviour of a single-shaft heavy-duty gas turbine plant, suitable for gas-steam combined cycles. The mathematical model, which is nonlinear and based on the lumped parameter approach, is described by a set of first-order differential and algebraic equations. The plant components are described by adding to their steady-state characteristics the dynamic equations of mass, momentum, and energy balances. The state variables are mass flow rates, static pressures, static temperatures of the fluid, wall temperatures, and shaft rotational speed. The analysis has been applied to a 65 MW heavy-duty gas turbine plant with two off-board silo-type combustion chambers. To model the compressor, equipped with variable inlet guide vanes, a subdivision into five partial compressors in serial arrangement, separated by dynamic blocks, is adopted. The turbine is described using a one-dimensional, row-by-row mathematical model that takes into account both the air bleed cooling effect and the mass storage among the stages. The simulation model also considers the air bleed transformations from the compressor down to the turbine. Both combustion chambers have been modelled using a sequence of several sub-volumes to simulate the primary and secondary zones in the presence of three hybrid burners. A code has been created in the Simulink environment. Some dynamic responses of the simulated plant, equipped with a proportional-integral speed regulator, are presented.
5

Blazejewski, Theodore E., and Rajneesh Moudgil. "The BOS Compressor: An Economical Compressor Developed to Serve the Higher Speed, Higher Rod Load, and Higher Horsepower Requirements of the Gas Pipeline, Storage, Gathering, Process, and Similar Industries." In 2002 4th International Pipeline Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/ipc2002-27219.

Abstract:
Traditionally, large slow speed integral gas engines were used in Pipeline applications. Today, the need exists for smaller high-speed units with a lower total installed cost and quicker return on investments. Utilizing well over 100 years of gas compressor experience, Dresser-Rand has developed a new Big Oilfield Separable (BOS) frame and Dresser-Rand Advanced Reciprocating Technology (DART) cylinder line-up to serve the high-speed engine and motor driven Pipeline/Injection markets. By incorporating a longer stroke capability, this compressor also serves the electric motor driven Process Gas markets. First, this paper will identify the complete design, development, and test verification of the frame and running gear. Second, this paper will then identify the complete design, development, and performance verification of the DART cylinder. Together, the BOS frame and DART cylinders provide an economical replacement to the older slow speed integral gas engines that are utilized in Pipeline applications.
6

Ji, Shanhong, and Feng Liu. "Computation of Flutter of Turbomachinery Cascades Using a Parallel Unsteady Navier-Stokes Code." In ASME 1998 International Gas Turbine and Aeroengine Congress and Exhibition. American Society of Mechanical Engineers, 1998. http://dx.doi.org/10.1115/98-gt-043.

Abstract:
A quasi-three-dimensional multigrid Navier-Stokes solver on single and multiple passage domains is presented for solving unsteady flows around oscillating turbine and compressor blades. The conventional “direct store” method is used for applying the phase-shifted periodic boundary condition over a single blade passage. A parallel version of the solver using the Message Passing Interface (MPI) standard is developed for multiple passage computations. In the parallel multiple passage computations, the phase-shifted periodic boundary condition is converted to simple in-phase periodic condition. Euler and Navier-Stokes solutions are obtained for unsteady flows through an oscillating turbine cascade blade row with both the sequential and the parallel code. It is found that the parallel code offers almost linear speedup with multiple CPUs. In addition, significant improvement is achieved in convergence of the computation to a periodic unsteady state in the parallel multiple passage computations due to the use of in-phase periodic boundary conditions as compared to that in the single passage computations with phase-lagged periodic boundary conditions via the “direct store” method. The parallel Navier-Stokes code is also used to calculate the flow through an oscillating compressor cascade. Results are compared with experimental data and computations by other authors.
7

Davis, Mathew, and Iraj Ershaghi. "Geological Aspects of Using Saline Aquifers in the San Joaquin Basin for Energy Storage and Carbon Dioxide Sequestration." In SPE Western Regional Meeting. SPE, 2022. http://dx.doi.org/10.2118/209319-ms.

Abstract:
A question in the minds of many is the potential use of saline aquifers in California for storing compressed air and CO2. This paper is the result of an extensive study of the geological properties of subsurface saline-water-bearing geologic layers located below the freshwater limits in the San Joaquin Valley (SJV) of California. Many thousands of pass-through wells drilled for hydrocarbon extraction in the area can provide subsurface information on the saline aquifers. We discuss some of the saline aquifer properties and geologic aspects associated with the subsurface storage of compressed air and/or carbon dioxide. The raw database used to generate the information included archives of CalGEM on existing and previously drilled oil and gas wells in the SJV Basin, as well as separate studies by the USGS, Kang (2016), and Gillespi (2019). We mapped these aquifers across the valley and estimated ranges of pore volume, deliverability, and injectivity for storage purposes. We also studied the sealing characteristics of these sands with respect to the overburden and underburden and the geologic faulting in the San Joaquin Basin. We studied the drilling reports of many key wells, identified the lithologies of interest, and examined relevant petrophysical properties. Estimates of capacity and deliverability were generated for these intervals. The legal ownership issues of operating these saline aquifers as storage were not part of this study. Our critical observations include aspects of salinity, petrophysical properties, and areal extent. Knowing the salt content of in-situ water is essential for site selection and for the economics of repurposing idle wells to connect to these aquifers. We have noted that the base of underground sources of drinking water (USDWs) (<10,000 mg/L) slopes from northwest to southeast across Kern County, likely because of significant freshwater recharge from the Sierra Nevada Mountains. In the northwestern portion of Kern County, numerous wells contain waters between 3,000 and 10,000 ppm at depths of less than 2,000 ft, particularly in the nonmarine Tulare Formation. At North Belridge field, a salinity reversal is apparent below 6,900 ft, and salinities for zones below 7,200 ft range from 10,000 to 32,000 ppm (Gillespi, 2019). From the maps and correlative sections relating to the areal extent of the target saltwater sands, we estimated the range of storage volumes, injectivity, and deliverability capacities for various wet sands. The information in this paper is a reference point for operators in the SJV, CA. It can help with site selection for converting existing idle wells that are on the verge of abandonment and repurposing them for energy storage and for subsurface CO2 and other waste disposal using the shallow saline aquifers.
8

Germer, Maxim, Uwe Marschner, and Andreas Richter. "Stochastic Signal Analysis and Processing of Non-Harmonic, Periodic Vibrational Energy Harvesters." In ASME 2021 Conference on Smart Materials, Adaptive Structures and Intelligent Systems. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/smasis2021-68310.

Abstract:
This study presents a novel signal description based on stochastic signal analysis, applicable to electromagnetic, piezomagnetic, or other transducers without energy storage such as a capacitor. At present, peak power and average power are used to characterize the energy transducer. For non-harmonic signals, these quantities compress time in a way that inhibits predicting the transferable energy for different charge-based interface circuits used to charge a storage capacitor. The main objectives are, first, to represent the data meaningfully; second, to incorporate the effect of lossy rectifiers on the raw data; and third, to evaluate the performance of chosen energy harvesters together with their interface circuits efficiently. The foundation lies in considering the measured time signal as a realization of an ergodic process. The probability density function (PDF) and the cumulative distribution function (CDF) form the basis and are used to link the signal magnitudes to their probability of occurrence. New distribution functions are introduced to determine both the charge and the energy of an electromagnetic transducer above a specific signal amplitude, which can be imposed by the voltage drop of one or several components. The presented method is further applied to the well-known full-wave rectifier to calculate its momentary efficiency, which is limited to 92% for a sinusoidal signal. As a result, the paper applies the new stochastic signal representation to energy harvesters to extend state-of-the-art time- and frequency-domain signal analysis. Different interface circuits with energy storage can be quickly evaluated for different initial storage voltages, much faster than with powerful network simulators.
9

Gonzalez, Bob. "Development of the G16CM34 Engine as a High Efficiency Engine for Gas Transmission, Storage and Withdrawal Services." In 2002 4th International Pipeline Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/ipc2002-27301.

Abstract:
A new large-horsepower engine is available in North America, supported by a well-established network of authorized factory dealers. The driver is based on a reciprocating engine design delivering 7670 to 8180 HP depending on site conditions. The 16-cylinder prime mover is specially engineered for gas transmission, storage, and withdrawal service. Built on a diesel-engine-designed block, crankshaft, bearings, rods, and gear train, it provides long service intervals and 120,000 hours before major overhaul. Electronic ignition and combustion controls help conserve fuel, minimize emissions, and keep the engine operating at peak capability under a variety of ambient and loading conditions. Electronic monitoring protects critical components and systems while greatly simplifying maintenance. The electronic control tightly regulates the combustion process, cylinder by cylinder, to optimize efficiency. It also controls the main cylinder and prechamber fuel delivery. Using sensor data based on ambient and turbocharged-aftercooled air intake temperatures, the microprocessors in the control system continuously monitor available engine power. With this information, the PLC controlling the compressor can load or unload the compressor to match the available engine output. Fuel consumption is less than 5900 Btu/bhp-hr, with NOx emissions of 0.50 grams/bhp-hr. The mechanical efficiency of the engine is greater than 43%. The mechanical refinements designed into the prime mover are behind the high efficiency. For instance, the long-stroke design maximizes fuel efficiency. A solenoid-operated gas admission valve for each cylinder provides precise fuel metering. A calibration ring in the upper part of the cylinder liner helps reduce CO and NMHC emissions. Combustion gases do not collect in the gap between the piston and the liner wall, where most of these gases form; instead, piston action forces them back into the combustion chamber for complete burning. In the event of a decline in fuel quality, a three-piece connecting rod enables a quick change in compression ratio without changing the piston. The paper also covers maintenance intervals and costs, additional product features, and construction details. Figure 1 illustrates a side view of the engine as seen from the front.
10

Cao, Jianming, Paul Allaire, Timothy Dimond, and Saeid Dousti. "Auxiliary Bearing System Optimization for AMB Supported Rotors Based on Rotor Drop Analysis: Part II — Optimization for Example Vertical and Horizontal Machines." In ASME Turbo Expo 2016: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/gt2016-56324.

Abstract:
This paper forms Part II of the rotor drop analysis, focusing on auxiliary bearing system design and optimization based on the rotor drop analysis methods introduced in Part I. Optimization focuses on the shaft orbit, the maximum ball bearing stress, and how to avoid possible ball bearing damage due to impact loading during rotor drop by optimizing the auxiliary design, including bearing selection, preload method, radial and axial damping elements, and flexible bearing support. Using the detailed rotor drop model and a time-transient method, a variety of simulations of 1) an energy storage vertical flywheel system and 2) an 8-stage horizontal centrifugal compressor are conducted to investigate the effects of auxiliary bearing design and to optimize the auxiliary system. Axial drops, radial drops, and combined radial/axial drops are all evaluated, considering angular contact auxiliary bearing size, number of rows, preload, and flexible damped bearing supports in the axial and radial directions. The rotor drop analysis method introduced in this paper may be used as a design toolbox for the auxiliary bearing system.

Reports on the topic "Compressed Row Storage"

1

Coulson, Wendy, and James McCarthy. PR-312-16202-R02 GHG Emission Factor Development for Natural Gas Compressors. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), May 2018. http://dx.doi.org/10.55274/r0011488.

Abstract:
The U.S. EPA Greenhouse Gas (GHG) Reporting Program (GHGRP) requires compressor stations and underground storage facilities to measure compressor vent, rod packing, and seal emissions for facilities subject to 40 CFR, Part 98, Subpart W. The objective of the project is to gather and evaluate 2011 - 2016 Subpart W compressor vent and seal methane emissions data from site measurements, and present final results of an analysis to develop methane Emission Factors (EFs) based on these data. The EFs and analysis of relative contribution from different sources can be used: (1) as alternatives to current emission factors for compressor methane emissions used for Transmission and Storage (T and S) operations in EPA's annual GHG inventory; (2) to provide an EF based emission estimate for Subpart W that replaces ongoing annual GHGRP vent measurements; and (3) to document the relative contribution of different compressor leak/seal sources and support alternative leak mitigation strategies. Comparisons of the EPA Annual GHG Inventory EFs to Subpart W based EFs in this report show consistently lower compressor emissions than estimates based on historical data or reports. Large leaks, which stem from less than 3% of the compressor measurements, increase the EFs by 26% to 194%, thus greatly impacting the EF results. Alternative EFs are provided for transmission and storage compressor methane emissions.