Academic literature on the topic 'Indexed data compression'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Indexed data compression.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Indexed data compression"

1

Kaneiwa, Ken, and Koji Fujiwara. "The Compression of Indexed Data and Fast Search for Large RDF Graphs." Transactions of the Japanese Society for Artificial Intelligence 33, no. 2 (2018): E-H43_1–10. http://dx.doi.org/10.1527/tjsai.e-h43.

2

Bouza, M. K. "Analysis and modification of graphic data compression algorithms." Artificial Intelligence 25, no. 4 (December 25, 2020): 32–40. http://dx.doi.org/10.15407/jai2020.04.032.

Abstract:
The article examines the JPEG and JPEG-2000 compression algorithms on various graphic images. The main steps of both algorithms are given, and their advantages and disadvantages are noted. The main differences between JPEG and JPEG-2000 are analyzed. It is noted that the JPEG-2000 algorithm removes visually unpleasant effects, which makes it possible to highlight important areas of the image and improve the quality of their compression. The features of each step of the algorithms are considered and the difficulties of their implementation are compared. The effectiveness of each algorithm is demonstrated on a full-color image of the BSU emblem. The compression ratios obtained with both algorithms are shown in the corresponding tables, for a wide range of quality values from 1 to 10. Various types of images were studied: black and white, business graphics, indexed and full color. A modified LZW (Lempel-Ziv-Welch) algorithm is presented, which is applicable to compressing a variety of information, from text to images. The modification is based on limiting the graphic file to 256 colors, which makes it possible to index each color with one byte instead of three. The efficiency of this modification grows with increasing image size. The modified LZW algorithm can be adapted to any image, from single-color to full-color. The prepared test images were indexed to the required number of colors using the FastStone Image Viewer program. For each image, seven copies were obtained, containing 4, 8, 16, 32, 64, 128 and 256 colors, respectively. Testing showed that the modified version of the LZW algorithm achieves, on average, twice the compression ratio. In the class of full-color images, however, both algorithms showed the same results.
The developed modification of the LZW algorithm can be successfully applied in the field of site design, especially in the case of so-called flat design. The comparative characteristics of the basic and modified methods are presented.
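The palette-indexing idea above (one byte per color instead of three, then dictionary compression) can be made concrete with a minimal LZW coder over byte strings. This is a generic textbook sketch, not the authors' modified algorithm; function names are illustrative.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Classic LZW: grow a dictionary of byte sequences, emit integer codes."""
    table = {bytes([i]): i for i in range(256)}  # seed with all single bytes
    w, out = b"", []
    for b in data:
        wb = w + bytes([b])
        if wb in table:
            w = wb                      # extend the current match
        else:
            out.append(table[w])        # emit code for the longest match
            table[wb] = len(table)      # learn the new sequence
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    """Invert lzw_compress by rebuilding the same dictionary on the fly."""
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for c in codes[1:]:
        # The only code that can be unknown is the one being defined right now.
        entry = table[c] if c in table else w + w[:1]
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)
```

On palette-indexed data (one byte per pixel) repeated runs of colors become dictionary entries quickly, which is why the 256-color restriction helps LZW-style coders.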
3

Senthilkumar, Radha, Gomathi Nandagopal, and Daphne Ronald. "QRFXFreeze: Queryable Compressor for RFX." Scientific World Journal 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/864750.

Abstract:
The verbose nature of XML has been mulled over again and again and many compression techniques for XML data have been excogitated over the years. Some of the techniques incorporate support for querying the XML database in its compressed format while others have to be decompressed before they can be queried. XML compression in which querying is directly supported instantaneously with no compromise over time is forced to compromise over space. In this paper, we propose the compressor, QRFXFreeze, which not only reduces the space of storage but also supports efficient querying. The compressor does this without decompressing the compressed XML file. The compressor supports all kinds of XML documents along with insert, update, and delete operations. The forte of QRFXFreeze is that the textual data are semantically compressed and are indexed to reduce the querying time. Experimental results show that the proposed compressor performs much better than other well-known compressors.
4

Hernández-Illera, Antonio, Miguel A. Martínez-Prieto, Javier D. Fernández, and Antonio Fariña. "iHDT++: improving HDT for SPARQL triple pattern resolution." Journal of Intelligent & Fuzzy Systems 39, no. 2 (August 31, 2020): 2249–61. http://dx.doi.org/10.3233/jifs-179888.

Abstract:
RDF self-indexes compress the RDF collection and provide efficient access to the data without prior decompression (via the so-called SPARQL triple patterns). HDT is one of the reference solutions in this scenario, with several applications that lower the barrier to both publication and consumption of Big Semantic Data. However, the simple design of HDT takes a compromise position between compression effectiveness and retrieval speed. In particular, it supports scan and subject-based queries, but it requires additional indexes to resolve predicate- and object-based SPARQL triple patterns. A recent variant, HDT++, improves HDT compression ratios, but it does not retain the original HDT retrieval capabilities. In this article, we extend HDT++ with additional indexes to support full SPARQL triple pattern resolution with a lower memory footprint than the original indexed HDT (called HDT-FoQ). Our evaluation shows that the resultant structure, iHDT++, requires 70-85% of the original HDT-FoQ space (and as little as 48-72% for an HDT Community variant). In addition, iHDT++ shows significant performance improvements (up to one order of magnitude) for most triple pattern queries, being competitive with state-of-the-art RDF self-indexes.
5

Moneta, G. L., A. D. Nicoloff, and J. M. Porter. "Compression Treatment of Chronic Venous Ulceration: A Review." Phlebology: The Journal of Venous Disease 15, no. 3-4 (December 2000): 162–68. http://dx.doi.org/10.1177/026835550001500316.

Abstract:
Objective: To review the recent medical literature with regard to the use of compressive therapy in healing and preventing the recurrence of venous ulceration. Methods: Searches of Medline and Embase medical literature databases. Appropriate non-indexed journals and textbooks were also reviewed. Synthesis: Elastic compression therapy is regarded as the ‘gold standard’ treatment for venous ulceration. The benefits of elastic compression therapy in the treatment of venous ulceration may be mediated through favourable alterations in venous haemodynamics, micro-circulatory haemodynamics and/or improvement in subcutaneous Starling forces. Available data indicate compressive therapy is highly effective in healing of the large majority of venous ulcers. Elastic compression stockings, Unna boots, as well as multi-layer elastic wraps, have all been noted to achieve excellent healing rates for venous ulcers. In compliant patients it appears that approximately 75% of venous ulcers can be healed by 6 months, and up to 90% by 1 year. Non-healing of venous ulcers is associated with lack of patient compliance with treatment, large and long-standing venous ulceration and the coexistence of arterial insufficiency. Recurrence of venous ulceration is, however, a significant problem after healing with compressive therapy, even in compliant patients; approximately 20-30% of venous ulcers will recur by 2 years. Conclusions: Compressive therapy is capable of achieving high rates of healing of venous ulceration in compliant patients. Various forms of compression, including elastic, rigid and multi-layer dressings, are available depending on physician preference, the clinical situation and the needs of the individual patient. Compressive therapy, while effective, remains far from ideal. The future goals are to achieve faster healing of venous ulceration, less painful healing and freedom from ulcer recurrence.
6

Selivanova, Irina V. "Limitations of Applying the Data Compression Method to the Classification of Abstracts of Publications Indexed in Scopus." Vestnik NSU. Series: Information Technologies 18, no. 3 (2020): 57–68. http://dx.doi.org/10.25205/1818-7900-2020-18-3-57-68.

Abstract:
The paper describes the limitations of applying the data-compression-based method of classifying scientific texts to all categories of the ASJC classification used in the Scopus bibliographic database. It is shown that automatic generation of training samples for each category is a rather time-consuming process, and in some cases impossible due to the data-upload restrictions imposed by Scopus and the lack of category names in the Scopus Search API. Another reason is that many subject areas contain no journals, and accordingly no publications, that have only one category. Applying the method to all 26 subject areas is impossible due to their vastness, as well as to the original Scopus classification. Different subject areas often contain terminologically close categories, which makes it difficult to assign a publication to its true area. These findings also indicate that the classification currently used in Scopus and SciVal may not be completely reliable. For example, according to SciVal, in terms of the number of publications the category “Theoretical computer science” is in second place among all publications in the subject area “Mathematics”. The study showed that this category is one of the smallest, both in the number of journals and in publications carrying only this category. Thus, many studies based on the use of publications in ASJC may contain some inaccuracies.
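Compression-based text classification of the kind discussed above is commonly instantiated with the normalized compression distance (NCD): a text is assigned to the category whose training sample it compresses best with. A toy sketch with zlib follows; the training corpora here are hypothetical stand-ins, not Scopus data, and the exact variant used in the paper may differ.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small when x and y share structure."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(text: str, training: dict[str, str]) -> str:
    """Pick the category whose training sample is closest under NCD."""
    return min(training, key=lambda cat: ncd(text.encode(), training[cat].encode()))

# Hypothetical toy training samples, one per category (for illustration only).
training = {
    "computer science": "algorithm data structure index compression query complexity " * 20,
    "medicine": "patient clinical treatment therapy diagnosis trial symptom " * 20,
}
print(classify("we propose a compressed index supporting fast pattern queries on data", training))
```

The upload restrictions the paper describes bite exactly here: the method needs a sufficiently large, single-category training sample per ASJC category.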
7

Shibuya, Yoshihiro, and Matteo Comin. "Indexing k-mers in linear space for quality value compression." Journal of Bioinformatics and Computational Biology 17, no. 05 (October 2019): 1940011. http://dx.doi.org/10.1142/s0219720019400110.

Abstract:
Many bioinformatics tools heavily rely on k-mer dictionaries to describe the composition of sequences and allow for faster reference-free algorithms or look-ups. Unfortunately, naive k-mer dictionaries are very memory-inefficient, requiring a very large amount of storage space to save each k-mer. This problem is generally worsened by the necessity of an index for fast queries. In this work, we discuss how to build an indexed linear reference containing a set of input k-mers, and its application to the compression of quality scores in FASTQ files. Most of the entropy of sequencing data lies in the quality scores, which are thus difficult to compress. Here, we present an application to improve the compressibility of quality values while preserving the information needed for SNP calling. We show how a dictionary of significant k-mers, obtained from SNP databases and multiple genomes, can be indexed in linear space and used to improve the compression of quality values. Availability: The software is freely available at https://github.com/yhhshb/yalff.
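To see why a k-mer dictionary is useful (and why the naive version is memory-hungry), here is a deliberately naive sketch using a plain Python set; the paper's contribution is precisely a linear-space replacement for this kind of structure. All names are illustrative.

```python
def kmers(seq: str, k: int):
    """Yield every substring of length k (a k-mer) of seq."""
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]

def build_index(reference: str, k: int) -> set[str]:
    """Naive k-mer dictionary: one set entry per distinct k-mer.

    Memory cost is roughly O(k) bytes per distinct k-mer plus hash-table
    overhead, which is exactly the inefficiency the paper addresses.
    """
    return set(kmers(reference, k))

def covered_positions(read: str, index: set[str], k: int) -> list[int]:
    """Positions in the read whose k-mer occurs in the reference index."""
    return [i for i, km in enumerate(kmers(read, k)) if km in index]

ref = "ACGTACGTGACG"
idx = build_index(ref, 4)
print(covered_positions("ACGTAC", idx, 4))  # → [0, 1, 2]
```

In the quality-compression setting, positions of a read whose k-mers hit the dictionary are "significant" and keep their quality scores; the rest can be smoothed aggressively to improve compressibility.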
8

Gupta, Shweta, Sunita Yadav, and Rajesh Prasad. "Document Retrieval using Efficient Indexing Techniques." International Journal of Business Analytics 3, no. 4 (October 2016): 64–82. http://dx.doi.org/10.4018/ijban.2016100104.

Abstract:
Document retrieval plays a crucial role in retrieving relevant documents. Relevancy depends upon the occurrences of query keywords in a document. Several documents include similar key terms and hence need to be indexed. Most indexing techniques are based either on an inverted index or on a full-text index. Inverted indexes create lists and support word-based pattern queries, while full-text indexes handle queries comprising any sequence of characters rather than just words. Problems arise when text cannot be separated into words in some languages, and there are also difficulties with the space used by compressed versions of full-text indexes. Recently, a unique data structure called the wavelet tree has become popular in text compression and indexing. It indexes the words or characters of text documents and helps retrieve top-ranked documents more efficiently. This paper presents a review of the most recent efficient indexing techniques used in document retrieval.
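The wavelet tree mentioned above answers rank queries over a sequence by recursively halving the alphabet and storing one bit per symbol at each level. A minimal pointer-based sketch follows; production implementations use succinct bit vectors with O(1) rank rather than Python lists, so treat this as an illustration of the structure only.

```python
class WaveletTree:
    """Minimal pointer-based wavelet tree supporting rank queries."""

    def __init__(self, seq, alphabet=None):
        self.alphabet = sorted(set(seq)) if alphabet is None else alphabet
        if len(self.alphabet) <= 1:      # leaf: a run of one symbol
            self.n = len(seq)
            return
        mid = len(self.alphabet) // 2
        left_set = set(self.alphabet[:mid])
        # One bit per position: 0 = symbol goes left, 1 = symbol goes right.
        self.bits = [0 if c in left_set else 1 for c in seq]
        self.left = WaveletTree([c for c in seq if c in left_set], self.alphabet[:mid])
        self.right = WaveletTree([c for c in seq if c not in left_set], self.alphabet[mid:])

    def rank(self, c, i):
        """Number of occurrences of symbol c in seq[:i]."""
        if len(self.alphabet) == 1:
            return i if c == self.alphabet[0] else 0
        mid = len(self.alphabet) // 2
        ones = sum(self.bits[:i])        # naive rank on the bit vector
        if c in self.alphabet[:mid]:
            return self.left.rank(c, i - ones)
        return self.right.rank(c, ones)

wt = WaveletTree("abracadabra")
print(wt.rank("a", 11))  # → 5
```

Document-retrieval indexes built on wavelet trees use exactly these rank operations to count term occurrences and drive top-k document reporting in compressed space.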
9

Navarro, Gonzalo. "Indexing Highly Repetitive String Collections, Part I." ACM Computing Surveys 54, no. 2 (April 2021): 1–31. http://dx.doi.org/10.1145/3434399.

Abstract:
Two decades ago, a breakthrough in indexing string collections made it possible to represent them within their compressed space while at the same time offering indexed search functionalities. As this new technology permeated through applications like bioinformatics, the string collections experienced a growth that outperforms Moore’s Law and challenges our ability to handle them even in compressed form. It turns out, fortunately, that many of these rapidly growing string collections are highly repetitive, so that their information content is orders of magnitude lower than their plain size. The statistical compression methods used for classical collections, however, are blind to this repetitiveness, and therefore a new set of techniques has been developed to properly exploit it. The resulting indexes form a new generation of data structures able to handle the huge repetitive string collections that we are facing. In this survey, formed by two parts, we cover the algorithmic developments that have led to these data structures. In this first part, we describe the distinct compression paradigms that have been used to exploit repetitiveness, and the algorithmic techniques that provide direct access to the compressed strings. In the quest for an ideal measure of repetitiveness, we uncover a fascinating web of relations between those measures, as well as the limits up to which the data can be recovered, and up to which direct access to the compressed data can be provided. This is the basic aspect of indexability, which is covered in the second part of this survey.
10

Gupta, Pranjal, Amine Mhedhbi, and Semih Salihoglu. "Columnar storage and list-based processing for graph database management systems." Proceedings of the VLDB Endowment 14, no. 11 (July 2021): 2491–504. http://dx.doi.org/10.14778/3476249.3476297.

Abstract:
We revisit column-oriented storage and query processing techniques in the context of contemporary graph database management systems (GDBMSs). Similar to column-oriented RDBMSs, GDBMSs support read-heavy analytical workloads that however have fundamentally different data access patterns than traditional analytical workloads. We first derive a set of desiderata for optimizing storage and query processors of GDBMS based on their access patterns. We then present the design of columnar storage, compression, and query processing techniques based on these desiderata. In addition to showing direct integration of existing techniques from columnar RDBMSs, we also propose novel ones that are optimized for GDBMSs. These include a novel list-based query processor, which avoids expensive data copies of traditional block-based processors under many-to-many joins, a new data structure we call single-indexed edge property pages and an accompanying edge ID scheme, and a new application of Jacobson's bit vector index for compressing NULL values and empty lists. We integrated our techniques into the GraphflowDB in-memory GDBMS. Through extensive experiments, we demonstrate the scalability and query performance benefits of our techniques.
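The use of Jacobson's bit-vector rank for compressing NULL values, mentioned above, amounts to storing only the non-NULL values densely and keeping a bit vector with precomputed block ranks that maps a logical position to its slot in the dense array. The following is a minimal sketch under a toy block size; GraphflowDB's actual layout and constant-time rank structure are more involved.

```python
BLOCK = 8  # toy superblock size for precomputed ranks (assumption)

class NullCompressedColumn:
    """Store non-NULL values densely; a rank bit vector maps positions."""

    def __init__(self, values):
        self.bits = [v is not None for v in values]      # 1 = value present
        self.dense = [v for v in values if v is not None]
        # Jacobson-style acceleration: cumulative rank at block boundaries.
        self.block_rank = [0]
        for i in range(0, len(self.bits), BLOCK):
            self.block_rank.append(self.block_rank[-1] + sum(self.bits[i:i + BLOCK]))

    def rank1(self, i):
        """Number of 1-bits in bits[:i], using the precomputed block ranks."""
        b = i // BLOCK
        return self.block_rank[b] + sum(self.bits[b * BLOCK:i])

    def get(self, i):
        """Value at logical position i, or None if absent."""
        return self.dense[self.rank1(i)] if self.bits[i] else None

col = NullCompressedColumn([None, 7, None, None, 3, 5, None, None, None, 9])
print(col.get(4), col.get(0))  # → 3 None
```

The same mapping works for empty adjacency lists: a single bit per vertex marks non-empty lists, and rank over that bit vector locates the list's offset without storing per-vertex pointers.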

Dissertations / Theses on the topic "Indexed data compression"

1

Montecuollo, Ferdinando. "Compression and indexing of genomic data with confidentiality protection." Doctoral thesis, Universita degli studi di Salerno, 2015. http://hdl.handle.net/10556/1945.

Abstract:
The problem of compressing data while guaranteeing specific security properties to protect user privacy is an ongoing concern. On the other hand, high-throughput systems in genomics (e.g. the so-called Next Generation Sequencers) generate massive amounts of genetic data at affordable costs. As a consequence, huge DBMSs integrating many types of genomic information, clinical data and other (personal, environmental, historical, etc.) information are on the way. This will allow an unprecedented capability for large-scale, comprehensive and in-depth analysis of human beings and diseases; however, it will also constitute a formidable threat to user privacy. While the confidential storage of clinical data can be achieved with well-known methods from the field of relational databases, the same is not true for genomic data; the main goal of my research work was therefore the design of new compressed indexing schemas for the management of genomic data with confidentiality protection. For the effective processing of such huge amounts of data, a key point is the ability to perform high-speed search operations in secondary storage, operating directly on the data in compressed and encrypted form; I therefore devoted considerable effort to obtaining algorithms and data structures that enable pattern search operations on compressed and encrypted data in secondary storage, without preloading the data into main memory. [edited by Author]
2

Machado, Lennon de Almeida. "Busca indexada de padrões em textos comprimidos." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-09062010-222653/.

Abstract:
Pattern matching over a large document collection is a very recurrent problem nowadays, as the widespread use of search engines reveals. In order to accomplish the search in time independent of the collection size, the collection must be indexed once; the index size is typically linear in the size of the document collection. Data compression is another powerful resource for managing the ever-growing size of document collections. The objective of this work is to combine indexed search with data compression, examining alternatives to current solutions and seeking improvements in search time and memory usage. The analysis of index structures and compression algorithms indicates that combining block inverted files with Huffman word-based compression is an interesting solution, because it provides random access and compressed search. New prefix-free codes are also proposed in order to improve compression and to generate self-synchronized codes, that is, codes with truly viable random access. The advantage of these new codes is that, through the proposed mappings, they eliminate the need to build the Huffman code tree, which translates into memory savings, more compact encoding and shorter processing time. The results show reductions of 7% and 9% in compressed file size, with better compression and decompression times and lower memory consumption.
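The word-based Huffman coding at the heart of this thesis treats each distinct word, rather than each byte, as a symbol of the code. A compact textbook sketch of building such a code is below; the thesis's self-synchronizing prefix codes and tree-free mappings are refinements not shown here.

```python
import heapq
from collections import Counter

def huffman_codes(tokens):
    """Build a prefix-free bit string for each distinct token (word)."""
    # Each heap entry: (frequency, tiebreak id, {token: partial code}).
    heap = [(n, i, {tok: ""}) for i, (tok, n) in enumerate(Counter(tokens).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        na, _, ca = heapq.heappop(heap)   # two least-frequent subtrees
        nb, _, cb = heapq.heappop(heap)
        merged = {t: "0" + c for t, c in ca.items()}   # left branch
        merged.update({t: "1" + c for t, c in cb.items()})  # right branch
        heapq.heappush(heap, (na + nb, next_id, merged))
        next_id += 1
    return heap[0][2]

text = "to be or not to be that is the question to be".split()
codes = huffman_codes(text)
encoded = "".join(codes[w] for w in text)
```

Because the codes are prefix-free, the encoded stream can be decoded greedily; pairing such word codes with a block inverted file is what enables searching the compressed text directly.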
3

Chan, Ho Yin. "Graph-theoretic approach to the non-binary index assignment problem /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?ECED%202008%20CHAN.

4

Hcc-Hang, Jang. "Mismatch Address Index Encoding for Data Compression in Scan Test." 2007. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0009-1801200713012500.

5

Sadoghi, Hamedani Mohammad. "An Efficient, Extensible, Hardware-aware Indexing Kernel." Thesis, 2013. http://hdl.handle.net/1807/65515.

Abstract:
Modern hardware has the potential to play a central role in scalable data management systems. A realization of this potential arises in the context of indexing queries, a recurring theme in real-time data analytics, targeted advertising, algorithmic trading, and data-centric workflows, and of indexing data, a challenge in multi-version analytical query processing. To enhance query and data indexing, in this thesis, we present an efficient, extensible, and hardware-aware indexing kernel. This indexing kernel rests upon novel data structures and (parallel) algorithms that utilize the capabilities offered by modern hardware, especially abundance of main memory, multi-core architectures, hardware accelerators, and solid state drives. This thesis focuses on presenting our query indexing techniques to cope with processing queries in data-intensive applications that are susceptible to ever increasing data volume and velocity. At the core of our query indexing kernel lies the BE-Tree family of memory-resident indexing structures that scales by overcoming the curse of dimensionality through a novel two-phase space-cutting technique, an effective Top-k processing, and adaptive parallel algorithms to operate directly on compressed data (that exploits the multi-core architecture). Furthermore, we achieve line-rate processing by harnessing the unprecedented degrees of parallelism and pipelining only available through low-level logic design using FPGAs. Finally, we present a comprehensive evaluation that establishes the superiority of BE-Tree in comparison with state-of-the-art algorithms. In this thesis, we further expand the scope of our indexing kernel and describe how to accelerate analytical queries on (multi-version) databases by enabling indexes on the most recent data. Our goal is to reduce the overhead of index maintenance, so that indexes can be used effectively for analytical queries without being a heavy burden on transaction throughput. 
To achieve this end, we re-design the data structures in the storage hierarchy to employ an extra level of indirection over solid state drives. This indirection layer dramatically reduces the amount of magnetic disk I/Os that is needed for updating indexes and localizes the index maintenance. As a result, by rethinking how data is indexed, we eliminate the dilemma between update vs. query performance and reduce index maintenance and query processing cost substantially.

Books on the topic "Indexed data compression"

1

Levy, David M., and Ieva Saule. General anaesthesia for caesarean delivery. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780198713333.003.0022.

Abstract:
General anaesthesia (GA) is most often indicated for category 1 (immediate threat to life of mother or baby) caesarean delivery (CD) or when neuraxial anaesthesia has failed or is contraindicated. Secure intravenous access is essential. Jugular venous cannulation (with ultrasound guidance) is required if peripheral access is inadequate. A World Health Organization surgical safety checklist must be used. The shoulders and upper back should be ramped. Left lateral table tilt or other means of uterine displacement are essential to minimize aortocaval compression, and a head-up position is recommended to improve the efficiency of preoxygenation and reduce the likelihood of gastric contents reaching the oropharynx. Cricoid pressure is controversial. In the United Kingdom, thiopental remains the induction agent of choice, although there is scant evidence upon which to avoid propofol. In pre-eclampsia, it is essential to obtund the pressor response to laryngoscopy with remifentanil or alfentanil. Rocuronium is an acceptable alternative to succinylcholine for neuromuscular blockade. Sugammadex offers the possibility of swifter reversal of rocuronium than spontaneous recovery from succinylcholine. Management of difficult tracheal intubation is focused on ‘oxygenation without aspiration’ and prevention of airway trauma. The Classic™ laryngeal mask airway is the most commonly used rescue airway in the United Kingdom. There is a large set of data from fasted women of low body mass index who have undergone elective CD safely with a Proseal™ or Supreme™ laryngeal mask airway. Sevoflurane is the most popular volatile agent for maintenance of GA. The role of electroencephalography-based depth of anaesthesia monitors at CD remains to be established. Intraoperative end-tidal carbon dioxide tension should be maintained below 4.0 kPa.

Book chapters on the topic "Indexed data compression"

1

Gao, Yanzhen, Xiaozhen Bao, Jing Xing, Zheng Wei, Jie Ma, and Peiheng Zhang. "STrieGD: A Sampling Trie Indexed Compression Algorithm for Large-Scale Gene Data." In Lecture Notes in Computer Science, 27–38. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-05677-3_3.

2

Pibiri, Giulio Ermanno, and Rossano Venturini. "Inverted Index Compression." In Encyclopedia of Big Data Technologies, 1–8. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63962-8_52-1.

3

Pibiri, Giulio Ermanno, and Rossano Venturini. "Inverted Index Compression." In Encyclopedia of Big Data Technologies, 1–9. Cham: Springer International Publishing, 2012. http://dx.doi.org/10.1007/978-3-319-63962-8_52-2.

4

Pibiri, Giulio Ermanno, and Rossano Venturini. "Inverted Index Compression." In Encyclopedia of Big Data Technologies, 1051–58. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-77525-8_52.

5

Gupta, Sonam, Neha Katiyar, Arun Kumar Yadav, and Divakar Yadav. "Index Optimization Using Wavelet Tree and Compression." In Proceedings of Data Analytics and Management, 809–21. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6289-8_66.

6

Akritidis, Leonidas, and Panayiotis Bozanis. "Positional Data Organization and Compression in Web Inverted Indexes." In Lecture Notes in Computer Science, 422–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32600-4_31.

7

Li, Wei-Soul, Wen-Shyong Hsieh, and Ming-Hong Sun. "Index LOCO-I: A Hybrid Method of Data Hiding and Image Compression." In Lecture Notes in Computer Science, 463–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11553939_66.

8

Mallia, Antonio, Michał Siedlaczek, and Torsten Suel. "An Experimental Study of Index Compression and DAAT Query Processing Methods." In Lecture Notes in Computer Science, 353–68. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15712-8_23.

9

Valencia, David, and Antonio Plaza. "FPGA-Based Hyperspectral Data Compression Using Spectral Unmixing and the Pixel Purity Index Algorithm." In Computational Science – ICCS 2006, 888–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11758501_130.

10

Šalgová, Veronika. "The Impact of Table and Index Compression on Data Access Time and CPU Costs." In Information Systems and Technologies, 484–94. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04829-6_43.


Conference papers on the topic "Indexed data compression"

1

Cotumaccio, Nicola. "Graphs can be succinctly indexed for pattern matching in O(|E|^2 + |V|^{5/2}) time." In 2022 Data Compression Conference (DCC). IEEE, 2022. http://dx.doi.org/10.1109/dcc52660.2022.00035.

2

Beagley, Nathaniel, Chad Scherrer, Yan Shi, Brian H. Clowers, William F. Danielson, and Anuj R. Shah. "Increasing the Efficiency of Data Storage and Analysis Using Indexed Compression." In 2009 5th IEEE International Conference on e-Science (e-Science). IEEE, 2009. http://dx.doi.org/10.1109/e-science.2009.18.

3

Selivanova, I. V. "SCIENTIFIC TEXTS CLASSIFICATION BY COMPRESSING ABSTRACTS ON THE EXAMPLE OF PUBLICATIONS INDEXED IN SCOPUS BIBLIOGRAPHIC DATABASE." In XVII Russian Conference "Distributed Information and Computing Resources: Digital Twins and Big Data." Crossref, 2019. http://dx.doi.org/10.25743/ict.2019.93.10.027.

Abstract:
The paper investigates the applicability of a method for automatic classification of scientific texts based on data compression, previously applied successfully to the full texts of scientific articles, to the classification of texts based on their abstracts. Bibliographic descriptions of publications from the Scopus database were used for classification, and the results were compared against Scopus subject categories. It was found that building the training set from highly cited publications improves classification quality.
4

"Author Index." In Data Compression Conference. IEEE, 2005. http://dx.doi.org/10.1109/dcc.2005.19.

5

"Author index." In Data Compression Conference. IEEE, 2003. http://dx.doi.org/10.1109/dcc.2003.1194078.

6

Oliva, Marco, Massimiliano Rossi, Jouni Sirén, Giovanni Manzini, Tamer Kahveci, Travis Gagie, and Christina Boucher. "Efficiently Merging r-indexes." In 2021 Data Compression Conference (DCC). IEEE, 2021. http://dx.doi.org/10.1109/dcc50243.2021.00028.

7

"Author Index." In 2009 Data Compression Conference. IEEE, 2009. http://dx.doi.org/10.1109/dcc.2009.91.

8

"Author Index." In 2010 Data Compression Conference. IEEE, 2010. http://dx.doi.org/10.1109/dcc.2010.103.

9

Hrbek, Lukas, and Jan Holub. "Approximate String Matching for Self-Indexes." In 2016 Data Compression Conference (DCC). IEEE, 2016. http://dx.doi.org/10.1109/dcc.2016.25.

10

Chiu, Sheng-Yuan, Wing-Kai Hon, Rahul Shah, and Jeffrey Scott Vitter. "I/O-Efficient Compressed Text Indexes: From Theory to Practice." In 2010 Data Compression Conference. IEEE, 2010. http://dx.doi.org/10.1109/dcc.2010.45.


Reports on the topic "Indexed data compression"

1

Newman-Toker, David E., Susan M. Peterson, Shervin Badihian, Ahmed Hassoon, Najlla Nassery, Donna Parizadeh, Lisa M. Wilson, et al. Diagnostic Errors in the Emergency Department: A Systematic Review. Agency for Healthcare Research and Quality (AHRQ), December 2022. http://dx.doi.org/10.23970/ahrqepccer258.

Abstract:
Objectives. Diagnostic errors are a known patient safety concern across all clinical settings, including the emergency department (ED). We conducted a systematic review to determine the most frequent diseases and clinical presentations associated with diagnostic errors (and resulting harms) in the ED, measure error and harm frequency, as well as assess causal factors. Methods. We searched PubMed®, Cumulative Index to Nursing and Allied Health Literature (CINAHL®), and Embase® from January 2000 through September 2021. We included research studies and targeted grey literature reporting diagnostic errors or misdiagnosis-related harms in EDs in the United States or other developed countries with ED care deemed comparable by a technical expert panel. We applied standard definitions for diagnostic errors, misdiagnosis-related harms (adverse events), and serious harms (permanent disability or death). Preventability was determined by original study authors or differences in harms across groups. Two reviewers independently screened search results for eligibility; serially extracted data regarding common diseases, error/harm rates, and causes/risk factors; and independently assessed risk of bias of included studies. We synthesized results for each question and extrapolated U.S. estimates. We present 95 percent confidence intervals (CIs) or plausible range (PR) bounds, as appropriate. Results. We identified 19,127 citations and included 279 studies. 
The top 15 clinical conditions associated with serious misdiagnosis-related harms (accounting for 68% [95% CI 66 to 71] of serious harms) were (1) stroke, (2) myocardial infarction, (3) aortic aneurysm and dissection, (4) spinal cord compression and injury, (5) venous thromboembolism, (6/7 – tie) meningitis and encephalitis, (6/7 – tie) sepsis, (8) lung cancer, (9) traumatic brain injury and traumatic intracranial hemorrhage, (10) arterial thromboembolism, (11) spinal and intracranial abscess, (12) cardiac arrhythmia, (13) pneumonia, (14) gastrointestinal perforation and rupture, and (15) intestinal obstruction. Average disease-specific error rates ranged from 1.5 percent (myocardial infarction) to 56 percent (spinal abscess), with additional variation by clinical presentation (e.g., missed stroke average 17%, but 4% for weakness and 40% for dizziness/vertigo). There was also wide, superimposed variation by hospital (e.g., missed myocardial infarction 0% to 29% across hospitals within a single study). An estimated 5.7 percent (95% CI 4.4 to 7.1) of all ED visits had at least one diagnostic error. Estimated preventable adverse event rates were as follows: any harm severity (2.0%, 95% CI 1.0 to 3.6), any serious harms (0.3%, PR 0.1 to 0.7), and deaths (0.2%, PR 0.1 to 0.4). While most disease-specific error rates derived from mainly U.S.-based studies, overall error and harm rates were derived from three prospective studies conducted outside the United States (in Canada, Spain, and Switzerland, with combined n=1,758). If overall rates are generalizable to all U.S. ED visits (130 million, 95% CI 116 to 144), this would translate to 7.4 million (PR 5.1 to 10.2) ED diagnostic errors annually; 2.6 million (PR 1.1 to 5.2) diagnostic adverse events with preventable harms; and 371,000 (PR 142,000 to 909,000) serious misdiagnosis-related harms, including more than 100,000 permanent, high-severity disabilities and 250,000 deaths. 
Although errors were often multifactorial, 89 percent (95% CI 88 to 90) of diagnostic error malpractice claims involved failures of clinical decision-making or judgment, regardless of the underlying disease present. Key process failures were errors in diagnostic assessment, test ordering, and test interpretation. Most often these were attributed to inadequate knowledge, skills, or reasoning, particularly in “atypical” or otherwise subtle case presentations. Limitations included use of malpractice claims and incident reports for distribution of diseases leading to serious harms, reliance on a small number of non-U.S. studies for overall (disease-agnostic) diagnostic error and harm rates, and methodologic variability across studies in measuring disease-specific rates, determining preventability, and assessing causal factors. Conclusions. Although estimated ED error rates are low (and comparable to those found in other clinical settings), the number of patients potentially impacted is large. Not all diagnostic errors or harms are preventable, but wide variability in diagnostic error rates across diseases, symptoms, and hospitals suggests improvement is possible. With 130 million U.S. ED visits, estimated rates for diagnostic error (5.7%), misdiagnosis-related harms (2.0%), and serious misdiagnosis-related harms (0.3%) could translate to more than 7 million errors, 2.5 million harms, and 350,000 patients suffering potentially preventable permanent disability or death. Over two-thirds of serious harms are attributable to just 15 diseases and linked to cognitive errors, particularly in cases with “atypical” manifestations. Scalable solutions to enhance bedside diagnostic processes are needed, and these should target the most commonly misdiagnosed clinical presentations of key diseases causing serious harms. 
New studies should confirm overall rates are representative of current U.S.-based ED practice and focus on identified evidence gaps (errors among common diseases with lower-severity harms, pediatric ED errors and harms, dynamic systems factors such as overcrowding, and false positives). Policy changes to consider based on this review include: (1) standardizing measurement and research results reporting to maximize comparability of measures of diagnostic error and misdiagnosis-related harms; (2) creating a National Diagnostic Performance Dashboard to track performance; and (3) using multiple policy levers (e.g., research funding, public accountability, payment reforms) to facilitate the rapid development and deployment of solutions to address this critically important patient safety concern.
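The abstract's national extrapolation is plain multiplication of the overall rates by the annual visit volume. A quick check using the rounded rates quoted above (because the rates are rounded, the serious-harm product lands near 390,000 rather than the report's 371,000, which is computed from the unrounded rate):

```python
# Back-of-envelope reproduction of the abstract's extrapolation.
# Uses the rounded point-estimate rates quoted in the abstract, so results
# differ slightly from the report's exact figures (unrounded rates).
ed_visits = 130_000_000  # estimated annual U.S. ED visits

rates = {
    "diagnostic errors": 0.057,                   # 5.7% of visits
    "preventable harms": 0.020,                   # 2.0% of visits
    "serious misdiagnosis-related harms": 0.003,  # 0.3% of visits
}

for outcome, rate in rates.items():
    print(f"{outcome}: ~{ed_visits * rate:,.0f} per year")
```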
