Journal articles on the topic "Compressione dati"

Consult the top 50 journal articles for your research on the topic "Compressione dati".


1

Visocchi, M., M. Meglio, F. La Marca, and B. Cioni. "Esostosi ereditaria multipla e compressione cervicodorsale." Rivista di Neuroradiologia 9, no. 4 (August 1996): 501–3. http://dx.doi.org/10.1177/197140099600900424.

Abstract:
Hereditary multiple exostosis, also known as dyschondroplasia, hereditary deforming chondroplasia, or diaphyseal aclasis, is a relatively rare disease with autosomal dominant inheritance and variable penetrance. In the present study, concerning a clinical case that came to our observation, we report the peculiarity of the neurosurgical (cervicothoracic) localization of the disease and suggest, also in light of the literature data, the most appropriate therapeutic strategy. The severity of the natural history, the possibility of recurrence, and the poor radiosensitivity establish, in our opinion, the priority of the indication for surgical treatment that is as radical as possible.
2

Severoni, Cecilia. "La sicurezza dell'aviazione civile e i limiti alla libertà di circolazione: riflessioni a seguito della pandemia da COVID-19." RIVISTA ITALIANA DI DIRITTO DEL TURISMO, no. 30 (September 2020): 148–84. http://dx.doi.org/10.3280/dt2020-030011.

Abstract:
The current regulatory framework for civil aviation security was substantially modified after the attack on the Twin Towers and consists of a set of precise and detailed rules. It is nevertheless necessary to find a reasonable compromise between the stated objective of preventing, investigating, detecting and prosecuting acts of terrorism and other serious crimes, and that of protecting freedom of movement and personal data while respecting the private life of the persons concerned. From this perspective, the compression of the right to personal data protection must follow clear rules and must be strictly proportionate to the objective pursued. Similar reflections can be repeated today with regard to a more recent application of the principle contained in Article 16, first paragraph, of the Italian Constitution concerning freedom of movement. The interpreter cannot fail to notice the analogy between the restrictions on freedom of movement and on the right to move freely arising from civil aviation security legislation and the more recent events linked to the Coronavirus pandemic, which imposed a limitation on the free movement of persons: in both cases it is a compromise dictated by "reasons of health and security", circumstances grouped together by the constitutional legislator among the cases for which a statutory limitation on the full exercise of the aforementioned rights may be provided.
3

Cardinale, Antonio, and Francesco Paolo Calciano. "Insufficienza venosa cronica: epidemiologia, fisiopatologia e diagnosi." Cardiologia Ambulatoriale 29, no. 1 (May 30, 2021): 41–53. http://dx.doi.org/10.17473/1971-6818-2021-1-6.

Abstract:
Medicine should be a team effort, much proclaimed today but seldom put into practice. Moving beyond individualism is valid in every discipline, including angiology. The management of chronic venous insufficiency (CVI) involves several specialists: the general practitioner, the angiologist, the cardiologist, the nutritionist, the gynecologist and the vascular surgeon, to name a few. The prevalence of CVI is high: one Italian in two is affected by the disease. The average age of onset is generally over 50, and women are the most affected. The clinical signs correlate with the pathophysiological alterations. The clinical picture of CVI is characterized by symptoms and signs linked to venous hypertension, with structural or functional alterations of the veins. The key points of CVI pathophysiology are reviewed: passive venous hypertension, stasis, increased permeability, endothelial dysfunction and inflammatory activation. The diagnosis of deep vein thrombosis (DVT) is underestimated. The diagnostic gold standard is color Doppler ultrasound with simplified compression ultrasonography (CUS). In the era of digitalization the need has arisen for a database shared by all, with uniform data, and for some years the MEVec (hemodynamic venous map) has been developed for this purpose. Finally, the importance of starting medical therapy promptly, following the guidelines issued by the scientific societies, is emphasized in order to reduce associated morbidity and mortality and to counter the incidence of long-term sequelae.
4

Shevchuk, Yury Vladimirovich. "Memory-efficient sensor data compression." Program Systems: Theory and Applications 13, no. 2 (April 4, 2022): 35–63. http://dx.doi.org/10.25209/2079-3316-2022-13-2-35-63.

Abstract:
We treat scalar data compression in sensor network nodes in streaming mode (compressing data points as they arrive, with no pre-compression buffering). Several experimental algorithms based on linear predictive coding (LPC) combined with run-length encoding (RLE) are considered. In the entropy coding stage we evaluated (a) variable-length coding with dynamic prefixes generated with the MTF transform, (b) adaptive-width binary coding, and (c) adaptive Golomb-Rice coding. We provide a comparison of known and experimental compression algorithms on 75 sensor data sources. Compression ratios achieved in the tests are about 1.5/4/1000000 (min/med/max), with a compression context size of about 10 bytes.
5

P, Srividya. "Optimization of Lossless Compression Algorithms using Multithreading." Journal of Information Technology and Sciences 9, no. 1 (March 2, 2023): 36–42. http://dx.doi.org/10.46610/joits.2022.v09i01.005.

Abstract:
The process of reducing the number of bits required to represent data is referred to as compression. The advantages of compression include a reduction in the time taken to transfer data from one point to another, and a reduction in the cost of storage space and network bandwidth. There are two types of compression algorithms, namely lossy and lossless. Lossy algorithms find utility in compressing audio and video signals, whereas lossless algorithms are used in compressing text messages. The advent of the internet and its worldwide usage has raised not only the use but also the storage of text, audio and video files. These multimedia files demand more storage space than traditional files, which has given rise to the requirement for an efficient compression algorithm. There is a considerable improvement in the computing performance of machines due to the advent of the multi-core processor; however, this multi-core architecture is not used by compression algorithms. This paper shows the implementation of the lossless compression algorithms Lempel-Ziv-Markov, BZip2 and ZLIB using the concept of multithreading. The results show that the ZLIB algorithm is the most efficient in terms of the time taken to compress and decompress text. The comparison is made both for compression without multithreading and for compression with multithreading.
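The comparison the abstract describes can be approximated with Python's built-in codecs (zlib for ZLIB, bz2 for BZip2, lzma for the Lempel-Ziv-Markov algorithm), which release the GIL during compression and therefore parallelize across threads. The chunk size, worker count and stand-in corpus below are assumptions, not the paper's setup; compressing chunks independently also costs a little ratio versus a single stream.

```python
import bz2, lzma, time, zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunks(data, codec, chunk_size=1 << 20, workers=4):
    """Split data into chunks and compress them in parallel threads."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(codec, chunks))    # C codecs release the GIL

data = open(__file__, "rb").read() * 200        # arbitrary stand-in corpus
for name, codec in [("zlib", zlib.compress), ("bz2", bz2.compress),
                    ("lzma", lzma.compress)]:
    t0 = time.perf_counter()
    parts = compress_chunks(data, codec)
    dt = time.perf_counter() - t0
    print(f"{name}: ratio={len(data) / sum(map(len, parts)):.2f}, {dt:.2f}s")
```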
6

Xiao, Ling, Renfa Li, Juan Luo, and Zhu Xiao. "Energy-efficient recognition of human activity in body sensor networks via compressed classification." International Journal of Distributed Sensor Networks 12, no. 12 (December 2016): 155014771667966. http://dx.doi.org/10.1177/1550147716679668.

Abstract:
Energy efficiency is an important challenge for the broad deployment of wireless body sensor networks for long-term physical movement monitoring. Inspired by theories of sparse representation and compressed sensing, the power-aware compressive classification approach SRC-DRP (sparse representation-based classification with distributed random projection) for activity recognition is proposed, which integrates data compression and classification. Random projection, as a data compression tool, is implemented individually on each sensor node to reduce the amount of data for transmission. Compressive classification can be applied directly to the compressed samples received from all nodes. The method was validated on the Wearable Action Recognition Dataset and implemented on embedded nodes for offline and online experiments. It is shown that our method reduces energy consumption by approximately 20% while maintaining an activity recognition accuracy of 88% at a compression ratio of 0.5.
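A minimal sketch of the per-node random-projection step the abstract describes, assuming a Gaussian measurement matrix Phi shared across nodes via a common seed and the paper's compression ratio of 0.5; the sparse-representation classifier itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)                  # shared seed -> same Phi on all nodes
n, ratio = 128, 0.5
m = int(n * ratio)
phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random projection matrix, shape (m, n)

window = rng.standard_normal(n)                 # one window of sensor samples
compressed = phi @ window                       # transmit m values instead of n
print(window.shape, "->", compressed.shape)
```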
7

Ko, Yousun, Alex Chadwick, Daniel Bates, and Robert Mullins. "Lane Compression." ACM Transactions on Embedded Computing Systems 20, no. 2 (March 2021): 1–26. http://dx.doi.org/10.1145/3431815.

Abstract:
This article presents Lane Compression, a lightweight lossless compression technique for machine learning that is based on a detailed study of the statistical properties of machine learning data. The proposed technique profiles machine learning data gathered ahead of run-time and partitions values bit-wise into different lanes with more distinctive statistical characteristics. The most appropriate compression technique is then chosen for each lane out of a small number of low-cost compression techniques. Lane Compression's compute and memory requirements are very low, and yet it achieves a compression rate comparable to or better than Huffman coding. We evaluate and analyse Lane Compression on a wide range of machine learning networks for both inference and re-training. We also demonstrate that profiling prior to run-time, together with the ability to configure the hardware based on the profiling, guarantees robust performance across different models and datasets. Hardware implementations are described, and the scheme's simplicity makes it suitable for compressing both on-chip and off-chip traffic.
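A toy illustration of the bit-wise lane idea, assuming two 4-bit lanes per byte and per-lane entropy as the statistic that would drive coder selection; the paper's profiled lane widths and its actual menu of low-cost coders are not modeled here.

```python
import math
from collections import Counter

def entropy(symbols):
    """Empirical Shannon entropy in bits per symbol."""
    counts, n = Counter(symbols), len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

values = [0x12, 0x15, 0x11, 0x7F, 0x13, 0x10, 0x16, 0x14]   # invented sample data
hi_lane = [v >> 4 for v in values]     # upper 4 bits: near-constant, cheap to code
lo_lane = [v & 0xF for v in values]    # lower 4 bits: noisier, needs a stronger coder
print(f"hi lane entropy: {entropy(hi_lane):.2f} bits/symbol")
print(f"lo lane entropy: {entropy(lo_lane):.2f} bits/symbol")
```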
8

Saidhbi, Sheik. "An Intelligent Multimedia Data Encryption and Compression and Secure Data Transmission of Public Cloud." Asian Journal of Engineering and Applied Technology 8, no. 2 (May 5, 2019): 37–40. http://dx.doi.org/10.51983/ajeat-2019.8.2.1141.

Abstract:
Data compression is a method of reducing the size of a data file so that the file takes less disk space for storage. Compression of a file depends on the encoding of the file. With a lossless data compression algorithm there is no data loss while compressing a file, so confidential data can be reproduced if it is compressed using lossless compression. Compression reduces redundancy, and if a compressed file is encrypted it has better security and a faster transfer rate across the network than an uncompressed file that is encrypted and transferred. Most computer applications related to health are not secure, and these applications exchange a lot of confidential health data in different file formats, such as HL7, DICOM images and other audio, image, textual and video formats. These types of confidential data need to be transmitted securely and stored efficiently. This paper therefore proposes a learning compression-encryption model for identifying the files that should be compressed before encrypting and the files that should be encrypted without compressing them.
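The compress-then-encrypt ordering argued for here is easy to demonstrate. The sketch below uses zlib plus Fernet from the third-party `cryptography` package as a stand-in cipher, and the sample record is invented; the order matters because ciphertext looks random and will not compress afterwards.

```python
import zlib
from cryptography.fernet import Fernet

record = b'{"patient": "...", "hl7": "..."}' * 100   # invented stand-in health record

key = Fernet.generate_key()
f = Fernet(key)

compressed = zlib.compress(record)      # shrink first, while redundancy still exists
token = f.encrypt(compressed)           # then encrypt the compressed payload

restored = zlib.decompress(f.decrypt(token))
assert restored == record
print(len(record), "->", len(compressed), "compressed,", len(token), "encrypted")
```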
9

Andreula, C., and I. Kambas. "Il dolore lombosacrale da ernie discali lombosacrali e patologia degenerativa correlata." Rivista di Neuroradiologia 15, no. 4 (August 2002): 421–30. http://dx.doi.org/10.1177/197140090201500411.

Abstract:
The pathogenesis of lumbosacral pain is still debated. It may be sustained not only by direct mechanical factors of disc compression (protrusion or herniation) on the nerve, with consequent alteration of the myelin sheath, but also by indirect mechanical factors generated by venous stasis and consequent ischemia of the roots, which are particularly sensitive to hypoxia, and by disc-related immune-mediated and biohumoral inflammatory factors. The management of patients with lumbosciatic pain referred to the surgeon after failure of medical, conservative and physiatric therapy has shown that, in the most balanced surgical series, the short-term success rate of operations for lumbosacral disc herniation is around 95-98%, with an incidence of true recurrent herniation of 2-6%; the long-term success rate drops to 80-85% owing to the onset of symptoms linked to surgical failure (Failed Back Surgery Syndrome, FBSS), characterized by recurrences and/or hypertrophic scarring, with significant symptoms in 20% and true FBSS in 15%. These data have prompted the continuous search for new microsurgical techniques to reduce such undesirable outcomes, and at the same time percutaneous interventional treatments have been developed (chemodiscolysis with chymopapain or with oxygen-ozone, nucleoaspiration according to the Onik technique, and so on) to minimize surgical "invasiveness" on the one hand and, on the other, the not infrequent infectious complications associated with surgery. All percutaneous techniques are minimally invasive medical procedures with short hospitalization times. Their approach outside the spinal canal eliminates the risks connected with post-operative scarring, which is often responsible for recurrence of painful symptoms. They also have the advantage of being repeatable in the same patient without precluding, in case of failure, recourse to traditional surgery. The success rates reported in numerous series are around 65-75% excellent or good results. These interventional spinal procedures are thought to act on the mechanical genesis of the pain by quantitatively reducing the nuclear material, but they exert no action on the inflammatory component of radicular and/or ganglionic origin, which is sometimes an independent cause of pain. Therefore, during chemodiscolysis treatment with an oxygen-ozone mixture, periradicular and periganglionic infiltration with oxygen-ozone, steroids and anesthetics was added. The authors report their personal experience with chemodiscolysis by oxygen-ozone nucleolysis with periradicular and periganglionic infiltration in lumbosacral disc herniation and related degenerative disease.
10

Budiman, Gelar, Andriyan Bayu Suksmono, and Donny Danudirdjo. "Compressive Sampling with Multiple Bit Spread Spectrum-Based Data Hiding." Applied Sciences 10, no. 12 (June 24, 2020): 4338. http://dx.doi.org/10.3390/app10124338.

Abstract:
We propose a novel data hiding method in an audio host with a compressive sampling technique. An over-complete dictionary represents a group of watermarks; each row of the dictionary is a Hadamard sequence representing multiple bits of the watermark. The singular values of the segment-based host audio, in a diagonal matrix, are then multiplied by the over-complete dictionary, producing a matrix of lower size. At the same time, we embed the watermark into the compressed audio. In the detector, we detect the watermark and reconstruct the audio. The proposed method offers not only hiding of the information but also compression of the audio host. Its applications include broadcast monitoring and biomedical signal recording: we can mark and secure the signal content by hiding the watermark inside the signal while compressing the signal for memory efficiency. We evaluate the performance in terms of payload, compression ratio, audio quality, and watermark quality. The proposed method can hide the data imperceptibly, in the range of 729–5292 bps, with a compression ratio of 1.47–4.84 and a perfectly detected watermark.
11

Öztekin, Ertekin. "ANN based investigations of reliabilities of the models for concrete under triaxial compression." Engineering Computations 33, no. 7 (October 3, 2016): 2019–44. http://dx.doi.org/10.1108/ec-03-2015-0065.

Abstract:
Purpose – Many triaxial compressive models for different concrete types and concrete strength classes have been proposed for use in structural analyses. The existence of so many models creates conflicts and confusion during model selection. In this study, reliability analyses were carried out to prevent such conflicts and confusion and to determine the most reliable model for normal- and high-strength concrete (NSC and HSC) under combined triaxial compression. Design/methodology/approach – An analytical model was proposed to estimate the strength of NSC and HSC under different triaxial loadings. After verifying the validity of the model through comparisons with the models in the literature, the reliabilities of all models were investigated. The Monte Carlo simulation method was used in the reliability studies; the artificial experimental data it requires were generated using artificial neural networks. Findings – The validity of the proposed model was verified. Reliability indexes of triaxial compressive models were obtained for the limit states, different concrete strengths and different lateral compressions, and were tabulated so that the best model for NSC and HSC under different triaxial compressions can be chosen. Research limitations/implications – Concrete compressive strength and lateral compression were taken as variables in the model. Practical implications – The tabulated reliability indexes allow the best model for NSC and HSC under different triaxial compressions to be chosen. Originality/value – A new analytical model was proposed to estimate the strength of NSC and HSC under different triaxial loadings, and reliability indexes of triaxial compressive models were obtained for the limit states, different concrete strengths and different lateral compressions. Four different artificial neural networks were developed to generate the artificial experimental data; they can also be used to estimate the strength of NSC and HSC under different triaxial loadings.
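For readers unfamiliar with the reliability-index machinery the abstract relies on, here is a generic Monte Carlo sketch: sample a resistance R and a demand S, estimate the failure probability Pf = P(R < S), and convert it to a reliability index via the standard normal quantile. The distributions below are invented for illustration and are unrelated to the paper's concrete models.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 1_000_000
R = rng.normal(60.0, 6.0, n)    # assumed model-predicted strength (MPa)
S = rng.normal(40.0, 8.0, n)    # assumed applied stress (MPa)

pf = np.mean(R < S)             # Monte Carlo failure probability
beta = -norm.ppf(pf)            # reliability index beta = -Phi^{-1}(Pf)
print(f"Pf ~ {pf:.2e}, reliability index beta ~ {beta:.2f}")
```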
12

Borysenko, Oleksiy, Svitlana Matsenko, Toms Salgals, Sandis Spolitis, and Vjaceslavs Bobrovs. "The Lossless Adaptive Binomial Data Compression Method." Applied Sciences 12, no. 19 (September 26, 2022): 9676. http://dx.doi.org/10.3390/app12199676.

Abstract:
In this paper, we propose a new method for the binomial adaptive compression of binary sequences of finite length without loss of information. The advantage of the proposed binomial adaptive compression method over the binomial compression method previously developed by the authors is an increase in compression speed. This speed is accompanied by a new quality: noise immunity of the compression. The novelty of the proposed method, which makes it possible to achieve these positive results, lies in adapting the compression of sequences to the required time, carried out by dividing the initial set of binary sequences into compressible and incompressible sequences. The method is based on a theorem proved by the authors on the decomposition of a stationary Bernoulli source of information into a combinatorial source and a probabilistic source. The latter is the source of the number of ones; it has an entropy close to zero and has practically no effect on the compression ratio at considerable lengths of binary sequences. Therefore, for the proposed compression method, the combinatorial source generating equiprobable sequences is paramount, since it does not require statistical data and is implemented by numerical coding methods. As one such method, we choose a technique that uses binomial numbers based on the developed binomial number system. The corresponding compression procedure consists of three steps: first, the compressible sequence is transformed into an equilibrium combination; second, this is transformed into a binomial number; and third, the binomial number is transformed into a binary number. Restoration of the compressed sequence occurs in reverse order. In terms of degree of compression and universality, the method is similar to statistical compression methods. The proposed method is convenient for hardware implementation using noise-immune binomial circuits. It also offers a potential opportunity to build effective systems for protecting information from unauthorized access.
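The underlying transformation, replacing a length-n binary sequence with k ones by its index among all C(n, k) such sequences, is generic enumerative coding, which needs only about log2(C(n, k)) bits plus k itself. The sketch below (with `rank`/`unrank` helpers named here for illustration) conveys that idea without the authors' binomial number system or its hardware circuits.

```python
from math import comb

def rank(bits):
    """Rank of a fixed-weight binary sequence in lexicographic order."""
    r, k, n = 0, sum(bits), len(bits)
    for i, b in enumerate(bits):
        if b:
            r += comb(n - i - 1, k)   # count the sequences with a 0 here instead
            k -= 1
    return r

def unrank(r, n, k):
    """Inverse of rank: rebuild the sequence from (rank, n, k)."""
    bits = []
    for i in range(n):
        c = comb(n - i - 1, k)
        if k and r >= c:
            bits.append(1); r -= c; k -= 1
        else:
            bits.append(0)
    return bits

seq = [0, 1, 1, 0, 1, 0, 0, 1]
r = rank(seq)
assert unrank(r, len(seq), sum(seq)) == seq
print(f"n=8, k=4: rank {r} of {comb(8, 4)} possible sequences")
```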
13

Yang, Le, Zhao Yang Guo, Shan Shan Yong, Feng Guo, and Xin An Wang. "A Hardware Implementation of Real Time Lossless Data Compression and Decompression Circuits." Applied Mechanics and Materials 719-720 (January 2015): 554–60. http://dx.doi.org/10.4028/www.scientific.net/amm.719-720.554.

Abstract:
This paper presents a hardware implementation of real-time data compression and decompression circuits based on the LZW algorithm. LZW is a dictionary-based data compression algorithm with the advantages of fast speed, high compression, and small resource occupation. In the compression circuit, the design creatively utilizes two dictionaries alternately to improve efficiency and compression rate. In the decompression circuit, an integrated state-machine control module is adopted to save hardware resources. Through hardware description language programming, the circuits pass functional simulation and timing simulation. The width of a data sample is 12 bits, and the dictionary storage capacity is 1K. The simulation results show that the compression and decompression circuits are fully functional. Compared to a software method, the hardware implementation saves storage and compression time, and it has high practical value.
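For reference alongside the hardware design, here is the textbook LZW algorithm in Python; the paper's 12-bit samples, alternating dual dictionaries and 1K dictionary capacity are not modeled.

```python
def lzw_compress(data: bytes):
    table = {bytes([i]): i for i in range(256)}     # seed with single bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                                  # extend the current phrase
        else:
            out.append(table[w])
            table[wc] = len(table)                  # grow the dictionary
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes):
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for c in codes[1:]:
        entry = table[c] if c in table else w + w[:1]   # the "cScSc" corner case
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)

msg = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(msg)
assert lzw_decompress(codes) == msg
print(len(msg), "bytes ->", len(codes), "codes")
```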
14

Dal Pozzo, G., I. Fusi, M. Santoni, F. Dal Pozzo, G. Fabris, and M. Leonardi. "Patologia degenerativa disco-vertebrale ed ernia discale." Rivista di Neuroradiologia 8, no. 2 (April 1995): 259–308. http://dx.doi.org/10.1177/197140099500800218.

Abstract:
The processes of disc aging and degeneration are characterized by progressive dehydration of the nucleus pulposus and annulus fibrosus and by their fibrous transformation. These alterations represent the most important preliminary stage in the pathogenesis of disc herniation. Disc degeneration is often associated with alterations of the adjacent vertebral bodies, characterized by structural changes within the bone marrow of the vertebral spongiosa, sclerosis of the endplates, osteophytosis and Schmorl's nodes. Even today it is held that, in the presence of myeloradicular symptoms, a conventional radiographic examination of the spine cannot be dispensed with, despite its low sensitivity for degenerative disc disease and in particular for herniation. The examination rapidly provides a panoramic assessment of the spine, evaluating the alignment of the vertebral metameres and revealing any vertebral alterations of malformative, degenerative, inflammatory or neoplastic nature. Sacculoradiculography and myelography allow accurate diagnosis of disc herniation, showing the classic signs of extradural compression and making it possible to assess the effects of loading and posture on myeloradicular compressions. Discography, an invasive technique not without risks like the preceding ones, reveals both early and advanced degenerative alterations of the disc as well as the leakage of nuclear material (herniation). At present it is indicated only as a preparatory step for percutaneous treatments of lumbar disc herniation (nucleoaspiration and nucleolysis). A fundamental and innovative contribution to the progress of knowledge on degenerative spinal disease has been offered by the new diagnostic technologies, in particular computed tomography and magnetic resonance imaging, which have non-invasively provided more precise data on intervertebral disc aging and degeneration, on disc herniation and on the associated osteovertebral alterations. CT allows precise definition of the more advanced disc and bone alterations, while it cannot detect early degenerative phenomena. It also makes it possible to identify the disc herniation directly and to assess its exact topography, size, extension and structural characteristics, and to estimate the degree of occupation of the spinal canal. CT is much more reliable at the lumbosacral level than in the cervical and thoracic segments, owing to particularly favorable anatomical conditions. MRI, given its complete non-invasiveness, its high contrast resolution and the possibility of direct multiplanar study (sagittal views!), undoubtedly represents a major innovation in the imaging of degenerative disco-vertebral disease. MRI is particularly sensitive to the degenerative phenomena of the intervertebral disc, revealing both morphological and structural alterations (bulging, loss of height, dehydration, vacuum phenomenon, calcifications of the nucleus pulposus). Spin-echo sequences are more useful in assessing disc dehydration, gradient-echo sequences in detecting calcifications and the vacuum phenomenon.
15

Zhao, Huihuang, Yaonan Wang, Zhijun Qiao, and Bin Fu. "Solder joint imagery compressing and recovery based on compressive sensing." Soldering & Surface Mount Technology 26, no. 3 (May 27, 2014): 129–38. http://dx.doi.org/10.1108/ssmt-09-2013-0024.

Abstract:
Purpose – The purpose of this paper is to develop an improved compressive sensing algorithm for solder joint imagery compressing and recovery. The improved algorithm can improve the performance in terms of peak signal to noise ratio (PSNR) of solder joint imagery recovery. Design/methodology/approach – Unlike the traditional method, at first, the image was transformed into a sparse signal by discrete cosine transform; then the solder joint image was divided into blocks, and each image block was transformed into a one-dimensional data vector. At last, a block compressive sampling matching pursuit was proposed, and the proposed algorithm with different block sizes was used in recovering the solder joint imagery. Findings – The experiments showed that the proposed algorithm could achieve the best results on PSNR when compared to other methods such as the orthogonal matching pursuit algorithm, greedy basis pursuit algorithm, subspace pursuit algorithm and compressive sampling matching pursuit algorithm. When the block size was 16 × 16, the proposed algorithm could obtain better results than when the block size was 8 × 8 and 4 × 4. Practical implications – The paper provides a methodology for solder joint imagery compressing and recovery, and the proposed algorithm can also be used in other image compressing and recovery applications. Originality/value – According to the compressed sensing (CS) theory, a sparse or compressible signal can be represented by a fewer number of bases than those required by the Nyquist theorem. The findings provide fundamental guidelines to improve performance in image compressing and recovery based on compressive sensing.
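PSNR, the quality measure used throughout this paper, is defined as 10·log10(MAX²/MSE). A minimal implementation for 8-bit images follows; the test data are invented.

```python
import numpy as np

def psnr(original, recovered, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    mse = np.mean((original.astype(np.float64) - recovered.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

img = np.random.default_rng(0).integers(0, 256, (64, 64))
noisy = np.clip(img + np.random.default_rng(1).normal(0, 5, img.shape), 0, 255)
print(f"PSNR = {psnr(img, noisy):.1f} dB")
```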
16

Mishra, Ishani, and Sanjay Jain. "Soft computing based compressive sensing techniques in signal processing: A comprehensive review." Journal of Intelligent Systems 30, no. 1 (September 11, 2020): 312–26. http://dx.doi.org/10.1515/jisys-2019-0215.

Abstract:
In the modern world, a massive amount of data is processed and broadcast daily, at the cost of high energy use, heavy memory consumption, and increased power demands. In applications such as image processing, signal processing, and the acquisition of data signals, the signals involved can be viewed as sparse in some domain. Compressive sensing theory is an appropriate candidate to manage these limitations: it is extremely helpful when signals are sparse or compressible, and it can recover such signals from fewer measurements than conventional strategies require. Two issues must be addressed by CS: the design of the measurement framework and the development of an efficient sparse recovery algorithm. The essential aim of this work is to review several concepts and applications of compressive sensing and to give an overview of the most significant sparse recovery algorithms from each class. The performance of acquisition and reconstruction strategies is examined with respect to compression ratio, reconstruction accuracy, mean square error, and so on.
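One of the greedy sparse recovery algorithms such a review covers is orthogonal matching pursuit (OMP). A minimal sketch, assuming a Gaussian sensing matrix and a known sparsity level k:

```python
import numpy as np

def omp(A, y, k):
    """Greedy sparse recovery: pick the most correlated column, re-fit, repeat."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
m, n, k = 40, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(A, A @ x_true, k)           # recover from m = 40 measurements
print("max error:", np.max(np.abs(x_hat - x_true)))
```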
17

Darmawan, Rendi Editya, Untung Sujianto, and Nana Rochana. "Implementation of Chest Compression for Cardiac Arrest Patient in Indonesia: True or False." Jurnal Ners 16, no. 1 (January 19, 2021): 13. http://dx.doi.org/10.20473/jn.v16i1.17508.

Abstract:
Introduction: Cardiac arrest is the leading cause of death, and proper manual chest compression increases survival from cardiac arrest. The aim of this study was to assess the implementation of chest compressions for cardiac arrest patients in Indonesia. Methods: This study used a descriptive quantitative design. The samples were nurses and the code blue team performing manual chest compression on 74 patients experiencing cardiac arrest; the sample had a body mass index (BMI) > 20. The research was conducted in two hospitals in Java, Indonesia. Implementation of chest compression was measured by depth accuracy, which was assessed by comparing the number of R waves with a height > 10 mV on the bedside monitor against the number of chest compressions performed. The data were analyzed descriptively (mean, median, mode, standard deviation, and variance). Results: The mean accuracy of compression depth was 75.97%. This is below the American Heart Association (AHA) recommendation of 80%, because chest compression rates were not standardized: observed rates were between 100-160 compressions/minute, while the AHA recommendation is 100-120 compressions/minute. High compression speed causes a decrease in the accuracy of chest compression depth. Conclusion: The implementation of chest compressions in Indonesia, measured by accuracy of compression depth, is not effective. Nurses and the code blue team should practice and consider the use of cardiac resuscitation aids.
18

Tsutsumi, Satoshi, Hideo Ono, and Yukimasa Yasumoto. "Vascular Compression of the Anterior Optic Pathway: A Rare Occurrence?" Canadian Association of Radiologists Journal 68, no. 4 (November 2017): 409–13. http://dx.doi.org/10.1016/j.carj.2017.02.001.

Abstract:
Background Vascular compression of the anterior optic pathway has been documented as an infrequent cause of visual impairments. Here we characterize such vascular compression using magnetic resonance imaging. Methods A total of 183 patients without pathologies affecting the optic pathways underwent T2-weighted or constructive interference steady-state sequence magnetic resonance imaging. Imaging data from coronal sections were analyzed. Results A vascular compression of the anterior optic pathway was identified in 20 patients (11%). They comprised 13 men and 7 women with a mean age of 60.8 years. The vascular compressions were observed at 22 sites, 15 on the optic nerve (ON) and 7 on the optic chiasm (OC). Twelve of them were on the right and 10 were on the left side. The offending vessels were the supraclinoid portion of the internal carotid artery in 86.4% and the A1 segment of the anterior cerebral artery in 13.6%. Compression sites at the ON and OC were variable, with the inferolateral surface being the most frequent (77.3% occurrences). In 2 patients (9.1%), the ON was compressed in a sandwich manner. Conclusions Vascular compression of the ON and OC may not be an infrequent occurrence in the cranial cavity. Progressive and unexplainable visual impairment might possibly be caused by vascular-compressive neuropathy.
19

Zhou, Xichuan, Lang Xu, Shujun Liu, Yingcheng Lin, Lei Zhang, and Cheng Zhuo. "An Efficient Compressive Convolutional Network for Unified Object Detection and Image Compression." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5949–56. http://dx.doi.org/10.1609/aaai.v33i01.33015949.

Abstract:
This paper addresses the challenge of designing an efficient framework for real-time object detection and image compression. The proposed Compressive Convolutional Network (CCN) is essentially a compressive-sensing-enabled convolutional neural network. Instead of designing different components for compressive sensing and object detection, the CCN optimizes and reuses the convolution operation for recoverable data embedding and image compression. Technically, the incoherence condition, which is the sufficient condition for recoverable data embedding, is incorporated into the first convolutional layer of the CCN model as regularization; the CCN convolution kernels learned by training over the VOC and COCO image sets can therefore be used for data embedding and image compression. By reusing the convolution operation, no extra computational overhead is required for image compression. As a result, the CCN is 3.1 to 5.0 times more efficient than conventional approaches. In our experiments, the CCN achieved 78.1 mAP for object detection and 3.0 dB to 5.2 dB higher PSNR for image compression than the examined compressive sensing approaches.
20

Kaur, Harjit. "Image Compression Techniques with LZW method." International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (January 31, 2022): 1773–77. http://dx.doi.org/10.22214/ijraset.2022.39999.

Abstract:
Image compression is a technique used to reduce the size of data. In other words, it removes extra data from what is available by applying techniques that make the data easier to store and to transmit over a transmission medium. Compression techniques are broadly divided into two categories: lossy compression, in which some of the data is lost during compression, and lossless compression, in which no data is lost after compression. These techniques can be applied to different image formats. This review paper compares the different compression techniques. Keywords: lossy, lossless, image formats, compression techniques.
21

Sustika, Rika, and Bambang Sugiarto. "Compressive Sensing Algorithm for Data Compression on Weather Monitoring System." TELKOMNIKA (Telecommunication Computing Electronics and Control) 14, no. 3 (September 1, 2016): 974. http://dx.doi.org/10.12928/telkomnika.v14i3.3021.

22

Ochoa, Idoia, Mikel Hernaez, and Tsachy Weissman. "Aligned genomic data compression via improved modeling." Journal of Bioinformatics and Computational Biology 12, no. 06 (December 2014): 1442002. http://dx.doi.org/10.1142/s0219720014420025.

Abstract:
With the release of the latest Next-Generation Sequencing (NGS) machine, the HiSeq X by Illumina, the cost of sequencing the whole genome of a human is expected to drop to a mere $1000. This milestone in sequencing history marks the era of affordable sequencing of individuals and opens the doors to personalized medicine. Accordingly, unprecedented volumes of genomic data will require storage for processing. There will be dire need not only of compressing aligned data, but also of generating compressed files that can be fed directly to downstream applications to facilitate the analysis of and inference on the data. Several approaches to this challenge have been proposed in the literature; however, focus thus far has been on the low-coverage regime, and most of the suggested compressors are not based on effective modeling of the data. We demonstrate the benefit of data modeling for compressing aligned reads. Specifically, we show that, by working with data models designed for the aligned data, we can improve considerably over the best compression ratio achieved by previously proposed algorithms. Our results indicate that the Pareto-optimal barrier for compression rate and speed claimed by Bonfield and Mahoney (2013) [Bonfield JK and Mahoney MV, Compression of FASTQ and SAM format sequencing data, PLOS ONE, 8(3):e59190, 2013] does not apply for high-coverage aligned data. Furthermore, our improved compression ratio is achieved by splitting the data in a manner conducive to operations in the compressed domain by downstream applications.
23

Shivanna, Gunasheela Keragodu, and Haranahalli Shreenivasamurthy Prasantha. "Two-dimensional satellite image compression using compressive sensing." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 1 (February 1, 2022): 311. http://dx.doi.org/10.11591/ijece.v12i1.pp311-319.

Abstract:
Compressive sensing is receiving a lot of attention from the image processing research community as a promising technique for image recovery from very few samples. The compressive sensing technique is very useful in applications where it is not feasible to acquire many samples. It is also prominently useful in satellite imaging, since it drastically reduces the number of input samples, thereby reducing the storage and communication bandwidth required to store and transmit the data to the ground station. In this paper, an interior-point-based method is used to recover the entire satellite image from compressive sensing samples. The compression results obtained are compared with results from conventional satellite image compression algorithms. The results demonstrate both the increase in reconstruction accuracy and the higher compression rate of the compressive sensing-based technique.
24

Pompa, N., D. O'Dochartaigh, M. J. Douma, P. Jaggi, S. Ryan, and M. MacKenzie. "P116: A randomized cross-over trial of conventional bimanual versus single elbow (Koch) chest compression quality in a height-restricted aeromedical helicopter." CJEM 20, S1 (May 2018): S98. http://dx.doi.org/10.1017/cem.2018.314.

Abstract:
Introduction: Aeromedical helicopters and fixed-wing aircraft are used across Canada to transfer patients to definitive care. Given the height limitation in aeromedical transport, CPR performance can be affected. An adapted manual compression technique has been proposed by H. Koch (pron. Cook) that uses the elbow to compress the sternum rather than the conventional hands. This preliminary study evaluated the quality of Koch compressions versus conventional bimanual compressions. Methods: Paramedics (5), registered nurses (3) and a physician (1) were recruited. Each participant performed a 2-minute cycle of each technique, was randomized to determine which technique was performed first, and rested 5 minutes between compression cycles. A Resusci Anne SkillReporter manikin atop a stretcher in a BK117 helicopter was used. The compressors performed without feedback or prompting. Outcomes included compression rate, depth, recoil, and fatigue. Results: The mean conventional compression rate was 118 +/- 13 bpm versus 111 +/- 10 in the Koch scenario (p=0.02) (target 100 to 120). Mean conventional compression depth was 44 +/- 9 mm versus 49 +/- 7 in the Koch scenario (p=0.01) (target 50 to 60). The mean percentage of compressions with complete release was 86 +/- 20 in the conventional scenario versus 84 +/- 22 in the Koch scenario (p=0.9) (target 100%). Using a Modified Borg Scale of 1 to 10, mean provider fatigue after conventional CPR was 7 (+/- 1.6) versus 3 (+/- 1.2) using the Koch technique (p<0.001). On average, the Koch technique improved the percentage of compressions at the target rate by 26%, the percentage at the correct depth by 9%, and the overall compression quality score by 13%, and was less fatiguing. Conclusion: Using an elbow in a height-restricted environment improved compression depth and reduced provider fatigue. From our limited data, Koch compressions appear to improve compression quality. Further study and external validation are required.
25

Klein, Shmuel T., and Dana Shapira. "On the Randomness of Compressed Data." Information 11, no. 4 (April 7, 2020): 196. http://dx.doi.org/10.3390/info11040196.

Abstract:
It seems reasonable to expect from a good compression method that its output should not be further compressible, because it should behave essentially like random data. We investigate this premise for a variety of known lossless compression techniques, and find that, surprisingly, there is much variability in the randomness, depending on the chosen method. Arithmetic coding seems to produce perfectly random output, whereas that of Huffman or Ziv-Lempel coding still contains many dependencies. In particular, the output of Huffman coding has already been proven to be random under certain conditions, and we present evidence here that arithmetic coding may produce an output that is identical to that of Huffman.
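The paper's premise can be probed directly: compress, then compress the output again. A second-pass ratio near 1.0 means the first pass produced nearly random (incompressible) output; any remaining gain exposes residual structure. A quick sketch with Python's built-in codecs and an arbitrary stand-in corpus:

```python
import bz2, lzma, zlib

data = open(__file__, "rb").read() * 500    # arbitrary stand-in corpus
for name, codec in [("zlib", zlib.compress), ("bz2", bz2.compress),
                    ("lzma", lzma.compress)]:
    once = codec(data)
    twice = codec(once)                     # recompress the compressed output
    print(f"{name}: first pass {len(data) / len(once):.2f}x, "
          f"second pass {len(once) / len(twice):.3f}x")
```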
26

A. Al-Khayyat, Kamal, Imad F. Al-Shaikhli, and V. Vijayakuumar. "On Randomness of Compressed Data Using Non-parametric Randomness Tests." Bulletin of Electrical Engineering and Informatics 7, no. 1 (March 1, 2018): 63–69. http://dx.doi.org/10.11591/eei.v7i1.902.

Abstract:
Four randomness tests were used to test the outputs (compressed files) of four lossless compression algorithms: JPEG-LS and JPEG 2000, which are image-dedicated algorithms, and 7z and Bzip2, which are general-purpose algorithms. The relationship between the results of the randomness tests and the compression ratio was investigated. This paper reports the important relationship between the statistical information behind these tests and the compression ratio. It shows that this statistical information is almost the same, at least for the four lossless algorithms under test: 50% of the compressed data consists of groupings of runs, 50% of it has positive signs when comparing adjacent values, 66% of the files contain turning points, and under the Cox-Stuart test 25% of a file gives positive signs, which reflects the similarity of compressed data. Regarding the relationship between the compression ratio and this statistical information, the paper also shows that the greater these statistical values, the greater the compression ratio.
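One of the non-parametric checks named here, the turning-point test, is easy to reproduce: for i.i.d. random data, roughly two thirds of interior points are local extrema, matching the ~66% the paper reports for compressed files. A sketch on zlib output (the corpus is an arbitrary stand-in; byte ties make the observed fraction land slightly below 2/3):

```python
import zlib

def turning_point_fraction(xs):
    """Fraction of interior points that are strict local maxima or minima."""
    turns = sum(1 for a, b, c in zip(xs, xs[1:], xs[2:])
                if (b > a and b > c) or (b < a and b < c))
    return turns / (len(xs) - 2)

compressed = zlib.compress(open(__file__, "rb").read() * 200)
frac = turning_point_fraction(list(compressed))
print(f"turning points: {frac:.3f} (about 2/3 expected for random data)")
```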
27

Kandasamy, Jeyapal, Peter S. Theobald, Ian K. Maconochie, and Michael D. Jones. "Can real-time feedback improve the simulated infant cardiopulmonary resuscitation performance of basic life support and lay rescuers?" Archives of Disease in Childhood 104, no. 8 (June 4, 2019): 793–801. http://dx.doi.org/10.1136/archdischild-2018-316576.

Abstract:
Background: Performing high-quality chest compressions during cardiopulmonary resuscitation (CPR) requires achieving a target depth, release force, rate and duty cycle. Objective: This study evaluates whether 'real-time' feedback could improve infant CPR performance in basic life support-trained (BLS) and lay rescuers. It also investigates whether delivering rescue breaths hinders the performance of high-quality chest compressions, and reports raw data from the two methods used to calculate duty cycle performance. Methodology: BLS (n=28) and lay (n=38) rescuers were randomly allocated to respective 'feedback' or 'no-feedback' groups to perform two-thumb chest compressions on an instrumented infant manikin. Chest compression performance was investigated across three compression algorithms (compression only; five rescue breaths then compression only; five rescue breaths then 15:2 compressions). Two different routes to calculating duty cycle were also investigated, owing to conflicting instruction in the literature. Results: The no-feedback BLS and lay groups demonstrated <3% compliance with each performance target. Feedback produced 20-fold and 10-fold increases in the BLS and lay cohorts, respectively, achieving all targets concurrently in >60% and >25% of all chest compressions across all three algorithms. Performing rescue breaths did not impede chest compression quality. Conclusions: A feedback system has great potential to improve infant CPR performance, especially in cohorts with an underlying understanding of the technique. The addition of rescue breaths, a potential distraction, did not negatively influence chest compression quality. Duty cycle performance depended on the calculation method, meaning there is an urgent need to agree on a single measure.
28

Foks, Nathan Leon, Richard Krahenbuhl, and Yaoguo Li. "Adaptive sampling of potential-field data: A direct approach to compressive inversion." GEOPHYSICS 79, no. 1 (January 1, 2014): IM1–IM9. http://dx.doi.org/10.1190/geo2013-0087.1.

Abstract:
Compressive inversion uses computational algorithms that decrease the time and storage needs of a traditional inverse problem. Most compression approaches focus on the model domain, and very few, other than traditional downsampling, focus on the data domain for potential-field applications. To further the compression in the data domain, a direct and practical approach to the adaptive downsampling of potential-field data for large inversion problems has been developed. The approach is formulated to significantly reduce the quantity of data in relatively smooth or quiet regions of the data set, while preserving the signal anomalies that contain the relevant target information. Two major benefits arise from this form of compressive inversion. First, because the approach compresses the problem in the data domain, it can be applied immediately without the addition of, or modification to, existing inversion software. Second, as most industry software uses some form of model or sensitivity compression, the addition of this adaptive data sampling creates a complete compressive inversion methodology whereby the reduction of computational cost is achieved simultaneously in the model and data domains. We applied the method to a synthetic magnetic data set and two large field magnetic data sets; however, the method is also applicable to other data types. Our results showed that the relevant model information is maintained after inversion despite using 1%–5% of the data.
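A loose sketch of data-domain adaptive downsampling, assuming a moving-gradient threshold as the selection rule (the paper's actual criterion is not reproduced): samples are kept densely near anomalies and only at a sparse floor rate in quiet regions.

```python
import numpy as np

def adaptive_downsample(data, threshold, stride=10):
    """Return indices to keep: all high-gradient samples plus a sparse floor."""
    grad = np.abs(np.gradient(data))
    keep = grad > threshold          # dense sampling where the field varies
    keep[::stride] = True            # sparse floor sampling in quiet regions
    return np.flatnonzero(keep)

x = np.linspace(0, 100, 2000)
field = (5 * np.exp(-((x - 60) ** 2) / 4)          # synthetic anomaly
         + 0.01 * np.random.default_rng(0).standard_normal(x.size))
idx = adaptive_downsample(field, threshold=0.03)
print(f"kept {idx.size} of {field.size} samples ({100 * idx.size / field.size:.1f}%)")
```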
29

Kamineni, Srinath, Zubair Wani, Zong-Ping Luo, Yoshida Ruriko, and Kai-Nan An. "CHONDROCYTE RESPONSE TO TENSILE AND COMPRESSIVE CYCLIC LOADING MODALITIES." Journal of Musculoskeletal Research 15, no. 01 (March 2012): 1250006. http://dx.doi.org/10.1142/s0218957712500066.

Abstract:
There is very little data addressing cartilage response to tensile forces, and no literature attempts to correlate compressive with tensile modalities. Our hypothesis was that cyclic compression and tension modulate the chondrocyte matrix proteoglycan synthetic response differently. Porcine chondrocytes cultured to confluence on a flexible membrane were subjected to cyclic compression (Group A: 13 kPa at 1 Hz) or tension (Group C: 10% strain at 1 Hz) for 16 or 32 h, while controls not subjected to any force were kept (Group B). The chondrocytes were then stained with alcian blue and the stained areas quantified with confocal microscopy and image processing software. Two-factor ANOVA with post-hoc tests (Scheffe and Bonferroni) was used for statistical analysis. Proteoglycan staining covered 46% (range 28%–61%) and 39% (range 26%–49%) of the surface area following 32 and 16 h of compression respectively, 23% (range 15%–49%) for the control, and 19% (range 10%–29%) and 16% (range 9%–25%) following 16 and 32 h of tension respectively. Proteoglycan content following all compression regimes was significantly greater than with cyclic tension or control (p < 0.0001). Our data demonstrate that chondrocytes cultured in vitro respond to compression distinctly differently from tension and are highly sensitive to mechanical loading, with rapid adaptation to the mechanical environment. These results imply that cartilage grown in culture, with the intention of transplantation, may structurally benefit from an environment of cyclic loading at higher frequencies.
30

WANG, YANFEI, CHANGCHUN YANG, and JINGJIE CAO. "ON TIKHONOV REGULARIZATION AND COMPRESSIVE SENSING FOR SEISMIC SIGNAL PROCESSING." Mathematical Models and Methods in Applied Sciences 22, no. 02 (February 2012): 1150008. http://dx.doi.org/10.1142/s0218202511500084.

Abstract:
Using compressive sensing and sparse regularization, one can nearly completely reconstruct the input (sparse) signal using a limited number of observations. At the same time, reconstruction methods based on compressive sensing and optimization techniques overcome the sampling-rate requirement of the Shannon/Nyquist sampling theorem. It is well known that seismic reflection signals may be sparse, and the number of samples is sometimes insufficient for seismic surveys, so the seismic signal reconstruction problem is ill-posed. Considering the ill-posed nature and the sparsity of seismic inverse problems, we study reconstruction of the wavefield and the reflection seismic signal by Tikhonov regularization and compressive sensing. The l0, l1 and l2 regularization models are studied, and the relationship between Tikhonov regularization and compressive sensing is established. In particular, we introduce a general lp-lq (p, q ≥ 0) regularization model, which overcomes the limitation of assuming convexity of the objective function. Interior point methods and projected gradient methods are studied. To show the potential for application of the regularized compressive sensing method, we perform both synthetic seismic signal and field data compression and restoration simulations using a proposed piecewise random sub-sampling. Numerical performance indicates that regularized compressive sensing is applicable to practical seismic imaging.
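In the notation usually used for such problems (A the modeling/sensing operator, y the observed data, λ the regularization weight; these symbols are assumed here, not taken from the paper), the models the abstract names can be written as:

```latex
\[
\begin{aligned}
&\min_{x}\; \tfrac{1}{2}\lVert Ax - y \rVert_2^2 + \lambda \lVert x \rVert_2^2
  && \text{(Tikhonov, } \ell_2\text{)}\\
&\min_{x}\; \tfrac{1}{2}\lVert Ax - y \rVert_2^2 + \lambda \lVert x \rVert_1
  && \text{(}\ell_1\text{ sparse regularization)}\\
&\min_{x}\; \lVert Ax - y \rVert_p^p + \lambda \lVert x \rVert_q^q
  && \text{(general } \ell_p\text{-}\ell_q,\; p, q \ge 0\text{)}
\end{aligned}
\]
```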
31

Hayati, Anis Kamilah, and Haris Suka Dyatmika. "THE EFFECT OF JPEG2000 COMPRESSION ON REMOTE SENSING DATA OF DIFFERENT SPATIAL RESOLUTIONS." International Journal of Remote Sensing and Earth Sciences (IJReSES) 14, no. 2 (January 8, 2018): 111. http://dx.doi.org/10.30536/j.ijreses.2017.v14.a2724.

Abstract:
The huge size of remote sensing data places demands on the information technology infrastructure needed to store, manage, deliver and process the data. To compensate for these disadvantages, compression is a possible solution. JPEG2000 provides lossless and lossy compression, with scalability for the lossy mode. As the lossy compression ratio gets higher, the file size is reduced but the information loss increases. This paper investigates the effect of JPEG2000 compression on remote sensing data of different spatial resolutions. Three data sets (Landsat 8, SPOT 6 and Pleiades) were processed with five different levels of JPEG2000 compression. Each data set was then cropped to a certain area and analyzed using unsupervised classification. To estimate the accuracy, this paper utilized the Mean Square Error (MSE) and the Kappa coefficient of agreement. The study shows that scenes compressed losslessly show no difference from uncompressed scenes. Furthermore, scenes compressed with lossy compression at ratios below 1:10 show no significant difference from uncompressed data, with a Kappa coefficient higher than 0.8.
32

Klöwer, Milan, Miha Razinger, Juan J. Dominguez, Peter D. Düben, and Tim N. Palmer. "Compressing atmospheric data into its real information content." Nature Computational Science 1, no. 11 (November 2021): 713–24. http://dx.doi.org/10.1038/s43588-021-00156-2.

Abstract:
Hundreds of petabytes are produced annually at weather and climate forecast centers worldwide. Compression is essential to reduce storage and to facilitate data sharing. Current techniques do not distinguish the real from the false information in data, leaving the level of meaningful precision unassessed. Here we define the bitwise real information content from information theory for the Copernicus Atmospheric Monitoring Service (CAMS). Most variables contain fewer than 7 bits of real information per value and are highly compressible due to spatio-temporal correlation. Rounding bits without real information to zero facilitates lossless compression algorithms and encodes the uncertainty within the data itself. All CAMS data are 17× compressed relative to 64-bit floats, while preserving 99% of real information. Combined with four-dimensional compression, factors beyond 60× are achieved. A data compression Turing test is proposed to optimize compressibility while minimizing information loss for the end use of weather and climate forecast data.
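The rounding idea is straightforward to mimic: zero the low mantissa bits of float64 values and hand the result to a lossless codec. The sketch below uses simple bit masking (round-toward-zero) on an invented smooth field; the paper itself uses round-to-nearest and chooses the number of kept bits from its real-information criterion.

```python
import zlib
import numpy as np

def mask_mantissa(a, keepbits):
    """Zero all but the top `keepbits` mantissa bits of a float64 array."""
    u = a.view(np.uint64).copy()
    mask = ~np.uint64((1 << (52 - keepbits)) - 1)   # float64 has 52 mantissa bits
    return (u & mask).view(np.float64)

rng = np.random.default_rng(0)
field = np.cumsum(rng.standard_normal(100_000)) * 1e-3   # smooth, correlated "field"

raw = zlib.compress(field.tobytes())
rounded = zlib.compress(mask_mantissa(field, keepbits=7).tobytes())
size = len(field.tobytes())
print(f"full precision: {size / len(raw):.2f}x, "
      f"7 mantissa bits kept: {size / len(rounded):.2f}x")
```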
33

Dahunsi, F. M., O. A. Somefun, A. A. Ponnle, and K. B. Adedeji. "Compression Techniques of Electrical Energy Data for Load Monitoring: A Review." Nigerian Journal of Technological Development 18, no. 3 (November 5, 2021): 194–208. http://dx.doi.org/10.4314/njtd.v18i3.4.

Abstract:
In recent years, the electric grid has experienced increasing deployment, use, and integration of smart meters and energy monitors. These devices transmit big time-series load data representing consumed electrical energy for load monitoring. However, load monitoring raises issues concerning efficient processing, transmission, and storage. To promote improved efficiency and sustainability of the smart grid, one approach to managing this challenge is to apply data-compression techniques. The subject of compressing electrical energy data (EED) has received quite active interest over the past decade. However, a quick grasp of the range of appropriate compression techniques remains somewhat of a bottleneck for researchers and developers starting in this domain. In this context, this paper reviews the compression techniques and methods (lossy and lossless) adopted for load monitoring. Metrics of selected top-performing compression techniques are discussed, such as compression efficiency, low reconstruction error, and encoding-decoding speed. The relation between electrical energy, data, and sound compression is also reviewed. This review should motivate further interest in developing standard codecs for the compression of electrical energy data matching those of other domains.
34

Bíscaro, Helton Hideraldo, and José Paulo Lima. "Compressive Representation of Three-dimensional Models." Journal on Interactive Systems 6, no. 1 (October 9, 2015): 1. http://dx.doi.org/10.5753/jis.2015.656.

Full text
Abstract:
Due to recent developments in data acquisition mechanisms, namely 3D scanners, mesh compression has become an important tool for manipulating geometric data in several areas. In this context, a recent approach in signal theory called Compressive Sensing states that a signal can be recovered from far fewer samples than classical theory requires. In this paper, we investigate the applicability of this theory for obtaining a compressive representation of geometric meshes. We developed an experiment that combines sampling, compression, and reconstruction of meshes of various sizes. Besides computing compression rates, we also measured the relative error between the original mesh and the recovered mesh. We also compare two measurement techniques, Gaussian matrices and Noiselet matrices, through their processing times. Gaussian matrices performed better in terms of processing speed, with equivalent compression capacity. The results indicate that compressive sensing is very useful for mesh compression, showing results quite comparable to traditional mesh compression techniques.
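As a toy version of the sampling-and-recovery experiment, the sketch below measures a synthetic sparse vector with a Gaussian matrix and recovers it by L1 minimization. It stands in for the authors' mesh pipeline (the sparse vector plays the role of transform coefficients of the geometry) and omits the Noiselet comparison.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, m, k = 512, 128, 10            # signal length, measurements, sparsity

x = np.zeros(n)                   # k-sparse stand-in for mesh transform coefficients
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

phi = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian measurement matrix, m << n
y = phi @ x

# L1-regularized least squares: a convex surrogate for the sparsest solution.
lasso = Lasso(alpha=1e-4, max_iter=50_000, fit_intercept=False)
lasso.fit(phi, y)
x_hat = lasso.coef_

rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"{m}/{n} measurements, relative reconstruction error ≈ {rel_err:.3f}")
```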
35

Picard, Christopher, Richard Drew, Domhnall O'Dochartaigh, Matthew Douma, Candice Keddie, and Colleen Norris. "The clinical effects of CPR meter on chest compression quality: a QI project." Canadian Journal of Emergency Nursing 44, no. 2 (July 20, 2021): 9–10. http://dx.doi.org/10.29173/cjen149.

Full text
Abstract:
Background: High-quality chest compressions are the cornerstone of resuscitation. Training guidelines require CPR feedback, and preclinical data show that feedback devices improve chest compression quality; but devices are not being used in many emergency departments, and their impact on clinical care is less well understood. Some services use defibrillator-generated reports for quality improvement, but these measurements may be limited in scope and have not been rigorously compared to other tools. Methods: Laerdal CPRmeter 2 chest compression feedback devices were purchased using funds made available by a zone QI initiative. Initial training for implementation consisted of staff performing one minute of blinded chest compressions using the feedback device, followed by one minute of chest compressions unblinded. Staff were shown the raw percentage of chest compressions meeting target depth, release, and rate under both conditions, as well as overall improvement. Following initial orientation, devices were incorporated into clinical care and all subsequent staff simulation and training. Clinically, use of the feedback device and completion of QI tracking forms was not mandated, but was encouraged by drawing code participant names from completed forms for a free ACLS or PALS course. Data from all codes were automatically collected by the LifePak 20; data from any resuscitation using the Laerdal CPRmeter 2 were also automatically recorded when the device was used. These data were downloaded weekly. Completed questionnaire forms were submitted to the Clinical Educators and extracted as received. Evaluation Methods: Chest compression quality data were collected in two ways: first, using a Laerdal CPRmeter 2, and second, by downloading and analyzing cardiac arrest data from a LifePak 20 defibrillator using CodeStat™ software. Device data were matched and synthesized by an emergency department CNE using Microsoft Excel and IBM SPSS 26. Descriptive statistics (means and standard deviations) were used to describe the data. Differences in chest compression quality and duration of resuscitations between resuscitations that did or did not use a feedback device or a backboard were compared using independent t-testing. Differences in chest compressions at the target depth, release, and rate between the numbers of staff involved were assessed using ANOVA. Agreement between devices (CPRmeter 2 and LifePak) used during the resuscitations was evaluated using paired t-testing, Pearson correlations, and Bland-Altman plots. All tests were two-tailed with predetermined significance levels set at α = 0.05. Results: Data collection occurred between August 2019 and December 2020. A total of 50 cardiac arrests were included: 36 had questionnaire data returned, 36 had data collected from the CPRmeter 2, 24 had data collected from the LifePak, and 10 had data collected using all three methods. The average duration of resuscitation (number of chest compressions) was 1079.56 (SD = 858.25); there was no difference in the duration of resuscitation (number of chest compressions) between resuscitations using versus not using CPR feedback devices (p=0.673). Resuscitations utilizing chest compression feedback had a higher percentage of chest compressions at the target rate compared to resuscitations not using feedback (74.08% vs 42.18%, p=0.007).
Resuscitations that utilized a backboard had a higher percentage of chest compressions at target depth (72.92% vs 48.73%, p=0.048). There were no differences noted in the duration of resuscitation attempt (p=0.167) or percentages of chest compressions at the target depth (p=0.181), release (p=0.538), or rate (p=0.656) between resuscitations with different sized teams (4-5, 6-7, 8-9, >10 staff involved). There was a strong positive correlation (r=0.771, p=0.005, n=11) between the two measurement methods for chest compression rates, and no statistically significant difference in measured scores (p=0.999), with 100% of values falling within the Bland-Altman confidence intervals of 36.72 and -36.72 (n=11). Interpretation of the level of agreement between these two measurement methods should nevertheless be done cautiously, given the small sample size and wide confidence intervals. Implications: 1) Incorporating visual chest compression feedback and using a backboard are fast and affordable measures that significantly improved the percentage of chest compressions at the target rate and depth. 2) There was no correlation between the size of the resuscitation team and the percentage of chest compressions at the target depth, release, or rate; nor was feedback device use associated with the duration of the resuscitation attempt. 3) The improvement seen with the CPRmeter suggests that areas or services not using feedback should consider implementing its use to achieve the target compression rate. 4) Compared to LifePak feedback alone, the CPRmeter 2 also allows services to target depth and release as well as rate.
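For reference, the device-agreement statistics used above (paired t-test, Pearson correlation, and Bland-Altman limits of agreement) can be computed as in the following sketch; the paired rate measurements are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired rate measurements (compressions/min) from the two devices.
cpr_meter = np.array([108, 112, 101, 118, 95, 104, 110, 99, 107, 115, 103])
lifepak   = np.array([110, 109, 104, 120, 93, 101, 112, 97, 109, 113, 100])

t, p_t = stats.ttest_rel(cpr_meter, lifepak)   # paired t-test
r, p_r = stats.pearsonr(cpr_meter, lifepak)    # correlation between devices

diff = cpr_meter - lifepak
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                  # Bland-Altman limits of agreement
print(f"r = {r:.2f} (p = {p_r:.3f}), paired t-test p = {p_t:.3f}")
print(f"bias = {bias:.1f}, limits of agreement = [{bias - loa:.1f}, {bias + loa:.1f}]")
```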
36

Kim, Jaehoon, Jaebong Jung, Taejoon Park, Daeyong Kim, Young Hoon Moon, Farhang Pourboghrat, and Ji Hoon Kim. "Characterisation of Compressive Behaviour of Low-Carbon and Third Generation Advanced High Strength Steel Sheets with Freely Movable Anti-buckling Bars." Metals 12, no. 1 (January 17, 2022): 161. http://dx.doi.org/10.3390/met12010161.

Full text
Abstract:
Measuring the compressive behaviour of sheet materials is an important process for understanding the material behaviour and numerical simulation of metal forming. The application of side force on both surfaces of a specimen in the thickness direction is an effective way to prevent buckling when conducting compressive tests. However, the side effects of side forces (such as the biaxial stress state and non-uniform deformation) make it difficult to interpret the measured data and derive the intrinsic compressive behaviour. It is even more difficult for materials with tension–compression asymmetry such as steels that undergo transformation-induced plasticity. In this study, a novel design for a sheet compression tester was developed with freely movable anti-buckling bars on both sides of the specimen to prevent buckling during in-plane compressive loading. Tensile and compressive tests under side force were conducted for low-carbon steel using the digital image correlation method. The raw tensile and compressive stress–strain data of the low-carbon steel showed apparent flow stress asymmetry of tension and compression, originating from the biaxial and thickness effects. A finite element method-based data correction procedure was suggested and validated for the low-carbon steel. The third generation advanced high strength steels showed intrinsic tension–compression asymmetry at room temperature whereas the asymmetry was significantly reduced at 175 °C.
37

Li, Chunbao, Xiaosong Ma, Shifeng Xue, Haiyang Chen, Pengju Qin, and Gaojie Li. "Compressive Capacity of Vortex-Compression Nodular Piles." Advances in Civil Engineering 2021 (January 12, 2021): 1–18. http://dx.doi.org/10.1155/2021/6674239.

Full text
Abstract:
Compared with traditional equal-section piles, the nodular parts of a nodular pile enlarge the contact area between the pile and the foundation soil, which can greatly improve the bearing capacity of the pile foundation and increase the stability of the pile body. In this paper, the mechanism of pile-soil interaction in the construction of vortex-compression nodular piles is studied with the purpose of evaluating their compressive capacity. Through indoor model tests and ABAQUS numerical simulation, the compressive characteristics of 12 types of vortex-compression nodular pile are obtained, and the variation of the parameters governing their compressive behaviour is quantitatively analyzed, including the failure pattern of the foundation soil, the load-settlement relationship, and the load transfer law. The results show that the compressive capacity of vortex-compression nodular piles has significant advantages over that of traditional equal-section piles. Based on the results of the indoor model tests and numerical simulations, a calculation method and formula for the compressive capacity of vortex-compression nodular piles are given by modifying the corresponding calculation formula for traditional nodular piles. The new method and formula are more in line with actual working conditions and provide theoretical and data support for the further engineering application of vortex-compression nodular piles.
38

Baklanova, Olga E., and Vladimir A. Vasilenko. "Data compression with $\Sigma\Pi$-approximations based on splines." Applications of Mathematics 38, no. 6 (1993): 405–10. http://dx.doi.org/10.21136/am.1993.104563.

Full text
39

Chatamoni, Anil Kumar, and Rajendra Naik Bhukya. "Lightweight Compressive Sensing for Joint Compression and Encryption of Sensor Data." International Journal of Engineering and Technology Innovation 12, no. 2 (February 22, 2022): 167–81. http://dx.doi.org/10.46604/ijeti.2022.8599.

Full text
Abstract:
The security and energy efficiency of resource-constrained distributed sensors are major concerns in Internet of Things (IoT) networks. A novel lightweight compressive sensing (CS) method is proposed in this study for simultaneous compression and encryption of sensor data in IoT scenarios. The proposed method reduces storage space and transmission cost and increases IoT security through joint compression and encryption of image sensor data. The method exploits the cryptographic advantage of CS with a structurally random matrix (SRM). Block compressive sensing (BCS) with an SRM-based measurement matrix is performed to generate the compressed and primary-encrypted data. To enhance security, a stream-cipher-based pseudo-error vector is added to corrupt the compressed data, preventing the leakage of statistical information. Experimental results and comparative analyses show that the proposed scheme outperforms conventional and state-of-the-art schemes in terms of reconstruction performance and encryption efficiency.
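A rough sketch of an SRM-style sensing operator is shown below: random sign flips and a permutation seeded by a secret key, a fast orthonormal transform, then random subsampling. It illustrates how one operator can compress and lightly encrypt at once; the paper's block partitioning, stream-cipher error vector, and reconstruction stage are omitted, and the construction is a generic one rather than the authors' exact matrix.

```python
import numpy as np
from scipy.fft import dct

def srm_measure(x: np.ndarray, m: int, key: int) -> np.ndarray:
    """Structurally-random-matrix-style measurement of a flattened block:
    key-seeded pre-randomization, fast transform, random row selection."""
    rng = np.random.default_rng(key)          # the secret key doubles as the seed
    n = x.size
    signs = rng.choice((-1.0, 1.0), size=n)   # random sign flips
    perm = rng.permutation(n)                 # random reordering
    spread = dct(signs * x[perm], norm="ortho")
    rows = rng.choice(n, size=m, replace=False)
    return np.sqrt(n / m) * spread[rows]      # m measurements, scaled

block = np.random.default_rng(7).normal(size=256)  # one flattened image block
y = srm_measure(block, m=64, key=0xC0FFEE)         # wrong key -> wrong operator
print(y.shape)  # (64,): 4x fewer values than the block
```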
40

Liao, Ying Di, Chao Hua Jiang, and Xing Guo Feng. "An Empirical Correlation between Unconfined Compression Strength and Curing Time for Cement-Soil." Applied Mechanics and Materials 268-270 (December 2012): 642–45. http://dx.doi.org/10.4028/www.scientific.net/amm.268-270.642.

Full text
Abstract:
Different cement types were used to stabilize coastal soft soil. The unconfined compression strength of soil treated with each cement type was tested at different curing times. The results showed that, for the same cement content, cement of a higher strength grade led to higher unconfined compression strength after 90 days of curing. An empirical correlation between unconfined compressive strength and curing time was presented to forecast the unconfined compression strength of cement-soil. Additionally, 14 days and the unconfined compressive strength measured at that age were suggested as the standard curing time and the standard strength, respectively.
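To illustrate how such a correlation could be fitted, the sketch below fits a logarithmic strength-time curve anchored at the 14-day standard to hypothetical UCS data. Both the data points and the functional form are assumptions of ours, not the paper's reported correlation or coefficients.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical UCS measurements (kPa) of cement-treated soil at several curing ages.
days = np.array([3.0, 7.0, 14.0, 28.0, 60.0, 90.0])
qu   = np.array([310.0, 450.0, 600.0, 760.0, 920.0, 1010.0])

def qu_model(t, q14, a):
    """Strength grows with the log of curing time, anchored to the 14-day value."""
    return q14 * (1.0 + a * np.log(t / 14.0))

(q14, a), _ = curve_fit(qu_model, days, qu, p0=(600.0, 0.3))
print(f"q_u(t) ≈ {q14:.0f} * (1 + {a:.2f} ln(t/14)) kPa")
print(f"predicted 90-day strength ≈ {qu_model(90.0, q14, a):.0f} kPa")
```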
41

Jebur, Qusai Hatem, Philip Harrison, Zaoyang Guo, Gerlind Schubert, Xiangyang Ju, and Vincent Navez. "Characterisation and modelling of a transversely isotropic melt-extruded low-density polyethylene closed cell foam under uniaxial compression." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 226, no. 9 (December 2, 2011): 2168–77. http://dx.doi.org/10.1177/0954406211431528.

Full text
Abstract:
This article describes uniaxial compression tests on a melt-extruded closed-cell low-density polyethylene foam. The stress–strain response shows that the mechanical behaviour of the foam is predominantly transversely isotropic viscoelastic and compressible. Image analysis is used to estimate the Poisson’s ratio under large strains. When the deformation is less than 5%, the compression kinematics and mechanical response of the polymer foam can be well described by a linear compressible transversely isotropic elastic model. For large strain, a simple method is proposed to estimate the uniaxial compression response of the foam at any arbitrary orientation by manipulating experimental data obtained from compression tests in the principal and transverse directions (stress vs. strain and Poisson’s ratio) and a simple shear test. An isotropic compressible hyperfoam model is then used to implement this behaviour in a finite element code.
42

Kumar, M. Tanooj, T. Praveena, T. Lakshman, and M. Sai Krishna. "Optimized and secured data collection and recovery for IoT applications." International Journal of Engineering & Technology 7, no. 2.7 (March 18, 2018): 433. http://dx.doi.org/10.14419/ijet.v7i2.7.10857.

Full text
Abstract:
This paper proposes a new compressive sensing based method for data collection and recovery in IoT-based systems. It performs data capture, simultaneous compression and encryption, transmission, storage, and recovery. The measurement matrix used in compressive sensing is generated from the user's private key and thereby encrypts the captured data. Basis Pursuit is used for reconstruction of the data. The results show that the method is well suited to IoT-based applications in terms of data security, transmission cost, and storage cost.
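A compact way to see the scheme's ingredients is the sketch below: the measurement matrix is generated from a hypothetical private-key seed, so the measurements double as ciphertext, and Basis Pursuit is solved as a linear program. This is a generic demonstration under those assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

n, m, k = 64, 32, 4
key_rng = np.random.default_rng(123456)      # sensing matrix derived from a private key

x = np.zeros(n)                              # k-sparse sensor reading (synthetic)
x[np.random.default_rng(5).choice(n, k, replace=False)] = [1.5, -2.0, 0.8, 1.1]

A = key_rng.normal(size=(m, n)) / np.sqrt(m) # secret measurement matrix
y = A @ x                                    # measurements act as ciphertext

# Basis Pursuit: min ||s||_1 s.t. A s = y, as a linear program with s = u - v.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("max reconstruction error:", np.abs(x_hat - x).max())
```

Only a receiver that regenerates A from the same key seed can set up this recovery problem, which is where the lightweight encryption comes from.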
43

Kumar, Sanjeev, Suneeta Agarwal, and Ranvijay. "WBFQC: A new approach for compressing next-generation sequencing data splitting into homogeneous streams." Journal of Bioinformatics and Computational Biology 16, no. 05 (October 2018): 1850018. http://dx.doi.org/10.1142/s021972001850018x.

Full text
Abstract:
Genomic data nowadays play a vital role in a number of fields such as personalized medicine, forensics, drug discovery, sequence alignment, and agriculture. With the advancement and falling cost of next-generation sequencing (NGS) technology, these data are growing exponentially. NGS data are being generated more rapidly than they can be meaningfully analyzed. Thus, there is much scope for novel data compression algorithms that facilitate data analysis as well as data transfer and storage. An innovative compression technique is proposed here to address the problem of transmission and storage of large NGS data. This paper presents a lossless, non-reference-based FastQ file compression approach that segregates the data into three different streams and then applies an appropriate and efficient compression algorithm to each. Experiments show that the proposed approach (WBFQC) outperforms other state-of-the-art approaches for compressing NGS data in terms of compression ratio (CR) and compression and decompression time. It also has random access capability over compressed genomic data. An open source FastQ compression tool is also provided ( http://www.algorithm-skg.com/wbfqc/home.html ).
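The stream-splitting idea can be sketched as follows: FastQ records are segregated into identifier, sequence, and quality streams, and each stream is compressed separately. The codec assignments (zlib, bz2) are illustrative guesses on a toy input; WBFQC's actual per-stream algorithms differ, and the gains only appear on real multi-megabyte files.

```python
import bz2
import zlib

fastq = (
    "@read1\nACGTACGTACGT\n+\nIIIIIIIHHHGG\n"
    "@read2\nTTGGCCAATTGG\n+\nHHHHGGGFFFEE\n"
)

ids, seqs, quals = [], [], []
lines = fastq.strip().split("\n")
for i in range(0, len(lines), 4):       # FastQ records are 4 lines each
    ids.append(lines[i])                # identifier line
    seqs.append(lines[i + 1])           # nucleotide sequence
    quals.append(lines[i + 3])          # quality scores ('+' separator dropped)

streams = {
    "ids": zlib.compress("\n".join(ids).encode()),    # repetitive structured text
    "seq": bz2.compress("\n".join(seqs).encode()),    # 4-letter alphabet
    "qual": bz2.compress("\n".join(quals).encode()),  # locally correlated scores
}
print(len(fastq), "bytes ->", sum(len(v) for v in streams.values()),
      "bytes across 3 independently compressed streams")
```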
44

Sirota, A. A., M. A. Dryuchenko, and E. Yu Mitrofanova. "Digital watermarking method based on heteroassociative image compression and its realization with artificial neural networks." Computer Optics 42, no. 3 (July 25, 2018): 483–94. http://dx.doi.org/10.18287/2412-6179-2018-42-3-483-494.

Full text
Abstract:
In this paper, we present a digital watermarking method and associated algorithms that use a heteroassociative compressive transformation to embed a digital watermark bit sequence into blocks (fragments) of container images. A principal feature of the proposed method is the use of the heteroassociative compressive transformation: a mutual mapping, with compression, of two neighboring image regions of arbitrary shape. We also present experimental results, namely the dependence of the quality indicators of the digital watermarks thus created, which show the container distortion level and the probability of watermark extraction error. In the final section, we analyze the performance of the proposed digital watermarking algorithms under various distortions and transformations aimed at destroying the hidden data, and compare these algorithms with existing ones.
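As a loose, linear stand-in for the paper's neural-network transform, the sketch below learns a rank-reduced least-squares map from one block to its neighbor; the resulting factor pair acts as a compressive mutual mapping. The synthetic low-rank relation between blocks is an assumption standing in for the strong correlation of adjacent regions in natural images.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pairs, d, r = 500, 64, 8            # flattened 8x8 blocks, compressed dimension 8

# Training pairs: neighboring blocks related by a low-rank map plus noise.
A = rng.normal(size=(n_pairs, d))
M = rng.normal(size=(d, r)) @ rng.normal(size=(r, d)) / np.sqrt(d)
B = A @ M + 0.01 * rng.normal(size=(n_pairs, d))

# Least-squares map from a block to its neighbor, truncated to rank r:
# the factors act as encoder (d -> r) and decoder (r -> d).
W, *_ = np.linalg.lstsq(A, B, rcond=None)
U, s, Vt = np.linalg.svd(W)
encoder = U[:, :r] * s[:r]
decoder = Vt[:r]

B_hat = (A @ encoder) @ decoder
print("relative reconstruction error:",
      np.linalg.norm(B - B_hat) / np.linalg.norm(B))
```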
45

Ali, Hiba Hani Mohammed, Faisal Al-Akayleh, Abdel Hadi Al Jafari, and Iyad Rashid. "Investigating Variation in Compressional Behavior of a Ternary Mixture from a Plastic, Elastic and Brittle Fracture Perspective in the Context of Optimum Composition of a Pharmaceutical Blend." Polymers 15, no. 5 (February 21, 2023): 1063. http://dx.doi.org/10.3390/polym15051063.

Full text
Abstract:
The choice of the optimum composition of binary and ternary excipient mixtures for optimum compressional properties was investigated in this work. Excipients were chosen to represent three deformation types: plastic, elastic, and brittle fracture. Mixture compositions were selected based on a one-factor experimental design using the response surface methodology (RSM) technique. Compressive properties comprising Heckel and Kawakita parameters, work of compression, and tablet hardness were measured as the main responses of this design. The one-factor RSM analysis revealed that specific mass fractions are associated with optimum responses for binary mixtures. Furthermore, the RSM analysis of the 'mixture' design type for the three components revealed a region of optimal responses around a specific composition, with a mass ratio of 80:15:5 for microcrystalline cellulose, starch, and magnesium silicate, respectively. Upon comparison using all RSM data, ternary mixtures were found to perform better in compression and tableting properties than binary mixtures. Finally, the optimal mixture composition proved applicable in the context of the dissolution of model drugs (metronidazole and paracetamol).
46

Lyngby, Rasmus Meyer, Dimitra Nikoletou, Fredrik Folke, and Tom Quinn. "PP25 Does unguided cardio-pulmonary-resuscitation in Copenhagen achieve high quality recommendations?" Emergency Medicine Journal 37, no. 10 (September 25, 2020): e12.1-e12. http://dx.doi.org/10.1136/emermed-2020-999abs.25.

Full text
Abstract:
Background: Survival from out-of-hospital cardiac arrest (OHCA) is associated with the quality of cardio-pulmonary resuscitation (CPR). The European Resuscitation Council (ERC) and American Heart Association (AHA) define high-quality CPR as a compression depth of 5–6 centimetres, a compression rate of 100–120 compressions/minute, full recoil (>400 milliseconds) after each compression, and a hands-on time (compression fraction) of at least 60% (ERC) or 80% (AHA). The aim of this study was to investigate whether unguided CPR performed by Copenhagen Emergency Medical Services (EMS) met these recommendations. Method: From October through December 2018, OHCA data were collected from ambulances within the Capital Region of Denmark using the Zoll X-series defibrillator (without CPR feedback dashboard or metronome). Only cases where EMS performed CPR were included. Data were uploaded to a central database and extracted to Excel for descriptive statistics and preliminary results. Results: EMS CPR was performed in 330 cases, of which 252 were available for analysis. Mean (SD) compression depth was 5.6±1.7 centimetres, compression rate was 110±9.8 compressions/minute, release velocity was 410±125.1 milliseconds, compression quality (correct compression depth + correct compression rate) was 13.8%, and compression fraction was 69.7%. Conclusion: The quality of EMS-delivered CPR, unguided by feedback or metronome, was within recommendations for compression depth, compression rate, and release velocity. Compression fraction was between the ERC and AHA guidelines. Compression quality, which is not included in ERC/AHA recommendations, did not reach the manufacturer's recommended >60%. Further work is ongoing to evaluate the effect of adding real-time feedback to guide EMS CPR.
47

Schmitz, Jan, Anton Ahlbäck, James DuCanto, Steffen Kerkhoff, Matthieu Komorowski, Vanessa Löw, Thais Russomano, et al. "Randomized Comparison of Two New Methods for Chest Compressions during CPR in Microgravity—A Manikin Study." Journal of Clinical Medicine 11, no. 3 (January 27, 2022): 646. http://dx.doi.org/10.3390/jcm11030646.

Full text
Abstract:
Background: Although there have been no reported cardiac arrests in space to date, the risk of severe medical events occurring during long-duration spaceflights is a major concern. These critical events can endanger both the crew as well as the mission and include cardiac arrest, which would require cardiopulmonary resuscitation (CPR). Thus far, five methods to perform CPR in microgravity have been proposed. However, each method seems insufficient to some extent and not applicable at all locations in a spacecraft. The aim of the present study is to describe and gather data for two new CPR methods in microgravity. Materials and Methods: A randomized, controlled trial (RCT) compared two new methods for CPR in a free-floating underwater setting. Paramedics performed chest compressions on a manikin (Ambu Man, Ambu, Germany) using two new methods for a free-floating position in a parallel-group design. The first method (Schmitz–Hinkelbein method) is similar to conventional CPR on earth, with the patient in a supine position lying on the operator’s knees for stabilization. The second method (Cologne method) is similar to the first, but chest compressions are conducted with one elbow while the other hand stabilizes the head. The main outcome parameters included the total number of chest compressions (n) during 1 min of CPR (compression rate), the rate of correct chest compressions (%), and no-flow time (s). The study was registered on clinicaltrials.gov (NCT04354883). Results: Fifteen volunteers (age 31.0 ± 8.8 years, height 180.3 ± 7.5 cm, and weight 84.1 ± 13.2 kg) participated in this study. Compared to the Cologne method, the Schmitz–Hinkelbein method showed superiority in compression rates (100.5 ± 14.4 compressions/min), correct compression depth (65 ± 23%), and overall high rates of correct thoracic release after compression (66% high, 20% moderate, and 13% low). The Cologne method showed correct depth rates (28 ± 27%) but was associated with a lower mean compression rate (73.9 ± 25.5/min) and with lower rates of correct thoracic release (20% high, 7% moderate, and 73% low). Conclusions: Both methods are feasible without any equipment and could enable immediate CPR during cardiac arrest in microgravity, even in a single-helper scenario. The Schmitz–Hinkelbein method appears superior and could allow the delivery of high-quality CPR immediately after cardiac arrest with sufficient quality.
48

Huo, Fu Lei, Guo Li Zhang, Jia Lu Li, Guang Wei Chen, and Li Chen. "Study on the Compression Properties of Epoxy Matrix Composites Reinforced by PES Warp-Knitted Spacer Fabric." Advanced Materials Research 217-218 (March 2011): 1208–11. http://dx.doi.org/10.4028/www.scientific.net/amr.217-218.1208.

Full text
Abstract:
This paper presents an experimental investigation of the compression and compressive resilience properties of warp-knitted spacer fabric composites with different resin contents and different kinds of resin. By means of hand roller coating, four kinds of warp-knitted spacer fabric composites were made. The tests were conducted according to GB/T1453-2005 and ISO 3386/2:1984. It is shown that resin content and resin type strongly affect the compression and compressive resilience properties of the warp-knitted spacer fabric composite. The data indicated that, for composites coated with the same kind of resin, the elastic modulus increased and the compressive resilience decreased with increasing resin content. At the same resin content, the compression properties of the spacer fabric composite increase with the flexural strength of the resin, while the compressive resilience decreases.
49

Emberts, Z. T., J. H. Schwab, A. L. Williams, M. L. Birnbaum, P. D. Padjen, A. Bhattacharya, and S. K. Olson. "(P2-3) Analysis of Chest Compression Rate and Its Affect on the Quality of Chest Compressions." Prehospital and Disaster Medicine 26, S1 (May 2011): s136. http://dx.doi.org/10.1017/s1049023x1100447x.

Full text
Abstract:
Background: In the last 50 years of modern-era cardiopulmonary resuscitation (CPR), survival rates have remained dismal worldwide. International CPR guidelines recommend a compression rate of at least 100 per minute. There is little evidence documenting whether, and to what extent, high compression rates affect the quality of chest compressions. Objectives: One objective of this study was to evaluate the effect mean compression rate (MCR) has on the overall quality of chest compressions. Investigators hypothesized that MCRs > 110 would result in a smaller percentage of adequate compressions (PAC), adequate depth (PAD), and adequate recoil (PAR). Methods: In this observational pilot study, basic life support providers were recruited from prehospital and in-hospital settings to provide 10 minutes of continuous chest compressions, based on the 2005 American Heart Association guidelines. An adequate compression was defined as a compression that was > 35 mm, had full recoil, and had correct hand position. Data were recorded using the Laerdal PC Skill Reporting System. Results: Ninety-four (91.3%) of 103 participants completed 10 minutes of compressions. Rescuers represented a variety of backgrounds, with an average age of 35.5 ± 11.0 years. Fifty-eight (56.2%) rescuers had performed CPR in the last two years, and 54 (52.4%) practiced prehospital EMS. Providers who did not complete the entire 10 minutes tended to have a higher MCR than those completing 10 minutes, 114.2 ± 19.3 vs. 105.8 ± 15.4 respectively. Within the first two minutes, rescuers with an MCR > 110 delivered 45% of their compressions adequately, compared to 60% when a rescuer's MCR was < 110. This initial disparity was primarily due to decreased PAR, not decreased PAD. After 2 minutes, higher MCRs correlated with decreased PAC, due to decreased PAD. Conclusions: The data indicate that a higher MCR results in decreased PAC, PAD, and PAR, likely attributable to increased rescuer fatigue.
50

J. Sarkar, Subhra, Nabendu Kr. Sarkar, Trishayan Dutta, Panchalika Dey, and Aindrila Mukherjee. "Arithmatic Coding Based Approach for Power System Parameter Data Compression." Indonesian Journal of Electrical Engineering and Computer Science 2, no. 2 (May 1, 2016): 268. http://dx.doi.org/10.11591/ijeecs.v2.i2.pp268-274.

Full text
Abstract:
For stable power system operation, various system parameters like voltage, current, frequency, and active and reactive power are monitored on a regular basis. These data are either stored in the system database or transmitted to the monitoring station through SCADA. If these data can be compressed by suitable data compression techniques, memory requirements are reduced and less energy is consumed in transmitting the data. In this paper, an algorithm based on Arithmetic Coding is developed for compressing and decompressing such parameters in the MATLAB environment. The achieved compression ratio clearly indicates the effectiveness of the algorithm.
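The interval-narrowing mechanism behind arithmetic coding can be demonstrated with exact rationals, as below. The alphabet of quantized voltage readings and its static probabilities are invented for illustration, and a production codec (such as the authors' MATLAB implementation) would use fixed-precision integer renormalization rather than Fraction arithmetic.

```python
from fractions import Fraction

# Static model over a tiny alphabet of quantized voltage readings.
probs = {"229.9": Fraction(1, 4), "230.0": Fraction(1, 2), "230.1": Fraction(1, 4)}
cum, acc = {}, Fraction(0)
for sym, p in probs.items():            # cumulative start of each symbol's interval
    cum[sym] = acc
    acc += p

def encode(symbols):
    low, width = Fraction(0), Fraction(1)
    for s in symbols:                   # narrow [low, low+width) for each symbol
        low += width * cum[s]
        width *= probs[s]
    return low + width / 2              # any rational inside the final interval

def decode(code, n):
    out, low, width = [], Fraction(0), Fraction(1)
    for _ in range(n):
        for s, p in probs.items():      # find the sub-interval containing the code
            s_low = low + width * cum[s]
            if s_low <= code < s_low + width * p:
                out.append(s)
                low, width = s_low, width * p
                break
    return out

msg = ["230.0", "230.0", "229.9", "230.0", "230.1", "230.0"]
code = encode(msg)
assert decode(code, len(msg)) == msg
print(f"6 readings encoded as a single rational in [0,1): {float(code):.6f}")
```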
