Academic literature on the topic 'Data storage reduction'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data storage reduction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Data storage reduction"

1

Bostoen, Tom, Sape Mullender, and Yolande Berbers. "Power-reduction techniques for data-center storage systems." ACM Computing Surveys 45, no. 3 (June 2013): 1–38. http://dx.doi.org/10.1145/2480741.2480750.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Singhal, Shubhanshi, Pooja Sharma, Rajesh Kumar Aggarwal, and Vishal Passricha. "A Global Survey on Data Deduplication." International Journal of Grid and High Performance Computing 10, no. 4 (October 2018): 43–66. http://dx.doi.org/10.4018/ijghpc.2018100103.

Abstract:
This article describes how data deduplication, which is becoming popular in storage systems, efficiently eliminates redundant data by selecting and storing only a single instance of it. Digital data is growing much faster than storage volumes, which underscores the importance of data deduplication to scientists and researchers. Data deduplication is considered the most successful and efficient data reduction technique because it is computationally efficient and offers lossless data reduction. It is applicable to various storage systems, i.e., local storage, distributed storage, and cloud storage. This article discusses the background, components, and key features of data deduplication, helping the reader understand the design issues and challenges in this field.
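The core idea the abstract describes, storing only a single instance of each redundant block and reconstructing files losslessly from fingerprints, can be sketched in a few lines. This is an illustrative toy, not the survey's own design: fixed-size chunking and SHA-256 fingerprints are assumptions, and all names are mine.

```python
import hashlib

class DedupStore:
    """Toy block-level deduplication store: each unique block is kept once,
    and a file is represented as a list of block fingerprints."""

    def __init__(self, block_size=8):
        self.block_size = block_size
        self.blocks = {}   # fingerprint -> block bytes

    def write(self, data: bytes):
        """Split data into fixed-size blocks; store only unseen blocks."""
        recipe = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(fp, block)   # store a single instance
            recipe.append(fp)
        return recipe

    def read(self, recipe):
        """Reassemble the original data from a list of fingerprints."""
        return b"".join(self.blocks[fp] for fp in recipe)

store = DedupStore()
recipe = store.write(b"ABCDEFGH" * 4)          # four identical 8-byte blocks
assert store.read(recipe) == b"ABCDEFGH" * 4   # lossless reconstruction
assert len(store.blocks) == 1                  # only one physical copy kept
```

Real systems add content-defined chunking, on-disk fingerprint indexes, and reference counting; the sketch only shows why deduplication is lossless yet space-saving.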
3

Tong, Yulai, Jiazhen Liu, Hua Wang, Ke Zhou, Rongfeng He, Qin Zhang, and Cheng Wang. "Sieve: A Learned Data-Skipping Index for Data Analytics." Proceedings of the VLDB Endowment 16, no. 11 (July 2023): 3214–26. http://dx.doi.org/10.14778/3611479.3611520.

Abstract:
Modern data analytics services are coupled with external data storage services, making I/O from remote cloud storage one of the dominant costs for query processing. Techniques such as columnar block-based data organization and compression have become standard practices for these services to save storage and processing cost. However, the problem of effectively skipping irrelevant blocks at low overhead is still open. Existing data-skipping efforts maintain lightweight summaries (e.g., min/max, histograms) for each block to filter irrelevant data. However, such techniques ignore patterns in real-world data, leading to ineffective use of the storage budget and potentially serious false positives. This paper presents Sieve, a learning-enhanced index designed to efficiently filter out irrelevant blocks by capturing data patterns. Specifically, Sieve utilizes piece-wise linear functions to capture block distribution trends over the key space. Based on the captured trends, Sieve trades off storage consumption and false positives by grouping neighboring keys with similar block distributions into a single region. We have evaluated Sieve using Presto, and experiments on real-world datasets demonstrate that Sieve achieves up to 80% reduction in blocks accessed and 42% reduction in query times compared to its counterparts.
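The baseline that Sieve improves on, per-block min/max summaries, is easy to sketch; the toy below (my own names, not the paper's code) shows both how block skipping works and the false-positive problem the paper targets when a block's range is wide but sparse.

```python
# Each block keeps a (min, max) summary of its key column; a point query
# can skip any block whose range cannot contain the predicate value.
def build_summaries(blocks):
    return [(min(b), max(b)) for b in blocks]

def blocks_to_scan(summaries, value):
    return [i for i, (lo, hi) in enumerate(summaries) if lo <= value <= hi]

blocks = [[1, 4, 7], [10, 12, 15], [2, 3, 20]]
summaries = build_summaries(blocks)
assert blocks_to_scan(summaries, 11) == [1, 2]   # block 0 (keys 1..7) is skipped
```

Note that block 2 is selected even though it contains no key 11: its wide range (2..20) makes the min/max summary a false positive. Sieve's piece-wise linear models of the key distribution are precisely aimed at shrinking such wasted reads.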
4

Sheetal, Annabathula Phani, Giddaluru Lalitha, Arepalli Peda Gopi, and Vejendla Lakshman Narayana. "Secured Data Transmission with Integrated Fault Reduction Scheduling in Cloud Computing." Ingénierie des systèmes d'information 26, no. 2 (April 30, 2021): 225–30. http://dx.doi.org/10.18280/isi.260209.

Abstract:
Cloud computing offers end users a scalable and cost-effective way to access multi-platform data. While the features of cloud storage endorse it, resource loss is also likely. A fault-tolerant mechanism is therefore required to achieve uninterrupted cloud service performance. The two widely used fault-tolerant mechanisms are task relocation and replication. But the replication approach leads to enormous storage and computing overhead as the number of tasks gradually increases. When a large number of faults occur, it creates more storage overhead and time complexity depending on task criticalities. An Integrated Fault Reduction Scheduling (IFRS) cloud computing model is used to resolve these problems. In this model, the probability of failure of a VM is calculated from its previous failures and active executions. Then a fault-related adaptive recovery timer is retained and modified depending on the fault type. Experimental findings showed that IFRS achieved 67% lower storage costs and 24% less response time compared with the existing technique for sensitive tasks.
5

Ming-Huang Chiang, David, Chia-Ping Lin, and Mu-Chen Chen. "Data mining based storage assignment heuristics for travel distance reduction." Expert Systems 31, no. 1 (December 26, 2012): 81–90. http://dx.doi.org/10.1111/exsy.12006.

6

Szekely, Geza, Th Lindblad, L. Hildingsson, and W. Klamra. "On the reduction of data storage from high-dispersion experiments." Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 292, no. 2 (July 1990): 431–34. http://dx.doi.org/10.1016/0168-9002(90)90398-p.

7

Yasuda, Shin, Jiro Minabe, and Katsunori Kawano. "Optical noise reduction for dc-removed coaxial holographic data storage." Optics Letters 32, no. 2 (December 23, 2006): 160. http://dx.doi.org/10.1364/ol.32.000160.

8

Firtha, Ferenc. "Development of data reduction function for hyperspectral imaging." Progress in Agricultural Engineering Sciences 3, no. 1 (December 1, 2007): 67–88. http://dx.doi.org/10.1556/progress.3.2007.4.

Abstract:
Algorithms have been developed for controlling the calibration and measurement cycles of hyperspectral equipment. Special calibration and preprocessing methods were necessary to obtain a suitable signal level and acceptable repeatability of the measurements. The effect of NIR-sensor noise was thereby decreased while the signal level was enhanced and stability was ensured. In order to investigate a number of objects sufficient for statistical analysis, the enormous acquired hypercube (gigabytes per object) had to be reduced in real time by vector-to-scalar mathematical operators that extract the desired features. The algorithm developed was able to calculate the scores of the operators during scanning, and the resulting matrices were displayed as pseudo-images to show the distribution of the properties over the surface. The operators had to be determined by analysis of a sample set in preliminary experiments. Stored carrot was chosen as a model sample for investigating the detection of moisture loss from hyperspectral properties. Determining the proper operator for different tissues could help to analyze and model the drying process and to control storage. Hyperspectral data of different carrot cultivars were tested under different storage conditions. Using the improved measurement method, the spectral parameter of the suitable operator described the moisture loss of the different carrot tissues quite well.
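The data-reduction step the abstract describes, collapsing each pixel's spectrum to one scalar with a vector-to-scalar operator, can be shown with a tiny cube. The two-band ratio operator and all names here are my own illustration; the paper's actual operators were determined experimentally per tissue.

```python
# A hypercube of shape (rows, cols, bands) is reduced by applying a
# vector-to-scalar operator along the spectral axis, yielding a
# pseudo-image with one score per pixel (here: a simple two-band ratio).
def band_ratio(spectrum, a, b):
    return spectrum[a] / spectrum[b]

def reduce_cube(cube, operator):
    return [[operator(pixel) for pixel in row] for row in cube]

cube = [[[2.0, 4.0], [1.0, 2.0]],
        [[3.0, 6.0], [5.0, 10.0]]]          # 2x2 pixels, 2 bands each
image = reduce_cube(cube, lambda s: band_ratio(s, 1, 0))
assert image == [[2.0, 2.0], [2.0, 2.0]]    # gigabytes -> one plane
```

The point of the reduction is the same as in the paper: the scalar plane is small enough to compute and display during scanning, while the raw hypercube is not worth storing.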
9

Abd Manan, Wan Nurazieelin Wan, and Mohamad Aizi Salamat. "Concept of minimizing the response time for reducing dynamic data redundancy in cloud computing." Indonesian Journal of Electrical Engineering and Computer Science 15, no. 3 (September 1, 2019): 1597. http://dx.doi.org/10.11591/ijeecs.v15.i3.pp1597-1602.

Abstract:
Reduction of dynamic data redundancy in cloud computing is one of the best ways to keep the storage capacity from being fully utilized. Cloud storage is a part of cloud computing technology that is in high demand in any organization for reducing the cost of purchasing and maintaining storage infrastructure. An increase in the number of users requires a larger storage capacity for storing their data. Reduction of dynamic data redundancy allows service providers to be energy savvy and minimize maintenance cost. Recent research focuses more on static data despite its limited capability compared to dynamic data in cloud storage. Therefore, this paper theoretically compares various techniques for the reduction of redundant dynamic data in cloud computing and suggests the best technique for completing the task in terms of response time.
10

Kim, Jang Hyun, and Hyunseok Yang. "Noise Reduction Method Using Extended Kalman Filter for Tilt Servo Control in Holographic Data Storage System." Proceedings of JSME-IIP/ASME-ISPS Joint Conference on Micromechatronics for Information and Precision Equipment (MIPE 2015) (2015): TuC-1-4-1–TuC-1-4-3. http://dx.doi.org/10.1299/jsmemipe.2015._tuc-1-4-1.


Dissertations / Theses on the topic "Data storage reduction"

1

Huffman, Michael John. "JDiet: Footprint Reduction for Memory-constrained Systems." DigitalCommons@CalPoly, 2009. https://digitalcommons.calpoly.edu/theses/108.

Abstract:
Main memory remains a scarce computing resource. Even though main memory is becoming more abundant, software applications are inexorably engineered to consume as much memory as is available. For example, expert systems, scientific computing, data mining, and embedded systems commonly suffer from the lack of main memory availability. This thesis introduces JDiet, an innovative memory management system for Java applications. The goal of JDiet is to provide the developer with a highly configurable framework to reduce the memory footprint of a memory-constrained system, enabling it to operate on much larger working sets. Inspired by buffer management techniques common in modern database management systems, JDiet frees main memory by evicting non-essential data to a disk-based store. A buffer retains a fixed amount of managed objects in main memory. As non-resident objects are accessed, they are swapped from the store to the buffer using an extensible replacement policy. While the Java virtual machine naïvely delegates virtual memory management to the operating system, JDiet empowers the system designer to select both the managed data and replacement policy. Guided by compile-time configuration, JDiet performs aspect-oriented bytecode engineering, requiring no explicit coupling to the source or compiled code. The results of an experimental evaluation of the effectiveness of JDiet are reported. A JDiet-enabled XML DOM parser is capable of parsing and processing over 200% larger input documents by sacrificing less than an order of magnitude in performance.
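JDiet itself is a Java system built on aspect-oriented bytecode engineering; the buffer-management idea it borrows from databases, a fixed-size in-memory buffer that evicts objects to a disk store and swaps them back on access, can be sketched language-neutrally. The Python below is a toy with names of my own choosing (and a dict standing in for the disk store), not JDiet's API.

```python
from collections import OrderedDict

class Buffer:
    """Toy fixed-capacity object buffer: when full, the least recently
    used object is evicted to a disk-backed store (a dict here)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = OrderedDict()   # in-memory managed objects
        self.disk = {}                  # evicted, non-resident objects

    def access(self, key, value=None):
        if key in self.resident:
            self.resident.move_to_end(key)          # mark as recently used
        else:
            if value is None:
                value = self.disk.pop(key)          # swap in from the store
            self.resident[key] = value
            if len(self.resident) > self.capacity:
                old_key, old_val = self.resident.popitem(last=False)
                self.disk[old_key] = old_val        # evict LRU to disk
        return self.resident[key]

buf = Buffer(capacity=2)
buf.access("a", 1); buf.access("b", 2); buf.access("c", 3)  # evicts "a"
assert "a" in buf.disk and "a" not in buf.resident
assert buf.access("a") == 1                                  # swapped back in
```

JDiet's extensible replacement policy corresponds to swapping out the LRU rule here; the thesis's contribution is doing this transparently for selected Java objects rather than explicitly, as in this sketch.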
2

Dini, Cosmin. "Mécanismes de traitement des données dans les réseaux de capteurs sans fils dans les cas d'accès intermittent à la station de base." Phd thesis, Université de Haute Alsace - Mulhouse, 2010. http://tel.archives-ouvertes.fr/tel-00576919.

Abstract:
Wireless sensor networks are considered an alternative to wired networks, enabling deployment in areas that are difficult to access. New protocols have therefore been designed to cope with the scarcity of resources specific to this type of network, and communication between nodes uses protocols designed for energy-efficient operation. The management of the data collected by the nodes must also be taken into account, since inter-node communication carries a non-negligible energy cost. Moreover, deploying this type of network in remote regions makes attacks on the network structure and on the collected data easier, and the security measures required add further energy costs. An often-neglected aspect concerns the case where a node cannot communicate with the base station (sink node) that collects and processes the data. The nodes nevertheless continue to accumulate information according to their collection plans, and if the situation persists, the storage space shrinks to the point where collecting new data is no longer possible. We propose mechanisms for the controlled reduction of data according to their relative priority. Data are divided into units, each assigned an importance level based on their utility and the missions that use them. We propose a set of primitives (operations) that reduce the required storage space while preserving a reasonable resolution of the collected information. For large multi-node networks, we propose mechanisms for data load sharing and redundancy. Algorithms are proposed to evaluate the efficiency of these data-management techniques with respect to the energy needed to transfer the data. Through simulations, we validated that the results are very useful for storage-constrained wireless nodes and for intermittent communications.
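The thesis's central mechanism, reducing stored data units according to their relative priority when storage runs low, can be illustrated with a toy budget filter. All names and the unit format below are mine, and the sketch simply drops low-importance units, whereas the thesis's primitives degrade their resolution gradually.

```python
# Illustrative sketch: data units carry an importance level, and when the
# storage budget is exceeded, the least important units are sacrificed
# first while the most important ones are retained.
def reduce_to_budget(units, budget):
    # units: list of (name, size, importance); higher importance = keep longer
    by_importance = sorted(units, key=lambda u: u[2], reverse=True)
    total, kept = 0, []
    for name, size, importance in by_importance:
        if total + size <= budget:
            total += size
            kept.append(name)
    return kept

units = [("temp_log", 4, 1), ("alarm", 2, 9), ("trend", 3, 5)]
assert reduce_to_budget(units, 5) == ["alarm", "trend"]   # low-priority log dropped
```

In the thesis, the same priority ordering drives which units are summarized or coarsened rather than deleted outright, so the sink node still receives a degraded but usable record when connectivity returns.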
3

Majed, Aliah. "Sensing-based self-reconfigurable strategies for autonomous modular robotic systems." Electronic Thesis or Diss., Brest, École nationale supérieure de techniques avancées Bretagne, 2022. http://www.theses.fr/2022ENTA0013.

Abstract:
Modular robotic systems (MRSs) have become a highly active research area today. They have the ability to change the perspective of robotic systems from machines designed to do certain tasks to multipurpose tools capable of accomplishing almost any task. They are used in a wide range of applications, including reconnaissance, rescue missions, space exploration, military tasks, etc. Typically, an MRS is built of "modules", from a few to several hundred or even thousands. Each module contains actuators, sensors, and computational and communication capabilities. Usually, these systems are homogeneous, where all the modules are identical; however, there can be heterogeneous systems that contain different modules to maximize versatility. One of the advantages of these systems is their ability to operate in harsh environments in which contemporary human-in-the-loop working schemes are risky, inefficient and sometimes infeasible. In this thesis, we are interested in self-reconfigurable modular robotics. Such a system uses a set of detectors to continuously sense its surroundings, locate its own position, and then transform into a specific shape to perform the required tasks. Consequently, an MRS faces three major challenges. First, it produces a great amount of collected data that overloads the memory storage of the robot. Second, it generates redundant data, which complicates decision-making about the next morphology in the controller. Third, the self-reconfiguration process necessitates massive communication between the modules to reach the target morphology and takes significant processing time to self-reconfigure the robot. Therefore, researchers' strategies often aim to minimize the amount of data collected by the modules without considerable loss in fidelity.
The goal of this reduction is first to save storage space in the MRS, and then to facilitate analyzing the data and deciding what morphology to use next in order to adapt to new circumstances and perform new tasks. In this thesis, we propose an efficient mechanism for data processing and self-reconfiguration decision-making dedicated to modular robotic systems. More specifically, we focus on data storage reduction, self-reconfiguration decision-making, and efficient communication management between modules in MRSs, with the main goal of ensuring a fast self-reconfiguration process.
4

"Kernel-space inline deduplication file systems for virtual machine image storage." 2013. http://library.cuhk.edu.hk/record=b5549294.

Abstract:
We explore the use of deduplication for eliminating the storage of redundant data in RAID from a file-system design perspective. We propose ScaleDFS, a deduplication file system that seeks to achieve scalable read/write throughput in RAID. ScaleDFS is built on three novel design features. First, we improve the write throughput by exploiting multiple CPU cores to parallelize the processing of the cryptographic fingerprints that are used to identify redundant data. Second, we improve the read throughput by specifically caching in memory the recently read blocks that have been deduplicated. Third, we reduce the memory usage by enhancing the data structures that are used for fingerprint lookups. ScaleDFS is implemented as a POSIX-compliant, kernel-space driver module that can be deployed in commodity hardware configurations. We conduct microbenchmark experiments using synthetic workloads, and macrobenchmark experiments using a dataset of 42 VM images of different Linux distributions. We show that ScaleDFS achieves higher read/write throughput than existing open-source deduplication file systems in RAID.
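ScaleDFS's first design feature, spreading the computation of cryptographic fingerprints over multiple CPU cores, can be illustrated in user space. ScaleDFS itself is a kernel-space Linux module; the sketch below is my own toy using Python threads (CPython's `hashlib` releases the GIL while hashing large buffers, so threads genuinely overlap here).

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def fingerprint(block: bytes) -> str:
    """Cryptographic fingerprint used to identify redundant blocks."""
    return hashlib.sha256(block).hexdigest()

def parallel_fingerprints(blocks, workers=4):
    # Fan block hashing out over a worker pool; results come back in
    # block order, so the caller can zip them with the original blocks.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fingerprint, blocks))

blocks = [bytes([i]) * 4096 for i in range(8)]       # eight distinct 4 KiB blocks
fps = parallel_fingerprints(blocks)
assert fps == [fingerprint(b) for b in blocks]       # same result as serial
assert len(set(fps)) == 8                            # distinct blocks, distinct prints
```

The deduplication layer then needs only a fingerprint-to-block index (as in the earlier deduplication sketch); parallelizing the hash step is what lifts write throughput when blocks arrive faster than one core can digest them.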
Ma, Mingcao.
"October 2012."
Thesis (M.Phil.)--Chinese University of Hong Kong, 2013.
Includes bibliographical references (leaves 39-42).
Abstracts also in Chinese.
5

"Live deduplication storage of virtual machine images in an open-source cloud." 2012. http://library.cuhk.edu.hk/record=b5549139.

Abstract:
Deduplication is a technique that eliminates the storage of redundant data blocks. In particular, it has been shown to effectively reduce the disk space for storing multi-gigabyte virtual machine (VM) images. However, there remain challenging deployment issues of enabling deduplication in a cloud platform, where VM images are regularly inserted and retrieved. We propose a kernel-space deduplication file systems called LiveDFS, which can serve as a VM image storage backend in an open-source cloud platform that is built on low-cost commodity hardware configurations. LiveDFS is built on several novel design features. Specifically, the main feature of LiveDFS is to exploit spatial locality of placing deduplication metadata on disk with respect to the underlying file system layout. LiveDFS is POSIX-compliant and is implemented as Linux kernel-space file systems. We conduct testbed experiments of the read/write performance of LiveDFS using a dataset of 42 VM images of different Linux distributions. Our work justifies the feasibility of deploying LiveDFS in a cloud platform under commodity settings.
Ng, Chun Ho.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2012.
Includes bibliographical references (leaves 39-42).
Abstracts also in Chinese.

Books on the topic "Data storage reduction"

1

Singh, Animesh, Bhawna Choudhary, and Manisha Gupta. Transforming Business through Digitalization. Kaav Publications, Delhi, India, 2021. http://dx.doi.org/10.52458/9789391842390.2021.eb.

Abstract:
The theme of this book, "Transforming Business through Digitalization", was chosen for its relevance in the contemporary globalized world. The world is witnessing a pace of digitalization like never before, and a similar trend will be seen in the future too. With the integration of value chains and supply chains becoming a global imperative, the contribution of IT-enabled services and digitalization has had a great impact on the transnationalisation of businesses. Responsiveness in value chains and in the larger supply chains will be key to increasing market share in the future. The application of Artificial Intelligence has helped stakeholders in value chains and supply chains make informed and quick decisions. This has been made possible by integrated and well-organized business linkages leading to better storage, access, and management of data. Increased digitalization and the ability to track and capture data at different nodes in the value chain and supply chain will help marketers understand the impact of various variables on the sales performance of various brands. Marketers have to work on ways to convince stakeholders about the privacy of the data. In the future there is a possibility of combining complete data privacy with fluid artificial intelligence across the supply chain, making business processes easier using blockchain technology. The most important contribution of digitalization in the supply chain may be seen in the area of sustainability and green initiatives. This may be made possible by assessing the levels of reduction in exploitative and polluting systems and processes and making progressive modifications to those systems and processes.
The book is an attempt to record innovative and novel manuscripts, research-based articles, case studies, conceptual outcome-oriented business models, and practices from the innovative minds of researchers and academicians. It encompasses twenty-four chapters with research-based perspectives in the areas of e-commerce, digital governance, digital transaction platforms, business analytics, digitalization in agriculture, digital marketing, blockchain, neuromarketing, search engine marketing, UPIs, digi-preneurship, and digital finance. The book can be read as a compendium of readings on the digitization of business and industry.
2

Maugeri, Leonardo. Beyond the Age of Oil. ABC-CLIO, LLC, 2010. http://dx.doi.org/10.5040/9798400618161.

Abstract:
This book offers a revealing picture of the myths and realities of the energy world by one of our most renowned energy experts and managers. At the end of the first decade of the 21st century, the human race finds itself caught in an "energy trap." Carbon-rich fossil fuels—coal, petroleum and natural gas—are firmly entrenched as the dominant sources of our energy and power. Their highly concentrated forms, versatility of use, ease of transport and storage, ready availability, and comparatively low costs combine to give fossil fuels an unassailable competitive advantage over all alternative sources of energy. This economic reality means that fossil fuels will inevitably continue to be the backbone of the global economy for the next quarter of a century, even while the adverse climate and environmental effects of our dependence on fossil fuels hurtle toward global crisis levels. To avert unacceptable environmental consequences, the world must deliberately and incrementally supplant fossil fuels with alternative energy sources, on a schedule that will have them overtake fossil fuels in the world's energy budget by 2035. To achieve this urgent goal without massive economic dislocation and reduction in standards of living, global investment in fossil fuel efficiency will be just as important as the development and massive deployment of alternative energy technologies and delivery systems. In this eagerly awaited sequel to his prize-winning bestseller, The Age of Oil, Leonardo Maugeri, the strategy director of one of the world's biggest energy companies, puts forward a hard-headed, concrete plan in simple everyday language for how to shift the world economy's primary energy dependence from fossil fuels to renewable energies by 2035. 
Assuming no specialized knowledge, the author walks the reader chapter by chapter through each of the fossil fuels (oil, coal, and natural gas) and each of the alternative energy sources (nuclear, hydroelectric, biofuel, wind, solar, geothermal, and hydrogen). Drawing on the unparalleled data and analysis resources at his command, Maugeri assesses the problems and advantages of each energy source in turn in order to constrain the optimal mix of energy sources that the world should be aiming for in 2035. Critically, he lays out the arduous path for getting from here to there. Maugeri shows that the next 25 years will be a rocky marriage between the old and the new energy paradigms, during which we must dramatically improve the efficiency of our continuing use of fossil fuels, while driving ahead on all fronts to an energy future based on a suite of sustainable energy sources.

Book chapters on the topic "Data storage reduction"

1

Čtvrtník, Mikuláš. "Data Minimisation—Storage Limitation—Archiving." In Archives and Records, 197–240. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-18667-7_8.

Abstract:
The chapter summarises the conclusions drawn for the area of data reduction and minimisation in records management and archiving. A major focus is placed, among other things, on the process of archival appraisal, within which the most significant data and record reduction in records management and archiving is carried out. Archival science, methodology, and practice, however, have so far neglected the potential risks of the misuse of sensitive personal data contained in permanently (or long-term) stored records and have focused almost exclusively on the records' information content and their future usability by various research projects and private research interests. The sector should aspire to change this in the future and give more substantial consideration to the protection of (not only) personality rights and privacy in archival appraisal. In this chapter, the author thus analyses some models of records appraisal as they have taken shape in the post-1945 period. In analysing the process of reducing and minimising data maintained in archives, the author also looks at the anonymisation and pseudonymisation of data that are either already stored or are intended to be archived in the future. In this context, the chapter finally also addresses the current growing risks of deanonymisation and reidentification.
APA, Harvard, Vancouver, ISO, and other styles
2

Lofstead, Jay, Gregory Jean-Baptiste, and Ron Oldfield. "Delta: Data Reduction for Integrated Application Workflows and Data Storage." In Lecture Notes in Computer Science, 142–52. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46079-6_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ma, Jeonghyeon, and Chanik Park. "Parallelizing Inline Data Reduction Operations for Primary Storage Systems." In Lecture Notes in Computer Science, 301–7. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-62932-2_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zou, Ruobing, Oscar C. Au, Lin Sun, Sijin Li, and Wei Dai. "An Adaptive Motion Data Storage Reduction Method for Temporal Predictor." In Advances in Image and Video Technology, 48–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-25346-1_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yu, Wangyang, Guanjun Liu, and Leifeng He. "A Reduction Method of Analyzing Data-Liveness and Data-Boundedness for a Class of E-commerce Business Process Nets." In Security, Privacy, and Anonymity in Computation, Communication, and Storage, 70–83. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-49148-6_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chakravarthy, S. Kalyan, N. Sudhakar, E. Srinivasa Reddy, D. Venkata Subramanian, and P. Shankar. "Dimension Reduction and Storage Optimization Techniques for Distributed and Big Data Cluster Environment." In Soft Computing and Medical Bioinformatics, 47–54. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0059-2_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chandrasekhar, A. Poorna, and T. Sobha Rani. "Storage and Retrieval of Large Data Sets: Dimensionality Reduction and Nearest Neighbour Search." In Communications in Computer and Information Science, 262–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32129-0_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Guanglin, Kaijiang Yi, Wenqian Zhang, and Demin Li. "Cost Reduction for Micro-Grid Powered Data Center Networks with Energy Storage Devices." In Wireless Algorithms, Systems, and Applications, 647–59. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94268-1_53.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sethuramalingam, R., Abhishek Asthana, S. Xygkaki, K. Liu, J. Eduardo, S. Wilson, and C. Bater. "Energy Demand Reduction in Data Centres Using Computational Fluid Dynamics." In Springer Proceedings in Energy, 275–84. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30960-1_26.

Full text
Abstract:
A data centre is a facility that hosts server systems, computer systems, and associated components such as cooling units, redundant power supplies, and power storage systems. Data centres are a very energy-demanding sector: Data Centre Dynamics magazine forecasts that by 2025 data centres will consume more than 2% of the global electricity supply. Given this forecast, it has become vital to reduce energy consumption in the data centre industry. On average, data centres use 30–50% of their total energy supply on mechanical cooling of their IT equipment. However, many still have difficulties with high-temperature regions, such as hot spots in the server data hall, which contribute to server downtime. In addition, the power densities of data centres are rising as the telecommunications industry grows rapidly. These inefficiencies in temperature distribution can be resolved with advanced computational fluid dynamics (CFD) software. It is essential to expand the use of CFD into key stages of data centre design to reduce thermal inefficiencies and to identify potential issues early, so that facilities operate at a low Power Usage Effectiveness (PUE) and are future-proofed. This paper outlines the importance of CFD analysis in data centre design. Mock-up internal and external data centre models are analysed in 6Sigma software, and various parameters are investigated to optimise the energy performance of the infrastructure. The results include an analysis of the data hall with detailed rack-inlet and 3D modelling, and external simulations with chiller and generator inlet temperatures highlighting trouble areas.
In addition, a performance comparison of water-cooled and air-cooled chillers was also studied, and it was concluded that the water-cooled chiller performs better than the air-cooled chiller. Raising the data hall air supply temperature from 24 °C to 27 °C improved the energy efficiency of the data centre. The model developed in this study can be used as a benchmark for present and future thermal optimisation of data centres.
APA, Harvard, Vancouver, ISO, and other styles
10

Tamura, Takao. "Improvement of the Flood-Reduction Function of Forests Based on Their Interception Evaporation and Surface Storage Capacities." In Ecological Research Monographs, 93–104. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6791-6_7.

Full text
Abstract:
Forests have a flood-reduction function that reduces flood peak flow and delays the flood peak time. In the mountains of Japan, artificial forests planted between the 1950s and 1970s are widespread; however, many of these forests are not well managed. The effective use of the flood-reduction function of forests as a remarkable approach for river basin management has been discussed for several years. In this study, two aspects of the water cycle in forests were explored: the interception evaporation process in the forest canopy and the groundwater storage process on the forest slope. A runoff model was applied to the hydrological data obtained in several forest basins with different characteristics to evaluate the effects of the processes. In the case of the Japanese cedar plantations studied, it was suggested that the improvement of interception evaporation capacity and surface storage capacity by conversion to mixed forests and selective logging would significantly reduce the peak flood discharge on a timescale of approximately 20–30 years.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Data storage reduction"

1

Nakajima, M., M. Hamada, M. Moribe, H. Hirano, K. Itoh, and S. Ogawa. "Reduction of Media Noise in Optical Disks." In Optical Data Storage. Washington, D.C.: Optica Publishing Group, 1985. http://dx.doi.org/10.1364/ods.1985.thcc5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Okubo, Shuichi, Masayuki Kubogata, and Mitsuya Okada. "Reduction of cross erase in phase change media." In Optical Data Storage. Washington, D.C.: Optica Publishing Group, 1998. http://dx.doi.org/10.1364/ods.1998.wb.3.

Full text
Abstract:
The reduction of cross erase in phase change media is crucial to the achievement of land/groove recording with a track pitch of less than 0.6 μm. The method of cross erase reduction that we report here has succeeded without the use of deep groove substrates [1].
APA, Harvard, Vancouver, ISO, and other styles
3

Ushiyama, Junko, Yasushi Miyauchi, Toshinori Sugiyama, Toshimichi Shintani, Takahiro Kurokawa, and Harukazu Miyamoto. "Interlayer Cross-talk Reduction by Controlling Backward Reflectivity for Multilayer Disks." In Optical Data Storage. Washington, D.C.: OSA, 2007. http://dx.doi.org/10.1364/ods.2007.wdpdp3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Okubo, Shuichi, Masayuki Kubogata, and Mitsuya Okada. "Reduction of cross-erase in phase-change media." In Optical Data Storage '98, edited by Shigeo R. Kubota, Tomas D. Milster, and Paul J. Wehrenberg. SPIE, 1998. http://dx.doi.org/10.1117/12.327934.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Eto, Soichiro, Hiroyuki Minemura, Yumiko Anzai, and Toshimichi Shintani. "Disc Design for Reduction of Random Data Bit Error Rate in Super-Resolution." In Optical Data Storage. Washington, D.C.: OSA, 2007. http://dx.doi.org/10.1364/ods.2007.wdpdp5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kim, Hye-Rim, Ki-Mun Pak, Ji-Song Lim, and Yong-Hyub Won. "Error reduction in reconstruction of kinoform CGH patterns for a hologram ID tag system." In Optical Data Storage 2010, edited by Susanna Orlic and Ryuichi Katayama. SPIE, 2010. http://dx.doi.org/10.1117/12.858951.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Milster, Tom D., Robert M. Trusty, Mark S. Wang, Fred F. Froehlich, and J. Kevin Erwin. "Micro-optic lens for data storage." In Optical Data Storage. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/ods.1991.tud3.

Full text
Abstract:
There are several types of micro lenses commonly used in optical data storage systems. The most common are molded glass and molded plastic lenses. Molded optics weigh less than conventional multiple-element designs. Typical apertures are 3.5mm to 4.5mm in diameter, and numerical apertures (NAs) range from 0.45 to 0.55. For optical data storage applications, the smallest molded micro lenses available have an entrance pupil diameter (d) of 3.0mm. (1) There are several reasons to use smaller optics. If a smaller micro lens is used as an objective lens, the same number of disks can fit into a smaller stack height, and volumetric storage density will increase. If all hardware dimensions scale linearly with optics size, a reduction in lens size by some factor will allow a reduction in moving mass by that factor to the third power, since mass scales with volume. The result will be an improvement in access time.
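The mass-scaling argument in this abstract (mass is proportional to volume, so a linear reduction in lens size yields a cubed reduction in moving mass) can be checked with a short calculation. The diameters below are illustrative values, not figures from the paper:

```python
# Numerical check of the scaling argument: if every hardware dimension
# scales linearly with lens size, mass (proportional to volume) scales
# with the cube of the linear scale factor.
def mass_reduction_factor(original_diameter_mm, reduced_diameter_mm):
    """Return the factor by which moving mass shrinks when all
    dimensions scale down by original/reduced."""
    linear_factor = original_diameter_mm / reduced_diameter_mm
    return linear_factor ** 3


# Shrinking a 3.0 mm lens to 1.0 mm (factor 3) cuts moving mass 27-fold.
print(mass_reduction_factor(3.0, 1.0))  # → 27.0
```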
APA, Harvard, Vancouver, ISO, and other styles
8

van Rosmalen, G. E., J. A. H. Kahlman, and C. M. J. van Uijen. "A Compact, One-Laser, Optical Tape Recording System for High-Definition Digital Video." In Optical Data Storage. Washington, D.C.: Optica Publishing Group, 1994. http://dx.doi.org/10.1364/ods.1994.mb3.

Full text
Abstract:
Since the introduction of digital optical recording in the early eighties, there has been a steady demand for an increased capacity storage medium. Apart from creative approaches such as superresolution, the density enhancement is usually pursued by more conventional techniques, i.e. by reduction of the spot size using shorter laser wavelengths and higher numerical aperture optics.
APA, Harvard, Vancouver, ISO, and other styles
9

Lu, Maohua, David Chambliss, Joseph Glider, and Cornel Constantinescu. "Insights for data reduction in primary storage." In the 5th Annual International Systems and Storage Conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2367589.2367606.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gardner, K., PR Helfet, RJ Longman, and RM Pettigrew. "Plasmon Media Technology." In Optical Data Storage. Washington, D.C.: Optica Publishing Group, 1985. http://dx.doi.org/10.1364/ods.1985.wdd4.

Full text
Abstract:
The ME write once disc developed by Plasmon Data Systems is differentiated from other optical discs by the use of a unique microstructure called Moth Eye on the disc surface. This surface structure takes its name from the fact that similar surfaces are found on the cornea of certain nocturnal insects including the night moth. The microstructure, which has a pitch significantly below the wavelength of light, acts as an impedance matching layer to reduce the surface reflection. The reduction in reflectivity can also be achieved in the case of metal and dielectric coatings on the ME surface and in this case the absorption of the film is increased.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Data storage reduction"

1

LaBonte, Don, Etan Pressman, Nurit Firon, and Arthur Villordon. Molecular and Anatomical Characterization of Sweetpotato Storage Root Formation. United States Department of Agriculture, December 2011. http://dx.doi.org/10.32747/2011.7592648.bard.

Full text
Abstract:
Original objectives: Anatomical study of storage root initiation and formation. Induction of storage root formation. Isolation and characterization of genes involved in storage root formation, both during the normal course of storage root development and following stress-induced storage root formation. Background: Sweetpotato is a high value vegetable crop in Israel and the U.S. and acreage is expanding in both countries, and the research herein represents an important backstop to improving quality, consistency, and yield. This research has two broad objectives, both relating to sweetpotato storage root formation. The first objective is to understand storage root inductive conditions and describe the anatomical and physiological stages of storage root development. Sweetpotato is propagated through vine cuttings. These vine cuttings form adventitious roots, from pre-formed primordiae, at each node underground, and it is these small adventitious roots which serve as initials for storage and fibrous (non-storage) "feeder" roots. What perplexes producers is the tremendous variability in storage roots produced from plant to plant. The marketable root number may vary from none to five per plant. What has intrigued us is the dearth of research on sweetpotato during the early growth period, which we hypothesize has a tremendous impact on ultimate consistency and yield. The second objective is to identify genes that change the root physiology towards either a fleshy storage root or a fibrous "feeder" root. Understanding which genes affect the ultimate outcome is central to our research. Major conclusions: For objective one, we have determined that the majority of adventitious roots that are initiated within 5-7 days after transplanting possess the anatomical features associated with storage root initiation and account for 86% of storage root count at 65 days after transplanting.
These data underscore the importance of optimizing the growing environment during the critical storage root initiation period. Water deprivation during this phenological stage led to substantial reduction in storage root number and yield as determined through growth chamber, greenhouse, and field experiments. Morphological characterization of adventitious roots showed adjustments in root system architecture, expressed as lateral root count and density, in response to water deprivation. For objective two, we generated a transcriptome of storage and lignified (non-storage) adventitious roots. This transcriptome database consists of 55,296 contigs and contains data as regards to differential expression between initiating and lignified adventitious roots. The molecular data provide evidence that a key regulatory mechanism in storage root initiation involves the switch between lignin biosynthesis and cell division and starch accumulation. We extended this research to identify genes upregulated in adventitious roots under drought stress. A subset of these genes was expressed in salt stressed plants.
APA, Harvard, Vancouver, ISO, and other styles
2

Badia, R., J. Ejarque, S. Böhm, C. Soriano, and R. Rossi. D4.4 API and runtime (complete with documentation and basic unit testing) for IO employing fast local storage. Scipedia, 2021. http://dx.doi.org/10.23967/exaqute.2021.9.001.

Full text
Abstract:
This deliverable presents the activities performed on the ExaQUte project task 4.5, Development of interface to fast local storage. The activities have been focused on two aspects: reduction of the storage space used by applications, and the design and implementation of an interface that optimizes the use of fast local storage by the MPI simulations involved in the project applications. In the first case, for one of the environments involved in the project (PyCOMPSs), the default behavior is to keep all intermediate files until the end of the execution, in case these files are reused later by any additional task. In the case of the other environment (HyperLoom), all files are deleted by default. To unify these two behaviours, the calls "delete object" and "delete file" have been added to the API, along with a flag "keep" that can be set to true to retain the files and objects that may be needed later on. We report results on the optimization of the storage needed by a small case of the project application that reduce the storage needed from 25GB to 350MB. The second focus has been on the definition of an interface that enables the optimization of the use of the local storage disk. This optimization focuses on MPI simulations that may be executed across multiple nodes. The added annotation enables the definition of access patterns of the processes in the MPI simulations, with the objective of giving hints to the runtime about where to allocate the different MPI processes and reduce the data transfers, as well as the storage usage.
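The cleanup behaviour this abstract describes (explicit "delete file" calls plus a "keep" flag for files needed later) can be sketched as follows. The function name `delete_file`, the pipeline stages, and the file names are illustrative assumptions, not the actual PyCOMPSs or HyperLoom API:

```python
# Minimal sketch of the intermediate-file cleanup pattern: a pipeline
# writes an intermediate file, consumes it, then deletes it explicitly
# to reclaim storage, with a "keep" flag to retain files still needed.
import os
import tempfile


def delete_file(path, keep=False):
    """Remove an intermediate file unless the caller flags it for reuse."""
    if not keep and os.path.exists(path):
        os.remove(path)


def run_pipeline(workdir):
    # Stage 1: produce an intermediate result on local storage.
    intermediate = os.path.join(workdir, "stage1.dat")
    with open(intermediate, "w") as f:
        f.write("partial result")

    # Stage 2: consume the intermediate and produce the final output.
    with open(intermediate) as f:
        data = f.read()
    final = os.path.join(workdir, "final.dat")
    with open(final, "w") as f:
        f.write(data.upper())

    # The intermediate is no longer needed: delete it to reclaim space.
    delete_file(intermediate, keep=False)
    return final, intermediate


with tempfile.TemporaryDirectory() as d:
    final, intermediate = run_pipeline(d)
    print(os.path.exists(intermediate))  # → False (intermediate removed)
    print(open(final).read())            # → PARTIAL RESULT
```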
APA, Harvard, Vancouver, ISO, and other styles
3

Berkowitz, Jacob, Nathan Beane, Kevin Philley, Nia Hurst, and Jacob Jung. An assessment of long-term, multipurpose ecosystem functions and engineering benefits derived from historical dredged sediment beneficial use projects. Engineer Research and Development Center (U.S.), August 2021. http://dx.doi.org/10.21079/11681/41382.

Full text
Abstract:
The beneficial use of dredged materials improves environmental outcomes while maximizing navigation benefits and minimizing costs, in accordance with the principles of the Engineering With Nature® (EWN) initiative. Yet, few studies document the long-term benefits of innovative dredged material management strategies or conduct comprehensive life-cycle analysis because of a combination of (1) short monitoring time frames and (2) the paucity of constructed projects that have reached ecological maturity. In response, we conducted an ecological functional and engineering benefit assessment of six historic (>40 years old) dredged material–supported habitat improvement projects where initial postconstruction beneficial use monitoring data was available. Conditions at natural reference locations were also documented to facilitate a comparison between natural and engineered landscape features. Results indicate the projects examined provide valuable habitat for a variety of species in addition to yielding a number of engineering (for example, shoreline protection) and other (for example, carbon storage) benefits. Our findings also suggest establishment of ecological success criteria should not overemphasize replicating reference conditions but remain focused on achieving specific ecological functions (that is, habitat and biogeochemical cycling) and engineering benefits (that is, storm surge reduction, navigation channel maintenance) achievable through project design and operational management.
APA, Harvard, Vancouver, ISO, and other styles
4

Coulson, Wendy, Tom McGrath, and James McCarthy. PR-312-16202-R03 Methane Emissions from Transmission and Storage Subpart W Sources. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), September 2019. http://dx.doi.org/10.55274/r0011619.

Full text
Abstract:
A 2018 PRCI report evaluated related emissions from compressor seals, isolation valves, and blowdown valves based on direct measurements required by Subpart W of the GHG Reporting Program. This report presents the methane emissions data from 2011 - 2016 for the balance of the Subpart W emission sources, including: facility leaks (other than from compressor isolation valves and blowdown valves), pneumatic controller venting, condensate tank dump valve leakage, and blowdown emissions from stations. Transmission pipeline blowdown emission reporting was added to the EPA regulation in late 2015, and 2016 and 2017 pipeline blowdown data are presented in this report. The objective of the project is to evaluate and analyze the dataset, and compare methane emission estimates from these sources to historical data used by EPA, primarily the emission factors (EFs) from the mid-1990s EPA/Gas Research Institute (GRI) study used by EPA in its annual GHG inventory (GHGi) report. The results and related EFs and analysis of relative contribution from different sources can be used: (1) as an alternative to GHGi EFs for estimating methane emissions for Transmission and Storage (T and S) operations; (2) to document the relative contribution of different T and S methane emission sources; and (3) to identify reductions relative to historical estimates and support more efficient methane mitigation strategies. The Subpart W data for leaks and pneumatic devices are consistently lower than GHGi estimates, and blowdown emissions from compressor stations and transmission pipelines are similar in magnitude to GHGi estimates.
APA, Harvard, Vancouver, ISO, and other styles
5

Lichter, Amnon, Joseph L. Smilanick, Dennis A. Margosan, and Susan Lurie. Ethanol for postharvest decay control of table grapes: application and mode of action. United States Department of Agriculture, July 2005. http://dx.doi.org/10.32747/2005.7587217.bard.

Full text
Abstract:
Original objectives: Dipping of table grapes in ethanol was determined to be an effective measure to control postharvest gray mold infection caused by Botrytis cinerea. Our objectives were to study the effects of ethanol on B. cinerea and table grapes and to conduct research that will facilitate the implementation of this treatment. Background: Botrytis cinerea is known as the major pathogen of table grapes in cold storage. To date, the only commercial technology to control it relied on sulfur dioxide (SO₂), implemented by either fumigation of storage facilities or slow-release generator pads positioned directly over the fruits. This treatment is very effective, but it has several drawbacks, such as aftertaste, bleaching, and hypersensitivity to humans, which took it out of the GRAS list of compounds and warranted a further search for alternatives. Prior to this research, ethanol was shown to control several pathogens in different commodities, including table grapes and B. cinerea. Hence it seemed to be a simple and promising technology which could offer a true alternative for storage of table grapes. Further research was however required to answer some practical and theoretical questions which remained unanswered. Major conclusions, solutions, achievements: In this research project we have shown convincingly that 30% ethanol is sufficient to prevent germination of B. cinerea and kill the spores. In a comparative study it was shown that Alternaria alternata is also rather sensitive, but Rhizopus stolonifer and Aspergillus niger are less sensitive to ethanol. Consequently, ethanol protected the grapes from decay but did not have a significant effect on the occurrence of mycotoxigenic Aspergillus species which are present on the surface of the berry. B. cinerea responded to ethanol or heat treatments by inducing sporulation and transient expression of the heat shock protein HSP104. Similar responses were not detected in grape berries.
It was also shown that application of ethanol to berries did not induce subsequent resistance; in fact, the berries were slightly more susceptible to infection. The heat dose required to kill the spores was determined, and it was proven that a combination of heat and ethanol allowed a reduction of both the ethanol and heat doses. Ethanol and heat did not reduce the amount or appearance of the wax layers, which are an essential component of the external protection of the berry. The ethanol and acetaldehyde content increased after treatment and during storage, but the content was much lower than the natural ethanol content in other fruits. The efficacy of ethanol applied before harvest was similar to that of the biological control agent Metschnikowia fructicola. Finally, the performance of ethanol could be improved synergistically by packaging the bunches in modified-atmosphere films which prevent the accumulation of free water. Implications, both scientific and agricultural: It was shown that the major mode of action of ethanol is mediated by its lethal effect on fungal inoculum. Because ethanol acts mainly on the cell membranes, it was possible to enhance its effect by lowering the concentration and elevating the temperature of the treatment. Another important development was the continuous protection of the treated bunches by modified atmosphere, which can solve the problem of secondary or internal infection. From the practical standpoint, a variety of means were offered to enhance the effect of the treatment and to offer a viable alternative to SO₂ which could be instantly adopted by the industry, with a special benefit to growers of organic grapes.
APA, Harvard, Vancouver, ISO, and other styles
6

Botulinum Neurotoxin-Producing Clostridia, Working Group on. Report on Botulinum Neurotoxin-Producing Clostridia. Food Standards Agency, August 2023. http://dx.doi.org/10.46756/sci.fsa.ozk974.

Full text
Abstract:
In 1992 a working group of the UK Advisory Committee on the Microbiological Safety of Food presented a report on Vacuum Packaging and Associated Processes regarding the microbiological safety of chilled foods. The report supported subsequent guidance provided by the UK Food Standards Agency for the safe manufacture of vacuum packed and modified atmosphere packed chilled foods. In 2021 the ACMSF requested that a new subgroup should update and build on the 1992 report as well as considering, in addition to chilled foods, some foods that are intended to be stored at ambient temperatures. The new subgroup agreed a scope that includes the conditions that support growth and/or neurotoxin formation by C. botulinum, and other clostridia, as well as identification of limiting conditions that provide control. Other foodborne pathogens that need to be considered separately and some foods including raw beef, pork and lamb were explicitly excluded. The subgroup considered the taxonomy, detection, epidemiology, occurrence, growth, survival and risks associated with C. botulinum and other neurotoxin-forming clostridia. There has been no significant change in the nature of foodborne botulism in recent decades except for the identification of rare cases caused by neurotoxigenic C. butyricum, C. baratii and C. sporogenes. Currently evidence indicates that non-clostridia do not pose a risk in relation to foodborne botulism. The subgroup has compiled lists of incidents and outbreaks of botulism, reported in the UK and worldwide, and have reviewed published information concerning growth parameters and control factors in relation to proteolytic C. botulinum, non-proteolytic C. botulinum and the other neurotoxigenic clostridia. The subgroup concluded that the frequency of occurrence of foodborne botulism is very low (very rare but cannot be excluded) with high severity (severe illness: causing life threatening or substantial sequelae or long-term illness). 
Uncertainty associated with the assessment of the frequency of occurrence, and with the assessment of severity, of foodborne botulism is low (solid and complete data; strong evidence in multiple sources). The vast majority of reported botulism outbreaks, for chilled or ambient stored foods, are identified with proteolytic C. botulinum and temperature abuse is the single most common cause. In the last 30 years, in the UK and worldwide where a cause can be identified, there is evidence that known controls, combined with the correct storage, would have prevented the reported incidents of foodborne botulism. The subgroup recommends that foods should continue to be formulated to control C. botulinum, and other botulinum neurotoxin-producing clostridia, in accordance with the known factors. With regard to these controls, the subgroup recommends some changes to the FSA guidelines that reflect improved information about using combinations of controls, the z-value used to establish equivalent thermal processes and the variable efficacy associated with some controls such as herbs and spices. Current information does not facilitate revision of the current reference process, heating at 90°C for 10 minutes, but there is strong evidence that this provides a lethality that exceeds the target 6 order of magnitude reduction in population size that is widely attributed to the process and the subgroup includes a recommendation that the FSA considers this issue. Early detection and connection of cases and rapid, effective coordinated responses to very rare incidents are identified as crucial elements for reducing risks from foodborne botulism. The subgroup recommends that the FSA works closely with other agencies to establish clear and validated preparedness in relation to potential major incidents of foodborne botulism in the UK.
APA, Harvard, Vancouver, ISO, and other styles