Academic literature on the topic 'Write data'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Write data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Write data"

1

Muddiman, Esther, and Lesley Pugsley. "Write and represent qualitative data." Education for Primary Care 27, no. 6 (October 28, 2016): 503–6. http://dx.doi.org/10.1080/14739879.2016.1245590.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Berners-Lee, Tim, and Kieron O’Hara. "The read–write Linked Data Web." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 371, no. 1987 (March 28, 2013): 20120513. http://dx.doi.org/10.1098/rsta.2012.0513.

Abstract:
This paper discusses issues that will affect the future development of the Web, either increasing its power and utility or, alternatively, suppressing its development. It argues for the importance of the continued development of the Linked Data Web, and describes the use of linked open data as an important component of that. The paper also defends the Web as a read–write medium, and goes on to consider how the read–write Linked Data Web could be achieved.
3

Kim, Gyuyeong, and Wonjun Lee. "In-network leaderless replication for distributed data stores." Proceedings of the VLDB Endowment 15, no. 7 (March 2022): 1337–49. http://dx.doi.org/10.14778/3523210.3523213.

Abstract:
Leaderless replication allows any replica to handle any type of request to achieve read scalability and high availability for distributed data stores. However, this entails the burdensome coordination overhead of replication protocols, degrading write throughput. In addition, the data store still requires coordination for membership changes, making it hard to resolve server failures quickly. To this end, we present NetLR, a replicated data store architecture that supports high performance, fault tolerance, and linearizability simultaneously. The key idea of NetLR is moving the entire replication functions into the network by leveraging the switch as an on-path in-network replication orchestrator. Specifically, NetLR performs consistency-aware read scheduling, high-performance write coordination, and active fault adaptation in the network switch. Our in-network replication eliminates inter-replica coordination for writes and membership changes, providing high write performance and fast failure handling. NetLR can be implemented using programmable switches at line rate with only 5.68% additional memory usage. We implement a prototype of NetLR on an Intel Tofino switch and conduct extensive testbed experiments. Our evaluation results show that NetLR is the only solution that achieves high throughput and low latency and is robust to server failures.
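The inter-replica coordination that NetLR moves into the switch can be illustrated with a toy quorum-based leaderless store: any replica serves any request, but every write must reach a write quorum and every read must consult a read quorum. This is a generic Dynamo-style sketch for illustration only, not the paper's protocol; all names are invented.

```python
import random

class QuorumStore:
    """Toy leaderless store: any replica serves any request, but every
    write must be stored by a write quorum W and every read must
    consult a read quorum R, with R + W > N to guarantee overlap."""

    def __init__(self, n_replicas=3):
        self.replicas = [dict() for _ in range(n_replicas)]
        self.n = n_replicas
        self.w = n_replicas // 2 + 1   # write quorum
        self.r = n_replicas // 2 + 1   # read quorum

    def write(self, key, value, version):
        # Coordination overhead: W replicas must acknowledge the write.
        for rep in random.sample(self.replicas, self.w):
            rep[key] = (version, value)

    def read(self, key):
        # Consult R replicas and return the freshest version seen;
        # quorum intersection guarantees the latest write is among them.
        answers = [rep[key] for rep in random.sample(self.replicas, self.r)
                   if key in rep]
        return max(answers)[1] if answers else None

store = QuorumStore()
store.write("x", "hello", version=1)
store.write("x", "world", version=2)
print(store.read("x"))  # "world"
```

Every operation here touches a majority of replicas; eliminating exactly this per-request fan-out is what gives an in-network orchestrator its performance advantage.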
4

Septafi, Gesita. "Analisis Kemampuan Menulis Artikel Ilmiah Mahasiswa Pendidikan Guru Sekolah Dasar Angkatan 2019." Educational Technology Journal 1, no. 2 (October 12, 2021): 1–16. http://dx.doi.org/10.26740/etj.v1n2.p1-16.

Abstract:
Students preparing a final project need to strengthen their ability to write scientific papers; as novice writers, they were given a test requiring them to write individual scientific articles. This study aims to analyze 1) students' ability to organize scientific articles systematically, 2) students' ability to write the content of scientific articles according to that systematic structure, and 3) their ability to use Indonesian spelling in writing scientific articles. The research approach is descriptive qualitative. The data source is PGSD students of the State University of Malang, batch 2019, and the data are the students' written scientific articles. Data were collected through a test and analyzed with the Miles and Huberman model: data reduction, data presentation, and drawing conclusions. First, regarding the ability to organize scientific articles systematically, 30 students (86%) were categorized as able, while the remaining 5 students (14%) were categorized as needing guidance. Second, regarding the ability to write the content of scientific articles, students were categorized as good because more than 75% could do so, while 25% were categorized as needing guidance. Third, regarding the use of Indonesian spelling, errors occur because students do not read scientific papers, so their knowledge of Indonesian spelling is still relatively lacking.
5

Wu, Zhi Hao. "A Log-Structured File System Based on LevelDB." Applied Mechanics and Materials 602-605 (August 2014): 3481–84. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.3481.

Abstract:
Traditional file systems have shortcomings when storing small files, such as random data layout, wasted disk space, and a shortage of inode resources. This thesis presents LevelFS, a log-structured file system based on LevelDB. By using a write buffer, it turns random disk writes of small files into sequential writes and reduces the distance between related data, improving the read and write performance of the file system. Experiments show that LevelFS can greatly improve the read and write performance of small files without affecting large ones.
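The write-buffer idea can be sketched in a few lines: small writes are staged in memory and flushed to an append-only log in one sequential batch, so many scattered writes become one contiguous one. This is an illustrative toy, not LevelFS's actual implementation; class and method names are invented.

```python
class WriteBufferedLog:
    """Toy log-structured store: small random writes are staged in a
    memory buffer and flushed to the log in one sequential batch,
    so related entries also land close together on 'disk'."""

    def __init__(self, flush_threshold=4):
        self.buffer = {}          # path -> bytes, staged writes
        self.log = []             # the append-only "disk" log
        self.index = {}           # path -> offset of the latest copy
        self.flush_threshold = flush_threshold

    def write(self, path, data):
        self.buffer[path] = data
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # One sequential append instead of many random writes.
        for path, data in sorted(self.buffer.items()):
            self.index[path] = len(self.log)
            self.log.append((path, data))
        self.buffer.clear()

    def read(self, path):
        if path in self.buffer:            # not yet flushed
            return self.buffer[path]
        return self.log[self.index[path]][1]

fs = WriteBufferedLog()
for i in range(4):
    fs.write(f"/tmp/small{i}", b"x" * i)
print(fs.read("/tmp/small2"))  # b'xx'
```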
6

Suminar, Ratna Prasasti, and Giska Putri. "The Effectiveness of TTW (Think-Talk-Write) Strategy in Teaching Writing Descriptive Text." Academic Journal Perspective : Education, Language, and Literature 2, no. 2 (November 14, 2018): 300. http://dx.doi.org/10.33603/perspective.v2i2.1666.

Abstract:
This research is entitled "The Effectiveness of TTW (Think-Talk-Write) Strategy in Teaching Writing Descriptive Text." TTW is a cooperative-learning strategy for the teaching-learning process. It starts from students' engagement in thinking, or dialogue with themselves, after reading; they then talk and share ideas with friends before writing. In groups of 4–6, students make notes, explain, listen, and share ideas with friends, and then express them through writing. The research problem is to find out the effectiveness of the TTW (Think-Talk-Write) strategy in teaching the writing of descriptive text. The population of the research is the second-grade students of UNSWAGATI CIREBON. The writer took two classes of second-grade students as the sample, divided into two groups: an experimental group (7AB) and a control group (7CD). The instruments for collecting data were a pre-test and a post-test of writing. Using a quasi-experimental design, the writer gave writing tests to gather the data, and the t-test was used to determine whether there was a significant difference between the students' scores in the experimental and control groups.
7

Amiladini, Rahmi, Lisa Tavriyanti, and Yandri Yandri. "An analysis of the third year English students' ability of FKIP Bung Hatta University to write a cause and effect essay." International Journal of Educational Dynamics 2, no. 1 (January 17, 2020): 124–33. http://dx.doi.org/10.24036/ijeds.v2i1.240.

Abstract:
This research describes the ability of the third-year English students of FKIP Bung Hatta University to write a cause and effect essay. The design of the research was descriptive. The population was 122 students. The writer used cluster random sampling to determine the sample, since the students were divided into four classes (A, B, C, and D); class B, with 20 students, was chosen as the sample of this research. The writer used an essay-writing test to collect data. In general, the results of the data analysis showed that the ability of the third-year English students of FKIP Bung Hatta University to write a cause and effect essay was moderate: 20% of the students had high ability, 70% had moderate ability, and 10% had low ability. Based on these results, the writer suggests that teachers give more knowledge, explanation, and practice to help students improve their ability to write cause and effect essays, and that students do a lot of practice for the same reason.
8

Ma, Jian Hui, Zhi Xue Wang, Gang Wang, Yuan Yang Liu, and Yan Qiang Li. "A Research and Implement of Data Storage and Management Method Based on the Embedded MCU Data Flash." Advanced Materials Research 756-759 (September 2013): 1984–88. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.1984.

Abstract:
This paper designs and implements a method for non-volatile data storage using the MCU's internal data Flash. A data-Flash sector is divided into multiple partitions, each storing a copy of the data from a different point in time, with the current partition holding the latest copy. On a read, the Flash location of the latest data copy is calculated first, and that address is then read directly. On a write, the method first checks whether the target position is already erased: if not, the data are written into the next partition, and the other data in the current partition are copied over to it; if the position has been erased, the data are written directly into the current partition. This method behaves much like EEPROM reads and writes, is easy to operate, provides a simple application interface, and avoids sector-erase operations, improving storage efficiency and extending the service life of the MCU's internal data Flash.
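The partition-rotation scheme described in the abstract can be sketched roughly like this. Flash can only be rewritten after an erase, so an occupied slot cannot be overwritten in place; instead the whole copy migrates to the next partition. A simplified model, not the paper's code; names and sizes are invented.

```python
ERASED = None

class EmulatedEeprom:
    """Toy model of the scheme: one Flash sector split into partitions,
    each holding one historical copy of the data. The current partition
    always holds the latest copy."""

    def __init__(self, n_partitions=4, n_slots=4):
        self.partitions = [[ERASED] * n_slots for _ in range(n_partitions)]
        self.current = 0              # partition holding the latest copy
        self.n = n_partitions

    def read(self, slot):
        # The latest copy always lives in the current partition.
        return self.partitions[self.current][slot]

    def write(self, slot, value):
        part = self.partitions[self.current]
        if part[slot] is ERASED:
            part[slot] = value        # slot still erased: write in place
            return
        # Slot occupied: write into the next partition and carry the
        # other slots along, avoiding a sector erase on the hot path.
        nxt = (self.current + 1) % self.n
        self.partitions[nxt] = part.copy()
        self.partitions[nxt][slot] = value
        # Clear the old partition (a real device could defer this).
        self.partitions[self.current] = [ERASED] * len(part)
        self.current = nxt

ee = EmulatedEeprom()
ee.write(0, 0xAA)
ee.write(0, 0xBB)       # same slot again: the copy migrates
print(hex(ee.read(0)))  # 0xbb
```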
9

Jahan, Mosarrat, Mohsen Rezvani, Qianrui Zhao, Partha Sarathi Roy, Kouichi Sakurai, Aruna Seneviratne, and Sanjay Jha. "Light Weight Write Mechanism for Cloud Data." IEEE Transactions on Parallel and Distributed Systems 29, no. 5 (May 1, 2018): 1131–46. http://dx.doi.org/10.1109/tpds.2017.2782253.

10

De Capitani di Vimercati, Sabrina, Sara Foresti, Sushil Jajodia, Giovanni Livraga, Stefano Paraboschi, and Pierangela Samarati. "Enforcing dynamic write privileges in data outsourcing." Computers & Security 39 (November 2013): 47–63. http://dx.doi.org/10.1016/j.cose.2013.01.008.


Dissertations / Theses on the topic "Write data"

1

Walter, Sarah. "Parallel read/write system for optical data storage." Diss., Connect to online resource, 2005. http://wwwlib.umi.com/cr/colorado/fullcit?p1425767.

2

Ibanez, Luis Daniel. "Towards a read/write web of linked data." Nantes, 2015. http://archive.bu.univ-nantes.fr/pollux/show.action?id=9089939a-874b-44e1-a049-86a4c5c5d0e6.

Abstract:
The Linked Data initiative has made millions of pieces of data available for querying through a federation of autonomous participants. However, the Web of Linked Data suffers from problems of data heterogeneity and quality. We cast the integration of heterogeneous data sources as a Local-as-View (LAV) mediation problem; unfortunately, LAV may require the execution of a number of "rewritings" exponential in the number of query subgoals. We propose the Graph-Union (GUN) strategy to maximise the results obtained from a subset of rewritings. Compared to traditional rewriting-execution strategies, GUN improves execution time and the number of results obtained, in exchange for higher memory consumption. Once data can be queried, data consumers can detect quality issues, but to resolve them they need to write to the sources' data, i.e., to evolve Linked Data from read-only to read-write. However, writing among autonomous participants raises consistency issues. We model read-write Linked Data as a social network where actors copy the data they are interested in, update it, and publish updates to exchange with others. We propose two algorithms for update exchange: SU-Set, which achieves Strong Eventual Consistency (SEC), and Col-Graph, which achieves Fragment Consistency, stronger than SEC. We analyze the worst- and best-case complexities of both algorithms and experimentally estimate the average complexity of Col-Graph; results suggest that it is feasible for social-network topologies.
3

Horne, Ross J. "Programming languages and principles for read-write linked data." Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/210899/.

Abstract:
This work addresses a gap in the foundations of computer science. In particular, only a limited number of models address design decisions in modern Web architectures. The development of the modern Web architecture tends to be guided by the intuition of engineers. The intuition of an engineer is probably more powerful than any model; however, models are important tools to aid principled design decisions. No model is sufficiently strong to provide absolute certainty of correctness; however, an architecture accompanied by a model is stronger than an architecture accompanied solely by intuition led by the personal, hence subjective, ego. The Web of Data describes an architecture characterised by key W3C standards, including a semi-structured data format, an entailment mechanism and a query language. Recently, prominent figures have drawn attention to the necessity of update languages for the Web of Data, coining the notion of Read–Write Linked Data. A dynamic Web of Data with updates is a more realistic reflection of the Web. An established and versatile approach to modelling dynamic languages is to define an operational semantics, and this work provides such an operational semantics for a Read–Write Linked Data architecture. Furthermore, the model is sufficiently general to capture the established standards, including queries and entailments. Each feature is relatively easily modelled in isolation; a model that checks that the key standards work together is a greater challenge, to which operational semantics are suited. The model validates most features of the standards while raising some serious questions. Beyond evaluating W3C standards, the operational semantics provides a foundation for static analysis. One approach is to derive an algebra for the model. The algebra is proven to be sound with respect to the operational semantics; soundness ensures that the algebraic rules preserve operational behaviour.
If the algebra establishes that two updates are equivalent, then they have the same operational capabilities. This is useful for optimisation, since the real cost of executing the updates may differ, despite their equivalent expressive powers. A notion of operational refinement is discussed, which allows a non-deterministic update to be refined to a more deterministic update. Another approach to the static analysis of Read–Write Linked Data is through a type system. The simplest type system for this application simply checks that well understood terms which appear in the semi-structured data, such as numbers and strings of characters, are used correctly. Static analysis then verifies that basic runtime errors in a well typed program do not occur. Type systems for URIs are also investigated, inspired by W3C standards. Type systems for URIs are controversial, since URIs have no internal structure thus have no obvious non-trivial types. Thus a flexible type system which accommodates several approaches to typing URIs is proposed.
4

Wang, Frank Zhigang. "Advanced magnetic thin-film heads under read-while-write operation." Thesis, University of Plymouth, 1999. http://hdl.handle.net/10026.1/2353.

Abstract:
A Read-While-Write (RWW) operation for tape, and potentially disk, applications is needed in three cases: 1. high reliability; 2. data servo systems; 3. buried servo systems. All these applications require the read (servo) head and write head to operate simultaneously. Consequently, RWW operation requires work to suppress the so-called crossfeed field radiated from the write head. Traditionally, write-read crossfeed has been reduced in conventional magnetic recording heads by a variety of screening methods, but the effectiveness of these methods is very limited. On the other hand, early theoretical investigations of the crossfeed problem, concentrating on the flux line pattern in front of a head structure based on a simplified model, may not be comprehensive. Today a growing number of magnetic recording equipment manufacturers employ thin-film technology to fabricate heads, and the modern head is therefore much smaller than in the past. The increasing use of thin-film metallic magnetic materials for heads, along with the appearance of other new technologies such as the MR reproduce mode and keepered media, has stimulated the need for an increased understanding of the crossfeed problem through advanced analysis methods, and for a satisfactory practical solution to achieve RWW operation. The work described in this thesis to suppress the crossfeed field involves both a novel reproduce mode of a Dual Magnetoresistive (DMR) head, originally designed to gain a large reproduce sensitivity at high linear recording densities exceeding 100 kFCI, which plays the key role in suppressing crossfeed (with a corresponding signal-to-noise ratio over 38 dB), and several other compensation schemes giving further suppression.
Advanced analytical and numerical methods of estimating crossfeed in single and multi track thin-film/MR heads under both DC and AC excitations can often help a head designer understand how the crossfeed field spreads and therefore how to suppress the crossfeed field from the standpoint of an overall head configuration. This work also assesses the scale of the crossfeed problem by making measurements on current and improved heads, thereby adapting the main contributors to crossfeed. The relevance of this work to the computer industry is clear for achieving simultaneous operation of the read head and write head, especially in a thin-film head assembly. This is because computer data rates must increase to meet the demands of storing more and more information in less time as computer graphics packages become more sophisticated.
5

Bai, Daniel Zhigang. "Micromagnetic Modeling of Write Heads for High-Density and High-Data-Rate Perpendicular Recording." Research Showcase @ CMU, 2004. http://repository.cmu.edu/dissertations/922.

Abstract:
In this dissertation, three-dimensional dynamic micromagnetic modeling based on the Landau-Lifshitz equation with Gilbert damping has been used to study the magnetic processes of thin-film write heads for high-density and high-data-rate perpendicular magnetic recording. In the extremely narrow track width regime, for example around or below 100 nm, the head field is found to suffer significant loss from the ideal 4πMs value for perpendicular recording. At the same time, the remanent head field becomes significant, posing a potential issue of head remanence erasure. Using micromagnetic modeling, various novel head designs have been investigated. For an overall head dimension around one micron, the shape and structure of the head yoke have been found to greatly affect the head's magnetization reversal performance, and therefore the field rise time, especially for moderate driving currents. Laminating the head across its thickness, both in the yoke and in the pole tip, yields excellent field reversal speed; more importantly, it suppresses the remanent field very well, making it a simple and effective approach to robust near-zero remanence. A single-pole head design with a stitched pole tip and a recessed side yoke can produce a significantly enhanced head field compared to a traditional single-pole head. Various head design parameters have been examined via micromagnetic modeling. Using the dynamic micromagnetic model, magnetization reversal processes at data rates beyond 1 Gbit/s have been studied. The excitation of spin waves during the head field reversal, and the energy dissipation afterwards, were found to be important in dictating the field rise time. Both the drive current rise time and the Gilbert damping constant affect the field reversal speed. The effect of the soft underlayer (SUL) in both the write and read processes has been studied via micromagnetic modeling.
Although it is relatively easy to fulfill the requirement for the magnetic imaging in writing, the SUL deteriorates the readback performance and lowers the achievable recording linear density. Various parameters have been investigated and solutions have been proposed. The effect of stress in magnetostrictive thin films has been studied both analytically and by simulation. The micromagnetic model has been extended to incorporate the stress-induced anisotropy effect. Simulation was done on both a magnetic thin film undergoing stresses to show the static domains and a conceptual write head design that utilizes the stress induced anisotropy to achieve better performance. A self-consistent model based on energy minimization has been developed to model both the magnetization and the stress-strain states of a magnetic thin film.
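The dynamic model named in this abstract solves the Landau-Lifshitz equation with Gilbert damping. In its standard textbook (Gilbert) form, for magnetization M with saturation Ms, gyromagnetic ratio γ, and damping constant α, the equation of motion is:

```latex
\frac{d\mathbf{M}}{dt}
  = -\gamma \,\mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
  \;+\; \frac{\alpha}{M_s}\,\mathbf{M} \times \frac{d\mathbf{M}}{dt}
```

Here H_eff is the effective field collecting exchange, anisotropy, magnetostatic, and applied-field contributions; this is the conventional form of the equation, not an expression copied from the dissertation itself.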
6

Söderbäck, Karl. "Organizing HLA data for improved navigation and searchability." Thesis, Linköpings universitet, Databas och informationsteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176029.

Abstract:
Pitch Technologies specializes in the HLA standard, a standard that specifies data exchange between simulators. The company provides a solution for recording HLA data into a database as raw byte-data entries. In this thesis, different design solutions for storing and organizing recorded HLA data in a manner that reflects the content of the data are proposed and implemented, with the aim of making the data possible to query and analyze after recording. The design solutions' impact on storage, read and write performance, and usability is evaluated through a suite of tests run on a PostgreSQL database and a TimescaleDB database. It is concluded that none of the design alternatives is the best solution in every respect, but the most promising combination is proposed.
7

Kalezhi, Josephat. "Modelling data storage in nano-island magnetic materials." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/modelling-data-storage-in-nanoisland-magnetic-materials(9b449925-1a39-4711-8d55-82e6d8ac215c).html.

Abstract:
Data storage in current hard disk drives is limited by three factors: the thermal stability of recorded data, the ability to store data, and the ability to read back the stored data. An attempt to alleviate one factor can affect the others, which ultimately limits the magnetic recording densities achievable with traditional forms of data storage. In order to advance magnetic recording and postpone these inhibiting factors, new approaches are required. One approach is recording on Bit Patterned Media (BPM), where the medium is patterned into nanometer-sized magnetic islands, each storing a binary digit. This thesis presents a statistical model of write errors in BPM composed of single-domain islands. The model includes thermal activation in a calculation of write errors without resorting to time-consuming micromagnetic simulations of huge populations of islands, and it incorporates distributions of the position, magnetic, and geometric properties of islands. In order to study the impact of island geometry variations on the recording performance of BPM systems, the magnetometric demagnetising factors for a truncated elliptic cone, a generalised geometry that reasonably describes most proposed island shapes, were derived analytically. The inclusion of thermal activation was enabled by an analytic derivation of the energy barrier for a single-domain island; the energy barrier is used to calculate transition rates, which in turn enable the calculation of error rates. The model has been used to study the write-error performance of BPM systems having distributions of position, geometric, and magnetic property variations. Results showed that island intrinsic anisotropy and position variations have a larger impact on write-error performance than geometric variations. The model was also used to study thermally activated Adjacent Track Erasure (ATE) for a specific write head.
The write head had a rectangular main pole of 13 by 40 nm (cross-track x down-track) with pole trailing shield gap of 5 nm and pole side shield gap of 10 nm. The distance from the pole to the top surface of the medium was 5 nm, the medium was 10 nm thick and there was a 2 nm interlayer between the soft underlayer (SUL) and the medium, making a total SUL to pole spacing of 17 nm. The results showed that ATE would be a major problem and that cross-track head field gradients need to be more tightly controlled than down-track. With the write head used, recording at 1 Tb/in² would be possible on single domain islands.
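The thermally activated transitions underlying this model are conventionally described by a Néel-Arrhenius rate. In the standard form (a textbook expression, not necessarily the exact one derived in the thesis), with an approximate Stoner-Wohlfarth-style barrier for a field H applied against an island of anisotropy energy KuV and switching field HK:

```latex
r = f_0 \exp\!\left(-\frac{\Delta E}{k_B T}\right),
\qquad
\Delta E \approx K_u V \left(1 - \frac{H}{H_K}\right)^{2}
```

Here f0 is an attempt frequency (commonly taken in the 10^9–10^11 Hz range); write- and erasure-error probabilities follow by integrating such rates over the duration of the write process.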
8

Amur, Hrishikesh. "Storage and aggregation for fast analytics systems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50397.

Abstract:
Computing in the last decade has been characterized by the rise of data-intensive scalable computing (DISC) systems. In particular, recent years have witnessed a rapid growth in the popularity of fast analytics systems. These systems exemplify a trend where queries that previously involved batch-processing (e.g., running a MapReduce job) on a massive amount of data are increasingly expected to be answered in near real-time with low latency. This dissertation addresses the problem that existing designs for various components used in the software stack for DISC systems do not meet the requirements demanded by fast analytics applications. In this work, we focus specifically on two components: 1. Key-value storage: Recent work has focused primarily on supporting reads with high throughput and low latency. However, fast analytics applications require that new data entering the system (e.g., newly crawled web pages, currently trending topics) be quickly made available to queries and analysis codes. This means that along with supporting reads efficiently, these systems must also support writes with high throughput, which current systems fail to do. In the first part of this work, we solve this problem by proposing a new key-value storage system, called the WriteBuffer (WB) Tree, that provides up to 30× higher write performance and similar read performance compared to current high-performance systems. 2. GroupBy-Aggregate: Fast analytics systems require support for fast, incremental aggregation of data with low-latency access to results. Existing techniques are memory-inefficient and do not support incremental aggregation efficiently when aggregate data overflows to disk. In the second part of this dissertation, we propose a new data structure called the Compressed Buffer Tree (CBT) to implement memory-efficient in-memory aggregation. We also show how the WB Tree can be modified to support efficient disk-based aggregation.
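The incremental GroupBy-Aggregate requirement can be illustrated with a minimal in-memory aggregator that folds each new record into a running per-group result, so results are queryable at any time. This is far simpler than the CBT, which additionally compresses buffered records and handles overflow to disk; all names here are invented.

```python
class IncrementalAggregator:
    """Toy in-memory groupby-aggregate: every insert updates a running
    aggregate immediately, giving low-latency access to results."""

    def __init__(self, combine):
        self.combine = combine          # associative merge function
        self.groups = {}                # group key -> running aggregate

    def insert(self, key, value):
        # Fold the new value into the group's running aggregate.
        if key in self.groups:
            self.groups[key] = self.combine(self.groups[key], value)
        else:
            self.groups[key] = value

    def query(self, key):
        # Results are available at any point, not only after a batch job.
        return self.groups.get(key)

agg = IncrementalAggregator(combine=lambda a, b: a + b)
for page, hits in [("home", 3), ("about", 1), ("home", 2)]:
    agg.insert(page, hits)
print(agg.query("home"))  # 5
```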
9

Vysocký, Ondřej. "Optimalizace distribuovaného I/O subsystému projektu k-Wave" [Optimization of the distributed I/O subsystem of the k-Wave project]. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255408.

Abstract:
This thesis deals with an effective solution for the parallel I/O of the k-Wave tool, which is designed for time-domain acoustic and ultrasound simulations. k-Wave is a supercomputer application: it runs on a Lustre file system, is implemented with MPI, and stores its data in a suitable data format (HDF5). I designed three optimization methods, based on accumulation and redistribution techniques, that fit k-Wave's needs. Compared with the native write, every optimization method improved write speed, reaching up to 13.6 GB/s. These methods can be used to optimize any distributed application with a write-speed bottleneck.
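The accumulation-and-redistribution idea can be sketched in a few lines: ranks first concatenate their small per-step chunks locally, then the data are re-partitioned so that a few writer ranks each emit one large contiguous block instead of many small writes. A scheme sketch with invented names, not k-Wave's implementation (which uses MPI and HDF5):

```python
def accumulate_and_redistribute(rank_chunks, n_writers):
    """Toy two-phase write: accumulate each rank's small chunks, then
    redistribute the stream so each writer gets one contiguous block."""
    # Phase 1: accumulation -- concatenate each rank's small chunks.
    accumulated = [b"".join(chunks) for chunks in rank_chunks]
    # Phase 2: redistribution -- hand contiguous slices to the writers.
    stream = b"".join(accumulated)
    block = -(-len(stream) // n_writers)          # ceiling division
    return [stream[i * block:(i + 1) * block] for i in range(n_writers)]

# Three ranks each produced two small chunks; two ranks do the writing.
chunks = [[b"a0", b"a1"], [b"b0", b"b1"], [b"c0", b"c1"]]
blocks = accumulate_and_redistribute(chunks, n_writers=2)
print(blocks)  # [b'a0a1b0', b'b1c0c1']
```

Each writer now issues a single large sequential write, which is exactly the access pattern parallel file systems such as Lustre reward.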
10

Hernane, Soumeya-Leila. "Modèles et algorithmes de partage de données cohérents pour le calcul parallèle distribué à haut débit" [Coherent data-sharing models and algorithms for high-throughput distributed parallel computing]. Thesis, Université de Lorraine, 2013. http://www.theses.fr/2013LORR0042/document.

Abstract:
Data Handover is a library of functions adapted to large-scale distributed systems. It provides routines that allow acquiring resources in reading or writing in the ways that are coherent and transparent for users. We modelled the life cycle of Dho by a finite state automaton and through experiments; we have found that our approach produced an overlap between the calculation of the application and the control of the data. These experiments were conducted both in simulated mode and in real environment (Grid'5000). We exploited the GRAS library of the SimGrid toolkit. Several clients try to access the resource concurrently according the client-server paradigm. By the theory of queues, the stability of the model was demonstrated in a centralized environment. We improved, the distributed algorithm for mutual exclusion (of Naimi and Trehel), by introducing following features: (1) Allowing the mobility of processes (ADEMLE), (2) introducing shared locks (AEMLEP) and finally (3) merging both properties cited above into an algorithm summarising (ADEMLEP). We proved the properties, safety and liveliness, theoretically for all extended algorithms. The proposed peer-to-peer system combines our extended algorithms and original Data Handover model. Lock and resource managers operate and interact each other in an architecture based on three levels. Following the experimental study of the underlying system on Grid'5000, and the results obtained, we have proved the performance and stability of the model Dho over a multitude of parameters
APA, Harvard, Vancouver, ISO, and other styles
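The shared-versus-exclusive lock semantics that the AEMLEP extension adds can be illustrated with a minimal readers-writer lock in Python. This is a single-process sketch only; the algorithms in the thesis are distributed and token-based, and the class below is a hypothetical illustration of the semantics, not their protocol:

```python
import threading

class SharedExclusiveLock:
    """Minimal readers-writer lock: many concurrent readers, or one
    exclusive writer. Single-process illustration of the shared-lock
    semantics that AEMLEP adds to mutual exclusion."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_shared(self):
        with self._cond:
            while self._writer:          # readers wait out any writer
                self._cond.wait()
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()  # last reader wakes writers

    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()        # writer needs the lock alone
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

# Usage: readers observe a consistent value after a writer updates it.
lock = SharedExclusiveLock()
value = [0]

def reader(out):
    lock.acquire_shared()
    out.append(value[0])
    lock.release_shared()

lock.acquire_exclusive()
value[0] = 42
lock.release_exclusive()

results = []
threads = [threading.Thread(target=reader, args=(results,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

The distributed versions in the thesis achieve the same invariant (many concurrent readers, exclusive writers) by circulating a token among processes rather than by sharing a condition variable.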

Books on the topic "Write data"

1

How to write tutorial documentation. Englewood Cliffs, N.J: Prentice-Hall, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ramsay, James W. Write your own user guide. San Jose, California: Peer-to-Peer Communications, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

How to write a really good user's manual. New York: Van Nostrand Reinhold, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

J, Nohynek Gerhard, Copping Graham, and Wells Monique Y, eds. Presenting toxicology results: How to evaluate data and write reports. London: Taylor & Francis, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wilson, Wendy. Print out: Using the computer to write. Toronto: Harcourt Brace & Co. Canada, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Smith, Jerry C., ed. Write away!: Research paper development using Microsoft Word 97. Upper Saddle River, NJ: Prentice Hall, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kellerman, Stewart, ed. You send me: Getting it right when you write online. New York: Harcourt, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Murach, Mike. Write better with a PC: A publisher's guide to business and technical writing. Fresno, Calif: M. Murach & Associates, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Egan, Michael. Write now!: Total quality writing in the age of computers. Champaign, Ill: Stipes Pub., 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Grimm, Susan J. How to write computer documentation for users. 2nd ed. New York: Van Nostrand Reinhold, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Write data"

1

Honda, Naoki, and Kiyoshi Yamakawa. "Write Heads: Fundamentals." In Developments in Data Storage, 78–96. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2011. http://dx.doi.org/10.1002/9781118096833.ch5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gerbing, David W. "Read and Write Data." In R Data Analysis without Programming, 21–40. 2nd ed. New York: Routledge, 2023. http://dx.doi.org/10.4324/9781003278412-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hanna, Michael. "Data Preparation." In How to Write Better Medical Papers, 63–64. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-02955-5_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sippu, Seppo, and Eljas Soisalon-Soininen. "Processing of Write-Intensive Transactions." In Data-Centric Systems and Applications, 351–69. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12292-2_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hanna, Michael. "Figures: Data Graphs." In How to Write Better Medical Papers, 97–107. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-02955-5_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Livraga, Giovanni. "Enforcing Dynamic Read and Write Privileges." In Protecting Privacy in Data Release, 139–82. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16109-9_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hanna, Michael. "Ethics of Data Analysis." In How to Write Better Medical Papers, 57–61. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-02955-5_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Measey, John. "Data Management." In How to Write a PhD in Biological Sciences, 169–72. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003212560-33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Graefe, Goetz, Wey Guy, and Caetano Sauer. "File Systems and Data Files." In Instant Recovery with Write-Ahead Logging, 95–99. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-031-01857-2_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Young, Suzanne. "Collecting Primary Data." In How to Write Your Undergraduate Dissertation in Criminology, 75–87. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003016335-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Write data"

1

Minemura, H., H. Shirai, and R. Tamura. "Study on 200 Mbps High Speed Write/Read using a Phase-Change Write-Once Disk." In Optical Data Storage. Washington, D.C.: OSA, 2003. http://dx.doi.org/10.1364/ods.2003.ma3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gotoh, Hironori, H. Kobayashi, and Kiichi Ueyanagi. "Write-once recording by phase separation." In Optical Data Storage, edited by Donald B. Carlin and David B. Kay. SPIE, 1990. http://dx.doi.org/10.1117/12.22010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dulude, Jeffrey R. "Software Transparency and Write Once Optical Disk Drives." In Optical Data Storage. Washington, D.C.: Optica Publishing Group, 1987. http://dx.doi.org/10.1364/ods.1987.fb3.

Full text
Abstract:
This paper discusses the three levels of software transparency achievable with write once optical disk drives. First, the unusual nature of write once optical drives is addressed to demonstrate why software is needed that is different than software used with magnetic disk drives. This is followed by a framework for viewing different solutions to write-once software issues. Finally, this framework is used to examine the three levels of software transparency achievable with write-once drives.
APA, Harvard, Vancouver, ISO, and other styles
4

Tongeren, H. v., and M. Sens. "Write-Once Phase-Change Recording in GaSb." In Optical Data Storage. Washington, D.C.: Optica Publishing Group, 1987. http://dx.doi.org/10.1364/ods.1987.wc3.

Full text
Abstract:
This paper deals with phase-change recording of EFM-modulated data according to the standard for the signals of the Compact Disc system. Application of GaSb sensitive layers allows a simple single-sided disc construction. Recording is based on locally transformation of the as-deposited amorphous state to the crystalline state.
APA, Harvard, Vancouver, ISO, and other styles
5

Bender, Michael A., Martín Farach-Colton, Rob Johnson, Simon Mauras, Tyler Mayer, Cynthia A. Phillips, and Helen Xu. "Write-Optimized Skip Lists." In SIGMOD/PODS'17: International Conference on Management of Data. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3034786.3056117.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Minemura, Hiroyuki, Koichi Watanabe, Kazuyoshi Adachi, and Reiji Tamura. "High-Speed Write/Read Techniques for a Blu-Ray Write-Once Disc." In International Symposium on Optical Memory and Optical Data Storage. Washington, D.C.: OSA, 2005. http://dx.doi.org/10.1364/isom_ods.2005.thb1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Perrier, R., R. Anciant, M. F. Armand, and Y. Lee. "Dual-level inorganic write-once blu-ray disc." In Optical Data Storage. Washington, D.C.: OSA, 2003. http://dx.doi.org/10.1364/ods.2003.tue1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Williams, David P., and Mark R. Burnside. "Subsystem Integration of Write-Once Optical Storage Products." In Optical Data Storage. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/ods.1989.tub1.

Full text
Abstract:
Today's buyer of 5-1/4 inch Write-Once optical storage products is presented with greater choice and variety of product than ever before. The technology now offers capacities from 200 megabytes to 1.2 gigabytes, with various access times and data transfer rates. Additional issues such as external or internal mount, form factor conformance, capability of software/firmware supplied with the unit, and operating system support make the choice difficult at best. In all cases, low cost-per-megabyte figures, high capacity, and removable media are the attributes that make optical drives ideal for data intensive applications.
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, Bing-Mau, and Ru-Lin Yeh. "Blue laser inorganic write-once media." In Optical Data Storage Topical Meeting 2004, edited by B. V. K. Vijaya Kumar and Hiromichi Kobori. SPIE, 2004. http://dx.doi.org/10.1117/12.556713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lange, Gerald R. "Practical Specifications for Characterizing Write/Read Performance of Optical Disk Media." In Optical Data Storage. Washington, D.C.: Optica Publishing Group, 1987. http://dx.doi.org/10.1364/ods.1987.wc4.

Full text
Abstract:
The measurement of write/read characteristics of the media is a common requirement of all media manufacturers. This paper describes the methods used at Eastman Kodak Company in the characterization of the 14" write once media.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Write data"

1

Galland, Martin. COMTOG Report on ‘Bury Me My Love’. European Center for Populism Studies (ECPS), April 2023. http://dx.doi.org/10.55271/rp0036.

Full text
Abstract:
Bury Me My Love is a game about distance. It places front and center the relationships between humans, how they interact, and what drives people to take a leap into the unknown and risk their lives in the hope of reaching safety. The eponymous phrase, 'Bury Me My Love,' is an Arabic expression of care, roughly meaning "don't think about dying before I do." The game is inspired by, but does not tell, the real-life story of Dana, a Syrian woman who left her country in September 2015. Both the journalist who wrote the article on Dana's story and Dana herself worked as part of the game's editorial team.
APA, Harvard, Vancouver, ISO, and other styles
2

Burniske, Jessica, and Naz Modirzadeh. Pilot Empirical Survey Study on the Impact of Counterterrorism Measures on Humanitarian Action & Comment on the Study. Harvard Law School Program on International Law and Armed Conflict, March 2017. http://dx.doi.org/10.54813/kecj6355.

Full text
Abstract:
To help determine the measurable impact of counterterrorism laws on humanitarian action, the Counterterrorism and Humanitarian Engagement (CHE) Project at the Harvard Law School Program on International Law and Armed Conflict collected data from humanitarian actors demonstrating the impact (or lack thereof) of counterterrorism laws and regulations on humanitarian organizations and their work. The Pilot Empirical Survey Study on the Impact of Counterterrorism Measures on Humanitarian Action (by Jessica S. Burniske and Naz K. Modirzadeh, March 2017) captures the resulting initial attempt at a pilot empirical study in this domain. Modirzadeh wrote a Comment on the Study (March 2017). That Comment raises considerations for states and donors, for humanitarian organizations, and for researchers.
APA, Harvard, Vancouver, ISO, and other styles
3

bin Ahsan, Wahid, Md Tanvir Hasan, Danilson Placid Purification, Nilim Ahsan, Naima Haque Numa, and Mostain Billa Tusar. Challenges and Opportunities in Bangladesh’s Content Writing Industry: A Qualitative Exploration. Userhub, July 2023. http://dx.doi.org/10.58947/ghkp-lxdn.

Full text
Abstract:
This research provides a qualitative, in-depth exploration of the content writing industry in Bangladesh, identifying prevalent trends, challenges, and growth potential. The study utilizes data from 44 participants, which include content writers and clients with diverse levels of industry experience, collected via online surveys and detailed interviews. Key findings suggest that while the industry is marked by a high demand for unique, engaging, and SEO-optimized content, issues pertaining to AI’s role, market saturation, and remuneration concerns persist. Despite these challenges, strategies for success emerged, such as continuous learning, effective client-writer communication, and strategic use of AI and social media tools. The study highlights the industry’s considerable potential for growth and recommends enhancing skill sets, promoting clear communication, creating unique value propositions, and encouraging supportive industry-wide policies. The study also signals future research directions, including the exploration of AI’s impact, pay practices, professional development programs, and the differential roles of social media platforms.
APA, Harvard, Vancouver, ISO, and other styles
4

Ramm-Granberg, Tynan, F. Rocchio, Catharine Copass, Rachel Brunner, and Eric Nelsen. Revised vegetation classification for Mount Rainier, North Cascades, and Olympic national parks: Project summary report. National Park Service, February 2021. http://dx.doi.org/10.36967/nrr-2284511.

Full text
Abstract:
Field crews recently collected more than 10 years of classification and mapping data in support of the North Coast and Cascades Inventory and Monitoring Network (NCCN) vegetation maps of Mount Rainier (MORA), Olympic (OLYM), and North Cascades (NOCA) National Parks. Synthesis and analysis of these 6000+ plots by Washington Natural Heritage Program (WNHP) and Institute for Natural Resources (INR) staff built on the foundation provided by the earlier classification work of Crawford et al. (2009). These analyses provided support for most of the provisional plant associations in Crawford et al. (2009), while also revealing previously undescribed vegetation types that were not represented in the United States National Vegetation Classification (USNVC). Both provisional and undescribed types have since been submitted to the USNVC by WNHP staff through a peer-reviewed process. NCCN plots were combined with statewide forest and wetland plot data from the US Forest Service (USFS) and other sources to create a comprehensive data set for Washington. Analyses incorporated Cluster Analysis, Nonmetric Multidimensional Scaling (NMS), Multi-Response Permutation Procedure (MRPP), and Indicator Species Analysis (ISA) to identify, vet, and describe USNVC group, alliance, and association distinctions. The resulting revised classification contains 321 plant associations in 99 alliances. A total of 54 upland associations were moved through the peer review process and are now part of the USNVC. Of those, 45 were provisional or preliminary types from Crawford et al. (2009), with 9 additional new associations that were originally identified by INR. WNHP also revised the concepts of 34 associations, wrote descriptions for 2 existing associations, eliminated/archived 2 associations, and created 4 new upland alliances. Finally, WNHP created 27 new wetland alliances and revised or clarified an additional 21 as part of this project (not all of those occur in the parks). 
This report and accompanying vegetation descriptions, keys and synoptic and environmental tables (all products available from the NPS Data Store project reference: https://irma.nps.gov/DataStore/Reference/Profile/2279907) present the fruit of these combined efforts: a comprehensive, up-to-date vegetation classification for the three major national parks of Washington State.
APA, Harvard, Vancouver, ISO, and other styles
5

Coram, Alexander James, Allen Robert Kingston, and Simon Northridge. Cod catches from demersal and pelagic trawl gears in the Clyde estuary: results from an industry-led survey in 2016: a report on behalf of the Clyde Fishermen's Association. Marine Alliance for Science and Technology for Scotland (MASTS), August 2022. http://dx.doi.org/10.15664/10023.26247.

Full text
Abstract:
[Extract from Foreword] This ‘cruise report’ is the first of a short series, reflecting the aspiration of the Clyde Fishermen’s Association to establish a rigorous sampling scheme to monitor changes in the abundance and distribution of cod (and later other gadoid species) within the Clyde area. The Scottish Oceans Institute was approached to provide independent scientific support in early 2016. A series of surveys was then conducted in 2016, 2017 and 2018. In each survey the SOI provided observers, collected data and wrote up a cruise report detailing the methods used and the location, numbers, weights, sex and maturity states of fish caught. Trials were halted after 2018 firstly because of pressing issues resulting from Brexit which absorbed any potentially available human and other resources, and secondly because of the COVID pandemic. The reports remained as unapproved and incomplete drafts until 2022. Picking up these reports again in 2022, we have responded to reviewers’ comments since made by Marine Scotland Science and have finalised all four reports in the 2016-2018 current series.
APA, Harvard, Vancouver, ISO, and other styles
6

Das, Jishnu, Joanna Härmä, Lant Pritchett, and Jason Silberstein. Forum: Why and How the Public vs. Private Schooling Debate Needs to Change. Research on Improving Systems of Education (RISE), March 2023. http://dx.doi.org/10.35489/bsg-rise-misc_2023/12.

Full text
Abstract:
“Are private schools better than public schools?” This ubiquitous debate in low- and middle-income countries is the wrong one to have. The foreword and three essays collected in this Forum each explore how to move past the stuck “public vs. private” binary. Jason Silberstein is a Research Fellow at RISE. His foreword is titled “A Shift in Perspective: Zooming Out from School Type and Bringing Neighborhood Education Systems into Focus.” It summarizes the current state of the “public vs. private” debate, outlines an alternative approach focused on neighborhood education systems, and then synthesizes key findings from the other essays. Jishnu Das has conducted decades of research on school systems in low-income countries, including in Zambia, India, and Pakistan. His essay is titled “The Emergence and Consequence of Schooling Markets.” It describes exactly what schooling markets look like in Pakistan, including the incredible variance in school quality in both public and private schools within the same village. Das then reviews the evidence on how to engineer local education markets to improve learning in all schools, including polices that have underdelivered (e.g., vouchers) and more promising policies (e.g., finance and information structured to take advantage of inter-school competition, and a focus on the lowest performing public schools). Das’ research on Pakistan is available through leaps.hks.harvard.edu, which also houses the data and documentation for the project. Lant Pritchett writes from a global lens grounded in his work on systems thinking in education. His essay is titled “Schooling Ain’t Just Learning: Controlling the Means of Producing Citizens.” It observes that governments supply, and families demand, education for many reasons. 
The academic emphasis on one of these reasons, producing student learning, has underweighted the critical importance of other features of education, in particular the socialization function of schooling, which more persuasively explain patterns of provision of both public school and different kinds of private schools. With this key fact in mind, Pritchett argues that there is a strong liberty case for allowing private schools, but that calls for governments to fund them are either uncompelling or “aggressively missing the point”. Joanna Härmä has done mixed-methods research on private schools across many cities and rural areas in sub-Saharan Africa and India, and has also founded a heavily-subsidized private school in Uttar Pradesh, India. Her essay responds to both Das and Pritchett and is titled “Why We Need to Stop Worrying About People’s Coping Mechanism for the ‘Global Learning Crisis’—Their Preference for Low-Fee Private Schools”. It outlines the different forces behind the rise of low-fee private schools and asserts that both the international development sector and governments have failed to usefully respond. Policy toward these private schools is sometimes overzealous, as seen in regulatory regimes that in practice are mostly used to extract bribes, and at other times overly solicitous, as seen in government subsidies that would usually be better spent improving the worst government schools. Perhaps, Härmä concludes, “we should leave well enough alone.”
APA, Harvard, Vancouver, ISO, and other styles