Academic literature on the topic 'Read data'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Read data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Read data"

1

Li, Donghe, Wonji Kim, Longfei Wang, Kyong-Ah Yoon, Boyoung Park, Charny Park, Sun-Young Kong, et al. "Comparison of INDEL Calling Tools with Simulation Data and Real Short-Read Data." IEEE/ACM Transactions on Computational Biology and Bioinformatics 16, no. 5 (September 1, 2019): 1635–44. http://dx.doi.org/10.1109/tcbb.2018.2854793.

Full text
2

Liao, Jianwei, Jun Li, Mingwang Zhao, Zhibing Sha, and Zhigang Cai. "Read Refresh Scheduling and Data Reallocation against Read Disturb in SSDs." ACM Transactions on Embedded Computing Systems 21, no. 2 (March 31, 2022): 1–27. http://dx.doi.org/10.1145/3495254.

Full text
Abstract:
Read disturb is a circuit-level noise in flash-based Solid-State Drives (SSDs), induced by intensive read requests, which may result in unexpected read errors. The approach of read refresh (RR) is commonly adopted to mitigate its negative effects by unconditionally migrating all valid data pages in the RR block to another new block. However, routine RR operations greatly impact the I/O responsiveness of SSDs, because the processing of normal I/O requests must be blocked at the same time. To further reduce the negative effects of read refresh, this article proposes a read refresh scheduling and data reallocation method to deal with two primary issues with respect to an RR operation: where to place data pages and when to trigger page migrations. Specifically, we first construct a data reallocation model to match the data pages in the RR block with destination blocks, addressing the issue of where to place the data. The model considers not only the read hotness of pages in the RR block, but also the accumulated read counts of the destination blocks. Moreover, to address the issue of when to trigger data migrations, we build a timing decision model to determine the time points for completing page migrations by considering the intensity of I/Os and the disturb situation on the RR block. Through a series of simulation experiments based on several realistic disk traces, we illustrate that the proposed RR scheduling and data reallocation mechanism can noticeably reduce read errors by more than 10.3%, on average, and the long-tail latency by between 43.9% and 64.0% at the 99.99th percentile, in contrast to state-of-the-art methods.
3

Eisenstein, Michael. "Startups use short-read data to expand long-read sequencing market." Nature Biotechnology 33, no. 5 (May 2015): 433–35. http://dx.doi.org/10.1038/nbt0515-433.

Full text
4

Shumate, Alaina, Brandon Wong, Geo Pertea, and Mihaela Pertea. "Improved transcriptome assembly using a hybrid of long and short reads with StringTie." PLOS Computational Biology 18, no. 6 (June 1, 2022): e1009730. http://dx.doi.org/10.1371/journal.pcbi.1009730.

Full text
Abstract:
Short-read RNA sequencing and long-read RNA sequencing each have their strengths and weaknesses for transcriptome assembly. While short reads are highly accurate, they are rarely able to span multiple exons. Long-read technology can capture full-length transcripts, but its relatively high error rate often leads to mis-identified splice sites. Here we present a new release of StringTie that performs hybrid-read assembly. By taking advantage of the strengths of both long and short reads, hybrid-read assembly with StringTie is more accurate than long-read only or short-read only assembly, and on some datasets it can more than double the number of correctly assembled transcripts, while obtaining substantially higher precision than the long-read data assembly alone. Here we demonstrate the improved accuracy on simulated data and real data from Arabidopsis thaliana, Mus musculus, and human. We also show that hybrid-read assembly is more accurate than correcting long reads prior to assembly while also being substantially faster. StringTie is freely available as open source software at https://github.com/gpertea/stringtie.
5

Shimada, Y. "How to Read Blood Gas Data." Japanese Journal of Medical Instrumentation 64, no. 12 (December 1, 1994): 560–63. http://dx.doi.org/10.4286/ikakikaigaku.64.12_560.

Full text
6

Zheng, Yuanqing, and Mo Li. "Read Bulk Data From Computational RFIDs." IEEE/ACM Transactions on Networking 24, no. 5 (October 2016): 3098–108. http://dx.doi.org/10.1109/tnet.2015.2502979.

Full text
7

Moretti. "Introduction to “Learning to Read Data”." Victorian Studies 54, no. 1 (2011): 78. http://dx.doi.org/10.2979/victorianstudies.54.1.78.

Full text
8

Berners-Lee, Tim, and Kieron O’Hara. "The read–write Linked Data Web." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 371, no. 1987 (March 28, 2013): 20120513. http://dx.doi.org/10.1098/rsta.2012.0513.

Full text
Abstract:
This paper discusses issues that will affect the future development of the Web, either increasing its power and utility, or alternatively suppressing its development. It argues for the importance of the continued development of the Linked Data Web, and describes the use of linked open data as an important component of that. Second, the paper defends the Web as a read–write medium, and goes on to consider how the read–write Linked Data Web could be achieved.
9

Stöcker, Bianca K., Johannes Köster, and Sven Rahmann. "SimLoRD: Simulation of Long Read Data." Bioinformatics 32, no. 17 (May 10, 2016): 2704–6. http://dx.doi.org/10.1093/bioinformatics/btw286.

Full text
10

Tan, Yuxiang, Yann Tambouret, and Stefano Monti. "SimFuse: A Novel Fusion Simulator for RNA Sequencing (RNA-Seq) Data." BioMed Research International 2015 (2015): 1–5. http://dx.doi.org/10.1155/2015/780519.

Full text
Abstract:
The performance evaluation of fusion detection algorithms from high-throughput sequencing data crucially relies on the availability of data with known positive and negative cases of gene rearrangements. The use of simulated data circumvents some shortcomings of real data by generation of an unlimited number of true and false positive events, and the consequent robust estimation of accuracy measures, such as precision and recall. Although a few simulated fusion datasets from RNA Sequencing (RNA-Seq) are available, they are of limited sample size. This makes it difficult to systematically evaluate the performance of RNA-Seq based fusion-detection algorithms. Here, we present SimFuse to address this problem. SimFuse utilizes real sequencing data as the fusions’ background to closely approximate the distribution of reads from a real sequencing library and uses a reference genome as the template from which to simulate fusions’ supporting reads. To assess the supporting read-specific performance, SimFuse generates multiple datasets with various numbers of fusion supporting reads. Compared to an extant simulated dataset, SimFuse gives users control over the supporting read features and the sample size of the simulated library, based on which the performance metrics needed for the validation and comparison of alternative fusion-detection algorithms can be rigorously estimated.

Dissertations / Theses on the topic "Read data"

1

Burger, Joseph. "Real-time engagement area development program (READ-Pro)." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02Jun%5FBurger.pdf.

Full text
2

Lecompte, Lolita. "Structural variant genotyping with long read data." Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S054.

Full text
Abstract:
Structural Variants (SVs) are genomic rearrangements of more than 50 base pairs. Since SVs can reach several thousand base pairs, they can have huge impacts on genome function; studying SVs is, therefore, of great interest. Recently, a new generation of sequencing technologies has been developed that produces long read data of tens of thousands of base pairs, which are particularly useful for spanning SV breakpoints. So far, bioinformatics methods have focused on the SV discovery problem with long read data. However, no method has been proposed to specifically address the issue of genotyping SVs with long read data. The purpose of SV genotyping is to assess, for each variant of a given input set, which alleles are present in a newly sequenced sample. This thesis proposes a new method for genotyping SVs with long read data, based on the representation of each allele's sequence. We also define a set of conditions for considering a read as supporting an allele. Our method has been implemented in a tool called SVJedi. Our tool has been validated on both simulated and real human data and achieves high genotyping accuracy. We show that SVJedi obtains better performance than other existing long read genotyping tools, and we also demonstrate that SV genotyping is considerably improved with SVJedi compared to other approaches, namely SV discovery and short read SV genotyping approaches.
3

Walter, Sarah. "Parallel read/write system for optical data storage." Diss., Connect to online resource, 2005. http://wwwlib.umi.com/cr/colorado/fullcit?p1425767.

Full text
4

Ibanez, Luis Daniel. "Towards a read/write web of linked data." Nantes, 2015. http://archive.bu.univ-nantes.fr/pollux/show.action?id=9089939a-874b-44e1-a049-86a4c5c5d0e6.

Full text
Abstract:
The Linked Data initiative has made available millions of pieces of data for querying through a federation of autonomous participants. However, the Web of Linked Data suffers from problems of data heterogeneity and quality. We cast the problem of integrating heterogeneous data sources as a Local-as-View (LAV) mediation problem; unfortunately, LAV may require the execution of a number of "rewritings" exponential in the number of query subgoals. We propose the Graph-Union (GUN) strategy to maximise the results obtained from a subset of rewritings. Compared to traditional rewriting execution strategies, GUN improves execution time and the number of results obtained in exchange for higher memory consumption. Once data can be queried, data consumers can detect quality issues, but to resolve them they need to write to the data of the sources, i.e., to evolve Linked Data from Read-Only to Read-Write. However, writing among autonomous participants raises consistency issues. We model the Read-Write Linked Data as a social network where actors copy the data they are interested in, update it, and publish updates to exchange with others. We propose two algorithms for update exchange: SU-Set, which achieves Strong Eventual Consistency (SEC), and Col-Graph, which achieves Fragment Consistency, stronger than SEC. We analyze the worst- and best-case complexities of both algorithms and experimentally estimate the average complexity of Col-Graph; results suggest that it is feasible for social network topologies.
5

Horne, Ross J. "Programming languages and principles for read-write linked data." Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/210899/.

Full text
Abstract:
This work addresses a gap in the foundations of computer science. In particular, only a limited number of models address design decisions in modern Web architectures. The development of the modern Web architecture tends to be guided by the intuition of engineers. The intuition of an engineer is probably more powerful than any model; however, models are important tools to aid principled design decisions. No model is sufficiently strong to provide absolute certainty of correctness; however, an architecture accompanied by a model is stronger than an architecture accompanied solely by intuition led by the personal, hence subjective, subliminal ego. The Web of Data describes an architecture characterised by key W3C standards, including a semi-structured data format, an entailment mechanism and a query language. Recently, prominent figures have drawn attention to the necessity of update languages for the Web of Data, coining the notion of Read-Write Linked Data. A dynamic Web of Data with updates is a more realistic reflection of the Web. An established and versatile approach to modelling dynamic languages is to define an operational semantics. This work provides such an operational semantics for a Read-Write Linked Data architecture. Furthermore, the model is sufficiently general to capture the established standards, including queries and entailments. Each feature is relatively easily modelled in isolation; however, a model which checks that the key standards socialise is a greater challenge, to which operational semantics are suited. The model validates most features of the standards while raising some serious questions. Further to evaluating W3C standards, the operational semantics provides a foundation for static analysis. One approach is to derive an algebra for the model. The algebra is proven to be sound with respect to the operational semantics. Soundness ensures that the algebraic rules preserve operational behaviour.
If the algebra establishes that two updates are equivalent, then they have the same operational capabilities. This is useful for optimisation, since the real cost of executing the updates may differ, despite their equivalent expressive powers. A notion of operational refinement is discussed, which allows a non-deterministic update to be refined to a more deterministic update. Another approach to the static analysis of Read-Write Linked Data is through a type system. The simplest type system for this application simply checks that well-understood terms which appear in the semi-structured data, such as numbers and strings of characters, are used correctly. Static analysis then verifies that basic runtime errors in a well-typed program do not occur. Type systems for URIs are also investigated, inspired by W3C standards. Type systems for URIs are controversial, since URIs have no internal structure and thus no obvious non-trivial types. Thus a flexible type system which accommodates several approaches to typing URIs is proposed.
6

Huang, Songbo (黄颂博). "Detection of splice junctions and gene fusions via short read alignment." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B45862527.

Full text
7

Saleem, Muhammad. "Automated Analysis of Automotive Read-Out Data for Better Decision Making." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-63785.

Full text
Abstract:
The modern automobile is a complex electromechanical system controlled by control systems consisting of several interdependent electronic control units (ECUs). Analysis of the data generated by these modules is very important in order to observe interesting patterns in the data. At Volvo Cars Corporation today, diagnostic read-out (DRO) data is retrieved from client machines installed at workshops in different countries around the world. The problem with this data is that it does not give a clear picture of what is causing what, i.e., it does not allow tracking the problem. Diagnostic engineers at Volvo Cars Corporation perform routine statistical analysis of diagnostic read-out data manually, which is time-consuming and tedious work. Moreover, this analysis is mostly restricted to basic statistical analysis of diagnostic read-out data. We present an approach based on statistical analysis and cluster analysis. Our approach focuses on analysing the data from a purely statistical standpoint to isolate the problem in diagnostic read-out data, thereby helping to visualize and analyse the nature of the problem at hand. Different general statistical formulae were applied to extract meaningful information from large amounts of DRO data. Cluster analysis was carried out to obtain clusters consisting of similar trouble codes. Different methods and techniques were considered for the purpose of cluster analysis. Hierarchical and non-hierarchical clusters were extracted by applying appropriate algorithms. The results obtained from the thesis work show that the diagnostic read-out data consists of independent and interdependent fault codes. Groups were generated which consist of similar trouble codes. Furthermore, corresponding factors from freeze-frame data which show significant variation for these groups were also extracted. These faults, groups of faults and factors were later interpreted and validated by diagnostic engineers.
8

Frousios, Kimon. "Bioinformatic analysis of genomic sequencing data : read alignment and variant evaluation." Thesis, King's College London (University of London), 2014. http://kclpure.kcl.ac.uk/portal/en/theses/bioinformatic-analysis-of-genomic-sequencing-data(e3a55df7-543e-4eaa-a81e-6534eacf6250).html.

Full text
Abstract:
The invention and rise in popularity of Next Generation Sequencing (NGS) technologies has led to a steep increase in sequencing data and the rise of new challenges. This thesis aims to contribute methods for the analysis of NGS data, and focuses on two of the challenges presented by these data. The first challenge regards the need for NGS reads to be aligned to a reference sequence, as their short length complicates direct assembly. A great number of tools exist that carry out this task quickly and efficiently, yet they all rely on the mere count of mismatches to assess alignments, ignoring the knowledge that genome composition and mutation frequencies are biased. Thus, the use of a scoring matrix that incorporates the mutation and composition biases observed among humans was tested with simulated reads. The scoring matrix was implemented and incorporated into the in-house algorithm REAL, allowing side-by-side comparison of the performance of the biased model and the mismatch count. The algorithm REAL was also used to investigate the applicability of NGS RNA-seq data to the understanding of the relationship between genomic expression and the compartmentalisation of genomic base composition into isochores. The second challenge regards the evaluation of the variants (SNPs) discovered by sequencing. NGS technologies have caused a sharp rise in the rate at which new SNPs are discovered, making experimental validation of each one impossible. Several tools exist that take into account various properties of the genome, the transcripts and the protein products relevant to the location of a SNP and attempt to predict the SNP's impact. These tools are valuable in screening and prioritising SNPs likely to have a causative association with a genetic disease of interest. Despite the number of individual tools and the diversity of their resources, no attempt had been made to draw a consensus among them.
Two consensus approaches were considered: one based on a simple majority vote of the tools considered, and one based on machine learning. Both methods proved to offer highly competitive classification, both against the individual tools and against other consensus methods published in the meantime.
9

Hoffmann, Steve. "Genome Informatics for High-Throughput Sequencing Data Analysis." Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-152643.

Full text
Abstract:
This thesis introduces three different algorithmic and statistical strategies for the analysis of high-throughput sequencing data. First, we introduce a heuristic method based on enhanced suffix arrays to map short sequences to larger reference genomes. The algorithm builds on the idea of an error-tolerant traversal of the suffix array for the reference genome, in conjunction with the concept of matching statistics introduced by Chang and a bitvector-based alignment algorithm proposed by Myers. The algorithm supports paired-end and mate-pair alignments, and the implementation offers methods for primer detection and primer and poly-A trimming. In our own benchmarks as well as independent benchmarks, this tool outcompetes other currently available tools with respect to sensitivity and specificity in simulated and real data sets for a large number of sequencing protocols. Second, we introduce a novel dynamic programming algorithm for the spliced alignment problem. The advantage of this algorithm is its capability to detect not only collinear splice events, i.e. local splice events on the same genomic strand, but also circular and other non-collinear splice events. This succinct and simple algorithm handles all these cases at the same time with high accuracy. While it is on par with other state-of-the-art methods for collinear splice events, it outcompetes other tools for many non-collinear splice events. The application of this method to publicly available sequencing data led to the identification of a novel isoform of the tumor suppressor gene p53. Since this gene is one of the best studied genes in the human genome, this finding is quite remarkable and suggests that the application of our algorithm could help to identify a plethora of novel isoforms and genes. Third, we present a data-adaptive method to call single nucleotide variations (SNVs) from aligned high-throughput sequencing reads.
We demonstrate that our method, based on empirical log-likelihoods, automatically adjusts to the quality of a sequencing experiment and thus renders a "decision" on when to call an SNV. In our simulations this method is on par with current state-of-the-art tools. Finally, we present biological results that have been obtained using the special features of the presented alignment algorithm.
10

Wang, Frank Zhigang. "Advanced magnetic thin-film heads under read-while-write operation." Thesis, University of Plymouth, 1999. http://hdl.handle.net/10026.1/2353.

Full text
Abstract:
A Read-While-Write (RWW) operation for tape and/or potentially disk applications is needed in the following three cases: 1. High reliability; 2. Data servo systems; 3. Buried servo systems. All these applications mean that the read (servo) head and the write head are operative simultaneously. Consequently, RWW operation requires work to suppress the so-called crossfeed field radiated from the write head. Traditionally, write-read crossfeed has been reduced in conventional magnetic recording heads by a variety of screening methods, but the effectiveness of these methods is very limited. On the other hand, the early theoretical investigations of the crossfeed problem, concentrating on the flux line pattern in front of a head structure based on a simplified model, may not be comprehensive. Today a growing number of magnetic recording equipment manufacturers employ thin-film technology to fabricate heads, and thereby the size of the modern head is much smaller than in the past. The increasing use of thin-film metallic magnetic materials for heads, along with the appearance of other new technologies, such as the MR reproduce mode and keepered media, has stimulated the need for an increased understanding of the crossfeed problem through advanced analysis methods and a satisfactory practical solution to achieve RWW operation. The work described in this thesis to suppress the crossfeed field involves both a novel reproduce mode of a Dual Magnetoresistive (DMR) head, originally designed to gain a large reproduce sensitivity at high linear recording densities exceeding 100 kFCI, which plays the key role in suppressing the crossfeed (the corresponding signal-to-noise ratio is over 38 dB), and several other compensation schemes giving further suppression.
Advanced analytical and numerical methods of estimating crossfeed in single- and multi-track thin-film/MR heads under both DC and AC excitations can often help a head designer understand how the crossfeed field spreads and therefore how to suppress it from the standpoint of an overall head configuration. This work also assesses the scale of the crossfeed problem by making measurements on current and improved heads, thereby identifying the main contributors to crossfeed. The relevance of this work to the computer industry is clear for achieving simultaneous operation of the read head and the write head, especially in a thin-film head assembly. This is because computer data rates must increase to meet the demands of storing more and more information in less time as computer graphics packages become more sophisticated.

Books on the topic "Read data"

1

Association, European Computer Manufacturers. Data interchange on read-only 120 mm optical data disks (CD-ROM). 2nd ed. Geneva: European Computer Manufacturers Association, 1996.

Find full text
2

Toshiba America, Inc. Toshiba MOS memory products data book. [Tustin, Calif.]: Toshiba America, 1987.

Find full text
3

Jenkins, Gordon. The electronic data interchange handbook: A quick read on EDI. Etobicoke, Ont: [Electronic Data Interchange Council of Canada], 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Allerup, Peter. Why I like to read: Statistical analysis of questionnaire data. Kobenhavn: Danish National Institute for Educational Research, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Publications, Sun Technical, ed. Read me first!: A style guide for the computer industry. 2nd ed. Upper Saddle River, NJ: Prentice Hall, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Read me first!: A style guide for the computer industry. 3rd ed. Upper Saddle River, N.J.: Prentice Hall; Pearson Education [distributor], 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jenkins, Gordon. The electronic commerce handbook: A quick read on how electronic commerce can keep you competitive. Etobicoke, Ont: EDI Council of Canada, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Creasy, William C. Microcomputers & literary scholarship: Papers read at a Clark Library conference, 30 December 1982. Los Angeles, Calif: William Andrews Clark Memorial Library, University of California, Los Angeles, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tsiao, Sunny. Read you loud and clear!: The story of NASA'S spaceflight tracking and data network. Washington, DC: National Aeronautics and Space Administration, NASA History Division, Office of External Relations, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Read data"

1

Mok, F., G. Zhou, and D. Psaltis. "Holographic Read-Only Memory." In Holographic Data Storage, 399–407. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-540-47864-5_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chuang, Ernest, and Kevin Curtis. "Holographic Read Only Memories." In Holographic Data Storage, 373–401. Chichester, UK: John Wiley & Sons, Ltd, 2010. http://dx.doi.org/10.1002/9780470666531.ch15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Miller, Judith. "READ and DATA statements." In Beginning BASIC with the ZX Spectrum, 92–98. London: Macmillan Education UK, 1985. http://dx.doi.org/10.1007/978-1-349-81211-0_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gerbing, David W. "Read and Write Data." In R Data Analysis without Programming, 21–40. 2nd ed. New York: Routledge, 2023. http://dx.doi.org/10.4324/9781003278412-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Palcic, Donal, and Darragh Flannery. "Making Sense of Economic Data." In How to Read Economic News, 275–91. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003154747-16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sbiaa, Rachid. "Magnetoresistive Read Heads: Fundamentals and Functionality." In Developments in Data Storage, 97–126. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2011. http://dx.doi.org/10.1002/9781118096833.ch6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Livraga, Giovanni. "Enforcing Dynamic Read and Write Privileges." In Protecting Privacy in Data Release, 139–82. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16109-9_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gorman, Brian L. "Data Access (Create, Read, Update, Delete)." In Practical Entity Framework, 245–69. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6044-9_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gorman, Brian L. "Data Access (Create, Read, Update, Delete)." In Practical Entity Framework Core 6, 269–97. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-7301-2_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Arumugam, Krithika, Irina Bessarab, Mindia A. S. Haryono, and Rohan B. H. Williams. "Recovery and Analysis of Long-Read Metagenome-Assembled Genomes." In Metagenomic Data Analysis, 235–59. New York, NY: Springer US, 2023. http://dx.doi.org/10.1007/978-1-0716-3072-3_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Read data"

1

Yardy, R., Blair I. Finkelstein, and Terry W. McDaniel. "Read stability in magneto-optical storage." In Optical Data Storage, edited by Donald B. Carlin and David B. Kay. SPIE, 1990. http://dx.doi.org/10.1117/12.22004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Herget, Philipp, Tuviah E. Schlesinger, James A. Bain, D. D. Stancil, and H. Awano. "MAMMOS read-only memory." In Optical Data Storage Topical Meeting 2004, edited by B. V. K. Vijaya Kumar and Hiromichi Kobori. SPIE, 2004. http://dx.doi.org/10.1117/12.556950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kitaguchi, Tohru, Etsuji Akimoto, Harumichi Tsukada, Seiichi Ohta, and Shuso Iyoshi. "Mechanism of read-instability of optical recording." In Optical Data Storage, edited by Donald B. Carlin and David B. Kay. SPIE, 1990. http://dx.doi.org/10.1117/12.22015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wu, Suzhen, Weiwei Zhang, Bo Mao, and Hong Jiang. "HotR: Alleviating Read/Write Interference with Hot Read Data Replication for Flash Storage." In 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2019. http://dx.doi.org/10.23919/date.2019.8715100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zavislan, James M. "Analytic statistical model for magneto-optic read paths." In Optical Data Storage, edited by Donald B. Carlin and David B. Kay. SPIE, 1990. http://dx.doi.org/10.1117/12.22043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lynch, R. T. "A Performance Model for Optical Recording Read Channels." In Optical Data Storage. Washington, D.C.: Optica Publishing Group, 1987. http://dx.doi.org/10.1364/ods.1987.thb3.

Full text
Abstract:
A model of the readback, equalization and data detection processes in optical recording has been used to analyze the soft error rate performance that could be expected as a function of these parameters and the noise present in the channel. The model is based on one developed originally by P. H. Siegel(1), and extended by R. G. Hirko and T. D. Howel(2). This original model was designed to analyze inductive magnetic storage channels. The optical channel model uses much of the methodology and several of the components from the earlier work. Significant modifications and additions were required to allow analysis of codes of interest in optical storage, and to account for the different channel transfer function encountered in an optical channel.
APA, Harvard, Vancouver, ISO, and other styles
7

Loerincz, Emoeke, Ferenc Ujhelyi, Pal Koppa, A. Kerekes, Gabor Szarvas, Gabor Erdei, Jozsua Fodor, et al. "Read/write demonstrator of rewritable holographic memory card system." In Optical Data Storage, edited by Terril Hurst and Seiji Kobayashi. SPIE, 2002. http://dx.doi.org/10.1117/12.453409.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Birukawa, M., N. Miyatake, and T. Suzuki. "Magnetically Super Resolution for Optical Read Only Memory Disks." In Optical Data Storage. Washington, D.C.: Optica Publishing Group, 1998. http://dx.doi.org/10.1364/ods.1998.pdp.1.

Full text
Abstract:
A new MSR-ROM method for high-density recording utilizes the difference in coercivity between “rough” and “smooth” areas of a disk. This method is advantageous for mass production and for compatibility with erasable magneto-optical recording using MSR. A recording density of more than 5 Gbit/inch² is expected with this method, the same as the double-mask-type MSR technique in rewritable disks. This paper describes the principle of MSR-ROM and the results of a readout experiment.
APA, Harvard, Vancouver, ISO, and other styles
9

Zheng, Yuanqing, and Mo Li. "Read bulk data from computational RFIDs." In IEEE INFOCOM 2014 - IEEE Conference on Computer Communications. IEEE, 2014. http://dx.doi.org/10.1109/infocom.2014.6847973.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Dewey, Anthony G. "Optimizing the Noise Performance of a Magneto-Optic Read Channel." In Optical Data Storage. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/ods.1989.thc2.

Full text
Abstract:
One of the problems in the design of a magneto-optic read/write head is determining the light level needed at the data detectors. Figure 1 is a schematic of a typical M-O head in which PBS1 and PBS2 are actually leaky beam-splitters that divert from the main optical path (laser to disk and back) some fraction of the p-polarized (unrotated) light. On the outward path this diverted light reduces the efficiency of the head, and, especially from the point of view of writing on the disk, the goal would be to minimize this loss. On the return path the light diverted by PBS1 goes to the servo detectors to provide focus and track-error signals, and that diverted by PBS2 goes to the data detectors.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Read data"

1

Wendelberger, James. A First Look at WAMS Data via the Read Code. Office of Scientific and Technical Information (OSTI), February 2021. http://dx.doi.org/10.2172/1764743.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Unknown, Author. L51393 A.G.A. Gas-Liquid Data Bank. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), January 1990. http://dx.doi.org/10.55274/r0011062.

Full text
Abstract:
The purposes of the Data Bank are to organize field (operating pipeline) data from a wide variety of gas and oil pipelines and to make these data available in a convenient form to interested pipeline designers and operators. The Data Bank itself consists of a computer tape written in IBM Fortran and a User's Manual. The tape contains two computer programs (one to enter data and one to read back the data), the data files, an index listing the pipelines contained in the Data Bank, a bibliography for the data in the Data Bank, and a brief summary describing the data. The types of data of interest include composition, pipe geometry, and pressure drop and holdup data (obtained by pigging or by measurement). Transient data during pigging operations or from rapid flow changes are also of interest.
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Howell, Enrique Saldivar-Carranza, Jijo K. Mathew, Woosung Kim, Jairaj Desai, Timothy Wells, and Darcy M. Bullock. Extraction of Vehicle CAN Bus Data for Roadway Condition Monitoring. Purdue University, 2020. http://dx.doi.org/10.5703/1288284317212.

Full text
Abstract:
Obtaining timely information across the state roadway network is important for monitoring the condition of the roads and the operating characteristics of traffic. One of the most significant challenges in winter roadway maintenance is identifying emerging or deteriorating conditions before significant crashes occur. For instance, almost all modern vehicles have accelerometers, anti-lock brake (ABS), and traction control systems. This data can be read from the Controller Area Network (CAN) of the vehicle and, combined with GPS coordinates and cellular connectivity, can provide valuable on-the-ground sampling of vehicle dynamics at the onset of a storm. We are rapidly entering an era where this vehicle data can provide an agency with opportunities to manage its systems more effectively than traditional procedures that rely on fixed infrastructure sensors and telephone reports. This data could also reduce the required density of roadway weather information systems (RWIS), similar to how probe vehicle data has reduced the need for micro loop or side-fire sensors for collecting traffic speeds.
APA, Harvard, Vancouver, ISO, and other styles
4

Lin, Yawei, Yi Chen, Rongrong Liu, and Baohua Cao. Effect of exercise on rehabilitation of breast cancer surgery patients: A systematic review and meta-analysis of randomized controlled trials. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, October 2022. http://dx.doi.org/10.37766/inplasy2022.10.0065.

Full text
Abstract:
Review question / Objective: Exercise after breast cancer surgery has proved beneficial to rehabilitation. We evaluate the best exercise for different post-surgery complications. Information sources: The China National Knowledge Infrastructure, Wanfang Data Knowledge Service Platform, VIP China Science and Technology Journal Database, China Biology Medicine, EMBASE, and PubMed databases were searched. Combinations of breast cancer terms (“breast tumor”, “breast carcinoma”, “mammary carcinoma”, “breast neoplasm”) and rehabilitation exercise terms (“exercise”, “physical therapy”) were employed when screening the abstracts/keywords of articles. Two researchers independently searched the databases, read the titles and abstracts of the literature, read the full text of the provisionally included literature, and extracted the data. In cases of disagreement, a third researcher was consulted.
APA, Harvard, Vancouver, ISO, and other styles
5

Walker, Alex, Brian MacKenna, Peter Inglesby, Christopher Rentsch, Helen Curtis, Caroline Morton, Jessica Morley, et al. Clinical coding of long COVID in English primary care: a federated analysis of 58 million patient records in situ using OpenSAFELY. OpenSAFELY, 2021. http://dx.doi.org/10.53764/rpt.3917ab5ac5.

Full text
Abstract:
This OpenSAFELY report is a routine update of the analysis described in our peer-reviewed paper published in the British Journal of General Practice, "Clinical coding of long COVID in English primary care: a federated analysis of 58 million patient records in situ using OpenSAFELY." The data require careful interpretation, and there are a number of caveats. Please read the full details of our methods and discussion; the full analytical methods for this routine report are available on GitHub. OpenSAFELY is a new secure analytics platform for electronic patient records built on behalf of NHS England to deliver urgent academic and operational research during the pandemic. You can read more about OpenSAFELY on our website.
APA, Harvard, Vancouver, ISO, and other styles
6

Dubeck, Margaret M., Jonathan M. B. Stern, and Rehemah Nabacwa. Learning to Read in a Local Language in Uganda: Creating Learner Profiles to Track Progress and Guide Instruction Using Early Grade Reading Assessment Results. RTI Press, June 2021. http://dx.doi.org/10.3768/rtipress.2021.op.0068.2106.

Full text
Abstract:
The Early Grade Reading Assessment (EGRA) is used to evaluate studies and monitor projects that address reading skills in low- and middle-income countries. Results are often described solely in terms of a passage-reading subtask, thereby overlooking progress in related skills. Using archival data of cohort samples from Uganda at two time points in three languages (Ganda, Lango, and Runyankore-Rukiga), we explored a methodology that uses passage-reading results to create five learner profiles: Nonreader, Beginner, Instructional, Fluent, and Next-Level Ready. We compared learner profiles with results on other subtasks to identify the skills students would need to develop to progress from one profile to another. We then used regression models to determine whether students’ learner profiles were related to their results on the various subtasks. We found membership in four categories. We also found a shift in the distribution of learner profiles from Grade 1 to Grade 4, which is useful for establishing program effectiveness. The distribution of profiles within grades expanded as students progressed through the early elementary grades. We recommend that those who are discussing EGRA results describe students by profiles and by the numbers that shift from one profile to another over time. Doing so would help describe abilities and instructional needs and would show changes in a meaningful way.
APA, Harvard, Vancouver, ISO, and other styles
7

Ralph Best, S.J. Maheras, T.I. McSweeney, and S.B. Ross. Real Data for Real Routes. Office of Scientific and Technical Information (OSTI), August 2001. http://dx.doi.org/10.2172/805678.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rodriguez, Dirk, and Cameron Williams. Channel Islands National Park: Terrestrial vegetation monitoring annual report - 2016. National Park Service, August 2022. http://dx.doi.org/10.36967/2293561.

Full text
Abstract:
This report presents the data collected in 2016 as part of the long-term terrestrial vegetation monitoring program at Channel Islands National Park. The purpose of the monitoring program is to document long-term trends in the major vegetation communities within the park. The data collected are from 30 m point-line intercept transects. In the past, each transect was sampled annually. However, beginning in 2012 the program began adding randomly located transects to improve the representativeness of the sampling, and transitioned to a rotating panel design. Now only a core subset of the transects is read annually; non-core transects are assigned to one of four panels, and those transects are read only once every four years. A summary analysis of the 2016 data shows that 165 transects were read. The 165 transects were distributed across all five islands: Santa Rosa Island (n = 87), Santa Cruz Island (n = 33), Santa Barbara Island (n = 18), Anacapa Island (n = 9), and San Miguel Island (n = 11). Relative native plant cover averaged 63% across all islands and sampled communities, while absolute native plant cover averaged 32%. Among plant communities, relative percent native cover ranged from a low of 1% in seablite scrub to a high of 98% in oak woodland. In general, the number of vegetation data points recorded per transect correlates positively with average rainfall, which is reflected in the number of “hits,” or transect points intersecting vegetation: when precipitation declines, there is a corresponding drop in the number of hits. In 2016, however, this was not the case. Even though rainfall increased compared to the previous four years (18.99 inches in 2016 vs. an average of 6.32 inches for the previous four years), the average number of hits was only 64. To put this into perspective, the highest average number of hits was 240 in 1993, an El Niño year of high precipitation.
The number of vegetation communities sampled varied by island, with the larger islands having more communities. In 2016, there were 15 communities sampled on Santa Rosa Island, 12 on Santa Cruz Island, 7 on San Miguel Island, 7 on Santa Barbara Island, and 7 on Anacapa Island. Twenty-six vegetation types were sampled in 2016. Of these, 13 occurred on more than one island. The most commonly shared community was Valley/Foothill grassland, which was found in one form or another on all five islands within the park. The next most commonly shared communities were coastal sage scrub and coastal scrub, which were found on four islands. Coastal bluff scrub and coreopsis scrub were monitored on three islands. Five communities—ironwood, mixed woodland, oak woodland, riparian, and seacliff scrub—were monitored on two islands, and 13 communities—Torrey pine woodland, shrub savannah, seablite scrub, Santa Cruz Island pine, perennial iceplant, lupine scrub, fennel, coastal strand, coastal marsh, cactus scrub, boxthorn scrub, barren, and Baccharis scrub—were each monitored on one island.
APA, Harvard, Vancouver, ISO, and other styles
9

Damianos, Laurie, Steve Wohlever, Robyn Kozierok, and Jay Ponte. MiTAP for Real Users, Real Data, Real Problems. Fort Belvoir, VA: Defense Technical Information Center, April 2003. http://dx.doi.org/10.21236/ada459750.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

George and Hawley. PR-015-12600-R01 Ability of Ultrasonic Meters to Measure Accurately in Compressor-Induced Pulsating Flows. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), November 2013. http://dx.doi.org/10.55274/r0010808.

Full text
Abstract:
Transmission and storage operations frequently move natural gas using reciprocating compressors that may generate flow pulsations. Most measurement systems cannot accurately measure the flow rate of a pulsating gas stream, and the resulting errors can cause inaccurate gas volumes and accounting imbalances. Recent advances in ultrasonic meters may provide the ability to function without measurement error in pulsating gas streams. Tests were performed to examine the relationship between ultrasonic meter transducer sampling rates, the frequency and amplitude of pulsations from reciprocating compressors, and meter accuracy as a possible basis for using ultrasonic meters in gas pipelines with varying pulsations. Two ultrasonic natural gas meters of current design were tested at SwRI in flows that simulated reciprocating compressor pulsations. Diagnostics and flow data were collected from the meters and analyzed to identify pulsation conditions in which the meters read accurately, or in which meter data could be used to correct measurement errors.
APA, Harvard, Vancouver, ISO, and other styles