
Dissertations on the topic "Write data"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Check out the 24 best doctoral dissertations for research on the topic "Write data".

Next to each work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the work's abstract online, if these are available in the metadata.

Browse dissertations from a variety of disciplines and compile appropriate bibliographies.

1

Walter, Sarah. "Parallel read/write system for optical data storage". Diss., Connect to online resource, 2005. http://wwwlib.umi.com/cr/colorado/fullcit?p1425767.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
2

Ibanez, Luis Daniel. "Towards a read/write web of linked data". Nantes, 2015. http://archive.bu.univ-nantes.fr/pollux/show.action?id=9089939a-874b-44e1-a049-86a4c5c5d0e6.

Full text of the source
Abstract:
The Linked Data initiative has made available millions of pieces of data for querying through a federation of autonomous participants. However, the Web of Linked Data suffers from problems of data heterogeneity and quality. We cast the problem of integrating heterogeneous data sources as a Local-as-View (LAV) mediation problem; unfortunately, LAV may require the execution of a number of "rewritings" exponential in the number of query subgoals. We propose the Graph-Union (GUN) strategy to maximise the results obtained from a subset of rewritings. Compared to traditional rewriting execution strategies, GUN improves execution time and the number of results obtained in exchange for higher memory consumption. Once data can be queried, data consumers can detect quality issues, but to resolve them they need to write on the data of the sources, i.e., to evolve Linked Data from read-only to read-write. However, writing among autonomous participants raises consistency issues. We model Read-Write Linked Data as a social network where actors copy the data they are interested in, update it and publish updates to exchange with others. We propose two algorithms for update exchange: SU-Set, which achieves Strong Eventual Consistency (SEC), and Col-Graph, which achieves Fragment Consistency, stronger than SEC. We analyze the worst- and best-case complexities of both algorithms and estimate experimentally the average complexity of Col-Graph; the results suggest that it is feasible for social network topologies.
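For context on the consistency guarantee named here: Strong Eventual Consistency is the defining property of conflict-free replicated data types (CRDTs). The sketch below is a minimal textbook OR-Set applied to triples; it illustrates the kind of guarantee SU-Set provides, but it is not the thesis's actual algorithm.

```python
import uuid

class ORSet:
    """Observed-Remove Set: a well-known CRDT that guarantees Strong
    Eventual Consistency (SEC). Illustrative only; the thesis's SU-Set
    applies related ideas to RDF update exchange."""

    def __init__(self):
        self.adds = {}      # element -> set of unique add tags
        self.removes = {}   # element -> set of removed tags

    def add(self, element):
        tag = uuid.uuid4().hex                       # unique tag per add
        self.adds.setdefault(element, set()).add(tag)
        return (element, tag)                        # broadcast to replicas

    def remove(self, element):
        observed = set(self.adds.get(element, set()))
        self.removes.setdefault(element, set()).update(observed)
        return (element, observed)                   # broadcast observed tags

    def apply_add(self, update):
        element, tag = update
        self.adds.setdefault(element, set()).add(tag)

    def apply_remove(self, update):
        element, observed = update
        self.removes.setdefault(element, set()).update(observed)

    def contains(self, element):
        live = self.adds.get(element, set()) - self.removes.get(element, set())
        return bool(live)

# Two replicas exchange updates and converge regardless of delivery order.
a, b = ORSet(), ORSet()
u1 = a.add(("s", "p", "o"))       # replica A inserts a triple
b.apply_add(u1)
u2 = b.remove(("s", "p", "o"))    # replica B removes what it observed
a.apply_remove(u2)
assert a.contains(("s", "p", "o")) == b.contains(("s", "p", "o")) == False
```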
APA, Harvard, Vancouver, ISO and other styles
3

Horne, Ross J. "Programming languages and principles for read-write linked data". Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/210899/.

Full text of the source
Abstract:
This work addresses a gap in the foundations of computer science. In particular, only a limited number of models address design decisions in modern Web architectures. The development of the modern Web architecture tends to be guided by the intuition of engineers. The intuition of an engineer is probably more powerful than any model; however, models are important tools to aid principled design decisions. No model is sufficiently strong to provide absolute certainty of correctness; however, an architecture accompanied by a model is stronger than an architecture accompanied solely by intuition led by the personal, hence subjective, subliminal ego. The Web of Data describes an architecture characterised by key W3C standards, including a semi-structured data format, an entailment mechanism and a query language. Recently, prominent figures have drawn attention to the necessity of update languages for the Web of Data, coining the notion of Read-Write Linked Data. A dynamic Web of Data with updates is a more realistic reflection of the Web. An established and versatile approach to modelling dynamic languages is to define an operational semantics, and this work provides such an operational semantics for a Read-Write Linked Data architecture. Furthermore, the model is sufficiently general to capture the established standards, including queries and entailments. Each feature is relatively easily modelled in isolation; however, a model which checks that the key standards socialise is a greater challenge, to which operational semantics are suited. The model validates most features of the standards while raising some serious questions. Beyond evaluating W3C standards, the operational semantics provides a foundation for static analysis. One approach is to derive an algebra for the model. The algebra is proven to be sound with respect to the operational semantics: soundness ensures that the algebraic rules preserve operational behaviour. If the algebra establishes that two updates are equivalent, then they have the same operational capabilities. This is useful for optimisation, since the real cost of executing the updates may differ despite their equivalent expressive powers. A notion of operational refinement is discussed, which allows a non-deterministic update to be refined to a more deterministic update. Another approach to the static analysis of Read-Write Linked Data is through a type system. The simplest type system for this application checks that well-understood terms which appear in the semi-structured data, such as numbers and strings of characters, are used correctly; static analysis then verifies that basic runtime errors in a well-typed program do not occur. Type systems for URIs are also investigated, inspired by W3C standards. Type systems for URIs are controversial, since URIs have no internal structure and thus no obvious non-trivial types. Thus a flexible type system which accommodates several approaches to typing URIs is proposed.
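The soundness claim in this abstract can be stated compactly. The notation below is a standard formalisation assumed for illustration, not taken from the thesis:

```latex
% Soundness of the algebra w.r.t. the operational semantics:
% provable equality implies identical operational behaviour.
\vdash u \equiv v \;\Longrightarrow\; u \,\simeq_{\mathrm{op}}\, v
% where u \simeq_op v means u and v admit the same labelled transitions.
```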
APA, Harvard, Vancouver, ISO and other styles
4

Wang, Frank Zhigang. "Advanced magnetic thin-film heads under read-while-write operation". Thesis, University of Plymouth, 1999. http://hdl.handle.net/10026.1/2353.

Full text of the source
Abstract:
A Read-While-Write (RWW) operation for tape and potentially disk applications is needed in three cases: 1. high reliability; 2. data servo systems; 3. buried servo systems. All these applications require the read (servo) head and write head to operate simultaneously. Consequently, RWW operation requires work to suppress the so-called crossfeed field radiated from the write head. Traditionally, write-read crossfeed has been reduced in conventional magnetic recording heads by a variety of screening methods, but the effectiveness of these methods is very limited. On the other hand, the early theoretical investigations of the crossfeed problem, which concentrated on the flux line pattern in front of a head structure based on a simplified model, may not be comprehensive. Today a growing number of magnetic recording equipment manufacturers employ thin-film technology to fabricate heads, so modern heads are much smaller than in the past. The increasing use of thin-film metallic magnetic materials for heads, along with the appearance of other new technologies such as the MR reproductive mode and keepered media, has stimulated the need for an increased understanding of the crossfeed problem through advanced analysis methods, and for a satisfactory practical solution to achieve RWW operation. The work described in this thesis to suppress the crossfeed field involves both a novel reproductive mode of a Dual Magnetoresistive (DMR) head, originally designed to obtain a large reproduce sensitivity at high linear recording densities exceeding 100 kFCI, which plays the key role in suppressing the crossfeed (the corresponding signal-to-noise ratio is over 38 dB), and several other compensation schemes that give further suppression. Advanced analytical and numerical methods of estimating crossfeed in single- and multi-track thin-film/MR heads under both DC and AC excitations can help a head designer understand how the crossfeed field spreads, and therefore how to suppress it from the standpoint of the overall head configuration. This work also assesses the scale of the crossfeed problem by making measurements on current and improved heads, thereby identifying the main contributors to crossfeed. The relevance of this work to the computer industry lies in achieving simultaneous operation of the read head and write head, especially in a thin-film head assembly, because computer data rates must increase to meet the demands of storing more and more information in less time as computer graphics packages become more sophisticated.
APA, Harvard, Vancouver, ISO and other styles
5

Bai, Daniel Zhigang. "Micromagnetic Modeling of Write Heads for High-Density and High-Data-Rate Perpendicular Recording". Research Showcase @ CMU, 2004. http://repository.cmu.edu/dissertations/922.

Full text of the source
Abstract:
In this dissertation, three-dimensional dynamic micromagnetic modeling based on the Landau-Lifshitz equation with Gilbert damping has been used to study the magnetic processes of thin-film write heads for high-density and high-data-rate perpendicular magnetic recording. In the extremely narrow track width regime, for example around or below 100 nm, the head field is found to suffer significant loss from the ideal 4πMs value for perpendicular recording. In the meantime, the remanent head field becomes significant, posing a potential issue of head remanence erasure. Using micromagnetic modeling, various novel head designs have been investigated. For an overall head dimension around one micron, the shape and structure of the head yoke have been found to greatly affect the head magnetization reversal performance, and therefore the field rise time, especially for moderate driving currents. A lamination of the head across its thickness, both in the yoke and in the pole tip, yields excellent field reversal speed, and more importantly it suppresses the remanent field very well, making it a simple and effective approach to robust near-zero remanence. A single-pole head design with a stitched pole tip and a recessed side yoke can produce a significantly enhanced head field compared to a traditional single-pole head. Various head design parameters have been examined via micromagnetic modeling. Using the dynamic micromagnetic model, the magnetization reversal processes at data rates beyond 1 Gbit/s have been studied. The excitation of spin waves during the head field reversal and the energy dissipation afterwards were found important in dictating the field rise time. Both the drive current rise time and the Gilbert damping constant affect the field reversal speed. The effect of the soft underlayer (SUL) in both the write and the read processes has been studied via micromagnetic modeling. Although it is relatively easy to fulfill the requirement for magnetic imaging in writing, the SUL deteriorates the readback performance and lowers the achievable linear recording density. Various parameters have been investigated and solutions have been proposed. The effect of stress in magnetostrictive thin films has been studied both analytically and by simulation. The micromagnetic model has been extended to incorporate the stress-induced anisotropy effect. Simulations were done both on a magnetic thin film under stress, to show the static domains, and on a conceptual write head design that utilizes the stress-induced anisotropy to achieve better performance. A self-consistent model based on energy minimization has been developed to model both the magnetization and the stress-strain states of a magnetic thin film.
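The governing equation named in the abstract is standard and worth stating. This is the textbook Landau-Lifshitz-Gilbert form, with gyromagnetic ratio γ, Gilbert damping constant α, saturation magnetisation M_s and effective field H_eff; it is general background, not a detail specific to the dissertation:

```latex
\frac{\partial \mathbf{M}}{\partial t}
  = -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
  + \frac{\alpha}{M_s}\, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t}
```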
APA, Harvard, Vancouver, ISO and other styles
6

Söderbäck, Karl. "Organizing HLA data for improved navigation and searchability". Thesis, Linköpings universitet, Databas och informationsteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176029.

Full text of the source
Abstract:
Pitch Technologies specializes in the HLA standard, a standard that specifies data exchange between simulators. The company provides a solution for recording HLA data into a database as raw byte-data entries. In this thesis, different design solutions to store and organize recorded HLA data in a manner that reflects the content of the data are proposed and implemented, with the aim of making the data possible to query and analyze after recording. The impact of the design solutions on storage, read and write performance, as well as usability, is evaluated through a suite of tests run on a PostgreSQL database and a TimescaleDB database. It is concluded that no single design alternative is the best solution in all respects, but the most promising combination is proposed.
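The abstract does not publish the schemas, but the design space it describes (opaque byte entries versus content-reflecting tables) can be illustrated. A minimal sketch follows; the table and column names are hypothetical, and SQLite stands in for the PostgreSQL/TimescaleDB setups actually evaluated:

```python
import sqlite3

# Hypothetical illustration of the two storage designs being compared:
# (1) raw byte-blob entries, (2) a decoded, queryable attribute table.
# SQLite is used only so the sketch runs self-contained.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE raw_updates (
    ts REAL, object_id INTEGER, payload BLOB)""")                # design 1
db.execute("""CREATE TABLE decoded_attributes (
    ts REAL, object_id INTEGER, attribute TEXT, value REAL)""")  # design 2

# Design 2 pays a write-time decoding cost but enables content queries:
db.execute("INSERT INTO decoded_attributes VALUES (0.5, 42, 'altitude', 1100.0)")
rows = db.execute("""SELECT ts, value FROM decoded_attributes
                     WHERE object_id = 42 AND attribute = 'altitude'""").fetchall()
print(rows)
```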
APA, Harvard, Vancouver, ISO and other styles
7

Kalezhi, Josephat. "Modelling data storage in nano-island magnetic materials". Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/modelling-data-storage-in-nanoisland-magnetic-materials(9b449925-1a39-4711-8d55-82e6d8ac215c).html.

Full text of the source
Abstract:
Data storage in current hard disk drives is limited by three factors: the thermal stability of recorded data, the ability to store data, and the ability to read back the stored data. An attempt to alleviate one factor can affect the others. This ultimately limits the magnetic recording densities that can be achieved using traditional forms of data storage. In order to advance magnetic recording and postpone these inhibiting factors, new approaches are required. One approach is recording on Bit Patterned Media (BPM), where the medium is patterned into nanometer-sized magnetic islands, each of which stores a binary digit. This thesis presents a statistical model of write errors in BPM composed of single-domain islands. The model includes thermal activation in a calculation of write errors without resorting to time-consuming micromagnetic simulations of huge populations of islands, and incorporates distributions of position, magnetic and geometric properties of islands. In order to study the impact of island geometry variations on the recording performance of BPM systems, the magnetometric demagnetising factors for a truncated elliptic cone, a generalised geometry that reasonably describes most proposed island shapes, were derived analytically. The inclusion of thermal activation was enabled by an analytic derivation of the energy barrier for a single-domain island. The energy barrier is used in a calculation of transition rates, which in turn enables the calculation of error rates. The model has been used to study the write-error performance of BPM systems having distributions of position, geometric and magnetic property variations. Results showed that island intrinsic anisotropy and position variations have a larger impact on write-error performance than geometric variations. The model was also used to study thermally activated Adjacent Track Erasure (ATE) for a specific write head. The write head had a rectangular main pole of 13 by 40 nm (cross-track × down-track) with a pole-to-trailing-shield gap of 5 nm and a pole-to-side-shield gap of 10 nm. The distance from the pole to the top surface of the medium was 5 nm, the medium was 10 nm thick, and there was a 2 nm interlayer between the soft underlayer (SUL) and the medium, making a total SUL-to-pole spacing of 17 nm. The results showed that ATE would be a major problem and that cross-track head field gradients need to be more tightly controlled than down-track ones. With the write head used, recording at 1 Tb/in² would be possible on single-domain islands.
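The chain the abstract describes (energy barrier, then transition rate, then error rate) follows the standard Néel-Arrhenius form. The formulas below are the textbook version of that chain, with attempt frequency f_0, barrier E_B and thermal energy k_B T; they are background, not the thesis's specific derivation:

```latex
r = f_0 \exp\!\left(-\frac{E_B}{k_B T}\right), \qquad
P_{\mathrm{err}}(t) = 1 - e^{-rt}
```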
APA, Harvard, Vancouver, ISO and other styles
8

Amur, Hrishikesh. "Storage and aggregation for fast analytics systems". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50397.

Full text of the source
Abstract:
Computing in the last decade has been characterized by the rise of data-intensive scalable computing (DISC) systems. In particular, recent years have witnessed a rapid growth in the popularity of fast analytics systems. These systems exemplify a trend where queries that previously involved batch processing (e.g., running a MapReduce job) on a massive amount of data are increasingly expected to be answered in near real-time with low latency. This dissertation addresses the problem that existing designs for various components used in the software stack for DISC systems do not meet the requirements demanded by fast analytics applications. In this work, we focus specifically on two components: 1. Key-value storage: Recent work has focused primarily on supporting reads with high throughput and low latency. However, fast analytics applications require that new data entering the system (e.g., newly crawled web pages, currently trending topics) be quickly made available to queries and analysis codes. This means that along with supporting reads efficiently, these systems must also support writes with high throughput, which current systems fail to do. In the first part of this work, we solve this problem by proposing a new key-value storage system, called the WriteBuffer (WB) Tree, that provides up to 30× higher write performance and similar read performance compared to current high-performance systems. 2. GroupBy-Aggregate: Fast analytics systems require support for fast, incremental aggregation of data with low-latency access to results. Existing techniques are memory-inefficient and do not support incremental aggregation efficiently when aggregate data overflows to disk. In the second part of this dissertation, we propose a new data structure called the Compressed Buffer Tree (CBT) to implement memory-efficient in-memory aggregation. We also show how the WB Tree can be modified to support efficient disk-based aggregation.
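The WB Tree itself is not specified in the abstract, but the core idea it shares with other write-optimized structures, absorbing random writes in memory and flushing sorted runs sequentially, can be sketched. This is a generic illustration under that assumption, not the published design:

```python
import bisect

class BufferedKV:
    """Generic write-optimized store: absorb writes in a small in-memory
    buffer and flush sorted runs when full. Illustrates why designs like
    the WB Tree trade read-side work for write throughput; not the
    actual WB Tree."""

    def __init__(self, buffer_limit=4):
        self.buffer = {}            # in-memory writes, O(1) per put
        self.runs = []              # sorted runs, stand-in for on-disk data
        self.buffer_limit = buffer_limit

    def put(self, key, value):
        self.buffer[key] = value    # random writes stay in RAM
        if len(self.buffer) >= self.buffer_limit:
            self.runs.append(sorted(self.buffer.items()))  # sequential flush
            self.buffer = {}

    def get(self, key):
        if key in self.buffer:
            return self.buffer[key]
        for run in reversed(self.runs):        # newest run first
            keys = [k for k, _ in run]
            i = bisect.bisect_left(keys, key)
            if i < len(run) and run[i][0] == key:
                return run[i][1]
        return None

kv = BufferedKV()
for i in range(10):
    kv.put(i % 5, i)    # repeated overwrites are absorbed in memory
print(kv.get(3))        # -> 8, found in the most recent data
```

The trade-off is visible in get(): a read may probe several runs, which is why such designs pair high write throughput with extra read-side work.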
APA, Harvard, Vancouver, ISO and other styles
9

Vysocký, Ondřej. "Optimalizace distribuovaného I/O subsystému projektu k-Wave". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255408.

Full text of the source
Abstract:
This thesis deals with an effective solution for the parallel I/O of the k-Wave tool, which is designed for time-domain acoustic and ultrasound simulations. k-Wave is a supercomputer application; it runs on a Lustre file system, is implemented with MPI, and stores its data in a suitable data format (HDF5). I designed three optimization methods that fit k-Wave's needs, using accumulation and redistribution techniques. In comparison with the native write, every optimization method led to better write speed, up to 13.6 GB/s. These methods can be used to optimize any distributed application with write-speed issues.
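The abstract names accumulation and redistribution as the key techniques. A toy sketch of the accumulation idea follows, assuming mpi4py is available; it is not the k-Wave implementation, which targets Lustre and HDF5:

```python
# Toy sketch of write accumulation: instead of every rank issuing small
# writes, ranks send their blocks to an aggregator rank that issues one
# large sequential write. Not the k-Wave implementation.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local_block = bytes([rank % 256]) * 1024     # each rank's simulation output

blocks = comm.gather(local_block, root=0)    # accumulate on the aggregator
if rank == 0:
    with open("snapshot.bin", "wb") as f:    # one large sequential write
        f.write(b"".join(blocks))
```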
APA, Harvard, Vancouver, ISO and other styles
10

Hernane, Soumeya-Leila. "Modèles et algorithmes de partage de données cohérents pour le calcul parallèle distribué à haut débit". Thesis, Université de Lorraine, 2013. http://www.theses.fr/2013LORR0042/document.

Full text of the source
Abstract:
Data Handover (Dho) is a library of functions adapted to large-scale distributed systems. It provides routines for acquiring resources for reading or writing in ways that are coherent and transparent to users. We modelled the life cycle of Dho with a finite state automaton and found experimentally that our approach produces an overlap between the application's computation and the control of the data. These experiments were conducted both in simulated mode, using the GRAS library of the SimGrid toolkit, and in a real environment on the Grid'5000 platform, with several clients trying to access the resource concurrently according to the client-server paradigm. Using queueing theory, the stability of the model was demonstrated in a centralized setting. We extended the distributed mutual exclusion algorithm of Naimi and Trehel with the following features: (1) allowing the mobility (connection and disconnection) of processes (ADEMLE), (2) admitting shared locks (AEMLEP), and finally (3) merging both properties into a combined algorithm (ADEMLEP). The safety and liveness properties were proved theoretically for all extended algorithms. The proposed peer-to-peer system combines our extended algorithms with the original Dho model; lock and resource managers operate and interact with each other in a three-level architecture. Following the experimental study of the underlying system on Grid'5000, we demonstrated the performance and stability of the Dho model over a multitude of parameters.
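For readers unfamiliar with the base algorithm being extended: Naimi and Trehel's token-based mutual exclusion maintains a "last" (probable token owner) pointer and a "next" (successor) pointer per process. The sketch below is a simplified single-process simulation of the original algorithm, without the connection/disconnection and shared-lock extensions the thesis adds:

```python
# Simplified simulation of Naimi-Trehel token-based mutual exclusion.
# Hypothetical toy code: real deployments exchange these messages
# over a network rather than through direct function calls.
class Node:
    def __init__(self, ident, last):
        self.id, self.last = ident, last   # last: probable owner (None = me)
        self.next = None                   # successor waiting for the token
        self.has_token = last is None
        self.requesting = False

def receive_request(nodes, me, requester):
    n = nodes[me]
    if n.last is None:                     # I am the current root
        if n.requesting or not n.has_token:
            n.next = requester             # queue requester behind me
        else:
            n.has_token = False            # idle root: hand the token over
            nodes[requester].has_token = True
    else:
        receive_request(nodes, n.last, requester)  # forward along the path
    n.last = requester                     # requester is the new probable owner

def request_cs(nodes, i):
    n = nodes[i]
    n.requesting = True
    if not n.has_token:
        target, n.last = n.last, None      # I become the new root
        receive_request(nodes, target, i)

def release_cs(nodes, i):
    n = nodes[i]
    n.requesting = False
    if n.next is not None:                 # pass the token to my successor
        n.has_token = False
        nodes[n.next].has_token = True
        n.next = None

# Node 0 starts with the token; 1 and 2 request, then 0 releases.
nodes = {k: Node(k, None if k == 0 else 0) for k in range(3)}
request_cs(nodes, 0); request_cs(nodes, 1); request_cs(nodes, 2)
release_cs(nodes, 0)
print([nodes[k].has_token for k in range(3)])   # [False, True, False]
```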
APA, Harvard, Vancouver, ISO and other styles
11

He, Zhenyu. "Writer identification using wavelet, contourlet and statistical models". HKBU Institutional Repository, 2006. http://repository.hkbu.edu.hk/etd_ra/767.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
12

Smith, Cynthia Miller. "A Direct-Write Three-Dimensional Bioassembly Tool for Regenerative Medicine". Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1335%5F1%5Fm.pdf&type=application/pdf.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
13

Kurniawan, Budi. "Offline writer identification system using multiple neural networks". Phd thesis, Department of Electrical Engineering, 1998. http://hdl.handle.net/2123/9392.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
14

Shao, Cheng. "Multi-writer consistency conditions for shared memory objects". Texas A&M University, 2007. http://hdl.handle.net/1969.1/85806.

Full text of the source
Abstract:
Regularity is a shared memory consistency condition that has received considerable attention, notably in connection with quorum-based shared memory. Lamport's original definition of regularity assumed a single-writer model, however, and is not well defined when each shared variable may have multiple writers. In this thesis, we address this gap by formally extending the notion of regularity to a multi-writer model. We show that the extension is not trivial: while there exist various ways to extend the single-writer definition, the resulting definitions have different strengths. Specifically, we give several possible definitions of regularity in the presence of multiple writers. We then present a quorum-based algorithm to implement each of the proposed definitions and prove them correct. We study the relationships between these definitions and a number of other well-known consistency conditions, and give a partial order describing the relative strengths of these consistency conditions. Finally, we provide a practical context for our results by studying the correctness of two well-known algorithms for mutual exclusion under each of our proposed consistency conditions.
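As background for quorum-based shared memory with multiple writers, the classic approach orders writes by (timestamp, writer-id) pairs obtained from a majority quorum. The sketch below is a generic ABD-style toy register, not any of the thesis's proposed algorithms or consistency definitions:

```python
import random

class QuorumRegister:
    """Toy majority-quorum read/write register in the style of ABD.
    Illustrative background only; the thesis defines and compares
    multi-writer regularity conditions, not this algorithm."""

    def __init__(self, n_replicas=5):
        self.replicas = [((0, 0), None)] * n_replicas  # ((ts, writer), value)
        self.n = n_replicas

    def _quorum(self):
        return random.sample(range(self.n), self.n // 2 + 1)  # any majority

    def write(self, writer_id, value):
        # Phase 1: read a quorum to pick a fresh timestamp.
        ts = max(self.replicas[i][0][0] for i in self._quorum()) + 1
        # Phase 2: store with (ts, writer_id); ties broken by writer id.
        for i in self._quorum():
            if (ts, writer_id) > self.replicas[i][0]:
                self.replicas[i] = ((ts, writer_id), value)

    def read(self):
        # Return the newest value seen in a majority quorum; any read
        # quorum intersects the latest completed write's quorum.
        return max((self.replicas[i] for i in self._quorum()),
                   key=lambda pair: pair[0])[1]

r = QuorumRegister()
r.write(1, "a"); r.write(2, "b")   # two writers, ordered by (ts, id)
print(r.read())                    # -> "b"
```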
APA, Harvard, Vancouver, ISO and other styles
15

Lee, Yi-An (李翼安). "Data Compression Ratio-aware Routing for Multiple E-Beam Direct Write Systems". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/x59xpu.

Full text of the source
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electronics Engineering
Academic year 105 (ROC calendar)
Along with the advancement of technology, the feature sizes of integrated circuits (ICs) are shrinking day after day, but the resolution of the ArF laser is not sufficient to support next-generation lithography. Electron-beam lithography has a role to play in next-generation lithography thanks to its high accuracy. To support the accuracy of the electron beam, a massive amount of circuit data has to be delivered to the e-beam emitter. However, circuits nowadays have become more complicated, and synchronizing the operation of electron-beam lithography with data transmission relies on the speed of data transmission, which is not sufficiently fast even with today's technologies. So in practice, the massive circuit data should be compressed before being transmitted over optical fibers, and then decompressed on the chip of the e-beam machines. In this thesis, considering the data arrangement after rasterization, we propose a method to improve the router. Besides, we modify the data compression algorithm to support this particular arrangement of data. The experimental results show that we not only improve the data compression ratio with our proposed algorithms but also establish a procedure of data transformation for multiple electron-beam direct-write systems.
APA, Harvard, Vancouver, ISO and other styles
16

Chiu, Yu-Hsiang (邱煜翔). "Data Compression Ratio-aware Detailed Routing for Multiple E-Beam Direct Write Systems". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/49819815702155267265.

Full text of the source
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electronics Engineering
Academic year 104 (ROC calendar)
The feature sizes of integrated circuits (ICs) are shrinking along with the advancement of technology, but the resolution of the ArF laser is far from the target for next-generation lithography. Electron-beam (E-beam) lithography, with its high accuracy, is very likely to become the main player in next-generation lithography. Because of the accuracy of the E-beam, the exact information of the circuit has to be delivered to the E-beam emitter. However, circuits nowadays have become so complicated that the success of this process relies on the speed of data transmission, which is not sufficiently fast even with today's technologies. So in practice, data should be compressed first, transmitted over optical fibers, and then decompressed in the E-beam machines. In this thesis, we propose a detailed routing method to improve data compression quality before the actual compression algorithm is applied. The experimental results show that, with one particular data compression algorithm, LineDiff Entropy, we improve the data compression ratio with our proposed detailed router. We conclude that considering the data compression ratio in the physical design phase is a field worth studying.
APA, Harvard, Vancouver, ISO and other styles
17

Wang, Yueh-yi (王岳宜). "Architecture Design and RTL Implementation of SA-110 Compatible IMMU and Data Write Buffer". Thesis, 1999. http://ndltd.ncl.edu.tw/handle/03762313072137020886.

Full text of the source
Abstract:
Master's thesis
National Chiao Tung University
Department of Computer Science and Information Engineering
Academic year 87 (ROC calendar)
SA-110 is a 32-bit general-purpose RISC microprocessor with a 16KB instruction cache (Icache), a 16KB write-back data cache (Dcache), two memory management units (IMMU and DMMU), separate 32-entry translation look-aside buffers (ITLB and DTLB), and an 8-entry write buffer combined on a single chip. Both the ITLB and DTLB can map segments, small pages, and large pages. This thesis presents the architecture design and RTL implementation of the IMMU and Data Write Buffer in the SA-110 microprocessor. In this design, the IMMU supports a conventional two-level page table structure and has a dedicated 32-entry ITLB to cache its page table entries, reducing the translation time required. As for the Write Buffer, we design its control logic and its interfaces to the Bus Interface Unit and the Dcache. The functions of flushing and merging the buffer contents are also implemented. Finally, we implement our design and simulate it with Verilog-XL to verify that each module functions correctly.
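The structure described, a two-level page-table walk backed by a small TLB, can be sketched in software. The field widths below (4 KB pages, 256-entry second-level tables) and the LRU replacement are illustrative assumptions, not the SA-110's exact hardware policy:

```python
# Generic sketch of a two-level page-table walk with a 32-entry TLB,
# the structure the IMMU described above implements in hardware.
from collections import OrderedDict

PAGE_SHIFT, L1_SHIFT = 12, 20          # 4 KB pages, 1 MB first-level regions

class SimpleMMU:
    def __init__(self, l1_table, tlb_entries=32):
        self.l1 = l1_table             # L1 index -> second-level table (dict)
        self.tlb = OrderedDict()       # virtual page -> physical page
        self.tlb_entries = tlb_entries

    def translate(self, vaddr):
        vpage, offset = vaddr >> PAGE_SHIFT, vaddr & 0xFFF
        if vpage in self.tlb:                      # TLB hit: no table walk
            self.tlb.move_to_end(vpage)
            return (self.tlb[vpage] << PAGE_SHIFT) | offset
        l1_idx = vaddr >> L1_SHIFT                 # walk: first-level entry
        l2_idx = (vaddr >> PAGE_SHIFT) & 0xFF      # then second-level entry
        ppage = self.l1[l1_idx][l2_idx]
        if len(self.tlb) >= self.tlb_entries:      # simple LRU replacement
            self.tlb.popitem(last=False)
        self.tlb[vpage] = ppage
        return (ppage << PAGE_SHIFT) | offset

mmu = SimpleMMU({0: {0: 0x42}})
print(hex(mmu.translate(0x0123)))   # -> 0x42123; a second access hits the TLB
```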
APA, Harvard, Vancouver, ISO and other styles
18

Lu, Yi-Ying (呂易穎). "K-Grouping: A Machine-Learning-based Data Classifier to Reduce the Write Amplification in SSDs". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/88vr3x.

Full text of the source
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electronic Engineering
Academic year 107 (ROC calendar)
Solid-state drives (SSDs) composed of flash memory have the advantages of non-volatility, fast speed, shock resistance, low power consumption, and small size. In recent years, SSDs have been widely used as data storage for various devices. Two critical characteristics of flash memory are that it does not support in-place updates of data, and that it must write data in units of a page and erase data in units of a block. Due to these two characteristics, when a block is selected as a victim block to erase, the remaining valid pages must be moved from the victim block to another free block. Therefore, how to reduce the amount of valid-page movement is a crucial issue for SSDs. Data classification can concentrate the distribution of invalid pages in the flash memory and reduce the data movement cost. This thesis proposes a method to design an adaptive data classifier for different workloads based on a machine learning algorithm. The classifier writes requests with the same characteristics into the same group of data blocks. Through such a design, it improves the performance of SSDs by reducing live-page copying and, in turn, the write amplification.
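The abstract does not give the feature set or the learning algorithm's details. The toy below illustrates the general idea, clustering write requests by features such as address and update frequency and directing each cluster to its own block group, with scikit-learn's KMeans as an assumed stand-in for the thesis's classifier:

```python
# Toy illustration of grouping write requests so that data with similar
# lifetime/locality lands in the same flash block group. The features
# and the use of KMeans here are illustrative assumptions, not the
# thesis's K-Grouping classifier.
import numpy as np
from sklearn.cluster import KMeans

# One row per write request: [logical block address, recent write count]
requests = np.array([
    [100, 50], [104, 48], [102, 55],      # hot, frequently updated data
    [90000, 1], [90112, 2], [90060, 1],   # cold, write-once data
    [5000, 10], [5040, 12],               # warm data
], dtype=float)

groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(requests)

# Each cluster gets its own group of active blocks, so invalid pages
# concentrate in few blocks and garbage collection moves fewer valid
# pages, which is what lowers write amplification.
for gid in sorted(set(groups)):
    print("block group", gid, "->", requests[groups == gid][:, 0].astype(int))
```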
APA, Harvard, Vancouver, ISO and other styles
19

林勇維. "Analysis and Design of Data-Aware Dynamic Supply Write-Assist Scheme for Cross-Point 8T SRAM". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/52172197104937269628.

Full text of the source
Abstract:
Master's thesis
National Chiao Tung University
Institute of Electronics
Academic year 99 (ROC calendar)
With the fastest access speed among semiconductor memories, embedded Static Random Access Memory (SRAM) plays an important role in various System-on-Chip (SoC) designs. Because SRAM occupies a large share of the chip, its low-voltage operation capability can lower the total system power significantly. But with technology scaling, variation severely degrades the functionality of digital circuits. In this thesis, a Data-Aware dynamic supply Write-Assist scheme is proposed and implemented in a 128Kb cross-point 8T SRAM. This technique improves the Write margin by over 20% on average at operating voltages ranging from 0.5V to 0.8V, and features good anti-variation ability with minimal area overhead. Meanwhile, Write operations that would fail without a Write-assist technique complete with picosecond-scale Write performance on average over the same voltage range. Simulation results show that the chip achieves an operating speed of 474MHz at VDD = 0.6V.
APA, Harvard, Vancouver, ISO and other styles
20

Tang, Chin-Khai (陳晉凱). "Layout Data Compression Algorithm and Its Hardware Decoder Design for Multiple Electron-Beam Direct-Write Systems". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/92474890780302941846.

Full text of the source
Abstract:
Doctoral dissertation
National Taiwan University
Graduate Institute of Electronics Engineering
Academic year 104 (ROC calendar)
The advances in optical projection lithography have ensured the steady continuation of Moore's law. However, the wavelength of light sources has reached its lowest limit of 193 nm, and optical diffraction has become a major problem; thus, other cost-effective solutions are urgently needed. Electron-beam maskless lithography is a powerful technology capable of very-high-resolution writing, but it suffers from a slow electron-beam scanning process and low throughput. In recent years, research on multiple electron-beam direct-write systems that use massively parallel electron-beam emitters to achieve a fast scanning process and high WPH has gained popularity. In multiple electron-beam direct-write systems, one of the technical challenges is to transfer the very large amount of electron-beam layout data that controls the electron-beam emitters from the data centers to the systems. Furthermore, due to the enormous data transfer rates, a large number of hardware decoders are required. Each hardware decoder must be able to decompress EBL data at high data rates, and its hardware resource requirements should be low, so that the cost of implementing and operating the multiple electron-beam direct-write systems can be minimized. In this dissertation, a lossless electron-beam layout data compression algorithm, LineDiff Entropy, and its low-complexity high-performance hardware decoder for multiple electron-beam direct-write lithography systems are proposed. The algorithm compares consecutive data scanlines and encodes the data based on the change/no-change of pixel values and the lengths of pixel sequences. Then, the compaction steps of data omission, merging, and encoding of consecutive long identical scanlines are applied, and custom prefix codes are assigned to data with high occurrence frequency. The hardware decoder is designed as three circuit blocks that perform entropy decoding, de-compaction, and electron-beam layout data generation through parallel outputs, and it requires only minimal hardware resources. The results demonstrate that our algorithm achieves excellent compression performance and that the hardware decoder can decompress data at very high data rates.
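The first stage the abstract describes, comparing consecutive scanlines and run-length encoding the change/no-change pattern, can be sketched directly. The compaction passes and custom prefix codes of LineDiff Entropy are omitted here; this shows only the core idea:

```python
# Sketch of the scanline-differencing stage described in the abstract:
# each row is compared with the previous one, and the change/no-change
# map is run-length encoded. Illustrative only.
def linediff_encode(scanlines):
    prev = [0] * len(scanlines[0])            # assume writing starts blank
    encoded = []
    for line in scanlines:
        changed = [int(a != b) for a, b in zip(line, prev)]
        runs, count = [], 1
        for i in range(1, len(changed)):      # run lengths of the change map
            if changed[i] == changed[i - 1]:
                count += 1
            else:
                runs.append((changed[i - 1], count))
                count = 1
        runs.append((changed[-1], count))
        encoded.append(runs)
        prev = line
    return encoded

rows = [[0, 0, 1, 1, 0, 0],
        [0, 0, 1, 1, 0, 0],    # identical row: one long "no change" run
        [1, 1, 1, 1, 0, 0]]
for runs in linediff_encode(rows):
    print(runs)    # e.g. the identical row compresses to [(0, 6)]
```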
APA, Harvard, Vancouver, ISO and other styles
21

Chang, Chi-Shin (張琦昕). "40nm 1.0Mb 6T Pipeline SRAM with Step-Up Word-Line and Adaptive-Data-Aware Write-Assist Design". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/38003850682518485852.

Full text of the source
Abstract:
Master's thesis
National Chiao Tung University
Institute of Electronics
Academic year 100 (ROC calendar)
More and more memory is used in today's electronic products, and consequently the design of memory is becoming crucial. SRAM is usually used in high-performance microprocessor caches and embedded system applications because it has the highest operating speed in the memory family. Conventional 6T SRAM uses a "thin-cell" layout to achieve high density, so it has become the mainstream of SRAM design. However, with recent CMOS technology scaling, the greatest barrier to achieving high yield is process variation. Process variation is especially serious for high-density SRAM because of the small device size and large capacity, and it seriously degrades the SRAM cell operating margins in advanced technology nodes. In low-voltage operation, conventional 6T SRAM can hardly survive. To improve the survival probability of 6T SRAM in advanced processes, we propose Read/Write assist circuit techniques. The proposed Step-Up Word-Line technique improves the Read Static Noise Margin with acceptable loss of read speed and Write margin. The Write ability and Write performance are enhanced by a column-based Adaptive-Data-Aware Write-Assist scheme. We also use a pipeline scheme to increase the operating speed. In this work, we implement a 1.0Mb high-performance 6T SRAM with a 2-stage pipeline and a single supply voltage in a 40nm Low-Standby-Power bulk CMOS technology. The chip can operate across a wide voltage range from 1.2V to 0.7V, with an operating frequency of 800MHz at 1.2V and 25°C.
APA, Harvard, Vancouver, ISO and other styles
22

Lin, Jing-Cian (林璟謙). "Design of a Temperature-Aware Highly Reliable Resistive Random Access Memory Controller on Data Retention and Write Error Issues". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/tmhvy6.

Full text of the source
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering
Academic year 102 (ROC calendar)
Resistive random access memory (ReRAM) is one of the emerging non-volatile memories. The main advantages of ReRAM are low power, high speed, a simple structure, and compatibility with the CMOS process. However, ReRAM has reliability issues due to resistance instability at high temperature and failures during transitions. The resistances of ReRAM cells drift toward an intermediate state at high temperature, and during write operations, write-failure errors occur because some ReRAM cells cannot change their resistances successfully. This thesis proposes a temperature-aware memory controller to deal with data retention errors and write-failure errors. The proposed controller uses a temperature-aware operation scheme and an adaptive write scheme to improve the reliability of ReRAM. The temperature-aware operation scheme adjusts the ReRAM operation settings according to temperature to reduce resistance instability at high temperature, while the adaptive write scheme considers write-failure and data retention issues simultaneously. As a result, the adaptive write scheme improves the bit-error rate (BER) of ReRAM by 96.6%, the temperature-aware operation scheme reduces the BER by 72.2% at 125°C, and the proposed temperature-aware controller reduces the BER by 88.2% at 125°C.
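The abstract reports results but not the schemes' internals. A generic program-and-verify loop with temperature-indexed settings captures the flavor of both schemes; the threshold table, retry policy, and the toy device model below are all hypothetical, as real ReRAM settings are device-specific:

```python
# Generic program-and-verify sketch in the spirit of the two schemes
# named above. All constants here are hypothetical.
TEMP_SETTINGS = {              # temperature band -> (pulse_v, verify_margin)
    "normal": (1.8, 0.10),
    "hot":    (2.0, 0.20),     # stronger write and wider margin when hot
}

def write_cell(cell, target, temperature_c, max_retries=5):
    band = "hot" if temperature_c >= 85 else "normal"
    pulse_v, margin = TEMP_SETTINGS[band]
    for _ in range(max_retries):
        cell.apply_pulse(target, pulse_v)
        if abs(cell.read_resistance() - target) <= margin * target:
            return True                  # verified within margin
        pulse_v += 0.1                   # escalate on write failure
    return False                         # report unrecoverable write error

class FakeCell:                          # stand-in device model for the demo
    def __init__(self): self.r = 1.0
    def apply_pulse(self, target, v): self.r += (target - self.r) * min(v / 2.5, 1.0)
    def read_resistance(self): return self.r

print(write_cell(FakeCell(), target=10.0, temperature_c=125))  # -> True
```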
APA, Harvard, Vancouver, ISO and other styles
23

Sun, Yu-Yang (孫煜洋). "The study and implementation of a hybrid DOE read/write module used in the color-inkjet high density data storage". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/m925wz.

Full text of the source
Abstract:
Master's thesis
National Formosa University
Graduate Institute of Mechanical and Electro-Mechanical Engineering
Academic year 96 (ROC calendar)
A high-density optical disk storage concept using a microholographic multiplexing method has become attractive to the data storage industry. A hybrid diffractive-refractive objective lens is designed and implemented in this study. It creates a beam with an extended depth of focus whose diffraction-limited spot size remains nearly unchanged throughout the recording media volume. In addition, the benefits of using a hybrid optical element include (1) high numerical aperture, (2) low cost, (3) stability in volume production, and (4) strong optical enhancement. In this study, we first outline the design procedure and use a commercially available computer code to eliminate the chromatic aberration in the lens design. The Taguchi method is applied to find the optimum design parameters. The hybrid lens is fabricated on an ultra-precision machine, and its optical performance is verified by a series of experiments. The control program used in the optical-storage station was written in Visual Basic; users can operate the motor at various speeds via this user-friendly interface.
APA, Harvard, Vancouver, ISO and other styles
24

Chen, Chien-Fu (陳建甫). "An 8T SRAM with Dual Data-Aware Write-Assists and Negative Read Wordline for High Cell-Stability, Speed and Area-Efficiency". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/00754860611419607580.

Full text of the source
Abstract:
Master's thesis
National Tsing Hua University
Department of Electrical Engineering
Academic year 101 (ROC calendar)
Static Random Access Memory (SRAM) appears in almost all electronics and requires much more area than other circuits in an SoC chip, so SRAM consumes most of a chip's power. Solving the power consumption issue of SRAM without reducing its operating performance is therefore a big challenge. Lowering the operating voltage is a useful way to reduce SRAM power consumption, but SRAM operated at low VDD suffers from the following: (1) read disturb and half-select disturb, (2) a trade-off between write ability and half-select tolerance, and (3) reduced sensing margin (SM) as well as read failures and slow speeds. Previous works achieved read-disturb-free operation but did not solve the trade-off between half-select tolerance and write ability without a time-consuming and power-consuming write-back (WB) scheme; this prevented previous work from operating at ultra-low voltage with higher speed. This work proposes an 8T cell with dual data-aware write-assist (D2AW) and negative read word-line (NRWL) schemes. The column-based D2AW provides, for the first time, a solution to the trade-off between the row/column half-select (HS) static noise margins (SNM) and the write margin (WM), thanks to the dual data-aware controls of (1) cell-VSS (DA-CVSS) and (2) the write word-line (DA-WWL). NRWL expands the RBL voltage swing and improves the read speed. A fabricated 128-row 16Kb D2AW8T SRAM in a 65nm CMOS logic process achieved 7.3MHz/48MHz at VDD = 210mV/300mV in oscilloscope and Personal Kalos 2 testing. The figure of merit (FOM), [cell stability (CS) * cycle frequency (f)] / [cell area (A) * minimum VDD (VDDmin)], is over 14 times higher than that of other low-VDDmin SRAM cells.
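The figure of merit quoted at the end reads more easily as a formula; this only restates the abstract's definition:

```latex
\mathrm{FOM} \;=\; \frac{\mathrm{CS} \times f}{A \times V_{DD,\min}}
% CS: cell stability, f: cycle frequency,
% A: cell area, V_DD,min: minimum supply voltage
```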
APA, Harvard, Vancouver, ISO and other styles
