To see other types of publications on this topic, follow the link: Data Storage Representations.

Dissertations on the topic "Data Storage Representations"

Create a source citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 15 dissertations for your research on the topic "Data Storage Representations".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, where these are available in the work's metadata.

Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

Munalula, Themba. "Measuring the applicability of Open Data Standards to a single distributed organisation: an application to the COMESA Secretariat." Thesis, University of Cape Town, 2008. http://pubs.cs.uct.ac.za/archive/00000461/.

Abstract:
Open data standardization has many known benefits, including the availability of tools for standard encoding formats, interoperability among systems and long-term preservation of data. Mark-up languages and their use on the World Wide Web have made data sharing easier still. The Extensible Markup Language (XML), in particular, has succeeded due to its simplicity and ease of use. Its primary purpose is to facilitate the sharing of data across different information systems, particularly systems connected via the Internet. Whether open and standardized or not, organizations generate data daily. Offline exchange of documents and data is undertaken using existing formats that are typically defined by the organizations that generate the data in the documents. With the Internet, the realization of data exchange has had a direct implication on the need for interoperability and comparability. As much as standardization is the accepted approach for online data exchange, little is understood about how a specific organization's data "fits" a given data standard. This dissertation develops data metrics that represent the extent to which data standards can be applied to an organization's data. The research identified key issues that affect data interoperability or the feasibility of a move towards interoperability, and tested the unwritten rule that organizational setups tend to regard and design data requirements more from internal needs than interoperability needs. Essentially, by generating metrics for a number of data attributes, the research quantified the extent of the gap that exists between organizational data and data standards. Key data attributes, i.e. completeness, concise representation, relevance and complexity, were selected and used as the basis for metric generation. In addition to the generation of attribute-based metrics, hybrid metrics representing a measure of the "goodness of fit" of the source data to standard data were generated. Regarding the completeness attribute, it was found that most Common Market for Eastern and Southern Africa (COMESA) head office data clusters had lower-than-desired metrics, reflecting the gap highlighted above. The same applied to the concise representation attribute: most data clusters had more concise representation in the COMESA data than in the data standard. The complexity metrics generated confirmed that the number of data elements is a key determinant in any move towards the adoption of data standards. This was also borne out by the magnitude of the hybrid metrics, which to some extent depended on the complexity metrics. An additional contribution of the research was the assignment of expert users' weights to the data elements and the recalculation of all metrics. A comparison with the unweighted metrics yielded a mixed picture. Among the completeness metrics, and for the data retention rate in particular, increases were recorded for data clusters in which greater weight was allocated to mapped elements than to unmapped ones. The same applied to the relative elements ratio. The complexity metrics showed general declines when user-weighted elements were used in the computation as opposed to unweighted elements. This again was because these metrics depend on the number of elements: in the former case the weights were evenly distributed, while in the latter case some elements were given lower weights by the expert users, leading to an overall decline in the metric.
A number of implications emerged for COMESA. COMESA would have to determine the extent to which its source data rely on data sources for which international standards are being promoted. Secondly, an inventory of users and collectors of the data COMESA uses is necessary in order to determine who would benefit from a standards-based information system. Thirdly, and from an organizational perspective, COMESA needs to designate a team to guide the process of creating such a standards-based information system. Lastly, there is a need for involvement in the consortia that are responsible for these data standards, which has implications for organizational resources. In totality, this research provided a methodology for determining the feasibility of a move towards standardization, making it possible to answer the critical first-stage questions that such a move raises.
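The attribute metrics described in this abstract come down to ratio arithmetic over schema elements, with expert weights changing each element's contribution. A minimal sketch of a completeness-style metric, assuming hypothetical element names and weights (nothing here is drawn from the dissertation or from COMESA data):

```python
# Sketch of an attribute-based "standards fit" metric. All element names
# and weights are hypothetical examples, not COMESA data.

def completeness(org_elements, mapped_elements, weights=None):
    """Fraction of an organisation's elements that map onto the standard.

    With expert weights, each element contributes its weight instead of 1,
    so heavily weighted mapped elements raise the metric disproportionately.
    """
    if weights is None:
        weights = {e: 1.0 for e in org_elements}
    total = sum(weights[e] for e in org_elements)
    mapped = sum(weights[e] for e in org_elements if e in mapped_elements)
    return mapped / total if total else 0.0

org = ["trade_value", "partner_country", "hs_code", "internal_ref"]
mapped = {"trade_value", "partner_country", "hs_code"}  # have a standard equivalent

print(completeness(org, mapped))                      # unweighted: 0.75
print(completeness(org, mapped, {"trade_value": 3.0, "partner_country": 2.0,
                                 "hs_code": 2.0, "internal_ref": 1.0}))  # 0.875
```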
2

Munyaradzi, Ngoni. "Transcription of the Bleek and Lloyd Collection using the Bossa Volunteer Thinking Framework." Thesis, University of Cape Town, 2013. http://pubs.cs.uct.ac.za/archive/00000913/.

Abstract:
The digital Bleek and Lloyd Collection is a rare collection that contains artwork, notebooks and dictionaries of the earliest inhabitants of Southern Africa. Previous attempts have been made to recognize the complex text in the notebooks using machine learning techniques, but due to the complexity of the manuscripts the recognition accuracy was low. In this research, a crowdsourcing-based method is proposed to transcribe the historical handwritten manuscripts, where volunteers transcribe the notebooks online. An online crowdsourcing transcription tool was developed and deployed. Experiments were conducted to determine the quality and accuracy of the volunteers' transcriptions compared with a gold standard. The results show that volunteers are able to produce reliable transcriptions of high quality. The inter-transcriber agreement is 80% for |Xam text and 95% for English text. When the |Xam text transcriptions produced by the volunteers are compared with the gold standard, the volunteers achieve an average accuracy of 69.69%. Findings show that there exists a positive linear correlation between the inter-transcriber agreement and the accuracy of transcriptions. The user survey revealed that volunteers found the transcription process enjoyable, though difficult. Results indicate that volunteer thinking can be used to crowdsource intellectually intensive tasks in digital libraries, such as the transcription of handwritten manuscripts. Volunteer thinking outperforms machine learning techniques at the task of transcribing notebooks from the Bleek and Lloyd Collection.
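Both headline figures in this abstract (inter-transcriber agreement and accuracy against a gold standard) are string-similarity averages. A minimal sketch of how such figures might be computed, assuming invented sample transcriptions (the real study compares volunteer transcriptions of |Xam and English notebook pages):

```python
# Sketch: agreement and accuracy as normalised string similarity.
# The sample transcriptions are invented, not Bleek and Lloyd data.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of matching characters between two transcriptions."""
    return SequenceMatcher(None, a, b).ratio()

volunteers = ["the lion spoke", "the lion spoke", "the loin spoke"]
gold = "the lion spoke"

# Inter-transcriber agreement: mean pairwise similarity among volunteers.
pairs = list(combinations(volunteers, 2))
agreement = sum(similarity(a, b) for a, b in pairs) / len(pairs)

# Accuracy: mean similarity of each volunteer's transcription to the gold standard.
accuracy = sum(similarity(v, gold) for v in volunteers) / len(volunteers)

print(f"agreement={agreement:.2f}  accuracy={accuracy:.2f}")
```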
3

Ugail, Hassan, and Eyad Elyan. "Efficient 3D data representation for biometric applications." IOS Press, 2007. http://hdl.handle.net/10454/2683.

Abstract:
An important issue in many of today's biometric applications is the development of efficient and accurate techniques for representing related 3D data. Such data is often available through the digitization of complex geometric objects which are of importance to biometric applications. For example, in the area of 3D face recognition a digital point cloud of data corresponding to a given face is usually provided by a 3D digital scanner. For efficient data storage, and for identification/authentication in a timely fashion, such data needs to be represented using a few meaningful parameters or variables. Here we show how mathematical techniques based on Partial Differential Equations (PDEs) can be utilized to represent complex 3D data in an efficiently parameterized way. For example, in the case of a 3D face we show how it can be represented using PDEs whereby a handful of key facial parameters can be identified for efficient storage and verification.
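The storage argument in this abstract can be made concrete with back-of-envelope arithmetic: a raw scan stores three floats per point, while a PDE patch stores only its boundary-curve parameters. A sketch with hypothetical counts (the paper does not give these numbers):

```python
# Back-of-envelope storage comparison for one facial surface patch.
# All counts are hypothetical illustrations, not figures from the paper.
scan_points = 100_000                  # points in a raw 3D face scan (assumed)
raw_floats = scan_points * 3           # x, y, z per point

curves = 4                             # boundary curves defining a PDE patch
ctrl_points_per_curve = 20             # control points kept per curve (assumed)
pde_floats = curves * ctrl_points_per_curve * 3

print(f"raw scan: {raw_floats} floats; PDE patch: {pde_floats} floats "
      f"(~{raw_floats // pde_floats}x smaller)")
```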
4

Folmer, Brennan Thomas. "Metadata storage for file management systems: data storage and representation techniques for a file management system." [Gainesville, Fla.]: University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1001141.

5

Cheeseman, Bevan. "The Adaptive Particle Representation (APR) for Simple and Efficient Adaptive Resolution Processing, Storage and Simulations." Doctoral thesis, Saechsische Landesbibliothek - Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-234245.

Abstract:
This thesis presents the Adaptive Particle Representation (APR), a novel adaptive data representation that can be used for general data processing, storage, and simulations. The APR is motivated, and designed, as a replacement representation for pixel images to address computational and memory bottlenecks in processing pipelines for studying spatiotemporal processes in biology using Light-sheet Fluorescence Microscopy (LSFM) data. The APR is an adaptive function representation that represents a function in a spatially adaptive way using a set of Particle Cells V and function values stored at particle collocation points P∗. The Particle Cells partition space, and implicitly define a piecewise constant Implied Resolution Function R∗(y) and particle sampling locations. As an adaptive data representation, the APR can be used to provide both computational and memory benefits by aligning the number of Particle Cells and particles with the spatial scales of the function. The APR allows reconstruction of a function value at any location y using any positive weighted combination of particles within a distance of R∗(y). The Particle Cells V are selected such that the error between the reconstruction and the original function, when weighted by a function σ(y), is below a user-set relative error threshold E. We call this the Reconstruction Condition and σ(y) the Local Intensity Scale. σ(y) is motivated by local gain controls in the human visual system, and for LSFM data can be used to account for contrast variations across an image. The APR is formed by satisfying an additional condition on R∗(y), which we call the Resolution Bound. The Resolution Bound relates R∗(y) to a local maximum of the absolute value of the function derivatives within a distance R∗(y) of y. Given restrictions on σ(y), satisfaction of the Resolution Bound also guarantees satisfaction of the Reconstruction Condition. In this thesis, we present algorithms and approaches that find the optimal Implied Resolution Function for general problems posed in the form of the Resolution Bound, using an algorithm over Particle Cells that we call the Pulling Scheme. Here, optimal means the largest R∗(y) at each location. The Pulling Scheme has worst-case linear complexity in the number of pixels when used to represent images. The approach is general in that the same algorithm can be used for general (α,m)-Reconstruction Conditions, where α denotes the function derivative and m the minimum order of the reconstruction. Further, it can also be combined with anisotropic neighborhoods to provide adaptation in both space and time. The APR can be used with both noise-free and noisy data. For noisy data, the Reconstruction Condition can no longer be guaranteed, but numerical results show an optimal range of relative error E that provides a maximum increase in PSNR over the noisy input data. Further, if it is assumed the Implied Resolution Function satisfies the Resolution Bound, then the APR converges to a biased estimate (constant factor of E) at the optimal statistical rate. The APR continues a long tradition of adaptive data representations and represents a unique trade-off between the level of adaptation of the representation and simplicity, both in the APR's structure and in its use for processing. Here, we numerically evaluate the adaptation and processing of the APR for use with LSFM data, using both synthetic and LSFM exemplar data.
It is concluded from these results that the APR has the correct properties to provide a replacement for pixel images and address bottlenecks in processing LSFM data. Removal of the bottleneck would be achieved by adapting to spatial, temporal and intensity scale variations in the data. Further, we propose that the simple structure of the general APR could provide benefit in areas such as the numerical solution of differential equations, adaptive regression methods, and surface representation for computer graphics.
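The Reconstruction Condition above is easy to state concretely in one dimension: the reconstruction error, scaled by the Local Intensity Scale, must stay below E everywhere. A toy sketch assuming a nearest-particle (piecewise-constant) reconstruction and a constant σ; the real APR uses Particle Cells and the Implied Resolution Function R∗(y):

```python
# Toy 1D illustration of an APR-style Reconstruction Condition:
#   |reconstruction(y) - f(y)| / sigma(y) <= E  for all y.
# Assumptions: nearest-particle reconstruction and constant sigma; the
# actual APR uses Particle Cells and an Implied Resolution Function.
import numpy as np

def f(y):
    return np.exp(-50 * (y - 0.5) ** 2)   # sharp bump: needs fine sampling near 0.5

def reconstruct(y, particles, values):
    idx = np.abs(particles[None, :] - y[:, None]).argmin(axis=1)
    return values[idx]

E, sigma = 0.05, 1.0
y = np.linspace(0, 1, 2001)

# Adaptive particle set: dense where |f'| is large, sparse elsewhere.
grad = np.abs(np.gradient(f(y), y))
keep = grad > grad.max() * 0.02
particles = np.union1d(y[keep][::5], np.linspace(0, 1, 9))
values = f(particles)

err = np.abs(reconstruct(y, particles, values) - f(y)) / sigma
print(f"{len(particles)} particles vs {len(y)} pixels; "
      f"condition met: {bool((err <= E).all())}, max error {err.max():.3f}")
```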
6

VanCalcar, Jenny E. (Jenny Elizabeth). "Collection and representation of GIS data to aid household water treatment and safe storage technology implementation in the northern region of Ghana." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34583.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2006.
Includes bibliographical references (leaves 46-51).
In 2005, a start-up social business called Pure Home Water (PHW) was founded in Ghana to promote and sell household water treatment and safe storage (HWTS) technologies. The original aim of the company was to offer a variety of products, allowing customers to choose the technology which best fit their individual needs. This differed from the typical implementation by HWTS promoters to date, in which an organization often distributes a single technology for the population to use. Instead, Pure Home Water wanted to give users a choice. PHW is also unique because it is attempting to sell its products without any subsidy. The goal is to create a sustainable business that will both bring better quality water to the population and be financially self-supporting. Because the company is new, a need existed to gather data on the demographics, health, and water and sanitation infrastructure within the region. Due to the geographic nature of the project, it was decided that a Geographic Information System (GIS) would be the best tool to store, analyze and represent the data. The system could be used to help plan relevant business strategies, and maps could be created to visually communicate important information among the Pure Home Water team and other interested parties. The final database did achieve the goal of collecting and bringing together important regional information in a form hopefully useful to PHW, future MIT teams and others. However, the use of the database for long-term planning is currently too advanced for the small company.
7

Elyan, Eyad, and Hassan Ugail. "Reconstruction of 3D human facial images using partial differential equations." Academy Publisher, 2007. http://hdl.handle.net/10454/2644.

Abstract:
One of the challenging problems in geometric modeling and computer graphics is the construction of realistic human facial geometry. Such geometry is essential for a wide range of applications, such as 3D face recognition, virtual reality applications, facial expression simulation and computer-based plastic surgery applications. This paper addresses a method for the construction of 3D geometry of human faces based on the use of Elliptic Partial Differential Equations (PDEs). Here the geometry corresponding to a human face is treated as a set of surface patches, whereby each surface patch is represented using four boundary curves in 3-space that formulate the appropriate boundary conditions for the chosen PDE. These boundary curves are extracted automatically from 3D data of human faces obtained using a 3D scanner. The solution of the PDE generates a continuous single surface patch describing the geometry of the original scanned data. In this study, through a number of experimental verifications, we show the efficiency of the PDE-based method for 3D facial surface reconstruction using scan data. In addition, we show that our approach provides an efficient way of representing faces using a small set of parameters that could be utilized for efficient facial data storage and verification purposes.
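At its core, the method described above solves an elliptic boundary-value problem whose boundary data come from curves extracted from scans. A minimal finite-difference sketch for the simplest elliptic case, Laplace's equation, with synthetic boundary values (the paper uses fourth-order elliptic PDEs and real scanner-derived curves):

```python
# Sketch: an elliptic PDE surface patch by finite differences.
# Solves Laplace's equation u_xx + u_yy = 0 on a grid whose four boundary
# rows/columns play the role of the extracted boundary curves.
# Boundary values are synthetic, not scanner output.
import numpy as np

n = 50
u = np.zeros((n, n))
x = np.linspace(0, 1, n)

# Four "boundary curves" as Dirichlet conditions.
u[0, :]  = np.sin(np.pi * x)          # top curve
u[-1, :] = 0.5 * np.sin(np.pi * x)    # bottom curve
u[:, 0]  = 0.0                        # left curve
u[:, -1] = 0.0                        # right curve

# Jacobi iteration: each interior height becomes the mean of its neighbours.
for _ in range(5000):
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                            u[1:-1, 2:] + u[1:-1, :-2])

# The whole patch is now determined by the boundary curves alone; this
# compact parameterisation is what makes PDE representations storage-efficient.
print(u.shape, round(float(u[n // 2, n // 2]), 4))
```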
8

Fang, Cheng-Hung. "Application for data mining in manufacturing databases." Ohio : Ohio University, 1996. http://www.ohiolink.edu/etd/view.cgi?ohiou1178653424.

9

Sello, Mpho Constance. "Individual Document Management Techniques: an Explorative Study." Thesis, 2007. http://pubs.cs.uct.ac.za/archive/00000399/.

Abstract:
Individuals are generating, storing and accessing more information than ever before. The information comes from a variety of sources such as the World Wide Web, email and books. Storage media are becoming larger and cheaper. This makes accumulation of information easy. When information is kept in large volumes, retrieving it becomes a problem unless there is a system in place for managing it. This study examined the techniques that users have devised to make retrieval of their documents easy and timely. A survey of user document management techniques was done through interviews. The uncovered techniques were then used to build an expert system that provides assistance with document management decision-making. The system provides recommendations on file naming and organization, document backup and archiving, as well as suitable storage media. The system poses a series of questions to the user and offers recommendations on the basis of the responses given. The system was evaluated by two categories of users: those who had been interviewed during data collection and those who had not. Both categories of users found the recommendations made by the system to be reasonable and indicated that the system was easy to use. Some users thought the system could be of great benefit to people new to computers.
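The question-and-recommendation flow described above is, in essence, a small rule-based system. A toy sketch in which the questions, rules and advice strings are invented for illustration and are not taken from the thesis:

```python
# Toy rule-based document-management advisor in the spirit of the system
# described above. Keys, triggers and advice strings are invented.

RULES = [
    ("many_files", True, "Organise files in a shallow, topic-based folder tree."),
    ("shared", True, "Adopt a written file-naming convention (date-topic-version)."),
    ("backup", False, "Schedule automatic backups to external or cloud storage."),
]

def advise(answers: dict) -> list:
    """Return every piece of advice whose trigger matches the user's answers."""
    return [advice for key, trigger, advice in RULES if answers.get(key) == trigger]

# Example responses to the system's yes/no questions.
answers = {"many_files": True, "shared": True, "backup": False}
for recommendation in advise(answers):
    print("-", recommendation)
```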
10

Kovacevic, Vlado S. "The impact of bus stop micro-locations on pedestrian safety in areas of main attraction." 2005. http://arrow.unisa.edu.au:8081/1959.8/28389.

Abstract:
From the safety point of view, the bus stop is perhaps the most important part of the bus public transport system, as it represents the point where bus passengers may interact directly with other road users and create conflicting situations leading to traffic accidents. For example, travellers could be struck walking to/from or boarding/alighting a bus. At these locations, passengers become pedestrians who at some stage must cross busy arterial roads near the bus stop, in areas or at objects of main attraction, usually outside designated pedestrian facilities such as signal-controlled intersections and zebra and pelican crossings. Pedestrian exposure to risk, or risk-taking, occurs when people want to cross the road in front of the stopped bus, at the rear of the bus or between buses, particularly where bus stops are located on two-way roads (i.e. within the mid-block of the road with side streets, at non-signalised cross-sections). However, it is necessary to have a better understanding of pedestrian road-crossing risk exposure (pedestrian crossing distraction, obscurity and behaviour) within bus stop zones so that it can be incorporated into new designs, bus stop placement, and the evaluation of traffic management schemes in which bus stop locations will play an increasingly important role. A full range of possible incidental interactions is presented in a tabular model that covers the most common interacting traffic movements within bus stop zones. The thesis focuses on pedestrian safety, discusses the theoretical foundations of bus stops, and determines the types of accident risks between bus travellers as pedestrians and motor vehicles within bus stop zones. The objectives of this thesis can be summarized as follows: (I) classification of bus stops, particularly according to objects of main attraction (pedestrian-generating activities); (II) analysis of traffic movements and interactions as accident/risk exposure in the zone of bus stops with respect to that structure; (III) categorization of traffic accidents in the vicinity of bus stops, and analysis of the interactions (interacting movements) that occur within bus stop zones in order to discover the nature of the problems; (IV) formulation of tabular (pedestrian traffic accident prediction) models/forms, based on the traffic interactions that create the possibility of accident conflicts, for practical statistical analysis of accidents related to bus stops; and (V) safety aspects related to the micro-location of bus stops, to assist in micro-location design, the operation of bus stop safety facilities and safer pedestrian crossing for access between the bus stop and nearby objects of attraction. The scope of this thesis covers the theoretical foundation of bus stop micro-location in areas of main attraction or at objects of main attraction, and the traffic accident risk types that occur between travellers as pedestrians and vehicle flow in the zone of the bus stop. Knowledge of possible interactions leads to the identification of potential conflict situations between motor vehicles and pedestrians. The problems discussed for each conflict situation have great potential to increase the knowledge needed to prevent accidents and minimise pedestrian-vehicle conflict in this area, and to aid the development and planning of safer bus stops.
11

Wang, Yue. "Data Representation for Efficient and Reliable Storage in Flash Memories." Thesis, 2013. http://hdl.handle.net/1969.1/149536.

Abstract:
Recent years have witnessed a proliferation of flash memories as an emerging storage technology with wide applications in many important areas. Like magnetic recording and optical recording, flash memories have their own distinct properties and usage environment, which introduce very interesting new challenges for data storage. They include accurate programming without overshooting, error correction, reliably writing data to flash memories at low voltages, and file recovery for flash memories. Solutions to these problems can significantly improve the longevity and performance of storage systems based on flash memories. In this work, we explore several new data representation techniques for efficient and reliable data storage in flash memories. First, we present a new data representation scheme, rank modulation with multiplicity, to eliminate the overshooting and charge leakage problems for flash memories. Next, we study Half-Wits, the stochastic behavior of writing data to embedded flash memories at voltages lower than recommended by a microcontroller's specifications, and propose three software-only algorithms that enable reliable storage at low voltages without modifying hardware, which can reduce energy consumption by 30%. Then, we address the file erasure recovery problem in flash memories. Instead of only using traditional error-correcting codes, we design a new content-assisted decoder (CAD) to recover text files. The new CAD can be combined with existing error-correcting codes, and the experimental results show that the CAD outperforms traditional error-correcting codes.
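The first scheme above, rank modulation, stores a symbol in the relative order of a cell group's charge levels rather than in their absolute values, so moderate overshoot or uniform charge leakage leaves the stored symbol intact. A toy sketch with invented charge values (the "with multiplicity" variant, which permits ties between cells, is not shown):

```python
# Toy illustration of rank modulation: the stored symbol is the permutation
# induced by sorting a group's charge levels, not the absolute charges.
# Charge values are invented for illustration.
from itertools import permutations

def read_permutation(charges):
    """Return cell indices ordered from highest to lowest charge."""
    return tuple(sorted(range(len(charges)), key=lambda i: -charges[i]))

# A fixed lookup from permutations of 3 cells to symbol values (3! = 6 symbols).
CODEBOOK = {p: v for v, p in enumerate(sorted(permutations(range(3))))}

fresh   = [2.7, 1.1, 2.0]   # charges right after programming
drifted = [2.2, 0.8, 1.6]   # the same cells after uniform charge leakage

# Absolute levels changed, but the ranking (and hence the symbol) did not.
print(read_permutation(fresh), CODEBOOK[read_permutation(fresh)])
print(read_permutation(drifted), CODEBOOK[read_permutation(drifted)])
```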
12

Gerhards, Karl. "Unscharfe Suche für Terme geringer Frequenz in einem großen Korpus." Doctoral thesis, 2011. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-201101107278.

Abstract:
Until now, infrequent terms have been neglected in searching in order to save time and memory. With the help of a cascaded index and the algorithms introduced here, such compromises are no longer necessary. A fast and efficient method was developed to find all terms in the largest freely available corpus of texts in the German language by exact search, part-word search and fuzzy search. The process can be extended to include transliterated passages. In addition, documents that contain a term with a modified spelling can also be found by fuzzy search. Time and memory requirements were determined and fall considerably below those of common search engines.
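The three lookup modes mentioned above (exact, part-word and fuzzy) can be miniaturised to a few lines over an in-memory vocabulary. A toy sketch assuming an invented word list and similarity cutoff; the thesis's cascaded index over a very large corpus is far more elaborate:

```python
# Toy exact / part-word / fuzzy term lookup over an in-memory vocabulary.
# The vocabulary and cutoff are invented for illustration.
from difflib import get_close_matches

VOCABULARY = ["storage", "storages", "historie", "story", "representation"]

def search(term, vocab=VOCABULARY, cutoff=0.8):
    exact = [w for w in vocab if w == term]
    partword = [w for w in vocab if term in w and w != term]
    fuzzy = [w for w in get_close_matches(term, vocab, n=5, cutoff=cutoff)
             if w != term and w not in partword]
    return {"exact": exact, "part-word": partword, "fuzzy": fuzzy}

print(search("storage"))    # exact hit plus a part-word hit ("storages")
print(search("storrage"))   # misspelling recovered by the fuzzy pass
```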
13

Cheeseman, Bevan. "The Adaptive Particle Representation (APR) for Simple and Efficient Adaptive Resolution Processing, Storage and Simulations." Doctoral thesis, 2017. https://tud.qucosa.de/id/qucosa%3A30873.

Abstract:
This thesis presents the Adaptive Particle Representation (APR), a novel adaptive data representation that can be used for general data processing, storage, and simulations. The APR is motivated, and designed, as a replacement representation for pixel images to address computational and memory bottlenecks in processing pipelines for studying spatiotemporal processes in biology using Light-sheet Fluorescence Microscopy (LSFM) data. The APR is an adaptive function representation that represents a function in a spatially adaptive way using a set of Particle Cells V and function values stored at particle collocation points P∗. The Particle Cells partition space, and implicitly define a piecewise constant Implied Resolution Function R∗(y) and particle sampling locations. As an adaptive data representation, the APR can be used to provide both computational and memory benefits by aligning the number of Particle Cells and particles with the spatial scales of the function. The APR allows reconstruction of a function value at any location y using any positive weighted combination of particles within a distance of R∗(y). The Particle Cells V are selected such that the error between the reconstruction and the original function, when weighted by a function σ(y), is below a user-set relative error threshold E. We call this the Reconstruction Condition and σ(y) the Local Intensity Scale. σ(y) is motivated by local gain controls in the human visual system, and for LSFM data can be used to account for contrast variations across an image. The APR is formed by satisfying an additional condition on R∗(y), which we call the Resolution Bound. The Resolution Bound relates R∗(y) to a local maximum of the absolute value of the function derivatives within a distance R∗(y) of y. Given restrictions on σ(y), satisfaction of the Resolution Bound also guarantees satisfaction of the Reconstruction Condition. In this thesis, we present algorithms and approaches that find the optimal Implied Resolution Function for general problems posed in the form of the Resolution Bound, using an algorithm over Particle Cells that we call the Pulling Scheme. Here, optimal means the largest R∗(y) at each location. The Pulling Scheme has worst-case linear complexity in the number of pixels when used to represent images. The approach is general in that the same algorithm can be used for general (α,m)-Reconstruction Conditions, where α denotes the function derivative and m the minimum order of the reconstruction. Further, it can also be combined with anisotropic neighborhoods to provide adaptation in both space and time. The APR can be used with both noise-free and noisy data. For noisy data, the Reconstruction Condition can no longer be guaranteed, but numerical results show an optimal range of relative error E that provides a maximum increase in PSNR over the noisy input data. Further, if it is assumed the Implied Resolution Function satisfies the Resolution Bound, then the APR converges to a biased estimate (constant factor of E) at the optimal statistical rate. The APR continues a long tradition of adaptive data representations and represents a unique trade-off between the level of adaptation of the representation and simplicity, both in the APR's structure and in its use for processing. Here, we numerically evaluate the adaptation and processing of the APR for use with LSFM data, using both synthetic and LSFM exemplar data.
It is concluded from these results that the APR has the correct properties to provide a replacement for pixel images and address bottlenecks in processing LSFM data. Removal of the bottleneck would be achieved by adapting to spatial, temporal and intensity scale variations in the data. Further, we propose that the simple structure of the general APR could provide benefit in areas such as the numerical solution of differential equations, adaptive regression methods, and surface representation for computer graphics.
14

Heisen, Burkhard Clemens. "New Algorithms for Macromolecular Structure Determination." Doctoral thesis, 2009. http://hdl.handle.net/11858/00-1735-0000-0006-B503-3.

15

Zhu, Jihai. "Low-complexity block dividing coding method for image compression using wavelets : a thesis presented in partial fulfillment of the requirements for the degree of Master of Engineering in Computer Systems Engineering at Massey University, Palmerston North, New Zealand." 2007. http://hdl.handle.net/10179/704.

Abstract:
Image coding plays a key role in multimedia signal processing and communications. JPEG2000 is the latest image coding standard; it uses the EBCOT (Embedded Block Coding with Optimal Truncation) algorithm. EBCOT exhibits excellent compression performance, but with high complexity. The need to reduce this complexity while maintaining similar performance to EBCOT has inspired a significant amount of research activity in the image coding community. Within the development of image compression techniques based on wavelet transforms, the EZW (Embedded Zerotree Wavelet) and SPIHT (Set Partitioning in Hierarchical Trees) algorithms have played an important role. The EZW algorithm was the first breakthrough in wavelet-based image coding. The SPIHT algorithm achieves similar performance to EBCOT, but with fewer features. Another very important algorithm is SBHP (Sub-band Block Hierarchical Partitioning), which attracted significant investigation during the JPEG2000 development process. In this thesis, the history of the development of the wavelet transform is reviewed, and a discussion is presented on the implementation issues for wavelet transforms. The four main coding methods mentioned above for image compression using wavelet transforms are studied in detail. More importantly, the factors that affect coding efficiency are identified. The main contribution of this research is the introduction of a new low-complexity coding algorithm for image compression based on wavelet transforms. The algorithm is based on block dividing coding (BDC) with an optimised packet assembly. Our extensive simulation results show that the proposed algorithm outperforms JPEG2000 in lossless coding, even though it still leaves a narrow gap in lossy coding situations.
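All the coders compared above share the same front end: a wavelet transform whose small detail coefficients can be discarded or coded cheaply. A minimal sketch of one level of the 2D Haar transform with crude thresholding, standing in for the transform stage only (none of the EZW/SPIHT/EBCOT or BDC entropy-coding machinery is shown):

```python
# Minimal front end shared by wavelet image coders: one 2D Haar level plus
# coefficient thresholding. Illustrates only the transform stage.
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform; returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

# A smooth ramp image: smooth regions produce near-zero detail coefficients,
# which is the energy compaction that wavelet coders exploit.
image = np.add.outer(np.arange(8.0), np.arange(8.0))

ll, lh, hl, hh = haar2d(image)
threshold = 4.0
kept = sum(int((np.abs(band) >= threshold).sum()) for band in (lh, hl, hh))
total = lh.size + hl.size + hh.size
print(f"detail coefficients kept after thresholding: {kept}/{total}")
```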