Academic literature on the topic 'Data editing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data editing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Data editing"

1. van de Pol, Frank, and Jelke Bethlehem. "Data editing perspectives." Statistical Journal of the United Nations Economic Commission for Europe 14, no. 2 (April 1, 1997): 153–71. http://dx.doi.org/10.3233/sju-1997-14203.

2. Tsukamoto, Kaoru. "Play data editing device and method of editing play data." Journal of the Acoustical Society of America 123, no. 3 (2008): 1235. http://dx.doi.org/10.1121/1.2901368.

3. Esmail, Whaj Muneer. "A Critical Analysis of the Intentional Deviation in News Editing." Al-Adab Journal 1, no. 137 (June 15, 2021): 47–72. http://dx.doi.org/10.31973/aj.v1i137.1087.

Abstract:
News Editing is the clearest coding, which reflects the writer's behavior of the editor who linguistically, socially, or culturally edits and deviates some of the source language aspects. It, furthermore, refers to the writer’s competence of using an influential linguistic style and preserving the SL norms and policies. At the same time, editing news presents a new horizon within a different political framework into TL. The problem of news editing of the same TV in Arabic and English editions lies in discrepancies in meanings; intentional deviation and politics. For instance, BBC, which broadcasts in Arabic, has a different editing from its English edition. This study ascribes such differences to the different socio-cultural and political strategies adopted by the writer. The primary objectives of the study are: Finding out the political reasons behind the discrepancy and the intentional deviation in news editing. Identifying the political attitude of the original editor and the political attitude of the editor. The data set in this study consisted of “TWO” edited news editing (1 from English into Arabic) and (1 from Arabic into English). These two news writings have been broadcasted on BBC English & Arabic editions. A critical-stylistic analysis has been conducted by applying House’s (2001) model of TQA.

4. Persch, G. "Editing IDL data structures." ACM SIGPLAN Notices 22, no. 11 (November 1987): 79–86. http://dx.doi.org/10.1145/39305.39313.

5. de Waal, Ton. "Selective Editing: A Quest for Efficiency and Data Quality." Journal of Official Statistics 29, no. 4 (December 1, 2013): 473–88. http://dx.doi.org/10.2478/jos-2013-0036.

Abstract:
National statistical institutes are responsible for publishing high quality statistical information on many different aspects of society. This task is complicated considerably by the fact that data collected by statistical offices often contain errors. The process of correcting errors is referred to as statistical data editing. For many years this has been a purely manual process, with people checking the collected data record by record and correcting them if necessary. For this reason the data editing process has been both expensive and time-consuming. This article sketches some of the important methodological developments aiming to improve the efficiency of the data editing process that have occurred during the past few decades. The article focuses on selective editing, which is based on an idea rather shocking for people working in the production of high-quality data: that it is not necessary to find and correct all errors. Instead of trying to correct all errors, it generally suffices to correct only those errors where data editing has substantial influence on publication figures. This overview article sketches the background of selective editing, describes the most usual form of selective editing up to now, and discusses the contributions to this special issue of the Journal of Official Statistics on selective editing. The article concludes with describing some possible directions for future research on selective editing and statistical data editing in general.
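
The scoring idea behind selective editing is easy to make concrete. The sketch below is illustrative only; the score form, weights and threshold are assumptions, not taken from the article. It routes to manual editing only the records whose weighted deviation from an anticipated value could move publication figures.

```python
# Selective editing sketch: send only high-impact suspicious records to
# manual editing. All names, weights and the threshold are assumptions.

def local_score(reported, anticipated, design_weight):
    """Potential impact of one item: weighted deviation from an
    anticipated value (e.g., last period's value or a model estimate)."""
    return design_weight * abs(reported - anticipated)

def global_score(record, anticipated, design_weight):
    """Aggregate the item scores; taking the maximum is a common choice."""
    return max(local_score(record[k], anticipated[k], design_weight)
               for k in record)

records = {
    "unit_a": {"turnover": 1050, "employees": 22},
    "unit_b": {"turnover": 99999, "employees": 20},  # likely unit-of-measure error
}
anticipated = {"turnover": 1000, "employees": 21}
THRESHOLD = 5000.0  # tuned so the error remaining after editing stays acceptable

for unit, rec in records.items():
    flagged = global_score(rec, anticipated, design_weight=1.5) > THRESHOLD
    print(unit, "-> manual editing" if flagged else "-> automatic processing")
```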

6. Jing, Changfeng, Yanli Zhu, Jiayun Fu, and Meng Dong. "A Lightweight Collaborative GIS Data Editing Approach to Support Urban Planning." Sustainability 11, no. 16 (August 16, 2019): 4437. http://dx.doi.org/10.3390/su11164437.

Abstract:
Collaborative geospatial data editing is different from other collaborative editing systems, such as textual editing, owing to its geospatial nature. This paper presents a version-based lightweight collaborative geospatial editing method for urban planning. This method extracts editing data and generates a version for collaborative editing, which reduces the data size and thus allows for a high feedback speed. A replication mechanism is engaged to replicate a version for the client to freely edit, which ensures constraint-free editing in collaboration. Based on this method, and recognizing that heterogeneous geospatial data and non-professional users are involved, a lightweight architecture integrating web services and component technologies was proposed. This architecture provides a unified data access interface and powerful editing ability and ensures a high feedback speed and constraint-free editing. The result of the application of the proposed approach in a practical project demonstrates the usability of collaborative geospatial editing in urban planning. While this approach has been designed for urban planning, it can be modified for use in other domains.

7. Sengupta, Binanda, Yingjiu Li, Yangguang Tian, and Robert H. Deng. "Editing-Enabled Signatures: A New Tool for Editing Authenticated Data." IEEE Internet of Things Journal 7, no. 6 (June 2020): 4997–5007. http://dx.doi.org/10.1109/jiot.2020.2972741.

8. Pannekoek, Jeroen, Sander Scholtus, and Mark Van der Loo. "Automated and Manual Data Editing: A View on Process Design and Methodology." Journal of Official Statistics 29, no. 4 (December 1, 2013): 511–37. http://dx.doi.org/10.2478/jos-2013-0038.

Abstract:
Data editing is arguably one of the most resource-intensive processes at NSIs. Forced by ever-increasing budget pressure, NSIs keep searching for more efficient forms of data editing. Efficiency gains can be obtained by selective editing, that is, limiting the manual editing to influential errors, and by automating the editing process as much as possible. In our view, an optimal mix of these two strategies should be aimed for. In this article we present a decomposition of the overall editing process into a number of different tasks and give an up-to-date overview of all the possibilities of automatic editing in terms of these tasks. During the design of an editing process, this decomposition may be helpful in deciding which tasks can be done automatically and for which tasks (additional) manual editing is required. Such decisions can be made a priori, based on the specific nature of the task, or by empirical evaluation, which is illustrated by examples. The decomposition in tasks, or statistical functions, also naturally leads to reusable components, resulting in efficiency gains in process design.

9. Ferguson, Dania P. "SAS use in data editing." Statistical Journal of the United Nations Economic Commission for Europe 8, no. 2 (October 1, 1991): 167–74. http://dx.doi.org/10.3233/sju-1991-8205.

10. Riera-Ledesma, Jorge, and Juan-José Salazar-González. "Algorithms for automatic data editing." Statistical Journal of the United Nations Economic Commission for Europe 20, no. 3-4 (August 17, 2004): 255–64. http://dx.doi.org/10.3233/sju-2003-203-405.


Dissertations / Theses on the topic "Data editing"

1. Ivarsson, Jakob. "Real-time collaborative editing using CRDTs." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-249545.

Abstract:
Real-time collaborative editors such as Google Docs allow users to edit a shared document simultaneously and see each other's changes in real time. This thesis investigates how conflict-free replicated data types (CRDTs) can be used to implement a general-purpose data store that supports real-time collaborative editing of semi-structured data. The purpose of the data store is that it can be used by application developers to easily add collaborative behaviour to any application. The performance of the implemented data store is evaluated, and the results show that using CRDTs comes with a performance and memory penalty. However, replication over the internet is very efficient, and concurrent updates are handled in a predictable way in most cases.
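
For readers unfamiliar with CRDTs, the sketch below shows one of the simplest state-based examples, a last-writer-wins register. It is a minimal illustration of the convergence property, not the data store built in the thesis.

```python
# Last-writer-wins (LWW) register, a minimal state-based CRDT.
# merge() is commutative, associative and idempotent, so replicas that
# exchange states in any order converge to the same value.

class LWWRegister:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.value = None
        self.stamp = (0, replica_id)  # (logical clock, replica id) breaks ties

    def set(self, value, clock):
        self.value = value
        self.stamp = (clock, self.replica_id)

    def merge(self, other):
        # Keep the write with the larger (clock, replica id) pair.
        if other.stamp > self.stamp:
            self.value, self.stamp = other.value, other.stamp

a, b = LWWRegister("a"), LWWRegister("b")
a.set("draft 1", clock=1)
b.set("draft 2", clock=2)      # concurrent edit on another replica
a.merge(b); b.merge(a)
assert a.value == b.value == "draft 2"   # both replicas converge
```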

2. Watanabe, Toyohide, Yuuji Yoshida, and Teruo Fukumura. "Editing model based on the object-oriented approach." IEEE, 1988. http://hdl.handle.net/2237/6930.

3. Gul, Shahzad. "Methods of Graphically Viewing and Editing Business Logic, Data Structure." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-146633.

Abstract:
For financial institutions that desire to lend money to a borrower, or invoice factoring companies looking to buy invoices, credit policy is a primary concern and a vital measure of the risk or marketability associated with a particular issue. The goal of credit risk management is to maximize profitability while reducing the risk of credit loss within acceptable parameters. In the wake of high-profile bankruptcies, there has been recent speculation about the correctness and reliability of credit risk prediction. Based on these issues, there is sufficient motivation to develop a more gainful policy analysis system for predicting the creditworthiness of customers. A system that accomplishes the requirements of the Risk department at Klarna is investigated, designed and implemented. The result of the thesis work is KLAPAS, which enables users to create custom policies and apply them to selected sets of transactions.

4. Ollis, James A. J. "Optimised editing of variable data documents via partial re-evaluation." Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/12107/.

Abstract:
With the advent of digital printing presses and the continued development of associated technologies, variable data printing (VDP) is becoming more and more common. VDP allows for a series of data instances to be bound to a single template document in order to produce a set of result document instances, each customized depending upon the data provided. As it gradually enters the mainstream of digital publishing there is a need for appropriate and powerful editing tools suitable for use by creative professionals. This thesis investigates the problem of representing variable data documents in an editable visual form, and focuses on the technical issues involved with supporting such an editing model. Using a document processing model where the document is produced from a data set and an appropriate programmatic transform, this thesis considers an interactive editor developed to allow visual manipulation of the result documents. It shows how the speed of the reprocessing necessary in such an interactive editing scenario can be increased by selectively re-evaluating only the required parts of the transformation, including how these pieces of the transformation can be identified and subsequently re-executed. The techniques described are demonstrated using a simplified document processing model that closely resembles variable data document frameworks. A workable editor is also presented that builds on this processing model and illustrates its advantages. Finally, an analysis of the performance of the proposed framework is undertaken including a comparison to a standard processing pipeline.
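
The central idea, re-running only the parts of the transform that depend on edited data, can be illustrated with a toy memoising renderer. The names below are assumptions for illustration; this is not the thesis's framework.

```python
# Toy partial re-evaluation: cache each fragment's output keyed by the data
# fields it reads; editing one field re-runs only fragments that depend on it.

cache = {}

def render_fragment(name, func, data, reads):
    """Re-evaluate `func` only when the fields it reads have changed."""
    key = (name, tuple(data[f] for f in reads))
    if key not in cache:
        cache[key] = func(data)   # cache miss: this fragment is re-executed
    return cache[key]

data = {"name": "Ada", "city": "London"}
greeting = lambda d: f"Dear {d['name']},"
footer = lambda d: f"Your local office: {d['city']}"

page = [render_fragment("greet", greeting, data, reads=("name",)),
        render_fragment("foot", footer, data, reads=("city",))]

data["city"] = "Paris"  # the user edits one bound data field
page = [render_fragment("greet", greeting, data, reads=("name",)),  # cache hit
        render_fragment("foot", footer, data, reads=("city",))]     # re-executed
print(page)
```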

5. Warne, Brett M. "A system for scalable 3D visualization and editing of connectomic data." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/52774.

Abstract:
The new field of connectomics is using technological advances in microscopy and neural computation to form a detailed understanding of structure and connectivity of neurons. Using the vast amounts of imagery generated by light and electron microscopes, connectomic analysis segments the image data to define 3D regions, forming neural-networks called connectomes. Yet as the dimensions of these volumes grow from hundreds to thousands of pixels or more, connectomics is pushing the computational limits of what can be interactively displayed and manipulated in a 3D environment. The computational cost of rendering in 3D is compounded by the vast size and number of segmented regions that can be formed from segmentation analysis. As a result, most neural data sets are too large and complex to be handled by conventional hardware using standard rendering techniques. This thesis describes a scalable system for visualizing large connectomic data using multiple resolution meshes for performance while providing focused voxel rendering when editing for precision. After pre-processing a given set of data, users of the system are able to visualize neural data in real-time while having the ability to make detailed adjustments at the single voxel scale. The design and implementation of the system are discussed and evaluated.

6. Wu, Qinyi. "Partial persistent sequences and their applications to collaborative text document editing and processing." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/44916.

Abstract:
In a variety of text document editing and processing applications, it is necessary to keep track of the revision history of text documents by recording changes and the metadata of those changes (e.g., user names and modification timestamps). The recent Web 2.0 document editing and processing applications, such as real-time collaborative note taking and wikis, require fine-grained shared access to collaborative text documents as well as efficient retrieval of metadata associated with different parts of collaborative text documents. Current revision control techniques only support coarse-grained shared access and are inefficient to retrieve metadata of changes at the sub-document granularity. In this dissertation, we design and implement partial persistent sequences (PPSs) to support real-time collaborations and manage metadata of changes at fine granularities for collaborative text document editing and processing applications. As a persistent data structure, PPSs have two important features. First, items in the data structure are never removed. We maintain necessary timestamp information to keep track of both inserted and deleted items and use the timestamp information to reconstruct the state of a document at any point in time. Second, PPSs create unique, persistent, and ordered identifiers for items of a document at fine granularities (e.g., a word or a sentence). As a result, we are able to support consistent and fine-grained shared access to collaborative text documents by detecting and resolving editing conflicts based on the revision history as well as to efficiently index and retrieve metadata associated with different parts of collaborative text documents. We demonstrate the capabilities of PPSs through two important problems in collaborative text document editing and processing applications: data consistency control and fine-grained document provenance management. The first problem studies how to detect and resolve editing conflicts in collaborative text document editing systems. We approach this problem in two steps. In the first step, we use PPSs to capture data dependencies between different editing operations and define a consistency model more suitable for real-time collaborative editing systems. In the second step, we extend our work to the entire spectrum of collaborations and adapt transactional techniques to build a flexible framework for the development of various collaborative editing systems. The generality of this framework is demonstrated by its capabilities to specify three different types of collaborations as exemplified in the systems of RCS, MediaWiki, and Google Docs respectively. We precisely specify the programming interfaces of this framework and describe a prototype implementation over Oracle Berkeley DB High Availability, a replicated database management engine. The second problem of fine-grained document provenance management studies how to efficiently index and retrieve fine-grained metadata for different parts of collaborative text documents. We use PPSs to design both disk-economic and computation-efficient techniques to index provenance data for millions of Wikipedia articles. Our approach is disk economic because we only save a few full versions of a document and only keep delta changes between those full versions. Our approach is also computation-efficient because we avoid the necessity of parsing the revision history of collaborative documents to retrieve fine-grained metadata. Compared to MediaWiki, the revision control system for Wikipedia, our system uses less than 10% of disk space and achieves at least an order of magnitude speed-up to retrieve fine-grained metadata for documents with thousands of revisions.
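
The two defining features of a PPS, tombstoned deletions with timestamps and stable per-item identifiers, can be sketched briefly. The class below is a simplified illustration under assumed names, not the dissertation's implementation.

```python
# Simplified partially persistent sequence: deletes are tombstones with
# timestamps, so any past state is reconstructible, and every item keeps a
# stable identifier to which metadata (author, timestamp) can be attached.

class PartialPersistentSequence:
    def __init__(self):
        self.items = []  # tuples: (item_id, value, inserted_at, deleted_at)

    @staticmethod
    def _alive(item, t):
        _, _, ins, dele = item
        return ins <= t and (dele is None or dele > t)

    def insert(self, pos, value, t):
        visible = [i for i, it in enumerate(self.items) if self._alive(it, t)]
        idx = visible[pos] if pos < len(visible) else len(self.items)
        item_id = len(self.items)          # persistent, never reused
        self.items.insert(idx, (item_id, value, t, None))
        return item_id

    def delete(self, item_id, t):
        for i, (iid, v, ins, _) in enumerate(self.items):
            if iid == item_id:
                self.items[i] = (iid, v, ins, t)   # tombstone, not removal

    def snapshot(self, t):
        return [it[1] for it in self.items if self._alive(it, t)]

seq = PartialPersistentSequence()
a = seq.insert(0, "Hello", t=1)
seq.insert(1, "world", t=2)
seq.delete(a, t=3)
assert seq.snapshot(2) == ["Hello", "world"]
assert seq.snapshot(3) == ["world"]   # earlier states remain reconstructible
```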

7. Pratumnopharat, Panu. "Novel methods for fatigue data editing for horizontal axis wind turbine blades." Thesis, Northumbria University, 2012. http://nrl.northumbria.ac.uk/10458/.

Abstract:
Wind turbine blades are the most critical components of wind turbines. Full-scale blade fatigue testing is required to verify that the blades possess the strength and service life specified in the design. Unfortunately, the test must be run for a long time period. This problem led the blade testing laboratories to accelerate fatigue testing time. To achieve the objective, this thesis proposes two novel methods, called STFT- and WT-based fatigue damage part extracting methods, which are based on short-time Fourier transform (STFT) and wavelet transform (WT), respectively. For WT, different wavelet functions (Morl, Meyr, Dmey, Mexh and DB30) are studied. An aerodynamic computer code, HAWTsimulator, based on blade element momentum theory has been developed. This code is used to generate the sets of aerodynamic loads acting along the span of a 'SERI-8 wind turbine blade' in the range of wind speed from cut-in to cut-out. SERI-8 blades are installed on 65 kW wind turbines. Each set of aerodynamic loads is applied on the finite element model of the SERI-8 blade in structural software (ANSYS) to generate a plot of von Mises stress at the critical point on the blade versus wind speed. By relating this relationship to the wind speed data, the stress-time history at the critical point on the SERI-8 blade can be generated. It has the same sampling rate and length as the wind speed data. A concept of applying accumulative power spectral density (AccPSD) distribution with time to identify fatigue damage events contained in the stress-time history has been introduced in this thesis. For STFT, AccPSD is the sum of power spectral density (PSD) of each frequency band at each time interval in the spectrogram. For WT, AccPSD is the sum of PSD of wavelet coefficients of each scale at each time interval in the scalogram. It has been found that the locations of AccPSD spikes imply where the fatigue damage events are. Based on an appropriate AccPSD level called a cutoff level, the fatigue damage events can be identified at time locations of the stress-time history. A fatigue computer code, HAWTfatigue, based on the stress-life approach and Miner's linear cumulative damage rule has been developed. Basically, the code is used for evaluating the fatigue damage and service lifetime of horizontal axis wind turbine blades. In addition, the author has implemented the STFT- and WT-based fatigue damage part extracting methods into the code. Fatigue damage parts are extracted from the stress-time history and concatenated to form the edited stress-time history. The effectiveness of the STFT- and WT-based algorithms is assessed by comparing the reduction in length and the difference in fatigue damage per repetition of the edited stress-time histories generated by STFT and WT to those of the edited stress-time history generated by an existing method, Time Correlated Fatigue Damage (TCFD), used by commercial software. The findings of this research project are as follows: 1. The comparison of the reduction in length of the edited stress-time histories generated by TCFD, STFT and WT indicates that WT with the Mexh wavelet has the maximum reduction of 20.77% in length with respect to the original length, followed by Meyr (20.24%), Dmey (19.70%), Morl (19.66%), DB30 (19.19%), STFT (15.38%), and TCFD (10.18%), respectively. 2. The comparison of the retained fatigue damage per repetition in the edited stress-time histories generated by TCFD, STFT and WT indicates that TCFD has the retained fatigue damage per repetition less than the original fatigue damage per repetition by 0.076%, followed by Mexh (0.068%), DB30 (0.063%), STFT (0.045%), Meyr (0.032%), Dmey (0.014%), and Morl (0.013%), respectively. 3. Both the comparison of reduction in length and the comparison of retained fatigue damage per repetition of the edited stress-time histories suggest that WT is the best method for extracting fatigue damage parts from a given stress-time history. It has also been indicated that not only do STFT and WT improve the accuracy of fatigue damage per repetition retained in the edited stress-time histories, but they also produce edited stress-time histories shorter than TCFD does. Thus, STFT and WT are useful methods for performing accelerated fatigue tests. 4. It has been found that STFT is controlled by two main factors, window size and cutoff level, while WT is controlled by three main factors, wavelet decomposition level, cutoff level and wavelet type. To conclude, the edited stress-time history can be used by blade testing laboratories to accelerate fatigue testing time. The STFT- and WT-based fatigue damage part extracting methods proposed in this thesis are suggested as alternative methods for accelerating fatigue testing time, especially in the field of wind turbine engineering.
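
A rough sketch of the AccPSD idea (with assumed signal parameters and a made-up load history; this is not the thesis's code) sums the power spectral density over frequency in each spectrogram time slice and keeps only slices above a cutoff level:

```python
# AccPSD sketch: spectrogram of a stress-time history, total PSD per time
# interval, and retention of segments above a cutoff. All parameters are
# illustrative assumptions.

import numpy as np
from scipy.signal import spectrogram

fs = 100.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 60, 1 / fs)
stress = 5 * np.sin(2 * np.pi * 0.5 * t)     # benign background loading
stress[3000:3500] += 40 * np.sin(2 * np.pi * 8 * t[3000:3500])  # damaging event

f, seg_t, Sxx = spectrogram(stress, fs=fs, nperseg=256, noverlap=128)
acc_psd = Sxx.sum(axis=0)                    # AccPSD per time interval

cutoff = 0.1 * acc_psd.max()                 # cutoff level (tuning assumed)
hop = 256 - 128                              # samples between segment starts
edited = np.concatenate([stress[i * hop : i * hop + 256]
                         for i, keep in enumerate(acc_psd > cutoff) if keep])
print(f"edited history: {edited.size} of {stress.size} samples retained")
```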

8. Carpatorea, Iulian Nicolae. "A graphical traffic scenario editing and evaluation software." Thesis, Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-19438.

Abstract:
An interactive tool is developed for the purpose of rapid exploration of diverse traffic scenarios. The focus is on rapidity of design and evaluation rather than on physical realism. Core aspects are the ability to define the essential elements for a traffic scenario, such as a road network and vehicles. Cubic Bezier curves are used to design the roads and vehicle trajectories. A prediction algorithm is used to visualize vehicles' future poses and collisions and thus provide a means for evaluation of said scenario. Such a program was created using C++ with the help of the Qt libraries.

9. Boskovitz, Agnes. "Data editing and logic: The covering set method from the perspective of logic." Australian Digital Theses, 2008. http://thesis.anu.edu.au/public/adt-ANU20080314.163155/index.html.

10. Boskovitz, Agnes. "Data Editing and Logic: The covering set method from the perspective of logic." The Australian National University, Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20080314.163155.

Abstract:
Errors in collections of data can cause significant problems when those data are used. Therefore the owners of data find themselves spending much time on data cleaning. This thesis is a theoretical work about one part of the broad subject of data cleaning - to be called the covering set method. More specifically, the covering set method deals with data records that have been assessed by the use of edits, which are rules that the data records are supposed to obey. The problem solved by the covering set method is the error localisation problem, which is the problem of determining the erroneous fields within data records that fail the edits. In this thesis I analyse the covering set method from the perspective of propositional logic. I demonstrate that the covering set method has strong parallels with well-known parts of propositional logic. The first aspect of the covering set method that I analyse is the edit generation function, which is the main function used in the covering set method. I demonstrate that the edit generation function can be formalised as a logical deduction function in propositional logic. I also demonstrate that the best-known edit generation function, written here as FH (standing for Fellegi-Holt), is essentially the same as propositional resolution deduction. Since there are many automated implementations of propositional resolution, the equivalence of FH with propositional resolution gives some hope that the covering set method might be implementable with automated logic tools. However, before any implementation, the other main aspect of the covering set method must also be formalised in terms of logic. This other aspect, to be called covering set correctibility, is the property that must be obeyed by the edit generation function if the covering set method is to successfully solve the error localisation problem. In this thesis I demonstrate that covering set correctibility is a strengthening of the well-known logical properties of soundness and refutation completeness. What is more, the proofs of the covering set correctibility of FH and of the soundness / completeness of resolution deduction have strong parallels: while the proof of soundness / completeness depends on the reduction property for counter-examples, the proof of covering set correctibility depends on the related lifting property. In this thesis I also use the lifting property to prove the covering set correctibility of the function defined by the Field Code Forest Algorithm. In so doing, I prove that the Field Code Forest Algorithm, whose correctness has been questioned, is indeed correct. The results about edit generation functions and covering set correctibility apply to both categorical edits (edits about discrete data) and arithmetic edits (edits expressible as linear inequalities). Thus this thesis gives the beginnings of a theoretical logical framework for error localisation, which might give new insights to the problem. In addition, the new insights will help develop new tools using automated logic tools. What is more, the strong parallels between the covering set method and aspects of logic are of aesthetic appeal.
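
The parallel with propositional resolution can be made concrete with a toy example. The edits and field names below are invented, but the mechanism, resolving two edit clauses on complementary literals to obtain an implied edit, is the one the abstract attributes to the FH edit generation function.

```python
# Toy resolution step over edits written as clauses (sets of literals;
# '~' marks negation). Resolving on complementary literals produces an
# implied edit, mirroring Fellegi-Holt edit generation.

def resolvents(c1, c2):
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            out.append((c1 - {lit}) | (c2 - {comp}))
    return out

# "age < 15 implies status = single" and "status = single implies no pension":
e1 = frozenset({"~age<15", "status=single"})
e2 = frozenset({"~status=single", "~pension"})

for implied in resolvents(e1, e2):
    print(sorted(implied))   # ['~age<15', '~pension']: age < 15 implies no pension
```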

Books on the topic "Data editing"

1. United Nations Statistical Commission. Statistical data editing: Impact on data quality. New York: United Nations, 2006.

2. Burnard, Lou, Katherine O'Brien O'Keeffe, and John Unsworth, eds. Electronic textual editing. New York: Modern Language Association of America, 2006.

3. United Nations Statistical Commission, ed. Statistical data editing: Volume No. 3, impact on data quality. New York and Geneva: United Nations, 2006.

4. Self, Charles C., and Edward Mullins, eds. On-line editing. Northport, Ala.: Vision Press, 1994.

5. Papasouliotis, Orestis, ed. A course in categorical data analysis. Boca Raton, Fla.: Chapman & Hall/CRC Press, 2000.

6. Dipko, Sarah, Urban Institute, Westat Inc., Assessing the New Federalism (Program), and Child Trends Incorporated, eds. 1997 NSAF data editing and imputation. Washington, D.C.: Urban Institute, 1999.

7. Weng, Stanley. Elimination in linear editing and error localization. [Fairfax, Virginia?]: United States Department of Agriculture, National Agricultural Statistics Service, Research and Development Division, 2002.

8. United Nations Statistical Division, ed. Handbook on population and housing census editing. New York: United Nations, 2001.

9. Ohanian, Thomas A. Digital nonlinear editing: New approaches to editing film and video. Boston: Focal Press, 1993.

10. de Waal, Ton, Jeroen Pannekoek, and Sander Scholtus. Handbook of Statistical Data Editing and Imputation. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2011. http://dx.doi.org/10.1002/9780470904848.


Book chapters on the topic "Data editing"

1. Nahler, Gerhard. "data editing." In Dictionary of Pharmaceutical Medicine, 46. Vienna: Springer Vienna, 2009. http://dx.doi.org/10.1007/978-3-211-89836-9_344.

2. Katz, Abbott. "Editing Data." In Excel 2010 Made Simple, 63–71. Berkeley, CA: Apress, 2011. http://dx.doi.org/10.1007/978-1-4302-3546-0_4.

3. Laaksonen, Seppo. "Statistical Editing." In Survey Methodology and Missing Data, 141–53. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-79011-4_10.

4. Bajjali, William. "Data Editing and Topology." In Springer Textbooks in Earth Sciences, Geography and Environment, 117–40. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-61158-7_8.

5. Bajjali, William. "Data Editing and Topology." In Springer Textbooks in Earth Sciences, Geography and Environment, 97–127. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-42227-0_6.

6. Mothersole, Peter L., and Norman W. White. "Editing terminals and graphics digitizers." In Broadcast Data Systems, 24–37. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003460732-3.

7. Noirhomme-Fraiture, Monique, and Manuel Rouard. "Visualizing and Editing Symbolic Objects." In Analysis of Symbolic Data, 125–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-642-57155-8_7.

8. Bela, Daniel. "Applied Large-Scale Data Editing." In Methodological Issues of Longitudinal Surveys, 649–67. Wiesbaden: Springer Fachmedien Wiesbaden, 2016. http://dx.doi.org/10.1007/978-3-658-11994-2_36.

9. Tyrrell, A. J. "Data Editing and Report Writing." In COBOL From Pascal, 82–91. London: Palgrave Macmillan UK, 1989. http://dx.doi.org/10.1007/978-1-349-10594-6_9.

10. Koch, George S. "Writing and Editing Data Files." In Exploration-Geochemical Data Analysis with the IBM PC, 11–49. Boston, MA: Springer US, 1987. http://dx.doi.org/10.1007/978-1-4613-1973-3_2.


Conference papers on the topic "Data editing"

1. Goodwin, Roger L. "Intuitive Data Editing Examples." In 2007 IEEE International Conference on Service Operations and Logistics, and Informatics. IEEE, 2007. http://dx.doi.org/10.1109/soli.2007.4383921.

2. Liu, Zhenji, Yonghui Yang, Huijun Liu, Jiaju Wu, Hongfu Zuo, and Xinglin Zhu. "IETM Data Module Editing." In 2019 Chinese Automation Congress (CAC). IEEE, 2019. http://dx.doi.org/10.1109/cac48633.2019.8996700.

3. Gettler Summa, Mireille, and Frederick Vautrain. "Editing and processing complex data." In 2010 3rd International Conference on Advanced Computer Theory and Engineering (ICACTE 2010). IEEE, 2010. http://dx.doi.org/10.1109/icacte.2010.5579001.

4. Lam, Wai-Chun, Feng Zou, and Taku Komura. "Motion editing with data glove." In the 2004 ACM SIGCHI International Conference. New York, New York, USA: ACM Press, 2004. http://dx.doi.org/10.1145/1067343.1067393.

5. Roodt, Daniel, Ulrich Speidel, Vimal Kumar, and Ryan K. L. Ko. "On Random Editing in LZ-End." In 2021 Data Compression Conference (DCC). IEEE, 2021. http://dx.doi.org/10.1109/dcc50243.2021.00074.

6. Ahmad, Mumtaz, and Abdessamad Imine. "Decentralized Collaborative Editing Platform." In 2015 16th IEEE International Conference on Mobile Data Management (MDM). IEEE, 2015. http://dx.doi.org/10.1109/mdm.2015.26.

7. Pokhriyal, S. K., A. Nautiyal, and O. P. Gupta. "Computer-aided editing of seismic data." In SEG Technical Program Expanded Abstracts 1991. Society of Exploration Geophysicists, 1991. http://dx.doi.org/10.1190/1.1889023.

8. El-Zehiry, Noha Youssry, and Andreas Wimmer. "Data driven editing of rib centerlines." In 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI 2014). IEEE, 2014. http://dx.doi.org/10.1109/isbi.2014.6867820.

9. Wieschebrink, Stephan. "Collaborative editing of multimodal annotation data." In the 11th ACM symposium. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2034691.2034706.

10. Oster, Gérald, Pascal Urso, Pascal Molli, and Abdessamad Imine. "Data consistency for P2P collaborative editing." In the 2006 20th anniversary conference. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1180875.1180916.


Reports on the topic "Data editing"

1. Jansen van Beek, G., and L. R. Newitt. Data editing. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1988. http://dx.doi.org/10.4095/225642.

2. Jansen van Beek, G., and L. R. Newitt. Data editing. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1988. http://dx.doi.org/10.4095/226573.

3. Macnab, R., J. Verhoef, and J. Woodside. Techniques for the Display and Editing of Marine Potential Field Data. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1987. http://dx.doi.org/10.4095/122496.

4. Krishnamagaru, Dharmesh, and Michael S. Foster. Interactive Data Editing and Analysis System (IDEAS) Version 1.0 (Users Manual). Fort Belvoir, VA: Defense Technical Information Center, February 1993. http://dx.doi.org/10.21236/ada265030.

5. Hou, Weilin, Derek Burrage, Michael Carnes, and Robert Arnone. Development and Testing of Local Automated Glider Editing Routine for Optical Data Control. Fort Belvoir, VA: Defense Technical Information Center, May 2010. http://dx.doi.org/10.21236/ada521155.

6. Forteza, Nicolás, and Sandra García-Uribe. A Score Function to Prioritize Editing in Household Survey Data: A Machine Learning Approach. Madrid: Banco de España, October 2023. http://dx.doi.org/10.53479/34613.

Abstract:
Errors in the collection of household finance survey data may proliferate in population estimates, especially when there is oversampling of some population groups. Manual case-by-case revision has been commonly applied in order to identify and correct potential errors and omissions such as omitted or misreported assets, income and debts. We derive a machine learning approach for the purpose of classifying survey data affected by severe errors and omissions in the revision phase. Using data from the Spanish Survey of Household Finances we provide the best-performing supervised classification algorithm for the task of prioritizing cases with substantial errors and omissions. Our results show that a Gradient Boosting Trees classifier outperforms several competing classifiers. We also provide a framework that takes into account the trade-off between precision and recall in the survey agency in order to select the optimal classification threshold.
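
A minimal sketch of such a pipeline (synthetic data and invented feature names; the paper's model and survey features are not reproduced here) shows the two steps the abstract describes, training a gradient boosting classifier and choosing a threshold from the precision-recall trade-off:

```python
# Sketch: score survey records for editing priority with gradient boosting,
# then pick the classification threshold from a recall target. Data and
# feature names are synthetic assumptions.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                   # stand-in survey features
y = (X[:, 0] + 0.5 * X[:, 1]
     + rng.normal(scale=0.5, size=2000) > 1.5).astype(int)  # "severe error" flag

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)

scores = clf.predict_proba(X_te)[:, 1]           # editing-priority score
prec, rec, thr = precision_recall_curve(y_te, scores)

# Choose the highest threshold that still reaches the recall the survey
# agency requires (e.g., catch 90% of severely erroneous records).
target_recall = 0.9
ok = rec[:-1] >= target_recall
threshold = thr[ok][-1] if ok.any() else thr[0]
flag_for_manual_review = scores >= threshold
print(f"threshold={threshold:.3f}, flagged={flag_for_manual_review.mean():.1%}")
```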

7. Chuchel, B. A. TURBOSEIS---An interactive program for constructing and editing models of seismic refraction traveltime data using a color-graphics terminal. Office of Scientific and Technical Information (OSTI), December 1989. http://dx.doi.org/10.2172/138346.

8. Binstead, R. A., and J. C. Cooper. EDITSPEC: A FORTRAN 77 Program for Editing and Manipulating Spectral Data from the Varian CARY 2390 UV-VIS-NIR Spectrophotometer. Fort Belvoir, VA: Defense Technical Information Center, October 1988. http://dx.doi.org/10.21236/ada200352.

9. Griffin, Andrew, Sean Griffin, Kristofer Lasko, Megan Maloney, S. Blundell, Michael Collins, and Nicole Wayant. Evaluation of automated feature extraction algorithms using high-resolution satellite imagery across a rural-urban gradient in two unique cities in developing countries. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40182.

Abstract:
Feature extraction algorithms are routinely leveraged to extract building footprints and road networks into vector format. When used in conjunction with high resolution remotely sensed imagery, machine learning enables the automation of such feature extraction workflows. However, many of the feature extraction algorithms currently available have not been thoroughly evaluated in a scientific manner within complex terrain such as the cities of developing countries. This report details the performance of three automated feature extraction (AFE) datasets: Ecopia, Tier 1, and Tier 2, at extracting building footprints and roads from high resolution satellite imagery as compared to manual digitization of the same areas. To avoid environmental bias, this assessment was done in two different regions of the world: Maracay, Venezuela and Niamey, Niger. High, medium, and low urban density sites are compared between regions. We quantify the accuracy of the data and time needed to correct the three AFE datasets against hand digitized reference data across ninety tiles in each city, selected by stratified random sampling. Within each tile, the reference data was compared against the three AFE datasets, both before and after analyst editing, using the accuracy assessment metrics of Intersection over Union and F1 Score for buildings and roads, as well as Average Path Length Similarity (APLS) to measure road network connectivity. It was found that of the three AFE tested, the Ecopia data most frequently outperformed the other AFE in accuracy and reduced the time needed for editing.
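
For reference, the headline Intersection over Union metric is simple to compute. The sketch below uses axis-aligned boxes for brevity; real building footprints are polygons, and this is not the report's own tooling.

```python
# Intersection over Union (IoU) for two axis-aligned boxes, the footprint
# accuracy metric named above. Box coordinates are hypothetical.

def iou(box_a, box_b):
    """IoU of two boxes given as (xmin, ymin, xmax, ymax)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))   # overlap width
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))   # overlap height
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union else 0.0

extracted = (10, 10, 20, 20)   # AFE building footprint (hypothetical)
reference = (12, 10, 22, 20)   # hand-digitized footprint (hypothetical)
print(f"IoU = {iou(extracted, reference):.2f}")   # 0.67: overlap quality
```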

10. Dietzmann and Urban. L51565 Emissions Data for Stationary Engines in the Natural Gas Pipeline Transmission Industry. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), January 1988. http://dx.doi.org/10.55274/r0010130.

Abstract:
In 1972, PRCI Project PR-15-61 was initiated to measure exhaust emissions from stationary reciprocating gas engines and gas turbines used in natural gas compressor stations. Emission rates of oxides of nitrogen (NOx), hydrocarbons (HC), and carbon monoxide (CO) were measured from 59 reciprocating gas engines and nine gas turbines. During and subsequent to 1972, the PRCI laboratories and various PRCI member companies conducted emissions measurements. In 1978, PRCI Project PR-15-92 was initiated to conduct additional emissions measurements. That project involved 55 reciprocating gas engines and 11 gas turbines. All of those data were included in the May 1980 revision to the compilation of emissions data. Subsequent to the May 1980 revision, additional emissions measurements were conducted under PRCI Projects PR-15425 and PR-15-613, and emissions data were obtained from gas transmission companies and gas engine manufacturers in PR-15-613. These additional data are included in this 1988 reissue of the compilation. The emissions data presented in this data book have been obtained from projects conducted or sponsored by the Pipeline Research Committee of the Pipeline Research Council International (PRCI), individual PRCI member companies, or gas engine manufacturers. Data included are from in-use reciprocating engines and gas turbines that have been in-service for anywhere from less than a year to one or more decades. These data are generally presented as received. Editing has been limited to omitting data for entire tests only where the data are obviously in error and to identifying specific erroneous or questionable items within a test.