Dissertations / Theses on the topic 'Data editing'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Data editing.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Ivarsson, Jakob. "Real-time collaborative editing using CRDTs." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-249545.
Real-time collaborative editors such as Google Docs let users edit a shared document simultaneously and see each other's changes in real time. This report investigates how conflict-free replicated data types (CRDTs) can be used to implement a general-purpose database that supports real-time collaborative editing. The purpose of the database is to let application developers easily add collaborative behavior to their applications. The performance of the implemented database is evaluated, and the results show that the use of CRDTs leads to increased memory usage and reduced performance. Replicating the database, however, is very efficient, and conflicts are handled in a predictable way.
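The convergence property that makes CRDTs attractive for collaborative editing can be sketched with the simplest state-based CRDT, a grow-only counter. This is an illustrative sketch, not code from the thesis:

```python
# Minimal state-based CRDT sketch (grow-only counter): merge is
# commutative, associative and idempotent, so replicas converge
# regardless of the order in which they exchange state.

class GCounter:
    """Grow-only counter: one increment tally per replica id."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica id -> local increment count

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        # Global value is the sum of all replicas' local tallies.
        return sum(self.counts.values())

    def merge(self, other):
        # Element-wise max: applying the same remote state twice,
        # or in any order, yields the same result.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

# Two replicas update concurrently, then sync in both directions.
a, b = GCounter("a"), GCounter("b")
a.increment(2)
b.increment(3)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5
```

Text editing CRDTs such as those studied in the thesis use the same merge discipline over sequence structures rather than counters.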
Watanabe, Toyohide, Yuuji Yoshida, and Teruo Fukumura. "Editing model based on the object-oriented approach." IEEE, 1988. http://hdl.handle.net/2237/6930.
Gul, Shahzad. "Methods of Graphically Viewing and Editing Business Logic, Data Structure." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-146633.
Ollis, James A. J. "Optimised editing of variable data documents via partial re-evaluation." Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/12107/.
Warne, Brett M. "A system for scalable 3D visualization and editing of connectomic data." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/52774.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 57-58).
The new field of connectomics is using technological advances in microscopy and neural computation to form a detailed understanding of structure and connectivity of neurons. Using the vast amounts of imagery generated by light and electron microscopes, connectomic analysis segments the image data to define 3D regions, forming neural-networks called connectomes. Yet as the dimensions of these volumes grow from hundreds to thousands of pixels or more, connectomics is pushing the computational limits of what can be interactively displayed and manipulated in a 3D environment. The computational cost of rendering in 3D is compounded by the vast size and number of segmented regions that can be formed from segmentation analysis. As a result, most neural data sets are too large and complex to be handled by conventional hardware using standard rendering techniques. This thesis describes a scalable system for visualizing large connectomic data using multiple resolution meshes for performance while providing focused voxel rendering when editing for precision. After pre-processing a given set of data, users of the system are able to visualize neural data in real-time while having the ability to make detailed adjustments at the single voxel scale. The design and implementation of the system are discussed and evaluated.
by Brett M. Warne.
M.Eng.
Wu, Qinyi. "Partial persistent sequences and their applications to collaborative text document editing and processing." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/44916.
Pratumnopharat, Panu. "Novel methods for fatigue data editing for horizontal axis wind turbine blades." Thesis, Northumbria University, 2012. http://nrl.northumbria.ac.uk/10458/.
Full textCarpatorea, Iulian Nicolae. "A graphical traffic scenario editing and evaluation software." Thesis, Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-19438.
Full textBoskovitz, Agnes. "Data editing and logic : the covering set method from the perspective of logic /." View thesis entry in Australian Digital Theses, 2008. http://thesis.anu.edu.au/public/adt-ANU20080314.163155/index.html.
Full textBoskovitz, Agnes, and abvi@webone com au. "Data Editing and Logic: The covering set method from the perspective of logic." The Australian National University. Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20080314.163155.
Full textPearce, Richard William. "The effect of word-processing experience on editing while composing." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/28967.
Education, Faculty of
Graduate
Bradford, Jacob. "Rapid detection of safe and efficient gene editing targets across entire genomes." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/227671/1/Jacob_Bradford_Thesis.pdf.
Full textNguyen, Hoang Chuong [Verfasser], and Hans-Peter [Akademischer Betreuer] Seidel. "Data-driven approaches for interactive appearance editing / Hoang Chuong Nguyen. Betreuer: Hans-Peter Seidel." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2015. http://d-nb.info/1077007027/34.
Full textAshok, Ashish Kumar. "Predictive data mining in a collaborative editing system: the Wikipedia articles for deletion process." Thesis, Kansas State University, 2011. http://hdl.handle.net/2097/12026.
Department of Computing and Information Sciences
William H. Hsu
In this thesis, I examine the Articles for Deletion (AfD) process in Wikipedia, a large-scale collaborative editing project. Articles in Wikipedia can be nominated for deletion by registered users, who are expected to cite criteria from the Wikipedia deletion policy. For example, an article can be nominated for deletion if it contains copyright violations, vandalism, or advertising or other spam without relevant content. Articles whose subject matter does not meet the notability criteria, or any other content not suitable for an encyclopedia, are also subject to deletion. The AfD page for an article is where Wikipedians (users of Wikipedia) discuss whether the article should be deleted. Listed articles are normally discussed for at least seven days, after which the deletion process proceeds based on community consensus: the page may be kept, merged or redirected, transwikied (i.e., copied to another Wikimedia project), renamed/moved to another title, userfied (migrated to a user subpage), or deleted per the deletion policy. Users can vote to keep, delete, or merge the nominated article, and these votes can be viewed on the article's AfD page. However, this polling does not necessarily determine the outcome of the AfD process; in fact, Wikipedia policy specifically stipulates that a vote tally alone should not be considered a sufficient basis for a decision to delete or retain a page. In this research, I apply machine learning methods to determine how the final outcome of an AfD process is affected by factors such as the difference between versions of an article, the number of edits, and the number of disjoint edits (according to some contiguity constraints). My goal is to predict the outcome of an AfD by analyzing the AfD page and the editing history of the article.
The technical objectives are to extract features from the AfD discussion and version history, as reflected in the edit history page: features that reflect factors such as those discussed above, that can be tested for relevance, and that provide a basis for inductive generalization over past AfDs. Applications of such feature analysis include prediction and recommendation, with the performance goal of improving the precision and recall of AfD outcome prediction.
Demozzi, Michele. "Identification of novel active Cas9 orthologs from metagenomic data." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/337709.
Full textHedkvist, Pierre. "Collaborative Editing of Graphical Network using Eventual Consistency." Thesis, Linköpings universitet, Programvara och system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-154856.
Full textNguyen, Minh Quoc. "Toward accurate and efficient outlier detection in high dimensional and large data sets." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34657.
Full textEpps, Brian W. "A comparison of cursor control devices on target acquisition, text editing, and graphics tasks." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/50013.
Ph. D.
Enoksson, Fredrik. "Adaptable metadata creation for the Web of Data." Doctoral thesis, KTH, Medieteknik och interaktionsdesign, MID, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-154272.
QC 20141028
Crawley, Sunny Sheliese. "Rethinking phylogenetics using Caryophyllales (angiosperms), matK gene and trnK intron as experimental platform." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/77276.
Ph. D.
Nigita, Giovanni. "Knowledge bases and stochastic algorithms for mining biological data: applications on A-to-I RNA editing and RNAi." Doctoral thesis, Università di Catania, 2014. http://hdl.handle.net/10761/1555.
Full textPalladino, Chiara. "Round table report: Epigraphy Edit-a-thon: editing chronological and geographic data in ancient inscriptions: April 20-22, 2016." Epigraphy Edit-a-thon : editing chronological and geographic data in ancient inscriptions ; April 20-22, 2016 / edited by Monica Berti. Leipzig, 2016. Beitrag 15, 2016. https://ul.qucosa.de/id/qucosa%3A15477.
Full textSicilia, Gómez Álvaro. "Supporting Tools for Automated Generation and Visual Editing of Relational-to-Ontology Mappings." Doctoral thesis, Universitat Ramon Llull, 2016. http://hdl.handle.net/10803/398843.
Integration of data from heterogeneous formats and domains based on Semantic Web technologies enables us to solve their structural and semantic heterogeneity. Ontology-based data access (OBDA) is a comprehensive solution which relies on the use of ontologies as mediator schemas and relational-to-ontology mappings to facilitate data source querying. However, one of the greatest obstacles in the adoption of OBDA is the lack of tools to support the creation of mappings between physically stored data and ontologies. The objective of this research has been to develop new tools that allow non-ontology experts to create relational-to-ontology mappings. For this purpose, two lines of work have been carried out: the automated generation of relational-to-ontology mappings, and visual support for mapping editing. The tools currently available to automate the generation of mappings are far from providing a complete solution, since they rely on relational schemas and barely take into account the contents of the relational data source and the features of the ontology. However, the data may contain hidden relationships that can help in the process of mapping generation. To overcome this limitation, we have developed AutoMap4OBDA, a system that automatically generates R2RML mappings from an analysis of the contents of the relational source, taking into account the characteristics of the ontology. The system employs an ontology learning technique to infer class hierarchies, selects the string similarity metric based on the labels of the ontologies, and analyses the graph structures to generate the mappings from the structure of the ontology. Visual representation through intuitive interfaces can help non-technical users to establish mappings between a relational source and an ontology. However, existing tools for visual editing of mappings show some limitations.
In particular, the visual representation of mappings does not embrace the structure of the relational source and the ontology at the same time. To overcome this problem, we have developed Map-On, a visual web environment for the manual editing of mappings. AutoMap4OBDA has been shown to outperform existing solutions in the generation of mappings. Map-On has been applied in research projects to verify its effectiveness in managing mappings.
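As a rough illustration of what a relational-to-ontology mapping produces, the sketch below applies a direct-style mapping from rows of a relational table to RDF-like triples, the kind of output an R2RML mapping describes declaratively. The function name and URIs are invented for illustration; AutoMap4OBDA itself generates R2RML mappings rather than code like this:

```python
# Hypothetical direct mapping sketch: each row becomes one subject URI,
# with one rdf:type triple plus one triple per non-key column.

def map_rows_to_triples(table, rows, base="http://example.org/"):
    triples = []
    for row in rows:
        subject = f"{base}{table}/{row['id']}"
        # Type the subject with an ontology class derived from the table name.
        triples.append((subject, "rdf:type",
                        f"{base}ontology/{table.capitalize()}"))
        # Map each remaining column to a data property.
        for column, value in row.items():
            if column != "id":
                triples.append((subject, f"{base}ontology/{column}", value))
    return triples

rows = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
triples = map_rows_to_triples("person", rows)
assert ("http://example.org/person/1", "rdf:type",
        "http://example.org/ontology/Person") in triples
```

The point of R2RML (and of systems like AutoMap4OBDA) is that this row-to-triple logic is expressed as declarative mapping documents rather than hard-coded per table.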
Klasson, Filip, and Patrik Väyrynen. "Development of an API for creating and editing openEHR archetypes." Thesis, Linköping University, Department of Biomedical Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-17558.
Archetypes are used to standardize a way of creating, presenting and distributing health care data. In this master thesis project the open specifications of openEHR were followed. The objective of this master thesis project has been to develop a Java-based API for creating and editing openEHR archetypes. The API is a programming toolbox that can be used when developing archetype editors. Another purpose has been to implement validation functionality for archetypes. An important aspect is that the functionality of the API is well documented, as this eases the understanding of the system for future developers. The result was a Java-based API that is a platform for future archetype editors. The API kernel has optional immutability, so developed archetypes can be locked against modification by making them immutable. The API is compatible with the openEHR specifications 1.0.1; it can load and save archetypes in ADL (Archetype Definition Language) format. There is also a validation feature that verifies that an archetype follows the right structure with respect to predefined reference models. This master thesis report also presents a basic GUI proposal.
Veneziano, Dario. "Knowledge bases, computational methods and data mining techniques with applications to A-to-I RNA editing, Synthetic Biology and RNA interference." Doctoral thesis, Università di Catania, 2015. http://hdl.handle.net/10761/4085.
Full textRobson, Geoffrey. "Multiple outlier detection and cluster analysis of multivariate normal data." Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53508.
ENGLISH ABSTRACT: Outliers may be defined as observations that are sufficiently aberrant to arouse the suspicion of the analyst as to their origin. They could be the result of human error, in which case they should be corrected, but they may also be an interesting exception, and this would deserve further investigation. Identification of outliers typically consists of an informal inspection of a plot of the data, but this is unreliable for dimensions greater than two. A formal procedure for detecting outliers allows for consistency when classifying observations. It also enables one to automate the detection of outliers by using computers. The special case of univariate data is treated separately to introduce essential concepts, and also because it may well be of interest in its own right. We then consider techniques used for detecting multiple outliers in a multivariate normal sample, and go on to explain how these may be generalized to include cluster analysis. Multivariate outlier detection is based on the Minimum Covariance Determinant (MCD) subset, and is therefore treated in detail. Exact bivariate algorithms were refined and implemented, and the solutions were used to establish the performance of the commonly used heuristic, Fast-MCD.
Mohapatra, Deepankar. "Automatic Removal of Complex Shadows From Indoor Videos." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804942/.
Full textFeng, Ping Feng. "Examination of the Hollywood Movie Trailers Editing Pattern Evolution over Time by Using the Quantitative Approach of Statistical Stylistic Analysis." Master's thesis, Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/395476.
M.A.
In this study, I took the quantitative research approach of film statistical stylistic analysis to examine the editing pattern evolution of 130 Hollywood movie trailers over the past 60 years, from 1951 to 2015; prior studies on the overall evolution of Hollywood movies' editing patterns are compared and discussed. The results suggest that although movie trailers are much shorter than whole movies, the average shot lengths of the trailers still display a declining trend over the past 60 years, and the variation in shot lengths is also decreasing. Second, the motions within each frame do not change significantly over the years, while the correlation coefficients between shot lengths and the motions within shots move toward a more negative correlation over time, suggesting an editing trend in which the shorter the shot, the more motion it contains; this also aligns with the editing pattern evolution of movies overall. Last, the luminance of the trailers remains almost the same over time, which does not align with the overall evolution of movies' editing toward becoming darker and darker over the decades. Together these findings suggest that the editing rhythm of movie trailers has in general evolved in line with that of movies overall, while the visual editing pattern of color luminance has not. The study results will improve our understanding of how Hollywood movie trailers' editing patterns and styles have evolved over time and pave the way for future advertising studies and cognitive psychology studies on the audience's attention, immersion and emotional response to various editing patterns of movie trailers.
Temple University--Theses
Clause, James Alexander. "Enabling and supporting the debugging of software failures." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39514.
Full textKuru, Kaya. "A Novel Report Generation Approach For Medical Applications: The Sisds Methodology And Its Applications." Phd thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611719/index.pdf.
their strengths and deficiencies are revealed to shed light on how to set up an ideal medical reporting type. This thesis presents a new medical reporting method, namely the "Structured, Interactive, Standardized and Decision Supporting Method" (SISDS), which encompasses most of the favorable features of the existing medical reporting methods while removing most of their deficiencies, such as inefficiency and cognitive overload, and introducing promising new advantages. The method enables professionals to produce multilingual medical reports much more efficiently than the existing approaches, in a novel way, by allowing free-text-like data entry in a structured form. The proposed method is shown to be more effective in many respects, such as facilitating complete and accurate data collection and providing opportunities to build DDSS without tedious pre-processing and data preparation steps, mainly helping health care professionals practice better medicine.
Seiss, Mark Thomas. "Improving Survey Methodology Through Matrix Sampling Design, Integrating Statistical Review Into Data Collection, and Synthetic Estimation Evaluation." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/47968.
Ph. D.
Busi, Gioia. "Changes in the translation industry: A prospectus on the near future of innovation in machine translation." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019.
Martin, Stéphane. "Edition collaborative des documents semi-structurés." Phd thesis, Université de Provence - Aix-Marseille I, 2011. http://tel.archives-ouvertes.fr/tel-00684778.
Full textFaudemay, Pascal. "Un processeur VLSI pour les opérations de bases de données." Paris 6, 1986. http://www.theses.fr/1986PA066468.
Full textRountree, Richard John. "Novel technologies for the manipulation of meshes on the CPU and GPU : a thesis presented in partial fulfilment of the requirements for the degree of Masters of Science in Computer Science at Massey University, Palmerston North, New Zealand." Massey University, 2007. http://hdl.handle.net/10179/700.
Full textDuraffourg, Simon. "Analyse de la tenue en endurance de caisses automobiles soumises à des profils de mission sévérisés." Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1142.
A body-in-white (BIW) is a complex structure consisting of several elements that are made of different materials and assembled mainly by spot welds, generally above 80%. At the design stage, several criteria must be verified numerically and experimentally on the car prototype, among them BIW durability. In the current economic context, the policy of reducing energy and other costs has led automotive companies to optimize vehicle performance, in particular by very substantially reducing the mass of the BIW. As a consequence, structural design problems have appeared. To validate the design, test benches are carried out upstream on a prototype vehicle. These are very costly to the manufacturer, especially when fatigue tests do not confirm the crack areas identified by numerical simulations. This thesis focuses on numerical BIW durability analysis and covers all the numerical analyses to be implemented to study BIW durability behavior. The main objective is to develop a numerical simulation process that ensures a good level of durability prediction, that is, a good level of correlation between test bench results and numerical fatigue life predictions. This thesis has led to: analyzing the BIW mechanical behavior and the excitation forces applied to the BIW during validation tests; establishing a new fatigue data editing technique to simplify load signals; creating a new finite element spot weld model; and developing a new fatigue life prediction method for spot welds. These studies have thus improved the level of BIW fatigue life prediction by identifying the majority of critical areas on the full BIW, reliably assessing the relative criticality of each area, and accurately estimating the lifetime associated with each of these areas.
Valdés, Diana. "Study and Edition of La dama presidente by Francisco de Leiva Ramírez de Arellano." Scholar Commons, 2017. https://scholarcommons.usf.edu/etd/7449.
Full textChan, Yin-hing Yolande. "The normative data and factor structure of the culture-free self-esteem inventory-form a-second edition in Hong Kong adolescents." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B29740253.
Full textBlagojevic, Tim. "A revision of IEC 60891 2nd Edition 2009-12: Data correction procedures 1 and 2 PV module performance at Murdoch University." Thesis, Blagojevic, Tim (2016) A revision of IEC 60891 2nd Edition 2009-12: Data correction procedures 1 and 2 PV module performance at Murdoch University. Honours thesis, Murdoch University, 2016. https://researchrepository.murdoch.edu.au/id/eprint/33935/.
Full textKugel, Rudolf. "Ein Beitrag zur Problematik der Integration virtueller Maschinen." Phd thesis, [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=980016371.
Full textvan, Rensburg Rachel Janse. "Resource Description and Access (RDA): continuity in an ever-fluxing information age with reference to tertiary institutions in the Western Cape." University of the Western Cape, 2018. http://hdl.handle.net/11394/6380.
Although Resource Description and Access (RDA) has been discussed extensively amongst the ranks of cataloguers internationally, no research on the perceptions of South African cataloguers was available at the time of this research. The aim of this study was to determine how well RDA was faring during the study's timeframe, to give a detailed description regarding cataloguer perceptions within a higher education setting in South Africa. Furthermore, to determine whether the implementation of RDA has overcome most of the limitations that AACR2 had within a digital environment, to identify advantages and/or perceived limitations of RDA as well as to assist cataloguers to adopt and implement the new standard effectively. The study employed a qualitative research design assisted by a phenomenological philosophy to gain insight into how cataloguers experienced the implementation and adoption of RDA by means of two concurrent web-based questionnaires. The study concluded that higher education cataloguing professionals residing in the Western Cape were decidedly positive towards the new cataloguing standard. Although there were some initial reservations, they were overcome to such an extent that ultimately no real limitations were identified, and that RDA has indeed overcome most of the limitations displayed by AACR2. Many advantages of RDA were identified, and participants expressed excitement about the future capabilities of RDA as it continues toward a link-data milieu, making library metadata more easily available.
Janse, van Rensburg Rachel. "Resource Description and Access (RDA): continuity in an ever-fluxing information age with reference to tertiary institutions in the Western Cape." University of the Western Cape, 2018. http://hdl.handle.net/11394/6267.
Although Resource Description and Access (RDA) has been discussed extensively amongst the ranks of cataloguers internationally, no research on the perceptions of South African cataloguers was available at the time of this research. The aim of this study was to determine how well RDA was faring during the study's timeframe, to give a detailed description regarding cataloguer perceptions within a higher education setting in South Africa. Furthermore, to determine whether the implementation of RDA has overcome most of the limitations that AACR2 had within a digital environment, to identify advantages and/or perceived limitations of RDA as well as to assist cataloguers to adopt and implement the new standard effectively. The study employed a qualitative research design assisted by a phenomenological philosophy to gain insight into how cataloguers experienced the implementation and adoption of RDA by means of two concurrent web-based questionnaires. The study concluded that higher education cataloguing professionals residing in the Western Cape were decidedly positive towards the new cataloguing standard. Although there were some initial reservations, they were overcome to such an extent that ultimately no real limitations were identified, and that RDA has indeed overcome most of the limitations displayed by AACR2. Many advantages of RDA were identified, and participants expressed excitement about the future capabilities of RDA as it continues toward a link-data milieu, making library metadata more easily available. As this research revealed a distinctly positive attitude from cataloguers, two main matters for future research remain: first, why the South African participants in this study voiced almost no perceived limitations of RDA as a cataloguing standard, especially given that this was not a global phenomenon; and second, a deeper look at how participants experienced RDA training, as this might be closely linked to the reasons why they did not mention more limitations.
Wesch, Andreas. "Kommentierte Edition und linguistische Untersuchung der "Información de los Jerónimos" (Santo Domingo 1517) : Mit Editionen der "Ordenanzas para el tratamiento de los Indios" (Leyes de Burgos, Burgos / Valladolid 1512/13) und der "Instrucción dada a los Padres de la Orden de San Jerónimo" (Madrid 1516) /." Tübingen : G. Narr Verlag, 1993. http://catalogue.bnf.fr/ark:/12148/cb39172796h.
Full text
Potet, Marion. "Vers l'intégration de post-éditions d'utilisateurs pour améliorer les systèmes de traduction automatiques probabilistes." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00995104.
Full text
Pierron, Andréa. ""L'Ombre de votre espérance" : repères pour une histoire plastique des revues d'artistes expérimentaux au XXe siècle." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCA085/document.
Full text
This PhD thesis analyzes periodicals created during the XXth century by visual artists and filmmakers operating in the realm of the avant-gardes and experimental cinema. The journals become plastic, conceptual, complex, and composite objects through the interplay between text and image, the reproduction of images, and the making of photomontages. How do these artists' journals show signs of an experimental approach? How do artists' journals contribute to the critical and plastic history of film? The dissertation aims to understand the unique ways visual artists and filmmakers use journals to create, defend, document, visualize and analyze certain cinematic paradigms. To what extent do the journals become, in turn, experimental works about the relationships between text and image? We study how magazines exhibit the various plastic, aesthetic, theoretical, and poetical dimensions at stake in the cinematic image, relying on specific technical, graphic and visual undertakings, and how they call perception into question. Journals are instrumental in circulating the ideas of their editors, whether collective or individual. How do journals support the editors' efforts in building an alternative cinema domain? Dada I edited by Tristan Tzara and Hans Arp (1916), Dada Sinn der Welt by John Heartfield and George Grosz (1921), Le Promenoir by Jean Epstein, Pierre Deval and Jean Lacroix (1921-1922), G. für elementare Gestaltung by Hans Richter (1923-1926), Close Up by Kenneth Macpherson, Bryher and H.D. (1927-1933), Film Culture by Jonas Mekas (1955-1996) and Cantrill's Filmnotes by Arthur and Corinne Cantrill (1971-2000) form the corpus of this PhD thesis, which aims to contribute to a plastic history of experimental publications.
D'Ambrosio, Antonio. "Tree based methods for data editing and preference rankings." Tesi di dottorato, 2008. http://www.fedoa.unina.it/2746/1/D%27Ambrosio_Statistica.pdf.
Full text
Cheng, Chengyen, and 鄭丞晏. "Evaluating Data Editing and Imputation Methods based on Monte Carlo Technique." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/66349977077326286387.
Full text
National Defense University, College of Management
Department of Financial Management
99
Most surveys contain missing data, and a database with missing values can seriously degrade the quality of data analysis, so handling missing values properly is an important issue. Although a number of imputation methods have been proposed, no single method handles all types of missing values well. The main purpose of this paper is to identify imputation methods that are appropriate both in running time and for the data type at hand. Starting from a database without missing values, pseudo-random numbers are used to delete values from selected fields, and the imputed values are then compared with the original ones. We apply three imputation methods (regression imputation, EM imputation, and MCMC imputation) and compare them, together with their running times, on data whose fields are highly related and weakly related. The results provide researchers with a rule for selecting an appropriate imputation method when dealing with different types of data.
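The evaluation scheme the abstract describes (delete values from complete data pseudo-randomly, impute, compare against the originals) can be sketched in a few lines. This is a minimal illustration, not the thesis's code; the data, missingness rate, and the mean-imputation baseline are assumptions, and only regression imputation of the three studied methods is shown:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a complete dataset with two highly related fields
n = 1000
x = rng.normal(0.0, 1.0, n)
y = 0.8 * x + rng.normal(0.0, 0.6, n)

# Punch pseudo-random (MCAR) holes in y: about 20% missing
mask = rng.random(n) < 0.2
y_obs = y.copy()
y_obs[mask] = np.nan

# Regression imputation: fit y ~ x on complete cases, predict the missing cells
complete = ~np.isnan(y_obs)
slope, intercept = np.polyfit(x[complete], y_obs[complete], 1)
y_reg = y_obs.copy()
y_reg[mask] = intercept + slope * x[mask]

# Naive mean imputation as a baseline
y_mean = y_obs.copy()
y_mean[mask] = np.nanmean(y_obs)

# Compare imputed values with the originals on the deleted cells only
def rmse(est):
    return float(np.sqrt(np.mean((est[mask] - y[mask]) ** 2)))

print("regression RMSE:", rmse(y_reg))
print("mean       RMSE:", rmse(y_mean))
```

On highly related fields like these, regression imputation should recover the deleted values far better than the mean baseline; on weakly related fields the gap narrows, which is the kind of data-type dependence the study measures.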
Yang, Huei-Fang. "Reconstruction of 3D Neuronal Structures from Densely Packed Electron Microscopy Data Stacks." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-08-10189.
Full text
Shaw, Peter E. "Advances in cluster editing: linear FPT kernels and comparative implementations." Thesis, 2010. http://hdl.handle.net/1959.13/928253.
Full text
Experience has shown that clustering objects into groups is a useful way to analyze and order information. It turns out, however, that many clustering problems are intractable. Several heuristic and approximation algorithms exist, but in many applications an optimum solution is desired. Finding an optimum result for the Cluster Edit problem has proven non-trivial: Cluster Edit is NP-hard [KM86] and APX-hard, and therefore cannot be approximated within a factor of (1 + ϵ) unless P = NP [SST04]. The algorithmic technique of parameterized complexity has proven an effective tool for addressing hard problems. Recent publications have shown that the Cluster Edit problem is fixed-parameter tractable (FPT); that is, there is a fixed-parameter algorithm that solves it. Traditionally, algorithms in computer science are evaluated in terms of the time needed to compute the output as a function of input size alone. In practice, however, most real data contains inherent structure. For FPT algorithms, permitting one or more parameters to be given in the input to further define the question allows the algorithm to take advantage of any inherent structure in the data [ECFLR05]. A key concept of FPT is kernelization, that is, reducing a problem instance to a core hard sub-problem. The previous best kernelization technique for Cluster Edit reduced the input to within k² vertices [GGHN05] when parameterized by k, the edit distance: the number of edit operations required to transform the input graph into a cluster graph (a disjoint union of cliques). Experimental comparisons in [DLL+06] showed that significant improvements were obtained using this reduction rule for the Cluster Edit problem.
The study reported in this thesis presents three polynomial-time, many-to-one kernelization algorithms for the Cluster Edit problem; the best of these produces a linear kernel of at most 6k vertices. In this thesis, we discuss how new FPT techniques, including the extremal method, a compression routine, and modelled crown reductions [DFRS04], can be used to kernelize the input for the Cluster Edit problem. Using these new kernelization techniques, it has been possible to increase the number of vertices in the data sets that can be solved optimally from the previous maximum of around 150 vertices to over 900. More importantly, the edit distance of the graphs that could be solved has also increased, from around k = 40 to more than k = 400. This study also provides a comparison of three inductive algorithmic techniques: i) a compression routine using a constant-factor approximation (Compression Crown Rule Search Algorithm); ii) the extremal method (coordinatized kernel) [PR05], using a constructive form of the boundary lemma (Greedy Crown Rule Search Algorithm); iii) the extremal method, using an auxiliary (TWIN) graph structure (Crown Rule TWIN Search Algorithm). Algorithms derived using each of the above techniques to obtain linear kernels for the Cluster Edit problem have been evaluated on a variety of data with different exploratory properties. Comparisons have been made in terms of reduction in kernel size, lower bounds obtained, and execution time. Novel solutions have been required to obtain, within a reasonable time, approximations for the Cluster Edit problem that are within a factor of four of the edit distance (minimum solution size). Most approximation methods performed very badly on some graphs and well on others; without any guide to the quality of the result, a very bad result might mistakenly be assumed to be close to optimum. Our study has found that using the highest available lower bound for the approximation is by itself insufficient to improve the result. However, by combining the highest lower bound obtained with the reduction obtained through kernelization, a 30-fold improvement in the approximation performance ratio is achieved.
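The crown-rule kernels the thesis develops are not reproduced here, but the underlying fact that Cluster Edit is FPT in the edit distance k can be illustrated with the classic O(3^k) bounded search tree: while the graph contains a conflict triple (an induced P3), branch on the three edits that resolve it. This is a textbook sketch under an assumed adjacency-set representation, not the thesis's algorithm:

```python
from itertools import combinations

def find_p3(adj):
    """Return an induced P3 (u, v, w): edges uv and vw exist but uw does not."""
    for v in adj:
        for u, w in combinations(adj[v], 2):
            if w not in adj[u]:
                return u, v, w
    return None

def cluster_edit(adj, k):
    """Can adj be turned into a disjoint union of cliques with at most k edits?

    adj maps each vertex to the set of its neighbours (kept symmetric).
    The graph is restored before returning, so the caller's adj is unchanged.
    """
    if k < 0:
        return False
    p3 = find_p3(adj)
    if p3 is None:
        return True  # already a cluster graph
    u, v, w = p3
    # Branch 1: delete edge uv
    adj[u].discard(v); adj[v].discard(u)
    ok = cluster_edit(adj, k - 1)
    adj[u].add(v); adj[v].add(u)
    if ok:
        return True
    # Branch 2: delete edge vw
    adj[v].discard(w); adj[w].discard(v)
    ok = cluster_edit(adj, k - 1)
    adj[v].add(w); adj[w].add(v)
    if ok:
        return True
    # Branch 3: add edge uw
    adj[u].add(w); adj[w].add(u)
    ok = cluster_edit(adj, k - 1)
    adj[u].discard(w); adj[w].discard(u)
    return ok

# An induced path a-b-c needs exactly one edit to become a cluster graph
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
print(cluster_edit(path, 0), cluster_edit(path, 1))  # False True
```

The search tree has at most 3^k leaves, so the running time is exponential only in the parameter k, not in the graph size; kernelization rules such as the crown reductions above shrink the instance before this kind of search is run.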
Boskovitz, Agnes. "Data Editing and Logic: The covering set method from the perspective of logic." Phd thesis, 2008. http://hdl.handle.net/1885/49318.
Full text