To see the other types of publications on this topic, follow the link: Data editing.

Dissertations / Theses on the topic 'Data editing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Data editing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Ivarsson, Jakob. "Real-time collaborative editing using CRDTs." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-249545.

Full text
Abstract:
Real-time collaborative editors such as Google Docs allow users to edit a shared document simultaneously and see each other's changes in real time. This thesis investigates how conflict-free replicated data types (CRDTs) can be used to implement a general-purpose data store that supports real-time collaborative editing of semi-structured data. The purpose of the data store is to let application developers easily add collaborative behaviour to any application. The performance of the implemented data store is evaluated, and the results show that using CRDTs comes with a performance and memory penalty. However, replication over the internet is very efficient, and concurrent updates are handled in a predictable way in most cases.
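A minimal sketch of the CRDT idea this abstract relies on: a state-based last-writer-wins register whose merge is commutative and idempotent, so replicas converge regardless of the order in which they exchange state. The class, timestamps and tie-breaking scheme are invented for illustration and are not the data store built in the thesis.

```python
from dataclasses import dataclass

@dataclass
class LWWRegister:
    """State-based last-writer-wins register: a minimal CRDT sketch."""
    value: str = ""
    timestamp: float = 0.0   # time of the last accepted write
    node_id: str = ""        # tie-breaker so concurrent writes resolve deterministically

    def set(self, value, timestamp, node_id):
        # Accept a write only if it is newer than the current state.
        if (timestamp, node_id) > (self.timestamp, self.node_id):
            self.value, self.timestamp, self.node_id = value, timestamp, node_id

    def merge(self, other):
        # Merging is commutative, associative and idempotent, so replicas
        # converge no matter in which order states are exchanged.
        self.set(other.value, other.timestamp, other.node_id)

# Two replicas apply concurrent writes and then exchange state.
a, b = LWWRegister(), LWWRegister()
a.set("draft from Alice", 1.0, "alice")
b.set("draft from Bob", 2.0, "bob")
a.merge(b); b.merge(a)
assert a == b   # both replicas converge to the later write
```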
APA, Harvard, Vancouver, ISO, and other styles
2

Watanabe, Toyohide, Yuuji Yoshida, and Teruo Fukumura. "Editing model based on the object-oriented approach." IEEE, 1988. http://hdl.handle.net/2237/6930.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gul, Shahzad. "Methods of Graphically Viewing and Editing Business Logic, Data Structure." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-146633.

Full text
Abstract:
For financial institutions, such as lenders who want to lend money to borrowers or invoice-factoring companies looking to buy invoices, credit policy is a primary concern and a vital measure of the risk or marketability associated with a particular issue. The goal of credit risk management is to maximize profitability while keeping the risk of credit loss within acceptable parameters. In the wake of high-profile bankruptcies, there has been recent speculation about the correctness and reliability of credit risk prediction. Based on these issues, there is sufficient motivation to develop a more useful policy analysis system for predicting the creditworthiness of customers. A system is investigated, designed and implemented that fulfils the requirements of the Risk department at Klarna. The result of the thesis work is KLAPAS, which enables users to create custom policies and apply them to selected sets of transactions.
APA, Harvard, Vancouver, ISO, and other styles
4

Ollis, James A. J. "Optimised editing of variable data documents via partial re-evaluation." Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/12107/.

Full text
Abstract:
With the advent of digital printing presses and the continued development of associated technologies, variable data printing (VDP) is becoming more and more common. VDP allows for a series of data instances to be bound to a single template document in order to produce a set of result document instances, each customized depending upon the data provided. As it gradually enters the mainstream of digital publishing there is a need for appropriate and powerful editing tools suitable for use by creative professionals. This thesis investigates the problem of representing variable data documents in an editable visual form, and focuses on the technical issues involved with supporting such an editing model. Using a document processing model where the document is produced from a data set and an appropriate programmatic transform, this thesis considers an interactive editor developed to allow visual manipulation of the result documents. It shows how the speed of the reprocessing necessary in such an interactive editing scenario can be increased by selectively re-evaluating only the required parts of the transformation, including how these pieces of the transformation can be identified and subsequently re-executed. The techniques described are demonstrated using a simplified document processing model that closely resembles variable data document frameworks. A workable editor is also presented that builds on this processing model and illustrates its advantages. Finally, an analysis of the performance of the proposed framework is undertaken including a comparison to a standard processing pipeline.
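The core idea of partial re-evaluation, recomputing only the parts of the result document whose inputs changed, can be sketched in a few lines. The template, records and class below are invented for illustration and are far simpler than the variable-data-document framework the thesis describes.

```python
# A hypothetical document transform: one output page per data record.
def render_record(template: str, record: dict) -> str:
    return template.format(**record)

class PartialReevaluator:
    """Re-render only the pages whose underlying data record changed."""
    def __init__(self, template, records):
        self.template = template
        self.records = list(records)
        self.pages = [render_record(template, r) for r in self.records]

    def edit_record(self, index, **changes):
        # Selective re-evaluation: touch one record, re-render one page,
        # and leave every other page of the result document untouched.
        self.records[index] = {**self.records[index], **changes}
        self.pages[index] = render_record(self.template, self.records[index])

doc = PartialReevaluator("Dear {name}, your balance is {balance}.",
                         [{"name": "Ada", "balance": "10"},
                          {"name": "Bob", "balance": "25"}])
doc.edit_record(1, balance="30")   # only page 1 is recomputed
print(doc.pages)
```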
APA, Harvard, Vancouver, ISO, and other styles
5

Warne, Brett M. "A system for scalable 3D visualization and editing of connectomic data." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/52774.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 57-58).
The new field of connectomics is using technological advances in microscopy and neural computation to form a detailed understanding of structure and connectivity of neurons. Using the vast amounts of imagery generated by light and electron microscopes, connectomic analysis segments the image data to define 3D regions, forming neural-networks called connectomes. Yet as the dimensions of these volumes grow from hundreds to thousands of pixels or more, connectomics is pushing the computational limits of what can be interactively displayed and manipulated in a 3D environment. The computational cost of rendering in 3D is compounded by the vast size and number of segmented regions that can be formed from segmentation analysis. As a result, most neural data sets are too large and complex to be handled by conventional hardware using standard rendering techniques. This thesis describes a scalable system for visualizing large connectomic data using multiple resolution meshes for performance while providing focused voxel rendering when editing for precision. After pre-processing a given set of data, users of the system are able to visualize neural data in real-time while having the ability to make detailed adjustments at the single voxel scale. The design and implementation of the system are discussed and evaluated.
by Brett M. Warne.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
6

Wu, Qinyi. "Partial persistent sequences and their applications to collaborative text document editing and processing." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/44916.

Full text
Abstract:
In a variety of text document editing and processing applications, it is necessary to keep track of the revision history of text documents by recording changes and the metadata of those changes (e.g., user names and modification timestamps). The recent Web 2.0 document editing and processing applications, such as real-time collaborative note taking and wikis, require fine-grained shared access to collaborative text documents as well as efficient retrieval of metadata associated with different parts of collaborative text documents. Current revision control techniques only support coarse-grained shared access and are inefficient to retrieve metadata of changes at the sub-document granularity. In this dissertation, we design and implement partial persistent sequences (PPSs) to support real-time collaborations and manage metadata of changes at fine granularities for collaborative text document editing and processing applications. As a persistent data structure, PPSs have two important features. First, items in the data structure are never removed. We maintain necessary timestamp information to keep track of both inserted and deleted items and use the timestamp information to reconstruct the state of a document at any point in time. Second, PPSs create unique, persistent, and ordered identifiers for items of a document at fine granularities (e.g., a word or a sentence). As a result, we are able to support consistent and fine-grained shared access to collaborative text documents by detecting and resolving editing conflicts based on the revision history as well as to efficiently index and retrieve metadata associated with different parts of collaborative text documents. We demonstrate the capabilities of PPSs through two important problems in collaborative text document editing and processing applications: data consistency control and fine-grained document provenance management. The first problem studies how to detect and resolve editing conflicts in collaborative text document editing systems. We approach this problem in two steps. In the first step, we use PPSs to capture data dependencies between different editing operations and define a consistency model more suitable for real-time collaborative editing systems. In the second step, we extend our work to the entire spectrum of collaborations and adapt transactional techniques to build a flexible framework for the development of various collaborative editing systems. The generality of this framework is demonstrated by its capabilities to specify three different types of collaborations as exemplified in the systems of RCS, MediaWiki, and Google Docs respectively. We precisely specify the programming interfaces of this framework and describe a prototype implementation over Oracle Berkeley DB High Availability, a replicated database management engine. The second problem of fine-grained document provenance management studies how to efficiently index and retrieve fine-grained metadata for different parts of collaborative text documents. We use PPSs to design both disk-economic and computation-efficient techniques to index provenance data for millions of Wikipedia articles. Our approach is disk economic because we only save a few full versions of a document and only keep delta changes between those full versions. Our approach is also computation-efficient because we avoid the necessity of parsing the revision history of collaborative documents to retrieve fine-grained metadata. 
Compared to MediaWiki, the revision control system for Wikipedia, our system uses less than 10% of disk space and achieves at least an order of magnitude speed-up to retrieve fine-grained metadata for documents with thousands of revisions.
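A simplified, single-user sketch of the partial persistent sequence described above: items are never physically removed, deletions only record a timestamp, and any past revision can be reconstructed. The class and field names are invented; the version in the dissertation additionally handles concurrency and fine-grained persistent identifiers.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Item:
    value: str
    inserted_at: int                  # revision number of the insertion
    deleted_at: Optional[int] = None  # revision number of the deletion, if any

class PartialPersistentSequence:
    """Items are never removed; deletions only set a timestamp, so the
    document state at any past revision can be reconstructed."""
    def __init__(self):
        self.items: List[Item] = []
        self.revision = 0

    def insert(self, pos: int, value: str) -> int:
        self.revision += 1
        self.items.insert(pos, Item(value, self.revision))
        return self.revision

    def delete(self, pos: int) -> int:
        self.revision += 1
        self.items[pos].deleted_at = self.revision
        return self.revision

    def snapshot(self, revision: int) -> str:
        # Reconstruct the document as it looked at the given revision.
        return "".join(i.value for i in self.items
                       if i.inserted_at <= revision
                       and (i.deleted_at is None or i.deleted_at > revision))

pps = PartialPersistentSequence()
for ch in "cat":
    pps.insert(len(pps.items), ch)
r = pps.delete(1)                # logically remove the 'a'
print(pps.snapshot(r - 1))       # 'cat' -- state before the deletion
print(pps.snapshot(r))           # 'ct'  -- state after the deletion
```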
APA, Harvard, Vancouver, ISO, and other styles
7

Pratumnopharat, Panu. "Novel methods for fatigue data editing for horizontal axis wind turbine blades." Thesis, Northumbria University, 2012. http://nrl.northumbria.ac.uk/10458/.

Full text
Abstract:
Wind turbine blades are the most critical components of wind turbines. Full-scale blade fatigue testing is required to verify that the blades possess the strength and service life specified in the design. Unfortunately, the test must be run for a long time period. This problem led blade testing laboratories to accelerate fatigue testing time. To achieve this objective, this thesis proposes two novel methods, called the STFT- and WT-based fatigue damage part extracting methods, which are based on the short-time Fourier transform (STFT) and the wavelet transform (WT), respectively. For WT, different wavelet functions (Morl, Meyr, Dmey, Mexh and DB30) are studied. An aerodynamic computer code, HAWTsimulator, based on blade element momentum theory, has been developed. This code is used to generate the sets of aerodynamic loads acting along the span of a 'SERI-8 wind turbine blade' in the range of wind speed from cut-in to cut-out. SERI-8 blades are installed on 65 kW wind turbines. Each set of aerodynamic loads is applied to the finite element model of the SERI-8 blade in structural software (ANSYS) to generate a plot of von Mises stress at the critical point on the blade versus wind speed. By combining this relationship with the wind speed data, the stress-time history at the critical point on the SERI-8 blade can be generated. It has the same sampling rate and length as the wind speed data. A concept of applying the accumulative power spectral density (AccPSD) distribution over time to identify fatigue damage events contained in the stress-time history is introduced in this thesis. For STFT, AccPSD is the sum of the power spectral density (PSD) of each frequency band at each time interval in the spectrogram. For WT, AccPSD is the sum of the PSD of the wavelet coefficients of each scale at each time interval in the scalogram. It has been found that the locations of AccPSD spikes indicate where the fatigue damage events are. Based on an appropriate AccPSD level, called a cutoff level, the fatigue damage events can be identified at their time locations in the stress-time history. A fatigue computer code, HAWTfatigue, based on the stress-life approach and Miner's linear cumulative damage rule, has been developed. The code is used for evaluating the fatigue damage and service lifetime of a horizontal axis wind turbine blade. In addition, the author has implemented the STFT- and WT-based fatigue damage part extracting methods in the code. Fatigue damage parts are extracted from the stress-time history and concatenated to form the edited stress-time history. The effectiveness of the STFT- and WT-based algorithms is assessed by comparing the reduction in length and the difference in fatigue damage per repetition of the edited stress-time histories generated by STFT and WT to those of the edited stress-time history generated by an existing method, Time Correlated Fatigue Damage (TCFD), used by commercial software. The findings of this research project are as follows: 1. The comparison of the reduction in length of the edited stress-time histories generated by TCFD, STFT and WT indicates that WT with the Mexh wavelet has the maximum reduction of 20.77% in length with respect to the original length, followed by Meyr (20.24%), Dmey (19.70%), Morl (19.66%), DB30 (19.19%), STFT (15.38%), and TCFD (10.18%), respectively. 2. The comparison of the retained fatigue damage per repetition in the edited stress-time histories generated by TCFD, STFT and WT indicates that TCFD retains fatigue damage per repetition that is 0.076% less than the original fatigue damage per repetition, followed by Mexh (0.068%), DB30 (0.063%), STFT (0.045%), Meyr (0.032%), Dmey (0.014%), and Morl (0.013%), respectively. 3. Both the comparison of reduction in length and the comparison of retained fatigue damage per repetition of the edited stress-time histories suggest that WT is the best method for extracting fatigue damage parts from the given stress-time history. It has also been shown that not only do STFT and WT improve the accuracy of the fatigue damage per repetition retained in the edited stress-time histories, but they also produce edited stress-time histories shorter than TCFD does. Thus, STFT and WT are useful methods for performing accelerated fatigue tests. 4. STFT is controlled by two main factors, window size and cutoff level, while WT is controlled by three main factors, wavelet decomposition level, cutoff level and wavelet type. To conclude, the edited stress-time history can be used by blade testing laboratories to accelerate fatigue testing time. The STFT- and WT-based fatigue damage part extracting methods proposed in this thesis are suggested as alternative methods for accelerating fatigue testing time, especially in the field of wind turbine engineering.
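A rough sketch of the STFT side of the method: compute an accumulative PSD (AccPSD) per time segment of a stress signal and keep only the segments above a cutoff level. The synthetic signal, window length and 90th-percentile cutoff are invented for illustration and are not taken from HAWTfatigue.

```python
import numpy as np
from scipy.signal import stft

fs = 100.0                                   # sampling rate [Hz]
t = np.arange(0, 60, 1 / fs)
stress = 5 * np.sin(2 * np.pi * 0.5 * t)     # benign background loading
stress[2000:2300] += 40 * np.sin(2 * np.pi * 8 * t[2000:2300])  # damaging burst

f, seg_times, Zxx = stft(stress, fs=fs, nperseg=256)
acc_psd = np.sum(np.abs(Zxx) ** 2, axis=0)   # AccPSD: sum of PSD over frequency bands

cutoff = np.percentile(acc_psd, 90)          # cutoff level (a tuning factor)
damaging_segments = seg_times[acc_psd > cutoff]
print("Retained time segments:", damaging_segments)
```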
APA, Harvard, Vancouver, ISO, and other styles
8

Carpatorea, Iulian Nicolae. "A graphical traffic scenario editing and evaluation software." Thesis, Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-19438.

Full text
Abstract:
An interactive tool is developed for the purpose of rapid exploration of diverse traffic scenarios. The focus is on rapidity of design and evaluation rather than on physical realism. Core aspects are the ability to define the essential elements of a traffic scenario, such as a road network and vehicles. Cubic Bezier curves are used to design the roads and vehicle trajectories. A prediction algorithm is used to visualize future vehicle poses and collisions, and thus provides a means for evaluating the scenario. The program was created in C++ with the help of the Qt libraries.
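A few lines illustrating the cubic Bezier primitive the tool uses for roads and vehicle trajectories; the control points below are invented for this example.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Closed-form cubic Bezier evaluation, B(t) for t in [0, 1]."""
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# A gentle right-hand bend defined by four 2D control points.
p0, p1, p2, p3 = map(np.array, ([0, 0], [40, 5], [70, 30], [100, 80]))
samples = cubic_bezier(p0, p1, p2, p3, np.linspace(0, 1, 11))
print(samples.round(1))   # points along the road centreline / trajectory
```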
APA, Harvard, Vancouver, ISO, and other styles
9

Boskovitz, Agnes. "Data editing and logic : the covering set method from the perspective of logic /." View thesis entry in Australian Digital Theses, 2008. http://thesis.anu.edu.au/public/adt-ANU20080314.163155/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Boskovitz, Agnes, and abvi@webone com au. "Data Editing and Logic: The covering set method from the perspective of logic." The Australian National University. Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20080314.163155.

Full text
Abstract:
Errors in collections of data can cause significant problems when those data are used. Therefore the owners of data find themselves spending much time on data cleaning. This thesis is a theoretical work about one part of the broad subject of data cleaning - to be called the covering set method. More specifically, the covering set method deals with data records that have been assessed by the use of edits, which are rules that the data records are supposed to obey. The problem solved by the covering set method is the error localisation problem, which is the problem of determining the erroneous fields within data records that fail the edits. In this thesis I analyse the covering set method from the perspective of propositional logic. I demonstrate that the covering set method has strong parallels with well-known parts of propositional logic. The first aspect of the covering set method that I analyse is the edit generation function, which is the main function used in the covering set method. I demonstrate that the edit generation function can be formalised as a logical deduction function in propositional logic. I also demonstrate that the best-known edit generation function, written here as FH (standing for Fellegi-Holt), is essentially the same as propositional resolution deduction. Since there are many automated implementations of propositional resolution, the equivalence of FH with propositional resolution gives some hope that the covering set method might be implementable with automated logic tools. However, before any implementation, the other main aspect of the covering set method must also be formalised in terms of logic. This other aspect, to be called covering set correctibility, is the property that must be obeyed by the edit generation function if the covering set method is to successfully solve the error localisation problem. In this thesis I demonstrate that covering set correctibility is a strengthening of the well-known logical properties of soundness and refutation completeness. What is more, the proofs of the covering set correctibility of FH and of the soundness / completeness of resolution deduction have strong parallels: while the proof of soundness / completeness depends on the reduction property for counter-examples, the proof of covering set correctibility depends on the related lifting property. In this thesis I also use the lifting property to prove the covering set correctibility of the function defined by the Field Code Forest Algorithm. In so doing, I prove that the Field Code Forest Algorithm, whose correctness has been questioned, is indeed correct. The results about edit generation functions and covering set correctibility apply to both categorical edits (edits about discrete data) and arithmetic edits (edits expressible as linear inequalities). Thus this thesis gives the beginnings of a theoretical logical framework for error localisation, which might give new insights to the problem. In addition, the new insights will help develop new tools using automated logic tools. What is more, the strong parallels between the covering set method and aspects of logic are of aesthetic appeal.
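To make the parallel with resolution concrete, here is a toy propositional resolution step over clauses represented as sets of literals. The example "edits" are invented and far simpler than the categorical and arithmetic edits treated in the thesis.

```python
from itertools import product

def resolve(c1, c2):
    """Return all resolvents of two clauses.

    A clause is a frozenset of literals; a literal is (variable, polarity)."""
    resolvents = set()
    for (v1, s1), (v2, s2) in product(c1, c2):
        if v1 == v2 and s1 != s2:            # complementary pair found
            resolvents.add(frozenset((c1 | c2) - {(v1, s1), (v2, s2)}))
    return resolvents

# (age_child OR NOT married) resolved against (married OR NOT spouse_recorded)
c1 = frozenset({("age_child", True), ("married", False)})
c2 = frozenset({("married", True), ("spouse_recorded", False)})
print(resolve(c1, c2))
# {frozenset({('age_child', True), ('spouse_recorded', False)})}
```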
APA, Harvard, Vancouver, ISO, and other styles
11

Pearce, Richard William. "The effect of word-processing experience on editing while composing." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/28967.

Full text
Abstract:
This study investigated the implications of using computers in the writing process. The purpose was to determine whether there was a difference between two groups in their editing and revising techniques and their attitude towards writing. It was hypothesized that students who had had three years experience with computer writing would use more sophisticated forms of editing and would feel more positive toward writing than those students who had only a single year of writing with the computer. Two groups of seventh-grade students were identified: the One-year Group consisted of students who had one year of keyboard training and one year of experience with a word processor; the Three-year Group consisted of students who had a minimum of three years of keyboard training and a minimum of three years experience with a word processor. The students had all attended schools within the same district for the past three years. A group of grade-six students were trained as observers. They were given two training sessions, first observing a videotape and then observing another student. About 150 students were trained and the best 60 were used to observe the grade sevens for the study. Each writing group spent one forty-minute period composing an essay on the computer while being observed by the grade-six students. The observers tallied the editing and revising actions that were employed by the two writing groups. The editing activities of the two groups were compared. The grade-seven students were also given a writing opinion survey. Both groups had a positive attitude but there was no significant difference in their attitude toward writing. Three levels of editing are normally discerned (Kurth and Stromberg, 1987; Hillocks, 1987): surface, lexical, and phrase/sentence. The One-year Group made significantly more typing corrections but there was no difference in overall surface editing. The Three-year group did significantly more lexical and phrase/sentence editing. In this way, students with more word-processing experience exhibit an editing style that is characteristic of better writers.
Education, Faculty of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
12

Bradford, Jacob. "Rapid detection of safe and efficient gene editing targets across entire genomes." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/227671/1/Jacob_Bradford_Thesis.pdf.

Full text
Abstract:
CRISPR-Cas9 is a modern technology that can edit the genome of any organism. It has transformative potential both for basic research and in the real world. However, designing the short guide RNA sequence that directs it to the target gene is not trivial: we must maximise the likelihood of obtaining the desired edit, and minimise the risk of any off-target modifications. Computational methods can assist, but it is a difficult and time-consuming task. This thesis describes a highly scalable and precise bioinformatics approach to designing guide RNAs that improve the safety and efficiency of the CRISPR-Cas9 gene editing technology.
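A minimal sketch of one ingredient of guide design for SpCas9: scanning a sequence for 20-nt protospacers followed by an NGG PAM, then discarding candidates whose protospacer is not unique, a crude stand-in for off-target scoring. The toy genome and the uniqueness filter are illustrative only; the pipeline described in the thesis is far more scalable and precise.

```python
import re

def candidate_guides(genome: str):
    genome = genome.upper()
    guides = []
    # Lookahead so overlapping sites are found: 20-mer protospacer + NGG PAM.
    for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", genome):
        guides.append((m.start(1), m.group(1)))
    # Keep only guides whose 20-mer target appears exactly once in the genome.
    return [(pos, g) for pos, g in guides if genome.count(g) == 1]

toy_genome = "ATGCGTACCGTTAGCTAGCTAGGCTTACGATCGATCGTACGGTTAGCCAGGATCAGGCTAACG"
for pos, guide in candidate_guides(toy_genome):
    print(pos, guide)
```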
APA, Harvard, Vancouver, ISO, and other styles
13

Nguyen, Hoang Chuong [Verfasser], and Hans-Peter [Akademischer Betreuer] Seidel. "Data-driven approaches for interactive appearance editing / Hoang Chuong Nguyen. Betreuer: Hans-Peter Seidel." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2015. http://d-nb.info/1077007027/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Ashok, Ashish Kumar. "Predictive data mining in a collaborative editing system: the Wikipedia articles for deletion process." Thesis, Kansas State University, 2011. http://hdl.handle.net/2097/12026.

Full text
Abstract:
Master of Science
Department of Computing and Information Sciences
William H. Hsu
In this thesis, I examine the Articles for Deletion (AfD) system in Wikipedia, a large-scale collaborative editing project. Articles in Wikipedia can be nominated for deletion by registered users, who are expected to cite criteria from the Wikipedia deletion policy. For example, an article can be nominated for deletion if it contains copyright violations, vandalism, or advertising or other spam without relevant content. Articles whose subject matter does not meet the notability criteria, or any other content not suitable for an encyclopedia, are also subject to deletion. The AfD page for an article is where Wikipedians (users of Wikipedia) discuss whether the article should be deleted. Listed articles are normally discussed for at least seven days, after which the deletion process proceeds based on community consensus. The page may then be kept, merged or redirected, transwikied (i.e., copied to another Wikimedia project), renamed/moved to another title, userfied (migrated to a user subpage), or deleted per the deletion policy. Users can vote to keep, delete or merge the nominated article, and these votes can be viewed on the article's AfD page. However, this polling does not necessarily determine the outcome of the AfD process; in fact, Wikipedia policy specifically stipulates that a vote tally alone should not be considered sufficient basis for a decision to delete or retain a page. In this research, I apply machine learning methods to determine how the final outcome of an AfD process is affected by factors such as the difference between versions of an article, the number of edits, and the number of disjoint edits (according to some contiguity constraints). My goal is to predict the outcome of an AfD by analyzing the AfD page and the editing history of the article. The technical objectives are to extract features from the AfD discussion and version history, as reflected in the edit history page, that capture factors such as those discussed above, can be tested for relevance, and provide a basis for inductive generalization over past AfDs. Applications of such feature analysis include prediction and recommendation, with the performance goal of improving the precision and recall of AfD outcome prediction.
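A hedged sketch of the kind of supervised outcome prediction described above, with invented per-AfD features (edit counts, disjoint edits, size change, votes) and labels fed to a standard classifier; the thesis derives its features from real AfD discussions and edit histories.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: [n_edits, n_disjoint_edits, bytes_changed, keep_votes, delete_votes]
X = np.array([
    [120, 14,  5200, 9, 2],
    [  8,  2,  -300, 1, 7],
    [ 45,  6,  1500, 4, 4],
    [  3,  1,  -900, 0, 9],
    [ 80, 10,  2100, 7, 3],
    [ 12,  3,  -150, 2, 6],
])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = kept, 0 = deleted

model = LogisticRegression(max_iter=1000).fit(X, y)
new_afd = np.array([[30, 5, 800, 3, 5]])
print("P(kept) =", model.predict_proba(new_afd)[0, 1].round(2))
```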
APA, Harvard, Vancouver, ISO, and other styles
15

Demozzi, Michele. "Identification of novel active Cas9 orthologs from metagenomic data." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/337709.

Full text
Abstract:
CRISPR-Cas is the state-of-the-art biological tool that allows precise and fast manipulation of the genetic information of cellular genomes. The translation of the CRISPR-Cas technology from in vitro studies into clinical applications highlighted a variety of limitations: the currently available systems are limited by their off-target activity, the availability of a Cas-specific PAM sequence next to the target and the size of the Cas protein. In particular, despite high levels of activity, the size of the CRISPR-SpCas9 editing machinery is not compatible with an all-in-one AAV delivery system and the genomic sequences that can be targeted are limited by the 3-NGG PAM-dependency of the SpCas9 protein. To further expand the CRISPR tools repertoire we turned to metagenomic data of the human microbiome to search for uncharacterized CRISPR-Cas9 systems and we identified a set of novel small Cas9 orthologs derived from the analysis of reconstructed bacterial metagenomes. In this thesis study, ten candidates were chosen according to their size (less than 1100aa). The PAM preference of all the ten orthologs was established exploiting a bacterial-based and an in vitro platform. We demonstrated that three of them are active nucleases in human cells and two out of the three showed robust editing levels at endogenous loci, outperforming SpCas9 at particular targets. We expect these new variants to be very useful in expanding the available genome editing tools both in vitro and in vivo. Knock-out-based Cas9 applications are very efficient but many times a precise control of the repair outcome through HDR-mediated gene targeting is required. To address this issue, we also developed an MS2-based reporter platform to measure the frequency of HDR events and evaluate novel HDR-modulating factors. The platform was validated and could allow the screening of libraries of proteins to assess their influence on the HDR pathway.
APA, Harvard, Vancouver, ISO, and other styles
16

Hedkvist, Pierre. "Collaborative Editing of Graphical Network using Eventual Consistency." Thesis, Linköpings universitet, Programvara och system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-154856.

Full text
Abstract:
This thesis compares different approaches to creating a collaborative editing application, using methods such as OT, CRDT and locking. After a comparison between these methods, an implementation based on CRDTs was carried out. The implementation of a collaborative graphical network was made such that consistency is guaranteed. The implementation uses the 2P2P-Graph, which was extended in order to support moving of nodes, and uses the client-server communication model. The implementation was evaluated through a time-complexity and a space-complexity analysis. The result of the thesis includes a comparison between the different methods and an evaluation of the Extended 2P2P-Graph.
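A compact sketch of the 2P2P-Graph construction mentioned above: a graph CRDT assembled from two two-phase sets (2P-Sets), one for vertices and one for edges, where removed elements are tombstoned and can never return. This is the generic textbook structure, not the extended variant with node moving that the thesis implements.

```python
class TwoPhaseSet:
    """Add/remove set CRDT: once removed, an element can never be re-added."""
    def __init__(self):
        self.added, self.removed = set(), set()

    def add(self, e):
        self.added.add(e)

    def remove(self, e):
        if e in self.added:
            self.removed.add(e)   # tombstone

    def contains(self, e):
        return e in self.added and e not in self.removed

    def merge(self, other):
        self.added |= other.added
        self.removed |= other.removed

class TwoP2PGraph:
    """Graph CRDT built from two 2P-Sets, one for vertices and one for edges."""
    def __init__(self):
        self.vertices, self.edges = TwoPhaseSet(), TwoPhaseSet()

    def add_edge(self, u, v):
        self.vertices.add(u); self.vertices.add(v); self.edges.add((u, v))

    def has_edge(self, u, v):
        # An edge is only visible while both endpoints are still present.
        return (self.edges.contains((u, v))
                and self.vertices.contains(u) and self.vertices.contains(v))

    def merge(self, other):
        self.vertices.merge(other.vertices); self.edges.merge(other.edges)

g1, g2 = TwoP2PGraph(), TwoP2PGraph()
g1.add_edge("a", "b")
g2.add_edge("b", "c"); g2.vertices.remove("c")   # concurrent removal on replica 2
g1.merge(g2); g2.merge(g1)
print(g1.has_edge("a", "b"), g1.has_edge("b", "c"))   # True False on both replicas
```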
APA, Harvard, Vancouver, ISO, and other styles
17

Nguyen, Minh Quoc. "Toward accurate and efficient outlier detection in high dimensional and large data sets." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34657.

Full text
Abstract:
An efficient method to compute local density-based outliers in high dimensional data was proposed. In our work, we have shown that this type of outlier is present even in any subset of the dataset. This property is used to partition the data set into random subsets to compute the outliers locally. The outliers are then combined from different subsets. Therefore, the local density-based outliers can be computed efficiently. Another challenge in outlier detection in high dimensional data is that the outliers are often suppressed when the majority of dimensions do not exhibit outliers. The contribution of this work is to introduce a filtering method whereby outlier scores are computed in sub-dimensions. The low sub-dimensional scores are filtered out and the high scores are aggregated into the final score. This aggregation with filtering eliminates the effect of accumulating delta deviations in multiple dimensions. Therefore, the outliers are identified correctly. In some cases, the set of outliers that form micro patterns are more interesting than individual outliers. These micro patterns are considered anomalous with respect to the dominant patterns in the dataset. In the area of anomalous pattern detection, there are two challenges. The first challenge is that the anomalous patterns are often overlooked by the dominant patterns using the existing clustering techniques. A common approach is to cluster the dataset using the k-nearest neighbor algorithm. The contribution of this work is to introduce the adaptive nearest neighbor and the concept of dual-neighbor to detect micro patterns more accurately. The next challenge is to compute the anomalous patterns very fast. Our contribution is to compute the patterns based on the correlation between the attributes. The correlation implies that the data can be partitioned into groups based on each attribute to learn the candidate patterns within the groups. Thus, a feature-based method is developed that can compute these patterns efficiently.
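A simplified sketch of the partitioning idea above: split the data into random subsets, compute a local density-based score inside each subset (here, the mean distance to the k nearest neighbours), and pool the scores. The data, k and subset count are invented; the dissertation uses a more refined score and adaptive neighbourhoods.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(300, 10))
data[:3] += 6.0                        # plant three obvious outliers

def knn_score(subset, k=10):
    # Pairwise distances within the subset; column 0 is the distance to self.
    d = np.linalg.norm(subset[:, None, :] - subset[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, 1:k + 1].mean(axis=1)

perm = rng.permutation(len(data))
scores = np.empty(len(data))
for chunk in np.array_split(perm, 4):  # four random subsets, scored locally
    scores[chunk] = knn_score(data[chunk])

print("Top-scoring points:", np.argsort(scores)[-3:])  # likely the planted 0, 1, 2
```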
APA, Harvard, Vancouver, ISO, and other styles
18

Epps, Brian W. "A comparison of cursor control devices on target acquisition, text editing, and graphics tasks." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/50013.

Full text
Abstract:
The current study compared the performance of six commonly used cursor devices (absolute touchpad, mouse, trackball, relative touchpad, force joystick, and displacement joystick) on three types of tasks (target acquisition, text editing, and graphics). Prior to these comparisons, each of the devices was optimized for display/control dynamics in independent experiments. A total of 30 subjects were used in the five optimization studies. For each device, the optimization experiment compared a range of control dynamics using a target acquisition task (i.e., positioning a cross-hair cursor over square targets of varying sizes and screen distances). An analysis of variance procedure was used to determine the best control dynamics, of the range studied, for each device. Performance was based on a time-to-target (TT) measure. A comparison of the six optimized devices was then performed on the three task environments. For the target acquisition, text editing, and graphics tasks, a total of 12, seven, and six subjects were required, respectively. For the target acquisition study, the six devices were compared on a task identical to the optimization task; that is, cursor positioning performance for various target sizes and distances. In addition to the TT dependent measure, bipolar scale and subjective rank data were also collected. The text editing task required subjects to perform document correction on the computer using each of the six devices, with cursor keys added as a baseline device. Task completion time (TCT), bipolar scale response, and subjective rank data were collected. For the graphics task, subjects were required to perform basic graphics editing tasks with the six devices. As with the text editing task, TCT, bipolar scale, and rank data were collected. Results indicated a wide variation in the cursor positioning performance of the devices on the three tasks. Without exception, the mouse and trackball performed the best of the six devices, across all tasks. In addition, these devices were most preferred. In general, the two joysticks performed worse on the target acquisition and graphics tasks than the two touchpads. On the text editing task, however, the rate-controlled joysticks performed better than the touchpads.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
19

Enoksson, Fredrik. "Adaptable metadata creation for the Web of Data." Doctoral thesis, KTH, Medieteknik och interaktionsdesign, MID, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-154272.

Full text
Abstract:
One approach to managing a collection is to create data about the things in it. This descriptive data is called metadata, and the term is used in this thesis as a collective noun, i.e. no plural form exists. A library is a typical example of an organization that uses metadata to manage a collection of books. The metadata about a book describes certain attributes of it, for example who the author is. Metadata also makes it possible for a person to judge whether a book is interesting without having to deal with the book itself. The metadata of the things in a collection is a representation of the collection that is easier to deal with than the collection itself. Nowadays metadata is often managed in computer-based systems that enable searching and sorting of search results according to different principles. Metadata can be created both by computers and by humans. This thesis deals with certain aspects of the human activity of creating metadata and includes an explorative study of this activity. The increasing amount of public information that is produced is also required to be easily accessible, and therefore the situation where metadata is part of the Semantic Web has been considered an important part of this thesis. This situation is also referred to as the Web of Data or Linked Data. With the Web of Data, metadata records that used to live in isolation from each other can now be linked together over the web. This will probably change what kind of metadata is being created, but also how it is being created. This thesis describes the construction and use of a framework called Annotation Profiles, a set of artifacts developed to enable a metadata creation environment that is adaptable with respect to what metadata can be created. The main artifact is the Annotation Profile Model (APM), a model that holds enough information for a software application to generate a customized metadata editor from it. An instance of this model is called an annotation profile and can be seen as a configuration for metadata editors. Changes to what metadata can be edited in a metadata editor can thus be made without modifying the code of the application. Two code libraries that implement the APM have been developed and evaluated, both internally within the research group where they were developed and externally via interviews with software developers who have used one of the code libraries. Another artifact presented is a protocol for how RDF metadata can be remotely updated when metadata is edited through a metadata editor. It is also described how the APM opens up possibilities for end-user development, which is one of the avenues of pursuit in future research related to the APM.


APA, Harvard, Vancouver, ISO, and other styles
20

Crawley, Sunny Sheliese. "Rethinking phylogenetics using Caryophyllales (angiosperms), matK gene and trnK intron as experimental platform." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/77276.

Full text
Abstract:
The recent call to reconstruct a detailed picture of the tree of life for all organisms has forever changed the field of molecular phylogenetics. Sequencing technology has improved to the point that scientists can now routinely sequence complete plastid/mitochondrial genomes, and thus vast amounts of data can be used to reconstruct phylogenies. These data are accumulating in DNA sequence repositories, such as GenBank, where everyone can benefit from the vast growth of information. The trend of generating genomic-region-rich datasets has far outpaced the expansion of datasets by sampling a broader array of taxa. We show here that expanding a dataset both by increasing genomic regions and species sampled using GenBank data, despite the inherent missing DNA that comes with GenBank data, can provide a robust phylogeny for the plant order Caryophyllales (angiosperms). We also investigate the utility of the trnK intron in phylogeny reconstruction at relatively deep evolutionary history (the caryophyllid order) by comparing it with the rapidly evolving matK. We show that the trnK intron is comparable to matK in terms of the proportion of variable sites, parsimony-informative sites, the distribution of those sites among rate classes, and phylogenetic informativeness across the history of the order. This is especially useful since the trnK intron is often sequenced concurrently with matK, which saves time and resources by increasing the phylogenetic utility of a single genomic region (rapidly evolving matK/trnK). Finally, we show that the inclusion of RNA-edited sites in datasets for phylogeny reconstruction did not appear to impact resolution or support in the Gnetales, indicating that edited sites in such low proportions do not need to be a consideration when building datasets. We also propose an alternate start codon for matK in Ephedra, based on the presence of a 38 base pair indel in several species that otherwise results in premature stop codons, and present 20 RNA-edited sites in two Zamiaceae and three Pinaceae species.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
21

Nigita, Giovanni. "Knowledge bases and stochastic algorithms for mining biological data: applications on A-to-I RNA editing and RNAi." Doctoral thesis, Università di Catania, 2014. http://hdl.handle.net/10761/1555.

Full text
Abstract:
Until the second half of the twentieth century, the connection between Biology and Computer Science was not very strong, and data were usually collected on perishable materials such as paper and then stored in filing cabinets. This situation changed thanks to Bioinformatics, a relatively novel field that aims to deal with biological problems by making use of computational approaches. This interdisciplinary science has two particular fields of action: on the one hand, the construction of biological databases in order to store the huge amount of data in a rational way, and, on the other hand, the development and application of algorithms, including approximate ones, for extracting predictive patterns from such data. This thesis presents novel results on both of the above aspects. It introduces three new databases, called miRandola, miReditar and VIRGO, respectively. All of them have been developed as open-source resources and equipped with user-friendly web interfaces. Results concerning the application of stochastic approaches to microRNA targeting and A-to-I RNA editing are then introduced.
APA, Harvard, Vancouver, ISO, and other styles
22

Palladino, Chiara. "Round table report: Epigraphy Edit-a-thon: editing chronological and geographic data in ancient inscriptions: April 20-22, 2016." Epigraphy Edit-a-thon : editing chronological and geographic data in ancient inscriptions ; April 20-22, 2016 / edited by Monica Berti. Leipzig, 2016. Beitrag 15, 2016. https://ul.qucosa.de/id/qucosa%3A15477.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Sicilia, Gómez Álvaro. "Supporting Tools for Automated Generation and Visual Editing of Relational-to-Ontology Mappings." Doctoral thesis, Universitat Ramon Llull, 2016. http://hdl.handle.net/10803/398843.

Full text
Abstract:
Integration of data from heterogeneous formats and domains based on Semantic Web technologies enables us to solve their structural and semantic heterogeneity. Ontology-based data access (OBDA) is a comprehensive solution which relies on the use of ontologies as mediator schemas and relational-to-ontology mappings to facilitate data source querying. However, one of the greatest obstacles in the adoption of OBDA is the lack of tools to support the creation of mappings between physically stored data and ontologies. The objective of this research has been to develop new tools that allow non-ontology experts to create relational-to-ontology mappings. For this purpose, two lines of work have been carried out: the automated generation of relational-to-ontology mappings, and visual support for mapping editing. The tools currently available to automate the generation of mappings are far from providing a complete solution, since they rely on relational schemas and barely take into account the contents of the relational data source and features of the ontology. However, the data may contain hidden relationships that can help in the process of mapping generation. To overcome this limitation, we have developed AutoMap4OBDA, a system that automatically generates R2RML mappings from the analysis of the contents of the relational source and takes into account the characteristics of the ontology. The system employs an ontology learning technique to infer class hierarchies, selects the string similarity metric based on the labels of the ontology, and analyses graph structures to generate the mappings from the structure of the ontology. Visual representation through intuitive interfaces can help non-technical users to establish mappings between a relational source and an ontology. However, existing tools for the visual editing of mappings show some limitations. In particular, the visual representation of mappings does not embrace the structure of the relational source and the ontology at the same time. To overcome this problem, we have developed Map-On, a visual web environment for the manual editing of mappings. AutoMap4OBDA has been shown to outperform existing solutions in the generation of mappings. Map-On has been applied in research projects to verify its effectiveness in managing mappings.
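One ingredient of automated relational-to-ontology mapping generation is matching table and column names to ontology labels with a string-similarity metric. The sketch below uses Python's difflib for that single step; the table, columns and labels are invented, and AutoMap4OBDA additionally performs ontology learning, graph analysis and R2RML generation, none of which is shown here.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

tables = {"employee": ["emp_name", "birth_date", "dept_id"]}
ontology_labels = ["Employee", "Department", "name", "birthDate", "memberOf"]

for table, columns in tables.items():
    cls = max(ontology_labels, key=lambda l: similarity(table, l))
    print(f"TABLE {table}  ->  CLASS {cls}")
    for col in columns:
        prop = max(ontology_labels, key=lambda l: similarity(col, l))
        print(f"  COLUMN {col}  ->  PROPERTY {prop}  "
              f"(score {similarity(col, prop):.2f})")
```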
APA, Harvard, Vancouver, ISO, and other styles
24

Klasson, Filip, and Patrik Väyrynen. "Development of an API for creating and editing openEHR archetypes." Thesis, Linköping University, Department of Biomedical Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-17558.

Full text
Abstract:

Archetypes are used to standardize a way of creating, presenting and distributing health care data. In this master thesis project the open specifications of openEHR were followed. The objective of the project has been to develop a Java-based API for creating and editing openEHR archetypes. The API is a programming toolbox that can be used when developing archetype editors. Another purpose has been to implement validation functionality for archetypes. An important aspect is that the functionality of the API is well documented; this eases the understanding of the system for future developers. The result is a Java-based API that serves as a platform for future archetype editors. The API kernel has optional immutability, so developed archetypes can be locked against modification by making them immutable. The API is compatible with the openEHR specifications 1.0.1; it can load and save archetypes in ADL (Archetype Definition Language) format. There is also a validation feature that verifies that an archetype follows the right structure with respect to predefined reference models. This master thesis report also presents a basic GUI proposal.

APA, Harvard, Vancouver, ISO, and other styles
25

Veneziano, Dario. "Knowledge bases, computational methods and data mining techniques with applications to A-to-I RNA editing, Synthetic Biology and RNA interference." Doctoral thesis, Università di Catania, 2015. http://hdl.handle.net/10761/4085.

Full text
Abstract:
Bioinformatics, also known as Computational Biology, is a relatively new field that aims to solve biological problems through computational approaches. Among its many goals, this interdisciplinary science pursues two in particular: on the one hand, the construction of biological databases to rationally store the ever larger amounts of data that are becoming available, and, on the other hand, the development and application of algorithms in order to extract predictive patterns and infer new knowledge that would otherwise be impossible to obtain from such data. This thesis presents new results on both of these aspects. The research described in this doctoral thesis aimed at developing heuristics and data mining techniques for the collection and analysis of data related to post-transcriptional regulation mechanisms and RNA interference, as well as at linking the phenomenon of A-to-I RNA editing with miRNA-mediated gene regulation. In particular, the efforts were directed at developing a database for the prediction of miRNA binding sites affected by A-to-I RNA editing; an algorithm for the design of synthetic miRNAs with high specificity; and a knowledge base equipped with data mining algorithms for the functional annotation of microRNAs, proposed as a unified resource for miRNA research.
APA, Harvard, Vancouver, ISO, and other styles
26

Robson, Geoffrey. "Multiple outlier detection and cluster analysis of multivariate normal data." Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53508.

Full text
Abstract:
Thesis (MscEng)--Stellenbosch University, 2003.
ENGLISH ABSTRACT: Outliers may be defined as observations that are sufficiently aberrant to arouse the suspicion of the analyst as to their origin. They could be the result of human error, in which case they should be corrected, but they may also be an interesting exception, and this would deserve further investigation. Identification of outliers typically consists of an informal inspection of a plot of the data, but this is unreliable for dimensions greater than two. A formal procedure for detecting outliers allows for consistency when classifying observations. It also enables one to automate the detection of outliers by using computers. The special case of univariate data is treated separately to introduce essential concepts, and also because it may well be of interest in its own right. We then consider techniques used for detecting multiple outliers in a multivariate normal sample, and go on to explain how these may be generalized to include cluster analysis. Multivariate outlier detection is based on the Minimum Covariance Determinant (MCD) subset, and is therefore treated in detail. Exact bivariate algorithms were refined and implemented, and the solutions were used to establish the performance of the commonly used heuristic, Fast–MCD.
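A small sketch of MCD-based multivariate outlier flagging: fit scikit-learn's MinCovDet, compute robust squared Mahalanobis distances, and flag points beyond a chi-squared cutoff. The simulated data and the 97.5% quantile are illustrative choices, not the exact procedure of the thesis.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(1)
X = rng.multivariate_normal(mean=[0, 0], cov=[[1, 0.6], [0.6, 1]], size=200)
X[:5] += [6, -6]                               # plant five outliers

mcd = MinCovDet(random_state=0).fit(X)
d2 = mcd.mahalanobis(X)                        # squared robust Mahalanobis distances
cutoff = chi2.ppf(0.975, df=X.shape[1])
print("Flagged outliers:", np.where(d2 > cutoff)[0])
```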
APA, Harvard, Vancouver, ISO, and other styles
27

Mohapatra, Deepankar. "Automatic Removal of Complex Shadows From Indoor Videos." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804942/.

Full text
Abstract:
Indoor scenes are usually lit by multiple light sources that produce complex shadow patterns for a single object. Without shadow removal, the foreground object tends to be segmented erroneously. The inconsistent hue and intensity of shadows make automatic removal a challenging task. In this thesis, a dynamic thresholding and transfer learning-based method for removing shadows is proposed. The method suppresses light shadows with a dynamically computed threshold and removes dark shadows using an online learning strategy that is built upon a base classifier trained with manually annotated examples and refined with automatically identified examples from the new videos. Experimental results demonstrate that, despite varying lighting conditions, the proposed method is able to adapt to the videos and remove shadows effectively. The sensitivity of shadow detection changes slightly with the confidence level used to select examples for classifier retraining; a high confidence level usually yields better performance with fewer retraining iterations.
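As a rough, hedged illustration of the dynamic-thresholding idea described above (not the thesis's actual pipeline, whose classifier-based handling of dark shadows is omitted here), a per-frame threshold on the frame/background intensity ratio might be computed as follows; the use of Otsu's method and the clipping range are assumptions.

```python
# Illustrative sketch only (not the thesis pipeline): suppress "light" shadows by
# thresholding the intensity ratio between the current frame and a background
# model, with the threshold computed per frame (Otsu on the ratio histogram).
import numpy as np
from skimage.filters import threshold_otsu


def light_shadow_mask(frame_gray, background_gray, fg_mask, eps=1e-6):
    """Return a boolean mask of foreground pixels that look like light shadows.

    frame_gray, background_gray: float arrays in [0, 1]; fg_mask: boolean mask
    from any background-subtraction step. Shadow pixels darken the background
    multiplicatively, so their frame/background ratio sits below 1.
    """
    ratio = frame_gray / (background_gray + eps)
    vals = ratio[fg_mask]
    if vals.size == 0:
        return np.zeros_like(fg_mask)
    t = threshold_otsu(np.clip(vals, 0.0, 1.5))   # dynamic, per-frame threshold
    # pixels darker than the background but above the dynamic threshold
    return fg_mask & (ratio < 1.0) & (ratio > t)
```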
APA, Harvard, Vancouver, ISO, and other styles
28

Feng, Ping Feng. "Examination of the Hollywood Movie Trailers Editing Pattern Evolution over Time by Using the Quantitative Approach of Statistical Stylistic Analysis." Master's thesis, Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/395476.

Full text
Abstract:
Media Studies & Production
M.A.
In this study, I took the quantitative research approach of film statistical stylistic analysis to examine the editing pattern evolution of 130 Hollywood movie trailers over the past 60 years, from 1951 to 2015; prior studies on the overall evolution of Hollywood movies' editing patterns are compared and discussed. The results suggest that although movie trailers are much shorter than the whole movies, the average shot lengths of the trailers still display a declining trend over the past 60 years, and the variations in shot lengths are also decreasing. Second, the motions within each frame do not change significantly over the years, while the correlation coefficients between shot lengths and the motions within the shots move toward a more negative correlation over time, suggesting that trailers follow an editing trend in which the shorter the shot is, the more motion it contains; this also aligns with the editing pattern evolution of movies overall. Last, the luminance of the trailers remains almost the same over time, which does not align with the overall trend of movies becoming darker and darker over the decades. Together these findings suggest that the evolution of the trailers' editing rhythm generally aligns with that of movies overall, while the visual editing pattern of color luminance does not. The study results will improve our understanding of how Hollywood movie trailers' editing patterns and style have evolved over time and pave the way for future advertising studies and cognitive psychology studies on the audience's attention, immersion, and emotional response to various editing patterns of movie trailers.
Temple University--Theses
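For readers unfamiliar with the measures used in the abstract above, the following minimal sketch computes the kind of descriptive statistics it refers to: average shot length, shot-length dispersion, and the shot-length/motion correlation for a single trailer. The shot boundaries and per-shot motion scores are assumed to come from an upstream tool, and the numbers are placeholders.

```python
# Sketch of the descriptive statistics discussed above: average shot length
# (ASL), its dispersion, and the shot-length/motion correlation for one trailer.
import numpy as np


def trailer_editing_stats(shot_lengths_sec, shot_motion):
    shot_lengths_sec = np.asarray(shot_lengths_sec, dtype=float)
    shot_motion = np.asarray(shot_motion, dtype=float)
    asl = shot_lengths_sec.mean()                          # average shot length
    spread = shot_lengths_sec.std(ddof=1)                  # variation in shot lengths
    r = np.corrcoef(shot_lengths_sec, shot_motion)[0, 1]   # length/motion correlation
    return {"ASL": asl, "shot_length_sd": spread, "length_motion_r": r}


# placeholder shot lengths (seconds) and per-shot motion scores
print(trailer_editing_stats([1.8, 2.4, 0.9, 3.1, 1.2], [0.62, 0.40, 0.81, 0.35, 0.77]))
```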
APA, Harvard, Vancouver, ISO, and other styles
29

Clause, James Alexander. "Enabling and supporting the debugging of software failures." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39514.

Full text
Abstract:
This dissertation evaluates the following thesis statement: Program analysis techniques can enable and support the debugging of failures in widely-used applications by (1) capturing, replaying, and, as much as possible, anonymizing failing executions and (2) highlighting subsets of failure-inducing inputs that are likely to be helpful for debugging such failures. To investigate this thesis, I developed techniques for recording, minimizing, and replaying executions captured from users' machines, anonymizing execution recordings, and automatically identifying failure-relevant inputs. I then performed experiments to evaluate the techniques in realistic scenarios using real applications and real failures. The results of these experiments demonstrate that the techniques can reduce the cost and difficulty of debugging.
APA, Harvard, Vancouver, ISO, and other styles
30

Kuru, Kaya. "A Novel Report Generation Approach For Medical Applications: The Sisds Methodology And Its Applications." Phd thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611719/index.pdf.

Full text
Abstract:
In medicine, reliable data are available only in a few areas and necessary information on prognostic implications is generally missing. In spite of the fact that a great amount of money has been invested to ease the process, an effective solution has yet to be found. Unfortunately, existing data collection approaches in medicine seem inadequate to provide accurate and high-quality data, which is a prerequisite for building a robust and effective DDSS. In this thesis, many different medical reporting methodologies and systems used up to now are evaluated; their strengths and deficiencies are revealed to shed light on how to set up an ideal medical reporting approach. The thesis presents a new medical reporting method, the "Structured, Interactive, Standardized and Decision Supporting Method" (SISDS), that encompasses most of the favorable features of the existing medical reporting methods while removing most of their deficiencies, such as inefficiency and cognitive overload, and introducing promising new advantages. The method enables professionals to produce multilingual medical reports much more efficiently than the existing approaches, in a novel way, by allowing free-text-like data entry in a structured form. The proposed method is shown to be more effective from many perspectives, such as facilitating complete and accurate data collection and providing opportunities to build a DDSS without tedious pre-processing and data preparation steps, ultimately helping health care professionals practice better medicine.
APA, Harvard, Vancouver, ISO, and other styles
31

Seiss, Mark Thomas. "Improving Survey Methodology Through Matrix Sampling Design, Integrating Statistical Review Into Data Collection, and Synthetic Estimation Evaluation." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/47968.

Full text
Abstract:
The research presented in this dissertation touches on all aspects of survey methodology, from questionnaire design to final estimation. We first approach the questionnaire development stage by proposing a method of developing matrix sampling designs, a design where a subset of questions are administered to a respondent in such a way that the administered questions are predictive of the omitted questions. The proposed methodology compares favorably to previous methods when applied to data collected from a household survey conducted in the Nampula province of Mozambique. We approach the data collection stage by proposing a structured procedure of implementing small-scale surveys in such a way that non-sampling error attributed to data collection is minimized. This proposed methodology requires the inclusion of the statistician in the data editing process during data collection. We implemented the structured procedure during the collection of household survey data in the city of Maputo, the capital of Mozambique. We found indications that the data resulting from the structured procedure is of higher quality than the data with no editing. Finally, we approach the estimation phase of sample surveys by proposing a model-based approach to the estimation of the mean squared error associated with synthetic (indirect) estimates. Previous methodology aggregates estimates for stability, while our proposed methodology allows area-specific estimates. We applied the proposed mean squared error estimation methodology and methods found during literature review to simulated data and estimates from 2010 Census Coverage Measurement (CCM). We found that our proposed mean squared error estimation methodology compares favorably to the previous methods, while allowing for area-specific estimates.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
32

Busi, Gioia. "Changes in the translation industry: A prospectus on the near future of innovation in machine translation." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
This thesis aims to analyze the supposed inevitability of a breakthrough in machine translation and the role that this breakthrough will play in the evolution of translation companies. It will analyze the changes that are happening and what repercussions those changes will have on the decisions made by students, professionals, agencies, and institutions over the next twenty years. This paper will be divided into three main sections: The first part will provide a background of today’s translation industry and consider the advent of machine translation in translation agencies and its continuous developments. In the second part of this essay, I will illustrate how I carried out my research inside Global Voices - a translation agency based in Stirling, Scotland, in which I have interned as Project Manager in December 2018 - to understand what use the translation agency makes of machine translation, also conveying my colleagues’ thoughts about it. The conclusion will recapitulate the topics approached, revolve around the main findings of this study and try to foresee what translators should expect from the future, how in my opinion they should deal with the changes the future will bring.
APA, Harvard, Vancouver, ISO, and other styles
33

Martin, Stéphane. "Edition collaborative des documents semi-structurés." Phd thesis, Université de Provence - Aix-Marseille I, 2011. http://tel.archives-ouvertes.fr/tel-00684778.

Full text
Abstract:
Collaborative editors allow remote users to work together on a shared task, ranging from the use of a shared calendar to software development. The concept originated with SCCS in 1972 and has recently gained renewed popularity (e.g., Wikipedia). The absence of centralization and asynchrony are essential aspects of this approach, which follows a peer-to-peer (P2P) model. On the other hand, the XML format has become a reference for manipulating and exchanging documents. Our work aims at building a P2P collaborative editor for semi-structured documents, which are an abstraction of the XML format. The problem is difficult, and many proposals have turned out to be incorrect or not scalable. We review the concepts and the state of the art in collaborative editing, centralized models, and P2P. We then explore two different approaches: operational transformation and CRDTs (Commutative Replicated Data Types) with various tree data structures. The objective is to support the basic operations (insertion, deletion, and relabeling) while guaranteeing the convergence of the editing process. We propose a generic algorithm for the CRDT approach based on a notion of independence within the data structure. We extended this work to support moving a subtree and to take XML typing into account; few existing works address these two points, which are very useful for document editing. Finally, we report experimental results obtained with a prototype that validates our approach.
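As a hedged aside, and not as Martin's algorithm, the following toy sketch illustrates the commutativity idea behind tree CRDTs for semi-structured documents: globally unique node identifiers, deletion as tombstoning, and deterministic sibling ordering. Relabeling, subtree moves and XML typing, which the thesis treats, would need additional machinery (for example last-writer-wins timestamps for labels).

```python
# Toy sketch of the CRDT idea discussed above (not the thesis's algorithm): a
# replicated tree where every node has a globally unique id, deletion only marks
# a tombstone, and siblings are ordered by id. Under causal delivery (a parent's
# insert arrives before its children's), concurrent operations commute, so all
# replicas converge to the same tree.
import uuid


class TreeCRDT:
    def __init__(self):
        self.nodes = {"root": {"parent": None, "label": "root", "deleted": False}}

    def insert(self, parent_id, label, node_id=None):
        node_id = node_id or str(uuid.uuid4())
        self.nodes[node_id] = {"parent": parent_id, "label": label, "deleted": False}
        return node_id                       # ship (parent_id, label, node_id) to peers

    def apply_insert(self, parent_id, label, node_id):
        # idempotent: applying the same remote insert twice has no further effect
        self.nodes.setdefault(node_id, {"parent": parent_id, "label": label, "deleted": False})

    def remove(self, node_id):
        self.nodes[node_id]["deleted"] = True  # tombstone; commutes with other removes

    def children(self, parent_id):
        # deterministic sibling order: sort by node id
        return sorted(nid for nid, n in self.nodes.items()
                      if n["parent"] == parent_id and not n["deleted"])
```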
APA, Harvard, Vancouver, ISO, and other styles
34

Faudemay, Pascal. "Un processeur VLSI pour les opérations de bases de données." Paris 6, 1986. http://www.theses.fr/1986PA066468.

Full text
Abstract:
This processor is capable of performing all the operations of relational data manipulation languages and text databases, as well as the essential operations of knowledge bases. It is optimal in an environment limited by memory transfers. The co-processor consists of a vector of identical VLSI components, each connected to the next. It derives from comparator-based filters and associative memories, whose functionality it extends. We show that this VLSI design is feasible in current CMOS technology. It leads to very competitive performance for joins, relational selection, and text editing. The processor is currently being built.
APA, Harvard, Vancouver, ISO, and other styles
35

Rountree, Richard John. "Novel technologies for the manipulation of meshes on the CPU and GPU : a thesis presented in partial fulfilment of the requirements for the degree of Masters of Science in Computer Science at Massey University, Palmerston North, New Zealand." Massey University, 2007. http://hdl.handle.net/10179/700.

Full text
Abstract:
This thesis relates to research and development in the field of 3D mesh data for computer graphics. A review of existing storage and manipulation techniques for mesh data is given, followed by a framework for mesh editing. The proposed framework combines complex mesh editing techniques, automatic level of detail generation and mesh compression for storage. These methods work coherently due to the underlying data structure. The problem of storing and manipulating data for 3D models is a highly researched field. Models are usually represented by sparse mesh data which consists of vertex position information, the connectivity information to generate faces from those vertices, surface normal data and texture coordinate information. This sparse data is sent to the graphics hardware for rendering but must be manipulated on the CPU. The proposed framework is based upon geometry images and is designed to store and manipulate the mesh data entirely on the graphics hardware. By utilizing the highly parallel nature of current graphics hardware and new hardware features, new levels of interactivity with large meshes can be gained. Automatic level of detail rendering can be used to allow models upwards of 2 million polygons to be manipulated in real time while viewing a lower level of detail. Through the use of pixel shaders the high detail is preserved in the surface normals while geometric detail is reduced. A compression scheme is then introduced which utilizes the regular structure of the geometry image to compress the floating point data. A number of existing compression schemes are compared as well as custom bit packing. This is a TIF-funded project which is partnered with Unlimited Realities, a Palmerston North software development company. The project was to design a system to create, manipulate and store 3D meshes in a compressed and easy to manipulate manner. The goal is to create the underlying technologies to allow for a 3D modelling system to become integrated into the Umajin engine, not to create a user interface/stand-alone modelling program. The Umajin engine is a 3D engine created by Unlimited Realities which has a strong focus on multimedia. More information on the Umajin engine can be found at www.umajin.com. In this project we propose a method which gives the user the ability to model with the high level of detail found in packages aimed at creating offline renders but create models which are designed for real time rendering.
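As generic background rather than the thesis's framework, the sketch below shows why the geometry-image representation mentioned above maps so naturally to graphics hardware: vertex positions live in an H x W x 3 array, and the triangle index buffer is implied by the pixel grid.

```python
# Hedged illustration of the geometry-image representation: positions are stored
# as an (H, W, 3) "image" and the triangle connectivity follows from the grid.
import numpy as np


def geometry_image_to_mesh(gim):
    """gim: (H, W, 3) float array of positions -> (vertices, faces)."""
    h, w, _ = gim.shape
    vertices = gim.reshape(-1, 3)
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    # two triangles per grid cell, with a consistent winding order
    faces = np.concatenate([np.stack([a, b, c], axis=1),
                            np.stack([b, d, c], axis=1)])
    return vertices, faces


verts, faces = geometry_image_to_mesh(np.random.rand(4, 4, 3))
print(verts.shape, faces.shape)   # (16, 3) (18, 3)
```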
APA, Harvard, Vancouver, ISO, and other styles
36

Duraffourg, Simon. "Analyse de la tenue en endurance de caisses automobiles soumises à des profils de mission sévérisés." Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1142.

Full text
Abstract:
A body-in-white (biw) is a complex structure which consists of several elements that are made of different materials and assembled mainly by spot welds, generally above 80%. At the design stage, several criteria must be verified numerically and experimentally on the car prototype, such as the biw durability. In the current economic context, the policy of reducing energy and other costs has led automotive companies to optimize vehicle performance, in particular by very consistently reducing the mass of the biw. As a consequence, some structural design problems appeared. In order to be validated, validation test benches are carried out upstream on a prototype vehicle. They are very costly to the manufacturer, especially when fatigue tests do not confirm the crack areas identified by numerical simulations. The thesis is focused on numerical biw durability analysis. It covers all the numerical analyses to be implemented to study the biw durability behavior. The main objective is to develop a numerical simulation process that ensures a good level of durability prediction, that is, a good level of correlation between test bench results and numerical fatigue life prediction. This thesis has led to: analyzing the biw mechanical behavior and the excitation forces applied to the biw during the validation tests; establishing a new fatigue data editing technique to simplify the load signal; creating a new finite element spot weld model; and developing a new fatigue life prediction method for spot welds. The studies have thus improved the level of biw fatigue life prediction by: identifying the majority of critical areas on the full biw; reliably assessing the relative criticality of each area; and accurately estimating the lifetime associated with each of these areas.
APA, Harvard, Vancouver, ISO, and other styles
37

Valdés, Diana. "Study and Edition of La dama presidente by Francisco de Leiva Ramírez de Arellano." Scholar Commons, 2017. https://scholarcommons.usf.edu/etd/7449.

Full text
Abstract:
Francisco de Leiva Ramírez de Arellano can be counted among the great playwrights of the seventeenth century. The century in which he lived is one of great importance in the world of theatre, since the writers of the time created stylistic canons that changed the way these works were written forever. Of Leiva, a follower of the school of Calderón, some fourteen plays and one entremés are known, and his works did not achieve major success until the eighteenth century. In modern times his name is little known and his works have been published only sparsely. This thesis seeks to unearth one of Leiva's plays, La dama presidente, in order to better understand the Spanish theatre of his time.
APA, Harvard, Vancouver, ISO, and other styles
38

Chan, Yin-hing Yolande. "The normative data and factor structure of the culture-free self-esteem inventory-form a-second edition in Hong Kong adolescents." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B29740253.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Blagojevic, Tim. "A revision of IEC 60891 2nd Edition 2009-12: Data correction procedures 1 and 2 PV module performance at Murdoch University." Thesis, Blagojevic, Tim (2016) A revision of IEC 60891 2nd Edition 2009-12: Data correction procedures 1 and 2 PV module performance at Murdoch University. Honours thesis, Murdoch University, 2016. https://researchrepository.murdoch.edu.au/id/eprint/33935/.

Full text
Abstract:
The focus of this project is to review and assess the first two photovoltaic module electrical performance data correction procedures contained in the international engineering standard IEC 60891: “Photovoltaic Devices - Procedures for temperature and irradiance corrections to measured I-V characteristics.” The procedures developed in the project were used to assess the effectiveness of the correction methods in translating electrical performance data for determining the degradation or performance of photovoltaic modules. A preliminary literature review of the concepts involved was conducted so that appropriate experimental testing conditions could be formulated, and the project covers the factors that may affect photovoltaic module performance variation and degradation. Over a period of months in autumn/winter, outdoor field electrical performance data for different PV module technologies at the Murdoch University site was recorded and processed. The data was obtained under varying atmospheric conditions, with the tilts and orientations of the modules altered to change the total amount and nature of the solar irradiation reaching the modules. The algebraic equations of the first and second standard correction procedures use parameters whose values could either be measured directly from the outdoor testing of modules or deduced from electrical performance data obtained by testing modules indoors at known values of irradiance, temperature and atmospheric spectrum. Indoor performance data, simulated at solar irradiance levels and cell temperatures matching international standard test conditions, was obtained for use in implementing the correction procedures, and was also independently analysed and compared. Outdoor module test performance data was corrected with both correction procedures and collated for analysis. The results highlighted the effects of, and correlations between, the factors that influence module I-V curve dynamics. When implemented for data translation, correction procedure 1 produced maximum power mismatch levels ranging from 0.09 to 22.97%, with an average mismatch of 9.54%. Correction procedure 2 produced maximum power mismatch levels ranging from 0.19 to 28.64%, with an average mismatch of 8.58%. An assessment of the correction procedures showed that they could be used effectively to gauge module degradation or to compare module performance against factory specifications. Both methods showed similar variations in accuracy, with correction procedure 2 better suited to situations where the irradiance difference between two data sets is more than 20%; correction procedure 2 also has more working parameters and takes more time to establish for correct implementation.
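For orientation only, the following sketch shows IEC 60891 correction procedure 1 as it is commonly written: a measured current-voltage point is translated from the measured irradiance and temperature to target conditions such as STC. The coefficient values are module-specific placeholders, and the standard itself remains the normative reference.

```python
# Hedged sketch of IEC 60891 "correction procedure 1" as it is commonly written:
# translating a measured I-V pair (I1, V1) taken at irradiance G1 and cell
# temperature T1 to target conditions (G2, T2), e.g. STC (1000 W/m^2, 25 C).
# alpha, beta, Rs and kappa are module-specific parameters (temperature
# coefficients, series resistance, curve-correction factor); the numbers below
# are placeholders only.
def correct_iv_point(I1, V1, Isc1, G1, T1, G2=1000.0, T2=25.0,
                     alpha=0.004, beta=-0.08, Rs=0.35, kappa=0.001):
    I2 = I1 + Isc1 * (G2 / G1 - 1.0) + alpha * (T2 - T1)
    V2 = V1 - Rs * (I2 - I1) - kappa * I2 * (T2 - T1) + beta * (T2 - T1)
    return I2, V2


# Example: one point measured at 820 W/m^2 and 41 C translated to STC.
print(correct_iv_point(I1=7.6, V1=29.0, Isc1=7.9, G1=820.0, T1=41.0))
```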
APA, Harvard, Vancouver, ISO, and other styles
40

Kugel, Rudolf. "Ein Beitrag zur Problematik der Integration virtueller Maschinen." Phd thesis, [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=980016371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

van, Rensburg Rachel Janse. "Resource Description and Access (RDA): continuity in an ever-fluxing information age with reference to tertiary institutions in the Western Cape." University of the Western Cape, 2018. http://hdl.handle.net/11394/6380.

Full text
Abstract:
Magister Library and Information Studies - MLIS
Although Resource Description and Access (RDA) has been discussed extensively amongst the ranks of cataloguers internationally, no research on the perceptions of South African cataloguers was available at the time of this research. The aim of this study was to determine how well RDA was faring during the study's timeframe and to give a detailed description of cataloguer perceptions within a higher education setting in South Africa; furthermore, to determine whether the implementation of RDA has overcome most of the limitations that AACR2 had within a digital environment, to identify advantages and/or perceived limitations of RDA, and to assist cataloguers to adopt and implement the new standard effectively. The study employed a qualitative research design assisted by a phenomenological philosophy to gain insight into how cataloguers experienced the implementation and adoption of RDA by means of two concurrent web-based questionnaires. The study concluded that higher education cataloguing professionals residing in the Western Cape were decidedly positive towards the new cataloguing standard. Although there were some initial reservations, they were overcome to such an extent that ultimately no real limitations were identified, and RDA has indeed overcome most of the limitations displayed by AACR2. Many advantages of RDA were identified, and participants expressed excitement about the future capabilities of RDA as it continues toward a linked-data milieu, making library metadata more easily available.
APA, Harvard, Vancouver, ISO, and other styles
42

Janse, van Rensburg Rachel. "Resource Description and Access (RDA): continuity in an ever-fluxing information age with reference to tertiary institutions in the Western Cape." University of the Western Cape, 2018. http://hdl.handle.net/11394/6267.

Full text
Abstract:
Magister Library and Information Studies - MLIS
Although Resource Description and Access (RDA) has been discussed extensively amongst the ranks of cataloguers internationally, no research on the perceptions of South African cataloguers was available at the time of this research. The aim of this study was to determine how well RDA was faring during the study's timeframe and to give a detailed description of cataloguer perceptions within a higher education setting in South Africa; furthermore, to determine whether the implementation of RDA has overcome most of the limitations that AACR2 had within a digital environment, to identify advantages and/or perceived limitations of RDA, and to assist cataloguers to adopt and implement the new standard effectively. The study employed a qualitative research design assisted by a phenomenological philosophy to gain insight into how cataloguers experienced the implementation and adoption of RDA by means of two concurrent web-based questionnaires. The study concluded that higher education cataloguing professionals residing in the Western Cape were decidedly positive towards the new cataloguing standard. Although there were some initial reservations, they were overcome to such an extent that ultimately no real limitations were identified, and RDA has indeed overcome most of the limitations displayed by AACR2. Many advantages of RDA were identified, and participants expressed excitement about the future capabilities of RDA as it continues toward a linked-data milieu, making library metadata more easily available. As this research has revealed a distinctly positive attitude from cataloguers, two main matters for future research remain. First, why the South African participants in this study voiced almost no perceived limitations of RDA as a cataloguing standard; future research might shed light on this trend, especially since it was not a global phenomenon. Second, a deeper look might have to be taken at how participants experienced RDA training, as this might be closely linked to the reasons why the participants did not mention more limitations.
APA, Harvard, Vancouver, ISO, and other styles
43

Wesch, Andreas. "Kommentierte Edition und linguistische Untersuchung der "Información de los Jerónimos" (Santo Domingo 1517) : Mit Editionen der "Ordenanzas para el tratamiento de los Indios" (Leyes de Burgos, Burgos / Valladolid 1512/13) und der "Instrucción dada a los Padres de la Orden de San Jerónimo" (Madrid 1516) /." Tübingen : G. Narr Verlag, 1993. http://catalogue.bnf.fr/ark:/12148/cb39172796h.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Potet, Marion. "Vers l'intégration de post-éditions d'utilisateurs pour améliorer les systèmes de traduction automatiques probabilistes." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00995104.

Full text
Abstract:
Existing machine translation technologies are now seen as a promising approach for helping to produce translations efficiently and at reduced cost. However, the current state of the art does not yet allow full automation of the process, and human/machine cooperation remains indispensable for producing quality results. A common practice is to post-edit the results provided by the system, that is, to manually check and, if necessary, correct the system's erroneous outputs. This post-editing work performed by users on machine translation results is a valuable source of data for analysing and adapting the systems. The problem addressed in our work is to develop an approach capable of taking advantage of this user feedback (or post-editions) to improve, in turn, the machine translation systems. The experiments carried out exploit a corpus of about 10,000 translation hypotheses from a baseline probabilistic system, post-edited by volunteers through an online platform. The results of the first experiments integrating the post-editions, on the one hand into the translation model and on the other hand through statistical automatic post-editing, allowed us to assess the complexity of the task. A more in-depth study of statistical post-editing systems allowed us to evaluate the usability of such systems as well as the contributions and limits of the approach. We also show that the collected post-editions can be used successfully to estimate the confidence to be placed in a machine translation result. The results of our work show the difficulty, but also the potential, of using post-editions of machine translation hypotheses as a source of information for improving the quality of current probabilistic systems.
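As a hedged illustration of one common way post-editions are exploited (not necessarily the models used in the thesis), the sketch below derives an HTER-style score from a machine translation hypothesis and its human post-edition; such scores can serve as per-sentence quality or confidence labels.

```python
# Minimal sketch (assumption: HTER-style scoring, not necessarily the thesis's
# models): the word-level edit distance between an MT hypothesis and its human
# post-edition, normalised by the post-edition length, gives a rough
# per-sentence quality/confidence label for training estimation models.
def word_edit_distance(hyp, ref):
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(h)][len(r)]


def hter(hypothesis, post_edition):
    edits = word_edit_distance(hypothesis, post_edition)
    return edits / max(1, len(post_edition.split()))


print(hter("the cat sat in mat", "the cat sat on the mat"))  # ~0.33
```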
APA, Harvard, Vancouver, ISO, and other styles
45

Pierron, Andréa. ""L'Ombre de votre espérance" : repères pour une histoire plastique des revues d'artistes expérimentaux au XXe siècle." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCA085/document.

Full text
Abstract:
This PhD thesis focuses on analyzing periodicals created during the twentieth century by both visual artists and filmmakers operating in the realm of the avant-gardes and experimental cinema. The journals become plastic, conceptual, complex, and composite objects because of the interplay between text and image as well as the reproduction of images and the realization of photomontages. How do these artists' journals show signs of an experimental approach? How do artists' journals contribute to the critical and plastic history of film? The dissertation aims to understand the unique ways the visual artists and filmmakers make use of the journals to create, defend, document, visualize and analyze certain cinematic paradigms. To what extent do the journals become, in turn, experimental works about the relationships between text and image? We will study how magazines exhibit various plastic, aesthetic, theoretical, and poetic dimensions at stake in the cinematic image, relying on specific technical, graphic and visual undertakings, and how they call perception into question. Journals become instrumental in ensuring the circulation of the editors' ideas, whether collective or individual. How do journals support the editors' efforts in building an alternative cinema domain? Dada I edited by Tristan Tzara and Hans Arp (1916), Dada Sinn der Welt by John Heartfield and George Grosz (1921), Le Promenoir by Jean Epstein, Pierre Deval and Jean Lacroix (1921-1922), G. für elementare Geschaltung by Hans Richter (1923-1926), Close Up by Kenneth Macpherson, Bryher and H.D. (1927-1933), Film Culture by Jonas Mekas (1955-1996) and Cantrill's Filmnotes by Arthur and Corinne Cantrill (1971-2000) form the corpus of this PhD thesis, which aims to contribute to a plastic history of experimental publications.
APA, Harvard, Vancouver, ISO, and other styles
46

D'Ambrosio, Antonio. "Tree based methods for data editing and preference rankings." Tesi di dottorato, 2008. http://www.fedoa.unina.it/2746/1/D%27Ambrosio_Statistica.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Cheng, Chengyen, and 鄭丞晏. "Evaluating Data Editing and Imputation Methods based on Monte Carlo Technique." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/66349977077326286387.

Full text
Abstract:
Master's thesis
Management College, National Defense University
Department of Financial Management
99
Most surveys have missing data, and a database that contains missing values can seriously degrade the quality of data analysis, so handling missing values properly is an important issue. Although a number of imputation methods have been proposed, no single imputation method handles every type of missing value well. The main purpose of this study is to identify imputation methods that suit the data type while also taking computation time into account. Starting from a database without missing values, pseudo-random numbers are used to delete some fields so as to create missing values, and the imputed values are then compared with the original ones. We use three imputation methods, regression imputation, EM imputation, and MCMC imputation, and compare them on highly correlated and weakly correlated data in terms of computation time. When dealing with different types of data, the results provide researchers with a rule for selecting an appropriate imputation method.
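The thesis compares regression, EM and MCMC imputation; the sketch below illustrates only the Monte Carlo evaluation loop it describes (mask known values at random, impute, score against the originals), with scikit-learn's SimpleImputer and IterativeImputer standing in for the imputers actually compared.

```python
# Sketch of the Monte Carlo evaluation loop described above: start from complete
# data, knock out a fraction of cells at random, impute, and score the imputed
# values against the originals. scikit-learn imputers stand in here for the
# regression/EM/MCMC imputers compared in the thesis.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer


def evaluate_imputer(imputer, X_full, missing_rate=0.2, seed=0):
    rng = np.random.default_rng(seed)
    X = X_full.copy()
    mask = rng.random(X.shape) < missing_rate
    X[mask] = np.nan
    X_hat = imputer.fit_transform(X)
    # RMSE on the cells that were artificially removed
    return float(np.sqrt(np.mean((X_hat[mask] - X_full[mask]) ** 2)))


rng = np.random.default_rng(1)
cov = 0.7 * np.ones((3, 3)) + 0.3 * np.eye(3)          # highly correlated toy data
X_full = rng.multivariate_normal([0.0, 0.0, 0.0], cov, size=500)
for name, imp in [("mean", SimpleImputer()), ("iterative", IterativeImputer(random_state=0))]:
    print(name, evaluate_imputer(imp, X_full))
```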
APA, Harvard, Vancouver, ISO, and other styles
48

Yang, Huei-Fang. "Reconstruction of 3D Neuronal Structures from Densely Packed Electron Microscopy Data Stacks." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-08-10189.

Full text
Abstract:
Fully decoding how the brain works requires a detailed wiring diagram of the brain network that reveals the complete connectivity matrix. Recent advances in high-throughput 3D electron microscopy (EM) image acquisition techniques have made it possible to obtain high-resolution 3D imaging data that allows researchers to follow axons and dendrites and to identify pre-synaptic and post-synaptic sites, enabling the reconstruction of detailed neural circuits of the nervous system at the level of synapses. However, these massive data sets pose unique challenges to structural reconstruction because the inevitable staining noise, incomplete boundaries, and inhomogeneous staining intensities increase the difficulty of 3D reconstruction and visualization. In this dissertation, a new set of algorithms is provided for the reconstruction of neuronal morphology from stacks of serial EM images. These algorithms include (1) segmentation algorithms for obtaining the full geometry of neural circuits, (2) interactive segmentation tools for manual correction of erroneous segmentations, and (3) a validation method for obtaining a topologically correct segmentation when a set of segmentation alternatives is available. Experimental results obtained by using EM images containing densely packed cells demonstrate that (1) the proposed segmentation methods can successfully reconstruct full anatomical structures from EM images, (2) the editing tools provide a way for the user to easily and quickly refine incorrect segmentations, and (3) the validation method is effective in combining multiple segmentation results. The algorithms presented in this dissertation are expected to contribute to the reconstruction of the connectome and to open new directions in the development of reconstruction methods.
APA, Harvard, Vancouver, ISO, and other styles
49

Shaw, Peter E. "Advances in cluster editing: linear FPT kernels and comparative implementations." Thesis, 2010. http://hdl.handle.net/1959.13/928253.

Full text
Abstract:
Research Doctorate - Doctor of Philosophy (PhD)
Experience has shown that clustering objects into groups is a useful way to analyze and order information. It turns out that many clustering problems are intractable. Several heuristic and approximation algorithms exist; however, in many applications what is desired is an optimum solution. Finding an optimum result for the Cluster Edit problem has proven non-trivial, as Cluster Edit is NP-hard [KM86] and APX-hard, and therefore cannot be approximated within a factor of (1 + ϵ) unless P = NP [SST04]. The algorithmic technique of Parameterized Complexity has proven an effective tool to address hard problems. Recent publications have shown that the Cluster Edit problem is Fixed Parameter Tractable (FPT); that is, there is a fixed parameter algorithm that can be used to solve the Cluster Edit problem. Traditionally, algorithms in computer science are evaluated in terms of the time needed to determine the output as a function of input size only. However, most real data in science contain inherent structure. For Fixed Parameter Tractable (FPT) algorithms, permitting one or more parameters to be given in the input to further define the question allows the algorithm to take advantage of any inherent structure in the data [ECFLR05]. A key concept of FPT is kernelization, that is, reducing a problem instance to a core hard sub-problem. The previous best kernelization technique for Cluster Edit was able to reduce the input to within k² vertices [GGHN05], when parameterized by k, the edit distance. The edit distance is the number of edit operations required to transform the input graph into a cluster graph (a disjoint union of cliques). Experimental comparisons in [DLL+06] showed that significant improvements were obtained using this reduction rule for the Cluster Edit problem. The study reported in this thesis presents three polynomial-time, many-to-one kernelization algorithms for the Cluster Edit problem; the best of these produces a linear kernel of at most 6k vertices. In this thesis, we discuss how new FPT techniques, including the extremal method, compression routines and modelled crown reductions [DFRS04], can be used to kernelize the input for the Cluster Edit problem. Using these new kernelization techniques, it has been possible to increase the number of vertices in the data sets that can be solved optimally from the previous maximum of around 150 vertices to over 900. More importantly, the edit distance of the graphs that could be solved has also increased, from around k = 40 to more than k = 400. This study also provides a comparison of three inductive algorithmic techniques: i) a compression routine using a constant factor approximation (the Compression Crown Rule Search Algorithm); ii) the extremal method (coordinatized kernel) [PR05], using a constructive form of the boundary lemma (the Greedy Crown Rule Search Algorithm); iii) the extremal method, using an auxiliary (TWIN) graph structure (the Crown Rule TWIN Search Algorithm). Algorithms derived using each of the above techniques to obtain linear kernels for the Cluster Edit problem have been evaluated using a variety of data with different exploratory properties. Comparisons have been made in terms of reduction in kernel size, lower bounds obtained and execution time. Novel solutions have been required to obtain, within a reasonable time, approximations for the Cluster Edit problem that are within a factor of four of the edit distance (minimum solution size).
Most approximation methods performed very badly for some graphs and well for others, and without any guide to the quality of the result, a very bad result may be mistakenly assumed to be close to optimum. Our study has found that just using the highest available lower bound for the approximation is insufficient to improve the result. However, by combining the highest lower bound obtained with the reduction obtained using kernelization, a 30-fold improvement in the approximation performance ratio is achieved.
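The thesis itself is about linear kernels and crown-rule reductions; as a hedged complement, the sketch below shows only the classic conflict-triple branching for Cluster Editing that such kernels are typically combined with, not any of the algorithms developed in the thesis.

```python
# Illustration only: the classic conflict-triple branching for Cluster Editing
# (not the crown-rule kernelizations developed in the thesis). A graph is a
# cluster graph iff it has no induced path u-v-w with u and w non-adjacent; the
# search branches on repairing one such triple until the budget k is spent.
from itertools import combinations


def find_conflict(adj):
    for v in adj:
        for u, w in combinations(adj[v], 2):
            if w not in adj[u]:
                return u, v, w            # u-v and v-w are edges, but u-w is missing
    return None


def cluster_edit(adj, k):
    """Return True iff the graph can be turned into a cluster graph with <= k edits."""
    triple = find_conflict(adj)
    if triple is None:
        return True
    if k == 0:
        return False
    u, v, w = triple
    # branch 1: add edge u-w; branches 2 and 3: delete u-v or v-w
    adj[u].add(w); adj[w].add(u)
    if cluster_edit(adj, k - 1):
        return True
    adj[u].discard(w); adj[w].discard(u)
    for a, b in ((u, v), (v, w)):
        adj[a].discard(b); adj[b].discard(a)
        if cluster_edit(adj, k - 1):
            return True
        adj[a].add(b); adj[b].add(a)
    return False


g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}   # a triangle with a pendant vertex
print(cluster_edit(g, 1))   # True: deleting edge 3-4 is one edit yielding a cluster graph
```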
APA, Harvard, Vancouver, ISO, and other styles
50

Boskovitz, Agnes. "Data Editing and Logic: The covering set method from the perspective of logic." Phd thesis, 2008. http://hdl.handle.net/1885/49318.

Full text
Abstract:
Errors in collections of data can cause significant problems when those data are used. Therefore the owners of data find themselves spending much time on data cleaning. This thesis is a theoretical work about one part of the broad subject of data cleaning - to be called the covering set method. More specifically, the covering set method deals with data records that have been assessed by the use of edits, which are rules that the data records are supposed to obey. The problem solved by the covering set method is the error localisation problem, which is the problem of determining the erroneous fields within data records that fail the edits. In this thesis I analyse the covering set method from the perspective of propositional logic. ...
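As a toy illustration of the covering idea referred to above (in the Fellegi-Holt spirit, and deliberately ignoring the implied edits that the thesis treats), each failed edit implicates the fields it mentions, and error localisation looks for a smallest set of fields that touches every failed edit.

```python
# Toy illustration of error localisation as a covering problem: every failed
# edit implicates the fields it mentions, and we look for a smallest set of
# fields touching all failed edits. Real systems must also generate implied
# edits, which this sketch deliberately ignores.
from itertools import combinations


def failed_edits(record, edits):
    """edits: list of (fields, predicate) where predicate(record) is True when the edit passes."""
    return [fields for fields, ok in edits if not ok(record)]


def smallest_covering_set(failed):
    fields = sorted({f for edit in failed for f in edit})
    for size in range(len(fields) + 1):
        for cand in combinations(fields, size):
            if all(set(cand) & set(edit) for edit in failed):
                return set(cand)
    return set(fields)


# hypothetical record and edits, for illustration only
record = {"age": 4, "marital_status": "married", "income": -100}
edits = [
    ({"age", "marital_status"}, lambda r: not (r["age"] < 15 and r["marital_status"] == "married")),
    ({"income"},               lambda r: r["income"] >= 0),
    ({"age", "income"},        lambda r: not (r["age"] < 15 and r["income"] > 0)),
]
print(smallest_covering_set(failed_edits(record, edits)))   # e.g. {'age', 'income'}
```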
APA, Harvard, Vancouver, ISO, and other styles