Dissertations / Theses on the topic 'Read data'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Read data.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Burger, Joseph. "Real-time engagement area development program (READ-Pro)." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02Jun%5FBurger.pdf.
Lecompte, Lolita. "Structural variant genotyping with long read data." Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S054.
Structural variants (SVs) are genomic rearrangements of more than 50 base pairs. Since SVs can reach several thousand base pairs, they can have major impacts on genome function, and studying them is therefore of great interest. Recently, a new generation of sequencing technologies has been developed that produces long reads of tens of thousands of base pairs, which are particularly useful for spanning SV breakpoints. So far, bioinformatics methods have focused on the SV discovery problem with long read data; no method had been proposed to specifically address the issue of genotyping SVs with long read data. The purpose of SV genotyping is to assess, for each variant of a given input set, which alleles are present in a newly sequenced sample. This thesis proposes a new method for genotyping SVs with long read data, based on the representation of each allele's sequence. We also define a set of conditions for considering a read as supporting an allele. Our method has been implemented in a tool called SVJedi and validated on both simulated and real human data, achieving high genotyping accuracy. We show that SVJedi outperforms other existing long read genotyping tools, and we also demonstrate that SV genotyping is considerably improved with SVJedi compared to alternative approaches, namely SV discovery and short read SV genotyping.
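To make the allele-support idea in this abstract concrete, here is a minimal sketch of turning support counts into a genotype call; the frequency cutoffs and minimum-support threshold are illustrative assumptions, not SVJedi's actual implementation:

# Minimal sketch of SV genotyping from allele support counts.
# Not SVJedi's actual code: the 1/4 and 3/4 allele-frequency cutoffs and
# the minimum-support threshold below are illustrative assumptions.

def genotype(ref_support: int, alt_support: int, min_support: int = 3) -> str:
    """Call a genotype from the number of reads supporting each allele."""
    total = ref_support + alt_support
    if total < min_support:
        return "./."                  # not enough informative reads
    alt_freq = alt_support / total
    if alt_freq < 0.25:
        return "0/0"                  # homozygous for the reference allele
    if alt_freq > 0.75:
        return "1/1"                  # homozygous for the variant allele
    return "0/1"                      # heterozygous

print(genotype(12, 1))   # 0/0
print(genotype(6, 7))    # 0/1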
Walter, Sarah. "Parallel read/write system for optical data storage." Diss., Connect to online resource, 2005. http://wwwlib.umi.com/cr/colorado/fullcit?p1425767.
Ibanez, Luis Daniel. "Towards a read/write web of linked data." Nantes, 2015. http://archive.bu.univ-nantes.fr/pollux/show.action?id=9089939a-874b-44e1-a049-86a4c5c5d0e6.
The Linked Data initiative has made millions of pieces of data available for querying through a federation of autonomous participants. However, the Web of Linked Data suffers from problems of data heterogeneity and quality. We cast the problem of integrating heterogeneous data sources as a Local-as-View (LAV) mediation problem; unfortunately, LAV may require the execution of a number of "rewritings" exponential in the number of query subgoals. We propose the Graph-Union (GUN) strategy to maximise the results obtained from a subset of rewritings. Compared to traditional rewriting execution strategies, GUN improves execution time and the number of results obtained, in exchange for higher memory consumption. Once data can be queried, data consumers can detect quality issues, but to resolve them they need to write to the data of the sources, i.e., to evolve Linked Data from read-only to read-write. However, writing among autonomous participants raises consistency issues. We model read-write Linked Data as a social network where actors copy the data they are interested in, update it, and publish updates to exchange with others. We propose two algorithms for update exchange: SU-Set, which achieves Strong Eventual Consistency (SEC), and Col-Graph, which achieves Fragment Consistency, stronger than SEC. We analyse the worst- and best-case complexities of both algorithms and estimate experimentally the average complexity of Col-Graph; the results suggest that it is feasible for social network topologies.
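To illustrate the kind of guarantee SU-Set targets, the sketch below shows a generic observed-removed set, a standard CRDT whose updates commute so that replicas converge (Strong Eventual Consistency). It is a generic illustration, not the SU-Set algorithm itself, which targets SPARQL updates:

# Generic observed-removed set (OR-Set) sketch: replicas that apply the
# same updates in any order converge, the property behind Strong
# Eventual Consistency. Illustrative only, not SU-Set itself.
import uuid

class ORSet:
    def __init__(self):
        self.adds = set()      # (element, unique_tag) pairs seen as added
        self.removes = set()   # (element, unique_tag) pairs seen as removed

    def add(self, element):
        update = ("add", (element, uuid.uuid4().hex))
        self.apply(update)
        return update          # broadcast to the other replicas

    def remove(self, element):
        tags = {p for p in self.adds if p[0] == element}
        update = ("remove", frozenset(tags))
        self.apply(update)
        return update

    def apply(self, update):   # applying updates is commutative
        kind, payload = update
        if kind == "add":
            self.adds.add(payload)
        else:
            self.removes |= payload

    def value(self):
        return {e for (e, t) in self.adds if (e, t) not in self.removes}

a, b = ORSet(), ORSet()
u1 = a.add("triple-1")
u2 = b.add("triple-2")
b.apply(u1); a.apply(u2)       # deliver updates in different orders
assert a.value() == b.value() == {"triple-1", "triple-2"}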
Horne, Ross J. "Programming languages and principles for read-write linked data." Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/210899/.
Huang, Songbo, and 黄颂博. "Detection of splice junctions and gene fusions via short read alignment." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B45862527.
Saleem, Muhammad. "Automated Analysis of Automotive Read-Out Data for Better Decision Making." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-63785.
Frousios, Kimon. "Bioinformatic analysis of genomic sequencing data : read alignment and variant evaluation." Thesis, King's College London (University of London), 2014. http://kclpure.kcl.ac.uk/portal/en/theses/bioinformatic-analysis-of-genomic-sequencing-data(e3a55df7-543e-4eaa-a81e-6534eacf6250).html.
Full textHoffmann, Steve. "Genome Informatics for High-Throughput Sequencing Data Analysis." Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-152643.
This thesis presents three different algorithmic and statistical strategies for the analysis of high-throughput sequencing data. First, we introduce a heuristic method based on enhanced suffix arrays that aligns short sequences to large genomes. The method builds on the idea of an error-tolerant traversal of a suffix array of the reference genome, combined with Chang's concept of matching statistics and the bit-vector alignment algorithm of Myers. The method supports paired-end and mate-pair alignments and offers routines for detecting primer sequences and trimming poly-A signals. In independent benchmarks as well, it stands out for high sensitivity and specificity on simulated and real data sets, and for a large number of sequencing protocols it achieves better results than other well-known short-read alignment programs. Second, we present a dynamic-programming algorithm for the spliced alignment problem. The advantage of this algorithm is its ability to identify not only collinear splice events, i.e. splice events on the same genomic strand, but also circular and other non-collinear splice events. The method is highly accurate: while it achieves results comparable to other methods in detecting collinear splice variants, it beats its competitors in sensitivity and specificity for the prediction of non-collinear splice variants. Applying this algorithm led to the identification of novel isoforms; in our publication we report a new isoform of the tumour suppressor gene p53. Since this gene is one of the best-studied genes in the human genome, applying our algorithm may help identify a multitude of further isoforms in less prominent genes. Third, we present a data-adaptive model for the identification of single nucleotide variations (SNVs). We show that our model, based on empirical log-likelihoods, automatically adapts to the quality of the sequencing experiments and "decides" which potential variations to classify as SNVs. In our simulations this method is on par with currently deployed approaches. Finally, we present a selection of biological results connected to the particular features of the presented alignment methods.
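As a toy illustration of suffix-array seeding, the core idea behind such read aligners, here is a minimal exact-match sketch (Python 3.10+ for the key= argument of bisect; segemehl's error-tolerant traversal and matching statistics go far beyond this):

# Sketch of suffix-array seeding as used in short-read alignment:
# binary-search the sorted suffixes for a read seed. Illustrative only.
from bisect import bisect_left

def suffix_array(text: str):
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_seed(text: str, sa, seed: str):
    """Return genome positions whose suffixes start with `seed`."""
    lo = bisect_left(sa, seed, key=lambda i: text[i:i + len(seed)])
    hits = []
    for i in sa[lo:]:
        if text[i:i + len(seed)] != seed:
            break
        hits.append(i)
    return hits

genome = "ACGTACGTTACG"
sa = suffix_array(genome)
print(find_seed(genome, sa, "ACG"))  # [9, 0, 4]: all occurrences of the seed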
Wang, Frank Zhigang. "Advanced magnetic thin-film heads under read-while-write operation." Thesis, University of Plymouth, 1999. http://hdl.handle.net/10026.1/2353.
Gallo, John T. "Design of a holographic read-only-memory for parallel data transfer to integrated CMOS circuits." Diss., Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/15640.
Häuser, Philipp. "Caching and prefetching for efficient read access to multidimensional wave propagation data on disk." [S.l. : s.n.], 2007. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-33500.
Huang, Chenhao. "Choosing read location: understanding and controlling the performance-staleness trade-off in primary backup data stores." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/27422.
Štajner, Sanja. "New data-driven approaches to text simplification." Thesis, University of Wolverhampton, 2016. http://hdl.handle.net/2436/601113.
Štajner, Sanja. "New data-driven approaches to text simplification." Thesis, University of Wolverhampton, 2015. http://hdl.handle.net/2436/554413.
Christensen, Kathryn S. "Architectural development and performance analysis of a primary data cache with read miss address prediction capability." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA349783.
Full text"June 1998." Thesis advisor(s): Douglas J. Fouts, Frederick Terman. Includes bibliographical references (p. 77). Also available online.
Sahlin, Kristoffer. "Algorithms and statistical models for scaffolding contig assemblies and detecting structural variants using read pair data." Doctoral thesis, KTH, Beräkningsbiologi, CB, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-173580.
Lenz, Lauren Holt. "Statistical Methods to Account for Gene-Level Covariates in Normalization of High-Dimensional Read-Count Data." DigitalCommons@USU, 2018. https://digitalcommons.usu.edu/etd/7392.
Tithi, Saima Sultana. "Computational Analysis of Viruses in Metagenomic Data." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/97194.
Viruses, the most abundant micro-organisms on earth, have a profound impact on human health and the environment. Analyzing metagenomic data for viruses has the benefit of surveying many viruses at a time without the need to cultivate them in the lab. Here, in this dissertation, we addressed three research problems in analyzing viruses from metagenomic data. The first question to answer is which viruses are present and in what quantity. To answer it, we developed a computational pipeline, FastViromeExplorer, which can identify viruses in metagenomic data and quantify their abundances quickly and accurately, even for large data sets. To recover novel virus genomes from metagenomic data, we developed a computational pipeline named FVE-novel; by applying it to an ocean metagenome sample, we successfully recovered two novel viruses and two strains of known phages. Examination of viral assemblies from metagenomic data reveals that, owing to the complex nature of metagenome data, they often contain assembly errors and are incomplete. To solve this problem, we developed a computational pipeline named VirChecker to polish, extend, and annotate viral assemblies. Applying VirChecker to virus genomes recovered from an ocean metagenome sample shows that our tool successfully extended and completed those virus genomes.
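As a toy illustration of the quantification step, any abundance estimate must at least normalize mapped-read counts by genome length; the sketch below shows only that idea (an assumption-level simplification; FastViromeExplorer itself builds on pseudoalignment and additional coverage criteria):

# Toy abundance estimate: reads mapped to each virus, normalized by
# genome length. Illustrative only, not FastViromeExplorer's pipeline.

def abundance(read_counts: dict, genome_lengths: dict) -> dict:
    """Reads per kilobase of genome, a simple length-normalized measure."""
    return {v: 1000 * read_counts[v] / genome_lengths[v]
            for v in read_counts}

counts = {"phageA": 900, "phageB": 120}
lengths = {"phageA": 30_000, "phageB": 4_000}
print(abundance(counts, lengths))  # {'phageA': 30.0, 'phageB': 30.0}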
Zeng, Shuai, and 曾帥. "Predicting functional impact of nonsynonymous mutations by quantifying conservation information and detect indels using split-read approach." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/198818.
Söderbäck, Karl. "Organizing HLA data for improved navigation and searchability." Thesis, Linköpings universitet, Databas och informationsteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176029.
Esteve García, Albert. "Design of Efficient TLB-based Data Classification Mechanisms in Chip Multiprocessors." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/86136.
Most of the data referenced by parallel and sequential applications running on current CMPs is referenced by a single thread, i.e., it is private. Recently, several proposals have leveraged this observation to improve many aspects of CMPs, such as reducing coherence overhead or the access latency of distributed caches. The effectiveness of these proposals depends largely on the amount of data detected as private. However, the mechanisms proposed to date do not consider thread migration or application phases, so a considerable amount of private data goes undetected. To increase the detection of private data, we propose a TLB-based mechanism that is able to reclassify data as private and that detects thread migration without adding complexity to the system. TLB-based classification mechanisms are analyzed in multilevel structures, including private TLBs and a distributed, shared last-level TLB. This thesis also presents a page classification mechanism based on inspecting the TLBs of other cores after each TLB miss. In particular, the proposed mechanism relies on the exchange and counting of tokens. Counting tokens in the TLBs is a natural and efficient way to classify memory pages, and it avoids persistent requests and arbitration: if two or more TLBs compete for access to a page, the tokens are distributed appropriately and the page is classified as shared. However, the ability of TLB-based mechanisms to classify private pages depends on TLB size, since TLB-based classification relies on the presence of a translation in the system's TLBs. To overcome this, usage predictors (UPs) for TLBs have been proposed, which allow classification independent of TLB size. Specifically, this thesis introduces a predictor that obtains system-level page-usage information with the help of a shared TLB level (SUP) or through TLBs cooperating together (CUP).
Esteve García, A. (2017). Design of Efficient TLB-based Data Classification Mechanisms in Chip Multiprocessors [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86136
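To make the token-counting idea concrete, here is a toy software model of classifying a page as private or shared according to how many TLBs currently hold its translation (purely illustrative; the thesis's mechanism is a hardware protocol):

# Toy model of TLB-based page classification: a page is treated as
# private while at most one TLB holds its translation, shared otherwise.
# Illustrative only; the thesis's mechanism exchanges tokens in hardware.

class PageDirectory:
    def __init__(self):
        self.holders = {}                  # page -> set of core ids

    def tlb_fill(self, core: int, page: int) -> str:
        self.holders.setdefault(page, set()).add(core)
        return self.classify(page)

    def tlb_evict(self, core: int, page: int) -> None:
        self.holders.get(page, set()).discard(core)

    def classify(self, page: int) -> str:
        return "private" if len(self.holders.get(page, ())) <= 1 else "shared"

d = PageDirectory()
print(d.tlb_fill(0, 0x1000))   # private: only core 0 holds it
print(d.tlb_fill(1, 0x1000))   # shared: cores 0 and 1 compete
d.tlb_evict(0, 0x1000)
print(d.classify(0x1000))      # private again after reclassification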
Triplett, Josh. "Relativistic Causal Ordering: A Memory Model for Scalable Concurrent Data Structures." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/497.
Otto, Christian. "The mapping task and its various applications in next-generation sequencing." Doctoral thesis, Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-161623.
This thesis focuses on the development and benchmarking of methods for the analysis of data from high-throughput technologies such as tiling arrays and high-throughput sequencing. Tiling arrays long formed the basis for genome-wide transcriptome studies and were used, for example, in the identification of functional elements of the human genome. This thesis presents a new statistical approach for evaluating tiling-array data, in which segments are classified as expressed if their signals differ significantly from the background distribution; no parameter values tuned to the data set are required. Compared with the methods currently in widespread use, this approach detects differentially expressed segments in biological data with the same sensitivity at a lower false-positive rate, and it is more precise in recognising exon-intron boundaries. The search for clusters of expressed segments furthermore led to the discovery of very long regions that may constitute a new class of macroRNAs. After this excursion into tiling arrays, the thesis concentrates on high-throughput sequencing, for which various sequencing protocols have been established for studying the genome, the transcriptome, and the epigenome. In most cases, one of the first and most decisive steps in the analysis of sequencing data is mapping, in which short sequences (reads) are aligned to a large reference genome. This thesis presents algorithmic methods that solve the mapping problem for three important sequencing protocols (DNA-Seq, RNA-Seq, and MethylC-Seq). All methods have undergone extensive benchmarks and are integrated into the segemehl suite. First, the core algorithm of segemehl, which enables the mapping of DNA sequencing data, is presented; it has been continuously optimised and extended since its first publication. Extensive benchmarks, designed with reproducibility in mind, compare segemehl with well-known mapping programs on numerous data sets. The results show that segemehl is not only more sensitive in finding optimal alignments with respect to edit distance, but also very specific compared with other methods. These advantages are evident in real and simulated data regardless of sequencing technology or read length, at the cost of longer runtime and higher memory consumption. Second, the mapping of RNA sequencing data, already supported by the split-read extension of segemehl, is examined; because of splicing, this form of the mapping problem is computationally more demanding. This thesis introduces the new program lack, which aims to find missing read alignments with the help of de novo splice information. It achieves excellent results and thus constitutes a useful complement to analysis pipelines for RNA sequencing data. Third, a new method for mapping bisulfite-treated sequencing data is presented. This protocol is regarded as the gold standard for genome-wide studies of DNA methylation, one of the most important epigenetic modifications in animals and plants.
Here the DNA is treated with sodium bisulfite before sequencing, which selectively converts unmethylated cytosines to uracils while methylcytosines remain unaffected. The bisulfite extension presented here performs the seed search on a reduced alphabet and verifies the resulting hits with a bisulfite-sensitive alignment algorithm based on dynamic programming. The procedure is thus insensitive to bisulfite conversions and, unlike other approaches, requires no further post-processing. Compared with currently used programs, the method is more sensitive and needs comparable runtime for mapping millions of reads onto large genomes. Remarkably, the increased sensitivity is achieved with consistently good specificity, so this method may also achieve better results in the precise determination of methylation rates. Finally, the potential of mapping strategies for assembly is demonstrated by the introduction of a new approach to assisted assembly, called crystallisation, which has mapping as its main component and uses additional information (e.g. annotations) as support. This approach enabled the successful assembly of the complete mitochondrial genome of Eulimnogammarus verrucosus despite a genomic library consisting predominantly of nuclear DNA. In summary, this thesis presents algorithmic methods that significantly improve the analyses of tiling array, DNA-Seq, RNA-Seq, and MethylC-Seq data, proposes standards for the comparison of programs for mapping high-throughput sequencing data, and introduces a new approach to assisted genome assembly that was successfully employed in the de novo assembly of a crustacean mitochondrial genome.
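A small sketch of the reduced-alphabet trick described above (the general bisulfite-mapping idea, not segemehl's implementation): collapsing C to T on both read and reference makes seeding insensitive to bisulfite conversion, after which candidate hits are verified with a conversion-aware comparison.

# Sketch of bisulfite-insensitive seeding: search on a C->T reduced
# alphabet, then verify hits bisulfite-aware. Not segemehl's actual code.

def reduce_ct(seq: str) -> str:
    """Collapse the alphabet so bisulfite conversion cannot break seeds."""
    return seq.replace("C", "T")

def bisulfite_match(read: str, ref: str) -> bool:
    """Verify a candidate hit: a T in the read may stem from a converted C."""
    return all(r == g or (r == "T" and g == "C") for r, g in zip(read, ref))

reference = "ACGTTCGGAC"
read      = "ACGTTTGGAC"        # the C at position 5 was unmethylated -> T

# Seed on the reduced alphabet, then verify on the original sequences.
pos = reduce_ct(reference).find(reduce_ct(read))
print(pos, bisulfite_match(read, reference[pos:pos + len(read)]))  # 0 True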
Lama, Luca. "Development and testing of the ATLAS IBL ROD pre-production boards." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/6283/.
Westerberg, Ellinor. "Efficient delta based updates for read-only filesystem images : An applied study in how to efficiently update the software of an ECU." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291740.
This thesis investigates a method for efficiently updating the software of an electronic control unit (ECU) in a car. A patch sent to a car should be as small as possible and preferably contain only the parts of the software that have changed. A popular algorithm for creating such patches is bsdiff, but it is designed for binaries rather than for filesystem images, so an alternative is studied here. The alternative method is based on Android's update process: a standalone variant of Android A/B Update was implemented and compared with bsdiff with respect to the time it takes to generate a patch and the size of the patch. The results show that bsdiff generates smaller patches but is also considerably slower, with a runtime that grows linearithmically with the size of the input. Android A/B Update may therefore be the better solution for updating an ECU containing a filesystem, although it depends on what is valued most: a smaller patch or a faster patch-generation process.
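As a sketch of the block-based delta idea that distinguishes a filesystem-image update from byte-oriented bsdiff (illustrative, with an assumed 4 KiB block size; real A/B update payloads also compress and verify the blocks):

# Sketch of a block-based delta for filesystem images: only blocks whose
# hashes differ are shipped in the patch. Illustrative only.
import hashlib

BLOCK = 4096  # assumed block size

def blocks(image: bytes):
    return [image[i:i + BLOCK] for i in range(0, len(image), BLOCK)]

def make_patch(old: bytes, new: bytes):
    """Return (block_index, new_block) pairs for blocks that changed."""
    old_b, new_b = blocks(old), blocks(new)
    old_b += [b""] * (len(new_b) - len(old_b))          # image may grow
    return [(i, nb) for i, (ob, nb) in enumerate(zip(old_b, new_b))
            if hashlib.sha256(ob).digest() != hashlib.sha256(nb).digest()]

def apply_patch(old: bytes, patch, new_len: int) -> bytes:
    out = blocks(old)
    for i, nb in patch:
        while len(out) <= i:
            out.append(b"")
        out[i] = nb
    return b"".join(out)[:new_len]

old = bytes(3 * BLOCK)
new = old[:BLOCK] + b"\x01" * BLOCK + old[2 * BLOCK:]
patch = make_patch(old, new)
assert apply_patch(old, patch, len(new)) == new
print(f"{len(patch)} of {len(blocks(new))} blocks in patch")  # 1 of 3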
Pitz, Nora [Verfasser], Harald [Akademischer Betreuer] Appelshäuser, and Christoph [Akademischer Betreuer] Blume. "Gas system, gas quality monitor and detector control of the ALICE Transition Radiation Detector and studies for a pre-trigger data read-out system / Nora Pitz. Gutachter: Harald Appelshäuser ; Christoph Blume." Frankfurt am Main : Univ.-Bibliothek Frankfurt am Main, 2012. http://d-nb.info/1044412801/34.
Ayyad, Majed. "Real-Time Event Centric Data Integration." Doctoral thesis, University of Trento, 2014. http://eprints-phd.biblio.unitn.it/1353/1/REAL-TIME_EVENT_CENTRIC_DATA_INTEGRATION.pdf.
Frick, Kolmyr Sara, and Thingvall Katarina Juhlin. "Samhällsinformation för alla? : Hur man anpassar ett informationsmaterial till både en lässvag och lässtark målgrupp." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-1521.
Hernane, Soumeya-Leila. "Modèles et algorithmes de partage de données cohérents pour le calcul parallèle distribué à haut débit." Thesis, Université de Lorraine, 2013. http://www.theses.fr/2013LORR0042/document.
Data Handover (Dho) is a library of functions adapted to large-scale distributed systems. It provides routines that allow acquiring resources in reading or writing in ways that are coherent and transparent for users. We modelled the life cycle of Dho by a finite state automaton and found through experiments that our approach produces an overlap between the computation of the application and the control of the data. These experiments were conducted both in simulated mode and in a real environment (Grid'5000), exploiting the GRAS library of the SimGrid toolkit. Several clients try to access the resource concurrently according to the client-server paradigm. Using queueing theory, the stability of the model was demonstrated in a centralized environment. We improved the distributed mutual exclusion algorithm of Naimi and Trehel by introducing the following features: (1) allowing the mobility of processes (ADEMLE), (2) introducing shared locks (AEMLEP), and finally (3) merging both properties cited above into a single algorithm (ADEMLEP). We proved the safety and liveness properties theoretically for all extended algorithms. The proposed peer-to-peer system combines our extended algorithms and the original Data Handover model; lock and resource managers operate and interact with each other in an architecture based on three levels. Following the experimental study of the underlying system on Grid'5000, the results obtained demonstrate the performance and stability of the Dho model over a multitude of parameters.
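For readers unfamiliar with token-based mutual exclusion, which the extended Naimi-Trehel algorithms build on, here is a toy sketch (centralized for brevity; the real algorithms are distributed and maintain a dynamic tree of probable token owners):

# Toy token-based mutual exclusion: only the process holding the token
# may enter its critical section; requests queue until the token moves.
# Centralized sketch for brevity; Naimi-Trehel keeps a distributed tree
# of "probable owners" instead of a global queue.
from collections import deque

class TokenLock:
    def __init__(self, first_owner: int):
        self.owner = first_owner       # process currently holding the token
        self.queue = deque()           # pending requesters

    def request(self, pid: int):
        if pid != self.owner:
            self.queue.append(pid)

    def release(self):
        if self.queue:
            self.owner = self.queue.popleft()   # pass the token on
        return self.owner

lock = TokenLock(first_owner=1)
lock.request(2); lock.request(3)
print(lock.owner)      # 1 holds the token
print(lock.release())  # token passes to 2
print(lock.release())  # then to 3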
Engel, Heiko [Verfasser], Udo [Gutachter] Kebschull, and Lars [Gutachter] Hedrich. "Development of a read-out receiver card for fast processing of detector data : ALICE HLT run 2 readout upgrade and evaluation of dataflow hardware description for high energy physics readout applications / Heiko Engel ; Gutachter: Udo Kebschull, Lars Hedrich." Frankfurt am Main : Universitätsbibliothek Johann Christian Senckenberg, 2019. http://d-nb.info/1192372166/34.
Fujimoto, Masaki Stanley. "Graph-Based Whole Genome Phylogenomics." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8461.
Full textMacias, Filiberto. "Real Time Telemetry Data Processing and Data Display." International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/611405.
The Telemetry Data Center (TDC) at White Sands Missile Range (WSMR) is now beginning to modernize its existing telemetry data processing system. Modern networking and interactive graphical displays are now being introduced. This infusion of modern technology will allow the TDC to provide our customers with enhanced data processing and display capability. The intent of this project is to outline this undertaking.
Kalinda, Mkenda Beatrice. "Essays on purchasing power parity, real exchange rate, and optimum currency areas /." Göteborg : Nationalekonomiska institutionen, Handelshögsk, 2000. http://www.handels.gu.se/epc/data/html/html/1973.html.
Ostroumov, Ivan Victorovich. "Real time sensors data processing." Thesis, Polit. Challenges of science today: XIV International Scientific and Practical Conference of Young Researchers and Students, April 2–3, 2014 : theses. – К., 2014. – 35p, 2014. http://er.nau.edu.ua/handle/NAU/26582.
Maiga, Aïssata, and Johanna Löv. "Real versus Simulated data for Image Reconstruction : A comparison between training with sparse simulated data and sparse real data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302028.
Our study investigates how training with sparse simulated data versus sparse real data from an event camera affects image reconstruction. We trained two models, one on simulated data and one on real data, and compared them on several criteria such as number of events, speed, and high dynamic range (HDR). The results show that the difference between training on simulated data and on real data is not large. The model trained on real data performed better in most cases, but the average difference between the results is only 2%. The results confirm what previous studies have shown: training on simulated data generalizes well, and, as this study shows, it does so even when training on sparse data sets.
Jafar, Fatmeh Nazmi Ahmad. "Simulating traditional traffic data from satellite data-preparing for real satellite data test /." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488193665235894.
Kilpatrick, Stephen, Galen Rasche, Chris Cunningham, Myron Moodie, and Ben Abbott. "REORDERING PACKET BASED DATA IN REAL-TIME DATA ACQUISITION SYSTEMS." International Foundation for Telemetering, 2007. http://hdl.handle.net/10150/604571.
Ubiquitous internet protocol (IP) hardware has reached performance and capability levels that allow its use in data collection and real-time processing applications. Recent development experience with IP-based airborne data acquisition systems has shown that open, pre-existing IP tools, standards, and capabilities support this form of distribution and sharing of data quite nicely, especially when combined with IP multicast. Unfortunately, the packet-based nature of our approach also posed some problems that required special handling to achieve performance requirements. We have developed methods and algorithms for the filtering, selecting, and retiming problems associated with packet-based systems and present our approach in this paper.
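One of the retiming problems mentioned above is restoring sequence order from packets that arrive out of order. A common approach, sketched below as an illustration rather than the paper's algorithm, buffers packets in a min-heap keyed by sequence number and releases them in order:

# Illustrative reordering buffer: packets arriving out of order are held
# in a min-heap keyed by sequence number and released in order.
import heapq

class ReorderBuffer:
    def __init__(self):
        self.heap = []
        self.next_seq = 0          # next sequence number to release

    def push(self, seq: int, payload: bytes):
        heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self):
        """Yield consecutive in-order packets that are now deliverable."""
        while self.heap and self.heap[0][0] == self.next_seq:
            seq, payload = heapq.heappop(self.heap)
            self.next_seq = seq + 1
            yield seq, payload

buf = ReorderBuffer()
for seq in (1, 0, 3, 2):                      # arrival order
    buf.push(seq, b"pkt%d" % seq)
    print([s for s, _ in buf.pop_ready()])    # [] [0, 1] [] [2, 3]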
Ng, Sunny, Mei Y. Wei, Austin Somes, Mich Aoyagi, and Joe Leung. "REAL-TIME DATA SERVER-CLIENT SYSTEM FOR THE NEAR REAL-TIME RESEARCH ANALYSIS OF ENSEMBLE DATA." International Foundation for Telemetering, 1998. http://hdl.handle.net/10150/609671.
This paper describes a distributed network client-server system developed to let researchers perform, in real time or near real time, ensemble analyses of telemetry data that were previously done post-flight. The client-server software approach provides extensible computing and real-time access to data at multiple remote client sites, so researchers at remote sites can share much the same information as those at the test site. The system has been used successfully in numerous commercial, academic, and NASA-wide aircraft flight tests.
Karlsson, Anders. "Presentation of Real-Time TFR-data." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-17228.
Achtzehnter, Joachim, and Preston Hauck. "REAL-TIME TENA-ENABLED DATA GATEWAY." International Foundation for Telemetering, 2004. http://hdl.handle.net/10150/605318.
This paper describes the TENA architecture, which has been proposed by the Foundation Initiative 2010 (FI 2010) project as the basis for future US test range software systems. The benefits of this new architecture are explained by comparing the future TENA-enabled range infrastructure with the current situation of largely non-interoperable range resources. Legacy equipment, and newly acquired off-the-shelf equipment that does not directly support TENA, can be integrated into a TENA environment using TENA Gateways. This paper focuses on issues related to the construction of such gateways, including the important issue of real-time requirements when dealing with real-world data acquisition instruments. The benefits of leveraging commercial off-the-shelf (COTS) data acquisition systems that are based on true real-time operating systems are discussed in the context of TENA Gateway construction.
White, Allan P., and Richard K. Dean. "Real-Time Test Data Processing System." International Foundation for Telemetering, 1989. http://hdl.handle.net/10150/614650.
The U.S. Army Aviation Development Test Activity at Fort Rucker, Alabama needed a real-time test data collection and processing capability for helicopter flight testing. The system had to be capable of collecting and processing both FM and PCM data streams from analog tape and/or a telemetry receiver. The hardware and software were to be off the shelf whenever possible. The integration was to result in a stand-alone telemetry collection and processing system.
Toufie, Moegamat Zahir. "Real-time loss-less data compression." Thesis, Cape Technikon, 2000. http://hdl.handle.net/20.500.11838/1367.
Data stored on disks generally contains significant redundancy. A mechanism or algorithm that recodes the data to lessen its size could possibly double or triple the effective data that could be stored on the media. One mechanism of doing this is data compression. Many compression algorithms currently exist, but each one has its own advantages as well as disadvantages. The objective of this study is to formulate a new compression algorithm that could be implemented in a real-time mode in any file system, and that executes as fast as possible so as not to cause a lag in file system performance. This study focuses on binary data of any type, whereas previous articles such as (Huffman, 1952:1098), (Ziv & Lempel, 1977:337; 1978:530), (Storer & Szymanski, 1982:928) and (Welch, 1984:8) have placed particular emphasis on text compression in their discussions of compression algorithms for computer data. The compression algorithm formulated by this study is Lempel-Ziv-Toufie (LZT). LZT is basically an LZ77 (Ziv & Lempel, 1977:337) encoder with a buffer equal in size to a data block of the file system in question. LZT discards the sliding-buffer principle and uses each data block of the entire input stream as one big buffer on which compression can be performed. LZT also handles the encoding of a match slightly differently to LZ77: an LZT match is encoded by two bit streams, the first specifying the position of the match and the other specifying the length of the match, a combination commonly referred to as a position-length pair.
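To make the position-length encoding concrete, here is a naive LZ77-style sketch (illustrative only; LZT's block-wide buffer and separate position/length bit streams differ from this token list):

# Naive LZ77-style compressor sketch: emit (position, length, next_char)
# triples against everything seen so far. Matches are restricted to the
# already-processed prefix so decompression stays simple.

def compress(data: bytes):
    out, i = [], 0
    while i < len(data):
        best_pos, best_len = 0, 0
        for j in range(i):                      # search the processed prefix
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
                if j + k >= i:                  # do not run past position i
                    break
            if k > best_len:
                best_pos, best_len = j, k
        nxt = data[i + best_len] if i + best_len < len(data) else None
        out.append((best_pos, best_len, nxt))
        i += best_len + 1
    return out

def decompress(tokens) -> bytes:
    out = bytearray()
    for pos, length, nxt in tokens:
        out += out[pos:pos + length]
        if nxt is not None:
            out.append(nxt)
    return bytes(out)

data = b"abcabcabcd"
tokens = compress(data)
assert decompress(tokens) == data
print(tokens)  # [(0, 0, 97), (0, 0, 98), (0, 0, 99), (0, 3, 97), (1, 2, 100)]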
Jelecevic, Edin, and Thong Nguyen Minh. "VISUALIZE REAL-TIME DATA USING AUTOSAR." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-76618.
Today there are more cars on the roads than ever before, and the automotive industry is constantly expanding and adapting to the need for new technology. To improve the management of complexity and to reduce development time and cost, the Automotive Open System Architecture (AUTOSAR) was introduced with the goal of standardizing electronic control units (ECUs). Today the AUTOSAR standard is used within the automotive industry; what this project explores is whether the standard can be used for something with no direct connection to vehicles. The report gives an introduction to AUTOSAR, which is used in the project to visualize real-time data from the web on an LED map. A physical visualization board was built, with the code written in the Arctic Studio integrated development environment; the board will be used at ARCCORE's own office in Linköping.
Ayyad, Majed. "Real-Time Event Centric Data Integration." Doctoral thesis, Università degli studi di Trento, 2014. https://hdl.handle.net/11572/367750.
Eriksson, Ruth, and Miranda Luis Galaz. "Ett digitalt läromedel för barn med lässvårigheter." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189205.
The digital age is changing society. New technology provides opportunities to produce and organize knowledge in new ways, and the technology available in schools today can be used to optimize literacy training for students with reading difficulties. This thesis examines how a digital teaching material for literacy training for children with reading difficulties can be designed and implemented, and shows that this is achievable. A digital learning material of good quality should be based on a scientifically accepted method of literacy training; this thesis uses Gunnel Wendick's training model, which is already used by many special education teachers. The training model is normally used with word lists, without computers, tablets or the like. We analyze Wendick's training model and employ it, in a creative way, to design a digital equivalent of the original model. Our goal is to create a digital learning material that implements Wendick's training model and thus makes it possible to use in various smart devices. With this we hope to facilitate the work of both special education teachers and children with reading difficulties, and to make the procedures more appealing and creative. In our study, we examine various technical possibilities for implementing Wendick's training model. We chose to create a prototype of a web application, with suitable functionality for administrators, special education teachers and students. The prototype's functionality can be divided into two parts: the administrative part, which covers the user interface and the functionality for handling students and other relevant data, and the exercise part, which comprises the training views and their functionality. The exercises are intended to train the auditory channel and phonological awareness, with the goal of reading accurately, and orthographic decoding, with the goal that students automate their decoding, that is, perceive words as images. In developing the digital teaching material, we used proven principles of software engineering and proven implementation techniques: we compiled high-level requirements, built the domain model, and defined appropriate use cases. To implement the application, we used the Java EE platform, the Web Speech API, PrimeFaces, and more. Our prototype is a good start to inspire further development, with the hope that a full web application will be created that will transform the practices in our schools.
Bennion, Laird. "Identifying data center supply and demand." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/103457.
This thesis documents new methods for gauging supply and demand of data center capacity and addresses issues surrounding potential threats to data center demand. The document opens with a primer on the composition and engineering of a current data center, discusses issues surrounding data center demand, Moore's Law, and cloud computing, and then presents research on data center demand and supply.
Tidball, John E. "REAL-TIME HIGH SPEED DATA COLLECTION SYSTEM WITH ADVANCED DATA LINKS." International Foundation for Telemetering, 1997. http://hdl.handle.net/10150/609754.
The purpose of this paper is to describe the development of a very high-speed instrumentation and digital data recording system. The system converts multiple asynchronous analog signals to digital data, forms the data into packets, transmits the packets across fiber-optic lines and routes the data packets to destinations such as high speed recorders, hard disks, Ethernet, and data processing. This system is capable of collecting approximately one hundred megabytes per second of filtered packetized data. The significant system features are its design methodology, system configuration, decoupled interfaces, data as packets, the use of RACEway data and VME control buses, distributed processing on mixed-vendor PowerPCs, real-time resource management objects, and an extendible and flexible configuration.
Cai, Simin. "Systematic Design of Data Management for Real-Time Data-Intensive Applications." Licentiate thesis, Mälardalens högskola, Inbyggda system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-35369.
Park, Sun Jung. "Data science strategies for real estate development." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/129099.
Big data and the increasing use of data science are changing the way the real estate industry functions. From pricing estimates and valuation to marketing and leasing, the power of predictive analytics is improving business processes and presenting new ways of operating. The field of affordable housing development, however, has often lacked investment and seen delays in adopting new technology and data science. With the growing need for housing, every city needs combined efforts from both public and private sectors, as well as a stronger knowledge base of the demands and experiences of people needing these spaces. Data science can provide insights into the needs for affordable housing and enhance efficiencies in development, helping get those homes built, leased, or even sold in new ways. This research provides a toolkit for modern real estate professionals to identify appropriate data for making better-informed decisions in the real estate development process. From public city data to privately gathered data, a vast amount of information is available from numerous sources in the industry. This research compiles a database of data sources, analyzes the development process to understand the key metrics that enable stakeholder decisions, and maps those sources to each phase, and to the questions that must be answered there, in order to support an optimal development decision. It reviews the developer's perspective on data science and provides a direction developers can use to orient themselves during the initial phase of incorporating a data-driven strategy into affordable multi-family housing.